McAfee unveils AI-powered deepfake audio detection

McAfee has unveiled Project Mockingbird, a pioneering AI-powered deepfake audio detection technology, at CES 2024. The proprietary technology aims to defend consumers against the growing threat of cybercriminals using fabricated, AI-generated audio for scams, cyberbullying, and manipulating the public image of prominent figures.

Generative AI tools have enabled cybercriminals to craft convincing scams, including voice cloning to impersonate family members seeking money or manipulating...

Global AI security guidelines endorsed by 18 countries

The UK has published the world's first global guidelines for securing AI systems against cyberattacks. The new guidelines aim to ensure AI technology is developed safely and securely.

The guidelines were developed by the UK's National Cyber Security Centre (NCSC) and the US’ Cybersecurity and Infrastructure Security Agency (CISA). They have already secured endorsements from 17 other countries, including all G7 members.

The guidelines provide recommendations for...

GitLab’s new AI capabilities empower DevSecOps

GitLab is empowering DevSecOps with new AI-powered capabilities as part of its latest releases.

The recent GitLab 16.6 November release includes the beta launch of GitLab Duo Chat, a natural-language AI assistant. Additionally, the GitLab 16.7 December release sees the general availability of GitLab Duo Code Suggestions.

David DeSanto, Chief Product Officer at GitLab, said: "To realise AI’s full potential, it needs to be embedded across the software development...

OpenAI battles DDoS against its API and ChatGPT services

OpenAI has been grappling with a series of distributed denial-of-service (DDoS) attacks targeting its API and ChatGPT services over the past 24 hours.

While the company has not yet disclosed specific details about the source of these attacks, OpenAI acknowledged that it is dealing with "periodic outages due to an abnormal traffic pattern reflective of a DDoS attack."

Users affected by these incidents reported encountering errors such as "something seems to have gone...

NIST announces AI consortium to shape US policies

In a bid to address the challenges associated with the development and deployment of AI, the National Institute of Standards and Technology (NIST) has formed a new consortium.

This development was announced in a document published in the Federal Register on November 2, alongside an official notice inviting applications from individuals with the relevant credentials.

The document states, "This notice is the initial step for NIST in collaborating with non-profit...

Biden issues executive order to ensure responsible AI development

President Biden has issued an executive order aimed at positioning the US at the forefront of AI while ensuring the technology's safe and responsible use.

The order establishes stringent standards for AI safety and security, safeguards Americans' privacy, promotes equity and civil rights, protects consumers and workers, fosters innovation and competition, and enhances American leadership on the global stage.

Key actions outlined in the order:

New standards for AI...

Enterprises struggle to address generative AI’s security implications

In a recent study, cloud-native network detection and response firm ExtraHop revealed a concerning trend: enterprises are struggling with the security implications of employees' generative AI use.

Their new research report, The Generative AI Tipping Point, sheds light on the challenges faced by organisations as generative AI technology becomes more prevalent in the workplace.

The report delves into how organisations are dealing with the use of generative AI tools,...

Cyber Security & Cloud Expo: The alarming potential of AI-powered cybercrime

In a packed session at Cyber Security & Cloud Expo Europe, Raviv Raz, Cloud Security Manager at ING, turned the spotlight away from traditional security threats and delved into the world of AI-powered cybercrime.

Raz shared insights from his extensive career, including his tenure as technical director for a web application firewall company. This role exposed him to the rise of the "Cyber Dragon" and Chinese cyberattacks, inspiring him to explore the offensive side of...

Mithril Security demos LLM supply chain ‘poisoning’

Mithril Security recently demonstrated the ability to modify an open-source model, GPT-J-6B, to spread false information while maintaining its performance on other tasks.

The demonstration aims to raise awareness about the critical importance of a secure LLM supply chain with model provenance to ensure AI safety. Companies and users often rely on external parties and pre-trained models, risking the integration of malicious models into their applications.

This situation...

The risk and reward of ChatGPT in cybersecurity

Unless you’ve been on a retreat in some far-flung location with no internet access for the past few months, chances are you’re well aware of how much hype and fear there’s been around ChatGPT, the artificial intelligence (AI) chatbot developed by OpenAI. Maybe you’ve seen articles about academics and teachers worrying that it’ll make cheating easier than ever. On the other side of the coin, you might have seen the articles evangelising all of ChatGPT’s potential...