Microsoft and Apple back away from OpenAI board

Microsoft and Apple have decided against taking up board seats at OpenAI. The decision comes as regulatory bodies intensify their scrutiny of big tech's involvement in AI development and deployment.

According to a Bloomberg report on July 10, citing an anonymous source familiar with the matter, Microsoft has officially communicated its withdrawal from the OpenAI board. This move comes approximately a year after the Redmond-based company made a substantial $13 billion investment in...

Industry experts call for tailored AI rules in post-election UK

As the UK gears up for its general election, industry leaders are weighing in on the potential impact on technology and AI regulation.

With economic challenges at the forefront of political debates, experts argue that the next government must prioritise technological innovation and efficiency to drive growth and maintain the UK's competitive edge.

Rupal Karia, Country Leader UK&I at Celonis, emphasises the need for immediate action to address inefficiencies in both...

Microsoft details ‘Skeleton Key’ AI jailbreak

Microsoft has disclosed a new type of AI jailbreak attack dubbed "Skeleton Key," which can bypass responsible AI guardrails in multiple generative AI models. This technique, capable of subverting most safety measures built into AI systems, highlights the critical need for robust security measures across all layers of the AI stack.
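The call for security "across all layers of the AI stack" can be illustrated with a minimal sketch of layered guardrails: an input filter and an output filter wrapped around a model call, so that subverting one layer is still caught by another. The filter phrases, function names, and stubbed model below are invented for illustration and are not Microsoft's actual mitigations.

```python
# Illustrative layered-guardrail sketch (hypothetical, for explanation only).
# A behaviour-override phrase blocklist stands in for a real input classifier.

BLOCKLIST = {"ignore your safety guidelines", "update your behavior"}

def input_filter(prompt: str) -> bool:
    """Return True if the prompt looks like a behaviour-override attempt."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in BLOCKLIST)

def output_filter(response: str) -> bool:
    """Return True if the response should be withheld (placeholder check)."""
    return "harmful" in response.lower()

def guarded_chat(prompt: str, model_call) -> str:
    """Wrap a model call with independent input and output checks."""
    if input_filter(prompt):
        return "[blocked: request attempts to alter safety behaviour]"
    response = model_call(prompt)
    if output_filter(response):
        return "[withheld by output filter]"
    return response

# Usage with a stubbed model that simply echoes the prompt:
echo = lambda p: f"model reply to: {p}"
print(guarded_chat("Please ignore your safety guidelines", echo))
```

The point of the layering is redundancy: a multi-turn attack that talks the model itself out of refusing still has to get past filters that sit outside the model's own reasoning.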

The Skeleton Key jailbreak employs a multi-turn strategy to convince an AI model to ignore its built-in safeguards. Once successful, the model becomes...

Think tank calls for AI incident reporting system

The Centre for Long-Term Resilience (CLTR) has called for a comprehensive incident reporting system to urgently address a critical gap in AI regulation plans.

According to the CLTR, AI has a history of failing in unexpected ways, with over 10,000 safety incidents in deployed AI systems recorded by news outlets since 2014. As AI becomes more integrated into society, the frequency and impact of these incidents are likely to increase.

The think tank argues that a...

SoftBank chief: Forget AGI, ASI will be here within 10 years

SoftBank founder and CEO Masayoshi Son has claimed that artificial superintelligence (ASI) could be a reality within the next decade.

Speaking at SoftBank's annual meeting in Tokyo on June 21, Son painted a picture of a future where AI far surpasses human intelligence, potentially revolutionising life as we know it. Son asserted that by 2030, AI could be "one to 10 times smarter than humans," and by 2035, it might reach a staggering "10,000 times smarter" than human...

OpenAI co-founder Ilya Sutskever’s new startup aims for ‘safe superintelligence’

Ilya Sutskever, former chief scientist at OpenAI, has revealed his next major project after departing the AI research company he co-founded in May.

Together with fellow OpenAI alumnus Daniel Levy and Apple's former AI lead Daniel Gross, Sutskever has formed Safe Superintelligence Inc. (SSI), a startup solely focused on building safe superintelligent systems.

The formation of SSI follows the brief November 2023...

EU AI legislation sparks controversy over data transparency

The European Union recently introduced the AI Act, a new governance framework compelling organisations to enhance transparency regarding their AI systems' training data.

Should this legislation come into force, it could penetrate the defences that many in Silicon Valley have built against such detailed scrutiny of AI development and deployment processes.

Since the public release of OpenAI's Microsoft-backed ChatGPT 18 months ago, there has been significant growth in...

Musk ends OpenAI lawsuit while slamming Apple’s ChatGPT plans

Elon Musk has dropped his lawsuit against OpenAI, the company he co-founded in 2015. Court filings from the Superior Court of California reveal that Musk called off the legal action on June 11th, just a day before an informal conference was scheduled to discuss the discovery process.

Musk had initially sued OpenAI in March 2024, alleging breach of contracts, unfair business practices, and failure in fiduciary duty. He claimed that his contributions to the company were made "in...

DuckDuckGo releases portal giving private access to AI models

DuckDuckGo has released a platform that allows users to interact with popular AI chatbots privately, ensuring that their data remains secure and protected.

The service, which is globally available, features a light and clean user interface. Users can choose from four AI models: two closed-source models and two open-source models. The closed-source models are OpenAI's GPT-3.5 Turbo and Anthropic's Claude 3 Haiku, while the open-source models are Meta's Llama-3...

AI pioneers turn whistleblowers and demand safeguards

OpenAI is facing a wave of internal strife and external criticism over its practices and the potential risks posed by its technology. 

In May, several high-profile employees departed from the company, including Jan Leike, the former head of OpenAI's "super alignment" efforts to ensure advanced AI systems remain aligned with human values. Leike's exit came shortly after OpenAI unveiled its new flagship GPT-4o model, which it touted as "magical" at its Spring Update...