Stable Diffusion text-to-image generator is now publicly available

Text-to-image generator Stable Diffusion is now available for anyone to put to the test.

Stable Diffusion was developed by Stability AI and initially released to researchers earlier this month. Stability AI claims the image generator delivers a breakthrough in speed and quality while being able to run on consumer GPUs.

The model is based on the latent diffusion model created by CompVis and Runway, enhanced with insights from conditional diffusion models by Stable Diffusion’s lead...
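At the core of latent diffusion models is the standard diffusion process: gradually adding noise to data during training and learning to reverse it. Below is a minimal, illustrative sketch of the forward noising step in pure Python, using a DDPM-style linear schedule; the parameter values are generic defaults for illustration, not Stable Diffusion’s actual configuration.

```python
import math
import random

def forward_diffuse(x0, t, num_steps=1000, beta_start=1e-4, beta_end=0.02):
    """Illustrative forward-diffusion step: noise a clean sample x0
    according to a linear beta schedule. Returns the noised sample x_t
    and the Gaussian noise that was mixed in."""
    # Linear noise schedule: beta_0 ... beta_{T-1}
    betas = [beta_start + (beta_end - beta_start) * i / (num_steps - 1)
             for i in range(num_steps)]
    # alpha_bar_t = product of (1 - beta_s) for s <= t
    alpha_bar = 1.0
    for s in range(t + 1):
        alpha_bar *= 1.0 - betas[s]
    noise = [random.gauss(0.0, 1.0) for _ in x0]
    # x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise
    xt = [math.sqrt(alpha_bar) * x + math.sqrt(1.0 - alpha_bar) * n
          for x, n in zip(x0, noise)]
    return xt, noise
```

A trained diffusion model learns to predict and remove this noise step by step; the "latent" part of latent diffusion means the process runs in a compressed latent space rather than on raw pixels, which is what makes consumer-GPU inference feasible.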

AI21 Labs raises $64M to help it compete against OpenAI

AI21 Labs has raised $64 million in a funding round to help it compete against OpenAI and other NLP leaders.

Competition in natural language processing (NLP) is heating up. OpenAI is currently seen as the industry leader with its GPT-3 model, but rivals are gaining traction.

Investors see AI21 Labs as one of the most promising contenders.

"We completed this round during a period of market uncertainty, which highlights the confidence our investors have in AI21's...

LabGenius uses Graphcore’s IPUs to speed up drug discovery

AI-driven scientific research firm LabGenius is harnessing the power of Graphcore’s IPUs (Intelligence Processing Units) to speed up its drug discovery efforts.

LabGenius is currently focused on discovering new treatments for cancer and inflammatory diseases. The firm combines AI, lab automation, and synthetic biology for its potentially life-saving work.

Until now, the company has been using traditional GPUs for its workloads. LabGenius reports that switching to...

Nvidia and Microsoft develop 530 billion parameter AI model, but it still suffers from bias

Nvidia and Microsoft have developed a massive 530-billion-parameter AI model, but it still suffers from bias.

The pair claim their Megatron-Turing Natural Language Generation (MT-NLG) model is the "most powerful monolithic transformer language model trained to date".

For comparison, OpenAI’s much-lauded GPT-3 has 175 billion parameters.

The duo trained their impressive model on 15 datasets with a total of 339 billion tokens. Various sampling weights...
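Blending multiple corpora with sampling weights is a common pretraining technique: each training document is drawn from a corpus in proportion to that corpus’s weight. The sketch below illustrates the idea; the dataset names and weights are hypothetical, not MT-NLG’s actual mixture.

```python
import random

# Hypothetical corpora and blending weights (illustrative only --
# the real MT-NLG mixture spans 15 datasets).
datasets = {
    "web_crawl": 0.5,  # half of sampled documents
    "books": 0.3,
    "news": 0.2,
}

def sample_dataset(rng):
    """Pick which corpus the next training document comes from,
    proportionally to its blending weight."""
    names = list(datasets)
    weights = [datasets[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(42)
counts = {n: 0 for n in datasets}
for _ in range(10_000):
    counts[sample_dataset(rng)] += 1
# Empirical draw frequencies should roughly match the weights.
```

Upweighting higher-quality corpora this way lets a model see clean text more often without discarding the breadth of larger, noisier sources.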

Google’s MUM will start adding more context to searches

Google is getting a little help from its MUM (Multitask Unified Model) to add more context to search results.

MUM was announced at Google I/O in May and aims to transform how the web giant handles complex queries. The model uses the T5 text-to-text framework and is said to be “1,000 times more powerful” than BERT (Bidirectional Encoder Representations from Transformers), which itself was a major breakthrough when it was introduced in 2018.
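The text-to-text framing that T5 introduced casts every NLP task as mapping an input string to an output string, distinguished only by a task prefix. A small illustrative sketch follows; the prefix strings echo the convention described in the T5 paper, but the helper function itself is hypothetical.

```python
def to_text_to_text(task, text, **kwargs):
    """Cast different NLP tasks into a single text-in, text-out format,
    the framing T5 uses. A trained model then maps the resulting input
    string to an output string, whatever the task."""
    if task == "translate":
        return f"translate {kwargs['src']} to {kwargs['dst']}: {text}"
    if task == "summarize":
        return f"summarize: {text}"
    if task == "question":
        return f"question: {text} context: {kwargs['context']}"
    raise ValueError(f"unknown task: {task}")
```

Because every task shares one input/output format, a single model can be trained on all of them at once, which is what "multitask unified" refers to in MUM's name.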

One example used by Google...

Algorithmia announces Insights for ML model performance monitoring

Seattle-based Algorithmia has announced Insights, a solution for monitoring the performance of machine learning models.

Algorithmia specialises in artificial intelligence operations and management. The company is backed by Google LLC and focuses on simplifying AI projects for enterprises that are just getting started.

Diego Oppenheimer, CEO of Algorithmia, says:

“Organisations have specific needs when it comes to ML model monitoring and reporting.

For example,...

Microsoft’s new AI auto-captions images for the visually impaired

A new AI from Microsoft aims to automatically caption images in documents and emails so that screen-reading software for people with visual impairments can read them aloud.

Researchers from Microsoft explained their machine learning model in a paper on preprint repository arXiv.

The model uses VIsual VOcabulary pre-training (VIVO) which leverages large amounts of paired image-tag data to learn a visual vocabulary.
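As a toy illustration of what "learning a visual vocabulary from image-tag pairs" means in spirit: collect the visual concepts (tags) that recur across many tagged images. All data and the helper below are made up, and the real VIVO approach learns joint image-text embeddings rather than simply counting tags.

```python
from collections import Counter

# Toy paired image-tag data (filenames and tags are invented). In VIVO
# pre-training, large sets of such pairs teach the model visual concepts
# without requiring fully written captions.
image_tags = {
    "img_001.jpg": ["dog", "grass", "frisbee"],
    "img_002.jpg": ["dog", "sofa"],
    "img_003.jpg": ["cat", "sofa", "window"],
}

def build_visual_vocabulary(pairs, min_count=1):
    """Collect the set of tags seen across the image-tag pairs,
    keeping those that appear at least min_count times."""
    counts = Counter(tag for tags in pairs.values() for tag in tags)
    return {tag for tag, c in counts.items() if c >= min_count}

vocab = build_visual_vocabulary(image_tags, min_count=2)
# Only "dog" and "sofa" appear in two images, so only they survive.
```

Image-tag pairs are far cheaper to collect at scale than full captions, which is why a tag-based pre-training stage can precede caption fine-tuning.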

A second dataset of properly captioned images is then used to help teach the...