M2 Macs now generate Stable Diffusion images in under 18 seconds

Ryan Daws is a senior editor at TechForge Media, with a seasoned background spanning over a decade in tech journalism. His expertise lies in identifying the latest technological trends, dissecting complex topics, and weaving compelling narratives around the most cutting-edge developments. His articles and interviews with leading industry figures have gained him recognition as a key influencer by organisations such as Onalytica. Publications under his stewardship have since gained recognition from leading analyst houses like Forrester for their performance. Find him on X (@gadget_ry) or Mastodon (@gadgetry@techhub.social)


New optimisations have enabled M2-based Mac devices to generate Stable Diffusion images in under 18 seconds.

Stable Diffusion is an AI image generator similar to DALL-E. Users can input a text prompt and the AI will produce an image that’s often far better than what most of us mere mortals can do.

Apple is a supporter of the Stable Diffusion project and posted an update on its machine learning blog this week about how it’s improving the performance on Macs.

“Beyond image generation from text prompts, developers are also discovering other creative uses for Stable Diffusion, such as image editing, in-painting, out-painting, super-resolution, style transfer and even color palette generation,” wrote Apple.

“With the growing number of applications of Stable Diffusion, ensuring that developers can leverage this technology effectively is important for creating apps that creatives everywhere will be able to use.”

Apple highlights several reasons why people may want to run Stable Diffusion locally instead of on a server, including:

  • Safeguarding privacy — User data remains on-device.
  • More flexibility — Users don’t require an internet connection.
  • Reduced cost — Users can eliminate server-related costs.

Apple says that it has released optimisations to Core ML for Stable Diffusion in macOS 13.1 and iOS 16.2, along with sample code to help developers get started on Apple silicon devices.

Following the optimisations, a baseline M2 MacBook Air can generate an image using Stable Diffusion with 50 inference steps in under 18 seconds. Arguably even more impressive, an M1 iPad Pro can do the job in under 30 seconds.

The release also features a Python package for converting Stable Diffusion models from PyTorch to Core ML using diffusers and coremltools, as well as a Swift package to deploy the models.
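As a rough illustration of the workflow, the repo's documented usage follows a two-step pattern: first convert the PyTorch weights to Core ML packages, then run inference against the converted models. The commands below are a sketch based on the project's README at the time of release; exact flags, model versions, and paths may differ, so check the repository for the current invocation.

```shell
# Step 1: convert the Stable Diffusion components (UNet, text encoder,
# VAE decoder, safety checker) from PyTorch to Core ML .mlpackage files.
# <output-mlpackages-directory> is a placeholder for your chosen path.
python -m python_coreml_stable_diffusion.torch2coreml \
    --convert-unet --convert-text-encoder --convert-vae-decoder \
    --convert-safety-checker \
    -o <output-mlpackages-directory>

# Step 2: generate an image from a text prompt using the converted models.
python -m python_coreml_stable_diffusion.pipeline \
    --prompt "a photo of an astronaut riding a horse on mars" \
    -i <output-mlpackages-directory> \
    -o <output-image-directory> \
    --compute-unit ALL --seed 93
```

The Swift package exposes the same converted models for deployment inside macOS and iOS apps, so the conversion step only needs to be run once per model.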

Detailed instructions on benchmarking and deployment are available in the Core ML Stable Diffusion repository on GitHub.

(Image Credit: Apple)


