Apple's new OpenELM teases the future of AI on the iPhone

By Ryan Morrison

(Image: Apple logo outside an Apple Store)

Apple has released a new family of large language models, made them fully open source and published them on the popular AI platform Hugging Face for other developers to adapt and build on.

The iPhone maker has become increasingly active in open source artificial intelligence over the past few months, and with this latest release the company hopes to help shape the direction of apps built on on-device language models.

OpenELM is a family of models designed to run well on edge devices such as smartphones and laptops. That matters for Apple because running AI locally is more secure and better for privacy.

There is no indication yet of whether these models will form part of Apple's plans for on-device AI in iOS 18 or upgrades to Siri, but they do show the direction the company is taking with AI.

What is Apple OpenELM?

(Image credit: Getty Images)

The full title is Open Source Efficient LLMs, and they are instruct models designed to be retrained, adapted and integrated into other projects by third-party developers and researchers.

These new models have been designed to be more accurate and efficient. Initially, Apple's focus is on supporting the research community, as OpenELM can be used to investigate model biases, risks and levels of trustworthiness.

There are four models in the family, pre-trained using Apple's open source CoreNet training library. All are small language models, with the largest at 3 billion parameters, a similar size to Microsoft's new Phi-3 small language model.
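For developers who want to experiment, the checkpoints can be pulled from Hugging Face with the standard transformers API. The snippet below is a minimal sketch, not Apple's documented workflow: the apple/OpenELM-270M-Instruct model ID and the use of a Llama 2 tokenizer follow the public model card, but treat those identifiers as assumptions.

```python
# Minimal sketch: loading an OpenELM instruct checkpoint from Hugging Face.
# The model ID and tokenizer choice are assumptions based on the public
# model card; the checkpoints ship custom modeling code, hence
# trust_remote_code=True.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "apple/OpenELM-270M-Instruct"  # smallest instruct variant (assumed ID)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# OpenELM reuses a Llama-family tokenizer rather than shipping its own
# (assumption; access to the Llama 2 repo may require approval).
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

inputs = tokenizer("On-device AI matters because", return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The same pattern applies to the larger variants; only the model ID changes.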

What makes OpenELM different?

The big differentiator is achieving performance similar to other open source language models while training on a much smaller dataset. That makes OpenELM well suited to niche use cases and research.

Apple researchers wrote in a paper on the new models: “With a parameter budget of approximately one billion parameters, OpenELM exhibits a 2.36% improvement in accuracy” compared with other similarly sized models.

With the release of the new models, Apple also offered code for using them with its MLX library, the toolkit the company uses to run AI models such as Stable Diffusion on its own chipsets.
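As a rough illustration of what running a model locally on Apple silicon looks like, the sketch below uses the community mlx-lm package. OpenELM support in mlx-lm and the exact model identifier are assumptions here, not something Apple's release confirms.

```python
# Hypothetical sketch: local generation on Apple silicon with the
# mlx-lm package (pip install mlx-lm). Whether OpenELM is supported
# and the exact model ID are assumptions.
from mlx_lm import load, generate

model, tokenizer = load("apple/OpenELM-270M-Instruct")  # assumed ID
response = generate(
    model,
    tokenizer,
    prompt="Summarize why on-device AI helps privacy:",
    max_tokens=60,
    verbose=True,  # streams tokens and prints generation stats
)
```

Everything here runs on the Mac itself; no request leaves the device, which is the privacy argument for local models in miniature.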

Being able to deploy models to edge devices running Apple's own chips could also be a game changer for wearable tech. We could see future Apple AR glasses using onboard AI to offer information about the wearer's surroundings even when offline.

What does this mean for the future of the iPhone?

(Image credit: Science & Knowledge/YouTube)

OpenELM is primarily a research project, a way for data scientists and people investigating the safety and accuracy of AI models to run code more efficiently.

However, it further shows Apple's commitment to creating AI models that run efficiently on devices like iPhones, iPads and MacBooks without compromising capability.

One reason Siri has long been seen as lagging behind legacy AI assistants like Alexa and Google Assistant is that Apple ran much of its functionality on device, meaning it couldn't draw on as much computing power for complex tasks.

A lot of Apple's recent work on AI, including research into more efficient memory usage, models that run on the Neural Engine, and new language models that work from a single prompt, has been aimed at this goal, and OpenELM is no different. It could even lead to a framework developers can use for AI in apps.
