Tom’s Guide
Technology
Ryan Morrison

Meta is building a superintelligent AI — and one expert warns of ‘significant moral issues’

AI will be part of our everyday lives in the future.

Meta is building a superintelligent artificial intelligence that it says it will make open source and available to anyone. CEO Mark Zuckerberg said advanced AI is required to power the next generation of smart assistants and AR glasses.

Most of the big AI labs including OpenAI and Anthropic are focused on Artificial General Intelligence (AGI), sometimes known as superintelligent AI. This is a form of AI capable of human or greater than human levels of reasoning and understanding.

In a statement on Facebook, Zuckerberg said: “It's become clearer that the next generation of services requires building full general intelligence. That needs advances in every area of AI, from reasoning to planning to coding to memory and other cognitive abilities.”

Ryan Carrier, founder and CEO of AI systems auditing agency forHumanity, told Tom’s Guide there are significant risks associated with AI around bias, copyright and misinformation that need to be addressed before we consider superintelligence.

To achieve this goal the company is building out significant infrastructure, securing 350,000 Nvidia H100 GPUs by the end of this year. It is also having its advanced AI research division, FAIR, work more closely with its consumer GenAI team.

What is superintelligence or AGI?

There is no simple, single definition of superintelligence. Each of the large AI labs and leading thinkers in the space interpret the point of superintelligence differently. 

Some see it as the singularity — think Skynet in the Terminator franchise — a point where technology becomes so advanced it is uncontrollable by humanity.

For others, AGI is more down to earth: it is the point at which AI can understand, reason and compete with humans on an equal or greater footing — a form of general intelligence that works across different forms of input and output and understands the broader world.


Recent models from OpenAI and Google have started to show some indications of that broader general intelligence. But many still believe we are a long way from that point, while others still think we should be cautious about trying to reach it.

Carrier told Tom’s Guide that recent advances in generative AI, and moves to create artificial general intelligence, have led to boosts in productivity, but that comes with a downside: it is now harder to source information, and the AI tools don’t provide any meaningful advancement in knowledge or wisdom.

“They have expanded our collective lake of information into oceans of information which makes it harder to source knowledge and wisdom — which was the whole point of the Information Age,” he explained. “So I remain skeptical that we have done nothing more than advanced productivity with the recent explosion of these tools.”

Why open-source the future of AI?

Mark Zuckerberg expects we will all be interacting daily with AI in the future (Image credit: Meta)

Zuckerberg says it isn’t a case of if, but when, society needs to deploy general artificial intelligence tools. He argued in a statement that it is needed to power smart glasses, augmented reality and other forms of AI hardware that link the physical and digital worlds. He says it should also be available as broadly as possible.

“This technology is so important, and the opportunities are so great that we should open source and make it as widely available as we responsibly can, so that way everyone can benefit,” he declared.


As well as a future general intelligence, he said this open-source ethos would apply to the next generation of large language models from Meta — Llama 3. The previous two versions are already among the most widely used open-source AI tools.

In much the same way that Linux and Android are the open-source operating systems that power computers and mobile devices, tools like Llama are the AI equivalents: large language models (LLMs) that anyone can deploy on their own hardware.

“People are also going to need new devices for AI, and this brings together AI and the metaverse, because over time, I think a lot of us are going to talk to AI’s frequently throughout the day,” Zuckerberg said in a statement explaining his vision.

Bringing research closer to consumer AI

Mark Zuckerberg says we need superintelligence to organize data coming from smart glasses  (Image credit: Future)

Meta is investing heavily in AI and not just through infrastructure. It is bringing its FAIR research division, which will focus on superintelligence, closer to its GenAI consumer division which is building tools for Meta products.

Yann LeCun, Chief AI Scientist at Meta, said FAIR’s mission has always been to create machines that understand the world, and to build them so they can perceive, remember, reason, plan and act on that understanding. The catch is that this requires human-level intelligence.

“In the not-too-distant future, all of our interactions with the digital world will be mediated by AI assistants through our smart glasses and other devices,” he wrote on X. “We need these systems to have human-level intelligence if they are to understand the world, people, and tools, so as to help us in our daily lives.”

Carrier disagrees. He described this perspective as a religious one, calling it “faith-based” with no scientific evidence that human intelligence is required for AI to understand the world.

We are also some way from achieving that goal, regardless of what people like Sam Altman from OpenAI or LeCun say. Carrier added: “Our socio-technical tools are a long way away from replicating human intelligence, which is grounded in centuries-long development of shared moral frameworks.”


These frameworks are also already being violated by the current moves in AI, particularly around bias, data overreach and intellectual property theft, he said.

Other areas of concern for Carrier include the use of emotional recognition tools with no grounding in truth and text generation treated as fact with no backing.

“This is a small sample of the meaningful problems that exist in our tools today that must be overcome prior to the leaps in innovation described by Mr. LeCun, that remain faith-based,” Carrier said.

“Lastly, I would echo the famous Jeff Goldblum line from Jurassic Park, 'your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should'.”

Someone is going to make AGI — why not make it open?

The major argument in favor of the Meta approach is that someone is going to make AGI eventually, whether it is a national government-backed project from China or a Big Tech-backed project like Microsoft’s funding of OpenAI. 

Aravind Srinivas, founder and CEO of Perplexity AI wrote on X: “Open Source AGI is an amazing vision. You are building a very powerful technology, and, actually aligning to what makes sense for the world: more people have a say in what makes sense and doesn't.”
