Fortune
Sharon Goldman

Exclusive: Scale AI secures $1B funding at $14B valuation as its CEO predicts big revenue growth and profitability by year-end

Scale AI has landed $1 billion in new funding that values the buzzy eight-year-old startup at $14 billion, placing it in an exclusive club of companies that have been able to surf the generative AI wave to such a high valuation.

Existing investor Accel led the Series F round, announced on Tuesday, with participation from other returning investors including Wellington Management, Y Combinator, Spark Capital, Founders Fund, Greenoaks and Tiger Global Management. New investors include DFJ Growth, Elad Gil, Amazon, ServiceNow Ventures, Intel Capital and AMD Ventures. 

Scale AI provides human workers and software services that help companies label and test data for AI model training, a critical step in getting AI to be effective. For Scale AI, that business is growing quickly as corporate customers race to adopt products related to generative AI—so much so that a whopping 90% of its business is now driven by spending on generative AI.

In an exclusive interview with Fortune, Scale AI CEO Alexandr Wang shared previously undisclosed details about the company’s financials that show just how quickly that growth is happening. The company’s annual recurring revenue—the money paid by businesses for Scale AI's services over extended periods of time—tripled in 2023 to an undisclosed amount and is expected to reach $1.4 billion by the end of 2024, he said.

With revenue growing 200% year on year, Wang added, “we expect to be profitable by the end of this year as well.” The company declined to say whether he was referring to profit under generally accepted accounting principles or to a measure that excludes certain expenses.

Scale’s previous funding round was in April 2021, when it raised $325 million at a $7.3 billion valuation. Even back then, outsiders were speculating about a possible IPO, but Wang declined to discuss the company’s prospects of going public. "I think at this point, we're a large, private company,” he said. “We're definitely very thoughtful about IPO timelines and preparing."

Scale AI focuses on data, one of AI's ‘picks and shovels’

When Wang and co-founder Lucy Guo launched Scale within startup accelerator Y Combinator in 2016 (Guo left the company in 2018), Wang was just 19 and had dropped out of studying AI and machine learning at MIT. He made the leap after seeing the promise of hiring an army of on-demand human workers to process and label data, such as images or text, to create the high-quality datasets required to train AI models.

“The reason I started Scale was to solve the data problem in AI,” he said. Powering AI is a three-legged stool of algorithms, computational power, and data, he explained. There were already teams solving algorithmic problems, like OpenAI, which went on to create ChatGPT, and others solving computational bottlenecks, like chipmaker Nvidia. But there was “nobody working on solving the data problems,” he said. 

That might be because the work is considered by many to be the most laborious and menial part of tech. But Scale AI immediately scored big by filling the needs of autonomous-driving companies like Cruise and Waymo, which had massive amounts of data collected by cameras and other sensors. That kind of data work turned out to be among the key ‘picks and shovels’ infrastructure needs of the generative AI boom, which Scale AI jumped on early. In 2019, it helped OpenAI train GPT-2 with the team that later left to found fellow hot AI startup Anthropic.

In 2020, Scale expanded its focus to government and military clients, building the first systems for government geospatial data, and went on to snag major contracts with the US Department of Defense and other government agencies. By age 24, Wang was a billionaire on paper. 

Today, Wang says Scale has evolved: He sees it as an infrastructure provider serving the entire AI ecosystem, building what it calls the “data foundry,” which goes beyond the massive abundance of data it has gathered and labeled through its contracted human data annotators. More recently, Scale has turned to curating highly specialized data, using experts in different fields to fine-tune models and push the boundaries of what the models can do. Finally, Scale has also focused on measurement and evaluation of models to help address risks and improve security, which Wang says will be a key part of the company’s business in the next year or two.

“Nearly every major large language model is built on top of our data foundry, so for us this is really a milestone,” he said. “I think the entire industry expects that AI is only going to grow, the models are only gonna get bigger, the algorithms are only going to get more complex and, therefore, the requirements on data will continue growing—we want to make sure that we're well-capitalized.” 

A history of human data work

Along with other data annotation firms like Amazon's Mechanical Turk and Appen, Scale has previously drawn criticism for its reliance on contracted human data annotators. Until recently, these were tens of thousands of gig workers in Africa or Asia working through Scale’s subsidiary Remotasks, doing what the Washington Post in 2023 described as low-paying work with frequently delayed payments. (Scale AI said in a statement that the pay system on Remotasks “is continually improving” based on worker feedback and that “delays or interruptions to payments are exceedingly rare.”)

More recently, while Remotasks workers still do work for Scale's automotive clients, the company has shifted its focus to more highly specialized workers, ranging from Ph.D.-level academics, lawyers, and accountants to poets, authors, and those fluent in specific languages. These workers, who help train and test models for companies from OpenAI and Cohere to Anthropic and Google, also work through a third party, often another Scale subsidiary called Outlier, but are paid higher hourly wages (about $30 to $60 an hour, according to job listings). A New York Times article on the practice found worker reviews to be mixed.

When asked why Ph.D.-level experts would work to help train AI by rating chatbot responses, Wang said that there are various reasons. "They have an opportunity to have truly society-level impact," he said. "If you're a Ph.D. and you're used to doing some very niche, esoteric research that maybe a handful of people around the world understand and can comment on, [now you can help] improve and build frontier data for these AI systems." He added that scientists in particular are optimistic. "If we're able to keep improving these models, that's actually a tool that can potentially enable us to have far greater scientific discovery in the future—so actually I think it's an extremely exciting and inspirational opportunity for a lot of them."

Wang added that data capturing complex reasoning from experts is a must for AI in the future. "You can’t feed any old data into these algorithms and it'll improve itself," he explained, citing the limits of scraping data from sources like comments on the online message board Reddit. Scale has built processes in which a model takes a first pass at, say, writing a research paper, and human experts then refine what the technology spits out, improving the data and, in turn, the model's output.
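To make that workflow concrete, here is a minimal sketch, in Python, of the general "model drafts, expert refines" loop Wang describes. Everything in it (the TrainingExample shape, model_first_pass, expert_review) is a hypothetical placeholder for illustration, not Scale AI's actual pipeline or any real API.

```python
# Illustrative only: a generic "model drafts, human expert refines" loop.
# All names here are hypothetical placeholders, not Scale AI's tooling.
from dataclasses import dataclass

@dataclass
class TrainingExample:
    prompt: str
    draft: str     # the model's first pass
    response: str  # the expert-refined answer kept for fine-tuning

def model_first_pass(prompt: str) -> str:
    # Placeholder for a call to a language model that produces a rough draft.
    return f"[rough draft for: {prompt}]"

def expert_review(prompt: str, draft: str) -> str:
    # Placeholder for a domain expert editing, correcting, or rewriting the draft.
    return draft.replace("[rough draft", "[expert-refined answer")

def build_dataset(prompts: list[str]) -> list[TrainingExample]:
    """Generate drafts, route them to human experts, and collect the
    refined results as supervised training examples."""
    examples = []
    for prompt in prompts:
        draft = model_first_pass(prompt)
        examples.append(TrainingExample(prompt, draft, expert_review(prompt, draft)))
    return examples

if __name__ == "__main__":
    for ex in build_dataset(["Summarize recent advances in protein folding."]):
        print(ex.prompt, "->", ex.response)
```

The value of keeping both the draft and the refined response is that the difference between them is precisely the expert's contribution, which is the signal the next round of training is meant to capture.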

When asked about the future of AI-generated and annotated data, which some say could eliminate the need for human data annotation, Wang pointed out that Scale AI, too, is investing in so-called synthetic data as well as human-created data. "Our view of it is hybrid," he said, explaining that while AI-generated data will be important, the only way to get the necessary quality and accuracy will be through verification with human experts.

"It's really about what are the gems?" he said. "What are the really high quality pieces of data that you can produce that are going to be able to push the frontier for the technology? 

The future is measurement and evaluation of AI systems

If dealing with data is part of the “picks and shovels” of building AI, isn’t it a commodity that any company can tackle? Scale AI does have a long list of competitors, including Snorkel AI and Labelbox, but Wang insists that the data problem is far from commoditized.

“It's a pivotal moment for the industry,” he said. “I think we are now entering a phase where further improvements and further gains from the models are not going to be won easily. They're going to require increasing investments, and are gonna require innovations in computation and efficient algorithms, and innovations in data. Our leg of that stool is to ensure that we continue innovating on data.”

That includes building a testing and evaluation system able to serve governments, so they can ensure these models are safe for their constituencies, as well as enterprises. Last year, for example, Scale provided an evaluation platform for a first-ever AI security challenge at the annual DEFCON hacker convention in Las Vegas, which tested AI models including OpenAI's GPT-4 and was supported by the White House Office of Science and Technology Policy (OSTP).

Wang is open about the fact that his background growing up in Los Alamos, N.M., where his parents were scientists at the National Lab (where Robert Oppenheimer famously ran the Manhattan Project), led to his heightened concern about geopolitical events and what they mean for the U.S. and democracy. A 2018 trip to China further inspired him to work on national security problems and to turn Scale's attention toward using data to boost the security and safety of AI models.

To that end, Wang has said AI is only as good as the quality of the data it is trained on, and as an infrastructure provider, he explained, Scale AI has to be one step ahead of where the technology's going. "We have to lay down the tracks before the trains run on top of it," he said. "So the burden that we have is, how do we constantly stay ahead to properly serve the entire ecosystem? If we can do that, then I think we will be incredibly successful."
