
Among the most interesting AI stories this week was an item about a Boston-area startup called OpenEvidence that uses generative AI to answer clinical questions based on data from leading medical journals. The free-to-use app has proved enormously popular among doctors, with some surveys suggesting at least 40% of all U.S. physicians are using OpenEvidence to stay on top of the latest medical research and to ensure they are offering the most up-to-date treatments to patients. On the back of that kind of viral growth, OpenEvidence was able to raise $210 million in a venture capital deal in July that valued the company at $3.5 billion. OpenEvidence is also the same company that a few weeks back said its AI system was able to score 100% on the U.S. medical licensing exam. (See the “Eye on AI Numbers” section of the August 21st edition of this newsletter.) All of which may explain why, just a month later, the company is reportedly in talks for another venture deal that would almost double that valuation to $6 billion. (That’s according to a story in the tech publication The Information, which cited three unnamed people it said had knowledge of the discussions.)
A lot of the use of OpenEvidence today would qualify as “shadow AI”—doctors are using it and finding value, but they aren’t necessarily admitting to their patients or employers that they are using it. They are also often using it outside enterprise-grade systems that are designed to provide higher levels of security, data privacy, and compliance, and to integrate seamlessly with other business systems.
Ultimately, that could be a problem, according to Andreas Cleve, the cofounder and CEO of Corti, a Danish medical AI company that is increasingly finding traction by offering healthcare institutions “AI infrastructure” designed specifically for medical use cases. (Full disclosure: Corti’s partners include Wolters Kluwer, a huge software company that markets a clinical evidence engine called UpToDate that competes with OpenEvidence.)
From medical assistants to ‘AI infrastructure’ for healthcare
AI infrastructure is a pivot for Corti, which was founded back in 2016 and has spent most of its existence building its own speech recognition and language understanding systems for emergency services and hospitals. The company still markets its “Corti assistant” as a solution for healthcare systems that want an AI-powered clinical scribe that can operate well in noisy hospital environments and integrate with electronic health records. But Cleve told me in a recent conversation that the company doesn’t see its future in selling a front-end solution to doctors, but rather in selling key components in “the AI stack” to the companies that are offering front-end tools.
“We tried to be both a product vendor for healthcare and an infrastructure vendor, and that meant competing with all the other apps in healthcare, and it was like, terrible,” he says. Instead, Corti has decided its real value lies in providing the “healthcare grade” backend on which AI applications, many of them produced by third parties, run. The backend Corti provides includes medical AI models—which others can wrap user-facing products around—as well as the platform on which AI agents for healthcare use cases can run. For instance, it has built FactsR, an AI reasoning model, offered through an API, that is designed to check the facts that medical notetaking scribes or clinical AI systems produce. It uses a lot of tokens, Cleve says, which would make it too expensive for general-purpose voice transcription. But because of how much is riding on clinical notes being accurate, it can be worth it to a vendor to pay for FactsR, Cleve says.
Another example: earlier this summer, Corti announced a partnership with Voicepoint, a speech recognition and digital transcription service used by doctors across Switzerland. Voicepoint will use Corti’s AI models to help with tasks such as summarizing conversations into medical notes and possibly, in the future, with diagnostic support. To do this, though, Corti had to set up dedicated AI infrastructure, including data centers located in Switzerland, to comply with strict Swiss data residency rules. Now, Corti is able to offer this same backbone infrastructure to other healthcare companies that want to deploy AI solutions in Switzerland. And Corti has similar AI infrastructure in place in countries like Germany that also have strict data residency and data privacy rules.
Cleve tells me that healthcare is increasingly part of the discussions around “sovereign AI.” This is particularly true in Europe, where many governments are worried about having their citizens’ medical information stored on the servers of U.S. companies, which might be subject to U.S. government pressure, legal or otherwise, to provide data access. “None of these things are doable today, because the majority of all the AI apps are running on OpenAI, Anthropic, or Gemini, and they are all American companies over which America asserts jurisdiction,” Cleve says.
But even within the U.S., strict cybersecurity and patient privacy requirements often mean that using an off-the-shelf, general-purpose AI system won’t cut it. “A lot of customers have requirements like, ‘Hey, we will never want to have data leave premises, or we will never share a tenant, or we will never co-encrypt with our consumer customer on the GPU rack, because we want to know where our data is because we have to prove that to legislators,’” he says.
It’s unlikely one medical AI model will rule them all
Cleve also tells me that he thinks the giant, general-purpose AI builders—the likes of OpenAI, Anthropic, and Google—are unlikely to conquer healthcare, despite the fact that they have been making moves to build models either fine-tuned or specifically trained to answer clinical questions. He says this is because healthcare isn’t a single vertical, but rather a collection of highly specialized niches, most of which are too narrow to be interesting to these tech behemoths. The note-taking needs of a GP in a relatively quiet office who needs to summarize a 10-minute consultation are quite different from those of a doctor working in the chaos and noise of a busy city ER, which are different again from those of a psychiatrist who needs to summarize not just a 10-minute consultation but maybe an hour-long therapy session. As an example, Cleve says another Corti customer is a company in Germany that makes software just to help dentists automate billing based on audio transcripts of their sessions with patients. “They’re a vertical within a vertical,” he says. “But they are growing like 100% a year and have done so for several years. But they are super niche.”
It will be interesting to watch Corti going forward. Perhaps Cleve is correct that the AI stack is wide enough, deep enough, and varied enough to create opportunities for lots of different vertical and regional players. Or, it could be that OpenAI, Microsoft, and Google devour everyone else. Time will tell.
With that, here’s more AI news.
Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn
Correction, Sept. 3: A previous version of this story misreported the year Corti was founded. It also mischaracterized the relationship between Corti and Wolters Kluwer.