Fortune
Jeremy Kahn

The generative A.I. software race has begun

Portrait of C3.ai CEO Tom Siebel. (Credit: Chris J. Ratcliffe—Bloomberg via Getty Images)

The generative A.I. future is coming very fast. It is going to be highly disruptive—in both good ways and bad. And we are definitely not ready.

These points were hammered home to me over the past couple of days in conversations with executives from three different companies.

First, I spoke earlier today with Tom Siebel, the billionaire co-founder and CEO of C3.ai. He was briefing me on a new enterprise search tool C3.ai just announced that is powered by the same kinds of large language models that underpin OpenAI’s ChatGPT. But unlike ChatGPT, C3.ai’s enterprise search bar retrieves answers from within a specific organization’s own knowledge base and can then both summarize that information into a concise paragraph and provide citations to the original documents where it found that information. What’s more, the new search tool can generate analytics, including charts and graphs, on the fly.

News of the new generative A.I.-powered search tool sent C3.ai's stock soaring—up 27% at one point during the day.

In a hypothetical example Siebel showed me, a manager in the U.K.’s National Health Service could type a simple question into the search bar: What is the trend for outpatient procedures completed daily by specialty across the NHS? Within about a second, the search engine retrieves information from multiple databases and creates both a pie chart showing a live snapshot of the share of procedures by specialty and a fever chart showing how each of those numbers is changing over time.

The key here is that those charts and graphs didn’t exist anywhere in the NHS’s vast corpus of documents; they were generated by the A.I. in response to a natural language query. The manager can also see a ranked list of the documents that contributed to those charts and drill down into each of them with a single mouse click. C3.ai has also built filters so a user can only retrieve data from the knowledge base that they are permitted to see—a key requirement for data privacy and national security for many of the government and financial services customers C3.ai works with.
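C3.ai hasn’t published the details of its implementation, but the basic pattern Siebel describes, retrieving only the documents a user is cleared to see, summarizing them, and returning citations, can be sketched in a few lines of Python. Everything below (the document fields, roles, and keyword matching) is a hypothetical stand-in; a production system would use a vector index and a language model rather than substring matching and string joins.

```python
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str          # citation handle returned to the user
    text: str
    allowed_roles: set   # roles permitted to see this document

# Toy knowledge base standing in for an organization's document corpus.
CORPUS = [
    Document("nhs-outpatient-2023-q1",
             "Outpatient cardiology procedures rose 4% in Q1.",
             {"analyst", "manager"}),
    Document("nhs-finance-internal",
             "Internal budget forecast for the next fiscal year.",
             {"finance"}),
]

def retrieve(query: str, user_role: str, corpus: list[Document]) -> list[Document]:
    """Return only the documents the user is allowed to see that match the query."""
    terms = query.lower().split()
    visible = [d for d in corpus if user_role in d.allowed_roles]
    return [d for d in visible if any(t in d.text.lower() for t in terms)]

def answer_with_citations(query: str, user_role: str) -> dict:
    hits = retrieve(query, user_role, CORPUS)
    # In a real system a language model would summarize the hits;
    # here we simply join them to keep the sketch self-contained.
    summary = " ".join(d.text for d in hits) or "No accessible documents matched."
    return {"summary": summary, "citations": [d.doc_id for d in hits]}

print(answer_with_citations("outpatient procedures", user_role="manager"))
```

The access check happens at retrieval time, before anything reaches the summarization step, which is what keeps restricted material out of the generated answer entirely.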

“I believe this is going to fundamentally change the human computer interaction model for enterprise applications,” Siebel says. “This is a genuinely game changing event.” He points out that everyone knows how to type in a search query. It requires no special training in how to use complex software. And C3.ai will begin rolling it out to customers that include the U.S. Department of Defense, the U.S. intelligence community, the U.S. Air Force, Koch Industries, and Shell Oil, with a general release scheduled for March.

Nikhil Krishnan, the chief technology officer for products at C3.ai, tells me that under the hood, most of the natural language processing is currently driven by FLAN-T5, a language model that Google developed and open-sourced. He says it has some advantages over OpenAI’s GPT models, not just in terms of cost, but also because it is small enough to run on almost any enterprise’s own network. GPT is too big for most customers to use, Krishnan says.
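For a rough sense of what “small enough to run on almost any enterprise’s network” looks like in code, here is a minimal sketch, assuming the open-source Hugging Face transformers library and one of the smaller FLAN-T5 checkpoints. The prompt format and example context are invented for illustration; this is not C3.ai’s actual pipeline.

```python
# Hypothetical grounded question answering with an open-source FLAN-T5 checkpoint.
from transformers import pipeline

# FLAN-T5 checkpoints range from tens of millions to billions of parameters;
# the smaller ones fit on ordinary enterprise hardware, which is the point
# Krishnan makes about running models inside a company's own network.
generator = pipeline("text2text-generation", model="google/flan-t5-base")

# In an enterprise search tool this context would come from the retrieval step;
# the passage here is invented for the example.
context = (
    "Outpatient cardiology procedures across the trust rose 4% in the first "
    "quarter, while dermatology procedures fell 2%."
)
question = "Which specialty saw outpatient procedures increase, and by how much?"

prompt = (
    "Answer the question using only the context below.\n\n"
    f"Context: {context}\n\nQuestion: {question}"
)

result = generator(prompt, max_new_tokens=64)
print(result[0]["generated_text"])
```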

Okay, so that’s pretty game changing. But in some ways, a system I had seen the day before seemed even more potentially disruptive. On Monday, I had coffee with Tariq Rauf, the founder and CEO of a London-based startup called Qatalog. Its A.I. software takes a simple prompt about the industry a company is in and then creates what is essentially a set of bespoke software tools just for that business. A bit like C3.ai’s enterprise search tool, Qatalog’s software can also pull data from existing systems and company documentation. But it can do more than just run analytics on top of that data: it can generate the code needed to run a Facebook ad using your marketing assets, all from a simple text prompt. “We have never built software this way, ever,” Rauf says.

People are still needed in this process, he points out. But you need a lot fewer of them than before. Qatalog could enable very small teams—think just a handful of people—to do the kind of work that once would have required dozens or even hundreds of employees or contractors. “And we are just in the foothills of this stuff,” he says.

Interestingly, Qatalog is built on top of open-source language models — in this case BLOOM, a system created by a research collective that included A.I. company Hugging Face, EleutherAI, and more than 250 other institutions. (It also uses some technology from OpenAI.) It's a reminder that OpenAI is not the only game in town. And Microsoft's early lead and partnership with OpenAI does not mean it's destined to win the race to create the most popular and effective generative A.I. workplace productivity tools. There are a lot of other competitors circling and scrambling for market share. And right now it is far from clear who will emerge on top.

Finally, I also spent some time this week with Nicole Eagan, the chief strategy officer at the cybersecurity firm Darktrace, and Max Heinemeyer, the company’s chief product officer. For Fortune’s February/March magazine cover story on ChatGPT and its creator, OpenAI, I interviewed Maya Horowitz, the head of research at cybersecurity company Check Point. She told me that her team had managed to get ChatGPT to construct every stage of a cyberattack, starting with crafting a convincing phishing email and proceeding all the way through writing the malware, embedding it in a document, and attaching that document to an email. Horowitz told me she worried that by lowering the barrier to writing malware, ChatGPT would lead to many more cyberattacks.

Darktrace's Eagan and Heinemeyer share this concern — but they point to another scary use of ChatGPT. While the total number of cyberattacks monitored by Darktrace has remained about the same, Eagan and Heinemeyer have noticed a shift in cybercriminals’ tactics: The share of phishing emails that rely on tricking a victim into clicking a malicious link embedded in the email has actually declined from 22% to just 14%. But the average linguistic complexity of the phishing emails Darktrace analyzes has jumped by 17%.

Darktrace's working theory, Heinemeyer tells me, is that ChatGPT is allowing cybercriminals to rely less on infecting a victim’s machine with malware and to instead hit paydirt through sophisticated social engineering scams. Consider a phishing email designed to impersonate a top executive at a company and flag an overdue bill: If the style and tone of the message are convincing enough, an employee could be duped into wiring money to a fraudster's account. Criminal gangs could also use ChatGPT to pull off even more complex, long-term cons that depend on building a greater degree of trust with the victim. (Generative A.I. for voices is also making it easier to impersonate executives on phone calls, which can be combined with fake emails into elaborate scams—none of which depend on traditional hacking tools.)

Eagan shared that Darktrace has been experimenting with its own generative A.I. systems for red-teaming and cybersecurity testing, using a large language model fine-tuned on a customer’s own email archives to produce remarkably convincing phishing emails. Eagan says she recently fell for one of these emails herself, sent by her own cybersecurity team to test her. One of the tricks: the phishing email was inserted as what appeared to be a reply in a legitimate email thread, making the phish nearly impossible to detect from visual or linguistic cues in the email itself.

To Eagan this is just further evidence of the need to use automated systems to detect and contain cyberattacks at machine speed, since the odds of identifying and stopping every phishing email have just become that much longer.

Phishing attacks on steroids; analytics built on the fly on top of summary replies to search queries; bespoke software at the click of a mouse. The future is coming at us fast.

Before we get to the rest of this week’s A.I. news, a quick correction on last week's special edition of the newsletter: I misspelled the name of the computer scientist who heads JPMorgan Chase's A.I. research group. It is Manuela Veloso. I also misstated the amount the bank is spending per year on technology. It is $14 billion, not $12 billion. My apologies.

And with that, here is the rest of this week's A.I. news.

Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com
