A Massachusetts Institute of Technology study shook up Wall Street recently by claiming that 95% of organizations that have invested in generative artificial intelligence systems to improve operations or cut costs have gotten zero return on those investments.
Since generative artificial intelligence took the world by storm in early 2023 following the debut of OpenAI's ChatGPT, a big debate on Wall Street has been its impact on software makers that cater to the enterprise market — large companies, institutions and organizations.
And, since July, there's been a worry on Wall Street that generative artificial intelligence will be the "death of software," with software companies that have "per seat" business models possibly in trouble if increased productivity leads to smaller workforces.
Big-cap software maker Salesforce on Sept. 3 reported fiscal second-quarter earnings that topped estimates while its sales guidance for the October quarter came in below expectations.
With so much going on, IBD turned to Ben Lorica, who edits the Gradient Flow newsletter and hosts the Data Exchange podcast. Lorica also helps organize the AI Conference, the AI Agent Conference and the Applied AI Summit, and serves as the strategic content chair for AI at the Linux Foundation.
Enterprise Software: MIT Study Roils Wall Street
IBD: A recent MIT survey made waves by contending that the adoption of generative artificial intelligence has flopped at many companies. Were you surprised?
Lorica: The top line, that most pilots fail to advance to production, is in line with other surveys, including a survey we did earlier this year, though the number they cite was surprisingly high. The reality is that it's much easier to try things out and experiment, but many of these experiments won't work out. Also, the technology is moving quickly, and teams need to get foundational pieces in place. So adoption will take trial and error on the part of AI teams.
IBD: Why is enterprise adoption of generative artificial intelligence going slower than expected?
Lorica: I think it's just the fact that at the end of the day there are certain basics that you need in place. You have to have your data lined up and you have to be able to identify use cases.
Companies that were experimenting with AI in one form or another before the advent of generative AI are generally doing better. They probably already had a data platform and data pipelines in place, and they had data engineers on staff and maybe some data scientists. They had some processes and talent in place. So now with generative AI, they can just extend those initiatives to another form of AI.
So let's say a company wants to deploy AI for some customer support function. They need to have their data in place so that the AI can answer questions based on their internal data. That presupposes they have a data platform, data pipelines, data governance.
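To make that concrete, here is a minimal sketch of the pattern he's describing, often called retrieval-augmented generation: retrieve relevant internal documents, then ground the model's answer in them. The retrieve() and llm() functions are hypothetical stand-ins for whatever data platform and model provider a company actually uses.

```python
# Sketch of answering support questions from internal data (RAG pattern).
# retrieve() and llm() are hypothetical; plug in your own search and model.
from typing import Callable

def build_prompt(question: str, docs: list[str]) -> str:
    # Pack retrieved documents into the prompt so the model answers
    # from internal data rather than from its training set alone.
    context = "\n---\n".join(docs)
    return f"Answer using only the context below.\nContext:\n{context}\n\nQuestion: {question}"

def answer(question: str,
           retrieve: Callable[[str], list[str]],
           llm: Callable[[str], str]) -> str:
    docs = retrieve(question)                  # query the internal data platform
    return llm(build_prompt(question, docs))   # ground the model in those docs
```

Note that the hard part Lorica points to lives behind retrieve(): without a data platform, pipelines and governance, there is nothing reliable for the model to retrieve.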
Generative AI: Consumption Pricing
IBD: Given what happened in cloud computing, where enterprises learned that costs could wind up being much higher than expected, are companies leery of committing to AI in a big way because consumption-based pricing models are still in flux?
Lorica: I think it depends on the initiative they have in place. If their AI is simply calling some external API, there are a lot of options now and those costs are going down. But that doesn't mean there's no cost. AI models are now much more computationally intensive, and reasoning models consume a lot more tokens. So companies need to be a lot more careful in how they set up their AI applications. They need to be able to optimize: if it's a simple question, route it to a smaller, cheaper model; if it's a complex question, send it to a much more capable reasoning model.
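In code, the cost-aware routing Lorica describes might look like this minimal sketch. The model names and the complexity heuristic are illustrative assumptions, not any vendor's API.

```python
# Sketch of routing: cheap model for simple questions, reasoning model
# for complex ones. Heuristic and model names are made up for illustration.
def pick_model(question: str) -> str:
    # Naive complexity check; production routers often use a small
    # classifier model instead of keyword rules like these.
    hard_markers = ("why", "prove", "compare", "step by step", "plan")
    is_complex = len(question.split()) > 40 or any(
        marker in question.lower() for marker in hard_markers
    )
    return "reasoning-large" if is_complex else "small-fast"

print(pick_model("What are your support hours?"))              # small-fast
print(pick_model("Compare these two contracts step by step"))  # reasoning-large
```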
At the enterprises I talk to, cost is not the top-line concern. But fast forward a few years, when people will be using AI all over the place, not just training models but actually deploying them, and there's going to be a lot of demand for compute. So either we find alternative energy sources or we figure out how to develop and deploy these models much more efficiently. I think both are happening; there are initiatives on both fronts.
IBD: How are enterprises evaluating the models that are out there? Are pricing and token cost the biggest considerations, or reasoning performance, or some score on a hallucination index?
Lorica: The companies I come across, right now their priority is: let's get this going, but architect it so that we're not too dependent on a single model provider. They're focused on getting AI applications out there and working. So if that means paying for a more expensive, proprietary model for now, that's fine. But the belief is that the models are increasingly becoming comparable, and many companies use multiple providers.
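A minimal sketch of that "don't depend on one provider" architecture, assuming placeholder providers rather than real vendor SDKs: code against a thin interface so swapping model providers becomes a configuration change, not a rewrite.

```python
# Sketch of provider-agnostic wiring. ProviderA/ProviderB are stubs;
# a real backend would call the vendor's API inside complete().
from typing import Protocol

class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class ProviderA:
    def complete(self, prompt: str) -> str:
        return f"[provider-a] answer to: {prompt}"

class ProviderB:
    def complete(self, prompt: str) -> str:
        return f"[provider-b] answer to: {prompt}"

MODELS: dict[str, ChatModel] = {"a": ProviderA(), "b": ProviderB()}

def run(prompt: str, provider: str = "a") -> str:
    # Application code never imports a vendor SDK directly.
    return MODELS[provider].complete(prompt)
```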
Enterprise Software: Palantir A Model?
IBD: How are enterprise software companies helping companies with AI projects and deployment? Palantir, for example, has what they call forward-deployed engineers at companies.
Lorica: Every big software player in the space has an army of solution engineers that can help companies. So I don't think that's specifically unique to Palantir. Maybe Palantir is more aggressive about hand-holding. But the (cloud) hyperscalers have them, and all the leading enterprise software companies have them.
The challenge for companies is doing it as efficiently as possible. Developing an initial working prototype is not that hard. What's hard is getting comfortable enough to deploy it to production. At the end of the day, companies need to have their data strategy, data pipelines and data governance in place.
IBD: Are the large language model (LLM) companies like OpenAI going to emerge as competitors in the enterprise space with SaaS (software as a service) companies?
Lorica: Those companies have three sources of revenue, right? There's the consumer business, which is hard because there's acquisition cost. Then there's charging for the API (cloud computing consumption), which is sort of a race to the bottom. And then there are AI agents. On the agent side it's early. I haven't seen any signs that they're building a kind of data platform offering. The LLM or foundation model space is supercompetitive. Their competitors are releasing models at a regular cadence, every four to six months, and they need to keep up. So that takes resources. There's a lot of capital investment just in making sure you're keeping up on the model front.
IBD: For enterprises, are greenfield data centers built from scratch for artificial intelligence workloads a good option?
Lorica: Most of the enterprises I encounter seem to be going with the hyperscalers, just because they already have those relationships. CoreWeave, Lambda Labs and others have a lot of customers; I just don't personally run across as many of them as I do hyperscaler customers.
Enterprise Software: AI Startups Challenge Incumbents
IBD: As enterprises develop their own custom projects, are they interested in solutions from AI-native startups as opposed to incumbent SaaS companies?
Lorica: Enterprises probably will do a little bit of both. If they have relationships with one of the hyperscalers, or Databricks or Snowflake, it would be natural to start doing some generative AI projects with them because you already are working with them on the data front.
If you're a more adventurous enterprise and you have an army of really good engineers, you may want to do a little bit more (on your own) because that gives you more control of the models and all the tooling. So you might go that direction, but if you don't have that kind of staffing, then it's very natural to just go with whoever you're working with on the data front.
IBD: There are a lot of coding startups, such as Cursor, and buzz about vibe coding. Is there the potential that enterprises could write their own code and develop their own apps, replacing legacy software vendors like Salesforce, for example?
Lorica: First of all, I haven't seen the coding assistants build really large-scale enterprise software. At this point they're good at helping you become more productive as a programmer. But for large-scale projects, they're not quite there yet. Maybe in a few years? The way I think of coding assistants is that they're a productivity tool, just like any other productivity tool.
Generative Artificial Intelligence: Building Blocks
IBD: There's a lot of momentum for Anthropic's MCP (model context protocol) in the AI agent space. Why?
Lorica: People want a much more streamlined way to get AI agents to talk to external resources, external tools, external data sources. So either you do custom wiring for each of those, or you use MCP, which was originally Anthropic's protocol but is becoming more of a community effort. I would not be surprised if, at some point, Anthropic just hands the protocol over to a neutral body.
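For illustration, here is a minimal MCP tool server using the FastMCP helper from the official Python SDK (package and method names as of this writing; check current docs). The tool itself is a hypothetical stub standing in for an internal system.

```python
# Minimal MCP server sketch: exposes one tool that an AI agent can call
# instead of being custom-wired to the underlying system.
from mcp.server.fastmcp import FastMCP

server = FastMCP("internal-data")

@server.tool()
def lookup_order(order_id: str) -> str:
    """Return the status of an order from an internal system (stubbed)."""
    # A real implementation would query the company's data platform.
    return f"Order {order_id}: shipped"

if __name__ == "__main__":
    server.run()  # stdio transport by default, so any MCP client can connect
```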
IBD: Do any other building blocks come to mind that are needed to make generative artificial intelligence deployment faster?
Lorica: "Building blocks" is a broad term. Obviously if we had more chips, a bigger variety of chips besides Nvidia, that's a building block. If the AMD software stack got better and easier for people to deploy their models on, then that would be available on the protocol side.
Right now a lot of the focus is on foundation models and pretraining. But the reality is, most companies are not going to pretrain a foundation model. They're going to do post-training. So better tools are needed for post-training in whatever form it takes: fine-tuning, quantization, distillation, all the notions of post-training.
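As one concrete example of the post-training Lorica mentions, here is a minimal LoRA-style adapter in plain PyTorch: the pretrained weights stay frozen and only small low-rank matrices are trained. This is a sketch of the technique, not any particular library's implementation.

```python
# LoRA sketch: freeze the base layer, train two small low-rank matrices
# whose product is added to the base layer's output.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze pretrained weights
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)   # adapter starts as a no-op
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

layer = LoRALinear(nn.Linear(512, 512))
# Only the adapter parameters are trainable: 512*8 + 8*512 = 8,192 weights.
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))
```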
Multimodal foundation models will be important to watch. The first manifestation seems to be the vision language models that companies are releasing. That means the internal data platforms of companies will increasingly need to account for multimodality. That's why the Lance file format, which is optimized for unstructured data, is starting to attract attention.
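For illustration, a minimal sketch of storing mixed-modality records in a Lance-backed table via the lancedb Python package (API names as of this writing; the path, vectors and URIs are made up):

```python
# Sketch: one table holding embeddings alongside pointers to images,
# transcripts or other unstructured assets.
import lancedb

db = lancedb.connect("/tmp/multimodal-demo")
table = db.create_table(
    "assets",
    data=[
        {"vector": [0.1, 0.9], "caption": "product photo", "uri": "s3://bucket/img1.png"},
        {"vector": [0.8, 0.2], "caption": "support call transcript", "uri": "s3://bucket/call1.txt"},
    ],
)
# Nearest-neighbor lookup over the embedding column.
print(table.search([0.1, 0.8]).limit(1).to_list())
```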
Follow Reinhardt Krause on Twitter @reinhardtk_tech for updates on artificial intelligence, cybersecurity and cloud computing.