Fortune
Jim Stratton

Those who solve the data dilemma will win the A.I. revolution


We’ve long talked about the necessity of getting your data house in order to drive better decision-making, organizational agility, and employee engagement. But with artificial intelligence quickly becoming a key driver of competitive advantage, the need for high-quality, timely data is greater than ever.

That’s because good A.I. outcomes—whether a recommended next-best action, anomaly or threat detection, or a customer service response using generative A.I.—depend on plentiful, high-quality data to train the underlying models.

Many of the generative A.I. use cases that have wowed the public in recent months will, if delivered with sufficient safety and guardrails, genuinely add value to how we get work done. But they will also quickly become commoditized. To differentiate yourself from your competitors, you’ll need to leverage your organization’s proprietary data to deliver the best possible outcomes for your business.

Although data is the lifeblood of A.I., reliable data is all too often hard to find within your own organization. In A.I. IQ: Insights on Artificial Intelligence in the Enterprise, a study sponsored by Workday, 77% of respondents were concerned that their organization’s data is neither timely nor reliable enough to use with A.I. and machine learning (M.L.). Insufficient data volume or quality was also the top reason (cited by 29%) that A.I. and M.L. deployments fell short of expectations.

Just stop for a moment and let that sink in: The vast majority of organizations don’t fully trust their own data to get them the best possible A.I. outcomes.

A big driver of this is the sheer number of applications that the average organization uses. All of these applications use and produce data. A recent Accenture report found that the average company uses more than 500 applications from multiple vendors, and 80% of the survey respondents say they will buy more applications from additional vendors in the next two years. It’s a dilemma. You have vast amounts of something very valuable, but have a hard time getting all of it into a form that’s reliable and timely. It’s safe to say that this data dilemma is not going to be solved by most companies anytime soon.

Overly complex technology portfolios stifle value in data collection and curation, and that same complexity makes it hard to bring all of your data into one place to feed A.I. algorithms. Each of these applications is a data silo that must be integrated, curated, governed, and secured if you want its data to fuel the best possible outcomes and insights.

Integrating disjointed systems often carries a heavy development and maintenance cost, and achieving consistency across those systems means, by definition, sacrificing the timeliness of the unified dataset. The result is an outdated view that shows how your company was running, not what is happening right now. If siloed data breeds risk and unreliability, simplifying the data domain through modern platforms offers hope. Key anchor systems that support the enterprise, such as customer relationship management (CRM), human capital management (HCM), financial management, and inventory management, can reduce both security risk and data silos.

To solve the A.I. data dilemma, start with the business outcomes and insights you want to deliver. Only then should you identify the data that will drive those outcomes and insights. Too many companies start with the data and then try to wring insights from it. That approach is backward: it wastes time and doesn’t deliver business value.

Once you have the business outcome in mind, the way to get the most out of the data you do have (ideally in a unified platform rather than a mosaic of systems) is to treat that data like a product within your own company. Explicitly define an owner for the data and a service-level agreement, or SLA, indicating how reliable and timely it must be. And once you have done the engineering work to make that data available, make it available to the whole enterprise—not just the team that created it, owns it, or was the first to ask for it.
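To make that concrete, a data-product contract can be as simple as a small, machine-readable record published alongside the dataset. The sketch below is a hypothetical illustration in Python; the field names, thresholds, and example dataset are assumptions for illustration, not a prescription from Workday or the study cited above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataProductContract:
    """Hypothetical contract declaring ownership and an SLA for a dataset."""
    name: str                     # logical name of the data product
    owner: str                    # accountable team or individual
    freshness_hours: int          # max age before the data is considered stale
    completeness_pct: float       # minimum share of required fields populated
    shared_enterprise_wide: bool  # available beyond the originating team

# Example: a workforce dataset published for the whole enterprise,
# with explicit timeliness and reliability targets.
headcount = DataProductContract(
    name="workforce.headcount_daily",
    owner="people-analytics@example.com",
    freshness_hours=24,
    completeness_pct=99.5,
    shared_enterprise_wide=True,
)
```

However it is expressed, the point is the same: the owner and the SLA are written down and visible to the whole enterprise, not implied by whichever team happened to build the pipeline.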

And remember, it’s a myth that quantity is the be-all and end-all of robust A.I. It’s our contention—and our experience—that data quality scales A.I. better than data quantity. Data quality at the scale needed for A.I. is harder to achieve than sheer quantity, but it is fundamental to reliable, responsible, and useful A.I. in the workplace.

It’s important to note that the transformative nature of A.I. is likely both overhyped in the short term and underhyped in the long term. There are use cases we can’t yet imagine that will create new industries, business models, and ways of working. The one thing that won’t change, though, is the need for reliable data. Those who do the hard work now of getting their data houses in order are poised to reap the benefits of whatever the future holds.

Jim Stratton is chief technology officer at Workday. Workday is a Fortune Live Media partner.
