Fortune
Paolo Confino

Google’s A.I. is about to breach a new frontier: Life advice, from a chatbot

Alphabet CEO Sundar Pichai stands in front of a presentation. (Credit: Justin Sullivan)

“Hey Google, why is my life a mess?”

Soon, Alphabet’s A.I. assistant might very well be able to answer that question, formerly the province of best friends and therapists. Google is reportedly planning an A.I. chatbot dedicated to providing life advice, according to the New York Times.

The project comes from Alphabet’s Google DeepMind division, created in April as a merger of DeepMind—the machine learning lab from England, acquired in 2014—and the Brain team from Google Research. The goal for the potential life-coach-styled bot is loftier than that of the more basic customer service bots widely used today. For example, the new technology would answer personal questions, such as how to navigate a sensitive conversation with a friend or work through an ethical quandary.

It would be the latest development in the arms race to create generative A.I. tools tailored for specific purposes. Doing so has already proved successful for a handful of A.I. startups. The specialty legal A.I. tool Casetext sold for $650 million to Thomson Reuters last month, and Jasper, designed specifically for marketing professionals, is now a unicorn valued at over $1.5 billion, according to the venture capital deal-tracking website PitchBook. It’s such a promising exit strategy that PitchBook even offers an A.I. tool meant to evaluate which startups it thinks will eventually be successful.

Google’s newest effort is a “significant departure from the highly prevalent chatbots that many of us are already used to,” says Columbia Business School management professor Dan Wang, who researches A.I. and social media. (The company did not immediately respond to a request for comment.) 

Existing models like Google Bard and OpenAI’s ChatGPT are meant to be general, Wikipedia-style databases that can provide answers on, or write essays about, a huge range of topics but struggle to create unique interactions. Wang calls these chatbots “deterministic,” because they are geared toward providing a very precise answer. To offer life advice, however, they’d need to be more adaptable so they could handle the unpredictability of personal queries.

“In some cases, what customers really want is a real conversation,” Wang says. Generative A.I. “creates the possibility that a chatbot can develop relational intelligence, and also empathy to go along with it—or perceived empathy, from the human's perspective.” 

The science of conversation

A chatbot like the one Google is building would need what’s known as “voice flexibility,” according to Wang, which would allow it to pick up on the type of conversation a person wants to have. Are they trying to open up emotionally? In search of practical advice? Or just looking to vent about a long day? Even humans can miss these emotional cues in conversations with their friends. If a highly specialized chatbot—especially one that endeavors to replicate the particularly human interaction of getting advice—misses the mark here, it can quickly ruin the conversation.

Such a customized bot would require extensive fine-tuning to hold a stimulating conversation, Wang says. And that’s saying nothing of providing actual advice, which can open the bot up to ethical and legal liabilities. In an example of a worst-case scenario for A.I. handling sensitive topics, the National Eating Disorders Association had to shut down its A.I.-powered hotline after it gave callers poor advice. The codirector of the Artificial Intelligence & Equality Initiative at the Carnegie Council for Ethics in International Affairs even went so far as to accuse Silicon Valley firms of knowingly abdicating their responsibility to develop A.I. ethically.

Without rigorous testing, “it’s really hard to envision not only the potential benefits, but also the potential harms that an automated chatbot could produce,” Wang says.

As research in this field progresses, other downsides will likely emerge, Wang noted, pointing to research on social media as an example.

There’s been an overwhelming amount of evidence in recent years supporting the case that social media use is directly tied to mental health issues ranging from anxiety to body image problems to insomnia. In May, the Surgeon General issued an advisory warning that excessive social media use can increase kids’ likelihood of developing eating disorders, anxiety, and depression. A study commissioned by the U.K. Parliament found that teens who used social media three to four hours a day reported mental health problems at more than twice the rate of those who didn’t use it at all.

Many tech giants are searching for A.I.’s human touch

Google’s new project is just the latest example of an A.I. developer trying to replicate a decidedly human interaction. 

Even Apple, which has stayed conspicuously out of the A.I. furor of late, has a health coaching app in the works that would use A.I. to make recommendations about exercise, sleep habits, and dieting. The new product, codenamed Quartz, appears quite similar to Google’s life-advice-purveying chatbot. While Google’s and Apple’s versions seem relatively innocuous, upstarts have other versions in the works that are decidedly more focused on the wholesale replication of human relationships.

In May, Fortune reported that influencer Caryn Marjorie had developed an A.I. chatbot version of herself that would serve as a virtual girlfriend to anyone online for $1 a minute. And A.I. startup Inflection AI has a chatbot called Pi—which stands for personal intelligence—that’s built to be, above all else, a good conversationalist, according to its founder Mustafa Suleyman, who also happens to be a cofounder of DeepMind.

Judging by the market, these humanity-infused A.I. products look like potentially lucrative opportunities: Marjorie’s virtual girlfriend made $71,610 in its first week, and Inflection AI raised $1.3 billion in funding in June.
