Fortune
Sage Lazzaro

My chatbot's no Einstein, but it's got a great personality

A person holding a smartphone that is displaying a digital avatar of a young woman. (Credit: Photo illustration by Jaap Arriens/NurPhoto—Getty Images)

Hello and welcome to Eye on AI. In today’s edition… A Stanford study digs into people’s perceptions of AI; a court ruling deals a blow to AI companies’ “fair use” defense for training models on copyrighted material; a co-developer of AlphaFold launches a new protein design startup; Adobe adds a more brand-safe option to the slew of new text-to-video models; and CrowdStrike launches a new AI triage technology it says can save security teams 40 hours per week.

It’s no secret that people are not only increasingly using AI chatbots but, in some cases, growing attached to them and viewing them as companions.

A study conducted by the Stanford Social Media Lab and BetterUp, which tracked Americans’ perceptions of AI over the course of a year, found that perceptions of AI’s human-likeness, warmth, and trustworthiness have significantly increased. Intriguingly, the same study showed that even as respondents’ trust in AI increased, their perception of its competence decreased.

“This is important because it suggests that people are changing how they think about these complex systems as they start to see AI less like powerful ‘computers’ or ‘search engines’ [and more like] friendly, helpful, human-like ‘assistants,’” Angela Y. Lee, one of the paper’s lead authors, told me. 

The research raises important questions about the role chatbots’ friendly attributes play in building blind trust, how this could lead to overreliance on the technology, and the responsibility AI companies bear for how they present their products.

Perception of AI as a friend rises 

To get a sense of how the population’s perceptions of AI are changing over time, the researchers continually recruited participants and collected their opinions on AI from May 2023 to August 2024, ultimately talking to a nationally representative sample of nearly 13,000 Americans. They asked which AI tools people use, how frequently they use them, and how willing they are to adopt AI, along with questions to assess their trust in AI, but they focused largely on how people responded when asked to provide a metaphor to describe AI.

Metaphors have long been used to describe technology and can say a lot about people’s implicit perceptions. For example, the paper notes how early metaphors of the internet as a “superhighway” showed how people thought it could connect users to diverse digital destinations.

Over the course of the study, metaphors describing AI as a distinctly non-human entity, such as a “computer” or “search engine,” all declined in use, while the rate of anthropomorphic metaphors (such as “friend,” “god,” “teacher,” and “assistant”) jumped significantly (34%). Taken together with the 41% increase in respondents’ implicit perceptions of warmth toward the technology, the results suggest a societal shift toward seeing AI as more human-like and warm. The researchers also found differences among demographics, with older participants and non-white participants (and in particular Black participants) reporting significantly higher levels of trust in AI.

Trust issues

Notably, these positive feelings didn’t correspond to an increased perception that AI is competent: Implicit perceptions of AI as competent decreased by 8% over time.

Information about AI’s inaccuracies and penchant for hallucination is everywhere, and scrutiny is only growing as the technology develops and people are increasingly encouraged to adopt it. From the botched launch of Google’s AI Overviews, which told a user to put glue on pizza, to the continuous onslaught of studies highlighting chatbots’ hallucination problem (my Eye on AI co-writer Jeremy Kahn covered a fresh BBC study on chatbot hallucinations pertaining to current events in Tuesday’s newsletter), it’s easy to see why confidence in AI chatbots’ abilities has only gone down. Just this past week, Google even had to correct a factual inaccuracy Gemini produced in its Super Bowl commercial.

With trust rising even amid decreased perceptions of competence, the researchers argue that the choices tech companies make when introducing AI technologies, especially chatbots, can influence how likely people are to trust them. For example, developers could give chatbots a more neutral tone, avoid having them use first-person pronouns like “I,” and limit the ways they emotionally engage with the user.

“It’s important to remember that too much blind trust in AI may have consequences, such as overreliance on the technology,” Lee said. 

And with that, here’s more AI news. 

Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com
