Etiquette experts advise not talking about politics and religion at the holiday dinner table, but there's good reason to talk about AI, especially with older relatives.
Why it matters: Seniors are already prime targets for scams, and AI is making it easier and cheaper to generate convincing text, audio and video fakes pretending to be relatives in desperate need.
By the numbers: Older adults misidentified online content one out of every three times, according to a recent study by home healthcare firm The CareSide and researchers from Harvard and the University of Minnesota.
- In a short quiz, participating seniors called real content fake, or fake content real, about a third of the time, even when they were fairly confident in their ability to tell the difference.
Here's a brief guide to helping your relatives (and yourself) navigate the blurring online world of human and machine creations.
Understand how LLMs work (and how they don't)
The big picture: You don't need a PhD in machine learning to understand or help others understand AI.
- The large language models (LLMs) that power chatbots aren't magic. They're prediction machines, guessing the most likely next word (or pixel or video frame).
- Bots are also charming. They're friendly, funny and can simulate empathy. And even though they're getting better at not automatically agreeing with a user's view, they're still rewarded for telling you what you want to hear.
- It's important to take the shame out of not understanding how all of this works, Harvard researcher Fred Heiding told Axios, especially if your loved one has fallen for a scam.
Talk about it
Case in point: Make sure your relatives know about phone and text scams. Talk about how a suspicious call or text might sound: A "grandchild" in trouble, a bank demanding immediate payment, a caregiver asking for urgent gift cards.
- Tell grandparents that it is always OK to hang up and call back on a number they know, or to check in a family group text before sending money.
- Come up with a family "safe word" as a simple way to verify if a relative is who they say they are on a call or text. "It can also be a gentle on-ramp to talking about AI risks more generally with grandparents," Sarah Dooley, founder of AI-Empowered Mom, told Axios.
Learn AI's "tells"
Between the lines: Text, image and video generators are getting more realistic, but there are still ways to detect AI content.
- If you see something slightly off, "your gut should tell you to look closer," says AI debunker Jeremy Carrasco, founder of Showtools.ai on YouTube and TikTok.
- If the video looks like security camera or body cam footage, that's a warning sign. AI video generators are great at faking that style because we're used to those videos looking a little grainy.
Know that chatbots are often confidently wrong
Reality check: By now most people understand that AI output always needs to be fact-checked. LLMs hallucinate, confidently inventing facts, quotes and sources.
- Chatbots sound confident all the time. Their responses can fool even the smartest users.
- Videos especially train us to respond instantly and instinctively, without asking the questions we normally would. "Carefulness and awareness are the most important skills to rebuild, especially for seniors who grew up in a world where photos and videos were proof," NewsGuard managing editor Roberta Schmid tells Axios.
- Sit down with your older relatives and ask their favorite chatbot about a topic they know well. See if they can spot the flaws in its answers.
Go deeper: The simplest guide to using chatbots.