
Welcome to Eye on AI, with AI reporter Sharon Goldman. In this edition…a new report says a growing standard for fighting AI fakes puts privacy on the line…Nvidia and Intel announce a sweeping partnership to co-develop AI infrastructure and personal computing products…Meta raises its bet on smart glasses with an AI assistant…China’s DeepSeek says its hit model cost just $294,000 to train.
Last week, Google said its new Pixel 10 phones will ship with a feature aimed at one of the biggest questions of the AI era: Can you trust what you see? The devices now support the standard developed by the Coalition for Content Provenance and Authenticity (C2PA), backed by Google and other heavyweights like Adobe, Microsoft, Amazon, OpenAI, and Meta. At its core is something called Content Credentials, essentially a digital nutrition label for photos, videos, or audio. The metadata tag, which can’t easily be tampered with, shows who created a piece of media, how it was made, and whether AI played a role.
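To make the tamper-evidence idea concrete, here is a minimal conceptual sketch in Python. Real C2PA manifests are binary structures signed with certificate-backed keys; the HMAC signature, key, and field names below are illustrative stand-ins, not the actual C2PA schema.

```python
# Conceptual sketch of a Content Credentials-style label: bind provenance
# assertions to a hash of the media, then sign the bundle so later edits
# are detectable. (Real C2PA uses certificate-based signatures, not HMAC.)
import hashlib
import hmac
import json

SIGNING_KEY = b"stand-in for a certificate-backed signing key"  # hypothetical

def make_credential(media_bytes: bytes, creator: str, tool: str, ai_used: bool) -> dict:
    """Attach who/how/AI-involvement assertions to a hash of the media."""
    claim = {
        "media_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "creator": creator,        # who created it
        "tool": tool,              # how it was made
        "ai_generated": ai_used,   # whether AI played a role
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify_credential(media_bytes: bytes, credential: dict) -> bool:
    """Any edit to the media or the claim breaks the check."""
    claim = credential["claim"]
    if claim["media_sha256"] != hashlib.sha256(media_bytes).hexdigest():
        return False  # media was altered after labeling
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])

photo = b"...raw image bytes..."
cred = make_credential(photo, creator="Jane Doe", tool="Pixel 10 Camera", ai_used=False)
print(verify_credential(photo, cred))         # True: label intact
print(verify_credential(photo + b"!", cred))  # False: tampering detected
```

One caveat the sketch makes visible: a valid signature proves the label and media weren’t altered after signing, not that the content itself is truthful, which is the gap behind the signed-forgery research discussed later in this piece.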
Over a year ago, I reported that TikTok would automatically label all realistic AI-generated content created using TikTok Tools with Content Credentials. And the standard actually predates the current generative AI boom: the C2PA was founded in February 2021 by a group of technology and media companies to create an open, interoperable standard for digital content provenance, meaning the origin and history of a piece of content, in order to build trust in online information.
But a new report from the World Privacy Forum, a data-privacy nonprofit, warns that this growing push for trust could put privacy on the line. The group argues C2PA is widely misunderstood: it doesn’t detect deepfakes or flag potential copyright infringement. Instead, it’s quietly laying down a new technical layer of media infrastructure—one that generates vast amounts of shareable data about creators and can link to commercial, government, or even biometric identity systems.
Because C2PA is an open framework, its metadata is designed to be replicated, ingested, and analyzed across platforms. That raises thorny questions: Who decides what counts as “trustworthy”? For example, C2PA relies on developing “trust lists” and a compliance program to verify participants. But if small media outlets, indie journalists, or independent creators don’t make the list, their work could be penalized or dismissed. In theory, any creator can apply credentials to their work and apply to C2PA to become a trusted entity. But to get full “trusted status,” a creator often needs a certificate from a recognized certificate authority, must meet criteria that are not fully public, and has to navigate a verification process. According to the report, this risks sidelining marginalized voices, even as policymakers, including a New York state lawmaker, push for “critical mass” adoption.
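A hedged sketch of that gatekeeping dynamic, with hypothetical issuer names and a deliberately simplified policy (the actual C2PA conformance program is more involved than a set-membership check):

```python
# Hypothetical curated trust list: a verifier classifies content by who
# signed its credentials, not by whether the content is accurate.
TRUSTED_ISSUERS = {"BigCA Inc.", "MediaCertAuthority"}

def assess(credential_issuer: str | None) -> str:
    """Simplified triage based solely on the signer's issuer."""
    if credential_issuer is None:
        return "unlabeled: may be down-ranked or flagged"
    if credential_issuer in TRUSTED_ISSUERS:
        return "trusted: shown with a provenance badge"
    # An indie journalist with a self-issued certificate lands here,
    # even if the reporting is accurate.
    return "signed, but issuer not on the trust list: treated as suspect"

print(assess("BigCA Inc."))           # trusted
print(assess("IndieNewsCollective"))  # sidelined despite being labeled
print(assess(None))                   # penalized for not opting in
```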
But inclusion on these “trust lists” isn’t the only concern. The report also warns that C2PA’s openness cuts the other way: the framework can be too easy to manipulate, since so much depends on the discretion of whoever attaches the credentials, and there’s little to stop bad actors from applying them in misleading ways.
“A lot of people think, oh, this is a content labeling system, they’re not necessarily cognizant of all of the layers of identifiable information that might be baked in here,” said Kate Kaye, deputy director of the World Privacy Forum and co-author of the report. She emphasized that C2PA isn’t just a simple label on a piece of media — it creates a stream of data that can be ingested, stored, and linked to identity information across countless systems.
All of this matters for both corporate entities and consumers. For example, Kaye stressed that businesses might not realize that C2PA data falls under privacy and data governance, requiring policies around how it’s collected, shared, and secured. Researchers have also already shown it’s possible to cryptographically sign forged images. So while companies may embrace C2PA to gain credibility, they also assume new obligations, potential liabilities, and dependence on a trust system controlled by Big Tech players.
For consumers, the risks center on privacy and identity exposure. C2PA metadata can include timestamps, geolocation, details on editing, and even connections to identity systems (including government IDs), yet consumers may have little control over, or awareness of, what is being captured. It’s technically opt-in, but if you don’t opt in, your content could be marked as less trustworthy. And in the case of TikTok, for example, users are automatically opted in (other platforms like Meta and Adobe are adopting C2PA, but generally as opt-in for creators).
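An illustrative view of the kinds of fields a provenance manifest can carry, per the report’s privacy concerns. The field names here are hypothetical, not the actual C2PA assertion labels:

```python
# Hypothetical assertion payload illustrating the privacy surface the
# report describes; real C2PA assertions use their own schema and labels.
example_assertions = {
    "captured_at": "2025-09-18T14:03:22Z",      # timestamp
    "gps": {"lat": 37.7749, "lon": -122.4194},  # geolocation
    "edits": ["crop", "color_balance"],         # editing history
    "signer_identity": {                        # potential link to identity systems
        "name": "Jane Doe",
        "verified_via": "government_id",        # the report's ID/biometric concern
    },
}

# Once embedded, any platform that ingests the file can read, store, and
# cross-link these fields, which is the exposure the report flags.
for field, value in example_assertions.items():
    print(f"{field}: {value}")
```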
Overall, there are a lot of power dynamics at play, Kaye said. “Who is trusted and who isn’t and who decides – that’s a big, open-ended thing right now.” But the burden to figure it out isn’t on consumers, she emphasized. Instead, it’s on businesses and organizations to think carefully about how they implement C2PA, with appropriate risk assessments.
With that, here’s the rest of the AI news.
Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman