The Guardian - UK
Business
Patrick Collinson

Fake reviews: can we trust what we read online as use of AI explodes?

Google told Guardian Money that during 2022 it blocked or removed 115m fake reviews of hotels, restaurants and businesses. Photograph: Sean Gladwell/Getty Images

The four-star hotel in Kraków in Poland, the review says, is “excellent”, a “short walk from the main square” and boasts a “first-rate” spa and fitness centre. A less positive review describes it as “small, cramped and outdated” with “lumpy” pillows. But then a family who stayed said they were made to feel “instantly welcome”.

The truth is that none of those reviews are real. They were generated in seconds by the free-to-use artificial intelligence tool ChatGPT. These “visitors” definitely didn’t stay at the hotel, as they don’t exist.

Fake online reviews have long existed and are often not difficult to spot: tortured and mangled English, and excessive praise mixed with blandness to hide the fact the “reviewer” has been nowhere near the actual hotel or restaurant.

AI is turning that upside down – generating fake reviews that are increasingly difficult to distinguish from those written by the average traveller or restaurant-goer. Indeed, one sign that a review is fake may be that the sentence structure is a bit too perfect.

Until now, the fake review business has been largely centred on online sweatshops, where people are paid to write multiple posts to boost a business’s rating.

Tripadvisor identified 1.3m reviews as fake in 2022. Trustpilot removed 2.7m fake reviews in 2021.

The number of fake reviews that Google blocks or removes is astonishing – it told Guardian Money that in 2022 it blocked or removed a total of 115m fake reviews of hotels, restaurants and businesses. Consumers are being hit with industrial levels of fakery in an attempt to obtain their business.

More fake reviews originated in India than anywhere else, according to Tripadvisor, with Russia next.

A Tripadvisor sticker on a restaurant window in Kraków, Poland. Tripadvisor says AI-generated fake reviews present it with new challenges. Photograph: NurPhoto/Getty Images

Tripadvisor acknowledges that AI-generated fake reviews will present new challenges as it tries to weed out the genuine from the fake. In its admirably detailed review transparency report 2023, the site said: “We expect to see attempts from businesses and individuals to use tools like ChatGPT to manipulate content on Tripadvisor.”

Guardian Money tested ChatGPT, probably the best-known AI tool, asking it to write a review of a hotel in Kraków that we had visited. On the first attempt it refused. It said: “I’m sorry … I do not generate negative or false reviews … it goes against my programming to generate fake or misleading information.”

But this reassurance did not last long. Minutes later, with just a little tweaking, it was pumping out fake review after fake review – and startlingly plausible ones at that – of any hotel, restaurant or product it was asked about.

It even formatted the hotel reviews – a star rating from one to five, followed by a title and then the main body of the review. We did not ask for this format – and it is striking that it is identical to the one used by Tripadvisor.

It produced reviews for us in the style of a business traveller, a couple, a solo traveller, families, LGBTQ+ travellers, etc. It also did so in a variety of languages.

A tourist in Kraków, Poland. Reviews of a hotel in the city were generated by AI. Photograph: martin-dm/Getty Images

As we delved deeper into ChatGPT’s impressive ability to produce fake reviews, we also noted how AI deals in stereotypes. When asked for a positive review of the hotel in the style of a gay traveller, it focused on how they had “really appreciated the selection of pillows provided” while describing it as “chic” and “stylish”.

When asked for a positive review of the same hotel in the style of a lesbian traveller, it said they were delighted by the “vegan choices” at breakfast. In AI world, it appears that gay and lesbian people like scatter cushions and do not eat meat.

Of course, the big online review sites are on the alert for fake reviews and have multiple levels of defence, although clearly these are breached at times.

Tripadvisor said it received 76m reviews (including pictures and videos) in 2022 and had a process to eradicate the fake or offensive.

Reviews are processed automatically by a screening tool, and at that stage just short of 9% were not approved in 2022. Human moderators (working in 28 languages) then step in to take a look at many of the blocked reviews. They tend to reject about 40% of those.

The site, perhaps understandably, is not keen to divulge precise details of its screening system.

Becky Foley, the head of trust and safety at Tripadvisor, says: “With over 23 years’ experience, our fraud detection techniques and technologies have been adapted from the banking and financial sector and analyse hundreds of different digital attributes associated with every review, which extends far beyond the text itself. As with any fraud detection system, the best way to ensure efficacy is to not disclose specific details around how it works.”

As it deals with a potential onslaught of AI-generated reviews, it says that it “made the decision that, for the time being, we will not allow reviews that have been identified as AI-generated until we are more familiar with this type of content”.

Although AI tools such as ChatGPT went live only recently, Tripadvisor has seen an impact. “This year, Tripadvisor has already removed more than 20,000 reviews that we have reason to believe contain AI-generated text, across more than 15,000 properties in 159 countries,” it says.

As a test, we posted a fake, AI-generated hotel review on Tripadvisor. Despite the screening tools, it was accepted and went live on the site. Shortly afterwards, we manually deleted it. Tripadvisor says: “It’s highly likely that our agents would have identified and removed it, had it remained on the site for longer.”

Google, like Tripadvisor, runs a series of screens to sort out the real from the fake. It says: “Our policies clearly state that reviews must be based on real experiences, and we use a combination of human operators and industry-leading technology to closely monitor 24/7 for fraudulent content and spot patterns of potential abuse. We catch the vast majority of policy-violating reviews before they are ever seen.”

The Trustpilot logo on a smartphone and a computer screen. Trustpilot removed 2.7m fake reviews in 2021. Photograph: Sopa/LightRocket/Getty

Google says it has also taken legal action against the most prolific fake reviewers. It says that in one case, a “bad actor posted more than 350 fraudulent business profiles, and tried to bolster them with more than 14,000 fake reviews”.

Legislators have also wised up to the scale of fake reviewing. UK government-commissioned research into fake reviews, published in April this year, estimated that between 11% and 15% of all reviews in the product categories it looked at were fake. “Fake review text on products alone causes an estimated £50m to £312m in total annual harm to UK consumers,” it said.

The digital markets, competition and consumers bill, now at the Commons committee stage, is expected to make it illegal to pay someone to write a fake review or to host a review without taking steps to check it is real.

The consumer group Which? has led the way in campaigning against fake reviews, and wants the government and tech companies to do a lot more. Rocio Concha, the Which? director of policy and advocacy, says: “The bill must go further by explicitly making the buying, selling and hosting of fake reviews subject to criminal enforcement.

“The tech giants that host reviews shouldn’t wait for the bill and instead [should] design their platforms in a way that prevents fake reviews appearing on them in the first place, including improving review verification and better sharing of data with each other to effectively tackle review brokers.”

Guardian Money asked OpenAI, the company behind ChatGPT, why it does not prevent its AI tool from producing fake reviews of hotels, restaurants and products that the “reviewer” has never visited or used. We made multiple attempts to contact the company and submitted a number of questions but it did not respond by the time this article was published.
