
A vast majority of people use artificial intelligence (AI) every day, even though they don't trust its outputs, according to a new study.
Researchers from the University of Melbourne in Australia and global consulting firm KPMG surveyed more than 48,000 people in 47 countries between November and January 2025 about their levels of trust in, use of, and attitudes towards AI.
The study found that while over two-thirds of respondents use AI with some regularity, whether at work, for school, or in their personal time, only 46 per cent are willing to trust these systems.
The participants were asked to rank how much they trusted both the technical ability of AI systems and their safety, security, and ethical soundness. The researchers then rated the responses on a nine-point scale to determine how much each respondent trusted AI.
“Trust is sort of the strongest predictor of AI’s acceptance,” Samantha Gloede, managing director at KPMG, told Euronews Next.
“We don’t think any organisation can move faster than the speed of trust”.
What people are struggling to believe in is the AI’s ability to be fair and do no harm, according to the study authors.
Where people have more faith is in the technical ability of AI to provide accurate and reliable output and services.
‘Taking it into their own hands’
The study also found that 58 per cent of respondents use AI regularly at work, with 33 per cent using it weekly or daily in their jobs.
These employees say it makes them more efficient, gives them greater access to information, and lets them be more innovative. In almost half of the cases, the respondents noted that AI has increased revenue-generating activity.
There’s added risk, though, for companies whose employees use AI at work: half of the respondents who use chatbots on the job say they do so even though it violates company policies.
“People feel almost pressured that if they don’t use it, they will be… set behind their competitors,” Gloede said.
Gloede said they heard examples where employees admitted to uploading sensitive company information into free public tools like ChatGPT or that deepfakes were being made of senior leadership, which could damage their reputation or that of the company.
Employees have also presented the work of AI chatbots as their own, with 57 per cent saying they’ve hidden the fact that they’ve used AI in their work. These employees have also done so without necessarily evaluating the accuracy of the content that the AI generated for them.
Another 56 per cent report having made mistakes in their work because they used AI output without subsequently fact-checking it, the report continues.
“For organisations that don’t have an AI literacy study in place … [employees] are taking things into their own hands,” she said.
‘We have so much to gain’
The study also found that half of the employees surveyed said they didn't understand AI or how it's used. Furthermore, only two out of five employees reported receiving any AI-related training or education about how to use it.
One way companies could teach employees how to use AI, Gloede said, is to create a “trusted AI framework” that includes 10 different principles to consider when using the technology in their work.
She said she hopes the survey findings will encourage C-suite executives, tech companies, and policymakers to take action.
“We have so much to gain from [AI] if it is executed by organisations, by governments, in a responsible way,” she said.