The Street
Ian Krietzberg

One Tech Startup Found the Key To Safe AI Adoption

The field of artificial intelligence has come to center on treading a fine line between risks, harms and benefits. Left unregulated, the technology could become disastrous in ways far more real and far more nuanced than the so-called singularity, the scenario in which a super-intelligent AI destroys the human race.

Experts have cited concerns about a list of real-world harms -- not threats, active harms -- that AI will continue to amplify. Among these are the economic implications of a fully automated workforce and the inequities that will come with it, as well as the damage bad actors could cause through propaganda and fraud. But at the top of everyone's list are AI biases and hallucinations.


Noticing the rising risks around bias, Liran Hason, a machine learning engineer, realized that the only way to ensure safe, widespread AI adoption was to build guardrails around the models. So he founded Aporia, a tech company whose mission is exactly that.

Aporia offers AI observability software that gives its clients the tools to see what "decisions are being made, to get live alerts when a bias has occurred or when a potential mistake" has been made.

The software then lets clients investigate the root causes of any biases, hallucinations or mistakes their AI models may be producing, so they can move quickly to correct them.

"In the early days of the company, we played around with the idea of AI for AI. But we realized this shouldn't work like that; in order to truly achieve responsible AI, we have to have deterministic software," Hason said. "If you rely on AI to watch AI, then you suffer from the same rift and the same issues."


In line with the idea of fighting fire with water rather than more fire, Hason believes all AI models ought to be monitored by third parties that are objective by design.

The Real AI Risks Are Far Less Dramatic Than the Singularity

And while Hason is bullish on the benefits AI could bring to society, he acknowledged that the risks are very real as well. The key is finding ways to responsibly mitigate those risks so that the benefits can actually reach people.

"I honestly think that AI could either be the best thing that happens to humanity or the worst thing," Hason said, adding that many of the negative impacts exist even without the unlikely result of a "Terminator" scenario. 

"Just think about cancer patients in a world where hospitals are being run by AI. So every detection of blood tumors, of cancer is being driven by these systems," Hason said. "What happens when it's wrong?"

But stopping the technology isn't realistic in his view; AI is here, and it isn't going anywhere. The only thing to do now is find the best ways to mitigate the risks it poses, and act quickly to insulate society against them.

"I don't think we can stop technology from happening," he said. 

Five or ten years down the line, Hason envisions a world where AI has become ubiquitous and, hand in hand with that, AI regulation has been carefully cemented.

"I'd like to see AI involved in every aspect of our lives on one hand. On the other hand, I'm hoping -- and I'm working on it 24/7 -- to make sure that there are proper guardrails for each and every one of these systems," Hason said. He hopes that "governments, by that time, have already defined rules and regulation, to make sure that companies are not getting wild with this technology."
