Tribune News Service
Ian Munro

Health care’s AI revolution is well under way

NEWPORT NEWS, Va. -- Discussions about artificial intelligence have proliferated recently as more people have access to programs that can make art or answer questions.

In the health care industry, the shift to AI is already well under way.

Dr. Jose Morey, an Eastern Virginia Medical School radiologist, has been an AI consultant for the White House Office of Science and Technology and the United Nations, and has been involved with NASA initiatives. For example, NASA iTech evaluated a fully autonomous surgical robot being developed to perform appendix removals and similar surgeries.

Public health predictions, streamlining administration and new drug discovery are among the uses of AI already under way, he said.

“It’s just data and mathematics,” Morey said. “That’s it. It’s data and fancy math and it spits out a solution at the end.”

That, he said, is why it is important to have engineers and mathematicians build the algorithms, and subject matter experts who know what kind of data to feed in and how to read what comes out.

“If you have bad data, you’re going to have a bad solution,” Morey said.

About two weeks ago, the American Medical Association voted to develop recommendations around AI to ensure it can ease administrative burdens, provide accurate information and improve the medical advice AI may one day provide.

“AI holds the promise of transforming medicine,” Dr. Alexander Ding, an AMA trustee, said in a June 13 news release. “We don’t want to be chasing technology. Rather, as scientists, we want to use our expertise to structure guidelines and guardrails to prevent unintended consequences, such as baking in bias and widening disparities, dissemination of incorrect medical advice, or spread of misinformation or disinformation.”

However, just a day later, the AMA released another statement calling for more oversight of AI’s use by insurance companies in reviewing patient claims and requests for prior authorization.

Drug discovery is another area where AI can streamline the process, according to Morey, potentially revolutionizing an approach that takes billions of dollars and decades and still sometimes fails to produce an effective remedy.

Riverside Health System has been using AI for more than 10 years for health prediction and administrative needs, according to Dr. Charles O. Frazier, its senior vice president and chief medical information and innovation officer.

“Though many hear AI and think of ChatGPT, there are various forms and the technology and the systems in place at Riverside are ones that have gone through careful testing and validation,” Frazier said in an email. “For example, one form of artificial intelligence that we employ at Riverside includes several cognitive computing or machine learning models to predict a variety of clinical conditions, including sepsis, clinical deterioration in the hospital, opioid use disorder, risk of readmission, etc.”

He said Riverside is not using AI directly in clinical work beyond prediction modeling, though there could be clinical uses in the future. The health system also uses AI to automate billing and accounting processes, Frazier said.

Vienna-based ClearForce is also using AI to model health outcomes to help soldiers and veterans. The company is developing a model to help health providers and the military identify veterans and service members who are at elevated risk of suicide.

The company previously worked with Oklahoma and is now working with Virginia, according to retired Marine Col. Mark Hudson, ClearForce’s vice president of insider threat prevention and suicide prevention.

Essentially, the AI helps the company flip the model of suicide prevention, according to Hudson. The usual model requires individuals to realize they need help, ask for it and find it. The AI model instead identifies individuals at higher risk of suicide, so health providers and organizations can take the first step and reach out to see whether that soldier or veteran is struggling with their mental health.

To do this, the company analyzes suicide risk factors and has partnered with states to examine data around those deaths.

“We can then (work) backwards through that and look at the indicators that took them to that tragic outcome,” Hudson said.
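
In broad strokes, that kind of retrospective risk modeling amounts to training a classifier on historical outcome data and then scoring current records so outreach can happen proactively. The short Python sketch below is purely illustrative: the features, data and model are hypothetical stand-ins, not ClearForce's actual indicators or methods.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical indicator features compiled retrospectively for past cases.
n = 1000
X = rng.normal(size=(n, 3))
# Synthetic labels: 1 = the tragic outcome occurred, 0 = it did not.
y = (X @ np.array([0.9, 0.5, -0.3]) + rng.normal(size=n) > 1.0).astype(int)

# "Work backwards" from outcomes: learn how the indicators relate to risk.
model = LogisticRegression().fit(X, y)

# Score new, unlabeled records and flag the highest estimated risk
# so providers can take the first step and reach out.
X_new = rng.normal(size=(200, 3))
risk = model.predict_proba(X_new)[:, 1]
flagged = np.argsort(risk)[-10:]
print("Records flagged for outreach:", flagged)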

Similarly, earlier this year, University of Virginia researchers including Dr. Randall Moorman received $5.9 million to study AI’s use with patients. UVA has been using data to help clinicians treat patients for about 20 years, starting by identifying deadly blood infections in premature infants, according to Moorman.

“We developed numerical algorithms, let’s call that machine learning,” he said.

Using machine learning reduced death rates by 20% across nine neonatal ICUs when it was implemented, Moorman said. In the years since, the researchers have expanded the principle, applying machine learning to early detection of lung failure, clinical deterioration and more.

UVA researchers are working alongside researchers from 13 other centers in the Bridge 2 AI program, which will provide data from 100,000 ICU patients for developers to build models aimed at improving health outcomes.

Racial disparities can be narrowed or widened depending on how AI is implemented and used, according to Moorman, Morey and a panel on the future of health care at the Richmond Health Summit earlier this month.

Moorman said it is the job of AI researchers and developers to ensure AI is equally accurate for everybody.

“So in medicine, my feeling is there’s always going to be somebody smart and informed in between the computer and the patient and that I think is a great safeguard in using artificial intelligence in medicine.”

Researchers such as Moorman’s colleague Ishan Williams and Ismail El Moudden of EVMS are working to head off these potential inequities before they take hold. El Moudden’s project on using AI to reduce cardiovascular disease disparities in the state received an award from the American Heart Association.

Morey said insurance companies’ use of AI has resulted in situations where care is denied because of outcomes presumed from data such as ZIP codes.

“If you have biased data, you could have a biased AI output and that’s something you have to be aware of,” he said.
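
The mechanics of that warning are easy to demonstrate with a toy example: if historical coverage decisions were skewed by something like ZIP code, a model trained on those decisions will reproduce the skew. The Python sketch below uses entirely synthetic data and an assumed, simplified setup; it is not drawn from any actual insurer's system.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

group = rng.integers(0, 2, size=n)                   # two populations
zip_feature = group + rng.normal(scale=0.3, size=n)  # ZIP code acts as a proxy for group
need = rng.normal(size=n)                            # true clinical need, same distribution for both groups

# Historical (biased) denials: driven partly by ZIP code, not just need.
denied = (0.8 * zip_feature - 0.5 * need + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

# A model trained on those decisions learns the same pattern.
model = LogisticRegression().fit(np.column_stack([zip_feature, need]), denied)
pred = model.predict(np.column_stack([zip_feature, need]))

for g in (0, 1):
    print(f"Group {g}: predicted denial rate = {pred[group == g].mean():.2f}")
# One group is denied far more often despite identical clinical need:
# biased data in, biased AI output.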

Like the stethoscope, AI can play a role in helping providers care for patients, but is ultimately “simply a tool in the doctor’s bag,” Morey said.

“AI has a lot of potential, 1,000%. And it’s doing a lot of good,” he said. “We have to understand its limitations and you still need humans in there to do that.”
