JANUARY 24, 2024 — Paul Rad, the associate director of research in the UTSA School of Data Science, and Marc Feldman, M.D., emeritus and adjoint professor of medicine in the Janey and Dolph Briscoe Division of Cardiology at UT Health San Antonio, are teaming up to develop a cutting-edge medical technology driven by artificial intelligence that identifies the exact composition of coronary heart disease.
Their innovation, a generative AI model based on existing optical arterial scans, would enable clinicians to view inside a person’s coronary artery in the heart catheterization laboratory to assess plaque build-up and future risk of heart attacks in real time.
Most people are familiar with medical imaging tools such as X-rays, ultrasounds and MRIs, but cardiologists need technology that is much more precise to see inside coronary arteries, which supply blood to the heart. This view is essential to predict future heart attacks, which are often caused by the buildup of fatty plaques that reduce arterial flow. These plaques can also rupture and trigger blood clots that can block the artery entirely, halt blood flow to the heart and cause tissue damage.
“We know at death, by autopsy, the characteristics of plaque that lead to heart attacks. They are usually large amounts of lipid covered by very thin caps of fibrous tissue,” said Feldman.
While an autopsy allows for definitive analysis of an artery via histology, or microscopic analysis, clinicians cannot microscopically analyze an artery while a person is alive.
Instead, clinicians use optical coherence tomography (OCT), an imaging technique most commonly used for retinal scans.
Several years ago, Feldman and former UT Austin engineering professor Thomas E. Milner collaborated to adapt OCT technology to peek inside the coronary arteries of living patients.
Coronary OCT works by capturing the reflection of an infrared light emitted by a catheter inserted into the artery in question. This allows doctors to “see” inside an artery and provides images with much higher resolution than other imaging techniques. However, this level of detail also makes the images inherently more difficult to analyze.
“The problem is that this optical technique gives so much information, it’s almost too much,” Feldman explained, “so the physicians that are using it are overwhelmed by the amount of information and detail.”
This is where artificial intelligence – and UTSA’s Rad – come into the picture. Rad and UTSA doctoral research assistant Paul Young are developing an algorithm that can interpret coronary OCT images faster and more reliably than humans.
“The goal is to build a generative AI model that can learn from the existing images Dr. Feldman has captured in his lab, so our model can predict heart attacks at the earliest stage and doctors can make impactful decisions to avoid potential heart damage in the future,” said Rad, who also serves as a core faculty member in the UTSA School of Data Science.
The images Rad will use are crucial to the project. Feldman and his team, including UT Health San Antonio senior research scientists Aleksandra Gruslova and Drew Nolen and research assistant Luis Diaz Sanmartin, have spent the last five years collecting approximately 2,000 OCT scans and histology images. Feldman believes it is the largest dataset of its kind in the world. They hope that by matching each OCT image to its corresponding histology, their AI model will learn to interpret other OCT images.
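The matching step described above, pairing each OCT scan with the histology slide that supplies its ground-truth label, can be sketched in a few lines. This is a hypothetical illustration only: the shared-ID naming scheme, file names and plaque labels below are invented for the example and are not the team's actual data pipeline.

```python
# Hypothetical sketch of building OCT-histology training pairs.
# Assumes each scan and its histology slide share an ID; names and
# labels are illustrative, not the real dataset's format.
from dataclasses import dataclass

@dataclass
class TrainingPair:
    oct_scan: str      # path to an OCT image
    histology: str     # path to the matched histology slide
    plaque_type: str   # ground-truth label derived from histology

def build_pairs(oct_scans, histology_labels):
    """Match each OCT scan to its histology-derived label by shared ID."""
    pairs = []
    for scan_id, oct_path in oct_scans.items():
        if scan_id in histology_labels:
            hist_path, label = histology_labels[scan_id]
            pairs.append(TrainingPair(oct_path, hist_path, label))
    return pairs

# Toy example: three scans, two of which have matched histology.
oct_scans = {"001": "oct_001.png", "002": "oct_002.png", "003": "oct_003.png"}
histology = {"001": ("hist_001.png", "fibrous"),
             "003": ("hist_003.png", "lipid-rich")}
pairs = build_pairs(oct_scans, histology)
```

Only scans with a confirmed histology match become supervised examples; unmatched scans would be left out of training or used differently.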
Rad says it’s not as easy as it sounds.
“The OCT sees a reflection of light in the material of the artery then Paul [Young] has to make sure the AI’s learning is adjusted based on the reflection of different materials because lipid is different from other materials in arteries,” Rad explained. “Basically, his model has to learn the physics of light.”
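One way to picture what "learning the physics of light" might involve: in OCT, backscattered intensity decays roughly exponentially with depth, and different tissues attenuate the infrared signal at different rates, so a depth-intensity profile carries a material signature. The sketch below is a simplified Beer-Lambert-style illustration; the attenuation coefficients are rough, purely illustrative values, not measurements from this project.

```python
import math

# Illustrative attenuation coefficients (per mm); real values vary
# with wavelength and tissue, and are assumptions for this sketch.
ATTENUATION = {"fibrous": 2.0, "lipid": 6.0, "calcium": 4.5}

def intensity_profile(tissue, depths_mm, i0=1.0):
    """Beer-Lambert-style exponential decay of backscattered intensity."""
    mu = ATTENUATION[tissue]
    return [i0 * math.exp(-mu * d) for d in depths_mm]

def estimate_mu(depths_mm, intensities):
    """Recover the attenuation rate from the ends of a depth profile."""
    d0, d1 = depths_mm[0], depths_mm[-1]
    return math.log(intensities[0] / intensities[-1]) / (d1 - d0)

depths = [0.1, 0.2, 0.3, 0.4]            # depths into the vessel wall, mm
profile = intensity_profile("lipid", depths)
mu_hat = estimate_mu(depths, profile)    # high attenuation -> lipid-like
```

In this toy setting the recovered rate directly identifies the tissue; a real model would have to infer such material signatures from noisy images rather than clean curves, which is the hard part Rad describes.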
If the team’s model can learn the physics, the clinical implications could be groundbreaking. An AI model that can accurately identify what cardiologists see in real time will allow them to make on-the-spot treatment decisions such as whether to place a biodegradable stent in a patient and which location is the safest for a patient who needs one. Just as important, the AI would enable a uniformity and consistency of analysis that humans struggle to match.
If the model became sufficiently advanced, it could also be used to help humans better understand and interpret OCT imaging. From there, Feldman and Rad believe the possibilities for this technology are significant.
“Once you provide new information to a bunch of smart doctors, they’ll start creating new ways to apply or use it that we can’t foresee right now,” Feldman said.
Rad added, “This has the potential to change health care as we know it.”
Feldman’s lab is funded by The Clayton Foundation for Research, a Houston-based nonprofit medical research organization, and the National Institutes of Health. He and Rad are partnering with an OCT manufacturer and supplier of medical equipment to make their technology widely accessible when the project is completed, which Feldman anticipates will take several years.