
LTU Information Technology Professor’s Breakthrough Medical Imaging Research Has Multidisciplinary Impact

Armed with his Bachelor of Science degree and a concentration in coding languages from the University of Wisconsin–Milwaukee, the young IT engineer was already engaged in fulfilling work in machine languages and image processing while pursuing his master’s degree. Then fate intervened and his career path took on a new dimension. Yash Patel, Ph.D., assistant professor of information technology at Lawrence Technological University’s College of Business and Information Technology, suddenly experienced what he calls a “life event”: in specific medical terms, a myocardial infarction (MI), known more commonly as a heart attack. But he did not immediately know he had suffered a heart attack, because a radiologist had misdiagnosed his condition.

Recovering from his health setback and reflecting on this near miss, Patel considered how similar situations could be avoided. With his interest in image processing now turning toward medicine, Patel pursued the application of deep learning, computer vision, and convolutional neural networks (CNNs) to find manageable, workable solutions to healthcare challenges.

Neural networks are AI models loosely inspired by the human brain, mimicking aspects of how it learns. CNNs, a type of neural network designed to operate directly on images, excel at image recognition and pattern identification. Deep learning is built on neural networks, and in Patel’s research, CNNs have been the foundation because they work directly with image data.
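To make the pattern-matching idea concrete, here is a minimal sketch of the convolution operation at the heart of a CNN, written in plain NumPy. The kernel and the tiny synthetic image are invented for illustration only and are not drawn from Patel’s models:

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a small kernel over an image, producing a feature map.

    Each output value measures how strongly the local patch matches
    the kernel's pattern -- the core operation a CNN layer learns.
    """
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A hand-crafted vertical-edge detector: it responds where
# brightness changes from left to right.
edge_kernel = np.array([[1.0, -1.0],
                        [1.0, -1.0]])

# Tiny synthetic "image": bright left half, dark right half.
img = np.zeros((4, 4))
img[:, :2] = 1.0

feature_map = conv2d(img, edge_kernel)
print(feature_map)  # strongest response in the column where the edge sits
```

In a trained CNN the kernels are not hand-crafted as above; they are learned from labeled images, which is what lets the same mechanism pick out wound boundaries or tissue textures instead of simple edges.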

Patel continued his master’s degree work with his advisor and mentor, Zeyun Yu, Ph.D., focusing on knee localization from X-rays. Patel then secured an internship at the prestigious Mayo Clinic. Following this experience, and now with his master’s degree in hand, Patel began his Ph.D. work. His main project was the detection of wound locations: he created the first AI imaging model for the detection and analysis of the foot wounds typically experienced by many diabetes patients, and developed it into an app, a multimodal solution that can locate wounds. Using this model, Patel also began work on breast cancer classification.

The detection process, as outlined by Patel, begins with taking an image of a wound on the body surface. Localization identifies precise regions of interest in the wound, laying the groundwork for classification and segmentation. Classification places the localized information in context by categorizing wound types, while segmentation refines the analysis by delineating specific regions of interest, such as tissues or cells, at a granular level. Together, these steps create a path to comprehensive, automated analysis for diagnosis and treatment planning.
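As an illustrative sketch only, the three-stage pipeline described above can be expressed with toy stand-ins for each learned component. The function names, thresholds, and the synthetic image below are hypothetical; in the real pipeline each stage is a trained deep-learning model, not a simple rule:

```python
import numpy as np

def localize(image, threshold=0.5):
    """Stage 1 -- localization: find the bounding box of the region
    of interest (a learned detector in practice; here, a simple
    intensity threshold stands in)."""
    ys, xs = np.nonzero(image > threshold)
    return ys.min(), ys.max(), xs.min(), xs.max()

def classify(patch):
    """Stage 2 -- classification: assign the localized region a
    category (a CNN classifier in practice; here a toy rule on
    mean intensity)."""
    return "type_A" if patch.mean() > 0.75 else "type_B"

def segment(patch, threshold=0.5):
    """Stage 3 -- segmentation: delineate the region pixel by pixel,
    producing a binary mask."""
    return (patch > threshold).astype(np.uint8)

# Toy 8x8 "image" with a bright 3x3 wound-like region.
image = np.zeros((8, 8))
image[2:5, 3:6] = 0.9

y0, y1, x0, x1 = localize(image)           # where is it?
patch = image[y0:y1 + 1, x0:x1 + 1]
label = classify(patch)                     # what is it?
mask = segment(patch)                       # exactly which pixels?
print((y0, y1, x0, x1), label, int(mask.sum()))
```

The point of the sketch is the data flow: each stage narrows the problem for the next, which is what makes the combined pipeline more useful than any single model.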

With the prospect of this work having a dramatic impact in medicine and in many other fields, Patel reported on his research in his Ph.D. dissertation. Although centered on the treatment of diabetic wounds, Patel’s dissertation is much more; it presents a “cohesive body of research that demonstrates the transformative potential of deep learning in medical image analysis,” Patel said. He shows how the localization, classification, and segmentation approaches integrate into a unified framework for advancing healthcare. The framework reduces manual effort by clinicians, allowing them to concentrate on treatment strategies, and its use can yield targeted, actionable insights.

Beyond wound care, this integrated framework could be applied elsewhere in healthcare. In oncology, a similar pipeline can localize and classify tumors, assisting precise treatment planning. In cardiovascular imaging, it can identify arterial plaques and segment areas for surgical preparation. Spine alignment is another treatment possibility. And vast stores of unlabeled medical data could be run through this pipeline to surface hidden findings and make data sets more robust.

The body map as used in Patel’s research is a simplified tool for location selection, with numbers denoting 484 distinct regions.
IMAGE SOURCE:
Scientific Reports, 14, article number 7043 (2024), 25 March 2024, Springer Nature (www.nature.com).

But the really exciting prospect lies in continuing the side research Patel had begun in applying this framework to breast cancer diagnostics. Consider the real possibility of detecting this pervasive, dreaded disease five to ten years earlier through improved classification of abnormalities. In his dissertation, Patel shows how the localization/classification/segmentation pipeline could be a route for breast cancer research to travel. The only factor standing in the way of a potential breakthrough is the availability of sufficiently large image data sets.

Patel is excited to work on imaging advances through deep learning at Lawrence Tech. “LTU has state-of-the-art machines with high-end GPUs,” he says. He wants to “jump into” 3D imaging. Patel is looking for synergy at LTU; the medical work will come first, but he is eager to show how this work can be used elsewhere.

“Professor Patel’s research clearly extends the capabilities of AI in healthcare, with the real prospect of saving lives. When you step back and see this work as one big simulation with impact on a range of disciplines, that’s really exciting,” says Matthew Cole, dean of the College of Business Administration and Information Technology. 

Automation, improving retention rates in education, market research, and quality control: anywhere big data can be converted into an image, the models developed in this dissertation move theoretical innovation to practical implementation. AI is enhancing diagnostic accuracy, streamlining workflows, and improving outcomes. As Professor Patel says, “The whole is indeed greater than the sum of its parts.” An early use of his framework will likely be to assist radiologists in making more informed diagnoses, such as detecting a heart attack.

 

By: Peter Hollinshead 

