Described simply, artificial intelligence (AI) is a field that combines computer science with robust data sets to enable problem-solving. The umbrella term encompasses machine learning and its more recently developed subfield, deep learning. Both use algorithms to build systems that make predictions or classifications based on input data.
The first reports of AI use in radiology date back to 1992, when it was used to detect microcalcifications in mammography1, an application then more commonly known as computer-aided detection. It was not until around the mid-2010s that AI began to be seen as a potential solution to the daily challenges faced by radiologists, such as volume burden.
Artificial intelligence has recently made substantial strides in perception (the interpretation of sensory information), allowing machines to better represent and interpret complex data. This has led to major advances in applications ranging from web search and self-driving vehicles to natural language processing and computer vision, tasks that until a few years ago could be done only by humans1. Deep learning is a subset of machine learning based on neural network structures loosely inspired by the human brain. Such networks learn discriminative features from data automatically, giving them the ability to approximate very complex nonlinear relationships. While most earlier AI methods led to applications with subhuman performance, recent deep-learning algorithms can match and even surpass humans in task-specific applications2–5. This is owing to recent advances in AI research, the massive amounts of digital data now available to train algorithms and modern, powerful computational hardware. Deep-learning methods have defeated humans in the strategy board game Go, an achievement previously thought to be decades away given the game's highly complex state space and massive number of potential moves6. Extrapolating this trend toward human-level general AI, researchers predict that AI will automate many tasks, including translating languages, writing best-selling books and performing surgery, all within the coming decades7.
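The claim that a neural network can learn a complex nonlinear relationship directly from data can be sketched in a few lines. The example below is purely illustrative (the network size, learning rate and the XOR task are arbitrary choices, not drawn from the article): a tiny two-layer network is trained with plain gradient descent and ends up representing a mapping that no linear model can.

```python
import numpy as np

# Illustrative sketch: a small neural network learns the nonlinear XOR
# mapping directly from data, with no hand-crafted features.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

hidden = 16  # arbitrary hidden-layer width
W1 = rng.normal(0, 1, (2, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0, 1, (hidden, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # backward pass: gradient of the mean cross-entropy loss
    dp = (p - y) / len(X)
    dW2, db2 = h.T @ dp, dp.sum(0)
    dh = (dp @ W2.T) * (1 - h ** 2)
    dW1, db1 = X.T @ dh, dh.sum(0)
    # plain gradient-descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

pred = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
```

The "learning discriminative features automatically" in the text corresponds here to the hidden layer `h`: its weights are shaped entirely by the data, not by any rule a programmer wrote.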
This plot outlines the performance of artificial intelligence (AI) relative to human intelligence, starting from the early computer age and extrapolating into the future. Early AI performed at subhuman levels with varying degrees of success. Currently, we are witnessing narrow, task-specific AI applications that can match and occasionally surpass human intelligence. General AI is expected to surpass human performance in specific applications within the coming years, and humans may benefit from human-AI interaction, reaching higher levels of intelligence.
Ronald Summers, chief of the Imaging Biomarkers and Computer-Aided Diagnosis Laboratory at the National Institutes of Health Clinical Center in Bethesda, Maryland, describes the development of deep-learning techniques around this time as having a “democratizing influence” on the field. “The research is so much easier now,” he says. “You don’t have to have mathematical representations of disease; instead you feed in large amounts of data and the neural network that’s part of the deep-learning system learns the patterns on its own.”
With the advent of AI came the emergence of radiomics, where digital images that are normally interpreted by radiologists in a qualitative way are transformed into quantitative data. The digital data can then be used to train machine-learning algorithms to recognize certain features that may give insight into a diagnosis or prognosis invisible to the human eye.
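That image-to-numbers step can be made concrete with a minimal sketch. The features below are generic first-order radiomic descriptors, and the "lesion" is a synthetic array (nothing here comes from a real product or data set):

```python
import numpy as np

# Hypothetical radiomics sketch: reduce a region of interest (ROI) in a
# grayscale image to quantitative features a machine-learning model could use.
def first_order_features(roi, bins=32):
    counts, _ = np.histogram(roi, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]  # drop empty bins before taking logs
    return {
        "mean": float(roi.mean()),
        "variance": float(roi.var()),
        "skewness": float(((roi - roi.mean()) ** 3).mean() / roi.std() ** 3),
        "entropy": float(-(p * np.log2(p)).sum()),  # histogram entropy in bits
    }

rng = np.random.default_rng(42)
roi = rng.normal(100, 15, size=(64, 64))  # synthetic "lesion" intensities
feats = first_order_features(roi)
```

In a real radiomics pipeline, vectors like `feats` from many patients would become the training input for the machine-learning algorithms the text describes.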
In addition, AI can be applied to almost every part of a patient's journey through a radiology department, with its use broadly categorized into three main areas.
The first of these is hospital operations. “There’s a lot of interest in AI applications to improve workflow,” says Summers. The automation of operational tasks, such as the evaluation of appropriateness of imaging, patient scheduling, selection of examination protocols, and improvement of radiologists’ reporting workflow, among others, can all be done with the assistance of AI.
The advantages of these applications have not gone unnoticed by some of the biggest players in the field of radiology. “Workflow automation is a big opportunity and our customers recognize this need, especially with the staff shortages post-COVID,” says Madhuri Sebastian, business leader of enterprise imaging at Philips. “We are focusing on improving operational efficiency with solutions like the Workflow Orchestration in our Image Management offering that streamlines the worklist for the radiologist and other solutions driving efficiency in the imaging workflow.”
Philips offers a series of products across the magnetic resonance (MR) workflow such as VitalEye, which detects patient physiology and breathing movement, allowing routine MR examination setup to occur in less than a minute. There is also MR Workspace for AI-based protocol selection and SmartExam for examination planning.
Another industry leader, GE Healthcare, has its Effortless Workflow model, which similarly offers users AI-based tools to automate and simplify time-consuming tasks. Before scanning, the applications use machine learning to automatically suggest protocols for each exam, position patients in the scanner, and set the correct scan range for head, chest, abdomen, and pelvis scans, including multigroup scans. After scanning, the tools help with image review and analysis on the Revolution Ascend computed tomography (CT) scanner.
Post-scanning image reconstruction is the second big area that is benefitting from the use of AI. “One of the things that we’re proudest of here at GE Healthcare is that we’ve really put a lot of effort into the image reconstruction stage because it just gives a much better machine for the customers and the patients,” says Jan Makela, president and CEO of imaging at GE Healthcare.
Improving image reconstruction with AI is typically done using deep-learning algorithms that increase the signal-to-noise ratios, which in turn cuts scan times, increases diagnostic confidence, and speeds up workflow. “You get 30% faster images, which are higher quality with more resolution,” notes Makela when describing GE Healthcare’s AIR Recon DL image reconstruction product. The tool, which can be used for 2D and 3D images, was recently recognized in the annual Best of What’s New awards by Popular Science magazine and is compatible with the vast majority of the company’s 16,000 MRI systems that are currently installed worldwide.
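The vendors' reconstruction networks are proprietary, but what "increasing the signal-to-noise ratio" means can be illustrated with a deliberately simple stand-in: a 3×3 mean filter applied to a synthetic noisy image. All data here are fabricated, and a trained deep-learning reconstruction would preserve edges and resolution far better than this toy filter.

```python
import numpy as np

# Toy SNR illustration: a bright square ("anatomy") plus Gaussian noise.
rng = np.random.default_rng(1)
signal = np.zeros((64, 64))
signal[16:48, 16:48] = 1.0
noisy = signal + rng.normal(0, 0.3, signal.shape)

def mean_filter3(img):
    # average each pixel with its 8 neighbours (edge-padded)
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9.0

def snr(img, truth):
    # ratio of signal variation to residual-error variation
    return float(truth.std() / (img - truth).std())

denoised = mean_filter3(noisy)
```

Even this crude filter raises the measured SNR of the synthetic image; the commercial deep-learning tools pursue the same quantity, which is what lets sites trade the gain for shorter scan times.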
Makela says that installing the software can increase the number of patients scanned per day from 20–25 to 30–35. “It’s always about the patient outcomes and patient safety, but the next thing is more productivity and it’s [also] fairly cost-effective because you’re not building a new building or hiring more people; you’re just putting software in.”
Meanwhile, Philips reports that its SmartSpeed product, which is used with its MRI scanners, can produce scans up to three times faster than conventional techniques without sacrificing image quality. It combines a state-of-the-art speed engine with a deep-learning algorithm, applied at the source of the MR signal, that has been trained to remove noise while preserving detail.
Other big manufacturers of MRI hardware, including Siemens and Canon, also incorporate their own deep-learning algorithms into their systems to speed up image reconstruction. Regulatory requirements mean that third-party companies are not generally involved at this stage.
Lung cancer is one of the most common and deadly tumours. Lung cancer screening can help identify pulmonary nodules, with early detection being lifesaving in many patients. Artificial intelligence (AI) can help in automatically identifying these nodules and categorizing them as benign or malignant.
Screening mammography is technically challenging to expertly interpret. AI can assist in the interpretation, in part by identifying and characterizing microcalcifications (small deposits of calcium in the breast).
Brain tumours are characterized by abnormal growth of tissue and can be benign, malignant, primary or metastatic; AI could be used to make diagnostic predictions.
Radiation treatment planning can be automated by segmenting tumours for radiation dose optimization. Furthermore, assessing response to treatment by monitoring over time is essential for evaluating the success of radiation therapy efforts. AI is able to perform these assessments, thereby improving accuracy and speed.
Diagnosing skin cancer requires trained dermatologists to visually inspect suspicious areas. With the large variability in sizes, shades and textures, skin lesions are challenging to interpret. The massive learning capacity of deep-learning algorithms enables them to handle such variance and detect characteristics well beyond those considered by humans.
The quantification of digital whole-slide images of biopsy samples is vital in the accurate diagnosis of many types of cancers. With the large variation in imaging hardware, slide preparation, magnification and staining techniques, traditional AI methods often require considerable tuning to address this problem. More robust AI is able to more accurately perform mitosis detection, segment histologic primitives (such as nuclei, tubules and epithelium), count events and characterize and classify tissue.
The ever-increasing amount of available sequencing data continues to provide opportunities for utilizing genomic end points in cancer diagnosis and care. AI-based tools are able to identify and extract high-level features correlating somatic point mutations and cancer types as well as predict the effect of mutations on sequence specificities of RNA-binding and DNA-binding proteins.
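The tumour-segmentation step behind the radiation-planning application above can be sketched with a toy example. Real systems use trained networks on real scans; here a simple intensity threshold on a synthetic volume shows the basic idea of turning voxels into a mask and a volume estimate (the threshold, voxel size and intensities are all invented):

```python
import numpy as np

# Toy segmentation sketch on a fabricated 3D "scan".
rng = np.random.default_rng(7)
scan = rng.normal(0, 0.1, (32, 32, 32))   # background voxels
scan[10:20, 10:20, 10:20] += 1.0           # bright "tumour" region

mask = scan > 0.5                          # naive threshold segmentation
voxel_volume_ml = 0.001                    # assumed 1 mm^3 voxels
tumour_volume_ml = float(mask.sum()) * voxel_volume_ml
```

A mask like this is what downstream dose-optimization software consumes, and comparing such volume estimates across visits is one way treatment response is monitored over time.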
Where third parties, often start-ups, do come into the picture is in the reading and interpreting of the images—the third big area to which AI is applied in radiology. Radiologists read numerous images each day. It’s a high-pressure job trying to spot something suspicious without reporting false positives; AI-based clinical decision support can help with this.
The digital images that radiologists read are produced in the standard DICOM (Digital Imaging and Communications in Medicine) format and are accessed via a picture archiving and communication system (PACS). The standardized nature of these images and of PACS makes it easier for third parties to use the data for algorithm development. They can then make those algorithms available to the widest possible customer base, as they can be applied regardless of the system used to capture the image. In 2021, the market for AI in medical imaging was valued at $1.06 billion and is expected to reach $10.14 billion by 2027, growing at an average rate of 45.7% per year.2 The number of tools being developed by these companies to assist with image interpretation, not only for MRI but also X-ray, CT, and ultrasound, is vast, as are the potential applications.
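As a quick sanity check on those market figures, $1.06 billion compounding at 45.7% per year over the six years from 2021 to 2027 lands almost exactly on the quoted $10.14 billion:

```python
# Compound the 2021 market value at the quoted annual growth rate.
value_2021 = 1.06      # $ billion
cagr = 0.457           # 45.7% per year
years = 2027 - 2021
value_2027 = value_2021 * (1 + cagr) ** years  # ~10.14
```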
With the uses of AI expanding in radiology, it’s fair to wonder what radiologists think of it all. Early reports often cite reluctance to support the adoption of AI tools within their discipline for fear of being replaced by computers, but this no longer appears to be the case.
“About 5 or 6 years ago, I was seeing a lot more hesitation and the feedback was quite mixed,” says Shetty. “There were some people who were excited about what could happen with deep learning but there were also a lot of clinicians for whom there was some hesitation, either because they didn’t think this would work or because there were underlying concerns around whether AI was going to replace radiologists and so on.”
However, she notes that since then there has been a steady shift in attitude. “Most radiologists are at the point today where they don’t believe that the AI solutions are replacing what radiologists do but instead see that this could potentially make them more efficient—that it has ways of reducing workload and could be a second check or a second reader.”
A degree of hesitancy is not a bad thing though, according to Culot. “I think that the medical field is appropriately conservative when it comes to adopting these technologies, because you are dealing with patients,” he says.
An American College of Radiology survey suggests that AI adoption in radiology increased from zero to 30% between 2015 and 2020,8 and it is clear that close collaboration between the AI developers and users is key to bolstering acceptance.
“I don’t think we can be successful in the space if we didn’t have clinicians working closely with us on a day-to-day basis,” says Shetty. And Summers notes that professional societies such as the Radiological Society of North America have “really done, in my view, a tremendous job at educating radiologists about what AI’s strengths and limitations are, about how to do the research, and about the factors involved in regulation of AI as a device.”
Although AI use is becoming widespread in medical imaging research, there are still several challenges to be overcome before it can become more broadly adopted in the clinic. AI development is often restricted by a lack of high-quality, high-volume, longitudinal outcomes data.9 There are also challenges associated with the curation and labeling of medical imaging data.
Over his 30-year career in radiology, Summers believes that “AI software has gotten much more useful [and the] breadth of clinical applications has exploded. Not a day goes by where there isn’t another research paper looking at another clinical application of interest that I haven’t seen before.”
And Sebastian points out that “by the year 2025, the amount of healthcare data is expected to grow by 36% per year.”12 She adds that although “this is all tremendous for customers to develop and deliver a holistic view of the patient and drive outcomes, it’s also overwhelming, and that is where AI comes in.”
If the imaging companies and AI developers are doing their jobs right, radiologists will be “less burnt out” in 10 years’ time and their work will be more data driven, she says.
With collaboration between radiologists and industry, the possibilities for AI use appear to be endless, but Culot says that the field “is just at the beginning of optimization—right from the acquisition all the way through in any application of radiology.”
He suggests that the technology could eventually be used for biomarker detection, but at that stage you must question whether radiologic biomarkers provide more information than molecular tests. His view is that “before [AI use] will change therapy, it’ll change diagnostic strategy.”
Sources: leaps.org, insideprecisionmedicine.com, ncbi.nlm.nih.gov