With headlines like these, it is clear that artificial intelligence has some major implications for lung cancer:
But, fantastical headlines notwithstanding, what does this mean for patients and caregivers, and for the daily race against the clock to cure lung cancer? Kam Kafi, MDCM, Imagia’s Director of Oncology, breaks it down for you during Lung Cancer Awareness Month this November.
Artificial intelligence, or AI, is a computer science term for research into algorithms that attempt to replicate human cognition. Deep learning is a branch of AI in which algorithms automatically learn to recognize patterns in data.
By feeding huge amounts of data from medical systems into artificial neural networks, computers synthesize the massive amount of information. Then, researchers train computers to recognize patterns that would be hard—or even impossible—for a person to see. The system follows an algorithm, or set of instructions, and learns as it goes. The more data it receives, the better it becomes at interpretation.
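To make the idea concrete, here is a minimal sketch of that learn-as-it-goes loop: a tiny logistic-regression model trained by gradient descent on invented data. The "nodule diameter" values and labels below are purely illustrative, not real clinical data or Imagia's method; the point is only that the algorithm extracts a decision rule from labelled examples.

```python
import math

def train(examples, epochs=300, lr=0.05):
    """Learn a weight and bias from (feature, label) pairs via
    stochastic gradient descent on the logistic loss."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in examples:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid
            w += lr * (y - p) * x                     # gradient step
            b += lr * (y - p)
    return w, b

def predict(w, b, x):
    """Classify a new example with the learned parameters."""
    return 1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5

# Invented training data: (nodule diameter in mm, 1 if malignant).
data = [(2, 0), (3, 0), (4, 0), (5, 0), (9, 1), (11, 1), (14, 1), (20, 1)]
w, b = train(data)

small_nodule = predict(w, b, 2)   # well inside the benign examples
large_nodule = predict(w, b, 20)  # well inside the malignant examples
```

The more (and more varied) examples appear in `data`, the better the learned boundary separates the two classes; real systems do the same thing with millions of parameters instead of two.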
This is one of the most promising areas for AI, which excels at recognizing patterns and interpreting images: developing the ability to read microscope slides, X-rays, MRIs, and other medical scans. It arrived in concert with the growth of lung cancer screening programs, which expanded after promising results in several pivotal trials and led hospitals to scan and track many at-risk patients. With that wealth of imaging data at hand, a great deal of research became possible on using AI to assist radiologists in the detection and diagnosis of lung cancer.
One example is computer-aided detection (CADe), which refers to methods for detecting pulmonary nodules, or lung spots that may be cancerous. Computer-aided diagnosis (CADx), by contrast, does not detect the nodules but differentiates benign from malignant ones, reducing reliance on invasive and risky biopsies. CADe, CADx, and the wider CAD family have been available for many years. However, they did not see widespread clinical adoption in the detection and diagnosis of lung cancer for some time, primarily due to limited accuracy and a lack of robust clinical studies demonstrating efficacy.
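The division of labour between CADe and CADx can be sketched with a toy example: detection first finds candidate bright spots on a miniature "scan", then diagnosis applies a rule to characterize each one. Everything here is invented for illustration (the grid, thresholds, and labels); real CAD systems operate on full CT volumes with far more sophisticated models.

```python
# A toy 5x5 "scan": intensity values, with two bright regions.
SCAN = [
    [0, 0, 0, 0, 0],
    [0, 7, 0, 0, 0],   # isolated bright pixel (small candidate)
    [0, 0, 0, 9, 9],
    [0, 0, 0, 9, 9],   # bright 2x2 cluster (larger candidate)
    [0, 0, 0, 0, 0],
]

def cade_detect(scan, threshold=5):
    """CADe step: *find* candidate nodules -- here, bright pixels."""
    return [(r, c) for r, row in enumerate(scan)
            for c, v in enumerate(row) if v > threshold]

def cadx_classify(candidates, size_cutoff=3):
    """CADx step: *characterize* each finding -- here, a crude rule
    that flags a candidate as 'suspicious' when it belongs to a
    large bright neighbourhood, and 'likely benign' otherwise."""
    labels = {}
    for r, c in candidates:
        neighbours = sum(1 for rr, cc in candidates
                         if abs(rr - r) <= 1 and abs(cc - c) <= 1)
        labels[(r, c)] = ("suspicious" if neighbours >= size_cutoff
                          else "likely benign")
    return labels

found = cade_detect(SCAN)          # detection: where are the spots?
labels = cadx_classify(found)      # diagnosis: what are they?
```

The key point the sketch illustrates is the pipeline: CADe answers "where are the findings?", CADx answers "what is each finding?".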
However, that has changed. With larger and more diverse data sets available, deep learning algorithms are becoming better at both detecting nodules on scans and predicting their malignancy risk.
Researchers from the Terry Fox Research Institute and several medical centers recently published a Lancet journal paper offering a glimpse of the future of artificial intelligence in lung cancer screening.
By training and validating a deep learning algorithm on data from over 25,000 patients, they were able to accurately estimate three-year risk of lung cancer and lung cancer-specific mortality. The test properly guided the timing of diagnostic tests and identified the optimal interval between screening CT scans.
Current lung cancer screening guidelines ascertain the timing of the next CT scan from basic criteria: the size or density of the largest lung nodule on the previous CT scan, or the appearance of a new nodule.
But the current test has pitfalls: it can miss tumors, or mistake benign spots for malignancies and push patients into invasive, risky procedures like lung biopsies or surgery. And radiologists looking at the same scan may have different opinions about it.
In contrast, the deep learning algorithm was able to recognize patterns in both temporal and spatial changes, including synergy among changes from different nodules, taking into account aggregate changes in nodule characteristics and non-nodule features from the same individual.
The ability to process vast amounts of data makes it possible for artificial intelligence to recognize subtle patterns that humans simply cannot see. In practice, however, the large and diverse data sets needed to train AI are rare. In healthcare, most organizations lack the infrastructure required to cohort the data needed to train algorithms optimally. Furthermore, aggregated healthcare data sits in siloed information hubs, and broader infrastructure is needed to connect these islands. Without it, innovations made at a single institution (or other siloed setting) won’t translate beyond its doors, or evolve past the narrow parameters on which they were trained to work in real-life, dynamic settings.
Furthermore, there are legitimate concerns about data safety and privacy that restrict the sharing and use of healthcare data. These privacy protections, while vitally important and necessary, have limited the development of AI solutions at scale. And because Imagia believes so strongly in AI’s power to change healthcare, our privacy pledge means strict data governance and a guarantee of institutional data privacy.
We employ cutting-edge techniques that we believe will shape the future of machine learning in healthcare. They are:
Federated Learning: Trains AI models on distributed datasets without directly accessing them, ensuring patient and institutional privacy.
Differential Privacy: Makes formal, mathematical guarantees around privacy preservation when publishing our results (either directly or through AI models).
Encrypted Computation: Allows machine learning to be done on data while it remains encrypted.
These privacy-preserving developments allow AI training on data from multiple institutions, hospitals, and clinics without sharing patient data: sites contribute to training without their data ever being transferred or centralized, while AI models are still developed in a collaborative, multi-institutional setting.
These collaborative tools are what we harness at Imagia. We enable our partner institutions to securely train models while preserving patient privacy. We believe that increased cooperation between researchers, hospitals, and clinicians will help us make full use of the benefits of AI in lung cancer and beyond.