Dr. Maria A. Zuluaga – Eurecom, 3IA Institute Côte d’Azur
Title: The data challenges of AI for medical imaging
Artificial intelligence (AI) has shown great potential to assist clinicians in the analysis of medical images for the diagnosis and follow-up of several conditions. This success, however, depends on two critical factors. First, AI models require a large set of training data with high-quality annotations. Second, the images to be processed by the AI model once deployed are expected to have the same statistical properties as the training data. When either of these conditions is not met, the AI model is likely to fail, which is critical within a clinical setting. In this talk, I will present some of our recent work that aims to address these challenges. In the first part of my talk, I will discuss novel methodological strategies to ease data annotation and make better use of small sample sizes at training. In the second part, I will present some of our work on quality control strategies to monitor the performance of deployed AI models and detect potential drifts in the distribution of the test data.
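Drift detection can start from something as simple as comparing the distribution of a scalar image feature at test time against the training distribution. As an illustrative sketch only (not the methods presented in the talk), a two-sample Kolmogorov-Smirnov statistic with synthetic data:

```python
import numpy as np

def ks_statistic(train_values, test_values):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap
    between the empirical CDFs of the two samples."""
    data = np.sort(np.concatenate([train_values, test_values]))
    cdf_train = np.searchsorted(np.sort(train_values), data, side="right") / len(train_values)
    cdf_test = np.searchsorted(np.sort(test_values), data, side="right") / len(test_values)
    return np.max(np.abs(cdf_train - cdf_test))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 1000)  # e.g. an image feature at training time
shifted = rng.normal(0.8, 1.0, 1000)    # test-time data with a mean shift
same = rng.normal(0.0, 1.0, 1000)       # test-time data from the same distribution

print(ks_statistic(reference, shifted) > 0.2)  # True: drift flagged
print(ks_statistic(reference, same) > 0.2)     # False: no drift
```

In practice one would monitor many features (or learned representations) and calibrate the alarm threshold, but the principle of comparing test-time statistics against a training reference is the same.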
Dr. Melek Önen – Eurecom
Title: Privacy & Security for AI
The rise of cloud computing technology has led to a paradigm shift in technological services, enabling stakeholders to delegate their data analytics tasks to third-party (cloud) servers. Machine Learning as a Service (MLaaS) is one such service, which makes it easy for stakeholders to perform machine learning tasks on a cloud platform. The advantage of outsourcing these computationally intensive operations, unfortunately, comes at a high cost in terms of privacy exposure. The goal is therefore to come up with customized ML algorithms that, by design, preserve the privacy of the processed data. Advanced cryptographic techniques such as fully homomorphic encryption or secure multi-party computation enable the execution of some operations over encrypted data and can therefore be considered potential candidates for these algorithms. Yet they incur high computational and/or communication costs for some operations. In this talk, we will analyze the tension between ML techniques and the relevant cryptographic tools. We will further overview existing solutions addressing both privacy and security.
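As a toy illustration of the flavour of secure multi-party computation (not the talk's actual protocols), additive secret sharing lets two non-colluding servers compute a sum without either one ever seeing the plaintext inputs:

```python
import random

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(secret):
    """Split a secret into two additive shares; each share alone
    is uniformly random and reveals nothing about the secret."""
    r = random.randrange(PRIME)
    return r, (secret - r) % PRIME

def reconstruct(s1, s2):
    return (s1 + s2) % PRIME

# Two clients secret-share their private inputs across two servers.
a1, a2 = share(42)
b1, b2 = share(100)

# Each server adds the shares it holds, never seeing 42 or 100.
sum1 = (a1 + b1) % PRIME
sum2 = (a2 + b2) % PRIME

print(reconstruct(sum1, sum2))  # 142
```

Addition comes essentially for free here; it is multiplications and non-linear operations (the bulk of ML workloads) that drive the computational and communication costs mentioned above.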
Dr. Antonia Machlouzarides-Shalit and Léonie Borne – NeuroPin, start-up incubated at Inria
Title: Augmented Neuroradiology: Enhancing neuroscience with AI to improve clinical interpretations of MRI images
The current dynamics of medical imaging are reaching a crisis point: there is too much data and not enough radiologists. With non-standardised workflows, thorough qualitative interpretations are time-consuming and require human experience and expertise, and quantitative measurements are often arbitrary and ambiguous. Our start-up NeuroPin addresses these two issues. Qualitatively, we automate the detection of anomalies. Quantitatively, we standardise the volumetry of those anomalies and of their changes over time. NeuroPin's future stages include fortifying the relationships between pathological change over time, relevant biomarkers and treatments, in order to develop a prediction model that can improve patient outcomes.
Dr. Stéphanie Lopez – LungScreenCT, in collaboration with the UCA, CHU of Nice and Therapixel
Title: AI applied to lung cancer screening: the LungScreenAI project
Lung cancer is the leading cause of cancer death. Most efforts are dedicated to the early-stage detection of lesions, so that the prognosis of patients with lung cancer can be improved.
Even though no lung cancer screening programme has been implemented in France so far, high-risk patients are advised to undergo monitoring. Thoracic CT scans allow suspicious pulmonary nodules to be identified, and additional exams such as PET scans, biopsies or surgery may be proposed. The increase in the number of thoracic scans has led to the detection of an increasing number of pulmonary nodules, most of them benign. In order not to miss any cancers, many patients are over-diagnosed, resulting in an increase in the number of additional exams, which is both costly and stressful for the patient.
The main difficulty for radiologists stems from evaluating the malignancy of nodules, owing to the high variability of their characteristics (size, texture, growth rate…). International guidelines, such as those proposed by the Fleischner Society in 2017, have been introduced to standardize the follow-up of patients with pulmonary nodules.
LungScreenAI aims to provide radiologists with a reliable first-reading tool based on artificial intelligence algorithms. This will save diagnosis time while maintaining or improving diagnostic accuracy and limiting over-diagnosis and its associated costs and stress. This is a partnership project: Université Côte d'Azur and the University Hospital (CHU) of Nice are its promoters, in partnership with a French start-up.
Dr. Marco Lorenzi – Inria
Title: Fed-BioMed for open-source federated learning in healthcare
Fed-BioMed is an open-source initiative for federated learning (FL) in healthcare, led by Inria, and featuring contributions from research, clinical and industrial partners.
During the talk, I will present the research and development activity currently ongoing in Fed-BioMed, and I will illustrate our use cases for real-life applications of federated learning in networks of hospitals. I will detail the basic paradigms behind the software components for clients and the central node, and illustrate the workflow for deploying models in typical FL scenarios. Finally, I will show how Fed-BioMed allows state-of-the-art research, including schemes for private and robust FL, to be deployed in real settings.
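The client/central-node workflow described above can be sketched with the classic federated averaging (FedAvg) scheme. This is a minimal illustration with synthetic data and a linear model, not Fed-BioMed's actual API: each client trains locally on its private data, and only model weights travel to the central node for aggregation.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient-descent steps
    of linear regression on its private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Central node: average client models weighted by dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])
# Each "hospital" holds its own data; raw data never leaves the client.
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + rng.normal(0, 0.01, size=50)))

w = np.zeros(2)
for _ in range(20):  # 20 federated rounds
    updates = [local_update(w, X, y) for X, y in clients]
    w = fed_avg(updates, [len(y) for _, y in clients])

print(np.round(w, 2))  # close to [ 2. -1.]
```

The private and robust FL schemes mentioned in the abstract modify this loop, e.g. by adding noise or secure aggregation to the exchanged updates, or by down-weighting outlier clients.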
Pr. Frédéric Precioso – Inria, I3S
Title: Why are transformers expected to be the next super neural model?
The success of deep learning started with convolutional neural networks, architectures that preserve spatial patterns in data. This family of models is easily parallelizable. For sequential data and time series, interest moved to recurrent neural networks, which preserve sequential patterns. This family of models is not parallelizable but can take large contexts (or long-term dependencies) into account. Recurrent neural networks have also benefited greatly from attention mechanisms and attentional layers. Transformers are expected to be the next super neural model because they gather all the advantages of the previous families. After detailing the internal mechanisms of transformers, we will look at recent applications.
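The core internal mechanism behind transformers is scaled dot-product attention. A minimal numpy sketch (dimensions and random data are purely illustrative):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each output position is a
    weighted mix of the values V, with weights given by how well
    its query matches every key."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (seq, seq) similarity matrix
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8
X = rng.normal(size=(seq_len, d_model))

# In a transformer, Q, K, V are learned linear projections of the input X.
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out = attention(X @ Wq, X @ Wk, X @ Wv)
print(out.shape)  # (4, 8)
```

This hints at why the family "gathers all advantages": every position attends to every other in a single matrix product, so long-range dependencies are captured (like an RNN) while the whole sequence is processed in parallel (like a CNN).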
Dr. Mahdi Rajabizadeh – AI.Nature
Title: AI can help in the management of snakebite
Snake envenomation is a public health challenge in many tropical and subtropical countries, mostly in Africa, Asia, and Latin America. The WHO recently classified snakebite envenomation as a neglected tropical disease, an important milestone in disease control. About 5.4 million snake bites occur each year, resulting in about 81,000 to 137,000 deaths and around three times as many permanent disabilities (www.who.int). This human–snake conflict partly results from the difficulty of snake identification, which currently relies on expert knowledge. Snake identification helps doctors to better plan snakebite treatment. AI.Nature is a French start-up project, incubated at Inria Paris, that combines AI and zoological science to help doctors identify snakes. AI.Nature has produced a web application that provides AI-based services for 1) image-based automatic snake identification and 2) location-based snake identification. The web application has already been launched for western Asia and northern Africa and is being tested by doctors, including on real cases.
Pr. Maxime Sermesant – Inria, IHU Liryc, 3IA Côte d’Azur
Title: AI & Personalised Cardiac Modelling: Learning by Heart
Machine learning and biophysical modelling are very complementary approaches. The recent progress in computing power and available data makes it possible to develop accurate data-driven approaches for healthcare, while biophysical models offer a principled way to represent anatomy and physiology. In this talk, I will present research where we combine both methodologies in order to leverage their strengths. Different clinical applications in computational cardiology will be presented.
Pr. Alejandro F Frangi – University of Leeds, KU Leuven, Alan Turing Institute
Title: Precision Imaging – from model-based imaging to image-based modelling
Medical image analysis has grown into a mature field, challenged by progress across all medical imaging technologies and by more recent breakthroughs in biological imaging. The cross-fertilisation between medical image analysis, medical imaging physics and technology, and domain knowledge from medicine and biology has spurred a truly interdisciplinary effort that stretches beyond the original boundaries of the disciplines that gave birth to this field, creating stimulating and enriching synergies.
Precision Imaging is not a new discipline but rather a distinct emphasis in medical imaging, born at the crossroads between mechanistic and phenomenological model-based imaging and unifying the efforts behind them. Precision Imaging is characterised by being descriptive, predictive and integrative. It captures three main directions in the effort to deal with the information deluge in imaging sciences and thus to achieve wisdom from data, information, and knowledge. Precision Imaging can lead to carefully and mechanistically engineered imaging biomarkers, and to the use of medical-imaging-based computational modelling and simulation for improved regulatory science and innovation of medical products.
This talk summarises and formalises our vision of Precision Imaging for Precision Medicine and highlights some connections with past research and our current focus on large-scale computational phenomics and in silico clinical trials.
Pr. Benoit Huet and Dr. Pierre Baudot – Median Technologies
Title: iBiopsy® Lung Cancer Screening: An AI diagnostic software for improving patient care at scale
Medical images reveal the disease as it really is, at every stage, and allow its evolution to be monitored in a non-invasive manner. Harnessing the true power of medical images is key to accelerating clinical innovations and drug development, and to improving patient care. Since 2002, Median has been expanding the boundaries of the identification, analysis and reporting of imaging data in the medical world, with a dedicated focus on cancer and other chronic diseases. We, at Median, provide state-of-the-art imaging services for oncology trials and AI medical imaging capabilities that reveal novel insights into previously unreachable knowledge. Our two proprietary platforms, iSee® for imaging services in clinical trials and iBiopsy® for image-based non-invasive diagnostics, leverage the power of medical images to accelerate therapeutic innovations and improve cancer patient care. iBiopsy®, a unique end-to-end AI-powered Software as a Medical Device (SaMD) technology, automatically integrates and optimizes the entire medical imaging workflow. iBiopsy® supports radiologists and clinicians in the screening, early diagnosis and treatment of life-threatening diseases worldwide. iBiopsy® SaMDs for end-to-end AI diagnostics are currently being developed for three indications: lung cancer screening, HCC early diagnosis and recurrence prediction, and NASH diagnosis and scoring. The presentation will focus on a component of our lung cancer screening diagnostic models that aims to characterize the malignant state through automatic quantification of the tumor's morphology.
Pathologies systematically induce morphological changes, which therefore provide a major but as yet insufficiently quantified source of observables for diagnosis. This study develops a predictive model of pathological states based on morphological features (3D-morphomics) extracted from Computed Tomography (CT) volumes. A complete workflow for mesh extraction and simplification of an organ's surface is developed and coupled with an automatic extraction of morphological features given by the distributions of mean curvature and mesh energy. A supervised classifier, an XGBoost model, is then trained and tested on the 3D-morphomics to predict the pathological states. This framework is applied to predicting the malignancy of lung nodules. On a subset of the NLST database with biopsy-confirmed malignancy, using 3D-morphomics only, the classification model of lung nodules into malignant vs. benign achieves an AUC of 0.964. Three additional sets of classical features are trained and tested: (1) a set of clinically relevant features gives an AUC of 0.58; (2) a set of 111 radiomics features gives an AUC of 0.976; (3) a radiologist ground-truth (GT) set containing the nodule size, attenuation and spiculation qualitative annotations gives an AUC of 0.979. Combining the 3D-morphomics with the radiomics features achieves state-of-the-art results, with an AUC of 0.978, where the 3D-morphomics have some of the highest predictive powers. This establishes curvature distributions as efficient features for predicting lung nodule malignancy, and a new method that can be applied directly to arbitrary computer-aided diagnosis tasks.
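The AUC values reported above summarize how well classifier scores rank malignant nodules above benign ones. As a minimal sketch of the rank-based (Mann-Whitney) formulation, with hypothetical scores rather than the study's data:

```python
def auc(labels, scores):
    """AUC as the probability that a randomly chosen positive case
    receives a higher score than a randomly chosen negative one
    (Mann-Whitney formulation; ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical malignancy scores for 4 malignant (1) and 4 benign (0) nodules.
labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.75, 0.4, 0.6, 0.3, 0.2, 0.1]
print(auc(labels, scores))  # 0.9375
```

An AUC of 0.5 means the scores rank no better than chance (close to the clinical-feature set's 0.58 above), while 1.0 means every malignant nodule outscores every benign one.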
Pr. Hervé Delingette – Inria
Title: Some Strategies to cope with the cost of annotations in Medical Image Analysis
Image annotations such as image labels or organ delineations are required to train supervised learning algorithms for various tasks in medical image analysis, but also to evaluate their performance. Producing high-quality annotations is very time-consuming, especially when dealing with volumetric images. Furthermore, inter-rater variability in producing those annotations has to be taken into account to reflect the complexity of the tasks. In this lecture, I will present some strategies, related to both data and models, to cope with the cost of annotations. A first set of approaches is data-centric and aims to keep only high-quality annotations and to precisely measure the agreement or disagreement between raters. A second set of methods, focused on machine learning models, tries to minimize the amount of strong annotations required, for instance through the use of semi-supervised or mixed-supervised techniques.
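One common semi-supervised technique for reducing the need for strong annotations is pseudo-labeling: train on the few labeled cases, label the unlabeled pool where the model is confident, and retrain on both. A toy sketch of the loop (the nearest-centroid classifier, synthetic 2D data and confidence threshold are all illustrative, not the lecture's methods):

```python
import numpy as np

def fit_centroids(X, y):
    """Minimal classifier: one centroid per class; stands in for
    any model trainable from labeled examples."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    """Nearest-centroid prediction, returning labels and distances
    (smaller distance serves as a crude confidence proxy)."""
    classes = sorted(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    return np.array(classes)[dists.argmin(axis=0)], dists.min(axis=0)

rng = np.random.default_rng(0)
# A few expensive expert annotations...
X_lab = np.vstack([rng.normal(0, 0.5, (5, 2)), rng.normal(3, 0.5, (5, 2))])
y_lab = np.array([0] * 5 + [1] * 5)
# ...and a large pool of cheap unlabeled scans.
X_unl = np.vstack([rng.normal(0, 0.5, (100, 2)), rng.normal(3, 0.5, (100, 2))])

model = fit_centroids(X_lab, y_lab)
# Pseudo-label the unlabeled pool where the model is confident...
pseudo, dist = predict(model, X_unl)
keep = dist < 1.0
# ...and retrain on labeled plus confidently pseudo-labeled examples.
model = fit_centroids(np.vstack([X_lab, X_unl[keep]]),
                      np.concatenate([y_lab, pseudo[keep]]))
print(sorted(model))  # [0, 1]
```

The same loop applies with a segmentation network in place of the centroid classifier; the hard part in practice is the confidence criterion, which is where rater agreement measures also come into play.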
Dr. Remi Bernhard – Quantificare
Title: Quantificare: 2D/3D imaging and artificial intelligence at the service of dermatology and cosmetic surgery
Quantificare is a pioneer in 2D and 3D imaging systems and in standardized photographic documentation for clinical study centers. First, as a contract research organization, Quantificare provides services in the context of clinical studies, in particular to assess the quality of certain treatments. Second, Quantificare offers cameras from the LifeViz range and the LifeVizApp software, intended for cosmetic surgery practitioners, which provide a tool for viewing, simulating and analyzing the results of operations. This gives the patient a clear and precise vision of the changes to come, and allows practitioners to evaluate the operations they wish to perform.
In addition, in a medical context, taking advantage of AI is particularly relevant. For this reason, the solutions offered by Quantificare include AI at different levels, in particular to obtain an automatic and precise diagnosis for various diseases.
In this presentation, I will first present Quantificare's main activities and services. I will then detail some use cases of AI relating to Quantificare's activity, highlighting issues of feasibility and robustness.
Pr. Irina Voiculescu – University of Oxford, Computer Science department
Title: How good is good enough? Exercising care when using mathematical evaluation measures in medical image segmentation
A wide variety of Artificial Intelligence methods are pervading all areas of medicine that involve image acquisition. In a century where clinical professionals are overwhelmed with data, carefully designed AI models genuinely advance the medical screening process. All too often, however, the novelty of AI methods consists merely of using established machine learning models and showing their performance on single datasets. It is important to explore scenarios where such conventional pipelines of data crunching and evaluation do not apply directly. That is where our mathematical reasoning needs to shine through: what meaning can we still extract from results which appear to be less than perfect?