Conditional Generation of Medical Images via Disentangled Adversarial Inference

Mohammad Havaei, Ximeng Mao, Yiping Wang, Qicheng Lao. Medical Image Analysis, 2021, 102106, ISSN 1361-8415.

Synthetic medical image generation has great potential for improving healthcare through many applications, from data augmentation for training machine learning systems to preserving patient privacy. Conditional Generative Adversarial Networks (cGANs) use a conditioning factor to generate images and have shown great success in recent years. Intuitively, the information in an image can be divided into two parts: 1) content, which is represented by the conditioning vector, and 2) style, which is the remaining information missing from the conditioning vector. Current practices in using cGANs for medical image generation use only a single variable (i.e., content) and therefore provide little flexibility or control over the generated image.

In this work we propose DRAI—a dual adversarial inference framework with augmented disentanglement constraints—to learn disentangled representations of style and content from the image itself, and to use this information to impose control over the generation process. In this framework, style is learned in a fully unsupervised manner, while content is learned through both supervised learning (using the conditioning vector) and unsupervised learning (with the inference mechanism). We apply two novel regularization steps to ensure content-style disentanglement. First, we minimize the shared information between content and style by introducing a novel application of the gradient reversal layer (GRL); second, we introduce a self-supervised regularization method to further separate the information in the content and style variables.
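The gradient reversal layer mentioned above acts as an identity function in the forward pass but negates (and optionally scales) gradients in the backward pass, so an encoder is trained *against* an auxiliary predictor. The following is a minimal illustrative sketch of that mechanism in plain Python, not the authors' implementation; the `GradientReversal` class and its `lambda_` scaling factor are assumptions based on the standard GRL formulation.

```python
class GradientReversal:
    """Identity in the forward pass; scales gradients by -lambda_ in backward.

    Illustrative autograd stub only: in DRAI-style training, placing this
    between an encoder and an auxiliary predictor pushes the encoder to
    *remove* the information the predictor relies on, which is how shared
    information between content and style can be minimized.
    """

    def __init__(self, lambda_=1.0):
        self.lambda_ = lambda_

    def forward(self, x):
        # Pass the input through unchanged.
        return x

    def backward(self, grad_output):
        # Reverse (and scale) each gradient component flowing upstream.
        return [-self.lambda_ * g for g in grad_output]


grl = GradientReversal(lambda_=0.5)
print(grl.forward([1.0, 2.0]))    # unchanged activations
print(grl.backward([0.2, -0.4]))  # reversed, scaled gradients
```

In an autodiff framework, the same idea is typically realized as a custom operation whose backward pass multiplies the incoming gradient by a negative constant, leaving the forward computation untouched.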

For evaluation, we consider two types of baselines: single latent variable models that infer a single variable, and double latent variable models that infer two variables (style and content). We conduct extensive qualitative and quantitative assessments on two publicly available medical imaging datasets (LIDC and HAM10000), evaluating conditional image generation, image retrieval, and style-content disentanglement. We show that, in general, double latent variable models achieve better performance and give more control over the generated image. We also show that our proposed model, DRAI, achieves the best disentanglement score and the best overall performance.
