For real examples, we used the real images and their segmentation/annotation masks (Mi) as input. The green and red annotations correspond to Ki67-positive and Ki67-negative nuclei, respectively. For fake examples, we applied a two-step procedure. In Step 1, we used the generator (U-Net) to create a synthetic image from the segmentation/annotation mask. In Step 2, the output of the generator and the initial segmentation (Mi) were used as input to the discriminator, D.
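The pairing described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the array shapes, the `make_discriminator_input` helper, and the placeholder `generator` function are all assumptions standing in for the actual U-Net and conditional discriminator.

```python
import numpy as np

def make_discriminator_input(image, mask):
    """Condition D on the mask by concatenating it with the image channels."""
    return np.concatenate([image, mask], axis=-1)

# Toy stand-ins: a 64x64 RGB image and a 2-channel mask
# (one channel per class: Ki67-positive and Ki67-negative nuclei).
rng = np.random.default_rng(0)
real_image = rng.random((64, 64, 3)).astype(np.float32)
mask = (rng.random((64, 64, 2)) > 0.5).astype(np.float32)

def generator(mask):
    """Placeholder for the U-Net generator G(Mi); returns a fake RGB image."""
    h, w, _ = mask.shape
    return rng.random((h, w, 3)).astype(np.float32)

# Real example for D: (real image, mask); fake example: (G(mask), mask).
real_pair = make_discriminator_input(real_image, mask)
fake_pair = make_discriminator_input(generator(mask), mask)

print(real_pair.shape, fake_pair.shape)  # both (64, 64, 5)
```

The key point is that the discriminator never sees an image alone; it always sees an (image, mask) pair, so it must judge whether the image is consistent with the given annotation.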

Fig 2. The neural network framework used for the generator, G.

General Overview:
In pathology, immunohistochemical (IHC) staining of tissue sections is regularly used to diagnose and grade malignant tumors. Typically, IHC stain interpretation is rendered by a trained pathologist using a manual method, which consists of counting each positively and negatively stained cell under a microscope. This manual enumeration suffers from poor reproducibility even in the hands of expert pathologists. To facilitate this process, we proposed a novel method to create artificial datasets with known ground truth, which allows us to analyze recall, precision, accuracy, and intra- and inter-observer variability in a systematic manner, enabling us to compare different computer analysis approaches.

Current Findings:
Our method employs a conditional Generative Adversarial Network that uses a database of Ki67-stained tissues from breast cancer patients to generate synthetic digital slides. Our experiments show that the synthetic images are indistinguishable from real images. Six readers (three pathologists and three image analysts) attempted to differentiate 15 real from 15 synthetic images, and the probability that the average reader would correctly classify an image as synthetic or real more than 50% of the time was only 44.7%.

Contributors: Caglar Senaras, Muhammad Khalid Khan Niazi, Berkman Sahiner, Michael P. Pennell, Gary Tozbikian, Gerard Lozanski, Metin N. Gurcan


Fig 3. Fully synthetic images (e-g). We created several toy datasets to generate synthetic images with different characteristics, using annotation-based input (a) and segmentation-based input (b and c).