Supervised semantic segmentation normally assumes that the test data come from a domain similar to that of the training data. In practice, however, the domain mismatch between the training data and unseen data can lead to a significant performance drop. Obtaining accurate pixel-wise labels for images in different domains is tedious and labor intensive, especially for histopathology images. In this paper, we propose a dual adaptive pyramid network (DAPNet) for histopathological gland segmentation that adapts from one stain domain to another. We tackle the domain adaptation problem on two levels: 1) the image level, which considers differences in image color and style; 2) the feature level, which addresses the spatial inconsistency between the two domains. The two components are implemented as domain classifiers with adversarial training. We evaluate our new approach using two gland segmentation datasets with H&E and DAB-H stains, respectively. Extensive experiments and an ablation study demonstrate the effectiveness of our approach on the domain adaptive segmentation task. We show that the proposed approach performs favorably against other state-of-the-art methods.
- Deep convolutional neural networks (DCNNs) have achieved remarkable success in the field of medical image segmentation. Although excellent performance has been achieved on benchmark datasets, deep segmentation models generalize poorly to unseen datasets due to the domain shift between the training and test data. Such domain shift is especially common in histopathology image analysis. For instance, a Hematoxylin and Eosin (H&E) stained colon image has a significantly different visual appearance from one stained with Diaminobenzidine and Hematoxylin (DAB-H), as shown in Figure 1. Thus, a model trained on one (source) dataset does not generalize well when applied to the other (target) dataset. It is therefore of great interest to develop algorithms that adapt segmentation models from a source domain to a visually different target domain without requiring additional labels in the target domain.
- In this paper, we propose a DCNN-based domain adaptation algorithm for histopathology image segmentation, referred to as the Dual Adaptive Pyramid Network (DAPNet). The proposed DAPNet is designed to reduce the discrepancy between two domains by incorporating two domain adaptation components, at the image level and the feature level. The image-level adaptation considers the overall difference between the source and target domains, such as image color and style, while the feature-level adaptation addresses the spatial inconsistency between the two domains. In particular, each component is implemented as a domain classifier trained with an adversarial strategy to learn domain-invariant features. An overview of our DAPNet is shown in Figure 2.
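The adversarial objective underlying both adaptation components can be illustrated with a minimal NumPy sketch. The function names below are illustrative, not from the paper, and the domain classifiers in DAPNet are CNNs rather than the scalar sigmoid outputs assumed here; the sketch only shows the two opposing losses: the classifier learns to tell source from target, while the segmentation network is trained to make target outputs indistinguishable from source ones.

```python
import numpy as np

def bce(pred, label):
    # Binary cross-entropy on sigmoid probabilities in (0, 1).
    eps = 1e-7
    pred = np.clip(pred, eps, 1.0 - eps)
    return -(label * np.log(pred) + (1 - label) * np.log(1 - pred)).mean()

def discriminator_loss(d_source, d_target):
    # Domain classifier objective: predict 1 for source, 0 for target.
    return bce(d_source, 1.0) + bce(d_target, 0.0)

def adversarial_loss(d_target):
    # Segmentation-network objective: fool the classifier so that
    # target-domain outputs are labeled as source (label 1).
    return bce(d_target, 1.0)
```

Minimizing `adversarial_loss` pushes the target-domain representations toward the source distribution, which is the mechanism driving both the image-level and feature-level components.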
- We evaluate the performance of our DAPNet for gland segmentation in both adaptation directions. In particular, we denote Warwick-QU (source) to GlandVision (target) as Warwick-QU → GlandVision and vice versa, and the test images in the target domain are used for evaluation. We compare our DAPNet with three state-of-the-art unsupervised domain adaptation methods: CycleGAN, CyCADA and AdaptSeg. We report the segmentation results using Pixel Accuracy (Acc.) and Intersection over Union (IoU) in Table 1 below. We observe that DAPNet outperforms all the other methods for domain adaptation between Warwick-QU and GlandVision in both directions. Figure 3 presents qualitative results on two example images for each domain adaptation direction. Our proposed DAPNet produces significantly better predictions with accurate layout.
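For reference, the two reported metrics can be computed as follows. This is a minimal sketch for a single image with integer class labels; the paper does not specify its exact implementation, and the IoU here is shown for one class (e.g. the gland foreground).

```python
import numpy as np

def pixel_accuracy(pred, gt):
    # Fraction of pixels whose predicted class matches the ground truth.
    return (pred == gt).mean()

def iou(pred, gt, cls=1):
    # Intersection over Union for one class: |P ∩ G| / |P ∪ G|.
    p, g = (pred == cls), (gt == cls)
    inter = np.logical_and(p, g).sum()
    union = np.logical_or(p, g).sum()
    return inter / union if union > 0 else 1.0
```

For example, a 2×2 prediction `[[1, 1], [0, 0]]` against ground truth `[[1, 0], [0, 0]]` yields a pixel accuracy of 0.75 and a foreground IoU of 0.5.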
Xianxu Hou, Jingxin Liu, Bolei Xu, Xin Chen, Mohammad Ilyas, Jon Garibaldi and Guoping Qiu. “Dual Adaptive Pyramid Network for Cross-Stain Histopathology Image Segmentation”, International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2019.