Papers that have not been peer-reviewed, including preprints, do not appear here. You can access all of my papers at 🔗Google Scholar.
Gaoyuan Li, Mingxin Liu, Jun Lu, Jiquan Ma† († corresponding author)
Biomedical Physics & Engineering Express, 2024 (Journal)
We introduce a dual-branch network incorporating edge attention and a deep supervision strategy. Edge attention is employed to fully exploit the spatial relationship between the scar and the atrium. In addition, dense attention is embedded in the bottom layer to mitigate feature disappearance, while deep supervision accelerates model convergence and improves segmentation accuracy.
Mingxin Liu, Yunzan Liu, Pengbo Xu, Hui Cui, Jing Ke, Jiquan Ma† († corresponding author)
IEEE Transactions on Medical Imaging, 2024 (Journal)
This study proposes HGPT, a novel framework that jointly models geometric and global representations for cancer diagnosis in histopathological images. HGPT leverages a multi-head graph aggregator to aggregate geometric representations from pathological morphological features, and a locality feature enhancement block to strengthen 2D local feature perception in vision transformers, leading to improved performance on histopathological image classification. Extensive experiments on four public datasets (Kather-5K, MHIST, NCT-CRC-HE, and GasHisSDB) demonstrate the advantages of the proposed HGPT over state-of-the-art approaches in improving cancer diagnosis performance.
Mingxin Liu, Yunzan Liu, Pengbo Xu, Jiquan Ma† († corresponding author)
IEEE International Symposium on Biomedical Imaging (ISBI), 2024 (Conference, Oral)
We propose GOAT, a novel weakly-supervised Geometry-Aware Transformer framework that guides the model to attend to geometric characteristics within the tumor microenvironment, which often serve as potent diagnostic indicators. In addition, a context-aware attention mechanism is designed to extract and enhance morphological features within WSIs. Extensive experimental results demonstrate that the proposed method consistently achieves superior classification performance on gigapixel whole-slide images.
Mingxin Liu, Yunzan Liu, Hui Cui, Chunquan Li†, Jiquan Ma† († corresponding author)
IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 2023 (Conference, Oral)
We propose the Mutual-Guided Cross-Modality Transformer (MGCT), a weakly-supervised, attention-based multimodal learning framework that combines histology and genomic features to model the genotype-phenotype interactions within the tumor microenvironment. Extensive experimental results on five benchmark datasets consistently demonstrate that MGCT outperforms state-of-the-art (SOTA) methods.