Mingxin Liu
Ph.D. student, Nanjing University of Information Science and Technology
I am a first-year Ph.D. student advised by Prof. Jun Xu at Nanjing University of Information Science and Technology. My research interests are Computational Pathology and Medical Image Analysis. Prior to starting my Ph.D., I obtained my M.S. degree in Computer Science at Heilongjiang University, advised by Prof. Jiquan Ma, and worked with Prof. Dinggang Shen, Prof. Jing Ke, Prof. Chunquan Li, and Dr. Hui Cui.
Nanjing University of Information Science and Technology
Ph.D. student in Artificial Intelligence Sep. 2024 - Present
Heilongjiang University
M.S. in Computer Science Sep. 2021 - Jul. 2024
Heilongjiang International University
B.S. in Computer Science and Technology Sep. 2017 - Jul. 2021
Mingxin Liu, Yunzan Liu, Pengbo Xu, Hui Cui, Jing Ke, Jiquan Ma† († corresponding author)
IEEE Transactions on Medical Imaging, 2024 (Journal)
This study proposes HGPT, a framework that jointly models geometric and global representations for cancer diagnosis in histopathological images. HGPT leverages a multi-head graph aggregator to aggregate geometric representations from pathological morphological features, and a locality feature enhancement block to strengthen 2D local feature perception in the vision transformer, leading to improved histopathological image classification. Extensive experiments on four public datasets (Kather-5K, MHIST, NCT-CRC-HE, and GasHisSDB) demonstrate the advantages of HGPT over state-of-the-art approaches in cancer diagnosis.
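As a rough illustration of the two components described above, the following minimal PyTorch sketch combines a multi-head graph aggregation step over patch features with a depthwise-convolution locality enhancement. It is not the HGPT implementation; all module and variable names are illustrative assumptions.

# Sketch only: (1) multi-head graph aggregation over patch features using an adjacency matrix,
# (2) depthwise 3x3 convolution over the 2D patch grid to strengthen local perception.
import torch
import torch.nn as nn

class MultiHeadGraphAggregator(nn.Module):
    """Aggregate each patch's features from its graph neighbours with multi-head attention."""
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (B, N, C) patch features; adj: (B, N, N) boolean adjacency (True = connected)
        attn_mask = ~adj  # True entries are blocked by nn.MultiheadAttention
        attn_mask = attn_mask.repeat_interleave(self.attn.num_heads, dim=0)
        out, _ = self.attn(x, x, x, attn_mask=attn_mask)
        return x + out  # residual geometric aggregation

class LocalityEnhancement(nn.Module):
    """Depthwise 3x3 convolution over the patch grid to enhance 2D local features."""
    def __init__(self, dim: int):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)

    def forward(self, x: torch.Tensor, h: int, w: int) -> torch.Tensor:
        # x: (B, N, C) with N == h * w patches laid out on a grid
        b, _, c = x.shape
        grid = x.transpose(1, 2).reshape(b, c, h, w)
        return x + self.dwconv(grid).flatten(2).transpose(1, 2)

if __name__ == "__main__":
    feats = torch.randn(2, 16, 64)                  # 2 images, 4x4 patch grid, 64-dim features
    adj = torch.rand(2, 16, 16) > 0.5
    adj |= torch.eye(16, dtype=torch.bool)          # keep self-loops so every node attends somewhere
    feats = MultiHeadGraphAggregator(64)(feats, adj)
    feats = LocalityEnhancement(64)(feats, h=4, w=4)
    print(feats.shape)                              # torch.Size([2, 16, 64])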
Mingxin Liu, Yunzan Liu, Pengbo Xu, Jiquan Ma† († corresponding author)
IEEE International Symposium on Biomedical Imaging (ISBI), 2024 (Conference, Oral)
We propose the Geometry-Aware Transformer (GOAT), a novel weakly-supervised framework that guides the model to attend to geometric characteristics of the tumor microenvironment, which often serve as potent diagnostic indicators. In addition, a context-aware attention mechanism is designed to extract and enhance morphological features within whole-slide images (WSIs). Extensive experimental results demonstrate that GOAT consistently achieves superior classification performance on gigapixel WSIs.
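For readers unfamiliar with the weakly-supervised setting, the short PyTorch sketch below shows generic attention-based multiple-instance pooling over a bag of patch features with a single slide-level label. It illustrates the setting only, not GOAT's context-aware attention; the names and dimensions are illustrative assumptions.

# Sketch only: generic attention-MIL pooling for slide-level classification.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, dim: int = 512, num_classes: int = 2):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, 128), nn.Tanh(), nn.Linear(128, 1))
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (N, dim) embeddings of the N patches tiled from one gigapixel WSI
        weights = torch.softmax(self.score(patches), dim=0)   # (N, 1) patch importance
        slide_embedding = (weights * patches).sum(dim=0)      # weighted slide-level feature
        return self.classifier(slide_embedding)               # slide-level logits

logits = AttentionMIL()(torch.randn(1000, 512))               # e.g. 1000 patches per slide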
Mingxin Liu, Yunzan Liu, Hui Cui, Chunquan Li†, Jiquan Ma† († corresponding author)
IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 2023 (Conference, Oral)
We propose the Mutual-Guided Cross-Modality Transformer (MGCT), a weakly-supervised, attention-based multimodal learning framework that combines histology and genomic features to model genotype-phenotype interactions within the tumor microenvironment. Extensive experimental results on five benchmark datasets consistently show that MGCT outperforms state-of-the-art (SOTA) methods.
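The minimal PyTorch sketch below illustrates the general idea of cross-modality attention between genomic and histology tokens, where each modality guides the other before fusion. It is a simplified stand-in, not MGCT's exact architecture; module names and dimensions are assumptions.

# Sketch only: mutual cross-attention between histology patch tokens and genomic tokens.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim: int = 256, num_heads: int = 4, num_classes: int = 4):
        super().__init__()
        # Each modality queries the other ("mutual guidance"), then the two views are fused.
        self.gen_to_path = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.path_to_gen = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.head = nn.Linear(2 * dim, num_classes)

    def forward(self, path_tokens: torch.Tensor, gen_tokens: torch.Tensor) -> torch.Tensor:
        # path_tokens: (B, Np, dim) histology patch features; gen_tokens: (B, Ng, dim) genomic embeddings
        path_ctx, _ = self.gen_to_path(path_tokens, gen_tokens, gen_tokens)  # genomics guide histology
        gen_ctx, _ = self.path_to_gen(gen_tokens, path_tokens, path_tokens)  # histology guides genomics
        fused = torch.cat([path_ctx.mean(dim=1), gen_ctx.mean(dim=1)], dim=-1)
        return self.head(fused)                                # per-patient logits

out = CrossModalFusion()(torch.randn(1, 500, 256), torch.randn(1, 6, 256))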