BravenNow
Meta-D: Metadata-Aware Architectures for Brain Tumor Analysis and Missing-Modality Segmentation
| USA | technology | ✓ Verified - arxiv.org


#MetaD #MetadataAware #BrainTumorAnalysis #MissingModalitySegmentation #MedicalImaging #AIArchitectures #SegmentationAccuracy

📌 Key Takeaways

  • Meta-D introduces metadata-aware architectures that condition feature extraction on categorical scanner metadata (MRI sequence and plane orientation).
  • Injecting this metadata into 2D tumor detection improves F1-score by up to 2.62% over image-only baselines.
  • For 3D missing-modality segmentation, metadata-based cross-attention routes the available modalities, improving Dice scores by up to 5.12% under extreme modality scarcity.
  • The approach also reduces model parameters by 24.1%, supporting more efficient diagnostic tools for brain tumor treatment planning.

📖 Full Retelling

arXiv:2603.04811v1 (cross-listing). Abstract: We present Meta-D, an architecture that explicitly leverages categorical scanner metadata such as MRI sequence and plane orientation to guide feature extraction for brain tumor analysis. We aim to improve the performance of medical image deep learning pipelines by integrating explicit metadata to stabilize feature representations. We first evaluate this in 2D tumor detection, where injecting sequence (e.g., T1, T2) and plane (e.g., axial) metadata dynamically modulates convolutional features, yielding an absolute increase of up to 2.62% in F1-score over image-only baselines. Because metadata grounds feature extraction when data are available, we hypothesize it can serve as a robust anchor when data are missing. We apply this to 3D missing-modality tumor segmentation. Our Transformer Maximizer utilizes metadata-based cross-attention to isolate and route available modalities, ensuring the network focuses on valid slices. This targeted attention improves brain tumor segmentation Dice scores by up to 5.12% under extreme modality scarcity while reducing model parameters by 24.1%.
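The abstract describes two mechanisms: metadata embeddings that dynamically modulate convolutional features, and metadata-guided attention that routes only the available modalities. A minimal pure-Python sketch of these ideas follows; all function names, the one-hot encoding, and the FiLM-style scale/shift are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch: categorical scanner metadata (MRI sequence, plane)
# is embedded and used to scale/shift a feature vector, and a presence
# mask drops contributions from missing modalities.

SEQUENCES = ["T1", "T1ce", "T2", "FLAIR"]
PLANES = ["axial", "coronal", "sagittal"]

def one_hot(value, vocabulary):
    """Encode a categorical metadata value as a one-hot vector."""
    return [1.0 if v == value else 0.0 for v in vocabulary]

def embed_metadata(sequence, plane):
    """Concatenate one-hot encodings of sequence and plane metadata."""
    return one_hot(sequence, SEQUENCES) + one_hot(plane, PLANES)

def modulate(features, meta, scale_w, shift_w):
    """FiLM-style modulation: per-channel scale and shift predicted
    from the metadata embedding by linear maps (no bias, for brevity)."""
    scale = [sum(w * m for w, m in zip(row, meta)) for row in scale_w]
    shift = [sum(w * m for w, m in zip(row, meta)) for row in shift_w]
    return [(1.0 + s) * f + b for f, s, b in zip(features, scale, shift)]

def route_available(modality_features, available):
    """Average only the modalities flagged as present, standing in for
    attention that ignores missing inputs."""
    kept = [f for f, ok in zip(modality_features, available) if ok]
    return [sum(col) / len(kept) for col in zip(*kept)]
```

With zero-initialized weights the modulation is the identity, a common starting point for FiLM-style conditioning; `route_available` is a crude stand-in for the paper's learned metadata-based cross-attention.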

🏷️ Themes

Medical Imaging, AI in Healthcare


Original Source
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.04811 [Submitted on 5 Mar 2026]
Title: Meta-D: Metadata-Aware Architectures for Brain Tumor Analysis and Missing-Modality Segmentation
Authors: SangHyuk Kim, Daniel Haehn, Sumientra Rampersad
Abstract: We present Meta-D, an architecture that explicitly leverages categorical scanner metadata such as MRI sequence and plane orientation to guide feature extraction for brain tumor analysis. We aim to improve the performance of medical image deep learning pipelines by integrating explicit metadata to stabilize feature representations. We first evaluate this in 2D tumor detection, where injecting sequence (e.g., T1, T2) and plane (e.g., axial) metadata dynamically modulates convolutional features, yielding an absolute increase of up to 2.62% in F1-score over image-only baselines. Because metadata grounds feature extraction when data are available, we hypothesize it can serve as a robust anchor when data are missing. We apply this to 3D missing-modality tumor segmentation. Our Transformer Maximizer utilizes metadata-based cross-attention to isolate and route available modalities, ensuring the network focuses on valid slices. This targeted attention improves brain tumor segmentation Dice scores by up to 5.12% under extreme modality scarcity while reducing model parameters by 24.1%.
Comments: 9 pages, 2 figures, 3 tables
Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.04811 [cs.CV] (arXiv:2603.04811v1 for this version)
DOI: https://doi.org/10.48550/arXiv.2603.04811 (arXiv-issued DOI via DataCite, pending registration)
Submission history: [v1] Thu, 5 Mar 2026 04:54:49 UTC (872 KB)

Source

arxiv.org
