MobileFetalCLIP: Selective Repulsive Knowledge Distillation for Mobile Fetal Ultrasound Analysis
#MobileFetalCLIP #FetalUltrasound #KnowledgeDistillation #MobileHealth #MedicalImaging #AIEfficiency #PrenatalDiagnostics
📌 Key Takeaways
- MobileFetalCLIP introduces a selective repulsive knowledge distillation method for fetal ultrasound analysis on mobile devices.
- The approach improves efficiency and accuracy by selectively transferring knowledge from the 304M-parameter FetalCLIP teacher to an 11.4M-parameter student, a roughly 26x capacity gap.
- It addresses challenges in deploying complex AI models on resource-constrained mobile platforms for medical imaging.
- The method focuses on enhancing fetal ultrasound analysis, potentially aiding in prenatal diagnostics and healthcare accessibility.
📖 Full Retelling
arXiv:2603.05421v1 (cross-listed)
Abstract: Fetal ultrasound AI could transform prenatal care in low-resource settings, yet current foundation models exceed 300M visual parameters, precluding deployment on point-of-care devices. Standard knowledge distillation fails under such extreme capacity gaps (~26x), as compact students waste capacity mimicking architectural artifacts of oversized teachers. We introduce Selective Repulsive Knowledge Distillation, which decomposes contrastive KD into diagonal and off-diagonal components: matched-pair alignment is preserved while the off-diagonal weight decays into negative values, repelling the student from the teacher's inter-class confusions and forcing discovery of architecturally native features. Our 11.4M-parameter student surpasses the 304M-parameter FetalCLIP teacher on zero-shot HC18 biometry validity (88.6% vs. 83.5%) and brain sub-plane F1 (0.784 vs. 0.702), while running at 1.6 ms on iPhone 16 Pro, enabling real-time assistive AI on handheld ultrasound devices.
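The decomposition the abstract describes can be sketched in a few lines. The following NumPy snippet is a minimal illustration under stated assumptions, not the paper's implementation: the function names, the squared-difference form of the matching term, and the linear decay schedule (`w_start`, `w_end`) are all illustrative choices. It only shows the core idea: the diagonal (matched-pair) term always pulls the student toward the teacher, while the off-diagonal weight decays from positive to negative, turning imitation of the teacher's inter-class similarities into repulsion from them.

```python
import numpy as np

def selective_repulsive_kd_loss(student_sim, teacher_sim, off_diag_weight):
    """Sketch of a diagonal / off-diagonal contrastive-KD decomposition.

    student_sim, teacher_sim: (N, N) similarity matrices over a batch,
    where entry (i, j) scores pair i-j and the diagonal holds matched pairs.
    off_diag_weight: scalar weight on the off-diagonal term; it is decayed
    into negative values over training, so the student is pushed away from
    the teacher's off-diagonal (inter-class) confusions.
    """
    n = student_sim.shape[0]
    diag_mask = np.eye(n, dtype=bool)
    sq_diff = (student_sim - teacher_sim) ** 2
    # Matched-pair alignment: always attract the student to the teacher.
    diag_term = sq_diff[diag_mask].mean()
    # Off-diagonal structure: attracted while the weight is positive,
    # repelled once the weight has decayed below zero.
    off_diag_term = sq_diff[~diag_mask].mean()
    return diag_term + off_diag_weight * off_diag_term

def decayed_weight(step, total_steps, w_start=1.0, w_end=-0.5):
    """Linearly decay the off-diagonal weight into negative territory."""
    t = min(step / total_steps, 1.0)
    return (1.0 - t) * w_start + t * w_end
```

For example, early in training (`decayed_weight(0, total_steps)` returns `w_start = 1.0`) the student mimics the full teacher similarity matrix; by the end (`w_end = -0.5`) matching the teacher's off-diagonal entries is penalized, which is the "repulsive" part of the method's name.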
🏷️ Themes
Medical AI, Mobile Health, Knowledge Distillation
Original Source
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.05421 [cs.CV] (arXiv:2603.05421v1 for this version)
Title: MobileFetalCLIP: Selective Repulsive Knowledge Distillation for Mobile Fetal Ultrasound Analysis
Authors: Numan Saeed, Fadillah Adamsyah Maani, Mohammad Yaqub
Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
DOI: https://doi.org/10.48550/arXiv.2603.05421 (arXiv-issued via DataCite, pending registration)
Submitted: Thu, 5 Mar 2026 17:43:00 UTC (1,082 KB), from Numan Saeed
Code, models, and app: this https URL (project website: this http URL)