# Model Compression
Latest news articles tagged with "Model Compression". Follow the timeline of events, related topics, and entities.
Articles (6)
🇺🇸 TT-SEAL: TTD-Aware Selective Encryption for Adversarially-Robust and Low-Latency Edge AI
[USA]
arXiv:2602.22238v1 Announce Type: cross Abstract: Cloud-edge AI must jointly satisfy model compression and security under tight device budgets. While Tensor-Train Decomposition (TTD) shrinks on-devic...
Related: #Edge AI Security, #Efficient Encryption
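TT-SEAL's selective-encryption scheme sits behind the truncation above, but a minimal sketch of the standard TT-SVD procedure shows how Tensor-Train Decomposition shrinks a weight tensor into a chain of small cores. The `max_rank` budget and the toy 4-way tensor below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Factor a d-way tensor into a tensor train of small 3-way cores
    via sequential truncated SVDs (the standard TT-SVD procedure)."""
    shape = tensor.shape
    cores, r_prev = [], 1
    C = tensor.reshape(shape[0], -1)
    for k in range(len(shape) - 1):
        C = C.reshape(r_prev * shape[k], -1)
        U, S, Vt = np.linalg.svd(C, full_matrices=False)
        r = min(max_rank, len(S))              # truncate to the rank budget
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        C = S[:r, None] * Vt[:r]               # carry the remainder forward
        r_prev = r
    cores.append(C.reshape(r_prev, shape[-1], 1))
    return cores

# Toy example: a 256x256 weight matrix reshaped into a 4-way tensor.
# Reconstruction error depends on how low-rank the weights really are;
# the point here is the parameter count.
W = np.random.randn(16, 16, 16, 16)
cores = tt_svd(W, max_rank=8)
dense = W.size                                 # 65,536 parameters
tt = sum(c.size for c in cores)                # 2,304 with these ranks
print(f"dense: {dense}  tt: {tt}  ratio: {dense / tt:.1f}x")
```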
🇺🇸 Sink-Aware Pruning for Diffusion Language Models
[USA]
arXiv:2602.17664v1 Announce Type: cross Abstract: Diffusion Language Models (DLMs) incur high inference cost due to iterative denoising, motivating efficient pruning. Existing pruning heuristics larg...
Related: #Machine Learning, #Natural Language Processing, #Diffusion Models, #Inference Efficiency
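The sink-aware criterion itself is behind the truncation, but the magnitude-pruning baseline that such heuristics typically extend fits in a few lines. The function name and the 50% sparsity target are illustrative, not from the paper.

```python
import numpy as np

def magnitude_prune(weight, sparsity):
    """Zero the smallest-magnitude entries of a weight matrix:
    the classic one-shot pruning baseline."""
    k = int(weight.size * sparsity)            # number of weights to drop
    if k == 0:
        return weight.copy()
    # the k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(weight).ravel(), k - 1)[k - 1]
    return np.where(np.abs(weight) > threshold, weight, 0.0)

W = np.random.randn(512, 512)
W_sparse = magnitude_prune(W, sparsity=0.5)
print(f"kept {np.count_nonzero(W_sparse) / W.size:.0%} of weights")
```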
🇺🇸 Texo: Formula Recognition within 20M Parameters
[USA]
arXiv:2602.17189v1 Announce Type: new Abstract: In this paper we present Texo, a minimalist yet high-performance formula recognition model that contains only 20 million parameters. By attentive design...
Related: #Artificial Intelligence, #Computer Vision and Pattern Recognition, #Real-time Inference, #Formula Recognition
🇺🇸 Trust the uncertain teacher: distilling dark knowledge via calibrated uncertainty
[USA]
arXiv:2602.12687v1 Announce Type: cross Abstract: The core of knowledge distillation lies in transferring the teacher's rich 'dark knowledge': subtle probabilistic patterns that reveal how classes are...
Related: #Knowledge Distillation, #Uncertainty Quantification
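The paper's calibrated-uncertainty weighting is not visible past the truncation, but the temperature-scaled soft-target loss that distillation methods build on is easy to sketch. `T = 4.0` is an illustrative temperature, not a value from the paper.

```python
import numpy as np

def softmax(logits, T=1.0):
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)      # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on temperature-softened distributions.
    A higher T exposes more of the teacher's 'dark knowledge' in the
    relative probabilities assigned to the wrong classes."""
    p = softmax(teacher_logits, T)             # soft teacher targets
    q = softmax(student_logits, T)
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return (T ** 2) * kl.mean()                # T^2 keeps the gradient scale constant

teacher = np.random.randn(32, 10)              # batch of 32, 10 classes
student = np.random.randn(32, 10)
print(f"soft-target loss: {kd_loss(student, teacher):.4f}")
```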
🇺🇸 NanoFLUX: Distillation-Driven Compression of Large Text-to-Image Generation Models for Mobile Devices
[USA]
arXiv:2602.06879v1 Announce Type: cross Abstract: While large-scale text-to-image diffusion models continue to improve in visual quality, their increasing scale has widened the gap between state-of-t...
Related: #Artificial Intelligence, #Mobile Technology
🇺🇸 FastWhisper: Adaptive Self-knowledge Distillation for Real-time Automatic Speech Recognition
[USA]
arXiv:2601.19919v1 Announce Type: cross Abstract: Knowledge distillation is one of the most effective methods for model compression. Previous studies have focused on the student model effectively tra...
Related: #Artificial Intelligence, #Knowledge Distillation
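As a companion to the soft-target loss sketched earlier, the standard student objective blends hard-label cross-entropy with the softened teacher term. `alpha` and the temperature are illustrative hyperparameters; FastWhisper's adaptive self-distillation variant is not reproduced here.

```python
import numpy as np

def softmax(logits, T=1.0):
    z = logits / T - (logits / T).max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_objective(student_logits, teacher_logits, labels, alpha=0.5, T=4.0):
    """alpha * CE(hard labels) + (1 - alpha) * T^2 * KL(teacher || student)."""
    n = len(labels)
    # cross-entropy against the ground-truth labels
    ce = -np.log(softmax(student_logits)[np.arange(n), labels] + 1e-12).mean()
    # temperature-softened KL term against the teacher
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kd = (T ** 2) * np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1).mean()
    return alpha * ce + (1 - alpha) * kd

labels = np.random.randint(0, 10, size=32)
teacher = np.random.randn(32, 10)
student = np.random.randn(32, 10)
print(f"student loss: {distill_objective(student, teacher, labels):.4f}")
```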