Just KIDDIN: Knowledge Infusion and Distillation for Detection of INdecent Memes
#Knowledge Distillation #Large Visual Language Models #Hateful Memes #Sub-knowledge Graphs #Multimodal Toxicity Detection #AI Ethics #Graph-Based AI #Model Compression
📌 Key Takeaways
- Introduces a dual approach of knowledge distillation and infusion for meme toxicity classification.
- Utilizes Large Visual Language Models to transfer specialized knowledge to smaller, efficient classifiers.
- Extracts sub-knowledge graphs to encode relationships between textual and visual cues.
- Demonstrates improvements over baseline models in multimodal toxicity detection.
- Offers a scalable framework adaptable to various online content modalities.
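The distillation idea in the takeaways above can be sketched with the classic soft-label formulation (temperature-scaled KL divergence between a teacher's and a student's predictive distributions, following Hinton et al.). This is an illustrative sketch only; the paper's exact training objective for transferring LVLM knowledge is not reproduced in this summary, and all function names here are our own.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a list of raw logits."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, T=2.0):
    """Soft-label distillation loss: KL(teacher || student) at temperature T,
    scaled by T^2 so gradients stay comparable across temperatures."""
    p = softmax(teacher_logits, T)  # teacher's softened distribution
    q = softmax(student_logits, T)  # student's softened distribution
    return (T ** 2) * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

When the student matches the teacher exactly the loss is zero; the higher the temperature, the more the teacher's "dark knowledge" about non-target classes is exposed to the student.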
📖 Full Retelling
The paper presents a framework for improving toxicity detection in hateful memes. It combines knowledge distillation from Large Visual Language Models (LVLMs) with a knowledge-infusion strategy that extracts sub-knowledge graphs to capture complex cross-modal contextual connections between text and imagery. The work responds to the ongoing need for better tools in multimodal environments, where textual and visual elements jointly signal toxicity, and illustrates how combining large multimodal models with graph-based knowledge can boost detection performance.
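One way to picture the sub-knowledge-graph step described above is a bounded-hop expansion from seed entities detected in a meme's caption and image. The toy graph, entity names, and hop limit below are illustrative placeholders, not the paper's actual knowledge source or extraction procedure:

```python
from collections import deque

# Toy commonsense-style knowledge graph: node -> list of (relation, neighbor).
# All entities and relations here are made up for illustration.
KG = {
    "clown":   [("RelatedTo", "joke"), ("UsedFor", "mockery")],
    "joke":    [("RelatedTo", "humor")],
    "mockery": [("RelatedTo", "insult")],
    "insult":  [("IsA", "toxicity_cue")],
    "humor":   [],
}

def extract_subgraph(seeds, kg, max_hops=2):
    """Collect all edges reachable within max_hops of the seed entities
    (e.g., entities detected in a meme's text and visuals)."""
    edges, visited = [], set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        node, depth = frontier.popleft()
        if depth >= max_hops:
            continue
        for rel, nbr in kg.get(node, []):
            edges.append((node, rel, nbr))
            if nbr not in visited:
                visited.add(nbr)
                frontier.append((nbr, depth + 1))
    return edges
```

The resulting edge list is the kind of compact, relational context a classifier can be conditioned on, linking a benign surface cue ("clown") to a toxicity-relevant concept ("insult") that neither modality states explicitly.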
🏷️ Themes
Artificial Intelligence Ethics, Multimodal Machine Learning, Toxic Content Detection, Knowledge Graphs, Model Compression and Distillation
Original Source
arXiv:2411.12174v3 Announce Type: replace-cross
Abstract: Toxicity identification in online multimodal environments remains a challenging task due to the complexity of contextual connections across modalities (e.g., textual and visual). In this paper, we propose a novel framework that integrates Knowledge Distillation (KD) from Large Visual Language Models (LVLMs) and knowledge infusion to enhance the performance of toxicity detection in hateful memes. Our approach extracts sub-knowledge graphs […]
Read full article at source