BravenNow
Explainability-Aware Evaluation of Transfer Learning Models for IoT DDoS Detection Under Resource Constraints
| USA | technology | ✓ Verified - arxiv.org

#IoT DDoS detection #Transfer learning #Explainable AI #Convolutional neural networks #Resource constraints #DenseNet #MobileNet #Cybersecurity

📌 Key Takeaways

  • Nelly Elsayed evaluated seven pre-trained convolutional neural network architectures for IoT DDoS detection
  • The research focused on performance, reliability, computational efficiency, and interpretability
  • DenseNet169 showed the strongest reliability and interpretability alignment
  • MobileNetV3 provided effective latency-accuracy trade-off for fog-level deployment
  • The study emphasizes combining multiple criteria when selecting models for IoT security

📖 Full Retelling

On February 25, 2026, Nelly Elsayed published a research paper evaluating seven pre-trained convolutional neural network architectures for detecting Distributed Denial-of-Service (DDoS) attacks against Internet of Things (IoT) devices. The work addresses a gap in understanding how reliable and interpretable these models are under resource-constrained deployment conditions. The study presents an explainability-aware empirical analysis that integrates multiple evaluation criteria: performance metrics, reliability-oriented statistics (Matthews correlation coefficient, Youden Index, confidence intervals), latency and training-cost assessment, and interpretability evaluation using Grad-CAM and SHAP. Using the CICDDoS2019 dataset and an image-based traffic representation, the research shows how these models behave in realistic operational environments where computational resources are limited.
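The reliability-oriented statistics named above have simple closed forms. As a minimal illustrative sketch (the confusion counts below are toy numbers, not results from the paper), the binary Matthews correlation coefficient and Youden's J can be computed directly from a confusion matrix:

```python
import numpy as np

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from binary confusion counts."""
    num = tp * tn - fp * fn
    den = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return num / den if den else 0.0

def youden_j(tp, tn, fp, fn):
    """Youden's J = sensitivity + specificity - 1."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return sens + spec - 1.0

# Toy confusion counts for one attack class (illustrative only)
print(round(mcc(90, 85, 15, 10), 3))       # ≈ 0.751
print(round(youden_j(90, 85, 15, 10), 3))  # 0.75
```

Both metrics are less sensitive to class imbalance than raw accuracy, which is why they are often preferred for intrusion-detection evaluation.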

🏷️ Themes

Cybersecurity, Machine Learning, IoT Systems, Explainable AI

📚 Related People & Topics

Convolutional neural network

Type of artificial neural network

A convolutional neural network (CNN) is a type of feedforward neural network that learns features via filter (or kernel) optimization. This type of deep learning network has been applied to process and make predictions from many different types of data including text, images and audio. CNNs are the ...
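The "filter (or kernel) optimization" described above reduces to a sliding dot product. A minimal numpy sketch of the valid cross-correlation a CNN layer computes (all arrays below are toy examples):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: slide the kernel and take dot products."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A vertical-edge kernel responds only where intensity changes left-to-right
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
edge = np.array([[-1, 1],
                 [-1, 1]], dtype=float)
print(conv2d(image, edge))  # strong response in the middle column only
```

In a trained CNN the kernel values are learned rather than hand-set, but the operation is the same.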

Transfer learning

Machine learning technique

Transfer learning (TL) is a technique in machine learning (ML) in which knowledge learned from a task is re-used in order to boost performance on a related task. For example, for image classification, knowledge gained while learning to recognize cars could be applied when trying to recognize trucks....
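The reuse-of-learned-features idea can be sketched in a few lines. In this minimal, assumption-laden example, a fixed random projection stands in for a pretrained backbone (in the paper this role is played by ImageNet-pretrained CNNs such as DenseNet or MobileNet); only a new linear head is fit on the frozen features:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" backbone: a frozen feature map. Here a fixed random projection
# stands in for convolutional features learned on a source task.
W_frozen = rng.normal(size=(8, 32))

def features(x):
    return np.maximum(x @ W_frozen, 0.0)   # frozen ReLU features, never updated

# Target task: fit ONLY a new linear head on the frozen features (least squares)
X = rng.normal(size=(100, 8))
y = (X[:, 0] > 0).astype(float)            # toy binary labels
Phi = features(X)
head, *_ = np.linalg.lstsq(Phi, y, rcond=None)

preds = (Phi @ head > 0.5).astype(float)
acc = (preds == y).mean()
print(acc)                                  # training accuracy of the new head
```

Freezing the backbone and training only the head is the cheapest transfer-learning regime, which is one reason it matters under the resource constraints the paper studies.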


Resource slack

Level of availability of a resource

Resource slack, in the business and management literature, is the level of availability of a resource. Resource slack can be considered as the opposite of resource scarcity or resource constraints. The availability of resources can therefore be defined in terms of resource slack versus constraints, ...


Explainable artificial intelligence

AI whose outputs can be understood by humans

Within artificial intelligence (AI), explainable AI (XAI), generally overlapping with interpretable AI or explainable machine learning (XML), is a field of research that explores methods that provide humans with the ability of intellectual oversight over AI algorithms. The main focus is on the reaso...
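Grad-CAM and SHAP, used in the paper, both attribute a model's prediction to input regions. As a much simpler stand-in that conveys the same idea (this is occlusion sensitivity, not Grad-CAM or SHAP, and the tiny "model" below is invented for illustration), one can measure how much the score drops when each patch is masked:

```python
import numpy as np

def occlusion_map(image, score_fn, patch=2):
    """Attribution by occlusion: score drop when each patch is zeroed out."""
    base = score_fn(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i+patch, j:j+patch] = 0.0
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat

# Toy "model": score depends only on the top-left quadrant
def score_fn(img):
    return img[:2, :2].mean()

img = np.ones((4, 4))
print(occlusion_map(img, score_fn))  # only the top-left cell gets attribution
```

A heat map like this is what lets an analyst check whether a DDoS detector attends to plausible regions of the traffic image, which is the "compact, class-consistent attribution" property the paper evaluates.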



Original Source
Computer Science > Cryptography and Security
arXiv:2602.22488 [cs.CR] (submitted on 25 Feb 2026)

Title: Explainability-Aware Evaluation of Transfer Learning Models for IoT DDoS Detection Under Resource Constraints
Authors: Nelly Elsayed

Abstract: Distributed denial-of-service attacks threaten the availability of Internet of Things infrastructures, particularly under resource-constrained deployment conditions. Although transfer learning models have shown promising detection accuracy, their reliability, computational feasibility, and interpretability in operational environments remain insufficiently explored. This study presents an explainability-aware empirical evaluation of seven pre-trained convolutional neural network architectures for multi-class IoT DDoS detection using the CICDDoS2019 dataset and an image-based traffic representation. The analysis integrates performance metrics, reliability-oriented statistics (MCC, Youden Index, confidence intervals), latency and training cost assessment, and interpretability evaluation using Grad-CAM and SHAP. Results indicate that DenseNet and MobileNet-based architectures achieve strong detection performance while demonstrating superior reliability and compact, class-consistent attribution patterns. DenseNet169 offers the strongest reliability and interpretability alignment, whereas MobileNetV3 provides an effective latency-accuracy trade-off for fog-level deployment. The findings emphasize the importance of combining performance, reliability, and explainability criteria when selecting deep learning models for IoT DDoS detection.

Comments: 24 pages, under review
Subjects: Cryptography and Security (cs.CR); Artificial Intelligence (cs.AI)
Cite as: arXiv:2602.22488 [cs.CR] (arXiv:2602.22488v1 for this version)
DOI: https://doi.org/10.48550/arXiv.2602.22...

Source

arxiv.org
