EyeLayer: Integrating Human Attention Patterns into LLM-Based Code Summarization
#EyeLayer #code-summarization #large-language-models #eye-tracking #attention-patterns #software-engineering #AI-enhancement
📌 Key Takeaways
- EyeLayer integrates human eye-gaze patterns into LLMs for code summarization
- The approach uses Multimodal Gaussian Mixture to model developer attention patterns
- EyeLayer outperformed baseline models by up to 13.17% on BLEU-4 metric
- The method works across different LLM architectures and scales
- Human gaze patterns provide complementary signals that enhance AI understanding of code
📖 Full Retelling
Researchers Jiahao Zhang, Yifan Zhang, Kevin Leach, and Yu Huang introduced EyeLayer, an approach that integrates human eye-gaze patterns into large language models for improved code summarization. The paper was submitted to arXiv on February 25, 2026 and accepted at the IEEE/ACM International Conference on Program Comprehension (ICPC 2026), held April 12-13, 2026 in Rio de Janeiro. The work addresses an open question: can human expertise in code understanding guide and enhance the AI systems increasingly used for software comprehension and maintenance?

EyeLayer is a lightweight attention-augmentation module that models human attention during code reading with a Multimodal Gaussian Mixture, redistributing token embeddings according to learned parameters that capture where, and how intensively, developers focus while reading code. This design allows generalizable attention priors to be learned from eye-tracking data and incorporated into LLMs without disturbing their existing representations.

The researchers evaluated EyeLayer across model families of different scales and architectures, including LLaMA-3.2, Qwen3, and CodeBERT. It consistently outperformed strong fine-tuning baselines on standard metrics, achieving gains of up to 13.17% on BLEU-4. These results indicate that human gaze patterns encode complementary attention signals that sharpen the semantic focus of LLMs and transfer effectively across diverse models for code summarization.
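The mechanism described above can be sketched in a few lines. The exact parameterization in EyeLayer is not spelled out here, so the function names, the residual `alpha` blend, and the normalization below are illustrative assumptions: a mixture of Gaussians over token positions, with learned means μᵢ and variances σᵢ², produces a per-token attention prior that is then used to reweight token embeddings.

```python
import math

def gaussian_mixture_weights(num_tokens, mus, sigma2s, pis):
    """Attention prior over token positions from a Gaussian mixture.

    mus, sigma2s, pis stand in for learned parameters (means, variances,
    mixture coefficients); this is a hypothetical sketch, not the
    paper's exact formulation.
    """
    weights = []
    for t in range(num_tokens):
        # Sum the density of each Gaussian component at position t.
        w = sum(
            pi * math.exp(-(t - mu) ** 2 / (2 * s2)) / math.sqrt(2 * math.pi * s2)
            for mu, s2, pi in zip(mus, sigma2s, pis)
        )
        weights.append(w)
    # Normalize so the prior sums to 1 across the token sequence.
    total = sum(weights)
    return [w / total for w in weights]

def reweight_embeddings(embeddings, weights, alpha=0.5):
    """Blend original embeddings with gaze-weighted copies.

    The residual-style mix (alpha is an assumed hyperparameter) keeps the
    original representations intact, in the spirit of the paper's claim
    that EyeLayer avoids disturbing existing representations.
    """
    return [
        [(1 - alpha) * x + alpha * w * x for x in vec]
        for vec, w in zip(embeddings, weights)
    ]
```

For example, a two-component mixture with means near the start and middle of a function would concentrate the prior on the signature and the core loop, mimicking where developers' gaze tends to dwell.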
🏷️ Themes
AI Research, Software Engineering, Human-Computer Interaction
Original Source
Computer Science > Software Engineering
arXiv:2602.22368 [Submitted on 25 Feb 2026]
Title: EyeLayer: Integrating Human Attention Patterns into LLM-Based Code Summarization
Authors: Jiahao Zhang, Yifan Zhang, Kevin Leach, Yu Huang
Abstract: Code summarization is the task of generating natural language descriptions of source code, which is critical for software comprehension and maintenance. While large language models have achieved remarkable progress on this task, an open question remains: can human expertise in code understanding further guide and enhance these models? We propose EyeLayer, a lightweight attention-augmentation module that incorporates human eye-gaze patterns, as a proxy of human expertise, into LLM-based code summarization. EyeLayer models human attention during code reading via a Multimodal Gaussian Mixture, redistributing token embeddings based on learned parameters (\mu_i, \sigma_i^2) that capture where and how intensively developers focus. This design enables learning generalizable attention priors from eye-tracking data and incorporating them into LLMs seamlessly, without disturbing existing representations. We evaluate EyeLayer across diverse model families (i.e., LLaMA-3.2, Qwen3, and CodeBERT) covering different scales and architectures. EyeLayer consistently outperforms strong fine-tuning baselines across standard metrics, achieving gains of up to 13.17% on BLEU-4. These results demonstrate that human gaze patterns encode complementary attention signals that enhance the semantic focus of LLMs and transfer effectively across diverse models for code summarization.
Comments: Accepted at the 34th IEEE/ACM International Conference on Program Comprehension (ICPC 2026), April 12-13, 2026, Rio de Janeiro, Brazil Subjects: Software Engineering (cs.SE) ; Artificial Intelligence (cs.A...