The study investigates internal neural representations of cognitive complexity using Bloom’s Taxonomy as a hierarchical lens.
Activation vectors from different LLMs are probed to test whether Bloom levels (from basic recall to abstract synthesis) are linearly separable within the models' residual streams.
Linear classifiers achieve about 95% mean accuracy across all Bloom levels, suggesting that cognitive level is encoded in a linearly accessible subspace.
The model appears to resolve the cognitive difficulty of a prompt early in the forward pass, with separability increasing across layers.
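The probing setup described above can be sketched in a few lines. This is a hedged illustration, not the authors' code: the "activations" below are synthetic Gaussian clusters whose class signal grows with a `signal` parameter, standing in for residual-stream vectors that become more separable at deeper layers; the six classes stand in for the six Bloom levels.

```python
# Sketch of layer-wise linear probing (assumed setup, not the paper's code).
# A linear classifier is trained per "layer"; accuracy rises as the synthetic
# class signal strengthens, mimicking increasing separability across depth.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_per_class, dim, n_levels = 100, 64, 6  # six Bloom levels (assumption)

def synthetic_layer(signal):
    """Class means move apart as `signal` grows, mimicking deeper layers."""
    X, y = [], []
    for level in range(n_levels):
        mean = np.zeros(dim)
        mean[level] = signal  # class-specific direction in activation space
        X.append(rng.normal(mean, 1.0, (n_per_class, dim)))
        y.append(np.full(n_per_class, level))
    return np.vstack(X), np.concatenate(y)

accs = []
for layer, signal in enumerate([0.5, 1.5, 3.0]):  # "early, middle, late"
    X, y = synthetic_layer(signal)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    accs.append(probe.score(Xte, yte))
    print(f"layer {layer}: probe accuracy = {accs[-1]:.2f}")
```

In the actual study the feature vectors would come from the model's residual stream at each layer, with labels given by the Bloom level of the prompt; the pattern of probe accuracy over layers is what supports the "resolved early, increasingly separable" claim.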
📖 Full Retelling
Bianca Raimondi and Maurizio Gabbrielli, researchers in computer science, published a preprint titled *Mechanistic Interpretability of Cognitive Complexity in LLMs via Linear Probing using Bloom's Taxonomy* on 19 February 2026 on arXiv (cs.AI) to address the need for deeper evaluation frameworks beyond surface-level metrics in large language models.
🏷️ Themes
Mechanistic interpretability, Bloom’s Taxonomy, Large Language Models, Linear probing, Cognitive complexity
Deep Analysis
Why It Matters
The study shows that LLMs encode Bloom cognitive levels in a linearly separable subspace, and that this information is resolved early in the forward pass. This insight could improve model interpretability and guide safer deployment.
Context & Background
- Large language models are often treated as black boxes
- Bloom's Taxonomy provides a hierarchical framework for cognitive complexity
- Linear probing can reveal whether internal representations are linearly separable
What Happens Next
The study opens avenues for designing more interpretable models and for developing diagnostic tools that can assess cognitive load during inference. Future work may extend the approach to other hierarchical taxonomies and to larger model families.
Frequently Asked Questions
What does linear separability mean in this context?
It means that a simple linear classifier can distinguish between different cognitive levels encoded in the model's internal representations.
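A minimal, self-contained illustration of that definition (assumed toy data, unrelated to the paper's experiments): two well-separated clusters that a plain linear model classifies perfectly, which is exactly what "linearly separable" means.

```python
# Two Gaussian clusters far apart in 2D: a linear decision boundary
# (a straight line) separates them, so a Perceptron fits them perfectly.
import numpy as np
from sklearn.linear_model import Perceptron

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-3, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

clf = Perceptron(random_state=0).fit(X, y)
acc = clf.score(X, y)
print("training accuracy:", acc)  # near-perfect when classes are separable
```

In the paper's setting, the analogue is that activation vectors for prompts at different Bloom levels can be distinguished by such a linear boundary, without any nonlinear probe.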
How might this research impact practical applications?
By providing a clearer understanding of how models process prompts, developers could build more reliable systems and improve prompt design.
Original Source
Computer Science > Artificial Intelligence
arXiv:2602.17229 [cs.AI] (Submitted on 19 Feb 2026)

Title: Mechanistic Interpretability of Cognitive Complexity in LLMs via Linear Probing using Bloom's Taxonomy
Authors: Bianca Raimondi, Maurizio Gabbrielli

Abstract: The black-box nature of Large Language Models necessitates novel evaluation frameworks that transcend surface-level performance metrics. This study investigates the internal neural representations of cognitive complexity using Bloom's Taxonomy as a hierarchical lens. By analyzing high-dimensional activation vectors from different LLMs, we probe whether different cognitive levels, ranging from basic recall to abstract synthesis, are linearly separable within the model's residual streams. Our results demonstrate that linear classifiers achieve approximately 95% mean accuracy across all Bloom levels, providing strong evidence that cognitive level is encoded in a linearly accessible subspace of the model's representations. These findings provide evidence that the model resolves the cognitive difficulty of a prompt early in the forward pass, with representations becoming increasingly separable across layers.

Comments: Preprint. Under review
Subjects: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
DOI: https://doi.org/10.48550/arXiv.2602.17229 (arXiv-issued DOI via DataCite, pending registration)
Submission history: [v1] Thu, 19 Feb 2026 10:19:04 UTC (302 KB), from Bianca Raimondi