What Makes a Good Query? Measuring the Impact of Human-Confusing Linguistic Features on LLM Performance
#Large Language Models #Hallucinations #Query Features #Linguistic Complexity #Computational Linguistics #Model Accuracy #Query Optimization
📌 Key Takeaways
- LLM hallucinations are influenced by query structure, not only by model defects
- Deep clause nesting and underspecification increase hallucination risk
- Clear intention grounding and answerability reduce hallucination rates
- The research provides a framework for guided query rewriting to improve LLM performance
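As a purely illustrative sketch (not the paper's actual method), the query features named in the takeaways could be approximated with crude lexical heuristics: counting subordinating/relative markers as a proxy for clause nesting, and vague placeholder words as a proxy for underspecification. The word lists below are hypothetical examples, not drawn from the research.

```python
import re

# Hypothetical marker lists -- illustrative only, not from the paper.
SUBORDINATORS = {"that", "which", "who", "whose", "because",
                 "although", "if", "when", "while", "unless", "since"}
VAGUE_TERMS = {"stuff", "things", "something", "somehow", "etc"}

def _tokens(query: str) -> list[str]:
    """Lowercase word tokens of the query."""
    return re.findall(r"[a-z']+", query.lower())

def clause_nesting_score(query: str) -> int:
    """Crude proxy for clause nesting: count subordinating/relative markers."""
    return sum(1 for t in _tokens(query) if t in SUBORDINATORS)

def underspecification_score(query: str) -> int:
    """Crude proxy for underspecification: count vague placeholder terms."""
    return sum(1 for t in _tokens(query) if t in VAGUE_TERMS)

simple = "List the capital of France."
nested = "Explain why the thing that the model which you trained produced was wrong."

print(clause_nesting_score(simple), clause_nesting_score(nested))  # 0 2
print(underspecification_score("Fix the stuff with the things."))  # 2
```

A real implementation along these lines would use a syntactic parse to measure actual clause depth rather than surface markers, but even this lexical sketch shows how a guided-rewriting pipeline could flag risky queries before they reach the model.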
📖 Full Retelling
🏷️ Themes
Artificial Intelligence, Linguistic Analysis, Model Performance
📚 Related People & Topics
Hallucination
Perception that only seems real
A hallucination is a perception in the absence of an external stimulus that has the compelling sense of reality. They are distinguishable from several related phenomena, such as dreaming (REM sleep), which does not involve wakefulness; pseudohallucination, which does not mimic real perceptio...
Computational linguistics
Use of computational tools for the study of linguistics
Computational linguistics is an interdisciplinary field concerned with the computational modelling of natural language, as well as the study of appropriate computational approaches to linguistic questions. In general, computational linguistics draws upon linguistics, computer science, artificial int...
Large language model
Type of machine learning model
A large language model (LLM) is a language model trained with self-supervised machine learning on a vast amount of text, designed for natural language processing tasks, especially language generation. The largest and most capable LLMs are generative pre-trained transformers (GPTs) that provide the c...