Not All Queries Need Rewriting: When Prompt-Only LLM Refinement Helps and Hurts Dense Retrieval


#LLM #dense retrieval #query rewriting #prompt refinement #information retrieval #search optimization #query ambiguity

📌 Key Takeaways

  • LLM query rewriting is not always beneficial for dense retrieval systems.
  • Prompt-only refinement can improve retrieval for ambiguous or complex queries.
  • Simple or clear queries may see degraded performance from unnecessary rewriting.
  • The study identifies specific query types where LLM intervention is counterproductive.
  • Optimal retrieval strategies should selectively apply LLM-based query refinement.

📖 Full Retelling

arXiv:2603.13301v1 Announce Type: cross Abstract: Prompt-only, single-step LLM query rewriting, where a rewrite is generated from the query alone without retrieval feedback, is commonly used in production RAG pipelines, but its effect on dense retrieval is poorly understood. We present a systematic empirical study across three BEIR benchmarks, two dense retrievers, and multiple training configurations, and find strongly domain-dependent behavior: rewriting degrades nDCG@10 by 9.0 percent on FiQ
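The abstract reports results in nDCG@10, a standard ranking metric. A minimal sketch of how it is computed (the graded relevance values below are illustrative, not from the paper):

```python
import math

def dcg(relevances):
    # Discounted cumulative gain: graded relevance discounted by log2 of rank.
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

def ndcg_at_k(ranked_relevances, k=10):
    # Normalize the DCG of the system ranking by the DCG of the ideal ranking.
    ideal = sorted(ranked_relevances, reverse=True)
    ideal_dcg = dcg(ideal[:k])
    return dcg(ranked_relevances[:k]) / ideal_dcg if ideal_dcg > 0 else 0.0
```

A 9.0 percent drop in this metric, as reported for one benchmark, means the rewritten queries systematically push relevant documents further down the ranked list.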

🏷️ Themes

Information Retrieval, LLM Optimization


Deep Analysis

Why It Matters

This research addresses a practical efficiency problem in modern information retrieval. Dense retrieval underpins search engines, chatbots, and enterprise knowledge systems, and LLM-based query rewriting is now layered on top of it by default, but rewriting every query wastes compute and, as the study shows, can actively hurt accuracy in some domains. The findings matter to AI researchers, search engine developers, and organizations deploying retrieval-augmented generation systems: applying rewriting selectively can reduce cost and latency while maintaining, or improving, retrieval quality in real-world applications.

Context & Background

  • Dense retrieval systems use neural networks to map queries and documents into vector spaces for semantic similarity matching
  • Query rewriting with LLMs has become standard practice to improve retrieval by clarifying ambiguous queries and adding context
  • Previous research assumed all queries benefit from LLM rewriting despite the computational cost
  • Retrieval-augmented generation (RAG) systems combine dense retrieval with LLMs for knowledge-intensive tasks
  • Computational efficiency has become increasingly important as LLM deployment scales across industries
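The vector-space matching described in the first bullet can be sketched with toy embeddings (the vectors and the cosine-similarity scoring below are illustrative stand-ins for a real dense encoder, not anything from the paper):

```python
import math

def cosine(u, v):
    # Cosine similarity between two dense vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy 3-dimensional "embeddings" standing in for encoder outputs.
query_vec = [0.9, 0.1, 0.0]
docs = {
    "doc_a": [0.8, 0.2, 0.1],
    "doc_b": [0.0, 0.1, 0.9],
}

# Rank documents by similarity to the query vector.
ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
```

A rewrite changes the query text and therefore the query vector; the paper's finding is that this shift does not always move the vector closer to the relevant documents.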

What Happens Next

Researchers will likely develop automated classifiers that decide, per query, whether an LLM rewrite is worth applying or whether the original query should be sent to the retriever unchanged. We can expect new benchmarks comparing query refinement strategies across domains. Within 6-12 months, major search platforms may adopt selective rewriting along these lines to cut computational cost without sacrificing quality.
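A selective-rewriting router of the kind anticipated here could be sketched as follows (the heuristics, thresholds, and the `rewrite_with_llm` stub are all hypothetical, not from the paper):

```python
def rewrite_with_llm(query: str) -> str:
    # Stand-in for a prompt-only LLM rewrite (no retrieval feedback).
    return f"[expanded] {query}"

def needs_rewrite(query: str) -> bool:
    # Hypothetical heuristic: route only short or vague queries to the
    # rewriter; pass precise, well-specified queries through untouched.
    tokens = query.lower().split()
    vague_markers = {"stuff", "things", "it", "that", "something"}
    return len(tokens) < 3 or any(t in vague_markers for t in tokens)

def refine(query: str) -> str:
    # Dispatch: rewrite only when the heuristic flags the query.
    if needs_rewrite(query):
        return rewrite_with_llm(query)
    return query
```

A production system would likely replace the hand-written heuristic with a learned classifier, but the dispatch structure stays the same.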

Frequently Asked Questions

What is dense retrieval and why is it important?

Dense retrieval is an AI technique that converts text into numerical vectors to find semantically similar content, enabling more accurate search results than traditional keyword matching. It's crucial for modern search engines and AI assistants that need to understand user intent rather than just matching words.

How does query rewriting typically work with LLMs?

Query rewriting uses large language models to rephrase or expand user queries before retrieval, often adding context or clarifying ambiguous terms. This process helps the retrieval system better understand what information the user is actually seeking.
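A prompt-only, single-step rewrite of the kind studied here can be sketched as a plain prompt template (the wording is illustrative; the paper's actual prompts may differ):

```python
REWRITE_PROMPT = (
    "Rewrite the following search query so it is specific and unambiguous. "
    "Add likely context, but do not change the user's intent.\n"
    "Query: {query}\n"
    "Rewritten query:"
)

def build_rewrite_prompt(query: str) -> str:
    # Single-step and prompt-only: the template sees only the raw query,
    # with no retrieved documents or relevance feedback in the context.
    return REWRITE_PROMPT.format(query=query)
```

The defining property is what the prompt does not contain: no retrieval feedback, which is exactly the setting whose effect on dense retrieval the paper measures.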

What types of queries don't need rewriting according to this research?

The research suggests that clear, specific queries with well-defined information needs often do not benefit from LLM rewriting: simple factual questions, or queries that already use precise domain terminology, may perform as well or better when passed to the retriever unchanged.

How much computational cost could this approach save?

While exact savings depend on implementation, skipping unnecessary LLM rewrites could reduce processing time and computational resources significantly for systems handling millions of queries daily. This is particularly important for real-time applications and cost-sensitive deployments.
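A back-of-envelope estimate makes the scale concrete (every number below is illustrative; none comes from the paper):

```python
# Assumed traffic and cost figures for illustration only.
queries_per_day = 10_000_000
rewrite_latency_s = 0.5   # assumed LLM rewrite latency per query
skip_fraction = 0.6       # assumed share of queries that can skip rewriting

saved_llm_calls = int(queries_per_day * skip_fraction)
saved_compute_hours = saved_llm_calls * rewrite_latency_s / 3600
```

Under these assumptions, skipping rewrites for clear queries avoids millions of LLM calls per day, which is why selective routing is attractive for cost-sensitive deployments.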

Does this mean we should stop using LLMs for query improvement?

No, the research shows LLMs remain valuable for complex or ambiguous queries that need clarification. The key insight is to use LLMs selectively rather than applying them uniformly to all queries, optimizing the trade-off between cost and performance.


Source

arxiv.org
