BravenNow
Benchmarking Zero-Shot Reasoning Approaches for Error Detection in Solidity Smart Contracts



Original Source

Computer Science > Artificial Intelligence
arXiv:2603.13239 [Submitted on 17 Feb 2026]

Title: Benchmarking Zero-Shot Reasoning Approaches for Error Detection in Solidity Smart Contracts
Authors: Eduardo Sardenberg, Antonio José Grandson Busson, Daniel de Sousa Moraes, Sérgio Colcher

Abstract: Smart contracts play a central role in blockchain systems by encoding financial and operational logic, yet their susceptibility to subtle security flaws poses significant risks of financial loss and erosion of trust. LLMs create new opportunities for automating vulnerability detection, but the effectiveness of different prompting strategies and model choices in real-world contexts remains uncertain. This paper evaluates state-of-the-art LLMs on Solidity smart contract analysis using a balanced dataset of 400 contracts under two tasks: Error Detection, where the model performs binary classification to decide whether a contract is vulnerable, and Error Classification, where the model must assign the predicted issue to a specific vulnerability category. Models are evaluated using zero-shot prompting strategies: plain zero-shot, zero-shot Chain-of-Thought (CoT), and zero-shot Tree-of-Thought (ToT). In the Error Detection task, CoT and ToT substantially increase recall (often approaching 95–99%) but typically reduce precision, indicating a more sensitive decision regime with more false positives. In the Error Classification task, Claude 3 Opus attains the best Weighted F1-score (90.8) under the ToT prompt, followed closely by its CoT variant.
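The abstract names three prompting strategies but not the authors' exact prompt wording. As a minimal sketch only, the templates below illustrate what plain zero-shot, zero-shot CoT, and zero-shot ToT prompts for this task might look like; the contract (a classic reentrancy pattern, where state is updated after an external call) and all prompt text are hypothetical, not taken from the paper:

```python
# Hypothetical example contract: a reentrancy-style flaw (external call
# before the balance update), a common Solidity vulnerability category.
CONTRACT = """
pragma solidity ^0.8.0;
contract Wallet {
    mapping(address => uint256) public balances;
    function withdraw(uint256 amount) external {
        require(balances[msg.sender] >= amount);
        (bool ok, ) = msg.sender.call{value: amount}("");
        require(ok);
        balances[msg.sender] -= amount;  // state update after external call
    }
}
"""

# Plain zero-shot: ask for the verdict directly, no reasoning requested.
ZERO_SHOT = (
    "Is the following Solidity contract vulnerable? Answer YES or NO.\n"
    f"{CONTRACT}"
)

# Zero-shot Chain-of-Thought: elicit step-by-step reasoning before the verdict.
ZERO_SHOT_COT = (
    "Is the following Solidity contract vulnerable? "
    "Think step by step, then answer YES or NO.\n"
    f"{CONTRACT}"
)

# Zero-shot Tree-of-Thought: elicit several reasoning branches that are
# explored and pruned before converging on a verdict.
ZERO_SHOT_TOT = (
    "Imagine three independent security auditors. Each proposes one analysis "
    "step at a time, they compare notes, discard dead ends, and finally agree "
    "on a verdict: is the following contract vulnerable, YES or NO?\n"
    f"{CONTRACT}"
)
```

The CoT and ToT templates only add reasoning scaffolding around the same binary question, which is consistent with the abstract's finding that they shift the model toward a more sensitive regime (higher recall, more false positives).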
Subjects: Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.13239 [cs.AI] (or arXiv:2603.13239v1 [cs.AI] for this version)
DOI: https://doi.org/10.48550/arXiv.2603.13239 (arXiv-issued DOI via DataCite)
Submission history: From: Antonio Busson ...
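The headline numbers (recall near 95–99% for detection, a Weighted F1 of 90.8 for classification) rest on standard classification metrics. A self-contained sketch of how a weighted F1-score is computed from per-class precision and recall, weighting each class's F1 by its support; this is the standard definition, not the authors' evaluation code:

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Weighted F1: per-class F1 averaged with weights equal to class support."""
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for cls, n in support.items():
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        score += (n / total) * f1  # weight class F1 by its share of the data
    return score
```

The same per-class precision/recall terms explain the detection-task trade-off the abstract reports: prompts that push recall toward 99% tend to flag more safe contracts as vulnerable, inflating false positives and pulling precision down.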

Source: arxiv.org
