Transforming GenAI Policy to Prompting Instruction: An RCT of Scalable Prompting Interventions in a CS1 Course
#Generative AI #Prompting interventions #Randomized controlled trial #CS1 curriculum #Learning outcomes #AI tutoring #Academic performance
📌 Key Takeaways
- Randomized controlled trial with 979 CS1 students over a full semester.
- Interventions focused on teaching scalable prompting techniques to shift GenAI from a solution provider to a tutoring role.
- The study targets students' inability to distinguish mere task completion from genuine learning when using GenAI, a gap that has been linked to poorer exam performance.
- Results show the potential of prompting instruction to enhance learning outcomes and promote reflective AI use.
- This RCT is among the first large‑scale evaluations of prompting interventions in an introductory CS course.
📖 Full Retelling
A randomized controlled trial (RCT) with 979 first‑year computer science (CS1) students was conducted over a full academic semester to test scalable prompting interventions that reframe generative AI (GenAI) as a tutor rather than a solution provider. The study, published on February 26, 2026, addresses the problem that students often cannot distinguish mere task completion from genuine learning when using GenAI, a gap that has been linked to worse exam performance when AI use remains unreflective. The researchers implemented and evaluated prompting strategies that teach students to leverage GenAI for learning rather than for answers, aiming to improve academic outcomes while fostering reflective, responsible AI use.
🏷️ Themes
Generative AI in education, Prompt engineering and pedagogical design, Skill development for AI‑mediated learning, Randomized controlled trials in higher education research, Responsible AI use and reflective practice
Original Source
arXiv:2602.16033v1 Announce Type: cross
Abstract: Despite universal GenAI adoption, students cannot distinguish task performance from actual learning and lack skills to leverage AI for learning, leading to worse exam performance when AI use remains unreflective. Yet few interventions teaching students to prompt AI as a tutor rather than solution provider have been validated at scale through randomized controlled trials (RCTs). To bridge this gap, we conducted a semester-long RCT (N=979) with fo