Understanding Perceptions and Use of AI in K-12 Education Using a Nationally Representative Sample
Description
Schoolchildren are exposed to hundreds of digital tools each year, many of which are already driven by AI technologies. Parents and teachers must decide, at a rapid pace, how to incorporate these learning tools into daily life. Yet very little is known about the current use and perceptions of AI among these key stakeholders. This time-sensitive RAPID project will identify the opportunities and challenges that arise for parents, teachers, and youth regarding the use of these AI-driven technologies in K-12 education. Specifically, it will identify risks of generative AI in educational settings and explore opportunities for supporting future design and engagement, including supports for teacher training and regulation in educational contexts. Findings will have theoretical and practical relevance for education, child development, technology design, and policy. Results will also support policymakers as they work to establish guidelines and regulations to protect youth's privacy, safety, and well-being in the context of AI interactions. Ideally, this work will inform how AI might be used to close, rather than widen, existing disparities and learning gaps among youth. This proposal was received in response to the Dear Colleague Letter (DCL): Rapidly Accelerating Research on Artificial Intelligence in K-12 Education in Formal and Informal Settings (NSF 23-097) and funded by the Innovative Technology Experiences for Students and Teachers (ITEST) program, which supports projects that build understanding of the practices, program elements, contexts, and processes that contribute to increasing students' knowledge of and interest in science, technology, engineering, and mathematics (STEM) and information and communication technology (ICT) careers.
The study employs a large stratified random sample of parents, teachers, and youth across age, gender, socio-economic status, and geographic location. The probability-based, nationally representative sample will be drawn from the NORC AmeriSpeak panel. The survey includes both open-ended and closed-ended questions that focus on use of, perceptions of, and trust in AI systems. Questions will probe how youth engage with generative AI as well as more traditional forms of AI in education, such as personalized and adaptive learning. An experimental manipulation embedded within the survey will test how participants evaluate AI-powered versus non-AI-powered educational platforms with respect to usefulness, expertise, and trust. Qualitative interviews augment the survey findings, relying on remote participation to ensure broad geographic representation within a short time window. Research materials will be made available as part of an online toolkit, enabling multiple investigators to use them across their studies and pool the data for future analyses through an open science approach. This large-scale mixed-methods study will generate knowledge to directly inform policy guidelines regarding youth's safety, privacy, and well-being in AI interactions. It is also positioned to help build design guidelines for AI product developers that prioritize child well-being, learning, and development.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.