Resources included in these libraries were submitted by ITEST projects or STELAR and are relevant to the work of the NSF ITEST Program. PDFs and/or URLs for the original resource are included in the resource description whenever possible. In some cases, full-text publications are located behind publishers’ paywalls, and a fee or membership on the third-party site may be required for access. Permission for use must be requested through the publisher or author listed in each entry.
A Program Director's Guide to Evaluating STEM Education Programs: Lessons Learned from Local, State, and National Initiatives
Publication
In today's world of high accountability, strong evidence on intended outcomes is key to building the credibility and replicability of science, technology, engineering, and mathematics (STEM) programs. This primer is written for program directors/managers, educators, and others responsible for developing and implementing STEM programs. It provides evaluation guidelines and resources for program leaders who are implementing STEM programs in schools and community-based organizations, and it reduces "evaluation anxiety" for individuals who are not professional evaluators by providing guidelines for good evaluation.
How Can Multi-Site Evaluations Be Participatory?
Publication
Multi-site evaluations are becoming increasingly common in federal funding portfolios. Although much thought has been given to multi-site evaluation, there has been little emphasis on how it might interact with participatory evaluation. This paper therefore reviews several National Science Foundation educational multi-site evaluations to examine the extent to which these evaluations are participatory. Based on this examination, the paper proposes a model for implementing multi-site, participatory evaluation.
A Guide to Evaluation Primers
Publication
Produced by the Association for the Study and Development of Community (ASDC) for the Robert Wood Johnson Foundation, this document offers an orientation to handbooks and basic primers on evaluation. These resources are designed to meet the needs of the non-expert, explaining some of the central issues in evaluation and why they are important. The Guide itself helps organizations assess which evaluation primer might be most useful for their needs and what information to expect from each.
A Framework for Understanding and Improving Multisite Evaluations
Publication
Multisite evaluations are increasingly used by the federal government and large foundations. This chapter presents a framework for understanding the variations among multisite evaluations and for developing ways to improve them.
Lessons Learned About Science and Participation from Multisite Evaluations
Publication
Publicly funded multisite evaluations should help bring safe, effective interventions to the mental health and substance abuse treatment fields. Two principles, science-based practice and stakeholder participation, drive multisite evaluations of behavioral interventions. We examine the roles of these principles in the five programs described in this volume and draw lessons for future studies.
OERL Evaluation Reports for ITEST Projects
Publication
OERL, the Online Evaluation Resource Library, was developed for professionals seeking to design, conduct, document, or review project evaluations. The system collects and makes available evaluation plans, instruments, and reports for NSF projects, which Principal Investigators, project evaluators, and others outside the NSF community can use as examples when designing proposals and projects. OERL also includes professional development modules that can be used to better understand and utilize the materials made available. This entry collects OERL evaluation report resources for ITEST projects.
The ITEST Learning Resource Center's Online Evaluation Database: Examples from the Collection
Publication
The ITEST Learning Resource Center at EDC developed an online database of instruments created by ITEST project evaluators and researchers from 2003 to 2007. This 2007 article details the purpose and development of that database and highlights three instruments that represent the kinds of evaluation tools archived there. While the database is no longer available, the three instruments described in the article provide useful examples of project-developed evaluation tools. More information about the database and the instruments can be obtained by emailing stelar@edc.org.
Preparing Tomorrow's STEM Workforce through Exploration, Equity, and Engagement
Publication
Preparing Tomorrow's STEM Workforce through Exploration, Equity, and Engagement is a product of the previous ITEST Learning Resource Center (LRC) and includes a series of six articles (plus a preface) highlighting key themes and lessons learned in the first five years of the ITEST program.
Inclusion, Disabilities, and Informal Science Learning
Publication
Inclusion, Disabilities, and Informal Science Learning, a report by the CAISE Access Inquiry Group, offers a theoretical framework for thinking about inclusion of people with disabilities in informal science education (ISE), then reviews current practice in museums (broadly defined), in media and technology, and in youth and community programs.
Tech-Savvy: Educating Girls in the New Computer Age
Publication
This report provides recommendations for changing the way information technology is used, applied, and taught in the nation's classrooms in order to engage girls and women.