NSF DCL: Cybersecurity Education in the Age of Artificial Intelligence
The National Science Foundation (NSF) is announcing its intention to fund a small number of Early Concept Grants for Exploratory Research (EAGER) to encourage advances in cybersecurity education, an area supported by the Foundation's Secure and Trustworthy Cyberspace Education Designation (SaTC-EDU), CyberCorps®: Scholarships for Service, and Advanced Technological Education (ATE) programs.
EAGER is a mechanism to support exploratory work, in its early stages, on untested but potentially transformative research ideas or approaches. This work may be considered especially "high risk – high payoff" in the sense that it, for example, involves radically different approaches, applies new expertise, or engages novel disciplinary or interdisciplinary perspectives.
In particular, with this Dear Colleague Letter (DCL), we wish to alert you to our interest in using the EAGER mechanism to encourage new collaborations between the Artificial Intelligence (AI), cybersecurity, and education research communities.
The 2019 Federal Cybersecurity Research and Development Strategic Plan highlighted the mutual needs and benefits of AI and cybersecurity. AI techniques are expected to enhance cybersecurity by assisting human system managers with automated monitoring, analysis, and responses to adversarial attacks. Conversely, it is essential to guard AI technologies from unintended uses and hostile exploitation by leveraging cybersecurity practices. Research results at the intersection of AI and cybersecurity can lead to widespread changes in our understanding of the foundations of cybersecurity and, in turn, give rise to fundamentally new ways to motivate and educate students about cybersecurity in the age of AI.

Likewise, the summary report for a June 2019 technical workshop on "Artificial Intelligence and Cybersecurity: Opportunities and Challenges" noted how the interplay between AI, machine learning, and cybersecurity will continue to introduce new opportunities and challenges in the security of AI as well as AI for cybersecurity. Basic research in AI together with research on cybersecurity education might expand existing AI opportunities and resources in cybersecurity education and workforce development. Education efforts are needed to foster workforce knowledge and skills in applying AI expertise to cybersecurity as well as in building robust and trustworthy AI.

This DCL seeks to promote exploration of possible partnerships between AI researchers, cybersecurity researchers, and education researchers in order to inspire novel education and outreach efforts. Such collaborative efforts could also foster a robust workforce with integrated AI and cybersecurity competencies, and develop an informed public that understands the privacy, confidentiality, ethics, safety, and security implications of AI.
Opportunities for participation by undergraduate and graduate students and postdoctoral fellows, K-12 students, industry representatives, and others are encouraged. NSF welcomes proposals that include efforts to broaden participation of underrepresented groups (women, minorities, and persons with disabilities) in the development of the research and education agendas.
DEADLINES: Proposals will be accepted in two rounds. The deadline for the first round is midnight EDT on May 15, 2020; the deadline for the second round is midnight EDT on August 31, 2020.
For more information view the Dear Colleague Letter on NSF’s website: https://www.nsf.gov/pubs/2020/nsf20072/nsf20072.jsp