Risk Management
Policy recommendation 7, Commission on AI in Education
States should partner with school districts and postsecondary systems and institutions to ensure risk management policies have been revised to assess and reduce risks associated with AI.
To assess, mitigate and manage the risks associated with artificial intelligence, school districts and postsecondary institutions will need comprehensive risk management plans. In many cases, this will require only that current risk management policies be updated.
Risk-management components to consider:
Governance
To ensure that all AI systems in educational institutions are used responsibly, ethically and transparently, school districts and institutions should consider establishing an AI governance structure responsible for overseeing the deployment, monitoring and assessment of AI technologies.
Governance structures should include members from the various groups of users, such as teachers, faculty, staff, students and administrators. These bodies would be responsible for developing clear guidelines and acceptable use policies.
Data Privacy and Security
To protect student and personnel data and comply with privacy laws such as the Family Educational Rights and Privacy Act (FERPA), risk management plans should include strong data encryption, access control and regular security audits for AI systems that handle sensitive student data.
The plans should limit data collection to the information necessary for the AI’s intended purpose and incorporate strict data retention policies.
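As a purely illustrative sketch, the Python lines below show one way technical staff might encode such a data-minimization and retention check before records reach an AI tool. The field names and the one-year retention window are assumptions for illustration, not prescribed values.

# Illustrative sketch only: hypothetical data-minimization and retention check.
# Field names and the retention window are assumptions, not a standard.
from datetime import date, timedelta

RETENTION_DAYS = 365                                                  # assumed district retention window
ALLOWED_FIELDS = {"student_id", "assignment_score", "collected_on"}   # only the data the tool needs

def minimize_record(record: dict) -> dict:
    """Drop any fields the AI tool does not need for its intended purpose."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

def is_expired(record: dict, today=None) -> bool:
    """Flag records older than the retention window so they can be deleted."""
    today = today or date.today()
    collected = date.fromisoformat(record["collected_on"])
    return today - collected > timedelta(days=RETENTION_DAYS)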
School districts and postsecondary institutions should consider establishing a group or committee that reviews and approves all AI applications prior to purchase and implementation. The criteria for evaluating these applications might include educational relevance, technical reliability and alignment with school values and legal requirements.
Bias Mitigation
To mitigate the risk of human bias being carried into AI systems, school districts and postsecondary institutions should consider periodic audits of AI tools, particularly those affecting critical areas such as grading, student assessments and disciplinary actions.
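As one illustration of a check such an audit might run, the Python sketch below compares outcome rates for an AI grading tool across student groups and flags large gaps using the common four-fifths rule of thumb. The data shape and threshold are assumptions, and a real audit would pair richer fairness measures with human judgment.

# Illustrative sketch only: one simple disparity check a periodic bias audit might run.
from collections import defaultdict

def outcome_rates(records):
    """records: iterable of (group, favorable_outcome) pairs; returns the rate of favorable outcomes per group."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        favorable[group] += int(outcome)
    return {g: favorable[g] / totals[g] for g in totals}

def flag_disparity(rates, threshold=0.8):
    """Flag groups whose rate falls below threshold times the highest group's rate (four-fifths rule of thumb)."""
    top = max(rates.values())
    return [g for g, r in rates.items() if top and r / top < threshold]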
School districts and postsecondary institutions should consider establishing human review processes for students or parents who wish to appeal decisions made by AI applications.
Risk Training
To ensure that staff understand AI functionalities, potential risks and responsible use, school districts and postsecondary institutions should consider developing AI training that covers the ethical, technical and practical aspects of risk detection and mitigation. The objective should be to help staff identify potential issues such as AI bias or inappropriate data use.
School districts and postsecondary institutions should also ensure that staff are equipped with guidelines on AI-augmented learning strategies. These should help educators effectively integrate AI tools into their classes in ways that enhance learning while safeguarding students’ well-being.
Accountability and Oversight
To ensure accountability in the deployment and ongoing use of AI tools, school districts and postsecondary institutions should favor AI tools with traceable accountability features, monitor those tools' decision-making processes and maintain a clear line of responsibility for oversight and issue resolution.
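As a hypothetical sketch of what a traceability feature could record, the Python lines below show a minimal log entry that ties each AI-assisted decision to an accountable staff member. The fields are assumptions about what an institution might choose to capture, not a standard schema.

# Illustrative sketch only: a minimal log entry tying an AI-assisted decision to an accountable person.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIDecisionRecord:
    tool_name: str            # which AI application produced the output
    decision: str             # e.g., "essay scored 3 of 5"
    inputs_summary: str       # plain-language summary of the data the tool used
    responsible_staff: str    # the person accountable for oversight of this decision
    reviewed_by_human: bool   # whether a human confirmed or overrode the output
    timestamp: str = ""       # filled in when the entry is written

    def to_log_line(self) -> str:
        entry = asdict(self)
        entry["timestamp"] = entry["timestamp"] or datetime.now(timezone.utc).isoformat()
        return json.dumps(entry)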
School districts and postsecondary institutions should set up transparent reporting channels where students, parents and staff can report issues, request clarifications or submit appeals related to AI-driven decisions such as grading.
School districts and postsecondary institutions should also conduct regularly scheduled reviews to evaluate the performance of AI systems, the impact of AI on educational outcomes, and each AI application’s adherence to ethical and legal standards.
Transparency and Consent
School districts and postsecondary institutions should ensure clear lines of communication with students and parents about the use of AI tools, the data they collect and the potential risks involved.
School districts and postsecondary institutions should provide opt-out options for AI-driven data collection or decisions, especially in cases where data privacy or student agency (a student's ability to take an active role in their own learning) may be affected.
School districts and postsecondary institutions should also create public transparency reports that detail AI tool usage, risk management actions and impact evaluations.
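Building on the hypothetical decision log sketched above, the Python lines below show one way such logs could be rolled up into a simple public summary. The reported fields are assumptions about what a district might choose to publish, not a required format.

# Illustrative sketch only: rolling logged AI decisions into a simple transparency summary.
from collections import Counter

def transparency_summary(log_entries):
    """log_entries: iterable of dicts shaped like the AIDecisionRecord sketch above."""
    entries = list(log_entries)
    by_tool = Counter(e["tool_name"] for e in entries)
    human_reviewed = sum(1 for e in entries if e["reviewed_by_human"])
    return {
        "total_ai_decisions": len(entries),
        "decisions_per_tool": dict(by_tool),
        "share_human_reviewed": human_reviewed / len(entries) if entries else 0.0,
    }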
References
Akgun, S., & Greenhow, C. (2022). Artificial intelligence in education: Addressing ethical challenges in K-12 settings. AI and Ethics, 2(3), 431–440. https://doi.org/10.1007/s43681-021-00096-7
European Commission: Directorate-General for Communications Networks, Content and Technology. (2019). Ethics guidelines for trustworthy AI. Publications Office. https://data.europa.eu/doi/10.2759/346720
European Commission: Directorate-General for Education, Youth, Sport and Culture. (2022). Ethical guidelines on the use of artificial intelligence (AI) and data in teaching and learning for educators. Publications Office of the European Union. https://data.europa.eu/doi/10.2766/15375
Groza, A., & Mărginean, A. (2023). Brave new world: Artificial intelligence in teaching and learning. arXiv preprint arXiv:2310.06856. https://arxiv.org/abs/2310.06856
Li, Z., Dhruv, A., & Jain, V. (2024). Ethical considerations in the use of AI for higher education: A comprehensive guide. 2024 IEEE 18th International Conference on Semantic Computing (ICSC), 218–223. https://doi.org/10.1109/ICSC59802.2024.00041
National Institute of Standards and Technology. (2023). Artificial intelligence risk management framework (AI RMF 1.0) (NIST AI 100-1). U.S. Department of Commerce. https://doi.org/10.6028/NIST.AI.100-1
Schaeffer, D., Coombs, L., Luckett, J., Marin, M., & Olson, P. (2024). Risks of AI applications used in higher education. Electronic Journal of e-Learning, 22(6), 60–65. https://doi.org/10.34190/ejel.22.6.3457
Shibani, A., & Buckingham Shum, S. (2024). AI-assisted writing in education: Ecosystem risks and mitigations. arXiv preprint arXiv:2404.10281. https://arxiv.org/abs/2404.10281
U.S. Department of Education. (2021). Student privacy and AI: Guidelines for K-12 schools. Office of Educational Technology.
Zheng, X., & Chen, Y. (2024). Research on risk assessment and management strategies of applying artificial intelligence technology in educational institutions. Applied Mathematics and Nonlinear Sciences, 9(1), 1–19. https://doi.org/10.2478/amns-2024-2920