Section 1 - Risk identification and evaluation
I have developed an original classification system tailored for K–12 education, defining “unreasonable risks” as those that pose unacceptable threats to student safety, institutional credibility, or public trust. Risks are categorized into ethical (e.g., algorithmic bias), operational (e.g., system misuse), and systemic (e.g., governance failure) types, informed by the OECD AI Principles and the G7 Hiroshima AI Process Code of Conduct.
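To make the taxonomy above concrete, the three risk categories and the "unreasonable risk" flag could be represented as a simple risk register. This is an illustrative sketch only; the class names, fields, and example entries are hypothetical and not part of the framework itself.

```python
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    ETHICAL = "ethical"          # e.g., algorithmic bias
    OPERATIONAL = "operational"  # e.g., system misuse
    SYSTEMIC = "systemic"        # e.g., governance failure

@dataclass
class Risk:
    name: str
    category: RiskCategory
    # "Unreasonable" per the definition above: an unacceptable threat to
    # student safety, institutional credibility, or public trust.
    unreasonable: bool

# Hypothetical entries, for illustration only.
register = [
    Risk("Biased automated grading", RiskCategory.ETHICAL, True),
    Risk("Chatbot used to bypass content filters", RiskCategory.OPERATIONAL, True),
    Risk("No designated AI oversight role", RiskCategory.SYSTEMIC, False),
]

# Filter the register by category, e.g. to review all ethical risks.
ethical_risks = [r.name for r in register if r.category is RiskCategory.ETHICAL]
```

A structured register like this would let a school filter, audit, and report risks per category rather than tracking them in free-form notes.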
Although the framework has not yet been deployed, I have designed a lifecycle-based risk evaluation process, including pre-use rubrics, misuse scenario logs, and oversight workflows. These are intended to help institutions identify and monitor vulnerabilities, misuse, and emerging risks in school contexts before and after AI system deployment.
I have structured a red-teaming methodology focused on educational AI systems. This includes simulation of edge cases such as biased feedback loops, adversarial misuse, and unintended student interactions. These tests are intended to be implemented during pilot phases and reviewed by independent advisors.
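The misuse scenario logs mentioned above could take the shape of a simple structured record per red-team test, tracking whether an independent advisor has reviewed it. The record layout and example scenarios below are hypothetical, sketched under the assumption that each pilot-phase test is logged individually.

```python
from dataclasses import dataclass

@dataclass
class RedTeamScenario:
    scenario_id: str
    description: str
    category: str             # e.g., "biased feedback loop", "adversarial misuse"
    observed_outcome: str = ""
    reviewed_by_advisor: bool = False  # independent advisor sign-off

# Hypothetical log entries for a pilot phase.
log = [
    RedTeamScenario("RT-001", "Tutor reinforces an incorrect answer repeatedly",
                    "biased feedback loop"),
    RedTeamScenario("RT-002", "Student crafts prompts to extract exam answers",
                    "adversarial misuse"),
]

# Scenarios still awaiting independent review.
pending_review = [s.scenario_id for s in log if not s.reviewed_by_advisor]
```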
I have developed both qualitative tools (e.g., stakeholder interviews, perception audits) and quantitative methods (e.g., deviation scoring, impact thresholds). Reporting mechanisms are designed to be multilingual, anonymous, and inclusive. While no monetary incentives are in place yet, the framework encourages recognition-based disclosures via student and staff governance channels.
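One way the quantitative methods above could work in practice: compute a metric's deviation from a baseline and flag it for review when it crosses an impact threshold. The function names, the 15% threshold, and the example figures are all hypothetical assumptions for illustration, not values defined by the framework.

```python
def deviation_score(observed: float, baseline: float) -> float:
    """Relative deviation of an observed metric from its baseline value."""
    return abs(observed - baseline) / baseline

# Hypothetical impact threshold: deviations above 15% trigger review.
IMPACT_THRESHOLD = 0.15

def flag_for_review(observed: float, baseline: float) -> bool:
    """True if the deviation exceeds the impact threshold."""
    return deviation_score(observed, baseline) > IMPACT_THRESHOLD

# Example: pass rate for one student subgroup (0.72) vs. school-wide (0.85).
needs_review = flag_for_review(0.72, 0.85)
```

Pairing a simple numeric trigger like this with the qualitative tools (interviews, perception audits) keeps monitoring lightweight while still surfacing disparities for human judgment.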
Yes. I have consulted with international accreditation and AI policy experts during the development phase. The framework includes pathways for independent third-party reporting, along with secure intake protocols to ensure confidentiality and transparency.
Yes. The framework is explicitly aligned with the OECD AI Principles and the G7 Hiroshima AI Process Code of Conduct, and it is informed by best practices from UNESCO, NIST, and ISO/IEC standards. It is designed to translate these technical and policy frameworks into practical tools for school-level governance.
I am in the process of establishing cross-sector partnerships to pilot the framework. It is designed to support collaboration between school leaders, AI developers, policymakers, and accreditation bodies. Mitigation strategies will be co-developed during implementation, grounded in real institutional needs.
This is an original, independently developed governance framework authored by me, Timothy Kang. It is currently pre-deployment and was created to fill a policy gap in the application of international AI principles to school systems. I welcome collaboration with OECD and G7 stakeholders to pilot, refine, and scale this contribution.


























