Section 1 - Risk identification and evaluation
AI risks are classified based on their potential impact on security, safety, society, and human rights, as defined by the EU AI Act, with priority given to high-risk AI that could affect humans and introduce an unreasonable risk. The risk assessments and mitigating actions are documented in our AI Governance and GRC Tool.
Milestone has put in place processes to identify known and reasonably foreseeable risks to health, safety, and fundamental rights that could follow from the use of relevant AI systems throughout their lifecycle. We have developed policies to ensure high-quality training, validation, and testing datasets for relevant AI systems. Milestone has defined processes, described in our Responsible Development Policy, to carry out and document comprehensive risk and impact assessments from ideation through development, deployment, and usage. This includes assessing potential vulnerabilities, emerging risks, and misuse, as well as considering human rights and ethical impacts. Milestone is also a member of the AI Pact and already takes the high-risk AI requirements of the EU AI Act into account. Beyond the initial risk assessments performed throughout the development process, which cover foreseeable risks, Milestone also plans to continuously perform diverse testing measures to monitor the performance of the AI throughout the AI lifecycle. The ability to identify, monitor, and evaluate performance and risks throughout the AI lifecycle is limited by the extent to which Milestone controls the deployment and runtime environment. For environments outside Milestone's control, we depend on the HRDD process and on the assessments and reporting performed on these environments by our partners and customers.
As part of our planned MLOps process and CI/CD toolchain, our organization conducts testing measures to evaluate a system's fitness for deployment. These tests assess the accuracy, safety, security, and resilience of the systems against various metrics. As an AI Pact member, Milestone already plans to use the accuracy metrics from the EU AI Act to ensure early conformity.
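For illustration, the sketch below shows one way such a fitness-for-deployment gate could be wired into a CI/CD stage: classification metrics are computed on a held-out test set and compared against predefined thresholds, and the pipeline stage fails if any threshold is not met. All function names, thresholds, and data in the sketch are hypothetical and are not taken from Milestone's actual toolchain or from the EU AI Act.

    # Illustrative only: a minimal deployment-gate sketch. Metric names,
    # thresholds, and data are hypothetical placeholders.
    import sys

    def accuracy(y_true, y_pred):
        return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

    def precision(y_true, y_pred, positive=1):
        tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
        fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
        return tp / (tp + fp) if (tp + fp) else 0.0

    def recall(y_true, y_pred, positive=1):
        tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
        fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
        return tp / (tp + fn) if (tp + fn) else 0.0

    # Hypothetical thresholds a release must meet before deployment.
    THRESHOLDS = {"accuracy": 0.95, "precision": 0.90, "recall": 0.90}

    def deployment_gate(y_true, y_pred):
        results = {
            "accuracy": accuracy(y_true, y_pred),
            "precision": precision(y_true, y_pred),
            "recall": recall(y_true, y_pred),
        }
        for metric, value in results.items():
            print(f"{metric}: {value:.3f} (required >= {THRESHOLDS[metric]:.2f})")
        return [m for m, v in results.items() if v < THRESHOLDS[m]]

    if __name__ == "__main__":
        # Toy labels standing in for a held-out evaluation set.
        y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
        y_pred = [1, 0, 1, 0, 0, 1, 0, 0, 1, 1]
        failing = deployment_gate(y_true, y_pred)
        if failing:
            print("Deployment gate failed:", ", ".join(failing))
            sys.exit(1)  # non-zero exit blocks the CI/CD pipeline stage
        print("Deployment gate passed.")

In practice, the set of metrics and thresholds would be defined per system and risk class, and the gate would run automatically on every candidate release.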
Yes, qualitative metrics are used, with caveats such as the complexity of accurately predicting risks. Through our AI Governance and GRC Tool, we provide accessible reporting mechanisms for various stakeholders. We do not have an incentive program for responsible disclosure.
Incidents are reported through our Whistleblower and AI incident reporting webpage, and the reports are used to identify new risks as they come in. Incident reports are automatically generated in our Incident Management System and automatically assigned to a dedicated AI Incident Manager. Milestone does not yet use incident reports shared by other organisations; instead, we monitor the latest news about AI risks.
No, we do not leverage external independent expertise for risk identification and evaluation. We have an internal task force with deep understanding of and experience in AI for the identification, assessment, and evaluation of risks. We also have mechanisms in place to receive reports of risks, incidents, and vulnerabilities from third parties, both through due diligence processes and assessments and through our whistleblower function.
Yes, Milestone actively develops and uses international standards as an active member of Working Group 3 (WG3) under CEN-CENELEC JTC 21, which focuses on the engineering aspects of creating standards for the EU AI Act. This also gives us early insight into the EU AI Act requirements on risk management for AI systems as developed in Working Group 2, with clear and actionable guidance on how risk can be addressed and mitigated throughout the entire lifecycle of an AI system. These requirements feed into the coming EU AI Act standards from CEN-CENELEC, which in turn build on other standards such as EN IEC 31010:2019 Risk management - Risk assessment techniques, ISO/IEC TS 4213:2022 Information technology - Artificial intelligence - Assessment of machine learning classification performance, ISO/IEC Guide 51:2014 Safety aspects - Guidelines for their inclusion in standards, EN ISO/IEC 22989:2023 Information technology - Artificial intelligence - Artificial intelligence concepts and terminology (ISO/IEC 22989:2022), and EN ISO 9000:2015 Quality management systems - Fundamentals and vocabulary (ISO 9000:2015).
Milestone is directly involved in developing standards as described above, collaborating across the sector to assess and adopt risk-mitigating measures. Milestone also helps develop the General-Purpose AI Code of Practice led by the EU AI Office. The Code of Practice will detail the AI Act rules for providers of general-purpose AI models and of general-purpose AI models with systemic risk, including requirements for downstream providers. As part of this work, Milestone collaborates with relevant stakeholders across sectors to assess and adopt risk mitigation measures, in particular for the systemic risks of general-purpose AI; the process involves nearly 1,000 stakeholders as well as EU Member State representatives and European and international observers.
No answer provided