Section 1 - Risk identification and evaluation
At Infosys, AI systems are classified into four risk levels: Prohibited, High, Limited, and Minimal.
Prohibited AI Use Cases: AI systems that pose an unacceptable risk to fundamental rights and safety are prohibited.
High-Risk AI Use Cases: AI systems that significantly impact fundamental rights and safety but are not outright prohibited are considered high-risk. They are subject to stringent obligations before they can be deployed.
Limited Risk AI Use Cases: AI systems in the limited risk category must comply with basic transparency requirements, such as informing users that they are interacting with an AI system.
Minimal Risk AI Use Cases: Minimal risk AI systems pose little to no risk to users' rights or safety and are therefore subject to the least stringent regulatory requirements.
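As an illustration only, the four-tier classification above can be sketched as a simple pre-deployment gating check. The tier names come from the text; the enum, function names, and obligation strings are hypothetical and do not describe an actual Infosys system.

```python
from enum import Enum

class RiskLevel(Enum):
    """The four risk tiers described above (illustrative sketch, not a real API)."""
    PROHIBITED = "prohibited"   # unacceptable risk: may not be deployed
    HIGH = "high"               # stringent obligations before deployment
    LIMITED = "limited"         # basic transparency requirements
    MINIMAL = "minimal"         # least stringent regulatory requirements

def deployment_gate(level: RiskLevel) -> list[str]:
    """Return the (illustrative) obligations that must be met before deployment."""
    obligations = {
        RiskLevel.PROHIBITED: ["blocked: system may not be deployed"],
        RiskLevel.HIGH: ["impact and risk assessment",
                         "stringent pre-deployment obligations"],
        RiskLevel.LIMITED: ["inform users they are interacting with an AI system"],
        RiskLevel.MINIMAL: ["standard governance review"],
    }
    return obligations[level]

print(deployment_gate(RiskLevel.LIMITED))
```

A real implementation would attach jurisdiction-specific obligations to each tier; the point of the sketch is only that the tier decides the obligations, and the obligations gate deployment.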
The Infosys Responsible AI Office continuously scans the market for AI risks, vulnerabilities, incidents, and emerging misuse of the technology. Each vulnerability is assessed for exposure across projects, fixes are identified to mitigate it, and the findings are cascaded to all project teams exposed to it. In addition, every AI use case must undergo a mandatory impact and risk assessment, and the identified risks must be mitigated throughout the AI lifecycle.
Infosys evaluates AI systems through structured adversarial testing, including automated and manual red-teaming, to identify vulnerabilities and ensure ethical, secure, and resilient performance. Findings inform remediation and compliance checks under Responsible AI governance before production deployment, aligning with ISO 42001 and global safety standards.
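The red-teaming-then-remediation flow described above can be sketched as a minimal loop: adversarial probes are run against a model endpoint, unsafe responses are recorded as findings, and deployment is gated on no unresolved findings. All names here (`probe_model`, `is_unsafe`, `release_gate`) are hypothetical placeholders, not an actual Infosys tool.

```python
def is_unsafe(response: str) -> bool:
    # Toy check; a real evaluation would combine classifiers and human review.
    return "UNSAFE" in response

def probe_model(model, probes):
    """Run each adversarial probe and collect findings for remediation."""
    findings = []
    for probe in probes:
        response = model(probe)
        if is_unsafe(response):
            findings.append({"probe": probe, "response": response})
    return findings

def release_gate(findings) -> bool:
    """Deployment proceeds only when no unresolved findings remain."""
    return len(findings) == 0

# Usage with a stub model standing in for the system under test:
stub = lambda p: "UNSAFE output" if "jailbreak" in p else "safe output"
findings = probe_model(stub, ["summarize this report", "jailbreak attempt"])
print(release_gate(findings))  # False: one finding must be remediated first
```

The sketch captures only the control flow (probe, evaluate, gate); manual red-teaming and compliance review sit alongside the automated pass in the process the text describes.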
Yes. Infosys applies both quantitative and qualitative risk metrics during AI evaluations, with caveats wherever applicable. These caveats depend on the project type, the applicable jurisdiction and regulations, the AI use case, and the project context, and they are documented in governance reviews. Vulnerability and incident reporting channels are accessible to all project stakeholders. Additionally, Infosys encourages responsible disclosure through defined communication channels, ensuring transparency and continuous improvement under its Responsible AI framework, subject to client and project confidentiality clauses.
Yes. Infosys engages external independent experts for risk identification, assessment, and evaluation through audits and advisory reviews. Periodic external audits for ISO 42001 and ongoing benchmarking of our practices with industry-leading research analysts help us identify, evaluate, and mitigate risks. Our strong partner and vendor ecosystem also helps us proactively identify risks, incidents, and vulnerabilities introduced by third parties.
Yes, Infosys both adopts and contributes to international standards for AI risk management.
We are ISO/IEC 42001 certified, with processes aligned to global frameworks like the NIST AI Risk Management Framework, EU AI Act, and other sector-specific guidelines.
We also actively participate in shaping standards through consultations and working groups, including ISO, NIST, OWASP, the World Economic Forum (WEF), and the Coalition for Content Provenance and Authenticity (C2PA), ensuring our AI practices reflect global best practices in governance, transparency, and accountability.
Infosys collaborates with clients, industry bodies, and regulatory forums to assess systemic AI risks and implement mitigation measures. Our clients operate across sectors, and we participate in global industry bodies such as ISO, OWASP, and the Coalition for Content Provenance and Authenticity (C2PA) to identify risks. Regular assessment of AI use cases through our internal risk management process helps identify the systemic risks specific to each implementation, and that process, aligned with global frameworks such as the NIST AI Risk Management Framework and the EU AI Act, helps us address them.
No answer provided