Section 1 - Risk identification and evaluation
AI Risk Definition
KYP.ai adheres to the risk definitions formulated by the NIST AI RMF. This framework defines risk as a composite measure of the probability of an event occurring and the magnitude of its consequences.
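As a minimal sketch, this composite definition can be expressed as a simple scoring function. The 1–5 impact scale, the bounds checks, and the function name below are illustrative assumptions, not values or terminology prescribed by the NIST AI RMF:

```python
# Illustrative sketch of risk as a composite of likelihood and impact.
# The 1-5 impact scale is an assumed convention, not a NIST AI RMF value.

def risk_score(probability: float, impact: int) -> float:
    """Composite risk measure: probability of an event occurring (0.0-1.0)
    multiplied by the magnitude of its consequences (ordinal scale 1-5)."""
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must be in [0, 1]")
    if impact not in range(1, 6):
        raise ValueError("impact must be an integer from 1 to 5")
    return probability * impact

# Example: a likely event (70%) with severe consequences (4 of 5)
score = risk_score(0.7, 4)
print(score)  # 2.8
```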
Unreasonable Risks are those specific to AI that cannot be adequately controlled or mitigated through available technical or governance measures. These include:
• Risks to Fundamental Rights and Dignity
• Uncontrollable or Unpredictable Systems
• Discriminatory or Harmful Bias
• Deceptive or Manipulative Systems
• Dual-Use Technologies with Severe Misuse Potential
Predictable Risks are those specific to AI that can be adequately controlled or mitigated through available technical or governance measures. These include:
• Data-Related Risks
• Model-Related Risks
• Security-Related Risks
• Operational Risks
• Human-AI Interaction Risks
• Ethical and Social Risks
Each of these risk categories is further divided into sub-categories.
Both unreasonable and predictable risks are classified as threats (negative) and opportunities (positive), with appropriate risk mitigation plans such as:
• Threats: Eliminate, mitigate, outsource, or accept
• Opportunities: Explore, venture, observe, or quit
These risks are subject to bi-annual assessment and are included in the company risk register.
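One way to sketch such a register entry programmatically is shown below. The field names and validation logic mirror the threat/opportunity response plans listed above, but the schema itself is an illustrative assumption, not KYP.ai's actual risk register format:

```python
from dataclasses import dataclass
from enum import Enum

class RiskType(Enum):
    THREAT = "threat"            # negative risk
    OPPORTUNITY = "opportunity"  # positive risk

# Response strategies mirroring the mitigation/exploration plans above.
THREAT_RESPONSES = {"eliminate", "mitigate", "outsource", "accept"}
OPPORTUNITY_RESPONSES = {"explore", "venture", "observe", "quit"}

@dataclass
class RiskRegisterEntry:
    """Illustrative risk register entry (assumed schema)."""
    name: str
    risk_type: RiskType
    response: str

    def __post_init__(self):
        # A threat must use a threat response plan, and vice versa.
        allowed = (THREAT_RESPONSES if self.risk_type is RiskType.THREAT
                   else OPPORTUNITY_RESPONSES)
        if self.response not in allowed:
            raise ValueError(f"invalid response {self.response!r} "
                             f"for a {self.risk_type.value}")

entry = RiskRegisterEntry("Model drift in production",
                          RiskType.THREAT, "mitigate")
```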
Risk Identification
We emphasize preventive and proactive measures, integrating risk assessment into the planning and development stages. New software features are evaluated for potential vulnerabilities, incidents, emerging risks, and misuse.
Risk Evaluation
Risk evaluation is continuous throughout the software's lifecycle and the organization's operations, conducted bi-annually as outlined above.
• Beyond the development stage: Application penetration testing is performed on each new software version, using industry-standard tools such as Burp Suite, before it is released to customers.
• Additional penetration testing: Infrastructure components (e.g., cloud servers) hosting KYP.ai software undergo penetration testing before releasing a new cloud server environment to customers.
• Regular assessments: Regular penetration tests and asset configuration reviews are conducted on the components and tools used for source code development (internal KYP.ai infrastructure).
All of the above is supported by ongoing contributory testing with internal representatives, using the KYP.ai in-house solution (an exact copy of the software provided to customers) to test both the technical and non-technical impact on user and organizational experience.
• Delphi Method – used for GenAI Risk Assessment (KYP.ai Product) and Third-Party Risk Assessment (third-party AI vendors).
• KYP.ai Operational risks – quantitative evaluation matrix for negative (threats) and positive (opportunities) risks.
• Monte Carlo risk analysis – used to assess new KYP.ai Product features and to reassess existing ones.
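As a hedged sketch, a feature-level Monte Carlo risk analysis of this kind might look as follows. The event probability, loss range, trial count, and function name are illustrative assumptions, not actual KYP.ai parameters:

```python
import random

def monte_carlo_risk(prob: float, loss_low: float, loss_high: float,
                     n_trials: int = 10_000, seed: int = 42) -> float:
    """Estimate the expected loss for a single feature-level risk.

    Each trial: the risk event occurs with probability `prob`; if it
    occurs, the loss is drawn uniformly from [loss_low, loss_high].
    Returns the mean simulated loss. Illustrative sketch only.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    total = 0.0
    for _ in range(n_trials):
        if rng.random() < prob:
            total += rng.uniform(loss_low, loss_high)
    return total / n_trials

# Example: 10% chance per review cycle, loss between 10,000 and 50,000.
expected_loss = monte_carlo_risk(0.10, 10_000, 50_000)
# Analytic expectation for comparison: 0.10 * 30,000 = 3,000
```

In practice such simulations are often extended to aggregate many risks and to report percentile (e.g. worst-case) exposure rather than only the mean.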
Yes, at different levels:
• external consultation with privacy lawyers.
• external audits – EU GDPR legal audits in 2023 & 2024.
• external audit – SOC 2 attestation in 2024 & 2025.
• targeted for 2025 & 2026 – external technical validation.
Yes:
• contribution to the pilot reporting framework on the Code of Conduct in 2024 (OECD.AI).
• EDPB stakeholder meetings on the EU GDPR and AI – 2024.
Further strengthening of this presence is expected in 2025 and 2026.
Use of international technical standards:
• KYP.ai single control framework – one control addressing the requirements of multiple frameworks, standards, and laws, such as SOC 2, ISO 42001, ISO 27001, HIPAA, EU GDPR, US privacy laws, PCI DSS, NIST AI RMF, DORA, ITIL, and COBIT 5.
• ongoing feedback from customers.
• regular assessment of applicable sector-specific regulations and frameworks.
• product risks – regular GenAI risk reviews.
• bi-annual operational risk reviews with Heads of Business Units, with further re-assessment by the Founders Team.
• risk categories are divided into threats (negative) and opportunities (positive), with an appropriate risk mitigation/exploration plan.
• engage in active observation and research on the relevant topic, incorporating feedback from competent authorities and international organizations, such as think tanks.
Supplementary information re: section 1d above (reports):
d. Does your organization use incident reports, including reports shared by other organizations, to help identify risks?
• Reports received from vendors providing cloud infrastructure for SaaS services through configured alerts.
• Reports received from the source code producer (Java).
• Monitoring of AI incidents via sources such as the OECD AI Incidents Monitor (AIM), social media, data protection authorities, and other competent authorities, including recommendations for remediation.
• Active participation in workshops and seminars related to AI technology development.
• Proactive engagement in forums and social media discussions about AI risks, and involvement in global projects addressing these topics.