Section 1 - Risk identification and evaluation
We define and classify risks based on established guidelines such as the AI Guidelines for Business and QA4AI.
We have developed our AI Quality Guidelines for AI system development and implementation based on our internal AI governance guidelines. By following these guidelines, we ensure that we understand the implementation status of each project.
We are considering implementing red team testing for our LLM products.
Yes, we use the metrics provided in the AI Guidelines for Business and carefully implement them throughout our product development process.
We also provide a contact point for users, enabling us to gather diverse opinions.
We do not have such an incentive program.
Red team testing will be conducted by an external organization.
There is a dedicated contact point on our website for reporting security and privacy issues.
We use multiple references, such as the AI Guidelines for Business, QA4AI, and the OWASP Top 10 for LLM Applications.
We have established an AI Governance Promotion team as part of our company-wide risk management initiative led by our CEO.
No answer provided