Section 1 - Risk identification and evaluation
We adopt a risk-based approach to AI governance and classify risks into four levels: Prohibited, High, Mid, and Low.
Our risk definitions take into account the social and business impact should a risk materialize, and are formulated so that employees can classify risks easily and unambiguously.
Specifically, we evaluate risks along two axes: first, the categories delineated by the EU AI Act together with Japan's high-risk sectors (government, finance, energy, transportation, traffic, telecommunications, broadcasting, and healthcare); and second, our own additional criteria, such as user scale and revenue.
*High-risk sectors in Japan are identified based on the 'AI Guidelines for Business' and other codes of conduct published by the Japanese government to ensure compliance and promote the use of AI, as outlined in this draft document: https://www8.cao.go.jp/cstp/ai/ai_senryaku/6kai/13rikoukakuho.pdf
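For illustration only, the two-axis classification above could be sketched in code as follows. The sector list mirrors the text, while the function name `classify_risk` and the user/revenue thresholds are hypothetical placeholders rather than our actual criteria.

```python
from enum import Enum

class RiskLevel(Enum):
    PROHIBITED = "Prohibited"
    HIGH = "High"
    MID = "Mid"
    LOW = "Low"

# Sector list taken from the text above; the real classification follows
# the EU AI Act categories and Japan's 'AI Guidelines for Business'.
HIGH_RISK_SECTORS = {
    "government", "finance", "energy", "transportation", "traffic",
    "telecommunications", "broadcasting", "healthcare",
}

def classify_risk(sector: str,
                  prohibited_under_eu_ai_act: bool,
                  high_risk_under_eu_ai_act: bool,
                  projected_users: int,
                  projected_annual_revenue_jpy: int) -> RiskLevel:
    """Two-axis classification: regulatory category x business scale.

    The numeric thresholds below are illustrative placeholders only.
    """
    # Axis 1: categories delineated by the EU AI Act and domestic high-risk sectors.
    if prohibited_under_eu_ai_act:
        return RiskLevel.PROHIBITED
    if high_risk_under_eu_ai_act or sector.lower() in HIGH_RISK_SECTORS:
        return RiskLevel.HIGH
    # Axis 2: additional criteria such as user scale and revenue
    # (evaluated on future projections, not current figures).
    if projected_users >= 1_000_000 or projected_annual_revenue_jpy >= 100_000_000:
        return RiskLevel.MID
    return RiskLevel.LOW
```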
We perform distinct risk assessment and governance tasks for the risk levels classified above, as follows:
・During the planning stage, we discuss and identify potential risks using a checklist.
・We evaluate the severity and impact of risks using a risk assessment framework (a minimal sketch follows this list).
・Before release, we conduct vulnerability assessments, submit checklists, and hold review meetings involving departments related to data, legal, intellectual property, and AI ethics/governance.
・For high-risk cases, in addition to the checklist, we conduct review meetings including executives.
・We conduct regular reviews and monitoring (for high-risk cases in particular, review meetings are held every few months).
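The following is a minimal sketch of the kind of severity-and-impact scoring such a framework might use; the 1-5 scales, the multiplicative score, and the band boundaries are assumptions made for this example, not the framework itself.

```python
# Illustrative severity x impact matrix; the 1-5 scales and score bands
# below are assumptions for this sketch, not our actual framework.
def assess_risk(severity: int, impact: int) -> str:
    """Map a severity/impact pair (each rated 1-5) to a review track."""
    if not (1 <= severity <= 5 and 1 <= impact <= 5):
        raise ValueError("severity and impact must each be in the range 1-5")
    score = severity * impact
    if score >= 15:
        return "high: executive review meeting required"
    if score >= 8:
        return "mid: cross-departmental review meeting"
    return "low: checklist review only"

# Example: a moderately severe risk with wide impact lands in the high band.
print(assess_risk(severity=3, impact=5))  # -> "high: executive review meeting required"
```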
When we provide services commercially, we apply a rule that mandates passing through quality assurance processes; if a service does not meet the prescribed standards, we decide not to release it. When conducting AI red-teaming, we operate a lifecycle that feeds the red-teaming evaluation results back into post-training (a hypothetical sketch of this gate and feedback loop follows).
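In the sketch below, the threshold `QA_PASS_THRESHOLD`, the `ReleaseCandidate` structure, and the `release_gate` function are illustrative stand-ins for internal processes, under the assumption that QA outcomes can be summarized as a single score.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: the QA threshold, the finding format, and the
# post-training queue stand in for internal processes not detailed here.
QA_PASS_THRESHOLD = 0.95  # placeholder for the 'prescribed standards'

@dataclass
class ReleaseCandidate:
    name: str
    qa_score: float                      # aggregate QA result, 0.0-1.0
    red_team_findings: list[str] = field(default_factory=list)

def release_gate(candidate: ReleaseCandidate, post_training_queue: list[str]) -> bool:
    """Return True only if the candidate may be released."""
    if candidate.qa_score < QA_PASS_THRESHOLD:
        return False  # below the prescribed standards: do not release
    # Red-teaming evaluation results feed back into post-training
    # for the next iteration of the lifecycle.
    post_training_queue.extend(candidate.red_team_findings)
    return True

# Usage: a passing candidate is released, and its red-team findings are
# queued for incorporation into post-training.
queue: list[str] = []
candidate = ReleaseCandidate("chat-service-v2", qa_score=0.97,
                             red_team_findings=["prompt-injection bypass"])
assert release_gate(candidate, queue) and queue == ["prompt-injection bypass"]
```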
As noted above, we evaluate risks along two axes: the EU AI Act and Japan's high-risk sectors (such as government, finance, energy, transportation, telecommunications, broadcasting, and healthcare) on one axis, and our own additional criteria, including user numbers and revenue scale, on the other. Key points in our evaluations are that we consider future projections rather than current figures, we provide guidelines to field personnel so that risk assessments are applied uniformly, and we have certain risks re-evaluated by a central organization. High-risk items require confirmation from executive-level personnel and, where necessary, expert opinions from lawyers.
Regarding vulnerability and incident reporting mechanisms, we communicate the company's incident impact assessment criteria, reporting rules, and reporting flows to internal stakeholders and to both internal and external executives, ensuring they remain accessible.
Additionally, we do not have an incentive program in place.
In April 2024, we established an AI Ethics Committee consisting of internal members and external experts to create a mechanism for receiving insights and information on risks, incidents, and vulnerabilities from third parties. Although most judgments can be made internally, we consult specialists, such as lawyers, when specialized expertise is required.
For risk management evaluation, we conduct verifications in line with the ISO 31000 (JIS Q 31000) risk management system and its processes. For AI risk checks, we collaborate with external organizations to develop checklists and use them for risk assessments. Additionally, we establish mechanisms to receive reports on risks, incidents, or vulnerabilities from third parties by collaborating with threat intelligence vendors and industry voluntary organizations. When external parties discover vulnerabilities, the SoftBank CSIRT accepts the information, and we handle the response with the relevant departments and stakeholders.
[Reference: https://www.softbank.jp/en/corp/aboutus/governance/security/cooperation/]
We regularly report our efforts toward the development of international technical standards and best practices to the relevant ministries and organizations, and these contributions are reflected in the AI Guidelines for Business, Japan's unified guideline for AI governance. In generative AI development, we ensure comprehensive risk coverage in line with research by Wang et al. (EACL Findings 2023) and Röttger et al. (NAACL 2024), which forms a foundation for developing our safety evaluations.
[References: Wang et al. (EACL Findings 2023): https://arxiv.org/abs/2308.13387; Röttger et al. (NAACL 2024): https://arxiv.org/abs/2308.01263]
We convene review meetings involving departments related to security, data, legal, intellectual property, and AI ethics/governance for all AI development and service offerings. Additionally, in April 2024, we established an AI Ethics Committee that includes external experts to integrate diverse perspectives, ensuring objectivity in our internal rules and the handling of sampled cases.
The Risk Management Committee consists of the President, Vice President, auditors, and heads of relevant departments from various sectors. This committee determines the significance of risks, assigns responsibility for managing them, and issues directives for risk mitigation measures.