Section 1 - Risk identification and evaluation
Fujitsu is focusing on ethical biases and vulnerability risks, and as it actively promotes research and technological development initiatives, it is advancing the definition and classification of AI-related risks.Specifically, Fujitsu’s "AI Ethics Risk Comprehension Technology", which helps understand AI-caused incidents according to different scenarios, defines and classifies AI-related risks in accordance with the principles and requirements of the European Ethics Guidelines for Trustworthy AI.
"AI Ethics Risk Comprehension Technology"
https://www.fujitsu.com/global/about/research/article/202304-aiethics-risk-comprehension.html
Fujitsu implements the following practices to identify and assess the aforementioned risks throughout the AI lifecycle.
First, we have "AI Ethics Risk Comprehension Technology," which helps us understand AI-caused incidents within the specific contexts in which they occur. This technology enables detailed analysis of AI usage and allows for the early detection of ethical biases and potential problems.
Next, we have the "LLM Vulnerability Scanner." This scanner is equipped with multi-AI agent security technology that supports proactive measures against vulnerabilities and emerging threats, enabling continuous risk monitoring not only before AI model deployment but also during the operational phase.
"AI Ethics Risk Comprehension Technology"
https://www.fujitsu.com/global/about/research/article/202304-aiethics-risk-comprehension.html
"LLM Vulnerability Scanner"
https://www.fujitsu.com/global/about/resources/news/press-releases/2024/1212-01.html
Fujitsu aims to provide a safe and reliable AI system by rigorously evaluating the compatibility of models and systems through the following tests included in the Generated AI Security Enhancement Technology developed in collaboration with Ben Gurion University.
First, the LLM Vulnerability Scanner automatically checks security resistance with high completeness. It addresses more than 7,700 industry-leading, up-to-date vulnerabilities for a variety of vulnerabilities known to exist in Generative AI. This helps identify potential vulnerabilities that are often overlooked during development and improves the overall robustness of the system.
Next is the LLM Guardrail, which automatically defends and mitigates attacks. It validates that the model does not produce inappropriate responses or harmful content, based on attack scenarios that assume a real production environment. In this way, through attack simulations such as red teaming, the behavior of the model is analyzed in detail to ensure safety.
“Fujitsu develops world’s first multi-AI agent security technology to protect against vulnerabilities and new threats, Collaboration among AI agents specialized in security with skills and knowledges of attacks and protection”
https://www.fujitsu.com/global/about/resources/news/press-releases/2024/1212-01.html
Fujitsu's internal rules for information security stipulate that employees must report risks and incidents. Fujitsu has also established group-wide risk management rules for reporting incidents. In addition, Fujitsu is open to receive voices from external security researchers and Security information provider about product vulnerabilities.
https://www.fujitsu.com/global/about/csr/security/
https://www.fujitsu.com/global/about/csr/riskmanagement/
Fujitsu leverages the knowledge of external experts, such as AISI, to identify and assess risks.
As a mechanism for receiving reports of risks, incidents, or vulnerabilities from third parties, we have an inquiry form for products and services on our website. We also respond to inquiries from the media, investors, and the general public through our reporting hotline.
https://www.fujitsu.com/global/about/csr/compliance/
Fujitsu contributes to the establishment of international standards and best practices, with notable contributions including:
i. Two members appointed to the AI4People Institute Scientific Committee (from FY23 to the present), where they contributed to the drafting and publication of white papers.
ii. Three members acting as members of CEN and CENELEC's JTC 21 (Artificial Intelligence) committee. One member also serving as an editor for ISO/IEC JTC1 SC42, contributing to the development of key standards such as ISO 24030 (AI Use Cases), ISO 42001 (AI Risk Management Systems), and others.
ⅲ.Fujitsu has provided the Italy-based startup with AI Trust technologies, consisting of five core tools from its Fujitsu Kozuchi AI service. These technologies will enable AKOS’s AI governance platform AKOS HUB, to offer EU AI Act compliance, risk management and general AI governance services and solutions to enterprise customers.
https://www.fujitsu.com/global/about/resources/news/press-releases/2025/0418-01.html
Fujitsu is incorporating controls, including AI risk assessments, into its global-group-wide quality assurance processes described in the next section as part of its AI risk mitigation measures, and cooperate with relevant parties across divisions.
No answer provided


























