Section 1 - Risk identification and evaluation
In the Principles guiding the ethical use of AI in Social Innovation Business (hereinafter, “Guiding Principles for the Ethical Use of AI”) of the Hitachi Group (hereinafter, “We”), Standards of conduct are defined across three phases: Planning, Social Implementation, and Maintenance and Management. In addition, seven Items to be addressed across all phases are stipulated: Safety; Privacy; Fairness, Equality and Prevention of Discrimination; Proper and Responsible Development and Use; Transparency, Explainability and Accountability; Security; and Compliance. Risks, including unreasonable risks, are defined from the perspective of ensuring that these Items are properly addressed. The degree of each risk is also assessed, and risks that affect human life or fundamental human rights, or that influence people’s emotions and thoughts, are classified as high risk.
・Principles guiding the ethical use of AI in Social Innovation Business
https://www.hitachi.co.jp/products/it/lumada/about/ai/ldsl/document/ai_document_en.pdf
We have established processes for risk management throughout the entire lifecycle—Planning, Social Implementation, and Maintenance and Management—as set forth in the Standards of conduct defined by the Guiding Principles for the Ethical Use of AI. These processes are integrated into business operations and include identifying risks, including vulnerabilities; assessing the degree of risk; and mitigating risks through appropriate countermeasures. For incidents, processes are defined covering detection, information sharing, countermeasures, and prevention of recurrence.
We do not develop or provide in-house LLMs, and therefore do not conduct third-party assessments of LLMs. In cases where we develop or provide AI-based systems, quality verification by a third-party organization (independent from the development process) and customer acceptance testing are conducted.
We conduct both quantitative and qualitative assessments of risks associated with the use of AI, and implement countermeasures based on these assessments. In particular, quantitative evaluation metrics are established for aspects such as quality, accuracy, and bias.
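As an illustrative sketch only—not a description of our production tooling—quantitative metrics of the kind referred to above, such as classification accuracy and a simple group-fairness measure (demographic parity difference), could be computed as follows. The function names, data, and group labels are hypothetical:

```python
# Hypothetical sketch: two quantitative metrics for a binary classifier.
# All names and data below are illustrative, not from our actual systems.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground-truth labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def demographic_parity_difference(y_pred, groups):
    """Absolute gap in positive-prediction rates between two groups."""
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(y_pred, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    low, high = sorted(rates.values())
    return high - low

# Toy data: eight samples, four in group "A" and four in group "B".
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(accuracy(y_true, y_pred))                       # 0.875
print(demographic_parity_difference(y_pred, groups))  # 0.25
```

A larger gap in the second metric indicates that one group receives positive predictions at a higher rate than the other, which is one common signal used to flag potential bias for further review.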
A process has been established for reporting AI-related vulnerabilities and incidents to the AI Supervisory Committee, and critical cases are shared company-wide through the Information Infrastructure Division and the organization responsible for overseeing AI governance. These established channels also make the reporting mechanisms accessible to a diverse set of stakeholders. While we do not operate an incentive program for disclosing risks, incidents, and vulnerabilities, disclosures are made whenever legally required.
We leverage the expertise of external experts, for example by inviting external advisors to the AI Supervisory Committee.
In addition, a contact point is publicly available on our website to receive reports from third parties regarding risks, incidents, or vulnerabilities.
We have appointed three experts to ISO/IEC JTC 1/SC 42 Artificial Intelligence to contribute to the development of international standards. For example, in ISO/IEC TR 5469 Functional safety and AI systems, we helped shape the core concept of functions that ensure the safety of AI-controlled equipment, and as a best practice for implementing this concept, we have developed plant control technologies utilizing AI. Furthermore, for foundational AI standards such as ISO/IEC 22989 Artificial intelligence concepts and terminology and ISO/IEC 42001 AI management system, we have been involved not only in international standardization but also in developing the corresponding national standards, thereby promoting the adoption of international standards.
To appropriately address systemic risks, we conduct research on technical risks and monitor related trends, led primarily by the R&D division. These insights are shared with the Quality Assurance Division and other relevant organizations to ensure appropriate risk mitigation when implementing AI in business operations. Such information is also shared across industries through academic societies and industry associations.
No answer provided


























