Building a responsible AI future: How the G7 Hiroshima AI Process is enhancing responsible AI around the globe

The AI landscape is evolving rapidly, and with the rise of agentic AI, trust has never been more critical. As businesses continue to integrate AI into their operations and customer experiences, leaders must ensure that these technologies are developed and deployed in a responsible manner. Leading with trust and responsibility is not optional. Enterprise customers require this as part of their AI adoption journey, and trust is essential to a future in which AI creates opportunities for everyone.
Salesforce is proud to be one of the first companies to contribute to the reporting framework developed by the OECD under the G7 Hiroshima AI Process (HAIP). Voluntary frameworks like this empower organisations to prioritise ethical practices, transparency, and governance at every stage of AI development and deployment, fostering more trustworthy AI ecosystems and enhancing global alignment on best practices.
Risk identification: Laying the foundation for trustworthy AI
An effective, responsible AI approach begins with a comprehensive strategy for risk identification and evaluation. Organisations should define and classify different types of AI-related risks, particularly those that could cause serious harm. This is especially important in enterprise settings, where AI systems are often tailored and used in various contexts.
At Salesforce, the Responsible AI and Tech (RAIT) product managers within our Office of Ethical and Humane Use (OEHU) are central to this effort. During these reviews, RAIT product managers work closely with product teams to understand use cases, technology stacks, and intended audiences. The process involves identifying and categorising potential risks into subtypes of sociotechnical harm, as well as assessing both inherent and residual risks to provide a holistic view of potential impacts, enabling informed decision-making and effective mitigation strategies.
Our AI Acceptable Use Policy provides clear guidance on the uses for which our customers are prohibited from using AI tools. This includes automated decision-making with legal consequences, predictions of an individual’s protected characteristics, or high-risk scenarios that could result in serious harm or injury.
Ongoing risk management: Protecting AI systems in real-time
Responsible AI experts must collaborate closely with product teams at all stages of the innovation process to devise effective mitigation strategies. Standardised guardrails, such as Salesforce’s “trust patterns”, can include features like mindful friction, which introduces checkpoints for thoughtful decision-making, or transparency notifications that inform users when they are interacting with AI systems.
Organisations should also establish comprehensive frameworks that protect data privacy and security throughout every stage of the product development process. Salesforce’s Trust Layer includes functionalities such as secure data handling, zero data retention, ethics by design, an audit trail, and real-time toxicity detection.
Finally, Salesforce has clear evidence from enterprise customers that testing products against trust and safety metrics, such as bias, privacy, and truthfulness, is an important business strategy and benefit. At Salesforce, we regularly introduce red teaming exercises, which simulate potential risks in controlled environments, to identify vulnerabilities and risks within products. Tactics like this are particularly important as autonomous agents become increasingly widespread.
Transparency reporting: Building trust through honest communication and knowledge-sharing
Transparency and honesty are core tenets of our trusted AI principles, which we augmented with our guidelines for trusted generative AI, and remain applicable to the agentic AI era. Organisations should ensure that users and stakeholders are informed about how and when AI is used. At Salesforce, we regularly share information about our product capabilities through our newsroom, blogs, and Trailhead, our free online learning platform.
Salesforce also regularly reports on our progress in responsible AI efforts. Most recently, our Trusted AI and Agents Report explained our approach to designing and deploying AI agents.
Furthermore, we aim to be transparent about the use of personal data. Salesforce enables customers to control how their data is used for AI. Whether using our own Salesforce-hosted models or external models within our shared trust boundary, no context is stored. The large language model forgets both the prompt and the output immediately after processing.
Organisational governance: Embedding responsible AI practices across the company
Gaining the buy-in from all parts of the organisation to deliver a truly effective responsible AI approach is critical. Salesforce embeds AI risk management within its organisational governance framework through various structures and practices. The company’s trusted AI principles, first developed in 2018 and augmented for generative AI in 2023, guide responsible development and deployment, focusing on intentional design and system-level controls.
Our governance infrastructure includes:
- The Office of Ethical and Humane Use (OEHU), which regularly interacts with the executive leadership team for policy and product review and approval. The OEHU also leads the Trusted AI Review process to identify, mitigate, and track potential risks early in development.
- The AI Trust Council, comprising executives across various departments, aligns and speeds up decision-making for AI products.
- The Ethical Use Advisory Council, established in 2018 with external experts and internal executives, provides strategic guidance on product and policy recommendations.
- The Cybersecurity and Privacy Committee of the Board of Directors, which meets quarterly with the Chief Ethical and Humane Use Officer to review AI priorities.
- The Human Rights Steering Committee, meeting quarterly, oversees the human rights program, including identifying and mitigating salient risks.
A shared commitment to responsible AI: Aligning with global standards
The future of responsible AI depends on a collective commitment to developing systems that are innovative, trustworthy, ethical, and secure. Emphasising transparency and robust governance will unlock AI’s full potential while ensuring the safety of customers and stakeholders.
The G7 HAIP reporting framework provides an effective global benchmark for responsible AI initiatives, providing a structured approach for organisations to manage the risks and benefits of AI technologies. As these frameworks gain widespread adoption, they will promote consistency in responsible AI practices, building greater trust among users and society. Salesforce is committed to working with all stakeholders and navigating this transformative AI era with trust, responsibility, and ethics guiding the way.