Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Enterprise ChatGPT and LLM Governance

Apr 19, 2023

Trust in AI is needed

 

Many organisations use AI in production without proper governance in place. This is now more widespread than ever, with large language models (LLMs) like ChatGPT widely used in daily work across organisations and businesses. One key risk is that LLMs can deliver plausible but false information, known as hallucinations; combined with the prevalence of non-referenceable content from the internet, this underscores the importance of proper governance and oversight. Other major concerns include biased information, lack of transparency, and non-compliance with company policies and ethical guidelines. To mitigate these risks, a robust AI Governance framework and a real-time monitoring solution are essential.


An AI Governance framework entails a systematic approach to the design, development, deployment, and operation of AI models and systems to ensure that they align with ethical and legal standards. First of all, this requires organisations to register all AI in use across the organisation and to closely monitor its application and use.

 

AI Governance frameworks should also include measures to monitor and manage risks, ensure transparency in decision-making, promote fairness, accountability, and the ethical and responsible use of AI, and provide human oversight to prevent unintended consequences.

 

Specifically for LLMs like ChatGPT, organisations need to address privacy concerns: sharing personally identifiable information (PII) and other sensitive information in prompts is risky, as such information may be stored by the LLM provider.
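The masking idea behind this concern can be sketched in a few lines of Python. The patterns below are illustrative assumptions only (a production system would use a dedicated PII-detection library), not GRACE's actual implementation:

```python
import re

# Hypothetical patterns for illustration; real PII detection
# needs far more robust methods than these two regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def mask_pii(prompt: str) -> str:
    """Replace detected PII with placeholder tokens before the
    prompt leaves the organisation's boundary."""
    masked = prompt
    for label, pattern in PII_PATTERNS.items():
        masked = pattern.sub(f"[{label}]", masked)
    return masked

print(mask_pii("Contact jane.doe@example.com or +45 1234 5678"))
# Contact [EMAIL] or [PHONE]
```

The point of the sketch is the placement of the filter: masking happens before the prompt is sent, so the user can still interact with the model without the PII ever reaching it.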

 

To secure trust in AI, you need to implement an efficient AI Governance framework and a real-time monitoring solution. These are crucial for all organisations that want to avoid legal and ethical issues when developing and using LLMs.

 

What is needed


For LLMs in particular, organisations must know who is using the model and ensure that all users have understood and acknowledged the organisation's guidelines and policies. Understanding how users interact with the model, combined with central oversight, is also needed. To make such governance for LLMs complete, real-time monitoring of all interaction with the LLM is required.

The 2021.AI GRACE Governance platform addresses precisely this: real-time monitoring and reporting of all input and output. In addition, feedback from the model is logged and stored, which will also prepare you for similar requirements under the upcoming EU AI Act.
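As an illustration of what such real-time logging involves (GRACE's actual mechanism is not documented here; names and the log format are assumptions), a minimal prompt/response audit wrapper might look like:

```python
import json
import time
from typing import Callable

def monitored_completion(llm_call: Callable[[str], str],
                         log_path: str = "llm_audit.jsonl") -> Callable[[str], str]:
    """Wrap any prompt->response function so every interaction is
    appended to an audit log the moment it happens."""
    def wrapper(prompt: str) -> str:
        response = llm_call(prompt)
        record = {"ts": time.time(), "prompt": prompt, "response": response}
        # JSON Lines: one record per interaction, easy to stream and review.
        with open(log_path, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")
        return response
    return wrapper

# Usage with a stand-in model (uppercases the prompt):
echo_model = monitored_completion(lambda p: p.upper())
print(echo_model("hello governance"))  # logged, then returned
```

Because the wrapper sits between the user and the model, every prompt and response passes through the audit trail without changing how the model is called.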
 

By implementing GRACE Governance, the following concerns around using LLMs are addressed:

 

  • Sharing of sensitive or privacy-related information
  • Intellectual property leaks via dialogue with the model
  • Lack of control over the content generated by the model
  • Misinterpretation or miscommunication of sensitive information
  • Lack of resource control for model usage
  • Responses containing biased, discriminatory or hateful language
  • Lack of transparency of usage across the organisation or company

 

The AI Governance platform, GRACE, is a complete AI Governance, Risk & Compliance solution offered to the public and private sectors. GRACE is developed on the back of projects and engagements with industry leaders, regulators and ethical committees to facilitate full AI Governance, Risk & Compliance implementations.

 

With GRACE Governance for LLMs, you will be able to:


  • Onboard LLM users according to conformity assessments, company policies and more.
  • Mask and anonymise sensitive information, such as personally identifiable information (PII), before it reaches the LLM while still allowing users to interact with the model.
  • Monitor prompts and responses (input to and output from the model) in real time.
  • Define guardrails and risks around the LLM and monitor in real time that they are followed.
  • Log and report breaches for follow-up and preventive training of employees.
  • Run sentiment analysis of prompts and model responses to detect biased, discriminatory or hateful language.
  • Install GRACE Governance for LLMs in the IT infrastructure of your choice (including on-prem environments).
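The guardrail and breach-logging capabilities in the list above can be illustrated with a simple blocklist check. This is a sketch under assumptions (the function name and terms are invented for illustration), not GRACE's actual rule engine, which would combine such checks with sentiment or toxicity models:

```python
def check_guardrails(text: str, blocked: set[str]) -> list[str]:
    """Return the blocked terms found in the text (case-insensitive).
    Each hit would become a logged breach for follow-up."""
    lowered = text.lower()
    return sorted(term for term in blocked if term in lowered)

# Example: flag a model response that contains restricted wording.
breaches = check_guardrails("Sharing CONFIDENTIAL figures here",
                            {"confidential", "internal only"})
print(breaches)  # ['confidential']
```

Running this check on every prompt and response, and writing the hits to the same audit trail, is what turns a static policy into real-time enforcement.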

 

Benefits of using the tool in this use case

  • Improved compliance: Grace ensures that ChatGPT is used in a compliant manner, minimising the risk of regulatory fines and reputational damage.
  • Enhanced security: Grace ensures that ChatGPT is used in a secure manner, protecting sensitive information and preventing data breaches.
  • Increased transparency: By logging model input/output (prompt/response) and feedback on model output, Grace provides greater transparency into how ChatGPT is being used.
  • Better control: Grace applies controls to ChatGPT usage, such as limiting access to the model or restricting certain types of input.
  • Performance monitoring: Grace computes performance metrics live, enabling organisations to continuously monitor ChatGPT usage and identify areas for improvement.
  • Conformity assessment: Grace can be used to perform conformity assessments around the usage of ChatGPT, ensuring that it aligns with organisational policies and regulatory requirements.

Shortcomings of using the tool in this use case

With the rapid growth of generative AI and the evolving AI regulatory landscape, everyone in this field faces the challenge of responding quickly to change. We are working with regulators and other authorities to focus our R&D investments on offering the highest level of AI Governance, Risk Management and Compliance that the market and clients demand.
