Newton’s Tree’s Federated AI Monitoring Service (FAMOS)
Newton’s Tree’s Federated AI Monitoring Service (FAMOS) is a dashboard for real-time monitoring of healthcare AI products. The dashboard is designed to enable users to observe and monitor the quality of the data that goes into the AI, changes to the AI’s outputs, and developments in how healthcare staff use the product. Monitoring these factors is necessary if drift is to be mitigated. Drift is a change that impacts the performance of an AI product, and it means that products that start safe do not necessarily remain safe.
FAMOS is part of Newton’s Tree’s enterprise-wide deployment platform, which allows healthcare organisations to assess and download healthcare AI products to improve the delivery of care. It is a vendor-neutral service, not a re-seller, meaning healthcare organisations and AI vendors can independently negotiate what is best for them.
Newton’s Tree’s services were developed from frontline healthcare experience. The first iteration of the deployment platform was built by Newton’s Tree’s Chief Executive Officer during his work in the National Health Service (NHS), where deploying algorithms was expensive and time-consuming, a problem he set out to solve. It also became clear that standard practice might not be adequate for maintaining patient safety once some AI products were deployed. As the ambition to solve these problems grew, Newton’s Tree was created to deliver a solution.
As demand for healthcare AI has grown, so has the need to scale these solutions both locally and internationally.
Throughout healthcare delivery, patient care and safety are the primary concerns. There is evidence that AI can improve patient care, but it should not come at the cost of patient safety. An AI product may work well at the time of deployment, but that does not guarantee it will still work three months, or three years, later. Changes in the AI or its environment may alter the AI’s impact, and monitoring must be maintained to mitigate this risk. The FAMOS dashboard allows manufacturers and healthcare organisations to monitor changes to AI inputs, AI outputs, and use of the AI in real time. This means that AI use that starts safe can stay safe.
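As a concrete illustration of what such monitoring can involve, the sketch below shows a minimal, hypothetical drift check: a baseline sample of an input feature and a baseline output rate recorded at deployment are compared against a recent window, using a two-sample Kolmogorov-Smirnov test for input drift and a simple threshold on the change in output rate. This is not FAMOS code; all function names, thresholds, and data are illustrative assumptions.

```python
# Hypothetical sketch of post-deployment drift monitoring.
# Not FAMOS code: names, thresholds, and data are illustrative only.
import numpy as np
from scipy.stats import ks_2samp


def input_drift(baseline: np.ndarray, recent: np.ndarray, p_threshold: float = 0.01) -> bool:
    """Flag drift when a recent sample of an input feature no longer matches
    the distribution recorded at deployment (two-sample Kolmogorov-Smirnov test)."""
    result = ks_2samp(baseline, recent)
    return result.pvalue < p_threshold


def output_drift(baseline_rate: float, recent_outputs: np.ndarray, max_shift: float = 0.10) -> bool:
    """Flag drift when the rate of positive AI outputs moves more than
    max_shift away from the rate observed at deployment."""
    recent_rate = float(np.mean(recent_outputs))
    return abs(recent_rate - baseline_rate) > max_shift


if __name__ == "__main__":
    rng = np.random.default_rng(0)

    # Baseline captured at deployment, e.g. a scanner-derived image statistic.
    baseline_inputs = rng.normal(loc=100.0, scale=10.0, size=5000)
    # Recent window: the same statistic after, say, a scanner protocol change.
    recent_inputs = rng.normal(loc=108.0, scale=10.0, size=500)

    baseline_positive_rate = 0.15  # share of studies the AI flagged at deployment
    recent_outputs = rng.binomial(1, 0.28, size=500)  # outputs from the latest window

    print("Input drift detected: ", input_drift(baseline_inputs, recent_inputs))
    print("Output drift detected:", output_drift(baseline_positive_rate, recent_outputs))
```

In practice, checks like these would run continuously over streaming data and feed a dashboard with alerts, rather than a one-off script.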
FAMOS is designed to cover only the latter half of the AI lifecycle (post-deployment) and therefore has no impact on the initial stages (development), even though these early stages also affect the quality of a product. For example, if an AI product were built with little utility for clinicians, using FAMOS could not address that, as creating utility - ensuring what is built is useful to users - comes at the start of the AI lifecycle. However, a product that is not useful is less likely to be purchased and utilised by a healthcare organisation.
Link to the full use case.
This case study was published in collaboration with the UK Department for Science, Innovation and Technology Portfolio of AI Assurance Techniques. You can read more about the Portfolio and how you can upload your own use case here.
About the tool
Tool type(s):
Objective(s):
Impacted stakeholders:
Target sector(s): Health
Country of origin: United Kingdom
Lifecycle stage(s):
Type of approach:
Maturity:
Target users:
Validity:
Geographical scope:
Required skills:
Tags:
- responsible ai
- documentation
- evaluation
- healthcare
- medical imaging