MedPerf is an open-source platform for benchmarking AI models to deliver clinical efficacy. How it works and who’s involved:
1. Benchmark Committee
Domain experts organize and kick off an initiative.
2. Researchers
Algorithm holders prepare their algorithms to test against real-world clinical data.
3. Clinicians
Data providers run the algorithms against their population data.
4. Results
Proven global benchmark data assists in future medical research.
MedPerf was used for the large-scale FeTS 2.0 study in partnership with RANO.
MedPerf was presented as a key component of the Comprehensive Open Federated Ecosystem (COFE), highlighting its role in making federated AI experiments in healthcare more accessible and efficient.
MedPerf continues supporting the BraTS challenge in 2024.
MedPerf: Open Benchmarking Platform for Medical AI, published in Nature Machine Intelligence.
RANO, the world’s clinical authority for neuro-oncology, will use MedPerf to assess brain tumor treatment outcomes.
MedPerf will be used again for the BraTS 2023 challenge.
MedPerf is an open-source framework for benchmarking AI models to deliver clinical efficacy while prioritizing patient privacy and mitigating legal and regulatory risks. It enables federated evaluation, in which AI models are securely distributed to participating facilities and evaluated on-site.
The MedPerf approach empowers healthcare organizations to assess and verify the performance of AI models in an efficient, human-supervised process, without any patient data being shared across facilities. This reduces the risks and costs associated with data sharing and helps maximize medical and patient outcomes.
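To make the privacy property concrete, here is a minimal conceptual sketch, not MedPerf’s actual implementation, of one facility-side evaluation step: the packaged model travels to the data, inference and scoring happen on-premises, and only aggregate metrics are reported back. Every name and data value below is a hypothetical stand-in.

```python
# Conceptual sketch of one facility-side step in a federated evaluation.
# This is NOT MedPerf's implementation; all names and data are hypothetical
# stand-ins used to illustrate that only metrics leave the facility.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Case:
    """A single local patient record: input features plus a reference label."""
    features: List[float]
    label: int


def toy_model(features: List[float]) -> int:
    """Stand-in for the packaged model that was shipped to the facility."""
    return int(sum(features) > 1.0)


def evaluate_locally(model: Callable[[List[float]], int],
                     cohort: List[Case]) -> Dict[str, float]:
    """Run the model on-premises and report only aggregate metrics."""
    correct = sum(int(model(case.features) == case.label) for case in cohort)
    # Raw cases and per-patient predictions never leave this function;
    # the benchmark server only ever sees the summary returned below.
    return {"accuracy": correct / len(cohort), "num_cases": float(len(cohort))}


if __name__ == "__main__":
    local_cohort = [Case([0.2, 0.9], 1), Case([0.1, 0.3], 0), Case([0.8, 0.7], 1)]
    print(evaluate_locally(toy_model, local_cohort))  # only metrics are printed/shared
```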
MedPerf provides the end-to-end toolchain you’ll need to get involved: from organizing the experiment, to carrying it out, to producing results. We provide the database storage, a REST API, researcher tooling to package algorithms to run with any data, and clinician tooling to easily run the algorithms without patient data ever leaving their premises.
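As a rough illustration of how researcher tooling might talk to that REST API, the sketch below registers a packaged model with a benchmark server using Python’s requests library. The server URL, endpoint path, payload fields, and authentication scheme are assumptions made for illustration; they are not MedPerf’s documented API.

```python
# Hypothetical sketch of registering a packaged model with a benchmark server
# over a REST API. Endpoint paths, fields, and the auth scheme are assumptions
# for illustration; consult the MedPerf documentation for the real interface.
import requests

SERVER_URL = "https://benchmark-server.example.org"  # placeholder, not the real server
API_TOKEN = "..."                                     # placeholder credential


def register_model(name: str, container_ref: str, benchmark_id: int) -> dict:
    """Tell the benchmark server where to find a packaged, runnable model."""
    response = requests.post(
        f"{SERVER_URL}/models/",                      # hypothetical endpoint
        json={
            "name": name,
            "container": container_ref,               # e.g. a registry reference to the packaged algorithm
            "benchmark": benchmark_id,
        },
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()


# Example call (hypothetical identifiers):
# register_model("tumor-segmenter-v2", "docker.io/example/tumor-seg:2.0", benchmark_id=7)
```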
MedPerf is licensed under the Apache License, making it free to use, redistribute, or modify for any purpose.
A benchmark committee consists of groups of experts (e.g., clinicians, patient representative groups, regulators) and data or model owners wishing to drive the evaluation of their model or data. Initiating or joining a benchmark committee in the MedPerf environment lets you organize and kick off such an initiative.
Interested in forming or joining a benchmark committee?
Initiate or join a benchmark committee
If you are an AI researcher or software vendor holding a trained medical AI model, MedPerf lets you evaluate its performance against real-world clinical data without ever needing direct access to that data.
Interested in testing your algorithm?
Join as a researcher
Data providers include hospitals, medical practices, research organizations, and healthcare insurance providers that own medical data. If you fit into this category, you can use MedPerf to run benchmarked algorithms against your data without it ever leaving your premises.
Interested in connecting with a machine learning initiative?
Join as a clinician
Benchmark results empower leaders to improve healthcare outcomes: better patient outcomes, smoother clinical workflows, lower costs, and more.
Trusting the Results:
The benchmark committee defines the right tasks and the right metrics; the medical machine learning research community contributes state-of-the-art algorithms; and the evaluating hospitals supply real-world data with meaningful diversity. Together, these make the results future-forward and clinic-ready.
We have an internal working group focused on current projects and goals, and we can also connect you with other groups to start a new experiment.
Powered by MLCommons