Evaluating artificial intelligence in healthcare: new research presented by the Barco Labs team
Healthcare: a 1-minute read
This week, our Barco Labs team presented a new research paper in the field of artificial intelligence. The paper was presented at the SPIE Medical Imaging conference in San Diego, California.
About bias in artificial intelligence
Did you know that AI models can be biased? Algorithms are trained on specific sets of data or people. So when it comes to minority groups (defined by age, gender, or skin type, for example), there is a risk that the algorithm overlooks or even misinterprets their specific characteristics. This reinforces inequality and could have disastrous consequences for the diagnosis and treatment of patients.
A new model for validation of AI algorithms
The study we presented at SPIE focused on exactly this problem: how can we guarantee the safety and effectiveness of AI models for all subgroups of a target population?
The paper presents an open-source, customizable tool that can measure how well an AI algorithm works. For example, it can highlight the minority groups that are most at risk of being misinterpreted by the algorithm.
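To give a sense of what such a validation step can look like, here is a minimal sketch, not the paper's actual tool: it assumes each prediction carries a subgroup label (the function names, data, and the flagging margin are all illustrative), computes per-subgroup accuracy, and flags subgroups that fall well below the overall score.

```python
# Illustrative sketch only; the open-source tool from the paper is not
# reproduced here. Assumes each prediction is tagged with a subgroup label.
from collections import defaultdict

def subgroup_accuracy(predictions, labels, groups):
    """Return overall accuracy and a per-subgroup accuracy dict."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        if pred == label:
            correct[group] += 1
    overall = sum(correct.values()) / sum(total.values())
    per_group = {g: correct[g] / total[g] for g in total}
    return overall, per_group

def flag_at_risk(per_group, overall, margin=0.10):
    """Subgroups underperforming the overall accuracy by more than `margin`."""
    return [g for g, acc in per_group.items() if acc < overall - margin]

# Toy example with a hypothetical subgroup attribute ("A" vs "B")
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
labels = [1, 0, 1, 0, 0, 0, 1, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

overall, per_group = subgroup_accuracy(preds, labels, groups)
print(overall, per_group)          # 0.5 {'A': 0.75, 'B': 0.25}
print(flag_at_risk(per_group, overall))  # ['B']
```

A real validation tool would of course use richer metrics (sensitivity, specificity, calibration) and confidence intervals per subgroup, but the principle is the same: disaggregate performance so that gaps affecting minority groups become visible instead of being averaged away.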
Towards safe use of AI in healthcare
We hope that this work contributes to a more equitable world, one in which artificial intelligence can safely support healthcare professionals!
This research was funded through the Vivaldy project, PENTA 19021, and financially supported by the Flemish Government (Vlaio grant HBC.2019.274).
#WeAreVisioneers