Trustable and Responsible AI (TR-AI) for decision support in healthcare (Habilitation Project)

The habilitation project aims to increase the trustworthiness and reliability of AI use in the healthcare sector. The focus is on three use cases, each dealing with a different aspect of trustworthiness.

Use Case: Wearable AI and Explainability

A major reason for distrusting the results of an AI system is that domain experts cannot understand how the machine learning model behind the decision output ultimately reaches its decision. Explainable AI, or XAI (also called transparent AI or interpretable AI), refers to AI whose actions can be readily understood and analyzed by humans because the system provides an auditable record of all factors and associations related to a given prediction (Hagras, 2018).

In healthcare, explainability has been identified as a “requirement for clinical decision support systems because the ability to interpret system outputs facilitates shared decision-making between medical professionals and patients and provides much-needed system transparency” (Turri, 2022).

Use Case: AI in Oncology CDSS

Use Case: Infodemic and disease modeling

The World Health Organization (WHO) defines an infodemic as “too much information including false or misleading information in digital and physical environments during a disease outbreak” (WHO, 2022). Misinformation as a concept has been defined in various ways in academic research, with the most widely used definition being that of Lewandowsky et al. (2012): “any piece of information that is initially processed as valid but is subsequently retracted or corrected”.

Nowadays, social media is at the heart of misinformation. The situation worsened during the pandemic, prompting the Surgeon General of the US to declare that the spread of misinformation through social media had become an “urgent threat to public health” (Juthani & Agbafe, 2021). In Germany, too, there was considerable resistance, and mandatory vaccination was introduced in some workplaces at the end of 2021. Austria was the first EU country to introduce a mandatory vaccination policy for all adults, although the policy was suspended in March of this year.

Social media platforms, and the Internet in general, have been beneficial in helping people receive and share information about their health with family and friends faster than with any other available medium. However, this is a double-edged sword: the ability of everyone to share their opinion freely online, mostly without personal consequences, has amplified the spread of false information during the pandemic. Misinformation on health issues has always been present, but the recent pandemic has exacerbated it and has put it back on the agenda of scientists and governments.

The consequences of spreading misinformation can be devastating. In the past, people have refused to seek appropriate treatments for cancer or HIV; public health workers, airline staff, and other frontline workers have been subjected to violence and harassment. During the pandemic, vaccine misinformation reached an unprecedented scale globally. One study found that even brief exposure to COVID-19 vaccine misinformation made people less likely to want a vaccine (Loomba et al., 2021).

Most experts agree that we are likely to face another pandemic in the future, which warrants better preparation and more research on how to tackle it properly, not only from a medical perspective but also with respect to misinformation. Social and behavioral scientists therefore need to cooperate with health experts to find innovative solutions that diminish the consequences of a future infodemic.

Previous work on health misinformation has focused mostly on English-language sources, and other regions are underrepresented — a conclusion drawn in a recent literature review on the topic (Yeung et al., 2022). Although Chinese-language sources would understandably be of interest for research, natural language processing does not yet offer sufficient solutions for processing Chinese-language text data. Fortunately, this is not the case for German-language sources.

Researcher

Partner

  • Swinburne University of Technology

References

Hagras, H. (2018). Toward Human-Understandable, Explainable AI. Computer, 51(9), 28–36.
Lewandowsky, S., Ecker, U. K. H., Seifert, C. M., Schwarz, N., & Cook, J. (2012). Misinformation and Its Correction: Continued Influence and Successful Debiasing. Psychological Science in the Public Interest : A Journal of the American Psychological Society, 13(3), 106–131. https://doi.org/10.1177/1529100612451018

Loomba, S., Figueiredo, A. de, Piatek, S. J., Graaf, K. de, & Larson, H. J. (2021). Measuring the impact of COVID-19 vaccine misinformation on vaccination intent in the UK and USA. Nature Human Behaviour, 5(3), 337–348. https://doi.org/10.1038/s41562-021-01056-1

Turri, V. (2022). What Is Explainable AI? SEI Blog, Carnegie Mellon University Software Engineering Institute. https://insights.sei.cmu.edu/blog/what-is-explainable-ai/

WHO (2022). Infodemic. World Health Organization. Retrieved from https://www.who.int/health-topics/infodemic#tab=tab_1

Yeung, A. W. K., Tosevska, A., Klager, E., Eibensteiner, F., Tsagkaris, C., Parvanov, E. D., . . . Atanasov, A. G. (2022). Medical and Health-Related Misinformation on Social Media: Bibliometric Study of the Scientific Literature. Journal of Medical Internet Research, 24(1), e28152. https://doi.org/10.2196/28152