Germany
Georg Starke
Georg Starke works as a senior research associate at the Chair for Ethics of AI and Neuroscience at the Technical University of Munich. Trained as a philosopher and physician, he conducts research at the intersection of AI ethics, medical ethics, and neuroscience. His recent theoretical and empirical work focuses on AI in psychiatry, clinical neurotechnology, and the use of digital methods in bioethics.
Georg studied medicine in Munich, Oxford, Buenos Aires, and Jerusalem and completed his medical doctorate at TU Munich with a thesis on neural correlates of human fear. In parallel, he obtained a BA in philosophy at the Munich School of Philosophy and an MPhil in History, Philosophy and Sociology of Science, Technology and Medicine at the University of Cambridge. After receiving his medical license, he worked at the University of Basel, where he completed his PhD on trust in medical AI summa cum laude in 2022. Prior to returning to Munich, he held a postdoctoral position at the Swiss Federal Institute of Technology Lausanne (EPFL).
Throughout his training, Georg received scholarships from the German Academic Scholarship Foundation, the Max Weber Programme, and the University of Cambridge. His research has won him international awards, among them the Paul Schotsmans Prize of the European Association of Centres of Medical Ethics, as well as two research residencies at the Fondation Brocher in Switzerland. In winter 2025/26, he will be a fellow at FRIAS.
Patient preferences are central to providing good clinical care. Special challenges arise when patients cannot make healthcare decisions themselves and have no advance directive. In such situations, surrogate decision-makers can be asked to step in and determine the patient’s presumed wishes. However, research shows that such surrogates, usually close relatives, frequently struggle to predict a patient’s wishes, creating significant ethical and emotional challenges. One recently proposed solution relies on artificial intelligence to predict and guide decision-making. Proponents claim that such a patient preference predictor (PPP), for instance in the form of a fine-tuned large language model, can give an accurate account of what a person would have wanted for themselves, possibly more accurate than that of their next of kin. This project critically examines the feasibility and desirability of such a PPP. The current debate suffers from the curious fact that, so far, no PPP has been developed. Drawing on a representative dataset from Switzerland, this project therefore proposes a first proof-of-principle PPP, making it possible to highlight potential pitfalls with regard to both data and modelling. Based on this model, an ethical assessment of the technology will then examine the potential and limitations of AI-based preference predictors in depth, scrutinizing the risks of techno-solutionism in end-of-life decision-making.
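To make the idea of a proof-of-principle PPP concrete, the following is a minimal, purely illustrative sketch in Python. It is not the project's actual model: since no PPP has yet been developed, the features (age, religiosity, prior ICU experience, self-rated health), the synthetic data, and the choice of a simple logistic-regression classifier are all hypothetical assumptions made for illustration only.

```python
# Purely illustrative sketch of a proof-of-principle patient preference
# predictor (PPP). All features, data, and modelling choices below are
# hypothetical assumptions; the project's actual dataset and model may
# differ entirely.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1000

# Hypothetical survey-style predictors for each respondent.
X = np.column_stack([
    rng.integers(18, 95, n),   # age in years
    rng.uniform(0, 1, n),      # religiosity score (normalised)
    rng.integers(0, 2, n),     # prior ICU experience (0/1)
    rng.integers(1, 6, n),     # self-rated health (1-5)
])

# Hypothetical binary target: stated preference for life-sustaining
# treatment in a given scenario (1 = yes). Synthesised with an arbitrary
# rule purely so the example runs end to end.
logits = -0.03 * X[:, 0] + 1.5 * X[:, 1] + 0.5 * X[:, 3] + rng.normal(0, 1, n)
y = (logits > np.median(logits)).astype(int)

# Hold out a test set and report predictive accuracy, the kind of metric
# by which a PPP would be compared against surrogate decision-makers.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```

A transparent baseline of this kind keeps the data and modelling pitfalls the project aims to highlight easy to inspect, before considering more opaque approaches such as fine-tuned large language models.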
Medical ethics; Artificial Intelligence; Ethics of AI; Neuroscience