"Ethics must be considered in AI from start to finish in healthcare"
How do we ensure that artificial intelligence becomes a tool for better patient care and not just a technology that saves money? This is one of the central questions that PhD candidate Victor Vadmand Jensen is working with in his research on AI ethics in healthcare.

Article series on AI in health research
Artificial intelligence is well on its way to transforming health research - but how is the technology used in practice, and what challenges come with it?
In the coming period, Inside Health will focus on how researchers at the faculty work with AI.
We would like to hear from you if you use AI in your work and would like to share your experiences in the newsletter.
Contact Jakob Binderup
In hospital departments across the country, artificial intelligence is increasingly being integrated as a natural part of the work. But how do we ensure that the technology is used ethically and for the benefit of patients?
Victor Vadmand Jensen, PhD candidate at the Department of Clinical Medicine and Silkeborg Regional Hospital, has spent time in hospital departments observing how AI is used in daily clinical work and how ethical considerations are incorporated into that work.
How did you become interested in ethics and AI in health research?
I have been interested in ethics since my bachelor's degree, where I focused on understanding how we can integrate ethical considerations into the development of technologies from the start. This is important because it makes us consider the values we think technologies should contribute before the technologies are developed – not afterwards.
AI is an exciting technology, but it requires many ethical considerations. How do we explain AI decisions? How much control should AI have? And the ethical considerations are particularly relevant in healthcare, because it can be a matter of life and death, or of care and neglect.
You have observed how AI is used in hospital departments. What surprised you most?
I was very surprised by how much AI is actually already being used in hospitals today. The staff I have observed have largely managed to integrate AI as a natural part of their work. I was also surprised by how much AI actually demands of the staff. Among other things, I have seen how staff consistently have to assess decisions made by AI. Some AI decisions can be wrong – and staff must be able to detect and react to this.
You have investigated how AI can be used to detect falls in elderly patients – what ethical problems does this create?
An important ethical question is how to obtain consent for the use of AI from elderly patients. Elderly patients are often less familiar with the technology, and they may be admitted with conditions such as stroke or diseases such as dementia. This makes consent to AI much harder to obtain from an elderly patient, and it's not always clear when consent is genuine. On the other hand, it would also in many ways be irresponsible not to use an AI system that can increase patient safety. So what do we choose? That's a central ethical question.
You say that AI shouldn't only be used to minimize errors. What should it be used for then?
It should be used for things that make a difference for staff and patients. Of course it's good to make fewer errors, but fewer errors should also mean something for the people involved in the healthcare system. We should focus on using AI to give staff more time with patients, so that it becomes safer to be a patient and more enjoyable to work in healthcare. It's also important that we use AI in a way that helps make staff better at their work, rather than simply replacing them.
Are Danish hospitals ready for the ethical challenges of AI?
I have investigated the use of AI in several hospital departments, and I have generally found that ethics matters a great deal to clinicians across professional groups and experience levels. The difficult thing for hospitals may be supporting individual departments so that they themselves have the competencies and resources to ensure ethically sound implementation of AI systems.
What should politicians and hospitals do better?
Politicians should focus more on the ethical questions about AI before the technologies are implemented. In a study from my PhD project, we investigated Danish policy on AI in healthcare and found that ethics is often described as something to be considered in connection with implementation. But this means that some of the ethical considerations end up being handled by healthcare staff, which is difficult and demanding. It would make good sense to integrate ethics into the entire process – from the first idea to the first tests of the AI system – so that we don't have to handle the ethical questions retrospectively.
What do you see as the biggest opportunities and dangers of AI in the healthcare system?
I actually see the dangers as resembling those of other healthcare technologies. Personally, I would fear that AI doesn't lead to better care or more time with patients, but simply leads to savings in our healthcare system. At the same time, we need to ensure that healthcare staff don't grow weary of AI systems, which in the worst case can lead to care fatigue.
But if we can get AI to take over some of the tedious tasks and free up time for the more interesting ones, then we can hopefully make it more fun and rewarding to work in healthcare. I hope that in the long term it can help us recruit more people to the healthcare professions.
Contact
PhD candidate Victor Vadmand Jensen
Aarhus University, Department of Clinical Medicine
Silkeborg Regional Hospital, University Clinic for Interdisciplinary Orthopedic Surgery Pathways
Phone: 61281733
vvj@clin.au.dk