up2date. The Online Magazine of the Universität Bremen

Digital Ethics: How Technology Tests Our Values

An interview with Björn Haferkamp about how digital ethics affect everyone

Research / University & Society / AI

Digitalization raises many ethical issues, from artificial intelligence in medicine, to automated decision-making in government, to the influence of social media on our behavior. How can we shape and use technology in a way that reflects our values? Björn Haferkamp investigates these and other questions related to digital ethics at the Institute of Philosophy at the University of Bremen. On May 8, those interested can learn more in a course he is holding as part of the U Bremen Research Alliance’s (UBRA) Data Train lecture series.

Mr. Haferkamp, what exactly is digital ethics?

The field of digital ethics is by no means new. Terms such as computer ethics, internet ethics, or data ethics have been used in the past within this research field. Even 20 years ago, many people went about their day believing that the moment they turned off their computer, their connection to the internet was over. Now, digitalization has infiltrated so many areas of our lives that we can no longer evaluate individual areas of research separately, but rather combine them under an overarching concept called digital ethics. The basic premise of this research is to analyze the ethical problems of digitalization in their context and then generate ethically acceptable solutions.

What are you working on within this research area?

Topics pertaining to the philosophy of digitalization are very broad and diverse. I am currently focusing on artificial intelligence (AI), particularly the development of large language models such as chatbots. AI has had a large and very promising role within medicine and healthcare for quite a while, with the hope that its use will aid in diagnosing and treating illnesses. One ethical question in this is whether we still do justice to humans as individuals with the use of AI, as AI can lead to stereotyping. In the medical field, this could be detrimental to certain patients, especially those whose measurements deviate from the norms set by AI. Another question is whether doctors would dare to reject AI’s diagnoses and suggested treatment plans when these differ from their own assessments, even or especially if AI is statistically often correct. Ultimately, methods need to be found to deal with such situations and to determine who bears responsibility.

Another example is the use of AI to automate processes in government agencies, for example, in creating welfare notices and similar documents, and whether this use is legally and ethically justifiable.

Björn Haferkamp researches digital ethics and teaches at the University of Bremen’s Institute of Philosophy.
© private

Many people would have an initial visceral reaction to AI deciding how much welfare a person should receive. To what extent is the use of AI automations in public services currently tenable?

Legally speaking, the use of AI or automated processes in government agencies is permitted as long as the decisions or results are clearly understandable – for example, in calculations based on fixed factors. Wherever AI assists in decision-making, the data and models used need to be completely reliable. The results can only be as good as the data used to create them; faulty or incomplete data leads to inaccurate results, à la “garbage in – garbage out.”

In the legal realm, one ethical consideration would be whether AI could replace judges, since it would presumably be less biased in determining a verdict. However, since reaching a verdict is complex and dependent on many factors, this is currently not a realistic use for AI. Digital ethics therefore considers the challenge of how to reconcile the advancements in digitalization with democratic values such as freedom and responsibility.

Which groups of people should consider digital ethics? Are these questions addressed to the policymakers who make our laws?

These are not just questions for them; all of us should take the time to consider ethical questions related to digitalization. This affects our lives constantly – when we disclose data to apps, use chatbots, or consume social media. We should conscientiously ask ourselves how we want to respond to the advancements in digitalization, and consider what role we want social media apps or AI to have in our lives. We must then make intentional decisions – for example, how long we want to scroll in social networks or which data we enter in AI systems.

As a society, we already have pronounced competencies in ethics. In my opinion, it is now time to apply and practice using these in the context of digitalization – for example by incorporating media literacy in schools from an early age to promote a deliberate and reflective approach to using digital technologies.

Data Train

Björn Haferkamp’s Digital Ethics course will take place online on May 8 at 10:00 a.m. as part of Data Train, a multidisciplinary graduate program for research data management and data science within the U Bremen Research Alliance (UBRA). The Starter Track lecture series provides basic information about data science, big data, statistics, computer science, and the principles of research data management and will be held in English. This track is open to anyone who wants to receive further training in these areas. In addition to Björn Haferkamp’s course, various other courses will take place regularly until June 2025. Interested parties can register for the Data Scientist Operator Track as of May, which includes workshops on methods used in quantitative analysis, machine learning, and deep learning, as well as on data visualization and visual analysis. Visit the UBRA website for further information on the individual courses and tracks.


