ECDF study: Trust in AI Systems

Many of us use AI systems every day. Especially when assistive Artificial Intelligence (AI) is used in professions with particular responsibility, such as health care, policing, or justice, the right level of trust in AI is essential for the responsible use of such decision-making tools. Too much or even blind trust can lead to rash decisions, while too little trust means valuable knowledge goes unused. To improve trust in AI systems, many methods have been proposed in recent years to make AI decisions more transparent. However, the extent to which this transparency influences trust in AI systems has not been researched. Together with Philipp Schmidt, two professors at the Einstein Center Digital Future (ECDF), Felix Biessmann and Timm Teubner, have now investigated how the transparency of AI-based decision support systems affects human trust in AI.

In an experimental economics study, they had 200 participants classify short texts as "positive" or "negative". Participants received a payment for each correctly classified text. In addition, they were supported by an AI that also gave an assessment (positive or negative). Transparency was varied systematically across experimental groups: the AI "explained" its decision by 1) highlighting the most relevant words in the text (e.g. "wonderful" as an indication of a positive assessment) and 2) communicating the confidence of its prediction (e.g. 65% or 98%).
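The two transparency cues can be pictured with a minimal Python sketch. This is not the authors' implementation; the word weights, the example text, and the function name are invented for illustration. A simple linear sentiment score is turned into a confidence value, and the words carrying the largest weights serve as the "explanation".

```python
import math

# Hypothetical per-word sentiment weights (positive > 0, negative < 0),
# standing in for whatever model the study actually used.
WORD_WEIGHTS = {
    "wonderful": 2.0,
    "great": 1.5,
    "boring": -1.8,
    "terrible": -2.2,
}

def classify_with_explanation(text):
    words = text.lower().split()
    # Sum of word weights acts as a simple linear sentiment score.
    score = sum(WORD_WEIGHTS.get(w, 0.0) for w in words)
    # Logistic transform turns the score into a probability of "positive".
    p_positive = 1.0 / (1.0 + math.exp(-score))
    label = "positive" if p_positive >= 0.5 else "negative"
    # Confidence cue: how sure the model is about its own suggestion.
    confidence = max(p_positive, 1.0 - p_positive)
    # Highlighting cue: the words that contributed most to the decision.
    relevant = sorted(
        (w for w in set(words) if w in WORD_WEIGHTS),
        key=lambda w: abs(WORD_WEIGHTS[w]),
        reverse=True,
    )
    return label, confidence, relevant

label, confidence, relevant = classify_with_explanation("a wonderful but slightly boring film")
print(f"AI suggestion: {label}, confidence {confidence:.0%}, key words: {relevant}")
```

In the experiment, participants saw cues of this kind, highlighted words and a confidence percentage, alongside the AI's suggestion before giving their own answer.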

"Contrary to the widely held view that transparency is always beneficial, the transparency measures did not contribute to confidence in the CI. On the contrary, the participants relied significantly less frequently on AI and deviated in their assessment from AI assessments - and were therefore more frequently wrong," reports Prof. Dr. Felix Biessmann. With regard to the AI confidence, it also became apparent that the AI suffers from the Imposter Syndrome to a certain extent. "If the AI was right, but attached too much uncertainty to its prediction, the participants often did not follow the AI's suggestion," Biessmann continues.

The right level of trust also means not following wrong AI predictions, and this is exactly what more transparency was supposed to ensure. "However, the results indicate that people make up to six times more mistakes in text classification when they follow wrong AI predictions than when they ignore correct ones. Too much trust in wrong AI predictions was therefore far more harmful than ignoring correct AI predictions," says Prof. Dr. Timm Teubner.

In summary, the results show that transparency of AI systems does not always increase trust in such systems. They also show that transparency often does not help people recognise incorrect AI predictions.

"Next, we would like to investigate whether and how quickly trust in AI systems can be restored after false AI predictions have led to a loss of trust," reports Teubner.

The study has been published in the Journal of Decision Systems and is already available online.