While generative AI systems such as ChatGPT attract millions of users, researchers are also registering a general mistrust among people towards advice and recommendations from algorithms – even when people know that the recommendation algorithms work and are superior to human advisors. Researchers have now found a surprising way to significantly weaken this “algorithm aversion”: evoking religious thoughts.
The phenomenon of algorithm aversion has been the subject of intensive research for several years, because the effect also influences people’s willingness to work with software or robots. How it can be explained, however, is still not well understood. In a comprehensive literature review in 2020, Ekaterina Yusupov and colleagues identified four factors that influence algorithm aversion: the autonomy of the algorithms, their perceived and their actual performance, and the extent of human involvement in algorithmic decision-making. At that point, however, the literature supported only very general statements, such as “Algorithms with more autonomy are rejected more strongly.”
A sense of their own limitations
Mustafa Karataş of Nazarbayev University in Kazakhstan and Keisha M. Cutright now suggest that when people think of God – which they consider the core of religious feelings and attitudes – they tend to develop a sense of their own limitations and are therefore more apt to accept advice from a machine. Karataş and Cutright describe their study in an article for the journal PNAS.
“We admit that the thesis seems counterintuitive at first glance,” the authors write in their paper. “It is generally believed that more God-centricity leads to a conservative attitude, less openness to new experiences, and reduced risk-taking. This would suggest that God-centric people are less open to new technologies like AI.”
However, Karataş and Cutright were able to support their thesis in several experiments. In a first experiment, they gave their subjects the task of writing down what they personally associate with the thought of God, while the control group wrote down how their day had gone. In psychology, this kind of preparation is called priming. The participants were then asked to rate how much they would trust human and machine advice on 24 topics – from movie recommendations to dating suggestions. Across all subject areas, the God-primed participants’ trust in human advice was lower than that of the control group.
Greater willingness to trust the algorithms
The effect was much stronger in further, more specific experiments. The researchers tried to induce thoughts of God through the “presence or absence of environmental stimuli”. In one study, for example, they asked participants in front of a mosque which snack they would prefer, which fund they would invest in, or which piece of music they would rather listen to: one recommended by leading experts or one chosen by an AI. The control group was interviewed in a location with no religious context.
In another setting, they played either religious or non-religious music in the waiting room of a dental clinic. Before the treatment, the participants were asked to fill out a short questionnaire about the music in the waiting room. After the treatment, they could then choose an omega-3 fatty acid supplement as a “reward” for participating: either one recommended by an AI or one recommended by a human expert. Regardless of the product and setting, the intervention group showed a 10 to 15 percent greater willingness to trust the algorithms.
However, the authors do not see their approach as a practical method for increasing machine acceptance. Instead, they emphasize that the work makes “an important contribution to better understanding the acceptance of AI for decision support”.
(wst)