
Generative AI is increasingly integrated into search engines, but it has been criticized for producing content that sounds plausible yet is sometimes inaccurate. Little is known about what prompts users to verify such responses. The study “Determinants of Verification Behavior in Generative Search: Evidence from a Conjoint Experiment,” by Eva-Luise Knor, Dr. Michael V. Reiss, Prof. Dr. Judith Möller, and Dr. Lisa Merten, fills this gap by examining users’ verification intentions for political search queries related to the 2024 European elections.
Click here for the article.
Abstract
Generative AI is increasingly integrated into search engines but is often criticized for generating plausible yet inaccurate content, delivered with authoritative language. Although developers encourage users to verify responses, little is known about the factors influencing this decision. This study addresses this gap by investigating user verification intentions in a preregistered conjoint experiment with German participants (N = 1417), focusing on political queries about the 2024 European elections. In a 3×4×3×3 factorial design, we examined the effects of verifiability attributes (verification disclaimers and cited sources) and content attributes (content veracity and topic) on verification choice. Results show that despite their intended design, verifiability attributes did not significantly increase the probability of verification choice compared to their absence, while content attributes did. Beyond the primary attributes, our study also addressed dispositional factors influencing verification decisions. Conditional trust in cited sources did not significantly affect verification likelihood, but sources with high credibility reduced the likelihood of verification, irrespective of trust levels. Additionally, participants were more likely to verify information when they were already skeptical of its accuracy. Personal topic relevance inconsistently influenced the likelihood of verification. These results suggest that simply providing verification cues may not be sufficient to promote responsible AI usage. Instead, they point to a critical design vulnerability: verification cues may unintentionally reduce inclinations to scrutinize information, as verification appears to be driven more by preexisting doubt than by habitual critical engagement.
Key Findings
- Characteristics of the content influenced the decision to verify information more strongly than characteristics related to verifiability did.
- Verification cues did not increase the likelihood of verification compared to situations without such cues.
- Sources with high credibility reduced the overall likelihood of verification.
- Participants were more likely to verify information when they were skeptical about its accuracy.
- Verification cues can reduce critical scrutiny of results from generative search engines.
E. L. Knor, M. V. Reiss, J. Möller, L. Merten (2026): Determinants of Verification Behavior in Generative Search: Evidence from a Conjoint Experiment. In: Computers in Human Behavior Reports, 22, 101056. https://doi.org/10.1016/j.chbr.2026.101056