In:
Public Policy and Administration, SAGE Publications
Abstract:
Artificial intelligence (AI) applications in public services are an emerging and crucial issue in the modern world. Many countries use AI-enabled systems to serve citizens and deliver public services. Although AI can bring greater efficiency and responsiveness, the technology raises privacy and social-inequality concerns. From the perspective of behavioral public administration (BPA), citizens’ use of AI-enabled systems depends on their perception of this technology. This study proposes a conceptual framework connecting citizens’ perceptions, trust, and intention to follow instructions from a government-supported AI-enabled recommendation system during the pandemic. The study conducts an online experimental survey and analyzes the data with partial least squares structural equation modeling (PLS-SEM). The findings suggest that algorithmic transparency increases trust in the recommendations, whereas privacy concerns decrease that trust when the system asks for sensitive information. Additionally, citizens familiar with technology are more likely to trust the recommendations under a feature-based communication strategy. Finally, trust in the recommendations mediates the effects of citizens’ perceptions of the AI system. This study clarifies the effects of perceptions, identifies the role of trust, and explores communication strategies shaping citizens’ intention to follow the AI-enabled system’s recommendations. The results can deepen AI research in public administration and offer policy suggestions for the public sector to develop strategies that increase compliance with system recommendations.
Type of Medium:
Online Resource
ISSN:
0952-0767, 1749-4192
DOI:
10.1177/09520767231176126
Language:
English
Publisher:
SAGE Publications
Publication Date:
2023