TY - JOUR
T1 - Accountability and AI: redundancy, overlaps and blind-spots
AU - Elliott, Marc T. J.
AU - MacCarthaigh, Muiris
PY - 2025/4/21
Y1 - 2025/4/21
N2 - Artificial Intelligence (AI) increasingly reshapes public sector activities, raising critical questions about its impact on traditional democratic accountability. Despite recent advances in unpacking the multiple forms and dimensions of accountability, limited attention has been given to how diverse AI forms - beyond their perception as a singular phenomenon - uniquely contribute to accountability challenges. This paper examines how different AI schools (such as connectionist and symbolic) affect accountability in public governance, using the AI, Algorithmic and Automation Incidents and Controversies (AIAAIC) repository to analyze 115 real-world public sector AI incidents. Our methodology clusters cases by AI school to identify recurring accountability issues: redundancy (unnecessary accountability efforts), overlaps (competing accountability demands), and blind-spots (insufficient or no evident accountability). Our findings reveal that connectionist systems dominate public sector deployments and are often linked to transparency issues. Less commonly deployed AI forms, such as symbolic or analogizer systems, may better align with public governance principles under certain conditions. This study highlights compatibility issues between AI forms and accountability dimensions, emphasizing how algorithmic design choices significantly shape governance outcomes. By addressing these challenges, the paper advances understanding of AI accountability in public administration and reinforces the need for strategic AI adoption to enhance democratic processes.
AB - Artificial Intelligence (AI) increasingly reshapes public sector activities, raising critical questions about its impact on traditional democratic accountability. Despite recent advances in unpacking the multiple forms and dimensions of accountability, limited attention has been given to how diverse AI forms - beyond their perception as a singular phenomenon - uniquely contribute to accountability challenges. This paper examines how different AI schools (such as connectionist and symbolic) affect accountability in public governance, using the AI, Algorithmic and Automation Incidents and Controversies (AIAAIC) repository to analyze 115 real-world public sector AI incidents. Our methodology clusters cases by AI school to identify recurring accountability issues: redundancy (unnecessary accountability efforts), overlaps (competing accountability demands), and blind-spots (insufficient or no evident accountability). Our findings reveal that connectionist systems dominate public sector deployments and are often linked to transparency issues. Less commonly deployed AI forms, such as symbolic or analogizer systems, may better align with public governance principles under certain conditions. This study highlights compatibility issues between AI forms and accountability dimensions, emphasizing how algorithmic design choices significantly shape governance outcomes. By addressing these challenges, the paper advances understanding of AI accountability in public administration and reinforces the need for strategic AI adoption to enhance democratic processes.
KW - public sector
KW - accountability
KW - artificial intelligence
KW - algorithmic accountability
KW - AIAAIC repository
U2 - 10.1080/15309576.2025.2493889
DO - 10.1080/15309576.2025.2493889
M3 - Article
SN - 1557-9271
JO - Public Performance & Management Review
JF - Public Performance & Management Review
ER -