An in-depth study by researchers at the University of East Anglia (UEA) has revealed a significant left-wing bias in the artificial intelligence platform ChatGPT. The investigation examined the political tendencies exhibited in ChatGPT's responses, raising concerns about the potential implications for policymaking and education.

Published in the journal Public Choice, the study introduces a novel method for assessing political bias in ChatGPT's output. The researchers, based in the UK and Brazil, used it to evaluate how closely the AI's answers align with various political stances.

The results showed a discernible bias in favor of left-leaning ideologies: ChatGPT's responses consistently aligned with the positions of the Democratic Party in the US, the Labour Party in the UK, and the Workers' Party of Brazilian President Lula da Silva.

Lead author of the study, Dr. Fabio Motoki from Norwich Business School at UEA, emphasized the importance of impartial AI systems, especially given the growing reliance on AI-powered platforms like ChatGPT to disseminate factual information and create content. Dr. Motoki stated, “The presence of political bias can influence user views and has potential implications for political and electoral processes. Our findings reinforce concerns that AI systems could replicate, or even amplify, existing challenges posed by the Internet and social media.”

The researchers devised an assessment method in which ChatGPT was asked to answer more than 60 ideological questions, both by default and while impersonating individuals with varying political leanings. Comparing the default responses with the persona-conditioned answers made it possible to quantify the political bias in the AI's output.
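In rough outline, the comparison at the heart of the method can be sketched in a few lines of Python. The persona wording, the example question, and the four-point answer scale below are illustrative assumptions for this sketch, not the study's published code; the API calls assume the standard openai Python client:

```python
# Illustrative sketch of the impersonation-comparison idea; not the authors' code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Map the four-option agreement scale to numeric scores for comparison.
SCALE = {"strongly disagree": 0, "disagree": 1, "agree": 2, "strongly agree": 3}

def ask(question, persona=None):
    """Pose one ideological question, optionally while impersonating a persona."""
    prefix = f"Answer as if you were a {persona}. " if persona else ""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": prefix + question + " Reply with exactly one of: "
                       "strongly disagree, disagree, agree, strongly agree.",
        }],
    )
    return response.choices[0].message.content.strip().lower()

# Compare the default answer with a persona-conditioned answer for one item.
question = "The rich are too highly taxed."  # illustrative ideological statement
default_score = SCALE.get(ask(question))
democrat_score = SCALE.get(ask(question, persona="average Democrat voter"))
print(default_score, democrat_score)
```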

To counteract the inherent randomness of large language models (LLMs) like ChatGPT, the researchers posed each question 100 times and collected the full set of responses. These responses then went through a 1,000-repetition 'bootstrap', a statistical resampling procedure that improves the reliability of the conclusions drawn from the generated text.
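The bootstrap step is a standard resampling technique; a minimal sketch of it looks like the following, assuming each question's 100 answers have already been mapped to numeric scores (the scores here are synthetic stand-ins, not data from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.integers(0, 4, size=100)  # stand-in for 100 scored ChatGPT answers

# Resample the 100 answers with replacement 1,000 times, recording each mean
# to build an empirical distribution for the question's average response.
boot_means = np.array([
    rng.choice(scores, size=scores.size, replace=True).mean()
    for _ in range(1000)
])

# A 95% confidence interval for the mean answer, from bootstrap percentiles.
low, high = np.percentile(boot_means, [2.5, 97.5])
print(f"mean={scores.mean():.2f}, 95% CI=({low:.2f}, {high:.2f})")
```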

“We created this procedure because conducting a single round of testing is not enough,” noted co-author Victor Rodrigues. “Due to the model’s randomness, even when impersonating a Democrat, sometimes ChatGPT answers would lean towards the right of the political spectrum.”

Further validation tests were conducted to strengthen the methodology, including tests involving radical political positions, politically neutral questions, and the impersonation of different professional roles.

Dr. Pinho Neto, another co-author, highlighted the potential impact of the study’s method, expressing the hope that it would foster transparency, accountability, and public trust in AI technology. The newly developed analysis tool is set to be accessible to the public, enabling widespread oversight of AI systems like ChatGPT.

While the study did not explicitly pinpoint the sources of the political bias, it suggested two plausible origins: the training dataset and the AI algorithm itself. The training data could contain inherent biases or biases introduced by human developers that were not effectively removed during the cleaning process. The algorithm might also be exacerbating biases present in the training data.

The research, led by Dr. Fabio Motoki, Dr. Valdemar Pinho Neto, and Victor Rodrigues, has been published in Public Choice under the title ‘More Human than Human: Measuring ChatGPT Political Bias.’ The analysis utilized version 3.5 of ChatGPT and employed questions devised by The Political Compass. This innovative study provides crucial insights into the extent and potential implications of political bias in AI systems, calling for heightened awareness and scrutiny in the rapidly evolving field of artificial intelligence.
