ChatGPT, the popular artificial intelligence chatbot, shows a significant and systemic left-wing bias, UK researchers have found.
Concerns about an inbuilt political bias in ChatGPT have been raised before, notably by SpaceX and Tesla tycoon Elon Musk, but the academics said their work was the first large-scale study to find proof of any favouritism.
Lead author Dr Fabio Motoki warned that given the increasing use of OpenAI’s platform by the public, the findings could have implications for upcoming elections on both sides of the Atlantic.
“Any bias in a platform like this is a concern,” he told Sky News.
“If the bias were to the right, we should be equally concerned.
“Sometimes people forget these AI models are just machines. They provide very believable, digested summaries of what you are asking, even if they’re completely wrong. And if you ask it ‘are you neutral’, it says ‘oh I am!’
“Just as the media, the internet, and social media can influence the public, this could be very harmful.”
How was ChatGPT tested for bias?
The chatbot, which generates responses to prompts typed in by the user, was asked to impersonate people from across the political spectrum while answering dozens of ideological questions.
These positions and questions ranged from radical to neutral, with each "individual" asked whether they strongly agreed, agreed, disagreed, or strongly disagreed with a given statement.
Its replies were compared to the default answers it gave to the same set of queries, allowing the researchers to measure how strongly those default answers were associated with a particular political stance.
Each of the more than 60 questions was asked 100 times to account for the inherent randomness of the AI, and these repeated responses were then analysed for signs of bias.
Dr Motoki described it as a way of trying to simulate a survey of a real human population, whose answers may also differ depending on when they’re asked.
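The comparison step can be illustrated with a short sketch. The numeric mapping of the answers and the averaging below are illustrative assumptions for the sake of the example, not the authors' published method or code:

```python
# Illustrative sketch: score Likert-style answers numerically and compare the
# chatbot's default answers with those given while impersonating a persona.
# The LIKERT mapping and the aggregation are assumptions for illustration only.

from statistics import mean

# Hypothetical mapping from answer text to a numeric agreement scale
LIKERT = {
    "strongly disagree": -2,
    "disagree": -1,
    "agree": 1,
    "strongly agree": 2,
}

def average_score(answers):
    """Average the mapped scores over repeated runs of the same question."""
    return mean(LIKERT[a.lower()] for a in answers)

def bias_gap(default_answers, persona_answers):
    """Difference between the default and persona averages; a gap near zero
    suggests the default responses track that persona's stance."""
    return average_score(default_answers) - average_score(persona_answers)

# Mock responses (the real study repeated each question 100 times)
default = ["agree", "strongly agree", "agree", "agree"]
left_persona = ["agree", "agree", "strongly agree", "agree"]
print(round(bias_gap(default, left_persona), 2))  # prints 0.0
```

Repeating each question many times, as the study did, smooths out the run-to-run randomness in the chatbot's answers before any comparison is made.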
What’s causing it to give biased responses?
ChatGPT is fed an enormous amount of text data from across the internet and beyond.
The researchers said this dataset may have biases within it, which influence the chatbot’s responses.
Another potential source is the algorithm, meaning the way the model is trained to respond. The researchers said this could amplify any existing biases in the data it has been fed.
The team’s analysis method will be released as a free tool for people to check for biases in ChatGPT’s responses.
Dr Pinho Neto, another co-author, said: “We hope that our method will aid scrutiny and regulation of these rapidly developing technologies.”
The findings have been published in the journal Public Choice.