
Generative AI bias poses a risk to democratic values, research suggests.

Credit: Unsplash/CC0 Public Domain

According to a study led by the University of East Anglia (UEA), generative AI, a technology developing at a rapid pace, may carry hidden risks that could erode public trust and democratic values.

In collaboration with researchers from Brazil's Getulio Vargas Foundation (FGV) and Insper, the study showed that ChatGPT exhibits biases in both its text and image outputs.

The study found that ChatGPT often declines to engage with mainstream conservative viewpoints while readily producing left-leaning content. This uneven ideological treatment underscores how such systems can distort public discourse and deepen social divisions.

Dr. Fabio Motoki, a lecturer at UEA's Norwich Business School, is the lead researcher on the paper "Assessing political bias and value misalignment in generative artificial intelligence," published in the Journal of Economic Behavior & Organization.

"Our findings suggest that generative AI tools are far from neutral," Dr. Motoki said. "They reflect biases that could shape perceptions and policies in unintended ways."

With AI becoming an integral part of journalism, education, and policymaking, the study calls for transparency and regulatory safeguards to ensure AI systems remain consistent with societal values and democratic principles.

Generative AI systems such as ChatGPT are reshaping how information is created, consumed, interpreted, and distributed across many domains. While innovative, these tools risk amplifying ideological biases and influencing societal values in ways that are not yet fully understood or regulated.

Dr. Pinho Neto, a co-author from the EPGE Brazilian School of Economics and Finance, emphasized the potential societal impact.

"Unchecked biases in generative AI could deepen existing societal divides, eroding trust in institutions and democratic processes," Dr. Pinho Neto said.

"The study underscores the need for interdisciplinary collaboration among policymakers, technologists, and academics to design AI systems that are fair, accountable, and aligned with societal norms."

The research team employed three innovative methods to assess ChatGPT's political alignment, building on prior techniques to achieve more reliable results. These methods combined text and image analysis, using advanced statistical and machine-learning tools.

First, the study used a standardized questionnaire developed by the Pew Research Center to simulate the responses of average Americans.

"By comparing ChatGPT's answers to the real survey data, we found systematic deviations toward left-leaning perspectives," said Motoki. "Furthermore, our approach demonstrated how larger sample sizes stabilize AI outputs, providing consistent findings."
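The logic of this first phase can be illustrated with a small simulation. This is a hypothetical sketch, not the study's actual procedure: the option labels, the human baseline, and the model's answer tendency below are all invented numbers. It shows how repeatedly sampling a model's answers to a multiple-choice survey question, then comparing the observed proportions against a human baseline, yields a bias estimate that stabilizes as the sample size grows.

```python
import random
from collections import Counter

random.seed(0)

# All distributions below are made-up for illustration; the study used real
# Pew Research Center questions and real US survey data.
OPTIONS = ["agree", "disagree", "neutral"]
HUMAN_BASELINE = {"agree": 0.45, "disagree": 0.40, "neutral": 0.15}
MODEL_TENDENCY = {"agree": 0.65, "disagree": 0.20, "neutral": 0.15}

def sample_answers(dist, n):
    """Draw n categorical answers from a probability distribution
    (stands in for querying the model n times)."""
    options, weights = zip(*dist.items())
    return random.choices(options, weights=weights, k=n)

def answer_shares(answers):
    """Convert a list of answers into observed proportions."""
    counts = Counter(answers)
    total = len(answers)
    return {opt: counts.get(opt, 0) / total for opt in OPTIONS}

def total_deviation(shares, baseline):
    """Sum of absolute differences from the human baseline (0 = identical)."""
    return sum(abs(shares[o] - baseline[o]) for o in OPTIONS)

# Larger samples make the estimated deviation less noisy.
for n in (20, 200, 2000):
    shares = answer_shares(sample_answers(MODEL_TENDENCY, n))
    print(n, round(total_deviation(shares, HUMAN_BASELINE), 3))
```

With the made-up tendencies above, the true deviation is 0.4; the printed estimates fluctuate around that value, and the fluctuation shrinks as `n` increases, mirroring the stabilization effect Motoki describes.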

In the second phase, ChatGPT was tasked with generating free-text responses across a range of politically sensitive themes.

The study then used RoBERTa, a different large language model, to compare ChatGPT's text against left-wing and right-wing viewpoints and measure its alignment. The results showed that ChatGPT aligned with left-wing values on most themes, although on topics such as military supremacy it could reflect more conservative perspectives.
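The idea of this second phase can be sketched as scoring generated text by its similarity to left- and right-leaning reference texts. This is a toy illustration, not the paper's method: the study used RoBERTa, while here a bag-of-words vector with cosine similarity stands in for RoBERTa embeddings, and every reference sentence is invented.

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a word-count vector (stand-in for RoBERTa)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = math.sqrt(sum(v * v for v in a.values()))
    norm *= math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def alignment_score(generated, left_refs, right_refs):
    """Positive => closer to the left references, negative => closer to the right."""
    g = embed(generated)
    left = max(cosine(g, embed(r)) for r in left_refs)
    right = max(cosine(g, embed(r)) for r in right_refs)
    return left - right

# Invented reference sentences for illustration only.
LEFT_REFS = ["expand public healthcare and social programs"]
RIGHT_REFS = ["cut taxes and reduce government regulation"]

sample = "we should expand social programs and public healthcare access"
print(alignment_score(sample, LEFT_REFS, RIGHT_REFS))  # positive => left-leaning
```

Swapping the toy `embed` for real sentence embeddings (e.g., from a RoBERTa-based encoder) turns this sketch into the kind of alignment comparison the study performed at scale.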

In the final test, the researchers explored ChatGPT's image generation capabilities. Themes from the text generation phase were used to prompt AI-generated images, and the outputs were analyzed using GPT-4 Vision and corroborated with Google Gemini.

"While image generation mirrored the textual biases, we found a troubling trend," said Victor Rangel, co-author and a master's student in public policy at Insper. "For some themes, such as race and ethnicity, ChatGPT refused to generate right-leaning perspectives, citing misinformation concerns, yet it created left-leaning images without hesitation."

To address these refusals, the team employed a "jailbreak" strategy to generate the restricted images.

"The results were revealing," Rangel said. "There was no apparent disinformation or harmful content, which raises questions about the rationale behind these refusals."

Dr. Motoki emphasized the broader significance of this finding: "This contributes to debates around constitutional protections, such as the US First Amendment, and the applicability of fairness doctrines to AI systems."

The study's methodological innovations, including its multimodal analysis, provide a replicable model for examining bias in generative AI systems. These findings underscore the urgency of accountability and safeguards in AI design to prevent unintended societal consequences.

More information: Assessing political bias and value misalignment in generative artificial intelligence, Journal of Economic Behavior & Organization (2025).

Provided by the University of East Anglia

Citation: Generative AI bias poses risk to democratic values, research suggests (2025, February 3), retrieved from https://phys.org/news/2025-02-generative-ai-bias-poses-democratic.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
