October 2, 2023

Gender bias seen in AI-generated content on leadership

Generative AI learns patterns in the data it is trained on and then creates content with similar characteristics, relying on machine learning techniques…reports Asian Lite News

New research has revealed an inherent gender bias in the content – text, images, other media – generated by artificial intelligence (AI).

Analysing AI-generated content about what made a ‘good’ or ‘bad’ leader, researchers at the University of Tasmania, Australia, and Massey University, New Zealand, found that men were consistently depicted as strong, courageous, and competent, while women were often portrayed as emotional and ineffective.

Thus, AI-generated content can preserve and perpetuate harmful gender biases, they said in their study published in the journal Organizational Dynamics.

“Any mention of women leaders was completely omitted in the initial data generated about leadership, with the AI tool providing zero examples of women leaders until it was specifically asked to generate content about women in leadership.

“Concerningly, when it did provide examples of women leaders, they were proportionally far more likely than male leaders to be offered as examples of bad leaders, falsely suggesting that women are more likely than men to be bad leaders,” said Toby Newstead, the study’s corresponding author.

Generative AI learns patterns in the data it is trained on and then creates content with similar characteristics, relying on machine learning techniques.

These generative AI technologies are trained on vast amounts of data from the internet, with human intervention used to reduce harmful or biased outputs.

Therefore, AI-generated content needs to be monitored to ensure it does not propagate harmful biases, said study author Bronwyn Eager, adding that the findings highlighted the need for further oversight and investigation into AI tools as they become part of daily life.

“Biases in AI models have far-reaching implications beyond just shaping the future of leadership. With the rapid adoption of AI across all sectors, we must ensure that potentially harmful biases relating to gender, race, ethnicity, age, disability, and sexuality aren’t preserved,” she said.

“We hope that our research will contribute to a broader conversation about the responsible use of AI in the workplace,” said Eager.
