
Virtual robots trained using popular artificial intelligence show signs of RACISM and sexism as fears grow over a technology TAKEOVER

TECHNOLOGY is expanding, and new findings about AI-powered virtual robots could explain why some are so afraid of technology taking over.

A study published last month shows that new AI-trained robots exhibited biases that could prove extremely harmful.

Scientists tasked robots with sorting through billions of images with associated captions. Credit: Getty

Institutions including Johns Hopkins University and the Georgia Institute of Technology released a study last month arguing that “robots impart pernicious stereotypes.”

The research shows that AI algorithms tend to exhibit biases built into their training that could unfairly target people of color and women.

In a recent experiment, researchers tasked virtual robots with sorting through billions of images with associated captions.

The robots repeatedly paired the word “criminal” with images of black men’s faces.


The bots also reportedly associated words like “homemaker” and “janitor” with images of women and people of color.

Researcher Andrew Hundt said: “The robot learned toxic stereotypes through these faulty neural network models.”

He added: “We’re in danger of creating a generation of racist and sexist robots, but people and organizations have decided it’s okay to create these products without solving the problems.”

The researchers found that their robot was 8% more likely to choose men for each task. It was also more likely to choose white and Asian men.

Black women were chosen the least in each category.


While many worry that biased robots like these could one day enter homes, the researchers hope that companies will work to diagnose and fix the technological flaws that have led to harmful biases.

Researcher William Agnew of the University of Washington added: “Although many marginalized groups are not included in our study, any such robotic system should be assumed to be dangerous to marginalized groups until proven otherwise.”
