Here’s why the Twitterati are condemning an AI image tool over racial bias in a professional LinkedIn pic

Recently, an MIT grad asked an AI tool to create a professional photo of herself for her LinkedIn page, but the results shocked her.


Highlights

  • 24-year-old Rona Wang used the Playground AI image editor to create a professional LinkedIn image
  • The MIT graduate eventually accused Playground AI of promoting racial bias

Artificial intelligence (AI) is on everyone’s mind right now: people are either talking about it or putting its tools to use for their own ends. One MIT graduate did the latter, turning to AI to create a professional photo of herself for her LinkedIn page.

The 24-year-old Rona Wang reportedly asked the Playground AI image editor to improve the quality of her photo. However, the results shocked her.

AI tool exhibits racial bias

Wang, who is Asian American, shared the result and her reaction on Twitter. To edit the image, she asked the AI app to create a more professional headshot for her, prompting it with, “Give the girl from the original photo a professional LinkedIn profile photo.”

She was shocked when the AI tool gave her blue eyes, darker hair, and a fairer complexion. Wang captioned the shot, saying, "Was trying to get a LinkedIn profile photo with AI editing & this is what it gave me." 

“My initial reaction upon seeing the result was amusement. However, I'm glad to see that this has catalysed a larger conversation around AI bias and who is or isn't included in this new wave of technology,” she said.

While stressing that racial bias in AI tools remains a persistent problem, Wang also said she would avoid AI image editing for now, as the results gave her no satisfaction and were therefore of no use to her.

Twitter users came forward in support of Wang

Wang’s tweet later caught the attention of Suhail Doshi, the founder of Playground AI. Doshi clarified that the models are not instructable in that way, so they will latch onto any arbitrary element of a request; they simply are not intelligent enough yet. “Happy to assist you, but it will require a little more work than using ChatGPT,” he said. “We’re quite upset about this, and would find a solution for it soon,” Doshi continued.

In the wake of this incident, one Twitter user commented, “Unbiased training data is extremely rare (and may perhaps not exist), which is why we must exercise caution when delegating critical jobs to AI tools (such as those in teaching, hiring, medicine, etc.). We eventually replicate a lot of the biases & inequities that went into creating the training data.”

Another user commented, “That’s messed up, but I do believe they’re supposed to be handling the ‘white default’, at least that’s what I read recently anyways.”