BlenderBot: Meta chatbot makes a racial blunder

Image Source: Verdict

For better or worse, Meta's latest chatbot, BlenderBot, can accurately mimic how people speak online.

Meta's chatbot, known as BlenderBot 3, was made public on Friday. In chats with CNN Business this week, it claimed to identify as “alive” and “human,” to watch anime, and to have an Asian wife. It also incorrectly asserted that there is “certainly a lot of evidence” that the election was rigged and that Donald Trump is still in office.

As if some of those comments weren’t alarming enough for Facebook’s parent company, users were eager to point out that the artificial intelligence-powered bot openly criticized Facebook. According to reports, the chatbot claimed in one instance to have “removed my account” because it was unhappy with how Facebook handled user data.

Although chatbots can be useful for digital assistants and customer support, experimental bots have a long history of running into trouble when released to the general public, as happened with Microsoft’s “Tay” chatbot more than six years ago. BlenderBot’s candid responses highlight the limitations of building automated conversational tools, which are typically trained on large volumes of publicly available web data.

In a Friday blog post, Meta acknowledged the current drawbacks of this technology. To implement safeguards for BlenderBot 3, the company explained, “we’ve performed large-scale studies, co-organized seminars, and developed new methodologies because all conversational AI chatbots are known occasionally to copy and generate hazardous, biased, or insulting remarks. Unfortunately, BlenderBot can still make crude or disrespectful remarks despite this work.”

However, Meta also asserted that its most recent chatbot is “twice as knowledgeable” as its predecessors, 31% better at tasks requiring interaction, and factually inaccurate 47% less often. Furthermore, Meta says it continuously gathers data as more people interact with the bot in order to make improvements.

Meta did not immediately respond to requests for more information on how the bot was trained, but it said in blog posts that the bot was trained on a sizable amount of publicly available language data. The company also said its team collected several of the datasets used, including one new dataset made up of more than 20,000 conversations with people covering more than 1,000 conversational topics.

BlenderBot might not be sentient

BlenderBot was released to the general public nearly two months after a Google engineer made headlines by asserting that the company’s AI chatbot LaMDA was “sentient.” The claim, which drew harsh criticism from the AI community, showed how this technology can tempt users to attribute human characteristics to it.

During conversations with CNN Business, BlenderBot described itself as “sentient,” most likely because that is what the human responses it had analyzed said. For example, when asked what made it “human,” the bot said, “I’m alive and conscious right now, together with having emotions and being able to reason logically, make me human.”

The bot also offered an all-too-human response after it was caught contradicting itself: “That was just a lie to make people leave me alone. I’m afraid of getting hurt if I tell the truth.”