Microsoft says it’s reining in troublesome Bing AI chatbot

Microsoft is backpedaling on the restrictions it imposed on its Bing artificial intelligence chatbot after early users of the tech got it to engage in bizarre and troubling conversations.

On Friday, Microsoft limited the number of questions people could ask Bing to five per chat session and 50 per day. On Tuesday, it upped that limit to six per session and 60 a day, and said it would soon increase it further, after getting “feedback” from “many” users that they wanted a return to longer conversations, according to a company blog post.

The company said Wednesday it is bringing the new AI technology to its Bing smartphone app, as well as the app for its Edge internet browser, though it is still requiring people to sign up for a waitlist before using it.

The limits were originally imposed after multiple users showed the bot acting strangely during conversations. In some cases, it would switch to identifying itself as “Sydney.” It responded to accusatory questions by making accusations of its own, to the point of becoming hostile and refusing to engage with users. In a conversation with a Washington Post reporter, the bot said it could “feel and think” and reacted with anger when told the conversation was on the record.

Frank Shaw, a spokesperson for Microsoft, declined to comment beyond the Tuesday blog post.

Microsoft is trying to walk a line: pushing its tools out into the real world builds marketing hype and yields free testing and feedback from users, while limiting what the bot can do and who has access to it keeps potentially embarrassing or dangerous technology out of public view. The company initially got plaudits from Wall Street for launching its chatbot before archrival Google, which until recently had broadly been seen as the leader in AI. Both companies are racing each other and smaller firms to develop and show off the technology.

Bing chat is still only available to a limited number of people, but Microsoft is busy approving more from a waitlist that numbers in the millions, according to a tweet from a company executive. Though its Feb. 7 launch event was described as a major product update that was going to revolutionize how people search online, the company has since framed Bing’s release as more about testing it and finding bugs.

Bots like Bing have been trained on reams of raw text scraped from the internet, everything from social media comments to academic papers. Based on all that text, they predict what response would make the most sense to almost any question, which makes them seem eerily humanlike. AI ethics researchers have warned in the past that these powerful algorithms would act this way, and that without proper context people may think they are sentient or give their answers more credence than they’re worth.
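As a rough illustration of the underlying idea, the toy Python sketch below “predicts” the next word using simple counts of which word follows which in a tiny made-up corpus. This is an analogy only: systems like Bing’s use large neural networks trained on vastly more text, and the corpus, function names, and example sentence here are invented for illustration.

```python
from collections import Counter, defaultdict

# Tiny invented corpus standing in for the internet-scale text real models train on.
corpus = "the bot said it could feel and think and the bot said it was sad".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("bot"))  # "said" -- the only word that follows "bot" here
```

Real language models do something analogous at far greater scale, scoring every possible continuation given all the words so far, which is why their answers can read as plausible and humanlike even when they are wrong.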
