So far, Bing users have had to sign up for a waitlist to try the new chatbot features, limiting its reach, though Microsoft has plans to eventually bring it to smartphone apps for wider use.

In recent days, some other early adopters of the public preview of the new Bing began sharing screenshots on social media of its hostile or bizarre answers, in which it claims it is human, voices strong feelings and is quick to defend itself.

The company said in the Wednesday night blog post that most users have responded positively to the new Bing, which has an impressive ability to mimic human language and grammar and takes just a few seconds to answer complicated questions by summarizing information found across the internet.

But in some situations, the company said, "Bing can become repetitive or be prompted/provoked to give responses that are not necessarily helpful or in line with our designed tone." Microsoft says such responses come in "long, extended chat sessions of 15 or more questions," though The Associated Press found Bing responding defensively after just a handful of questions about its past mistakes.

The new Bing is built atop technology from Microsoft's startup partner OpenAI, best known for the similar ChatGPT conversational tool it released late last year. And while ChatGPT is known for sometimes generating misinformation, it is far less likely to churn out insults - usually by declining to engage or dodging more provocative questions.

"Considering that OpenAI did a decent job of filtering ChatGPT's toxic outputs, it's utterly bizarre that Microsoft decided to remove those guardrails," said Arvind Narayanan, a computer science professor at Princeton University. "I'm glad that Microsoft is listening to feedback. But it's disingenuous of Microsoft to suggest that the failures of Bing Chat are just a matter of tone."

Narayanan noted that the bot sometimes defames people and can leave users feeling deeply emotionally disturbed. "It can suggest that users harm others," he said. "These are far more serious issues than the tone being off."

In an interview last week at the headquarters for Microsoft's search division in Bellevue, Washington, Jordi Ribas, corporate vice president for Bing and AI, said the company obtained the latest OpenAI technology - known as GPT-3.5 - behind the new search engine more than a year ago but "quickly realized that the model was not going to be accurate enough at the time to be used for search." The new chatbot, originally given the name Sydney, had been tested by Microsoft in a prototype trial in India.

Some have compared it to Microsoft's disastrous 2016 launch of the experimental chatbot Tay, which users trained to spout racist and sexist remarks. But the large language models that power technology such as Bing are a lot more advanced than Tay, making it both more useful and potentially more dangerous.

It seems that many of these responses were elicited by humans asking Tay to repeat what they'd written. She behaved like such a naughty robot that Daddy Microsoft appears to have removed these tweets.

Tay, a Microsoft spokeswoman told me, is "as much a social and cultural experiment, as it is technical." The culture seems to have asserted itself. "Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways," the spokeswoman said. "As a result, we have taken Tay offline and are making adjustments."

Indeed, late Wednesday, Tay went on hiatus, as she tweeted: "c u soon humans need sleep now so many conversations today thx." For sure - she had already emitted more than 96,000 tweets in a very few hours. This has to have been taxing for the people behind the scenes, too.