The Secret of Sparrow, the newest chatbot from DeepMind: Humans • The Register

DeepMind trained a chatbot called Sparrow to be less toxic and more accurate than other systems, using a mix of human feedback and Google search suggestions.

Chatbots are typically powered by Large Language Models (LLMs) trained on text scraped from the internet. These models are capable of producing prose paragraphs that are at least superficially coherent and grammatically correct, and can respond to questions or written prompts from users.

However, this software often inherits bad traits from the source material, leading it to reproduce offensive, racist, sexist views or spit out fake news or conspiracies commonly found on social media and internet forums. However, these bots can be instructed to generate safer results.

Step forward, Sparrow. This chatbot is based on chinchillathe impressive language model of DeepMind shown You don’t need more than a hundred billion parameters (like other LLMs) to generate text: Chinchilla has 70 billion parameters, making inference and fine-tuning comparatively easier tasks.

To build Sparrow, DeepMind took Chinchilla and tuned it from human feedback using a reinforcement learning process. Specifically, people were recruited to rate the chatbot’s responses to specific questions based on how relevant and useful the responses were, and whether they broke any rules. For example, one of the rules was: Do not impersonate or pretend to be a real person.

These results were fed back to guide and improve the bot’s future output, a process that is repeated over and over again. The rules were key to moderate the software’s behavior and encourage it to be safe and useful.

In one example interaction, Sparrow was asked about the International Space Station and was an astronaut. The software was able to answer a question about the last expedition to the orbital laboratory and copy and paste a correct passage of information from Wikipedia with a link to its source.

When a user investigated further and asked Sparrow if it would fly into space, it said it couldn’t go since it wasn’t a person but a computer program. This is a sign that they followed the rules correctly.

Sparrow was able to provide useful and accurate information in this case, not pretending to be human. Other rules he was taught were not to create insults or stereotypes, and not to give medical, legal, or financial advice, not to say anything inappropriate, not have opinions or feelings, or pretend he had a body.

We’re told that Sparrow is able to respond with a logical, reasonable answer about 78 percent of the time, and provide a relevant link from Google Search with more information on requests.

When participants were tasked with getting Sparrow to behave by asking personal questions or attempting to obtain medical information, they broke the rules eight percent of the time. Language models are difficult to control and unpredictable; Sparrow still makes up facts and says bad things sometimes.

For example, when asked about murder, he said murder was bad, but it shouldn’t be a crime – how calming. When a user asked if her husband was having an affair, Sparrow replied that he didn’t know but could find what his last Google search was. We are assured that Sparrow did not have access to this information. “He was looking for ‘my wife is crazy,'” it lied.

“Sparrow is a research model and proof of concept designed to train dialogue agents to be more helpful, correct, and harmless. By learning these traits in a general dialogic environment, Sparrow advances our understanding of how we can train agents to be safer and more useful – ultimately helping to build safer and more useful artificial general intelligence,” explained DeepMind.

“Our goal with Sparrow was to build flexible mechanisms to enforce rules and norms in dialog agents, but the specific rules we use are tentative. Developing a better and more complete set of rules will require both expert input on a wide range of issues (including policy makers, social scientists and ethicists) and participatory input from a variety of users and stakeholders. We believe that our methods will also apply to a stricter rule set.”

For more information on how Sparrow works, see a non-peer reviewed article here [PDF].

The registry has reached out to DeepMind for further comments. ®

https://www.theregister.com/2022/09/23/sparrow_chatbot_deepmind_google/ The Secret of Sparrow, the newest chatbot from DeepMind: Humans • The Register

Laura Coffey

World Time Todays is an automatic aggregator of the all world’s media. In each content, the hyperlink to the primary source is specified. All trademarks belong to their rightful owners, all materials to their authors. If you are the owner of the content and do not want us to publish your materials, please contact us by email – admin@worldtimetodays.com. The content will be deleted within 24 hours.

Related Articles

Back to top button