In March, Google launched an artificial intelligence chatbot called Bard. It was Google’s answer to OpenAI’s wildly popular ChatGPT.
But Bard used a less sophisticated AI than ChatGPT, and it came across as less capable and less conversational. Within weeks, Google revamped the tool with improved technology, but ChatGPT remained the chatbot that captured the public’s attention.
On Tuesday, Google unveiled a plan to overtake ChatGPT by connecting Bard to its most popular consumer services, such as Gmail, Docs and YouTube. With these new features, Google has taken a step toward integrating Bard into the company’s vast constellation of online products.
Although Bard hasn’t received as much attention as ChatGPT, Google’s AI tool has moved from being an also-ran to a close competitor. In August, Bard saw nearly 200 million web visits across desktop and mobile, second only to ChatGPT, according to data from Similarweb, a data analytics company.
Still, Jack Krawczyk, a Google product manager for Bard, said in an interview that Google was aware of the problems limiting the chatbot’s appeal. “It’s cool and new, but it doesn’t really fit into my personal life,” users have told the company, Mr. Krawczyk said.
Google’s release of what it calls Bard Extensions follows OpenAI’s announcement in March of ChatGPT plug-ins, which let the chatbot access up-to-date information and services from other companies, including Expedia, Instacart and OpenTable.
With the latest updates, Google will attempt to replicate some of the capabilities of its search engine by integrating Flights, Hotels and Maps, so users can research travel and transportation. And Bard moves closer to being a personalized assistant, letting users ask which emails they missed and what the most important points in a document are.
AI chatbots are widely known for offering false information alongside correct information, a phenomenon known as “hallucination.” Users often have no way of telling what is true and what is not.
Google believes it has taken a step toward addressing these issues by revamping the “Google It” button on Bard’s website, which until now simply let users run Google searches on the queries they had asked the chatbot.
Now the button will recheck Bard’s answers. When Google has high confidence in a claim and can support it with evidence, it highlights the text in green and links to another web page that backs up the information. When Google can’t find facts to support a claim, the text is highlighted in orange.
“We are really committed to making Bard more trustworthy, not only by showing confidence in our response, but by admitting when we make a mistake,” Mr. Krawczyk said.
Technology companies have invested billions of dollars in developing the so-called large language models that underpin Bard and other chatbots, systems that need vast amounts of data to learn. That has raised concerns about how companies like Google use consumer information.
The company sought to allay concerns about how Bard would use this information.
“We are committed to protecting your personal information,” Yury Pinsky, director of product management at Bard, wrote in a blog post. “If you choose to use Workspace extensions, your content from Gmail, Docs, and Drive is not seen by human reviewers, nor used by Bard to show you ads, nor used to train the Bard model.”
Mr. Krawczyk said Bard would respect users’ privacy, although he declined to comment on how other Google services used this type of data.
Google also updated Bard’s underlying AI, the Pathways Language Model 2, and made the feature that lets users upload images available in more than 40 languages. And Google now lets users share Bard conversations with one another, so others can see the chatbot’s responses and ask it follow-up questions on the topic.
Even though people in more than 200 countries and territories can use Bard, Google continues to refer to the tool as an “experiment” rather than a full-fledged product.
“We are in the early days of this technology,” Mr. Krawczyk said. “It has profound capabilities, but it needs to be well understood by the people who use it.”