Google CEO Sundar Pichai has promised to upgrade Bard following criticism of the chatbot.
“We clearly have more capable models,” Pichai said on The New York Times’ Hard Fork podcast. “Pretty soon, maybe as this [podcast] goes live, we will be upgrading Bard to some of our more capable PaLM models, which will bring more capabilities—be it in reasoning, coding, or answering maths questions better. So you will see progress over the course of next week.”
“We raced a souped-up Civic against more powerful cars.”
Bard is currently powered by LaMDA, an AI language model built for conversation that Pichai described as “lightweight and efficient.” “I feel like we took a souped-up Civic and put it in a race with more powerful cars,” he said. PaLM, by contrast, is a newer and larger language model that Google believes performs better on tasks like common-sense reasoning and coding problems.
Bard was released to the public on March 21 but has attracted less attention than OpenAI’s ChatGPT and Microsoft’s Bing chatbot. It also fared poorly in The Verge’s testing: like any general-purpose chatbot it can answer a wide range of questions, but its replies are generally less fluent and imaginative, and it fails to reliably cite data sources.
Pichai suggested that Bard’s limited capabilities are the result of Google’s caution. “To me, it was important to not put [out] a more capable model before we can fully make sure we can handle it well,” he said.
Pichai also confirmed that he has been discussing the work with Google co-founders Larry Page and Sergey Brin (“Sergey has been hanging out with our engineers for a while now”), and that while he never issued the infamous “code red” to accelerate development, others in the company may have “sent emails saying there is a code red.”
Pichai also addressed concerns that AI development is moving too fast and could pose a threat to society. Many AI and tech experts have warned of a dangerous race dynamic between OpenAI, Microsoft, and Google, and this week Elon Musk and a number of prominent AI researchers signed an open letter calling for a six-month pause on the development of the most powerful AI systems.
“This needs a lot more debate; no one knows all the answers.”
“In this area, I think it’s important to hear concerns,” Pichai said of the open letter urging a pause, adding that the risks are worth worrying about. “AI is too important an area not to regulate,” he said, though he suggested applying existing rules in domains like privacy and healthcare rather than creating new AI-specific legislation.
Some experts worry about chatbots’ tendency to spread misinformation, while others warn of more existential risks: that these systems are so difficult to control that, once connected to the internet, they could be put to destructive use. Some also argue that current programs are approaching artificial general intelligence (AGI), systems capable of performing most tasks humans can.
“It is so clear to me that these systems are going to be very capable, and so it almost doesn’t matter whether you’ve reached AGI or not,” Pichai said. “Can AI produce disinformation at scale? Yes. Is it AGI? It really doesn’t matter. Why do we need to worry about AI safety? Because you have to anticipate this and evolve to meet that moment.”