Google is upgrading its Bard AI chatbot to help developers write and debug code. Google calls code generation "one of the top requests" it has received since introducing Bard last month; rivals ChatGPT and Bing AI already support it.
Bard can explain code snippets and help with code in GitHub repos, much like Microsoft-owned GitHub's ChatGPT-like assistant, Copilot. Bard can also debug your code, or its own, if it made a mistake or produced the wrong output.
Like many AI-powered chatbots, Bailey says, Bard "may sometimes provide inaccurate, misleading or false information while presenting it confidently." It may offer working code that doesn't produce the expected output, or code that is incomplete or suboptimal. "Always double-check Bard's responses and carefully test and review code for errors, bugs, and vulnerabilities before relying on it," Bailey warns. And if Bard quotes code "at length," it will cite its source.
Google is pressing ahead with Bard despite allegations that its own staff called the chatbot "a pathological liar," and despite reports that the company sidelined ethics concerns in its rush to compete with OpenAI and Microsoft. Perhaps as a result, Bard performed poorly in our comparisons against Bing and ChatGPT.