New Delhi: Alphabet is issuing a word of caution to its employees regarding the use of AI chatbots, including its own Bard, even as it promotes the program worldwide, according to four individuals familiar with the matter who spoke to Reuters.
The parent company of Google has instructed its employees not to enter confidential materials into AI chatbots, a policy that both the company and the sources have confirmed, citing its long-standing commitment to safeguarding information.
These chatbots, such as Bard and ChatGPT, are designed to engage in human-like conversations with users, utilising generative artificial intelligence to respond to various prompts. However, the AI systems may absorb the data they are exposed to during training and reproduce it later, creating a potential risk of data leakage. Human reviewers may also read these conversations.
Alphabet has also advised its engineers to refrain from directly utilising the computer code generated by chatbots, as disclosed by some of the individuals familiar with the matter.
When approached for comment by Reuters, the company acknowledged that Bard might occasionally provide undesired code suggestions. Nevertheless, Google emphasised that Bard still assists programmers while aiming to maintain transparency about the limitations of its technology.
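By way of illustration (a hypothetical sketch, not an actual Bard suggestion or Google’s internal guidance), a chatbot-suggested helper can look perfectly plausible yet mishandle an edge case, which a routine unit test surfaces before the code is merged:

```python
import unittest

# Hypothetical chatbot-suggested helper: it looks reasonable, but it
# divides by zero on an empty list -- the kind of "undesired code
# suggestion" a human review step is meant to catch.
def average(values):
    return sum(values) / len(values)

class TestAverage(unittest.TestCase):
    def test_typical_input(self):
        self.assertAlmostEqual(average([2, 4, 6]), 4.0)

    def test_empty_input(self):
        # The suggestion raises a bare ZeroDivisionError rather than
        # failing with a clear message; flagging this in review would
        # prompt a fix, e.g. raising ValueError("empty input") instead.
        with self.assertRaises(ZeroDivisionError):
            average([])

if __name__ == "__main__":
    unittest.main()
```

The point is not the arithmetic but the workflow: generated code is treated as an untrusted draft that must pass tests and review before use, rather than being pasted directly into production.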
These concerns show how Google seeks to avoid any harm to its business from software it launched in competition with ChatGPT. At stake in Google’s race against ChatGPT’s backers, OpenAI and Microsoft, are billions of dollars of investment and still-untold advertising and cloud revenue from new AI programs.
Google’s cautionary approach also aligns with an emerging corporate security standard of warning employees against using publicly available chat programs.
Numerous businesses worldwide, including Samsung, Amazon, and Deutsche Bank, have implemented safeguards for AI chatbots.
According to a survey conducted by the networking site Fishbowl, approximately 43 per cent of professionals were using ChatGPT or other AI tools as of January, often without telling their bosses.
In February, Google told staff testing Bard prior to its launch not to disclose internal information to the chatbot, as previously reported. Google is now expanding Bard’s availability to more than 180 countries and 40 languages, positioning it as a tool for fostering creativity, and its warnings extend to Bard’s code suggestions as well.
Google has confirmed that it has engaged in extensive discussions with Ireland’s Data Protection Commission and is addressing regulators’ queries. This comes after a Politico report indicated that Google was postponing Bard’s launch in the European Union this week, pending further information on the chatbot’s impact on privacy.
Concerns around sensitive information
While such technology holds the promise of expediting tasks by generating emails, documents, and even software itself, it also introduces the risk of misinformation, exposure of sensitive data, or the inclusion of copyrighted content, such as passages from the “Harry Potter” series.
On June 1, Google updated its privacy notice, which now includes the statement, “Don’t include confidential or sensitive information in your Bard conversations.”
Some companies have developed software solutions to address these concerns. For example, Cloudflare, a provider of website security and other cloud services, allows businesses to tag and restrict certain data from being transmitted externally.
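As a rough sketch of the general idea only (this is illustrative, not Cloudflare’s actual product or API), an outbound prompt can be scanned for patterns an organisation has tagged as sensitive and redacted before anything reaches an external chatbot:

```python
import re

# Hypothetical patterns an organisation might tag as sensitive; real
# data-loss-prevention tools use far richer detection than regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
    "internal_tag": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def redact(prompt: str) -> str:
    """Replace tagged spans with placeholders before the prompt
    leaves the company network."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED:{label}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Ask Bard: summarise CONFIDENTIAL memo from jane@corp.example"
    print(redact(raw))
    # -> Ask Bard: summarise [REDACTED:internal_tag] memo from [REDACTED:email]
```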
Both Google and Microsoft also offer conversational tools to their business customers that come with a higher price tag but refrain from absorbing data into public AI models. By default, Bard and ChatGPT retain users’ conversation history, which users can opt to delete.
Yusuf Mehdi, Microsoft’s consumer chief marketing officer, said it is reasonable for companies to discourage their staff from using public chatbots for work. He explained that companies take a duly conservative standpoint, comparing Microsoft’s free Bing chatbot with its enterprise software, whose policies are much stricter.