Google warns its own employees about the use of AI chatbots

Artificial intelligence has grown tremendously in recent months. The general public can now try out a variety of AI-based chatbots, but using them comes with pitfalls worth knowing about. What do developers currently consider the biggest risk?

The risk of sensitive data leakage

Reuters recently published an article, based on accounts from four people familiar with the matter, reporting that Google warns its employees against using the chatbot Bard. What makes this remarkable is that the AI comes directly from Google itself: while the company promotes the product to the public, it discourages its own employees from using it.

The primary reason behind this warning is how data shared with an AI chatbot is handled. Conversations can be read by human reviewers at the companies operating these services, so internal information can easily be exposed inadvertently. Google has even told its engineers not to enter the computer code they have written into the chatbot.

Chatbots do not guarantee privacy

Similar measures restricting what information may be shared with chatbots have been adopted by other companies, including Amazon, Samsung, and Deutsche Bank, and a similar policy is likely in place at Apple as well. Data protection is a top priority for companies of this size.

The steps these companies are taking to prevent information leaks should also serve as a warning to the public: do not enter personal data into chatbots like Bard or ChatGPT. Although AI can be a helpful assistant, it is not safe to share everything we might want help with.
