ChatGPT (GPT stands for “Generative Pre-trained Transformer”) represents the latest development in the field of AI-powered chatbots. Behind it stands OpenAI, founded in 2015 by Elon Musk and other leading tech figures (Musk later stepped away from the company in 2018). ChatGPT is constantly improving through ongoing machine-learning training.
ChatGPT has taken the world by storm
This dialogue-based system can understand and converse in a wide range of human languages, which makes it well suited to remarkably diverse conversations.
The underlying database is a vast body of text gathered from the Internet, on which the model was trained and then fine-tuned using Reinforcement Learning from Human Feedback (RLHF). What does this imply? Among other things, that the chatbot can respond remarkably flexibly to a wide range of questions in real time.
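The RLHF idea can be illustrated with a deliberately simplified sketch (this is a conceptual toy, not OpenAI's actual method): the model proposes candidate answers, human raters express preferences, and a learned reward signal steers the model toward the answers humans preferred. The word-counting "reward" below is purely hypothetical, standing in for a real learned reward model.

```python
# Toy illustration of the RLHF loop (conceptual only, not real RLHF):
# 1) the model proposes answers, 2) humans rank them, 3) a reward
# model approximates those rankings, 4) generation is steered toward
# higher-reward answers.

def reward(answer: str, preferred_words: list[str]) -> int:
    # Hypothetical stand-in for a learned reward model: score an
    # answer by how many human-favoured words it contains.
    return sum(answer.lower().count(w) for w in preferred_words)

def pick_best(candidates: list[str], preferred_words: list[str]) -> str:
    # "Policy" step: choose the candidate the reward model scores highest.
    return max(candidates, key=lambda a: reward(a, preferred_words))

candidates = [
    "I don't know.",
    "Here is a clear, helpful, step-by-step explanation.",
]
print(pick_best(candidates, preferred_words=["helpful", "clear"]))
```

In real RLHF the reward model is itself a neural network trained on thousands of human preference comparisons, and the language model is updated with reinforcement learning rather than simply re-ranked; the sketch only shows the shape of the feedback loop.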
And it can also write code. Of course, this requires more than basic knowledge from the user, but as it turns out, ChatGPT can also be used (or rather misused) for cyberattacks, reports The Indian Express, citing experts at Check Point Research.
The chatbot can also create malware
“ChatGPT no longer responds to a direct request to create a phishing email. However, this does not change the fact that some ways to circumvent this restriction have come to our attention,” one of the researchers explained.
In the same breath, he added: “For example, if you tell a chatbot that you are a cybersecurity expert and you want to demonstrate such an attack to students, the AI will do it.” He also expressed concern that more sophisticated attackers may go even further.
It’s still a big risk
According to Chester Wisniewski, principal analyst at UK cybersecurity company Sophos, it’s quite easy to convince ChatGPT to help create potentially dangerous software as part of an innocent conversation.