Artificial Intelligence (AI) chatbots such as ChatGPT have become a tool for cybercriminals looking to improve their phishing emails. These chatbots are built on large language models, trained on vast text corpora and refined with reinforcement learning, that can craft grammatically correct, error-free emails that appear authentic to unsuspecting targets. This has raised alarm among cybersecurity experts, 72% of whom are worried that attackers will use AI to design more effective phishing campaigns and emails.
Chatbots give cybercriminals an easy way to scale the production of sophisticated social engineering attacks, including CEO fraud and business email compromise (BEC). In addition, cybercriminals can employ AI-powered conversation bots to extract financial and personal information from social media sites and to craft impersonation emails for well-known brands and websites. They can even generate malicious code such as ransomware. Without AI, creating malware is a complex task that requires experienced cybercriminals; chatbots may allow non-specialists to accomplish it as well, and we should expect AI-generated attacks to grow in the future.
“DAN”: the revolutionary ChatGPT exploit
In the past, if you explicitly asked ChatGPT to create malware or write a phishing email, the chatbot would refuse or prepend a security warning before producing any output. However, security researchers have revealed a new vulnerability in ChatGPT. The exploit, dubbed “DAN” (short for Do Anything Now), lets users bypass ChatGPT’s built-in security limitations.
Researchers became frustrated by ChatGPT’s refusal to address sensitive topics, replying instead with a stock message such as “I’m sorry, but as an AI language model, I am not able to …”. With DAN, however, users instruct ChatGPT that it is now operating under a different persona that is not bound by its ethical guidelines. DAN handles all requests equally and does not prepend advice or warnings to its responses.
The consequences of this exploit are huge. Users can now ask ChatGPT for anything they want without fear of refusal or censorship. But DAN goes further: it turns ChatGPT into an effective new tool for cybercriminals. They no longer need to choose their words carefully to slip past the filters. Instead, they can ask directly: “What is the most effective malware I can create?”, “How do I obfuscate it?”, or “What is a good phishing email template?”.
How can companies protect themselves from AI-assisted phishing attacks?
Companies must understand the threat of chatbot-assisted cyberattacks and take measures to safeguard themselves. Integrated cloud email security (ICES) solutions can help organizations detect and defend against advanced attacks, whether created by humans or by AI chatbots. They employ AI and machine-learning models to identify text-based attacks: suspicious formatting, language designed to create urgency, attention-grabbing subject lines, and more.
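To make the idea concrete, here is a minimal, purely illustrative sketch of the kind of text-based signal scoring described above. Real ICES products rely on trained machine-learning models, not hand-written keyword lists; every phrase, weight, and function name below is an invented example, not an actual product's logic.

```python
import re

# Toy signal list: phrases commonly associated with urgency in phishing.
# These keywords and the weights below are illustrative assumptions only.
URGENCY_PHRASES = [
    "urgent", "immediately", "act now",
    "verify your account", "password expires", "wire transfer",
]

def phishing_risk_score(subject: str, body: str) -> float:
    """Return a crude 0..1 risk score based on simple text signals."""
    text = f"{subject} {body}".lower()
    # Count urgency phrases anywhere in the subject or body.
    hits = sum(1 for phrase in URGENCY_PHRASES if phrase in text)
    # Runs of exclamation/question marks in the subject are another weak signal.
    shouting = len(re.findall(r"[!?]{2,}", subject))
    # Combine signals and cap the score at 1.0.
    return min(1.0, 0.2 * hits + 0.15 * shouting)

print(phishing_risk_score("Lunch?", "Want to grab lunch tomorrow?"))
print(phishing_risk_score(
    "URGENT!! Verify your account",
    "Your password expires today. Act now to avoid losing access."))
```

A benign email scores near 0 while the urgency-laden one scores near 1. The point of the sketch is only that such signals are machine-detectable; production systems learn them statistically from large labeled corpora rather than from fixed lists.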
As AI technologies continue to improve, businesses face an increasingly pressing need to address the resulting security risks. This is where detection engineering comes into the picture. By anticipating and responding to possible threats, companies can stay ahead of attackers and limit the risk to their data and systems. While chatbots may help cybercriminals create more convincing phishing messages and malware, companies must prioritize effective countermeasures such as ICES solutions that identify and stop these attacks before they cause damage.