Hackers are using ChatGPT to create phishing emails so convincing that even well-trained security staff are falling for them, according to a recent warning from Australian cybersecurity experts.
The GPT-4 logo of OpenAI. Photo: OpenAI/VNA
Chad Skipper, a global security technology expert at software company VMware in Australia, said that attackers use ChatGPT and similar natural-language machine learning models to mimic the language and tone of official emails within organizations, producing phishing emails that are very difficult to distinguish from genuine ones.
Skipper believes this is a new war between hackers and the cybersecurity industry, and one that will continue “unrelentingly”. “We use artificial intelligence (AI) in our way, and they use it in their way,” he said. “This is an AI arms race.”
According to Skipper, cybercriminals are using AI to identify vulnerabilities in companies and organizations, and ChatGPT to mount the sophisticated phishing attacks that get them inside.
Phishing is a form of cyberattack in which attackers use fake emails or other messages to lure users into opening seemingly harmless documents or clicking links to websites that then plant malware on their devices.
An estimated 90% of successful cyber breaches begin with a phishing attack. The devastating October 2022 attack on Medibank, Australia’s largest private health insurer, which began after an employee’s computer was infected with malware, may have been one of them.
In the short time since its release last year, ChatGPT has effectively bypassed one of the lines of defense against such attacks, said Darren Reid, business director at VMware Australia.
“If you are a Russian hacker, English is probably a barrier for you,” he said. “Most organizations screen for unusual grammar and word usage to filter out phishing emails. ChatGPT overcomes this, making hackers’ messages more authentic and their attacks more likely to succeed.”
Once cybercriminals have penetrated an organization’s systems, they use artificial intelligence to evade detection and expand their attacks, Skipper said.
In a recent survey of the cost of cyberattacks, IBM found that it takes a company hit by a cyberattack an average of 277 days, roughly nine months, to identify and contain the breach.
That is time hackers spend deep inside the company, gathering data to fuel attacks built on deepfake technology, which uses AI to superimpose a person’s image or voice onto another person’s video. With AI closely mimicking the writing style or even the voice of a colleague, those attacks can then spread to other parts of the business.
Cybercriminals are even using ChatGPT to tweak malware code so it can evade antivirus software, and are “embedding artificial intelligence” into malware so it can adapt its behavior once detected, Skipper said.
According to VNA