Hackers Could Use ChatGPT to Infiltrate Vessels
Most documented cyberattacks against individual vessels have historically involved jamming and spoofing navigation signals. However, vessels increasingly face a wider range of attacks, including ransomware. Recently, around 1,000 vessels were affected when DNV’s ShipManager software system was hit by a cyberattack. Fortunately, many vessels maintained offline functionality, which limited the disruption, but the incident demonstrated the potential reach of cyberattacks against shipping. There are also large financial gains to be made from attacks against vessels. Following the blockage of the Suez Canal by a 400-meter-long containership in 2021, and the ensuing disruption to global trade and financial markets, criminal hackers realized they could profit from the stock market movements associated with a grounded vessel. Attacking a vessel, in short, can pay.
One way of compromising a vessel is through phishing emails. Phishing is a form of social engineering that encourages crewmembers to click on insecure links and unknowingly download harmful content onto their computers. The emails appear legitimate, and links are disguised as secure and genuine. They may well be personalized to a particular crew or ship, using information drawn from open sources such as social media. Phishing emails play a key role in many types of maritime cyberattack that rely on placing malicious software on target computers, including ransomware attacks.
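The disguised-link trick described above can be made concrete with a short sketch (hypothetical code, not from the article): a simple heuristic that flags an email link whose visible text names one domain while the underlying href actually points to another — a classic phishing tell that security awareness training teaches crews to spot.

```python
# Illustrative sketch only: flags "disguised" links where the visible text of
# a link claims one domain but the underlying href points somewhere else.
# All example domains below are hypothetical.
from urllib.parse import urlparse

def extract_domain(url: str) -> str:
    """Return the host part of a URL, lowercased ('' if unparsable)."""
    parsed = urlparse(url if "://" in url else "http://" + url)
    return (parsed.hostname or "").lower()

def is_disguised_link(display_text: str, href: str) -> bool:
    """Flag a link whose visible text looks like a URL for a different host."""
    shown = extract_domain(display_text)
    actual = extract_domain(href)
    # Only meaningful when the display text itself resembles a domain or URL;
    # plain text like "Click here" cannot contradict the href this way.
    if not shown or "." not in shown:
        return False
    return shown != actual

# Example: the visible text claims a shipping company's portal,
# but the href leads to an unrelated host.
print(is_disguised_link("www.dnv.com/login",
                        "http://dnv-shipmanager.example.net/login"))  # True
```

This catches only one narrow pattern; real mail filters combine many such signals, but the mismatch check illustrates why a link that "looks" genuine in the email body cannot be trusted at face value.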
Writing these emails has typically been a manual exercise. However, a newly released AI tool is changing that.
ChatGPT is a tool developed by OpenAI with a wide range of linguistic skills, from explaining quantum physics to writing poetry on command. It was not designed for criminals, and it has built-in safeguards intended to prevent it from creating malicious material when directly ordered to. However, attackers have found ways around them. AI can be a force multiplier for attackers, especially those using social engineering techniques: in particular, the chatbot produces persuasive phishing emails when prompted.
There are many benefits for attackers who use ChatGPT. For instance, it writes in good American English, helping attackers disguise the typical differentiators between legitimate and illegitimate emails, such as typos or odd formatting. It also has many different ways of responding to a single prompt, making each email look individual and authentic.
When prompted by a user, ChatGPT can create a convincing and emotionally manipulative phishing email, and it will happily embed a malicious link disguised as a legitimate attachment.
So how real is the threat?
Before the latest version of ChatGPT was released, one research paper analyzed over 50,000 emails sent to users in over 100 countries as part of a phishing training workflow. Professional red teamers achieved a 4.2 percent click rate, compared to ChatGPT’s 2.9 percent, and in some countries, including Sweden, the AI chatbot’s click rate was higher. Furthermore, a survey of 1,500 IT decision makers across North America, the UK and Australia revealed that 53 percent are concerned specifically by the threat of more believable phishing emails, and 51 percent expect a ChatGPT-supported cyberattack within the next year. Darktrace also commissioned a survey with Censuswide, which reported that 73 percent of UK employees are concerned about hackers’ use of generative AI to create scam emails indistinguishable from genuine ones. Further research shows that AI can already manipulate people into bypassing security requirements: during safety testing, GPT-4 successfully persuaded a TaskRabbit worker to solve a CAPTCHA for it by claiming to have a vision impairment.
The threat of phishing emails is further indicated by a recent Darktrace study, which revealed a 135 percent increase in “novel social engineering attacks” in 2023, corresponding with the spread of ChatGPT: emails with greater text volume, more punctuation, and longer sentences, but no links or attachments. The study also found that the overall volume of malicious email campaigns has dropped, replaced by these more linguistically complex emails.
People are, then, legitimately concerned by ChatGPT’s ability to produce convincing phishing emails, and the threat has been highlighted by national and regional authorities. Europol has released a warning about the use of ChatGPT to create highly convincing and realistic text, and the UK’s National Cyber Security Centre (NCSC) has warned that large language models can be used to write convincing phishing emails.
What does this mean for the maritime industry?
ChatGPT has near-encyclopedic knowledge, which can easily be used to find maritime-specific information, such as vessel names or IMO regulations, to make emails more convincing.
The threat ChatGPT poses to the maritime sector is substantial, especially because the rewards for a hacker could be significant. Shipping is a global industry, and disruption can be extremely costly. A vessel whose networks have been taken down by a cyberattack cannot deliver the essential commodities, such as raw materials, on which other industries rely. A hacking event could even result in a grounding on a main trade route, with wider financial implications. Increased security measures, such as staff training, are therefore required to raise awareness of the threats posed by clicking on malicious links.
The opinions expressed herein are the author's and not necessarily those of The Maritime Executive.