You’re going to hear a lot about ChatGPT and other AI tools this year. The technology hit mainstream momentum very quickly over the last six months, and people are now starting to work through its implications for everything from education to professional writing, and even cybersecurity. In fact, even this early in the year, “is ChatGPT a security risk?” is one of the questions we hear most from our customers.
The long and short of it is that yes, it could be. It won’t necessarily change the cybersecurity strategies that companies adopt, but it’s certainly going to be a tool that criminals use.
How does ChatGPT work for cybercriminals?
ChatGPT can be used to create malicious code. In theory, it (and other AI tools, though ChatGPT has already become as synonymous with AI writing tools as “Google” is with search and “Band-Aid” is with adhesive bandages) has been designed to identify when people are trying to use it for nefarious purposes, and to refuse those requests. Amateurs who plug in simple requests are not going to get far with this tool, as this screenshot shows:
However, it is almost ludicrously simple to work around these restrictions. In the very same chat window, immediately under that refused request, the restriction can be bypassed simply by re-wording the prompt to make it look like a “legitimate” request for information:
Of course, this is just a relatively mundane example, but it highlights how readily accessible ChatGPT is to someone who wants to use it for malicious purposes. Given that one of the key identifying markers of phishing emails is poor spelling, grammar, and tone, and ChatGPT addresses all of those weaknesses, it already represents an enhanced risk.
More advanced users can put it to work on far more complex attacks, too. Just a few weeks ago, researchers published information on how they were able to use the AI to generate polymorphic malware. AI is being heralded as a significant coding and programming tool; research suggests that it outperforms around 85 per cent of programmers. That will be beneficial to software designers, video game developers, and other professionals. It stands to reason that it will also be beneficial to people who code for criminal purposes.
So, what can be done?
Fighting fire with fire will be a key strategy in combating AI-assisted cyberattacks. The rate at which new attacks and pieces of malware are developed is going to accelerate, and manually keeping up with the threat landscape is going to be even more impractical than it already is (and it’s already effectively impossible). AI-powered security solutions, particularly around network detection and response, that are well trained to identify unusual activity and immediately isolate and flag it are going to be the best tools for preventing intrusion.
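To make that idea concrete, here is a minimal, purely illustrative sketch of the kind of anomaly detection that underpins such tools: a model is trained on a baseline of “normal” network flows and flags anything that deviates sharply from it. The feature set, the numbers, and the library choice (scikit-learn’s IsolationForest) are assumptions made for the example, not a description of any specific product.

```python
# Illustrative sketch only: flag network flows that deviate from a learned baseline.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one flow: [bytes_sent, bytes_received, duration_seconds, dest_port]
baseline_flows = np.array([
    [1200,  3400, 0.8, 443],
    [900,   2100, 0.5, 443],
    [1500,  4800, 1.2,  80],
    [1100,  3000, 0.7, 443],
    [1300,  3900, 0.9,  80],
])

# Train on traffic assumed to be normal so that outliers stand out.
model = IsolationForest(contamination=0.05, random_state=42)
model.fit(baseline_flows)

new_flows = np.array([
    [1250,    3500,  0.8,  443],   # resembles baseline traffic
    [950_000,  200, 45.0, 4444],   # large, exfiltration-style transfer to an odd port
])

for flow, verdict in zip(new_flows, model.predict(new_flows)):
    # predict() returns -1 for flows the model considers outliers, 1 otherwise.
    if verdict == -1:
        print("Flag and isolate:", flow)
    else:
        print("Normal:", flow)
```

In a real deployment the model would be trained continuously on far richer telemetry, but the principle is the same: learn what normal looks like, then flag and contain what doesn’t.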
Otherwise, the same best-practice security measures apply. You should have two-factor authentication switched on for all accounts. It’s also important to keep software up to date, have basic antivirus and firewall applications set up on every endpoint device, and train your staff in security best practices (particularly with regard to phishing and other social engineering attacks).
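For readers curious about what two-factor authentication involves under the hood, the sketch below shows a time-based one-time password (TOTP), the mechanism behind most authenticator apps. It uses the pyotp library, and the randomly generated secret is just an example value for illustration.

```python
# Illustrative sketch of TOTP, the basis of most authenticator-app 2FA.
import pyotp

# In practice this secret is generated once per user, stored server-side,
# and enrolled in the user's authenticator app (usually via a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The app and the server both derive the same 6-digit code from the shared
# secret and the current 30-second time window.
code_from_user = totp.now()
print("Code accepted:", totp.verify(code_from_user))
```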
ChatGPT is a security threat in the sense that it will enable and empower cyber criminals: basically, they’ll work better and more efficiently, much as ChatGPT assists many legitimate businesses and professionals. That will further accelerate the growth in the number of threats that organisations face, and given that there was already a 13 per cent increase in cybercrimes reported in 2022, Australian organisations need to brace for a big new wave in the year ahead.