Things You Should Know But Don’t: How Hackers Use AI

Posted March 18, 2024

It’s not uncommon to hear that someone has implemented elements of AI in their day-to-day workflow. In fact, it’s becoming difficult to avoid programs that use AI to some extent. From image-editing software like Adobe Photoshop to web browsers like Microsoft Edge, AI has now been woven into the fabric of computer use.

The idea behind AI is typically not to replace people, but rather to act as a helpful tool and augment someone’s work. Whether or not this is a good thing is still open for debate. However, it is important to consider that not every person who uses AI will have good intentions—computers don’t inherently have morals of their own, after all.

The biases and intentions of the people behind the algorithms have been a known concern since the birth of AI and machine learning. It’s the reason people are so critical of things like facial recognition technology and crime forecasting. Recently, Microsoft and OpenAI published a blog post detailing their research into how cybercriminals have used AI and might continue to do so in the future. Together, the two companies began looking into the activity of known hackers who were using large language models like ChatGPT. While they have yet to find any significant attacks associated with the AI programs they monitored, “Microsoft and OpenAI have detected attempts by Russian, North Korean, Iranian, and Chinese-backed groups using tools like ChatGPT for research into targets, to improve scripts, and to help build social engineering techniques.”

As it stands, it seems that AI is helping hackers in much the same way it helps others—it is currently just a tool that supports the work and methods they were already employing.

Consider, for instance, two known large language models, WormGPT and FraudGPT, which cybercriminals can use to assist with their work.

WormGPT is essentially a “blackhat counterpart to OpenAI’s ChatGPT, but without ethical boundaries or limitations.” FraudGPT is similar but was created specifically for “offensive operations,” such as targeted phishing emails or credit card fraud. It is also advertised as being able to write malicious code, create phishing pages, and identify leaks and vulnerabilities.

Even without those programs aimed specifically at hackers, large language models can be misused by anyone in the general public. Yes, OpenAI has placed restrictions on the kinds of responses ChatGPT can give (blocking illegal or explicit content, for example), but a quick bit of research will show you how easily these restrictions can be bypassed. Because an AI is not human, certain inputs or phrases can easily “trick” it into generating normally restricted outputs, such as a more believable phishing email.

Who’s to say what else hackers might use these programs for, especially if people are starting to figure out how to use them to write code?

AI in and of itself may be morally neutral, but staying on top of the ways people use it will be very important in the coming years. If it proves to be an invaluable, life- and time-saving tool, will it largely be used for good, or will it too easily fall into the hands of malefactors?
