Artificial Intelligence and Bad Actors: A Formidable Foe

Ana Bera, Head of Marketing and Content at Subscriptionly.net

Friday, June 19, 2020

AI is a useful tool for both cybersecurity specialists and bad actors. Going forward, companies must be aware of the risks of AI-powered attacks. Those that aren’t will pay a steep price.

In our report, Artificial Intelligence: The Smarter Approach to Information Security, we touched on the increasing importance of AI in the fight against cybercrime. Today we’re looking at things from a completely different angle. We’ll examine how cybercriminals can use, and already have used, AI to enhance their operations.

The evolution of cybercrime in 2020

Despite AI becoming a more integral part of cybersecurity software solutions, we’ve still got a long way to go. Security breaches increased by 69% between 2015 and 2019. While cybersecurity defenses are becoming more sophisticated, so are the attacks.

Ransomware gets nastier

Bad actors constantly adapt their attack vectors to improve their returns. In the report mentioned above, you’ll see that highly targeted, single-use malware was popular at the time.

Today, bad actors are also adopting a more comprehensive approach to net more money. Take ransomware, for example. A typical ransomware attack used to have a simple objective: encrypt your data and demand a ransom. That’s where it ended. You paid the ransom, and you got the decryption key.

In 2020, things are changing. Bad actors have started to multitask. Step one is to gain control of your system and steal sensitive information. Step two is to encrypt your systems so that you’re locked out.

You pay for the key to decrypt your information, but the bad actors then publish small samples of the stolen data on the dark web. To add insult to injury, they contact a few of the affected clients and let them know about the breach.

Any hope the company may have had of hiding the breach is shattered. To stop further details from being published, the company must pay an additional ransom. It’s a simple and highly effective strategy.

How does AI feature in ransomware?

Machine learning makes it possible for malware to learn more sophisticated attack strategies. Over time, AI learns from the attacks that came before it: it can collate data from hundreds of thousands of incidents in minutes and identify the most effective strategies.
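
To make that concrete, here is a deliberately simplified sketch in plain Python. The records and strategy labels are invented, and real campaigns are far messier; the point is only that ranking past attempts by success rate is trivial to automate at scale:

from collections import defaultdict

# Invented records standing in for logs of past attempts; imagine hundreds of
# thousands of these rather than five.
records = [
    {"strategy": "phishing-lure", "succeeded": True},
    {"strategy": "phishing-lure", "succeeded": False},
    {"strategy": "unpatched-service", "succeeded": True},
    {"strategy": "unpatched-service", "succeeded": True},
    {"strategy": "credential-stuffing", "succeeded": False},
]

totals = defaultdict(lambda: [0, 0])  # strategy -> [successes, attempts]
for record in records:
    totals[record["strategy"]][0] += record["succeeded"]
    totals[record["strategy"]][1] += 1

# Rank strategies by observed success rate, most effective first.
ranked = sorted(totals.items(), key=lambda item: item[1][0] / item[1][1], reverse=True)
for strategy, (wins, tries) in ranked:
    print(f"{strategy}: {wins}/{tries} ({wins / tries:.0%})")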

AI could also be used for smarter attacks. Once the malware has made it onto a system, it can intelligently evade security measures. With the Internet of Things (IoT) now becoming a reality, smart hacks take on a whole new meaning. Instead of hacking your business’s secure server, hackers might focus on IoT devices that have far less security.

For example, they could hack your security company and send out malware disguised as a “security update.” When your smart CCTV system checks for updates, it would download and install the malware.
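
That trick works when a device installs whatever its update channel delivers. As a minimal, purely illustrative sketch (assuming Python and the third-party cryptography package, with the key handling compressed to keep it self-contained), this is the kind of signature check that causes a swapped-in “update” to be rejected:

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# In reality the device ships with only the vendor's public key pinned in
# firmware; the private key never leaves the vendor. Both are generated here
# just so the example runs on its own.
vendor_private_key = ed25519.Ed25519PrivateKey.generate()
vendor_public_key = vendor_private_key.public_key()

def is_update_trusted(update_blob: bytes, signature: bytes) -> bool:
    """Accept an update only if it was signed with the vendor's private key."""
    try:
        vendor_public_key.verify(signature, update_blob)
        return True
    except InvalidSignature:
        return False

genuine_update = b"camera firmware v2.1"
genuine_signature = vendor_private_key.sign(genuine_update)

print(is_update_trusted(genuine_update, genuine_signature))                   # True
print(is_update_trusted(b"malware posing as an update", genuine_signature))   # False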

Deepfakes are coming your way

To date, deepfake technology has mostly been abused to spread disinformation. Using AI, it takes pictures, video, or voice recordings of someone and creates a convincing replica. Bad actors can then change the person’s facial features or what they appear to be saying. When it’s done well, no one can tell the difference.

Why should you worry about this?

Let’s play through a scenario quickly. Your boss phones you and asks you to make a payment to a supplier to get an order through fast. You recognize his voice and mannerisms.

What’s your next step?

If you’re like the CEO of an unnamed British company, you’ll transfer the money without question. Our scenario here is based on a real incident. It’s believed that AI-powered deepfake technology was used to recreate the boss’s voice and mannerisms.

The CEO only became suspicious when his “boss” phoned again about another transfer. Fortunately, the CEO realized what was going on, but only after he’d transferred $243,000.

Smart phishing

Chatbots have been around for some time, but the technology was initially limited: a simple chatbot could only give a preset range of responses. Thanks to advances in AI, that’s changing. Many bots can now learn and respond much as a real person would.

This opens up interesting avenues for phishers to exploit. A bad actor using an AI-powered program could easily determine which messages make it through your spam filter. They could then use an AI-powered chatbot to strike up a conversation and learn which approach works best on you.

They could use ten, twenty, or fifty different chatbots to learn what types of social engineering work on you.
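
To see why that scales so well, here is a deliberately abstract toy in Python. There is no real message content, and the “target” is just a random stub with a different response rate per generic approach; the point is that an epsilon-greedy loop, about the simplest learner there is, homes in on whatever works after a few thousand automated conversations:

import random

approaches = ["approach-A", "approach-B", "approach-C", "approach-D"]  # generic placeholders
true_response_rate = {"approach-A": 0.02, "approach-B": 0.05,
                      "approach-C": 0.20, "approach-D": 0.08}  # unknown to the learner
observed = {a: [0, 0] for a in approaches}  # approach -> [successes, attempts]

def pick_approach(epsilon=0.1):
    """Epsilon-greedy: mostly exploit the best-looking approach, occasionally explore."""
    if random.random() < epsilon:
        return random.choice(approaches)
    return max(approaches,
               key=lambda a: observed[a][0] / observed[a][1] if observed[a][1] else 0.0)

for _ in range(5000):  # each iteration stands in for one automated conversation
    approach = pick_approach()
    success = random.random() < true_response_rate[approach]  # simulated target reaction
    observed[approach][0] += success
    observed[approach][1] += 1

for approach, (wins, tries) in observed.items():
    print(f"{approach}: tried {tries} times, estimated rate {wins / tries:.1%}" if tries
          else f"{approach}: never tried")

With ten, twenty, or fifty such bots running in parallel, the same loop simply converges faster, and with no human effort on the attacker’s side.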

Ana Bera

5’3" ray of sunshine and a chocolate addict. As the Head of Marketing and Content at Subscriptionly.net, she uses every opportunity she gets to learn from others and generate fun and informative content. Ana is a Toronto-born world traveler, hungry for knowledge and ready to make a difference in the marketing world.
