Artificial Intelligence and Phishing: How cyber criminals use artificial intelligence to manipulate human behavior
Artificial intelligence is within everyone's reach, once again, one might say. Service providers of all kinds are integrating the corresponding technologies into their security solutions. But how much are cyber criminals using it, and how far along are they, especially when it comes to phishing?
Cyber criminals have been using every technical means for years to automate their operations as far as possible, not least to avoid being traced by law enforcement. One of the most effective and easiest ways to infect an IT system is the phishing email: 67 percent of the security incidents Verizon investigated in its 2020 Data Breach Investigations Report can be traced back to social-engineering techniques such as phishing. These methods are popular because a fake email alias is set up quickly, and unlike phone calls, sending phishing emails costs nothing and is nearly impossible for law enforcement agencies to track. One often reads that this kind of social-engineering automation is also supported by machine learning and artificial intelligence. Machine learning is already being used primarily to identify the most successful campaigns and reuse them across a variety of languages and cultures. That alone is cause for concern, because where people make mistakes, machines can produce grammatically flawless text and very good translations. However, the potential is far greater.
Artificial intelligence learns to direct human behavior
In Australia, researchers at CSIRO's Data61, the data and digital arm of the country's national science agency, developed and presented a systematic method for analyzing human behavior, based on a recurrent neural network and deep reinforcement learning. It describes how people make decisions and what triggers those decisions. In three experiments, test subjects were asked to play different games against a computer. CSIRO's John Whittle summarized the findings in an article for “The Conversation”. In each experiment, the machine learned from the participants' responses, identified weaknesses in their decision-making, and targeted them; in this way it learned how to steer participants toward certain actions. Whittle explicitly concedes that, for the time being, the results only apply to limited and somewhat unrealistic situations. Still, they are what worries IT security experts around the world: they suggest that, with adequate training and data, machines can influence human decisions through their interactions.
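The learning loop described above can be illustrated with a toy sketch. This is a deliberate simplification of the deep-reinforcement-learning setup, not CSIRO's actual experiment: the cue names, the hidden compliance probabilities, and the simulated "participant" are all hypothetical. An epsilon-greedy agent repeatedly chooses a persuasion cue, observes whether the simulated person follows it, and converges on the cue that most reliably triggers the target action.

```python
import random

# Toy illustration (hypothetical setup, not CSIRO's experiment):
# an agent learns which cue most reliably triggers a target action
# in a simulated participant.

random.seed(0)

CUES = ["urgency", "authority", "curiosity"]
# Hidden, cue-dependent probability that the simulated participant complies.
COMPLIANCE = {"urgency": 0.3, "authority": 0.7, "curiosity": 0.5}

def participant_reacts(cue: str) -> bool:
    """Simulated human: complies with a cue-dependent probability."""
    return random.random() < COMPLIANCE[cue]

def train(rounds: int = 2000, epsilon: float = 0.1) -> dict:
    """Epsilon-greedy bandit: estimate each cue's success rate."""
    estimates = {c: 0.0 for c in CUES}
    counts = {c: 0 for c in CUES}
    for _ in range(rounds):
        if random.random() < epsilon:
            cue = random.choice(CUES)            # explore
        else:
            cue = max(CUES, key=estimates.get)   # exploit best guess so far
        reward = 1.0 if participant_reacts(cue) else 0.0
        counts[cue] += 1
        # incremental average of observed compliance for this cue
        estimates[cue] += (reward - estimates[cue]) / counts[cue]
    return estimates

estimates = train()
best = max(CUES, key=estimates.get)
```

After enough rounds the agent's estimates approach the hidden compliance rates and it settles on the most effective cue, mirroring, in miniature, how the experiments homed in on weaknesses in participants' decision-making.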
The current status of AI in cybercrime
But how far along are the cyber criminals themselves? Artificial intelligence will be used primarily for spear phishing; using it there is comparable to hunting with a sniper rifle. Neither has been observed in practice yet, but both are theoretically possible. That is precisely the problem: how should companies prepare for something that is not yet known in practice? Of course, spear phishing only pays off if the target is financially attractive enough. That is usually the case: in BEC (business email compromise) or CEO fraud, a managing director or another member of the management team is impersonated in order to quickly get at sums in the millions.
Deepfakes influence human behavior
When we talk about AI in spear phishing, we are talking about deepfakes. Deep voice phishing in particular is a common method, and it is most effective with simple voice impersonation. Seasoned criminals can manage it themselves with some voice training, but there are also enough programs, such as Hybrid, Receiver or Deep Vaccino, that can do just that, relying on machine learning and artificial intelligence methods. Cyber criminals then proceed exactly as they do with recorded and faked voices. First, they gather all the information about the boss they want to imitate that can be found on the Internet, collect this data, evaluate it, and plan the attack. Then they look for weaknesses, getting to know the staff partly through information available online and partly through pretext calls to the front office. They obtain the boss's contact details, call him under a pretext, record his voice, and have their systems reproduce it. Next they devise a plausible reason for a transfer, call the accounting department as the alleged boss, and apply pressure. Ultimately it is as simple as it sounds, because imitating voices is easy for these programs to learn.
Of course, deepfakes can also be created with pictures or videos, but the effort required is still out of proportion to the desired success. Faking photos takes longer, faking videos of people longer still, and making the result deceptively real enough to survive a second look takes longest of all. Faked photos and videos are the next stage of the CEO scam that security experts have come to expect; currently, the simple email method or deep voice spoofing is still very successful. Employees still fall for these simpler types of fraud, so larger investments are not yet necessary. Cyber criminals, and we see this in phishing, always take the easiest path and invest only as much effort as they must to achieve their goal.
Conclusion
The hype surrounding AI is huge, and IT security is definitely affected as well. Cyber criminals do use the technology today, but not yet to its full potential. In the long run, deepfake technology will be the preferred method, because with better-mimicked voices, fake images or even videos, human feelings and behavior can be manipulated and predicted far better than with ordinary text emails. This shows the enormous potential cyber criminals are already exploring, and which IT security officers have to deal with today. Training sessions that show employees what to look for, how to recognize deepfakes and how to assess such situations should be an integral part of any IT security strategy.
About the author: Jelle Wieringa is a Security Awareness Advocate at KnowBe4.
(ID: 47351968)