The human cost of social engineering
Adversary simulation, simulated targeted attack, red teaming… Whatever you want to call it, a technical exercise that assesses your defences by simulating the tactics, techniques and procedures of a real attacker is of great value – especially when you want to understand how well your incident response plans hold out against attack.
Simulating the whole attack chain for most adversaries means that we are not just targeting technology – we are also targeting processes and people.
In the security industry, we often talk about people being the weak link. We spend our time outlining the ways that people fail, are fooled or are tricked. Of course it’s important that we, and our customers, understand the fallibility of people in any security assumptions we make. On the other hand, we also have a moral and ethical obligation to look after the very people we are targeting, and to avoid causing undue distress.
“Social engineering” is a bloodless, sterile term. We call it “social engineering” because it covers a lot of different bases, and it sounds more professional than the alternative – “lying to people”, “abusing trust”, “betraying relationships”. These are tactics that adversaries use mercilessly and without consideration for the impact on the victims. If we are to accurately simulate the attack chain and the activities of adversaries, then we need to adopt these tactics as well.
Because of our distance from our targets and our focus on objectives, it’s easy to overlook the fact that those on the receiving end of our social engineering attacks are living, breathing human beings, with lives outside of work and powerful emotions. The same goes for our customers – it’s not in any of our interests to lose sight of the wellbeing of our human victims when we simulate advanced adversaries.
Anybody who has been involved in an urgent response to a cyber security incident will be familiar with the heightened emotions that tend to rise to the surface in these situations. People work long hours, adrenaline flows and the response team are under a lot of pressure.
When making incident response plans, companies often consider the wellbeing of the responders – making sure they take regular breaks, making sure they have access to food, rest and support as necessary. When we simulate an incident, we know that the responders will likely end up in this situation if and when they locate our intrusion and start responding to it. Our customers are prepared for that. But there’s possible collateral damage that is easy to overlook.
I would like to tell you the story of a woman caught up in a simulated cyber security incident. She had been in email correspondence with somebody she thought was from one of her company’s customers. She had spoken to this person on the telephone several times and built a relationship over a few days. The inevitable happened: as part of the simulation she was sent a malicious document during the correspondence, she opened it, and her workstation was compromised.
At some point, the compromise was identified by the internal IT team and they started an investigation, identifying her as Patient Zero. So, of course, they spoke with her to find out what had happened, who she had been talking to, and what she had done. Falling victim to a sophisticated attack methodology, involving a closely-registered domain, telephone conversations with a person who did not exist and the pressure of trying to keep a “customer” happy, she had exposed her company to attack.
I don’t think any cyber security professional would expect an individual employee to be able to defend against that level of sophisticated social engineering – defending against these types of tactics would require an absolutely exhausting level of vigilance on the part of the employee.
What stays with me is how the targeted employee reacted to this situation. She was absolutely devastated to have been the “cause” of a cyber security incident for her employer. Once she had realised that she had fallen victim to a lie, she couldn’t sleep. She couldn’t stop going over and over in her head what had happened and whether she should have known better. She spent days dissecting her conversations with this “customer” and questioning the relationship that she thought she had built up with them during their conversations.
If she couldn’t trust her judgement in this situation, with this “customer” and their conversations, could she trust her judgement in anything she did? There is no doubt that she was substantially upset by what occurred, and that she blamed herself for it.
When the incident was revealed as part of an exercise (rather than a real attack) some days later, she had mixed feelings. She was hugely relieved that her error had not had any real negative consequences for her employer. Accompanying that, however, was a level of resentment and anger. She had spent days in a state of anxiety and upset as a result of an exercise that her employer had commissioned.
Was it wrong to conduct the adversary simulation? Was it wrong to target the employee in this way?
I don’t have all the answers to moral and ethical dilemmas of this nature. This is a grey area that cyber security practitioners have to navigate together with our customers. It’s clear to me, however, that we have an obligation to consider the impact of our operations on the humans who are on the receiving end of our attacks. We have to find a balance between the realistic simulation of an adversary without ethical constraints, and protecting those that we would target from unnecessary distress.
When we employ social engineering, we consciously decide to use tactics that will bring us the best chance of success. Most social engineering relies on a “hook” – something that will cause a victim to engage with us or be interested in something we are doing. This is how we lure somebody into clicking on a link, or downloading a document containing malware to give us a foothold in a network.
The best hooks – the ones with the greatest chance of success – tend to be the ones that will have the most emotional impact. Looking at the start of the COVID-19 pandemic, we saw huge changes in hooks used by adversaries: mass phishing campaigns pivoted to use COVID-19 information as a hook to draw people in. Adversaries did it because it worked, and it worked because people were scared about COVID-19; fear and anxiety are very powerful emotions. A person who is anxious or fearful is less likely to be thinking clearly and logically than somebody who is calm and relaxed.
As professionals, we need to make an ethical decision about the hooks we choose. Consider an employee who has posted publicly on social media about their difficulty conceiving a child. This is information an adversary could use to create a hook which this person is very likely to bite at – a new fertility treatment, or a change in employment benefits allowing fertility treatment costs to be covered. Would it be right to use such a personal and emotive topic as a hook against an employee on behalf of their employer? Clearly not. The level of emotional distress that we might cause to the individual by using this information could be devastating. Even if a criminal adversary might have no qualms about using this information against the employee, for us to do the same in the name of the employer would be unconscionable.
In this case, the decision is clear cut; most of the time, the possible impacts of a hook are less obvious.
What about using COVID-19 as a hook, as adversaries everywhere have been doing? It seems like an obvious way to simulate adversary actions, but we still have to consider what we would be doing to recipients. Have the people we are targeting with that hook lost loved ones to the pandemic? Are they suffering from anxiety as a result of the pandemic and the restrictions on their lives?
Every single time we choose a social engineering hook, we have an obligation to consider the emotional impacts of the hook we choose, and weigh those up against the benefits to our customers and to society of the work we are doing.
When we’re crafting an attack scenario, we use tactics designed to push people into poor decisions. We create a sense of urgency with false deadlines that have to be met. We apply pressure by suggesting negative consequences should our targets fail to comply. We actively seek to cause anxiety that will drive our victims to do what we want them to do. The more pressure and urgency we create, the more likely we are to succeed. Again, we must weigh the very real stress we are causing against the benefits we are realising.
Within any enterprise, trust between employer and employee is critical to a positive and successful security culture. The most successful security programmes avoid a blame culture and instead foster a positive, supportive culture of reporting security incidents.
An open reporting culture is incredibly valuable for cyber security resilience. Whilst we often refer to people as the weak link, they can also be the strongest asset we have in cyber security.
As an employer, breaching trust with your employees by neglecting the care of those who have fallen victim to social engineering attacks can damage the very relationships that you rely on for the protection of your assets and data.
As an industry, whilst social engineering is something that we must do to support the protection of our customers, we must be mindful of the very real human cost of our actions, and take steps to ensure that we minimise the damage to our victims. This is a landscape painted in shades of grey, but we forget the humanity of our targets at our peril.