Informed consent: Social engineering and 'assumed compromise'
Informed consent: Permission granted in full knowledge of the possible consequences
We're all familiar with the concept of informed consent; in medicine, performing an intervention without valid informed consent in place is treated as a criminal act. In red teaming, informed consent is just as important.
When we run a red team simulation, we convene a control group with our clients. The control group works with us throughout the scoping, planning and delivery of a red team exercise to help us ensure that the simulation runs effectively and without damaging our client's business. The control group will generally comprise several senior stakeholders, including senior risk owners, representatives from technical areas of the business, and functions such as legal and HR.
In every red team exercise, we need to choose between different possible paths forward during an attack. The control group will be asked to make decisions about which social engineering tactics and pretexts to use, who to target, and which systems are fair game and which are out of scope. They will be asked to decide whether a vulnerability can be exploited, what data can be accessed and used, and how long a response will be allowed to run with the incident response team unaware that they are facing a benign simulation.
Ethics demand that we carefully consider the implications of our actions. We must ensure that we appreciate the impact on individuals and businesses and that we - as professionals executing these simulations - are providing complete and transparent information to decision makers about the possible consequences of their decisions.
I've managed red team simulations for many years and have worked across many different corporate environments. One area where the implications are not always fully appreciated in red teaming is the use of compromised user accounts, and the impact that this can have on individuals.
Pretexts and pretences
Social engineering features in the vast majority of red team simulations - it's a tactic many adversaries use at some point to gain a foothold in a target organisation, or to move laterally through its systems. Because it is so commonly used, we need to simulate these attacks by doing some social engineering ourselves.
Phishing is probably the best-known social engineering tactic: an adversary emails somebody, trying to persuade them to do something that furthers the attack - such as opening a document, clicking on a link or logging in to a portal. Phishing emails are ubiquitous, but when we phish as part of a red team exercise, there are many possible consequences that we need to consider.
Pretexts for social engineering can be complicated. A pretext is the story you tell to get your target to do the thing you want them to do. Your imagination is the only limit when putting together a pretext, but the choices you make have consequences - and as an ethical red team simulating an attack, there are things an adversary would happily do that we would never countenance. I've discussed before the impact a phishing attack can have on the individual targeted, and there have been some very well-publicised incidents of phishing simulations going wrong for large businesses when staff were upset by the pretexts used. We have an ethical obligation to minimise the emotional impact of social engineering on the targets we choose, whilst also having a legitimate interest in ensuring that any pretext we use has a reasonable chance of provoking the behaviour the red team wants.
Part of the control group's responsibility is to approve the pretexts used. For the control group to make an informed decision about a pretext, we have to anticipate the risks and consequences of the different choices and weigh these against how likely each pretext is to provoke the behaviour we want in the target while avoiding detection by responders.
Employees will often post identifying information on their personal social media profiles, disclosing their likes and dislikes, their personal situations and their interests. Platforms such as Facebook, Instagram, X and LinkedIn can be a rich source of information about employees and their lives. An adversary wishing to target an organisation and its employees will have no qualms about crossing into this personal space. Motivated criminals will use personal information about individuals to persuade them to open documents, click on links and otherwise act to the attacker's advantage. Adversaries will also contact users directly via their personal social media accounts - effectively crossing the boundary between an individual's professional life and their personal online presence. Highly personalised phishing attacks are often very credible and have a very high likelihood of succeeding for an adversary.

There is an ethical and moral difference, however, between a criminal using personal details against an individual and somebody's employer using those details in the same way. Users who are targeted by highly tailored attacks - once they realise they have been targeted - are likely to be strongly affected. They may feel a sense of violation and victimisation. They may experience excessive anxiety and stress as a result. They are likely to be (rightfully, in my opinion) aggrieved that they have been 'used' in this way.
Even where the targeted use of personal information is avoided, some purely work-related pretexts carry a substantial chance of causing excessive distress and generally bring counterproductive outcomes. Offering non-existent financial incentives or bonuses, for example, is likely to succeed in getting employees to click links or interact with emails, but such pretexts are likely to backfire as the targets will be aggrieved when they find out no such incentives exist. Similarly, threats of disciplinary action or consequences for inaction are likely to achieve interaction, but they provoke unnecessary distress in recipients.
An ethical pretext for social engineering of this nature is one which is not personal or highly targeted - and is often very bland. Carefully constructed, even bland, untargeted pretexts can be very successful in attaining the right outcome for the red team, even though they tend to have lower interaction rates than more controversial campaigns. In most situations there is a pretext that poses very little risk of distress or harm to recipients while still offering a realistic prospect of a successful attack outcome. With this in mind, a control group should always look to anticipate the reaction of social engineering targets and ensure that it fully understands the risks of the pretexts it authorises.
Stolen access and stolen accounts
Once a target falls for our social engineering attack, it's likely that we will - if our technical tactics stack up - have achieved our foothold within the target environment. Traditionally, this has often meant that we have managed to introduce and run our malware on the target's workstation because they downloaded a document or clicked on a link. In more modern cloud-based architectures, it might now mean that we have stolen authentication tokens from their browser to use with various SaaS services. What we do with that foothold, and how we handle the access we have, needs careful consideration.
In approving an attack simulation where social engineering provides access to user accounts, an employer is effectively authorising us to access data as that user and to masquerade as that user internally.
When we gain access to an individual user account via social engineering, we gain the same access that the user has themselves. We can read their email. We can access their messages and send messages to their colleagues - using those relationships to introduce malware to more users internally, or to cause other employees to act in ways that further our attack. We can go through their files, their notes and maybe even their browsing history. As professionals, we are of course extremely mindful of protecting our targets: we are careful about what we look at once inside a user's account, and how we handle the user's data. However careful we are, we are always likely to access - at least transiently - data that is personal to the individual.
HR and IT policies are likely to outline that an employer has the right to monitor activities conducted on work-provided equipment, and to underline that an individual may not have a right to privacy when using work assets. Even with these policies in place, users are likely to make reasonable use of work equipment for personal purposes, and they are unlikely to expect that an employer would allow a third party this level of access to their account. Any employee who falls victim to social engineering during a red team, and has their account used by the red team, is likely to experience distress once they learn what has happened - regardless of whether policies and agreements allow the security team to authorise this during simulations.
Assumed compromise – the deliberate foothold
Sometimes as part of a red team, we aren't interested in assessing whether social engineering can result in a foothold - this initial stage of a simulation can be time-consuming (and therefore expensive), and what we really want to find out is whether the response teams can detect an intrusion in progress and interrupt an attacker's activities. When this is the case, we may opt for an 'assumed compromise' exercise. With assumed compromise simulations, we typically use an internal starting point provided through cooperation with our client. The most 'realistic' type of assumed compromise involves asking a user within the organisation to cooperate with us by deliberately introducing our implant onto their device, or by deliberately compromising their user account in some way. This raises ethical problems which we need to consider.
Most employees, when joining an organisation, are compelled to agree to acceptable use and security policies. Invariably, these policies will include provisions that employees must not provide others with access to their accounts, that they must not introduce malicious software to the estate, and that they must immediately report any compromise of their account. I have yet to see any of these policies make an exception for a security simulation! Employees will usually also be advised that any breach of these policies is likely to be a disciplinary matter, which could result in misconduct investigations or even dismissal.
Consider, then, the position of an employee who is nominated to be the target of an assumed compromise in this way.
The ideal assumed compromise target is somebody within the organisation with no special rights or privileges - an entry-level or mid-level employee with no permissions for sensitive functions and no elevated IT or cloud access rights. Once a suitable candidate is identified, somebody within the security function - usually in a much more senior role than this employee, but not within their direct reporting line - will instruct the employee to breach security policies they have agreed to, and will also instruct them not to report this through the proper channels. The employee will likely also be asked not to share the information with their direct chain of command.
The employee is being asked, by somebody in a position of power, to knowingly expose their account to misuse, to hand any personal data it contains to a third party about which they know nothing, and to allow that unknown third party to interact with others in the company using their identity.
If something goes wrong with the execution of a red team in this type of assumed compromise scenario, and risk management is not robust, the consequences for the employee chosen as the point of compromise could be dire. Employees can be held accountable for the actions of the red team. At a minimum, relationships with co-workers could be damaged. In the worst case, the employee could be disciplined or even dismissed for breaching contract terms in allowing this to take place.
Where a user is asked to be part of an assumed compromise, an ethical solution requires that the user is fully informed about what they are agreeing to, and that they are fully protected from adverse consequences should they agree. The user should be able to question the rules of engagement, and understand what access to their data the testing team will have and what that potentially means for them. If, for example, their identity is to be used internally to phish their colleagues as part of the simulation, the user should know which other employees are being targeted and should be able to raise concerns or veto interactions which might damage important professional or personal relationships. The user should be involved in discussions about what data the test team will be permitted to access, and should be able to identify areas of data they consider personal or off-limits to the team (areas of their home directory or their emails, for example). Users should also be provided with written details of the request and the agreed controls, signed off by senior members of the control group and, ideally, legal and HR representatives. This written agreement must include protection and immunity for the user in respect of the policies and contractual agreements they are being asked to breach as part of the simulation. Having understood the implications of consenting to such a request, a user should be free to withhold their consent with no retribution.
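To make this concrete, the agreed controls can be captured in a structured record alongside the signed document. The sketch below is purely illustrative - the ConsentRecord structure and its field names are my own assumptions rather than any standard template - but it shows the kind of information that should be pinned down in writing before the exercise begins.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ConsentRecord:
    """Illustrative sketch of what a cooperating employee agrees to.

    Field names are assumptions for the sake of example; the signed written
    agreement remains the authoritative document.
    """
    employee: str                  # the nominated point of compromise
    exercise_reference: str        # ties the record to the rules of engagement
    agreed_on: date
    data_off_limits: list[str] = field(default_factory=list)    # e.g. personal folders, emails
    vetoed_targets: list[str] = field(default_factory=list)     # colleagues the user will not have phished in their name
    policy_exceptions: list[str] = field(default_factory=list)  # acceptable use / security clauses the user is excused from breaching
    signed_off_by: list[str] = field(default_factory=list)      # control group, legal and HR signatories

    def offers_protection(self) -> bool:
        """Without explicit policy exceptions and senior sign-off, the record gives the user no cover."""
        return bool(self.policy_exceptions) and len(self.signed_off_by) >= 2
```

Whether this lives in code, a ticket or a signed document matters far less than the fact that it exists, and is agreed, before the implant is introduced.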
Having understood the possible implications of knowingly allowing a third party access to their account during an attack simulation, most users would, I think, be understandably reluctant to offer their informed consent. I would be, too.
In some jurisdictions, it may simply not be workable for an assumed compromise scenario to use an existing user account, for reasons including data protection laws and the right to personal privacy. Even where it can be done, as you can see above, it is tricky to do ethically if we insist that consent is genuinely informed. Fortunately, there are plenty of alternative approaches we can use to simulate these situations. Purpose-commissioned accounts are an obvious fallback. These come with compromises in the 'realism' of the account and difficulties in bypassing robust joiners/movers/leavers (JML) processes, but they can often be used to great effect and do not carry the same risks to an individual employee. More often than not, we therefore aim to use simulated account access in this way.
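As a sketch of the purpose-commissioned alternative - and only a sketch: the function name, the persona, and the assumption of an Entra ID tenant with an application token holding User.ReadWrite.All are all mine, not a prescribed method - a synthetic exercise account can be provisioned via the Microsoft Graph API and recorded by the control group so it can be tracked and removed after the exercise.

```python
# Illustrative only: provisioning a purpose-commissioned account for an
# assumed compromise exercise via the Microsoft Graph REST API. Assumes an
# Entra ID (Azure AD) tenant and a token with User.ReadWrite.All permission.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def create_exercise_account(token: str, domain: str) -> dict:
    """Create a synthetic account; the control group records the identity for later removal."""
    body = {
        "accountEnabled": True,
        "displayName": "Alex Example",                # placeholder persona agreed with the control group
        "mailNickname": "alex.example",
        "userPrincipalName": f"alex.example@{domain}",
        "passwordProfile": {
            "forceChangePasswordNextSignIn": False,
            "password": "<set-from-a-secure-secret-store>",  # placeholder - never hard-code a real credential
        },
    }
    resp = requests.post(
        f"{GRAPH}/users",
        json=body,
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```

In practice the synthetic identity still has to be routed through, or deliberately around, the organisation's JML processes - which is exactly where the compromises in realism mentioned above come from.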
The takeaway
Because red teaming touches the whole business – including employees – risk management is inherently complicated. Technical risk is only part of the picture – risks to the people we target, including to their emotional wellbeing and their livelihoods, are real and need to be managed with the same rigour.
A well-managed simulation can be conducted with a minimum of risk in all areas, but part of our responsibility as a red team is to anticipate and articulate risks so that the control group can provide informed consent to our activities, understanding the potential consequences of the decisions they make.
If we are to involve individual employees in our simulations, we have an ethical obligation to ensure those individuals are fully informed about what they are agreeing to, and that they are protected from harm.
At times, ensuring that we act ethically and with due regard to the rights and wellbeing of others means that we, within our control group, will make decisions that reduce the realism of our simulation of adversary tactics, or that make our activity more likely to be detected or less likely to succeed. Such compromises are a necessary part of running an adversary simulation, but they need not undermine the exercise's ultimate efficacy in producing useful lessons about the resilience of the organisation being targeted.