You’re in a coffee bar and you need to connect your smartphone to the Wi-Fi, so you check your screen and see the following options. Assuming you know the password, or could ask for it if one were requested, which one would you choose?
Depending on your level of security awareness, you will choose either the first one, mi38, which seems to have the best signal, or v29o, which has a decent signal but is secured and requests a password. Now imagine that you are in the same coffee bar, but this time you see the following list of Wi-Fi networks on your smartphone screen. Which one would you choose now?
Whether your security awareness is high or not, I’m pretty sure you would choose 3gk6. What has changed? They are the same Wi-Fi networks, just presented in a different manner. You may not even be aware of it, but this presentation will have influenced your decision. Welcome to the power of the nudge!
Those nudges that sway your decisions without your knowledge
In 2008, Richard Thaler and Cass Sunstein published Nudge: Improving Decisions about Health, Wealth, and Happiness, a book that helped popularize “nudge theory” and the concept of “choice architecture“. In it, the authors argue that by carefully designing the options shown to the public, as well as the way those options are presented or framed, we can subtly influence the decision made, without limiting freedom of choice.
According to the authors, a nudge is: “any aspect of the choice architecture that alters people’s behavior in a predictable way without forbidding any options or significantly changing their economic incentives.”
The book includes dozens of successful cases of nudges and choice architectures: images of flies etched on urinals that reduced spillage on men’s bathroom floors; fruits and vegetables placed at the front of the self-service line, which sold better than when placed at the end; roadside displays showing the speed of approaching vehicles, making drivers slow down; forms presenting organ donation as the default option when somebody dies, which produce vast differences in donation figures between countries; and I could go on. The book is genuinely fun and illustrative.
As we saw in previous articles in this series, our rationality is limited and our decisions are systematically subject to biases and heuristics that produce undesirable results in some complex situations. Nudges rest on the theoretical framework of two cognitive systems, System I and System II, and their main feature is that they exploit our irrationality.
Over the years, the concept of the nudge has been refined and new definitions have appeared. A particularly useful one comes from behavioral scientist P. G. Hansen: “A nudge is a function of (I) any attempt at influencing people’s judgment, choice or behavior in a predictable way (II) that is motivated because of cognitive boundaries, biases, routines, and habits in individual and social decision-making posing barriers for people to perform rationally in their own self-declared interests, and which (III) works by making use of those boundaries, biases, routines, and habits as integral parts of such attempts.”
This definition suggests the following features of nudges:
- They produce predictable results: they influence behavior in a predictable direction.
- They fight irrationality: they intervene when people fail to act rationally in their own self-interest due to cognitive boundaries, biases, routines and habits.
- They tap into irrationality: they exploit those same cognitive boundaries, heuristics, routines and habits to steer people towards better behavior.
Let’s go back to the first example, the Wi-Fi networks. According to Hansen’s definition, the second way of presenting the network list works as follows:
- It produces predictable results: more users turn to the most secure choices.
- It fights irrationality: it counters the unthinking impulse to connect, which would otherwise be satisfied by the first Wi-Fi network with a good signal that appears in the list, regardless of whether it is open or secured.
- It taps into that irrationality: green elements are perceived as more secure than red ones, we favor the first options in a list over the last ones, we pay more attention to visual cues (locks) than to textual ones, we privilege (supposed) speed over security, etc.
And all this while showing the same networks, without forbidding any option or changing users’ economic incentives. In other words, all these biases are tapped by displaying the preferable option first in the list, in green, with a lock icon in addition to text, and by sorting the list first by security and then by connection speed. Ultimately, the biases are analyzed and a nudge is designed to exploit them, while respecting freedom of choice.
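The ordering and visual cueing described above can be sketched in a few lines. This is a minimal illustration, not real operating-system code: the `WifiNetwork` structure, its fields and the `nudge_order` function are assumptions for the example, reusing the SSIDs from the article.

```python
from dataclasses import dataclass

# Hypothetical network record for this sketch; real Wi-Fi APIs differ.
@dataclass
class WifiNetwork:
    ssid: str
    secured: bool  # True if the network requests a password
    signal: int    # signal strength as a percentage, 0-100

def nudge_order(networks):
    """Choice architecture: secured networks first, then by signal.
    Sorting on (not secured, -signal) puts the preferable option on top."""
    return sorted(networks, key=lambda n: (not n.secured, -n.signal))

networks = [
    WifiNetwork("mi38", secured=False, signal=90),
    WifiNetwork("v29o", secured=True, signal=75),
    WifiNetwork("3gk6", secured=True, signal=85),
]

for n in nudge_order(networks):
    cue = "[lock]" if n.secured else "[open]"  # visual cue beats text alone
    print(f"{cue} {n.ssid} ({n.signal}%)")
```

With this ordering, 3gk6 (secured, strong signal) floats to the top, while the open mi38 drops to the bottom despite having the best signal.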
Several studies have reproduced this Wi-Fi experiment, successfully steering users’ behavior towards more secure choices. In all cases, they reached similar conclusions:
- Well-designed nudges have the power to influence decisions.
- The more likely a user is to behave insecurely, the greater this capacity to modify their behavior.
- The power to alter behavior increases when several types of nudges are combined, engaging both System I and System II.
How to influence your employees’ security behavior
Every day, people within your organization face a wide range of security decisions beyond choosing a secure Wi-Fi network:
- If I download and install this app, will it involve a risk for my security?
- If I plug this USB into the laptop, will it be an input vector for viruses?
- If I create this short and easy-to-remember password, will it be easily cracked?
This is why security policies exist: they guide users’ behavior by requiring them to act as securely as possible within the organization’s security context and aims. But are there alternatives? Is it possible to guide people’s security choices while respecting their self-determination and without limiting their options? In other words: can we get people to act securely without them being aware that they are being influenced, and without them feeling that their freedom is being restricted?
According to R. Calo, a professor specializing in cyberlaw, there are three types of behavioral intervention:
- Codes: they manipulate the environment to make the undesirable (insecure) behavior (almost) impossible. For instance, if you want your system’s users to create secure passwords, you may reject any password that does not follow the password security policy: “at least 12 characters long, including alphanumeric and special characters as well as upper and lower case, and not matching any of the last 12 passwords”. The user then has no choice but to comply, or they will not be able to access the system. In general, all security measures that leave no options fall into this category: blocking USB ports to prevent potentially dangerous devices from being connected; restricting browsing to a white list of sites; limiting the size of e-mail attachments; and many others typically found in organizational security policies. Codes are really effective at modifying behavior, but they neither leave room for choice nor exploit limited rationality, so they cannot be considered “nudges”. In fact, many of these measures are unpopular with users and may drive them to look for workarounds that completely defeat their purpose, such as writing complex passwords on a post-it stuck to the monitor: users end up protected against remote attacks, while in-house attacks are made easier and even encouraged.
- Nudges: they exploit cognitive biases and heuristics to steer users towards wiser (more secure) behaviors. For example, going back to passwords, if you want your system’s users to create more secure passwords according to the security policy guidelines above, you can add a password strength indicator to signup forms. Users feel the need to achieve a stronger password, so they are more likely to keep adding characters until the result is a flamboyant green “robust password”. Even though the system does not forbid weak passwords, thus respecting users’ self-determination, this simple nudge drastically increases the complexity of the passwords created.
- Notices: they are purely informative interventions intended to prompt reflection. For instance, the new-password form may include a message describing the expected characteristics of new passwords and explaining how important strong passwords are in preventing attacks. Unfortunately, informative messages are quite ineffective on their own, since users tend to ignore them and often find them unintelligible. These notices cannot be considered “nudges” either, since they exploit neither biases nor cognitive boundaries. Nevertheless, their efficacy can be notably increased if they are combined with a nudge: for instance, by including both the message and the strength indicator on the same password creation page. These hybrid nudges aim to engage System I, fast and intuitive, as well as System II, slow and deliberate, through informative messages.
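The “code” style of intervention described above can be sketched as a hard validator that simply refuses non-compliant passwords. The function name and the exact checks are illustrative assumptions following the policy quoted in the text; this is not a production password checker.

```python
import re

def meets_policy(password: str, previous: tuple = ()) -> bool:
    """Reject any password that breaks the policy from the text:
    at least 12 characters, upper and lower case, a digit, a special
    character, and not matching any of the last 12 passwords."""
    return all([
        len(password) >= 12,
        re.search(r"[a-z]", password) is not None,
        re.search(r"[A-Z]", password) is not None,
        re.search(r"\d", password) is not None,
        re.search(r"[^a-zA-Z0-9]", password) is not None,
        password not in previous[-12:],
    ])

# The user has no choice: non-compliant passwords are simply refused.
print(meets_policy("Str0ng&Secret!"))  # long, mixed case, digit, symbols
print(meets_policy("weakpass"))        # too short, no variety
```

Note how this leaves no room for choice, which is exactly why it is a code rather than a nudge.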
Therefore, to ensure the success of a behavioral intervention, it is desirable to engage both types of processes.
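A hybrid intervention along these lines could pair an informative notice (System II) with a color-coded strength meter (System I) on the same page. The scoring rule below is a deliberately naive assumption for illustration; real meters are far more sophisticated.

```python
import re

def strength(password: str) -> tuple[int, str]:
    """Naive 0-4 score mapped to the colored label of a strength meter."""
    score = sum([
        len(password) >= 12,
        bool(re.search(r"[a-z]", password)) and bool(re.search(r"[A-Z]", password)),
        bool(re.search(r"\d", password)),
        bool(re.search(r"[^a-zA-Z0-9]", password)),
    ])
    labels = ["red: very weak", "red: weak", "orange: fair",
              "yellow: good", "green: robust password"]
    return score, labels[score]

# Notice (System II) displayed next to the meter (System I).
NOTICE = ("Strong passwords are your first line of defense: "
          "use 12+ characters mixing cases, digits and symbols.")

score, label = strength("Str0ng&Secret!")
print(NOTICE)
print(f"Password strength: {label}")
```

Weak passwords are never forbidden here; the user is merely pulled towards the green label.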
The most effective nudges in the field of information security
Hybrid nudges are the most effective, since they combine thought-provoking information with some other cognitive trick that exploits biases or heuristics:
- Default options: offer more than one option, but always make sure that the default is the most secure one. Even if you allow users to select a different option, most of them will not.
- (Subliminal) information: a password creation page induces users to create stronger passwords if it shows images of hackers, or simply of eyes, or even if the wording is changed: “enter your secret” instead of “enter your password”.
- Targets: present the user with a goal, for instance a strength indicator, a percentage indicator or a progress bar. Users will then strive to reach it. This type of intervention can also be categorized as feedback.
- Feedback: give the user information that lets them know whether each action is achieving the expected result while a task is being carried out, for example by reporting the security level reached during the set-up of an application or service, or the risk level of an action before tapping “Send”. Mind you, the language must be carefully adapted to the recipient’s skill level. In one study, metaphors such as “locked doors” and “bandits” helped users understand the information better and consequently make better choices. In another study, researchers periodically informed Android users about the permissions used by their installed apps, prompting them to review the permissions they had granted. In a follow-up study, the same researchers told users how their location was being used, and users consequently limited apps’ access to it. In yet another study, telling people how many others could view their social network post led a large number of users to delete it to avoid later regret.
- Conventional behavior: show each user where they stand relative to the average. Nobody likes to lag behind; everybody wants to be above average. For instance, after a password has been chosen, the message “87% of your colleagues have created a strong password” prompts users who created a weak password to reconsider and create a more secure one.
- Order: present the most secure option at the top of the list. We tend to select the first option we see.
- Standards: use pictographic conventions: green means “secure”, red indicates “danger”, a lock represents security, and so on.
- Prominence: highlighting the most secure options draws people’s attention to them, making them easier to select. The more visible an option is, the more likely it is to be chosen.
- Frames: you can present an action’s outcome as “making a gain” or “avoiding a loss”. Loss aversion tends to be the more powerful motivator.
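Several of these techniques are trivial to prototype. As one illustration, the social-comparison message from the “conventional behavior” item above could be generated like this (the 87% figure and wording come from the article; the function itself is hypothetical):

```python
def social_norm_message(strong_count: int, total: int) -> str:
    """Leverage conformity: nobody wants to be below average."""
    pct = round(100 * strong_count / total)
    return f"{pct}% of your colleagues have created a strong password."

# Shown right after the user picks a password.
print(social_norm_message(87, 100))
```

The same one-liner pattern applies to targets and feedback: compute a simple statistic, phrase it so the user compares themselves against it.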
Nudges’ ethical implications
As you may imagine, all this is not without ethical implications, since you are designing interventions that influence users’ behavior by exploiting gaps in their cognitive processes. In short, you are hacking users’ brains.
Researchers K. Renaud and V. Zimmermann have published a full paper exploring the ethics of nudging in its broadest sense. They lay out a number of general principles for creating ethical nudges, so before setting out to design your own organizational nudges, I recommend you consider the following five ethical principles:
- Autonomy: the end user must be free to choose any of the options provided, regardless of the direction in which the nudge points. In general, no option should be forbidden or removed from the environment; if an option must be restricted for security reasons, that restriction must be justified.
- Benefit: the nudge must only be deployed when it provides a clear benefit, so that the intervention is fully justified.
- Justice: as many people as possible must benefit from the nudge, not only its author.
- Social responsibility: both the nudge’s anticipated and unanticipated results must be considered. Pro-social nudges that advance the common good should always be contemplated.
- Integrity: nudges must be designed with scientific support, whenever possible.
Use nudges for Good
Nudges are becoming more common in the field of cybersecurity as a way of getting people to choose the option that the nudge designer considers best or most secure. New choice architectures are being explored as a means of designing better security decision-making environments without resorting to restrictive policies or limiting options. Even though neutral design is a fallacy, be cautious and ethical when designing your organizational nudges: design them to help users overcome the biases and heuristics that endanger their privacy and security decisions.
Push your organization towards greater security, always respecting people’s freedom.
Gonzalo Álvarez de Marañón
Innovation and Labs (ElevenPaths)