
How weaponising disinformation can bring down a city's power grid


Ph.D. Student: Gururaghav Raman | Advisor: Jimmy C.-H. Peng | Collaborators: Talal Rahwan, Bedoor AlShebli, and Marcin Waniek, New York University, Abu Dhabi, UAE | Project Duration: 2018-2020


Social media has made it possible to manipulate the masses via disinformation and fake news at an unprecedented scale. This is particularly alarming from a security perspective, as humans have proven to be one of the weakest links when protecting critical infrastructure in general, and the power grid in particular.

Yet, while numerous blackout prevention and mitigation strategies have been proposed in the power systems literature, the link between disinformation and blackouts has never been studied to date. Driven by this observation, we seek to answer the following question: can an adversary bring down a city’s power grid using disinformation without any physical or cyber intrusions?

Disinformation attack mechanism

We consider an attack in which an adversary attempts to manipulate the behaviour of citizens by sending fake discount notifications encouraging them to shift their energy consumption into the peak-demand period. Such a shift may result in the tripping of overloaded power lines, leading to blackouts. An overview of this attack and the disinformation message are shown in Figure 1.


Figure 1. Illustration of how the attacker launches the disinformation attack, thereby altering the energy consumption patterns of a portion of the population. Importantly, not every recipient follows through on the notification.

Ultimately, the success of such an attack depends on the follow-through rate, i.e., the fraction of people who behave as intended by the attacker. We analyze the impact of such behavioural manipulation on the power grid. To this end, we modelled the power grid of Greater London and simulated the behaviour of residential energy consumers. Importantly, our model considers residential electric vehicle (EV) adoption, since EV owners control a substantial amount of deferrable energy and can thus cause greater harm when manipulated by an adversary. We vary the EV adoption level in the city and model the capacity upgrades necessary for the grid to support the demand corresponding to each such level.
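The core mechanism can be sketched in a few lines: shifted deferrable load raises the peak demand on a line, and the line trips once its overload tolerance is exceeded. All numbers below (baseline demand, per-household deferrable load, line rating) are hypothetical placeholders, not values from our Greater London model:

```python
def peak_demand_after_attack(base_peak_kw, deferrable_kw_per_household,
                             n_households, follow_through):
    """Peak demand once a fraction of households shift their deferrable
    load (e.g. EV charging) into the peak-demand period."""
    shifted = n_households * follow_through * deferrable_kw_per_household
    return base_peak_kw + shifted

def line_trips(peak_kw, rated_peak_kw, overload_tolerance=0.10):
    """A line trips if demand exceeds its rating by more than the
    assumed overload tolerance (10% in the baseline scenario)."""
    return peak_kw > rated_peak_kw * (1 + overload_tolerance)

# Example: 1,000 households with 7 kW of deferrable EV charging each,
# on a line rated for a 5 MW baseline peak.
peak = peak_demand_after_attack(5_000, 7, 1_000, follow_through=0.17)
print(peak, line_trips(peak, 5_000))  # 6190.0 True
```

In the full model, tripped lines disconnect downstream consumers, which is what the blackout percentages in the next section measure.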

Attack impact on the power grid

We consider a scenario where the grid is heavily loaded and any distribution line can sustain at most a 10% increase in the peak demand flowing through it. Figure 2a presents the percentage of consumers who experience a blackout given varying follow-through and EV adoption rates. As can be seen, increasing the EV adoption up to 20% increases the system's vulnerability to the attack, whereas beyond 20% the system's resilience increases, i.e., a greater follow-through rate is required to achieve the same attack magnitude. This trend is caused by two opposing forces: (i) increased vulnerability due to the consumers controlling more deferrable energy, and (ii) increased resilience due to the grid's upgraded capacity to cope with the increased number of EVs. When the EV adoption is at or below 20%, the former force outweighs the latter, and hence we see an increase in the system's vulnerability. The opposite is true when the EV adoption exceeds 20%, leading to the observed increase in resilience. Next, to get a sense of the distribution of the blackout across the city, we depict the state of the system corresponding to two different cells in the heat map; see Figures 2b and 2c. As can be observed, the impact is dispersed throughout the city rather than being concentrated in a few massive pockets.
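One way to see both forces at once is through the smallest follow-through rate that trips a line whose capacity was upgraded to exactly cover current demand: the overload headroom grows with EV adoption, but so does the attacker-controlled deferrable load. The per-unit magnitudes below are illustrative placeholders, not our calibrated values, and with these toy numbers the deferrable-load force dominates throughout; the resilience turnaround at 20% adoption emerges only in the full network model with its detailed upgrade schedule:

```python
def critical_follow_through(ev_adoption, base_peak=1.0, base_deferrable=0.2,
                            ev_load=1.5, overload_tol=0.10):
    """Smallest follow-through rate that overloads a line whose rating
    was upgraded to exactly cover the current peak (base + EV demand).
    All magnitudes are hypothetical per-unit placeholders."""
    rated_peak = base_peak + ev_adoption * ev_load        # upgraded capacity
    deferrable = base_deferrable + ev_adoption * ev_load  # attacker-shiftable load
    # The line trips once the shifted demand exceeds the overload headroom.
    return min(1.0, overload_tol * rated_peak / deferrable)

for adoption in (0.0, 0.1, 0.2, 0.3):
    print(adoption, round(critical_follow_through(adoption), 3))
```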


Figure 2. Impact of an attack on the power distribution network of Greater London. a: The percentage of consumers suffering from a blackout as a result of the attack given different follow-through rates and EV adoption rates. The figure also highlights the columns corresponding to projected EV adoption rates for the UK in the years 2020, 2030, 2040, and 2050. b: Visualization of the status of every power distribution line in the system for follow-through and EV adoption rates of 0.17 and 0.20, respectively. Grey indicates active lines, whereas red indicates lines that have tripped as a result of overloading. c: The same as (b), but for follow-through and EV adoption rates of 0.12 and 0.20, respectively.

We then study how the grid's vulnerability depends on the peak overloading capacity of the distribution lines. Say the overloading capacity is increased from 10% to 15%. Simulating the system for follow-through and EV adoption rates of 0.17 and 0.20, respectively, we find that the attack leaves only 5.9% of consumers offline. This is in contrast to the 35.4% of consumers affected by the blackout when the line capacity was 10%. Further increasing the overloading capacity to 20% reduces the size of the blackout to 1.4% of the consumers.

To obtain more insight, we analyze the grid in terms of the line capacity upgrades that are necessary to support increasing EV adoption. The results shown thus far are for the case where, for any given EV adoption rate, the grid is assumed to be upgraded to support exactly that rate. However, if the grid is upgraded to support more than this rate, the impact of the attack is substantially alleviated, and vice versa. Taking the year 2025 as an example, if the grid has not been upgraded since 2020, then a mere 5% follow-through rate can bring the grid down completely. On the other hand, if by 2025 the grid has been upgraded to support the EV adoption projected for 2030, then even a 100% follow-through rate would cause a blackout for less than 20% of the residents. These results highlight the need for future grid upgrades to be dictated not only by the technical aspects governed by physical laws, but also by the behavioural aspects of consumers, who may act unpredictably and irrationally, especially when subjected to disinformation. However, since grid upgrades come at a high cost to the power utility, perhaps a more realistic solution would be to focus on raising consumers' awareness and immunizing them against disinformation.

Propagation of disinformation through social networks

In a disinformation-based attack, the social aspect could play an important role, since people may unknowingly amplify the attack by forwarding the disinformation notification to their friends.


Figure 3. Attack diffusion. a: An illustration of how disinformation can propagate through a social network. b: The disinformation notifications shown to participants in different conditions, which vary depending on whether or not the notification contains an external link, and whether the sender is a stranger (assumed to be the attacker, who uses spoofing services to mask the sender as SMSAlert) or a friend (named John Smith in the survey). c: Given different percentages of initial recipients (10%, 20% and 30%), and different values of k (representing the number of friends to whom each recipient considers forwarding the notification), the subfigures depict the follow-through rates after a single step of propagation in social networks consisting of 1 million individuals. The networks were generated using four network models: Barabási-Albert (BA), Erdős-Rényi (ER), Watts-Strogatz (WS), and Newman Configuration (NC). The propagation is simulated using two influence propagation models: independent cascade (IC) and linear threshold (LT). The participants' propensities to follow through or forward the notification (which were reported on a Likert scale from 0 to 10) were mapped to actual probabilities (from 0 to 1) using three functions: linear, squared, and cubic. Results are shown for two cases, one where the notification contained an external link (marked as an 'X' in the subfigures), and one where it did not (marked as an 'O').

To model the spread of disinformation, we use two standard models of influence propagation, namely, independent cascade, and linear threshold. These models are parameterized based on a survey of 5,124 participants who were recruited through Amazon Mechanical Turk. 
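A single propagation step of the independent-cascade model can be sketched as follows. The network, the forwarding probability, and the cap k are illustrative placeholders here; in the study these are the generated million-node networks and the survey-derived probabilities:

```python
import random

def one_step_cascade(friends, initial_recipients, p_forward, k, seed=0):
    """One step of independent-cascade spread: each initial recipient
    considers up to k randomly chosen friends and forwards the message
    to each of them independently with probability p_forward.
    Returns the set of everyone who has seen the message."""
    rng = random.Random(seed)
    seen = set(initial_recipients)
    for person in initial_recipients:
        candidates = rng.sample(friends[person], min(k, len(friends[person])))
        for friend in candidates:
            if rng.random() < p_forward:
                seen.add(friend)
    return seen

# Tiny illustrative adjacency list, not a generated BA/ER/WS/NC graph.
friends = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2], 4: [3]}
reached = one_step_cascade(friends, initial_recipients=[0], p_forward=0.5, k=2)
```

The linear-threshold model differs in that a node is activated once the combined weight of its activated neighbours crosses a per-node threshold, rather than through independent coin flips per edge.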


Specifically, the participants were shown a message notifying them of a 50% discount in their electricity rate from 8PM to 10PM. They were then asked to specify the likelihood that they would change their electricity-use patterns to take advantage of this discount, and the likelihood that they would forward this message to their friends. We tested two factors that may influence the behaviour of the participants: (i) the notification sender, and (ii) the notification content. As for the first factor, while such notifications are typically received from the power utility, we analyzed the cases when they are instead received from either a stranger or a friend. We considered these two possibilities since some people may receive the spoofed message directly from the attacker (who is a stranger to them), while others may receive it indirectly through friends who forward it to them. As for the second factor—the notification content—we analyzed two variants: one where the discount can only be claimed by clicking on an external link, and another where the discount is unconditional. This manipulation allows us to understand the differences, if any, between the context of phishing and spam attacks—which require the recipients to click on an external link embedded in the message—and the context of our disinformation attack—where no such link is necessary. Accordingly, the participants were randomly assigned to one of four conditions: (i) receive a notification with a link from a stranger; (ii) receive a notification without a link from a stranger; (iii) receive a notification with a link from a friend; (iv) receive a notification without a link from a friend. The disinformation messages corresponding to these scenarios are shown in Figure 3b.

Overall, we consider two influence propagation models, three values of k (the maximum number of friends to whom a disinformation message can be forwarded), and three mapping functions (linear, squared, and cubic) that translate the survey responses into a probability value in [0, 1]. The final follow-through rates are shown in Figure 3c. Unlike phishing and spam attacks, the disinformation attack considered in our scenario does not require the recipient to click on an external link. To evaluate how this difference affects the impact of the attack, we run similar simulations based on the responses of participants who were shown a message containing an external link. We found that omitting the link always increases the follow-through rate (see Figure 3c); depending on the model, the increment ranges from 3.4% to 9.8% at the end of one step of propagation.
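One natural reading of the three mapping functions is to raise the normalized Likert response to the power 1, 2, or 3; higher exponents discount lukewarm responses more heavily. A minimal sketch under that assumption:

```python
def likert_to_probability(response, mapping="linear"):
    """Map a 0-10 Likert response to a probability in [0, 1] using a
    linear, squared, or cubic function. This power-law form is one
    natural reading of the three named mappings, not a quoted formula."""
    exponent = {"linear": 1, "squared": 2, "cubic": 3}[mapping]
    return (response / 10) ** exponent

print(round(likert_to_probability(7, "linear"), 3))   # 0.7
print(round(likert_to_probability(7, "squared"), 3))  # 0.49
print(round(likert_to_probability(7, "cubic"), 3))    # 0.343
```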

Now, consider the case when the EV adoption rate in the power grid is 15%. In this case, if 30% of the population were targeted by the attacker initially, then our results in Figure 3c show that the resultant follow-through rate ranges from 9.4% to 26.8%. Our power grid simulations shown earlier in Figure 2a indicate that these follow-through rates would result in a blackout for 5.6% to 100% of the residents, respectively. To put it differently, behavioural manipulation through disinformation can indeed lead to a full blackout in a heavily loaded grid.


We have demonstrated that an adversary can cause blackouts on a city scale, not by tampering with the hardware or hacking into the control systems of the power grid, but by focusing entirely on behavioural manipulation. On a broader note, our study is the first to demonstrate that, in an era when disinformation can be weaponised, vulnerabilities in critical infrastructure arise not only from hardware and software, but also from the behaviour of consumers.

Supplementary data file

The power systems data supporting the findings in this study are available for download here.

Related works

Marcin Waniek, Gururaghav Raman, Bedoor AlShebli, Jimmy Chih-Hsien Peng, and Talal Rahwan, "Traffic networks are vulnerable to disinformation attacks", Scientific Reports, vol. 11, no. 5329, 2021. Full paper available online at:
