It was a steel plant operator’s worst nightmare: a blast furnace that could not be shut down properly. The resulting damage throughout the plant was significant, although its full extent has not been made public. The details were withheld by Germany’s Federal Office for Information Security, the Bundesamt für Sicherheit in der Informationstechnik (BSI), which did confirm that the event happened at one of that country’s steel mills in 2014. What the BSI regards as far more important than the outcome is the cause: this was no accident but a deliberate attack carried out through computer software.

Hassan Farhangi, director of BCIT’s Group for Advanced Information Technology, stands on the roof of the campus building that houses a microgrid to model the effects of cyber attacks. Photo credit: Scott McAlpine

More specifically, that cause was spear phishing, an email scam in which a message that appears to come from a legitimate source instead infects the user’s system with malware, computer code designed to wreak havoc. Typically the effect is the disruption of a large database full of information. That is inconvenient, but good administrators invariably back up such resources in an unconnected location so that they can be recovered afterward. In this case, however, the target was an industrial complex’s control systems. In contrast to the quiet demise of bits and bytes in some air-conditioned server room, the effect here was the dramatic physical destruction of large pieces of machinery. There are no back-ups for this kind of infrastructure.

The attack served as a rude awakening for a process control industry that may soon have to deal with unprecedented threats to security. There are enough things that can go wrong in a place like a steel mill without having to worry about somebody breaking in to trash the place. The people responsible for this facility would have erected fences, put locks on the doors, hired guards and vetted the staff who were allowed to run its critical components. Given the sizeable investment at stake and the potential for destruction, it would have been foolish to do anything less. Yet, because their stock-in-trade is molten metal rather than raw data, it is entirely possible that these same people underestimated or overlooked how vulnerable they were to electronic assault.

Depending on how far back your memory goes, it could be hard to recall a time when you did not worry about how safe you were when dealing with information technology. It may have started when you realized just how important it was to keep your bank card personal identification number (PIN) from prying eyes. By now most of us have at least a passing acquaintance with the arcane language of cybersecurity: virus, hacker, phishing, spoofing, Trojan Horse, worm, firewall and back door. We are all too aware that even the most humble electronic instrument in our home might unexpectedly turn into a gateway for mischief, vandalism, theft or outright abuse.  

That awareness now extends to most workplaces. In office-centred businesses, where every desk sports a monitor linked to a network, employees have become sensitive to the vulnerability of systems that are all but indispensable to getting anything done. In a more traditional factory setting, however, that sensitivity could well be muted. At cement plants, oil refineries, or metal working shops, a great deal still gets done by people who do not necessarily spend their days typing or poring over a screen. Physical risks — from fast-moving conveyor belts to toxic chemicals — are front-and-centre in these environments, where doors and walls are likely emblazoned with all manner of cautionary announcements. The noise and shuffle of real materials being manipulated in real time can easily drown out the warning signs of electronic intrusion.

Ironically, it is these very industrial settings that served as the first stop for the kind of electronic interactions that have come to suffuse our lives. As early as the 1940s, when electrical engineers were developing the communications protocols that would enable telephones the world over to connect seamlessly with one another, industrial facilities were beginning to install switches that would replace manual labour on the shop floor, such as the opening or closing of a valve. By the late 1960s, the heart of this automation was the Programmable Logic Controller (PLC), an electronic component that converts a signal from one station into a mechanical action somewhere else.
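
To make the idea concrete, a PLC’s work can be pictured as an endless “scan cycle”: read the inputs, run a small control program, write the outputs, repeat. The sketch below, written in Python purely for illustration, mimics that cycle for an imaginary tank and inlet valve; real controllers run dedicated languages such as ladder logic on purpose-built hardware, and every name and setpoint here is invented.

```python
import time

# Hypothetical process: a tank that fills while its inlet valve is open
# and drains otherwise. All names and numbers are invented.
tank_level = 40.0          # percent full
inlet_valve_open = False   # the "mechanical action" the controller drives

LOW_SETPOINT = 30.0        # open the valve below this level
HIGH_SETPOINT = 70.0       # close it above this level

def read_level_sensor():
    """Stand-in for sampling a real analog input channel."""
    return tank_level

def drive_valve(open_it):
    """Stand-in for energizing a real output coil."""
    global inlet_valve_open
    inlet_valve_open = open_it

# The scan cycle: read inputs, evaluate logic, write outputs, repeat.
for scan in range(20):
    level = read_level_sensor()

    # Hysteresis keeps the valve from chattering around a single setpoint.
    if level < LOW_SETPOINT:
        drive_valve(True)
    elif level > HIGH_SETPOINT:
        drive_valve(False)

    # Crude plant simulation so the loop has something to control.
    tank_level += 4.0 if inlet_valve_open else -3.0

    state = "open" if inlet_valve_open else "closed"
    print(f"scan {scan:2d}: level = {level:5.1f}%  valve = {state}")
    time.sleep(0.05)       # real PLCs scan every few milliseconds
```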

Former Iranian president Mahmoud Ahmadinejad tours centrifuges used for uranium enrichment, which are believed to have been damaged by the Stuxnet attack in 2010. Photo credit: Official website of the President of the Islamic Republic of Iran

The first-generation implementation of PLCs and other microprocessors succeeded because they worked with a factory’s hardware through a common design standard, called a Supervisory Control and Data Acquisition (SCADA) system. Although “software” had yet to enter our general vocabulary, SCADA systems provided the interface that made it possible for operators in a central office to run machinery throughout a factory, as well as to collect data about the processes that machinery was executing. It might look primitive next to the glitzy interfaces found on even the most basic of today’s home computers, but the SCADA system gave a number of industries the foundation for what we now consider to be modern manufacturing: fast, efficient and precisely controlled.
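
In outline, the supervisory half of such a system is a polling loop: a central station periodically asks each remote device for its readings, archives them, and pushes back operator commands. The Python fragment below is a hypothetical toy, with invented device names and thresholds, and with simple function calls standing in for the industrial protocols, such as Modbus or DNP3, that a real installation would use.

```python
import time
from datetime import datetime

# Hypothetical field devices for the central station to supervise.
# In a real plant each would be a PLC or remote terminal unit reached
# over an industrial protocol; plain dictionaries stand in for them here.
devices = {
    "pump_station_1": {"pressure_kpa": 410, "running": True},
    "pump_station_2": {"pressure_kpa": 395, "running": True},
}

history = []  # the "data acquisition" half: an archive of every reading

def poll(name):
    """Stand-in for a protocol read request to one remote device."""
    return dict(devices[name])

def send_command(name, field, value):
    """Stand-in for a protocol write request (an operator command)."""
    devices[name][field] = value

# The supervisory loop: poll each device, archive the reading,
# and apply a simple protective rule on the operator's behalf.
for cycle in range(3):
    for name in devices:
        reading = poll(name)
        history.append((datetime.now(), name, reading))
        print(f"cycle {cycle}: {name} -> {reading}")

        if reading["pressure_kpa"] > 450:   # invented trip threshold
            send_command(name, "running", False)

    devices["pump_station_1"]["pressure_kpa"] += 30  # crude simulation
    time.sleep(0.1)
```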

“It’s very simple but that simplicity led to a continuation of modernization without appropriate regard for security,” says Rod Howes, cyber portfolio manager at Defence Research and Development Canada’s Centre for Security Science (DRDC CSS). DRDC is an agency of the federal Department of National Defence. Since 2012, it has led the Canadian Safety and Security Program (CSSP) in partnership with Public Safety Canada. The CSSP aims to strengthen Canada’s ability to anticipate, prevent, mitigate, prepare for, respond to and recover from natural disasters, serious accidents, crime and terrorism. This approach combines science and technology with policy, operations and intelligence, looking at threats originating from computer networks such as the one that brought down the German steel mill.

SCADA systems pose a basic cybersecurity problem. Although such systems will continue to work well at sites where they were installed decades earlier, vulnerabilities in that older software and hardware are increasingly being exploited to create openings for attacks that could compromise critical functions. And when the SCADA systems found in the Industrial Control Sector (ICS) are connected to the Internet, those vulnerabilities can be exploited from anywhere in the world, not just from within the ICS.
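
Part of that exposure lies in the protocols themselves, which were designed for closed networks and generally carry no authentication at all. As a hypothetical illustration, the Python sketch below assembles a standard Modbus/TCP “write single register” frame; the register address and value are invented, and the code only prints the bytes rather than sending them. Nothing in the frame identifies or authorizes the sender, so any machine that can reach an exposed controller’s port could issue the same command.

```python
import struct

# Build a Modbus/TCP "write single register" request (function code 6).
# The layout is fixed by the protocol; note that no field carries a
# password, key or signature of any kind.

transaction_id = 1      # echoed back by the device to match replies
protocol_id = 0         # always 0 for Modbus
unit_id = 1             # which device at the far end (invented)
function_code = 6       # "write single register"
register_addr = 0x0010  # hypothetical register, e.g. a pump setpoint
register_value = 0      # hypothetical value, e.g. "stop"

pdu = struct.pack(">BHH", function_code, register_addr, register_value)
mbap = struct.pack(">HHHB", transaction_id, protocol_id, len(pdu) + 1, unit_id)
frame = mbap + pdu

print(frame.hex(" "))
# 00 01 00 00 00 06 01 06 00 10 00 00
# Anyone who can deliver these twelve bytes to TCP port 502 on an
# exposed controller is, as far as the protocol cares, an operator.
```
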
Resistance to change stems from a range of factors, from financial costs to the disruption of ongoing operations in industrial control rooms. Howes and his colleagues have been raising awareness of these challenges among representatives of Canada’s ICS, where SCADA remains central to such fundamental activities as supplying electricity to the power grid. DRDC CSS has collaborated with Public Safety Canada and Natural Resources Canada to set up and help operate the National Energy Infrastructure Test Centre in Ottawa, where several times a year industry representatives are invited to take part in courses that walk them through the details of how their systems might be vulnerable to attacks, including colourful simulations of what such attacks look like and what might be done to prevent them. “The aim is not to build government’s capability to do their job better,” Howes says. “It’s to build industry owners’ and operators’ capability to do their job better.”

Yet even when participants are open to the possibility that their enterprise is at risk, dealing with that threat often amounts to a post-mortem analysis rather than getting out in front of the attackers. “It’s a big challenge with no quick and easy solution,” says Howes. “The Internet was designed to be open; it was never designed to be closed and controlled. As you layer new technologies onto this standard, you’re layering on new potential vulnerabilities that need to be addressed.”

In order to go beyond simply responding to known threats, it has become necessary to find ways of doing research, which poses a challenge in itself. When it comes to helping a power utility redesign its SCADA system to fend off threats, for example, the most realistic test would be to try out the new software on a grid that is actively delivering power to paying customers. For Hassan Farhangi, director of SMART, the Smart Microgrid Applied Research Team at the British Columbia Institute of Technology in Burnaby, that move is far too risky for anyone to consider. “You can’t take a piece of technology that is not fully tested in the field, put it in your operational system and just keep your fingers crossed that nothing’s going to go wrong,” Farhangi says.

Faced with that conundrum, Farhangi began working with BC Hydro eight years ago on the next best thing: an elaborate laboratory-based model of the utility’s network, which he dubbed a microgrid. With $2.69 million from a provincial clean energy fund, $2.1 million from the Western Diversification Fund and another $1 million in-kind from industry, the simulation can match both the substantial scale and intricate detail of a functional electrical distribution system. “We gradually designed all the components, including measurement infrastructure, load control systems, communications and data capture and command and control,” Farhangi says.

Such realistic features make the microgrid an effective tool for demonstrating the impact of hacking to industry skeptics. Farhangi suggests that too many of these individuals have an unjustified faith in the air gap, a strict physical separation between private infrastructure and public networks. “Unfortunately, as we move rapidly to integrated command and control systems, that air gap is basically disappearing,” he says. “With the air gaps gone, you are basically opening up your infrastructure to hacking, intrusion and eventually terrorism.”

Perhaps the most telling demonstration of the air gap’s limitations was the infiltration of an Iranian nuclear research facility in 2010. Built to elude foreign detection and observation, the site was an underground bunker as physically secure as one could ever want. Yet a tiny piece of code, some 500 kilobytes in all, implanted a self-replicating worm called Stuxnet within the local network. The software took advantage of common Windows-based systems to collect information from the network, as well as to compromise programmable logic controllers, causing uranium-enriching centrifuges to spin too quickly and destroy themselves. Iran has never confirmed that such damage occurred, nor have the authors of Stuxnet been formally identified.

Farhangi maintains a squad of academic researchers, intimately familiar with hacking, who are looking into how such devastating attacks could cripple critical infrastructure and what needs to be done to stop them. He recalls one demonstration that showed a utility official how easily a critical pumping station could be fooled into accepting bogus commands from a remote control centre. “He was shocked,” says Farhangi, adding that more people in authority need to be shocked in this way. “I’m not advocating fearmongering but you have to expose executives to the impact and the real dangers that can be inflicted on their organizations. The fact of the matter is that we really need a lot of education.”

Farhangi’s greatest concern, shared by Howes, is that attacks are occurring at industrial sites whose operators are covering them up. He would like to find a way of allowing these cases to be shared without compromising the victims, so that more can be learned about how these events unfolded and what might be done to stop them. Only in this way can we hope to educate a cadre of experts capable of defending the physical foundations of our society as effectively as any military force patrols the border. “Whatever infrastructure has a command and control system is vulnerable to these kinds of attacks,” Farhangi says. “Hackers will find their way into dams, into refineries, into sewer systems. It’s not a matter of if but when.” 


Canada is a target

It may not have had the high drama of major equipment failure at a steel mill, but a tersely worded announcement from the National Research Council in the summer of 2014 reminded Canadians that this country is not immune to the outright theft of confidential information through sophisticated electronic attacks. The statement offered few details about what was dubbed a “cyber intrusion,” although the incident seriously disrupted the NRC’s business operations for months to come as the use of networked computers was restricted. “NRC is continuing to work closely with its IT experts and security partners to create a new secure IT infrastructure,” the statement read. “This could take approximately one year; however, every step is being taken to minimize disruption.”

As part of that new infrastructure, the NRC’s computer network was isolated from the broader Government of Canada network to prevent an even more sweeping attack, although the two systems already operated largely independently of one another. Federal chief information officer Corinne Charette attributed the attack to a “highly sophisticated Chinese state-sponsored actor,” an accusation that China has denied.