Like most “hands-free” technology, Dick Cheney’s wireless pacemaker let him travel freely without worrying about batteries or plugs. But in 2013, the former Vice President disabled its wireless functionality so that hackers couldn’t remotely induce heart failure. “I was aware of the danger, if you will, that existed,” he told 60 Minutes.
While Cheney developed a reputation for fear-mongering during his time in the public eye, his paranoia over the security of his pacemaker wasn’t unfounded.
“Any device connected to the internet can be hacked, and this includes medical devices,” says David Maman, CEO of artificial intelligence developer Binah.ai. “It’s a matter of when, not if.”
In the six years since Cheney made headlines for taking his pacemaker offline, the threat of medical devices being hacked has only grown. The global medical device market is expected to exceed $674 billion in the next three years, and wirelessly connected models are now ubiquitous in healthcare settings, with an average of 15 to 20 in each hospital room. These devices can remotely administer medication, monitor vital signs, support organ function and send patients’ health data directly to doctors. But thanks to increased wireless connectivity, software glitches and the growing sophistication (and boldness) of hackers, they’re often vulnerable to infiltration by outsiders and even patients themselves.
The healthcare sector, which accounted for 41 percent of all cybersecurity breaches reported in 2018, is particularly susceptible to hacking. The Cisco/Cybersecurity Ventures 2019 Cybersecurity Almanac, a compendium of cybersecurity statistics, predicts that healthcare will suffer two to three times more cyberattacks this year than other industries, on average. Fortunately, device manufacturers, researchers, healthcare providers and government agencies are ramping up efforts to protect patients and their medical devices from malicious attacks. But no single security measure will do the trick; experts say the key to mounting a successful defense against hacking is for everyone affected by the problem — patients included — to work together.
High tech, low security
In most ways, medical devices became a lot more convenient and reliable once they became wireless. Some implanted devices, like pacemakers, run on Bluetooth, a low-power technology that only works at close range; others need stronger network connections to allow tracking and response from afar.
Diabetes patients use a wireless remote control for their insulin pumps. Wireless transmitters in pacemakers or cardiac monitors let healthcare providers detect irregular heartbeats and modify settings or instructions to the device accordingly. Cochlear implants, gastric and deep brain stimulators, and MRI machines also connect to wireless networks. Many hospitals now have robotic assistants to dispense medicine at the bedside, with nursing stations or tele-medical staff monitoring their actions. As helpful as these life-changing products are, hospitals and device-makers know they can also be woefully insecure.
“The good news is that nobody is known to have died as of yet from any critical, life-giving medical device,” says Ray Walsh, digital privacy expert at ProPrivacy.com. “However, medical devices have been proven to harbor vulnerabilities that can be exploited by hackers.”
A vulnerability is a software bug or glitch that might leak information or grant unauthorized access to the device in question; medical devices harbor an average of 6.2 vulnerabilities each, according to one report. A hacker can hijack a vulnerable device in different ways, including by disabling it altogether, draining the battery, changing a medication dosage, displaying fake vital signs and issuing a fatal shock. Often, to carry out these types of attacks, hackers infect devices with malware, meaning software designed to disrupt, damage or gain access to a system. In a recent survey by the College of Healthcare Information Management Executives, 18 percent of provider organizations reported that their medical devices were affected by malware or ransomware in the previous 18 months. Ransomware attacks on healthcare organizations are predicted to quadruple between 2017 and 2020, according to the Cybersecurity Almanac.
In a lot of cases, vulnerabilities can be anticipated and fixed before hackers can take advantage of them. But some glitches are considered “zero-day vulnerabilities” — they don’t come to light until the day they’re unmasked. “Because they’re unknown until they’re discovered, the manufacturer of the device has ‘zero days’ to get the vulnerability patched in order to inhibit the risk of exploitation,” explains Walsh. “In cases where an exploit could lead to a loss of life, potential zero-days are extremely concerning.”
The late security researcher and famous “white hat” hacker Barnaby Jack was in the business of proving that seemingly unbelievable hacks — the stuff of spy movies — could easily happen in real life. While presenting at a security conference in 2012, Jack live-hacked a wireless insulin pump that he’d placed in a see-through mannequin, administering a lethal dose of insulin to the dummy from 300 feet away. He also developed a way to send a high-voltage electric shock to any pacemaker within a 50-foot radius. Jack was going to reveal his technique for hacking pacemakers at a 2013 security conference, but died suddenly a week before the talk.
Jack’s theatrical demonstrations forced people, including government officials, to acknowledge security weaknesses in critical, widely used medical devices. Walsh says that several factors contribute to the vulnerabilities that Jack, among other security experts, has called attention to. For example, some devices ship with hard-coded passwords that are easier to guess than ones created by users. Complicating the matter, added security features might tax a device’s storage or energy, or even slow down the process of authorizing device access during emergencies.
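To see why hard-coded passwords are such a liability, consider this minimal sketch. Everything here is invented for illustration — the password, the function names, the provisioning flow — and real firmware is of course more involved, but the contrast between a secret baked into every unit and a salted, per-device credential is the heart of the issue:

```python
# Hypothetical sketch of the hard-coded-credential anti-pattern.
# The password and helper names are invented for illustration.
import hashlib
import hmac
import os

HARDCODED_PASSWORD = "admin123"  # shipped identically on every unit

def insecure_login(password: str) -> bool:
    # Anyone who extracts the firmware (or reads a service manual)
    # learns this value — and it unlocks every device in the field.
    return password == HARDCODED_PASSWORD

# Safer pattern: store only a salted hash of a per-device credential
# set at provisioning time, so no usable secret ships in the firmware.
def make_credential_record(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def secure_login(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

With the second pattern, compromising one device’s credential tells an attacker nothing about any other unit — though, as Walsh notes, even this extra computation can strain a small device’s battery and storage.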
Some cyberattacks infiltrate entire networks, not just one device, to thwart hospital operations or collect personal information. In 2017, a cyberattack called WannaCry unleashed chaos in hospitals worldwide. “This attack took advantage of a security flaw that was present in the Microsoft Windows operating system, shutting down the entire hospital network,” says Joe Flanagan, software engineer at GetSongBPM. In as many as 140 British hospitals, WannaCry blocked the network’s ability to check schedules, resulting in patients missing surgery appointments.
Coordination among manufacturers, patients, providers and government could be our best defense against cybercrime. In January, a coalition of hospitals and medical device manufacturers released a security plan that proposed basic measures for manufacturers to implement and hospitals to demand. To heighten security awareness from the start, the FDA has issued guidance for the earliest stages of medical-device submission: specific lists of cybersecurity hazards, established controls, and plans for future software updates and patches. In 2017, the FDA recalled 465,000 pacemakers to bolster their security features in hopes of preventing hacks.
Device makers often challenge security experts like Jack to hack their devices. And to comply with federal cybersecurity mandates, hospitals are developing processes to prevent, handle and respond to breaches and threats. When necessary, Walsh explains, hospitals arrange for patients to receive updated firmware with security patches on their devices.
If patients feel that software settings are beyond their grasp, they do have one powerful resource at their disposal: starting a dialogue with doctors and nurses.
“I see protection of medical devices being a consumer-led effort,” says Amelia Roberts, a registered nurse and healthcare consultant. “Patients can remain aware by asking their providers, ‘How are hospital staff being educated on recognizing and responding to a compromise versus an attack?’ ‘How is my medical device being kept safe from hacking?’ In short, getting us healthcare folks to even discuss this is the best way to improve the situation.”
That said, not every hack is purely malicious, and device hackability can even be a boon to patients. In 2016, an insulin pump manufacturer discovered a vulnerability that enabled hackers to “spoof” communication between an insulin pump and its remote control. By imitating an authorized user, the hacker could command the pump to deliver unauthorized insulin doses. The manufacturer disclosed the vulnerability to the FDA and issued directions for protection to diabetes patients and their providers. But many diabetes patients were way ahead of the game, and had already begun to hack their own insulin pumps.
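The spoofing weakness comes down to the pump having no way to tell a genuine remote from an impostor. The sketch below is purely illustrative — the packet format, key, and commands are invented, and real pump protocols differ — but it shows the standard fix: authenticate each command with a keyed hash (HMAC) and a counter, so forged or replayed packets are rejected:

```python
# Illustrative sketch only: packet layout, key, and commands are invented;
# this is not any real pump's protocol.
import hashlib
import hmac

SHARED_KEY = b"per-device-secret"  # known only to the pump and its remote

def sign_command(command: bytes, counter: int) -> bytes:
    # Prefix a monotonically increasing counter so captured packets
    # can't simply be replayed later, then append a 32-byte HMAC tag.
    msg = counter.to_bytes(4, "big") + command
    tag = hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()
    return msg + tag

def pump_accepts(packet: bytes, last_counter: int) -> bool:
    msg, tag = packet[:-32], packet[-32:]
    counter = int.from_bytes(msg[:4], "big")
    expected = hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()
    # Reject both forged tags and stale (replayed) counters.
    return hmac.compare_digest(tag, expected) and counter > last_counter

legit = sign_command(b"DELIVER 2 UNITS", counter=7)
# An attacker without the key can only guess at the 32-byte tag:
spoofed = (8).to_bytes(4, "big") + b"DELIVER 50 UNITS" + b"\x00" * 32
```

Without the shared key, an eavesdropper can copy a packet but can neither alter it nor resend it, since the counter check flags the repeat.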
Traditionally, patients need to check their glucose monitor and manually adjust their insulin doses to keep blood sugar levels in range. While this can be onerous enough during the day, it’s downright excruciating at night, when blood sugar tends to fluctuate. So patient hackers developed and shared instructions to build an open-source “artificial pancreas system,” or OpenAPS. A companion smartphone app reads the glucose monitor, determines whether to increase or decrease insulin and then converts the smartphone’s Bluetooth into a radio signal that registers on the pump as a command to release more insulin automatically. This system hasn’t been blessed by the FDA, but doctors acknowledge its benefits. And because early hacking was only possible due to a “flaw” — the ability to receive and follow external orders — newer devices actually bake that innovation into a system that does have FDA approval.
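The closed-loop idea at the core of such systems can be caricatured in a few lines. To be clear, the target, gain, and adjustment rule below are invented for illustration — this is a toy control loop, not medical guidance and not OpenAPS’s actual algorithm, which is far more sophisticated:

```python
# Toy sketch of a closed-loop basal adjustment. The target and gain
# values are invented; this is not a real dosing algorithm.
TARGET_MG_DL = 110.0   # hypothetical target glucose, mg/dL
GAIN = 0.02            # hypothetical insulin units per mg/dL of error

def basal_adjustment(glucose_mg_dl: float, current_basal: float) -> float:
    """Return an adjusted basal rate from one glucose reading.

    Above target, nudge insulin delivery up; below target, nudge it
    down — never below zero, since a pump can't remove insulin.
    """
    error = glucose_mg_dl - TARGET_MG_DL
    return max(0.0, current_basal + GAIN * error)
```

A real system runs a loop like this every few minutes — read the monitor, compute an adjustment, radio the command to the pump — which is exactly the nighttime tedium the patient hackers wanted to automate away.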
In a video posted by CNBC, an endocrinologist with pump-hacking patients said: “I personally don’t know of any major problems that have occurred. Mostly what I am hearing is: People are extremely happy with their results.”