Mary Ellen Zurko remembers feeling disappointed. Not long after earning her bachelor’s degree from MIT, she was working her first job, evaluating secure computer systems for the US government. The goal was to determine whether the systems complied with the “Orange Book,” the government’s authoritative manual on cybersecurity at the time. Were the systems technically secure? Yes. In practice? Not so much.
“There was no concern whatsoever for whether the security demands on end users were at all realistic,” says Zurko. “The notion of a secure system was about technology, and it assumed perfect, obedient humans.”
That discomfort started Zurko on the track that would define her career. In 1996, after returning to MIT for a master’s degree in computer science, she published an influential paper that introduced the term “user-centered security.” It grew into a field of its own, concerned with ensuring that cybersecurity is balanced with usability; otherwise, humans might circumvent security protocols and give attackers a foot in the door. Lessons from usable security now surround us, influencing the design of the phishing warnings we see when we visit an insecure site or the invention of the “strength” bar that appears when we type a desired password.
Getting ahead of influence operations
Research on thwarting online influence operations is still young. Three years ago, Lincoln Laboratory launched a study on the topic to understand its implications for national security. The field has since exploded, most recently as dangerous and misleading Covid-19 claims proliferated online, perpetuated in some cases by China and Russia, as a RAND study found. There is now dedicated funding through the laboratory’s Technology Office to develop countermeasures against influence operations.
“It is important for us to strengthen our democracy and make all of our citizens resilient to the kinds of disinformation campaigns aimed at them by international adversaries, who seek to disrupt our internal processes,” Zurko explains.
Like cyberattacks, influence operations often follow a multistage path, called a kill chain, to exploit predictable weaknesses.