tao Posted September 23, 2017

The Equifax data breach was yet another cybersecurity incident involving the theft of significant personal data from a large company. Moreover, it is another reminder that the modern world depends on critical systems, networks and data repositories that are not as secure as they should be. And it signals that these data breaches will continue until society as a whole – industry, government and individual users – is able to objectively assess and improve cybersecurity procedures.

Although this specific incident is still under investigation, the fact that breaches like this have been happening – and getting bigger – for more than a decade gives cybersecurity researchers another opportunity to examine why these events keep happening. Unfortunately, there is plenty of blame to go around.

Several major problems need to be addressed before people can live in a truly secure society. For example, companies must find and hire the right people to actually solve the overall problems and think innovatively, rather than just fixing the day-to-day issues. And companies must be made to get serious about cybersecurity – at a time when many firms have financial incentives not to. Until then, major breaches will keep happening and may get even worse.

Finding the right people

Data breaches are commonplace now, and have widespread effects. The Equifax breach affected more than 143 million people – far more than the 110 million victims at Target in 2013, the 45 million TJX customers hit in 2007, and the 20 million or so current and former government employees in the 2015 U.S. Office of Personnel Management incident. Yahoo's 2016 loss of user records, with a purported one billion victims, likely holds the dubious record for most victims in a single incident.

In part, cybersecurity incidents happen because of how companies – and governments – staff their cybersecurity operations. Often, they try to save money by outsourcing information technology management, including security. That means much of the insight and knowledge about how networks and computer systems work isn't held by people who work for the company itself. Outsourcing such services might save money in the short term, but it also creates a long-term lack of institutional knowledge about how the company functions.

Generally speaking, key cybersecurity functions should be assigned to in-house staff, not outside contractors – and who those people are also matters a lot. In my experience, corporate recruiters often focus on identifying candidates by examining their formal education and training along with prior related work experience – automated resume scanning makes that quite easy. However, cybersecurity involves both technical skills and a fair amount of creative thinking that's not easily found on resumes. Moreover, the presence (or absence) of a specific college degree or industry certification is not necessarily the best indicator of who will be a talented cybersecurity professional.

In the late 1990s, the best technical security expert on my team was fresh out of college with a degree in forest science. As a self-taught geek, he had not only the personal drive to constantly learn new things and network with others, but also the often unconventional mindset needed to turn his cybersecurity hobby into a productive career.
Without a doubt, there are many others like him navigating successful careers in cybersecurity.

Certainly, people need technical skills to perform the basic functions of their jobs – such as promptly patching known vulnerabilities, changing default passwords on critical systems before putting them into use and regularly reviewing security procedures to ensure they're strong and up to date. Knowing not to direct panicked victims of your security incident to a fraudulent site is helpful, too. But to be most effective over the long term, workers need to understand more than specific products, services and techniques. After all, people who understand the context of cybersecurity – like communicating with the public, managing people and processes, and modeling threats and risks – can come from well beyond the computing disciplines.

Being ready for action

Without the right people offering guidance to government officials, corporate leaders and the public, a problem I call "cyber-complacency" can arise. This remains a danger even though cybersecurity has been a major national and corporate concern since the Clinton administration in the 1990s.

One element of this problem is the so-called "cyber insurance" market. Companies can purchase insurance policies to cover the costs of responding to, and recovering from, security incidents like data breaches. Equifax's policy, for example, is reportedly worth more than US$100 million; Sony Pictures Entertainment had a $60 million policy in place to help cover expenses after its 2014 breach.

This sort of business arrangement – simply transferring the financial risk from one company to another – doesn't solve any underlying security problems. And because it leaves behind only the risk of some bad publicity, it can dull a company's sense of urgency about proactively fixing problems. It also does nothing to address the harm to individual people – such as those whose entire financial histories Equifax stored – when security incidents happen.

Cybersecurity problems do not have to be just another risk people accept about using the internet. But these problems will not be solved by another national plan, another government program or more public grumbling about following decades-old basic cybersecurity guidelines. Rather, the technology industry must not cut corners when designing new products and administering systems: Effective security guidelines and practices – such as controlling access to shared resources and not making passwords impossible to change in "internet of things" devices – must become fundamental parts of the product design process. And cybersecurity professionals must use public venues and conferences to drive innovative thinking and action that can help fundamentally fix our persistent cybersecurity woes, not simply sell more products and services.

Making vulnerability unprofitable

Many companies, governments and regular people still don't follow basic cybersecurity practices that have been identified for decades. So it's not surprising to learn that in 2015, intelligence agencies were exploiting security weaknesses that had been predicted in the 1970s. Presumably, criminal groups and other online attackers were, too. It's also understandable that commercialism arises in the aftermath – as both an opportunity and a risk.
At present, when cybersecurity problems happen, many companies rush to offer purported solutions; one industry colleague called this the computer equivalent of "ambulance chasing." For instance, less than 36 hours after the Equifax breach was made public, the company's competitors and other firms stepped up their advertising of security and identity protection services. But those companies may not be secure themselves.

There are certainly some products and services – like identity theft monitoring – that, when properly implemented, can help reassure consumers when problems occur. But when companies discover that they can make more money selling to customers whose security has been violated than by spending money to keep data safe, they realize that it's profitable to remain vulnerable.

With credit-reporting companies like Equifax, the problem is amplified. Consumers didn't ask for their data to be vacuumed up, but they are left to bear the consequences and the costs now that the data have gotten loose. (And remember, the company has that insurance policy to limit its costs.)

Government regulators have an important role to play here. Companies like Equifax often lobby lawmakers to reduce or eliminate requirements for data security and other protections, seek exemption from liability in potential lawsuits if they minimally comply with the rules and may even try to trick consumers into giving up their rights to sue. Proper oversight would protect customers from these corporate harms.

Making a commitment

I've argued in the past that companies and government organizations that hold critical or sensitive information should be willing to spend money and staff time to ensure the security and integrity of their data and systems. If they fail, they – not the attackers – are the ones to blame for the incident.

A National Institute of Standards and Technology researcher exemplified this principle when he recently spoke up to admit that the complex password requirements he helped design years ago don't actually improve security very much. Put another way, when the situation changes, or new facts emerge, we must be willing to change with them.

Many of these problems are indeed preventable. But that's true only if the cybersecurity industry, and society as a whole, follows the lead of that NIST researcher. We all must take a realistic look at the state of cybersecurity, admit the mistakes that have happened and change our thinking for the better. Only then can anyone – much less everyone – take on the task of devoting the time, money and personnel needed to make meaningful security improvements. It will take a long time, and will require inconvenience and hard work. But it's the only way forward.

< Here >