steven36 Posted May 29, 2018

Most people are on the patch right now. These patches release a small dose of "security achievement" to last until the next patch is available. That feeling of having done something about security is hard to deny, which is why so many people crave it. Luckily for the patch addicts out there, patches are also a compliance requirement. For those of you who don't know, a compliance requirement is apparently what happens when a bunch of influential companies all suggest the same bad practice, usually involving some software they sell, which makes it a best practice. Then they all donate the time of an employee to help write those best practices into some compliance checklists. Then some money is transferred to Panama. And that, kids, is how a bill becomes a law. I may have missed a step or two, but you get the idea.

Anyway, my point is that patches are easy to do, cheap to get, and make people feel good for a little while. But they wear off really quickly, and you'll need more in no time flat - they're kind of like the information security equivalent of nasal spray.

Is it Any Wonder Why We Call Patches a "Fix"?

Software companies, those patch dealers, love the patch. It gives them a way to change their code after it's already been paid for and put on your machine. This way they can provide new features and fix vulnerabilities - or, still under the guise of the security update, address their own internal policy issues by removing content you already had, change their corporate strategic models and direction by removing parts of your application to be re-sold back to you as add-ons, or better track how their software is being used, which turns their customers into unpaid focus groups. So, for all the cost and expense it takes to make a patch to fix problems, the patching process has also given software makers a much longer interactive life with customers than most other product makers get.
Therefore, the high level of social acceptance patching has achieved has truly been a boon for the software industry, one that other industries would love to have. Imagine if the clients of the dental industry sent three requests a day to see if they could get a cleaning yet. Or if car owners checked in every 24 hours with their local dealership to see if there's a new model available to upgrade to. So, the patching process is special. Patching forms an unprecedented addictive behavior that has become part of so many routines that people rarely question its value in their lives. That puts patching alongside caffeine, alcohol, and nicotine as one of the socially acceptable, short-lived-effect, dependency-building legal drugs - fine if used in moderation along with a healthy lifestyle - and there's the catch.

We are told patching is good. It feels good. We see it as fixing something which we are told is broken: there's a hole, bad people can take advantage of it, and now they can't. It fixes a definite, specified problem. And it feels good and safe and comfy to take care of problems, like a raccoon washing moldy garbage in a stream before eating it. All good things.

I am not alone in having patch regrets. Yes, many, many, many people have patched or updated parts of a system only to find it got worse. Sometimes you can roll back a patch. Sometimes you only think you can. That's patch regret. Furthermore, when patching fails to help us, we are reminded by the software makers that it was we who failed and not the patches. Which is unfair, because we are told specifically that we need a patch management process to be secure. We are threatened by our security-conscious, cyberhygiene-amatic peers that failure to patch is patching to fail. Okay, that phrase doesn't work, but you get the point. If we don't patch, we're wrong; if we patch without a patch management process that has us test the patches before production, we're wrong.
And if we patch all the right ways, we're still wrong when there's a breach.

Why Patching is Just Part of the Solution

That's because patching is just one part of the solution - one that, as the patch-pundits are quick to remind us, also includes antivirus, firewalls, intrusion detection systems, strong authentication, encryption, physical locks, disabling of scripting languages, reduced personal information on social networks, and security awareness training, all as part of a healthy security lifestyle. I don't know about the rest of the world, but I've seen this before. As a kid, my breakfast cereal of frosted, chocolaty Super Corn Crispies was also healthy for me, as long as it was part of the nutritious breakfast they showed right on the box, which also included fresh-squeezed orange juice, bran toast, turkey bacon, egg whites, and a multi-vitamin pill. At some point I realized I wasn't eating the Super Corn Crispies as part of a nutritious breakfast; I was eating them because I wanted to. And if I stopped eating them, it didn't really take away from my nutritious breakfast. Growing up and getting healthier actually meant learning the difference between what is filling - like three bowls of that cereal - and what feels fulfilling - like an actual nutritious breakfast that is significantly lower in things with a shelf-life of a century.

So what is patching, really? In the security industry we are told it is part of defense in depth. We are told that it is a specific deterrent or end to a specific threat. We are reminded it's part of the security process. We are educated that it is one tactic in a strategy to minimize risk. We are cajoled into thinking it's a measure to maintain operations. And we are informed that it's one of the many, many controls which security appears to have. Although that last one, more than the others, is due to the vocabulary problem of the security industry.
Now, the security industry gets occasional complaints for seemingly making up its own words. And it has. Additionally, the security industry has altered many of the definitions of these words, which explains why it's so hard to find two security professionals who agree on some standard definitions. Although I understand other industries of similar scientific maturity (like the ghost-hunting industry and the bigfoot-hunting industry we see so much on TV these days) have the same problem. Where else, in any industry besides cybersecurity and ghost-hunting, does a firewall not protect against fires? (FYI - in ghost-hunting, a firewall protects people against the icy, soul-sucking touch of a Specter. Look it up.)

The security industry commonly states that the patching process is an Administrative Control. We know it is part of a business strategy for software companies; however, for security professionals and end-users, I'm not so sure it is. In the security trenches we know patching doesn't help maintain a baseline, because it changes the baseline. Yes, patching changes code, which changes operations. That's why patching is part of Change Control and is tested on non-critical servers first. This is a very important point, because you design your operations to be a certain way, and if you're changing them constantly with patches, how can you be sure of what you have and what it's capable of at any given moment? You can't. So, no wonder we have such a hard time securing our operations if we are always changing them.

We also know that patching doesn't eliminate or reduce harm on its own. At best it either closes an interactive point or fixes a flaw in an existing operation. Rarely does patching introduce new controls, though it's possible that a particular patch integrates a solution with controls into an existing service, like packet filtering, encryption, or input sanitization.
But it's not the patching or the patch which is the control, because the patch itself doesn't interact with the threat. The patch is only a way to add or take away code. It makes a change to how things currently run, just like policies and security awareness training do (also administrative controls), to help achieve a security strategy. Wait, aren't things that help achieve a strategy called tactics? So it turns out that patching may not actually be an Administrative Control? Actually, it's not. It is not and should never be an administrative control. Maybe that's just another mislabeling issue, but it's a pretty big one if the wrong people are doing it for the wrong reasons.

So, in conclusion of Part 1 of this helpful patching article, just like all the other patching articles out there: you should be aware that there's a whole lot more to patching than fixing a bug or a vulnerability. Yes, there's so much more going on. And we'll explore it even further in Part 2, until we've beaten this topic to death, and then keep beating it until you realize that patch management is not the fun party game you've grown to love and respect.

Welcome to the sequel of "Getting Off the Patch", where we explore the point of patching for fun and profit. We are continuing with a Part 2 because so much was left unsaid in Part 1, like: "So what's the deal, are you supposed to patch or aren't you?" and "No, seriously, you gonna tell us?" and "Dude, quit it already, nobody likes you!" In Part 1 we discussed a lot of cool stuff, like how patching may just be a form of brand-recognition marketing. You should have read it. And we ended with how patching is a security tactic and not an Administrative Control. So let's pick up from there...

If patching is a tactic towards a particular security strategy, how can that be bad? I never said it was all bad.
There are cases where patching makes sense, just like there are times when it makes sense to have that third diazepam pill, park diagonally across two parking spots, or hide in a dumpster - and, not coincidentally, they all involve raccoons. For example, one overall business strategy is to have perfectly working operations to optimize returns. But optimized returns rely on freedom from costly efforts or unexpected losses (security), and freedom from unpleasant surprises (trust) that force you to drop what you're doing to deal with them. To achieve this, you can pick many tactics, and just one of them is patching.

So, consider this: patching may seem to be one of the cheaper tactics towards security, since most patches are free and are no-brainers to install. But in which scenarios is it still cheaper after you factor in the time spent patching, testing (or not testing), and fixing all the other software that breaks? Perhaps we can argue that the low cost and ease of installation make it most suitable for home computers in this trade-off. So, home users with non-critical missions, feel free to breathe easy - it's not about you.

The Problem of Patching

Patching may seem to be more secure because you are effectively interfering with a known vulnerability. But is it the most secure way, given that it isn't timely - patches arrive well after the fact and don't address zero-days? We can argue that the modicum of protection provided by patching is better for systems that apply no controls or poor controls, which is the case for many home users. We can also argue that the patch holds for the life of a system, so while it may come late, it never leaves. Other controls aren't so gallant and require regular maintenance.

Patching may also seem to be a way of increasing trust in your service, because it addresses an uncovered flaw. That means you can have more confidence in it.
But does adding unknown code and untested changes to your operations give you more reason to trust it, especially considering it is coming from the source that got it wrong to begin with? Perhaps, if your network is nearly perfectly homogeneous, with only that company's software, so that their stuff is guaranteed to have been tested together (even that's a rarity, though) and therefore not break. Although ANY third-party drivers or applications - or your personally configured environment, or the unique processes in how you use the patched application or service - may still break. Ask yourself whether your own experience has shown you that installing the patches leaves you free from troublesome surprises. I don't think we can argue this even for home users, because many of them are sick and tired of the undesirable side effects patching may bring. They just don't know how to get off the patch.

Patching vs. Balance of Controls

Now compare the previous considerations to taking the time and money to install the right balance of controls, so that you never have to patch again unless you want some new feature. You make sure the systems are properly hardened: least privilege, unneeded services off, processes separated, and then the systems separated over the network. You keep proper, regular, tested back-ups, encrypt important stuff, lock down generally writable directories, and so on. You know what I mean by now. If you do this, then you know which reasons you have to trust your operations, which means no surprises, because you can prepare in advance for the problems you know you could have. For example, if you didn't install any continuity controls against DoS attacks due to cost, then you need to create emergency response procedures (an actual administrative control) for handling DoS attacks, to get you back up and running as quickly as possible. Most CISOs would love to have this! They would love to feel this safe, because they really would be safer.
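A couple of the hardening steps above (least privilege on application directories, locking down world-writable paths) can be sketched in a few standard commands. This is a minimal illustration, not a hardening guide; the directory path is a made-up example.

```shell
#!/bin/sh
# Minimal sketch of two hardening steps from the list above.
# APP_DIR is a hypothetical application directory for illustration.
set -eu

APP_DIR="/tmp/demo-app"
mkdir -p "$APP_DIR"

# Least privilege: owner read/write, group read-only, others no
# access at all. The capital X keeps directories traversable
# without marking plain files executable.
chmod -R u=rwX,g=rX,o= "$APP_DIR"

# Lock down generally writable directories: list anything still
# world-writable under the app dir (prints nothing after the chmod).
find "$APP_DIR" -perm -0002 -print

echo "hardened: $APP_DIR"
```

Run against a real application directory, the `find` line alone is a cheap audit: any path it prints is one an attacker-controlled process could write to.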
They would love to have fewer surprises that are actually scares. Unfortunately, the more common way is NOT doing that, and instead relying on a few purchased security devices and automatic patching. So let me ask you: how many of the last thousand vulnerability notices included information on which controls prevented or mitigated the exploit? When's the last time you saw a vuln report say, "Our tests showed that the bug in Application X led to a remote root except when the Application X directory is read-only"? They don't. If they did, the patch management staff hearing about it would think, "Damn, I should probably make sure all application directories are least privilege!" Instead, they're programmed by best practices to think, "Damn, root access? Well, let me sit on my hands here while I wait for the patch and hope nothing happens."

Moving Beyond the Patch

Unfortunately, what it comes down to is how well people know their operations, so that they can secure them. Most IT staff have no idea even of their own business processes in the organization, let alone the big picture of the entire operations. So their implemented security is just the seat-of-the-pants variety. They patch because it would be crazy not to. And in that case, I completely agree. Because if you're winging it, then every little bit towards security counts. Plus, if something breaks, they can always blame the patch.

What you have to think about here is that you should patch because you trust the software companies to know more about your operations than you do. Patch because you're so used to nasty surprises that you don't think another one will do anything to change your current self-medicated dosage of antacid and alprazolam. Patch because you know clearly that you are taking a short-cut now to save time for something more important, until you get time to come back and do it right.
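The hypothetical "Application X directory is read-only" mitigation in that imaginary vuln report can be applied and verified with standard tools. A sketch, assuming Linux (GNU `stat`) and a made-up path:

```shell
#!/bin/sh
# Sketch: apply and verify the hypothetical "Application X directory
# is read-only" mitigation. The path is an assumption for illustration.
set -eu

APP_X_DIR="/tmp/app-x"
mkdir -p "$APP_X_DIR"

# Read-and-execute only (r-xr-xr-x): no unprivileged account can
# drop new files into the directory, which is exactly the condition
# the imaginary report says blocks the remote root.
chmod 555 "$APP_X_DIR"

# Report the resulting permission bits (GNU stat, i.e. Linux).
stat -c '%a %n' "$APP_X_DIR"
```

The point isn't this one command; it's that a control you verified yourself keeps working while you wait for, test, or skip the patch.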
You can add your own excuse here now, or you could just start working towards getting yourself off the patch.

Source
straycat19 Posted May 30, 2018

Wish we could go back to what could be called the 'good old days' of software development in the late 80s/early 90s. You bought a program and it worked. Sometimes the company might have its own BBS where you could download updates, but if not, the company would send you a new disk with the updates on it. The updates weren't patches, they were true updates, and they might not release a new version for six months or a year. Today developers try to outdo one another and put everything but the kitchen sink into one program. The problem is they can't code all that without introducing security holes and vulnerabilities. If they went back to producing one product and doing it well, then we wouldn't need all these patches. A survey a few years ago showed that 95% of Microsoft Office users used only 3% of the items/tools in Office. In essence, most of the software installed on systems is never used: just useless code taking up space and introducing vulnerabilities into your system.
Archanus Posted July 5, 2018

Oh yeah, Microsoft fixes one thing and makes 100 things unstable :S If I knew how to use Ubuntu, I would have left Windows years ago :S