Previously, I had written about a class at George Washington University to which the professor, Dr. Robert McCreight, invites me to be a guest lecturer on cyber-security from time to time. I posted a copy of my slides then and do so again here:
In this entry I want to share my thoughts on what organizations should consider when dealing with cyber-security issues. My discussion here is based on slide 18 – Thoughts On What To Do (duh). I will cover the final slides in a following entry.
I believe a lot of people start with the wrong premise. They assume that the goal of cyber-security implementation is to end up with a secure systems architecture. In fact, at least in my opinion, that goal is unrealistic, and planning with that objective in mind can lead to negative results.
Money is wasted playing what I refer to as whack-a-mole security: chasing after incidents that have already happened and spending too much of an organization’s limited resources defending everywhere, when the bad guys need to find only one vulnerability.
As I write in the slides, “The fundamental question is how to be secure when every component is insecure.” I suggest a two-part response, the first part of which I discuss here.
As step one, practice security hygiene. Make sure that you have not made it easy for your systems to be penetrated. The reason we put locks on the doors of houses is not that this makes it impossible to break in, but that it at least makes things hard for the casual intruder and slows them down, increasing the chance of apprehension.
Much of what I talked about while I was at the Department of Transportation is in fact being accomplished in the Federal Government today, better than I did it, for that matter. There is an increasing movement away from the static oversight of FISMA report creation toward the dynamic oversight of real-time situational awareness.
You cannot defend something when you do not know what is happening. Integrating sensors into your network, or even better, developing systems that themselves report their own situation status, is a big plus.
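To make the second idea concrete, here is a minimal sketch of a system that reports its own security-relevant status rather than relying solely on external network sensors. The class name, the fields, and the failed-login threshold are all illustrative assumptions on my part, not any standard schema:

```python
import json
import time


class StatusReportingService:
    """Sketch of a service that emits its own situation status.

    The schema below (failed_logins, alert threshold) is purely
    illustrative, not a real monitoring standard.
    """

    def __init__(self, name):
        self.name = name
        self.failed_logins = 0

    def record_failed_login(self):
        self.failed_logins += 1

    def status(self):
        # A service that knows its own internals can flag trouble
        # (here, a burst of failed logins) without an outside sensor.
        return {
            "service": self.name,
            "timestamp": time.time(),
            "failed_logins": self.failed_logins,
            "alert": self.failed_logins > 5,  # illustrative threshold
        }


svc = StatusReportingService("payments")
for _ in range(7):
    svc.record_failed_login()
print(json.dumps(svc.status()))
```

The point of the design is that situational awareness becomes a property of the system itself, so oversight can be continuous rather than a periodic paperwork exercise.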
Second, it is important to build security into the budget process. My not-yet-well-formed thought at DOT was that, depending on a software project’s categorization – low, medium, or high on a criticality scale (or some other kind of measurement) – a percentage range of the total budget should be explicitly required for security, with a separate plan for how the money would be spent. When I went back and looked at systems that had been developed at DOT before I joined, the security investments were often not documented, and when they were documented, the percentage of the total expense varied very dramatically.
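The budgeting idea above can be sketched as a simple compliance check. The criticality tiers and percentage ranges here are hypothetical numbers I made up for illustration; they are not DOT policy or any official guideline:

```python
# Hypothetical tiers: each criticality level maps to a required range
# for security spending as a fraction of the total project budget.
SECURITY_BUDGET_RANGES = {
    "low": (0.03, 0.05),     # 3–5% of total budget
    "medium": (0.05, 0.10),  # 5–10%
    "high": (0.10, 0.15),    # 10–15%
}


def security_budget_ok(total_budget, security_budget, criticality):
    """Check whether the explicit security allocation falls within
    the required range for the project's criticality tier."""
    low, high = SECURITY_BUDGET_RANGES[criticality]
    share = security_budget / total_budget
    return low <= share <= high


# Example: a $2M high-criticality project with $250K earmarked for
# security spends 12.5%, which falls inside the 10–15% range.
print(security_budget_ok(2_000_000, 250_000, "high"))
```

Even a crude rule like this forces the security line item to exist and be documented up front, which is the real point: it makes the variation I saw across legacy DOT systems visible at budgeting time rather than after the fact.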
The key point here is that security dealt with after development generally has little value, and even then it costs much more than security designed into the system development process from the start.
Third, it is important to be as transparent as possible. There is a tendency to try to hide security status, with the excuse that disclosure makes a system more vulnerable by exposing weaknesses that would otherwise not be known.
This premise is generally wrong for at least two reasons. Bad guys will eventually find all of these weaknesses anyway; they spend more focused time doing so than most of us spend protecting against them. Most important, it is only with transparent exposure of status that we are likely to focus on fixing problems.
It is just as likely that resistance to transparent exposure of status reflects fear of oversight more than concern for security protection. Management visibility is the biggest cure for problems.
This last issue is representative of the broader tension between information sharing and information protection, a topic I have discussed many times. I remain convinced that while both deserve attention, organizations that want to succeed in accomplishing their mission need to lean toward the information-sharing side of the argument.
Next will be my wrap-up of the presentation, continuing the conversation about what to do about security when our systems are inherently insecure.