Our personal financial identities are exposed, and we’re mad. A sick, visceral, exhausted anger that hits us in the pit of our stomachs and makes us feel powerless.
People are understandably furious about the Equifax breach, to a degree that makes it tough to have a rational discussion about what happened. Unfortunately for information security professionals, anger is a luxury we don’t have right now. It’s past time to have frank discussions about what went wrong and how to prevent it in our own environments. I’d like to take a moment to clear up a few exceptionally harmful misconceptions about Equifax’s information security, and about security operations in similar real-world environments.
It’s reasonable to be angry about Equifax’s existence, or their business model, or their retention of data. It makes no sense to be angry simply because they were breached. Any organization can be breached, and likely eventually will be. What ultimately matters is their preparation, response, and risk mitigation.
You should be angry about Equifax executives selling stock before completing breach notifications. You should be angry that Equifax was not prepared to respond to customer inquiries about the breach in a timely manner. You should be angry that the site Equifax put up in response to the breach was poorly branded and appeared hastily implemented. All these things could and should have been prepared for in advance.
Good incident response involves a lot more than simply performing forensics on an attack after the fact. It also involves solid communications plans, drilling for potential incidents, and procedures for plausible scenarios. To an experienced outside observer, Equifax’s incident response and breach notification plans were mediocre at best. Their DFIR team could be top-notch at timelining attacker activity on servers, but that means little if they didn’t know who to call for hours.
We must remember never to base any of our metrics, good or bad, on attacker activity alone. Attackers are an unpredictable variable we cannot control. A sufficiently sophisticated attacker can gain access to nearly any network given proper motivation and resources. You are not immune, and neither is any organization, huge or small. Every organization should plan as though its most critical system will be hacked, tomorrow.
It may be Equifax’s fault that an individual attack worked due to poor procedures, or that they weren’t prepared for an attack, but not simply that they were ultimately breached. It was their job to create the best defensive posture possible and prepare for the worst-case scenario.
There are many scenarios in the corporate world that preclude or delay the application of software patches. Vendors go out of business or discontinue products. Responsible risk management decisions are made weighing critical application downtime against life-and-safety concerns or financial hardship.
The key phrase here is “responsible risk management decision”. At the end of the line, there should be a clear audit trail leading back to risk managers who involved the correct stakeholder teams and provided an analysis of patching versus not patching the system. The risks associated with not patching can be somewhat mitigated through other security practices, like adding defense in depth and monitoring solutions, or segregating vulnerable systems. In a healthy environment, all these things should occur. If Equifax didn’t make a responsible risk decision around not patching, and didn’t provide sensible mitigating controls, you can be angry about that.
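To make that audit trail concrete, here is a minimal, purely illustrative sketch of what a patching-exception record might look like if you modeled one in code. Every name, field, and value below is a hypothetical assumption for illustration; nothing here describes Equifax’s actual systems or any particular product.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PatchException:
    """A single, time-limited exception to a patching policy (hypothetical model)."""
    system: str                      # asset identifier (made up for illustration)
    vulnerability: str               # CVE or vendor advisory being deferred
    justification: str               # documented business reason for deferral
    approved_by: str                 # the accountable risk manager or executive
    stakeholders: list[str]          # teams consulted in the decision
    mitigating_controls: list[str]   # e.g. segmentation, extra monitoring
    expires: date                    # exceptions must be temporary

def lapsed(register: list[PatchException], today: date) -> list[PatchException]:
    """Return exceptions past their expiry date, for escalation."""
    return [e for e in register if e.expires < today]

# One deferred patch, with compensating controls and a hard deadline.
register = [
    PatchException(
        system="legacy-billing-01",
        vulnerability="CVE-XXXX-YYYY",  # placeholder, not a real advisory
        justification="Patch requires downtime during quarter close",
        approved_by="VP, Infrastructure Risk",
        stakeholders=["Security", "Finance", "IT Operations"],
        mitigating_controls=["Network segmentation", "Enhanced IDS monitoring"],
        expires=date(2017, 10, 31),
    )
]

for e in lapsed(register, date.today()):
    print(f"ESCALATE: patching exception for {e.system} ({e.vulnerability}) has lapsed")
```

The point isn’t the tooling; it’s that every deferred patch has a named, accountable approver, documented mitigating controls, and an expiry date that someone is obligated to act on.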
In most Fortune 1000 companies, if a system can be patched and isn’t, it is likely not the fault of “Joe or Sue admin”.
There are exceptions to this rule, such as malicious insiders. However, in the vast majority of cases, the blame lies squarely with leadership, often C-level executives.
There are cases where a server can’t be brought down for patching because the business refuses to accept the required downtime. In those scenarios, it is the responsibility of management to have patching policies in place which account for limited and temporary exceptions, given proper risk evaluation and mitigating controls. These policies must have buy-in at executive levels, so that an angry VP can’t override them merely by threatening a technician’s job.
Of course, there are also instances where organizations operate on unsupported software because leadership has decided not to expend the money or work hours necessary to upgrade to a supported system. Once again, it falls to security and IT managers to make a case to executives that the upgrade expenditure is a good risk management decision and financially responsible. If a sensible decision isn’t made by executives after being presented with this information, the blame lies squarely at the C-level.
Finally, there are the cases in which a CIO or CISO fails to provide a policy or advocate for patching, and claims no knowledge of a server’s existence or of a threat. Ultimately, it’s the executives’ responsibility to hire savvy and articulate managers, who in turn hire subject matter experts who can generate comprehensive inventories and make reliable recommendations.
Do not make the mistake of comparing operational bureaucracy in a 50-person company with that of a 50,000-person company.
Susan Mauldin’s degree in music composition is totally irrelevant to whether you should be angry with Susan Mauldin.
It is possible for the Equifax CISO to have performed poorly at her job while also being similarly credentialed to numerous very competent information security professionals. Her degree should be treated as a non-issue.
As I’ve written in previous blogs, information security academia is new and delightfully inconsistent in quality. The vast majority of professionals with a decade or more of experience in security did not attend a security-centric degree program, because those programs simply did not exist prior to around 2006. As in many fast-paced technical fields, the information security degree programs that exist now are often abysmally out of date and fail to teach relevant skills. Hiring authorities still see many ‘paper tigers’ who leave two- to four-year degree programs with no substantial real-life knowledge.
While I personally do recommend a computer science degree for academically-focused people interested in pursuing a security career, degrees still function mostly as a means of gaining fundamental knowledge in a structured environment, and as a stepping stone for career progression and salary increases. The useful intangibles gained by attending a university tend towards report writing, business, and interpersonal skills, and there are other valid ways to gain those skill sets. Many a lauded information security executive has a degree in business, unrelated engineering, or indeed, fine arts. A large percentage have no degree at all (although a degree does still increase promotion potential).
What really counts toward being a competent information security executive? Passion, drive, and business savvy. A firm understanding of high-level fundamentals across a broad range of niches. The ability to hire the right subject matter experts and technical managers, who can advise him or her without needing to be micromanaged. Excellent risk management skills. The ability to play a tough political game to advocate for good security practices and the necessary money and headcount.
I don’t know more about Ms. Mauldin than what the internet bios say. It’s possible that the blame for a majority of the mistakes made by Equifax lies with her. It’s also possible her input and reports were universally dismissed by the CIO or CEO, and more of the blame can be placed on them. These things may become clearer as more technical and operational details are released. For the time being, stop looking at degrees and certifications for answers, lest you unintentionally insult some of the best minds in security as a side effect.