Learning From a 10-Year-Old Report

I’ll admit that I’m often dismissive of information in the field of emergency management and homeland security if it’s over 10 years old.  There is a lot that’s changed in the past 10 years, after all.  But realistically, for as much as we’ve changed, things have stayed the same.  Arguably, the first decade of this millennium saw much more change in EM/HS than the second decade has, at least so far.  The first decade saw events like 9/11 and Hurricane Katrina.  Yes, there certainly have been major events in this second decade, but none, it seems, were as influential to our field of practice as those in the first decade.

It’s important to reflect upon lessons observed and to examine what lessons we’ve actually learned.  How far have we come in implementing improvements from the 9/11 Report?  What still needs to be accomplished to meet the intent of the Post-Katrina Emergency Management Reform Act (PKEMRA)?  Perhaps when I have some time to devote to it, I’ll review those documents again, reflect on them, and provide my thoughts here.

Yesterday I received the latest email from DomesticPreparedness.com.  I refer often to their work in my articles.  This weekly brief included an article from one of my favorite authors in this field, John Morton, whose work I’ve referenced in a few of my past articles.  This article, titled The What If Possibility: A Chilling Report, talks about planning for a rogue nuclear attack, the likely lead role the federal government would have to take in response to such an attack (versus a locally-led response), and what the situation would be the day after.  With the threat of North Korean nuclear weapons capability looming, this article was an interesting read and spot-on.  I noticed a problem, though… It referenced Ash Carter as an assistant secretary of defense in the Clinton administration.  While this was true, Carter’s highest office was SecDef under President Obama.  Surely John Morton, with the incredible attention to detail I’ve come to recognize, couldn’t have made this error.

Nope.  No error on his part.  I looked at the date of the article.  June 27, 2007 – over a decade old.  Incredibly, this article is still highly relevant today.  The article does reference the drafting of certain federal plans for a nuclear attack.  These are plans I am not privy to, but they assuredly exist today.  I’m curious as to the model these plans follow, what has been learned from exercising them, and how we might be able to apply elements of these plans to other catastrophic occurrences.

Despite change, so much seems to stay the same.  Of course, a decade isn’t that long.  Given that emergency management and homeland security are primarily government roles, we have to acknowledge that the (usually necessary) bureaucracy simply doesn’t move that quickly.  Unfortunately, there are things we are far too slow to adopt, not just from a government perspective, but socially.  As a lover of history and sociology, I see lessons observed from the 1900 Galveston hurricane as well as the eruption of Mt. Vesuvius in 79 CE.  There is much that history can teach us, if we are willing to listen.  Lessons observed, but not learned.

© 2017 – Timothy Riecker

Emergency Preparedness Solutions, LLC

Failed Attempts to Measure NIMS Compliance – How can we get it right?

Yesterday the US Government Accountability Office (GAO) released a report titled Federal Emergency Management Agency: Strengthening Regional Coordination Could Enhance Preparedness Efforts.  I’ve been waiting for a while for the release of this report, as I am proud to have been interviewed for it as a subject matter expert.  It’s the second GAO report on emergency management I’ve been involved in through my career.

The end result of this report is an emphasis on a stronger role for the FEMA regional offices.  The GAO came to this conclusion through two primary discussions: one on grants management, the other on assessing NIMS implementation efforts.  The discussion of how NIMS implementation has historically been measured shows the failures of that system.

When the National Incident Management System (NIMS) was first created as a nationwide standard in the US via President Bush’s Homeland Security Presidential Directive (HSPD) 5 in 2003, the NIMS Integration Center (NIC) was established to make this happen.  This was a daunting, but not impossible, task, involving development of a standard (luckily, much of this already existed through similar systems), the creation of a training plan and curricula (again, much of this already existed), and encouraging something called ‘NIMS implementation’ by every level of government and other stakeholders across the nation.  This last part was the really difficult one.

As identified in the GAO report: “HSPD-5 calls for FEMA to (1) establish a mechanism for ensuring ongoing management and maintenance of the NIMS, including regular consultation with other federal departments and agencies and with state and local governments, and (2) develop standards and guidelines for determining whether a state or local entity has adopted NIMS.”

While there was generally no funding directly allocated to NIMS compliance activities for state and local governments, FEMA/DHS made NIMS compliance a required activity for eligibility for many of its grant programs.  (So let’s get this straight… If my jurisdiction is struggling to be compliant with NIMS, you will take away the funds which would help me to do so?)  (I heard a few rumors about funds actually being denied, but none were ever confirmed.)

NIMS compliance was (and continues to be) a self-certification, with little to no effort at the federal level to actually assess compliance.  Annually, each jurisdiction would complete an online assessment tool called NIMSCAST (the NIMS Compliance Assistance Support Tool).  NIMSCAST ran until 2013.

NIMSCAST was a mix of survey-type questions… some yes/no, some with qualified answers, and most simply looking for numbers – usually numbers of people trained in each of the ICS courses.  From FEMA’s NIMS website: “The purpose of the NIMS is to provide a common approach for managing incidents.”  How effective do you think the NIMSCAST survey was at gauging progress toward this?  The answer: not very well.

People are good at being busy but not actually accomplishing anything.  That’s not to say that many jurisdictions didn’t make good faith efforts in complying with the NIMS requirements (and thus were dedicated to accomplishing better incident management), but many were pressured and intimidated, ‘pencil whipping’ certain answers, fearing a loss of federal funding.  Even for those with good faith efforts, churning a bunch of people through training courses does not necessarily mean they will implement the system they are trained in.  Implementation of such a system requires INTEGRATION through all realms of preparedness and response.  While NIMSCAST certainly provided some measurable results, particularly in terms of the number of people completing ICS courses, that really doesn’t tell us anything about IMPLEMENTATION.  Are jurisdictions actually using NIMS and, if so, how well?  NIMSCAST was as much a show of being busy while not accomplishing anything as some of the activities it measured.  It’s unfortunate that this numbers game lasted almost ten years.

In 2014, the NIC (which now stands for the National Integration Center) incorporated NIMS compliance questions into the Unified Reporting Tool (URT), adding about a dozen questions to every state’s THIRA and State Preparedness Report submission.  Jurisdictions below the state level (unless they are Urban Area Security Initiative grant recipients) no longer need to provide any type of certification of their NIMS compliance (unless required by the state).  The questions asked in the URT, which simply check for a NIMS pulse, are even less effective at measuring any type of compliance than NIMSCAST was.

While I am certainly being critical of these efforts, I have acknowledged, and continue to acknowledge, how difficult this particular task is.  But there must be a more effective way.  Falling back to my roots in curriculum development, we must identify how we will evaluate learning early in the design process.  The same principle applies here.  If the goal of NIMS is to “provide a common approach for managing incidents”, then how do we measure that?  The only acceptable methodology for measuring NIMS compliance is one that actually identifies whether NIMS has been integrated and implemented.  How do we do that?

The GAO report recommends the evaluation of after action reports (AARs) from incidents, events, and exercises as the ideal methodology for assessing NIMS compliance.  It’s a good idea.  Really, it is.  Did I mention that they interviewed me?

AARs (at least those that are well written) provide the kinds of information we are looking for.  Do they easily translate into numbers and metrics?  No.  That’s one of the biggest challenges with using AARs, which are full of narrative.  Another barrier to consider is how AARs are written.  The HSEEP standard for AARs is to focus on core capabilities.  The issue: there is no NIMS core capability.  The reason is that NIMS/ICS encompasses a number of key activities that we accomplish during an incident.  The GAO identified operational coordination, operational communications, and public information and warning as the three core capabilities with the most association to NIMS activities.

The GAO recommends that the assessment of NIMS compliance is best situated with FEMA’s regional offices.  This same recommendation comes from John Fass Morton, who authored Next-Generation Homeland Security (follow the link for my review of this book).  Given the depth of analysis required to review AAR narratives, the people doing these assessments absolutely must have some public safety and/or emergency management experience.  To better enable this measurement (which will help states and local jurisdictions, by the way), there may need to be some modification to the core capabilities and to how we write AARs, to help us better draw out some of the specific NIMS-related activities.  This, of course, would require several areas within FEMA/DHS to work together… which is something they are becoming better at, so I have faith.

There is plenty of additional discussion to be had regarding the details of all this, but it’s best we not get ahead of ourselves.  Let’s actually see what will be done to improve how NIMS implementation is assessed.  And don’t forget the crusade to improve ICS training!

What are your thoughts on how best to measure NIMS implementation?  Do you think the evaluation of AARs can assist in this?  At what level do you think this should be done – State, FEMA Regional, or FEMA HQ?

As always, thanks for reading!

© 2016 – Timothy Riecker