Failures in Preparedness

In May the GAO released a report titled “National Preparedness: Additional Actions Needed to Address Gaps in the Nation’s Emergency Management Capabilities”. I encourage everyone to read the report for themselves and also reflect on my commentary from several years of National Preparedness Reports. I’ll summarize all this though… it doesn’t look good. The National Preparedness Reports really tell us little about the state of preparedness across the nation, and this is reinforced by the GAO report as they state “FEMA is taking steps to strengthen the national preparedness system, but has yet to determine what steps are needed to address the nation’s capability gaps across all levels of government”.

First of all, let me be clear about where the responsibility of preparedness lies – EVERYONE. Whole community preparedness is actually a thing. It’s not FEMA’s job to ensure we are prepared. As also made evident in the GAO report (for those who haven’t worked with federal preparedness grants), most preparedness grants are pretty open, and as such, the federal government can’t force everyone to address the most critical capability gaps. Why wouldn’t jurisdictions want to address the most critical capability gaps, though? Here are some of the big reasons:

  • Most or all funding may be used to sustain the employment of emergency management staff, without whom there would be no EM program in that jurisdiction
  • The jurisdiction has prioritized sustaining other core capabilities which they feel are more important
  • The jurisdiction has decided that certain core capabilities are not for them to address (deferring instead to state or federal governments)
  • Shoring up gaps is hard
  • Response is sexier

The GAO report provided some data to support where priorities lie. First, let’s take a look at spending priorities by grant recipients:

While crosscutting capabilities (Operational Coordination, Planning, and Public Information and Warning) were consistently the largest expenditures, I would surmise that Operational Coordination was the largest of the three, followed by Planning, with Public Information and Warning coming in last. And I’m pretty confident that while these are crosscutting, they mostly lay within the Response Mission Area. Assuming my predictions are correct, there is fundamentally nothing wrong with this. It offers a lot of bang for the buck, and I’ve certainly spoken pretty consistently about how bad we are at things like Operational Coordination and Planning (despite some opinions to the contrary). Jumping to the end of the book, notice that Recovery mission area spending accounts for 1% of the total. This seems like a poor choice considering that three of the five lowest rated capabilities are in the Recovery mission area. Check out this table also provided in the GAO report:

Through at least a few of these years, Cybersecurity has been flagged as a priority by DHS/FEMA, yet clearly, we’ve not made any progress on that front. Our preparedness for Housing recovery has always been abysmal, yet we haven’t made any progress on that either. I suspect that those are two areas, specifically, that many jurisdictions feel are the responsibility of state and federal government.

Back in March of 2011, the GAO recommended that FEMA complete a national preparedness assessment of capability gaps at each level of government based on tiered, capability-specific performance objectives to enable prioritization of grant funding. This recommendation has not yet been implemented. While not entirely the fault of FEMA, we do need to reimagine the national preparedness system. While the current system is sound in concept, implementation falls considerably short.

First, we do need a better means of measuring preparedness. It’s difficult – I fully acknowledge that. And for as objective as we try to make it, there is a vast amount of subjectivity to it. I do know that, in the end, I shouldn’t find myself shaking my head or even laughing at the findings identified in the National Preparedness Report, knowing that some of the information there can’t possibly be accurate.

I don’t have all the answers on how we should measure preparedness, but I know this… it’s different for different levels of government. A few thoughts:

  • While preparedness is a shared responsibility, I don’t expect a small town to definitively have the answers for disaster housing or cybersecurity. We need to acknowledge that some jurisdictions simply don’t have the resources to make independent progress on certain capabilities. Does this mean they have no responsibility for it? No. Absolutely not. But the current structure of the THIRA, while allowing for some flexibility, doesn’t directly account for a shared responsibility.
  • Further, while every jurisdiction completing a THIRA is identifying their own capability targets, I’d like to see benchmarks established for them to strive for. This provides jurisdictions with both internal and external definitions of success. It also allows them an out, to a certain extent, on certain core capabilities that have a shared responsibility. Even a small town can make some progress on preparedness for disaster housing, such as site selection, estimating needs, and identifying code requirements (pro tip… these are required elements of hazard mitigation plans).
  • Lastly, we need to recognize that it’s difficult to measure things when they aren’t the same or aren’t being measured the same. Sure, we can provide a defined core capability, but when everyone has a different perspective on and expectation of that core capability and how it should be measured, we aren’t getting answers we can really compare. Everyone knows what a house is, but there is a considerable difference between a double wide and a McMansion. Nothing wrong with either of them, but the differences give us very different baselines to work from. Further, if we need to identify how big a house is and someone measures the length and width of the building, someone else measures the livable square footage of a different building, and a third person measures the number of floors of yet another house, we may all have correct answers, but we can’t really compare any of them. We need to figure out how to allow jurisdictions to contextualize their own needs, but still be playing the same game.

In regard to implementation, funding is obviously a big piece. Thoughts on this:

  • I think states and UASIs need to take a lot of the burden. While I certainly agree that considerable funding needs to be allocated to personnel, this needs to be balanced with sustaining certain higher tier capabilities and closing critical gaps. Easier said than done, but much of this begins with grant language and recognition that one grant may not fit all the needs.
  • FEMA has long been issuing various preparedness grants to support targeted needs and should not only continue to do so, but expand on this program. Targeted grants should be much stricter in establishing expectations for what will be accomplished with the grant funds.
  • Collaboration is also important. Shared responsibility, whole community, etc. Many grants have suggested or recommended collaboration through the years, but rarely has it actually been required. Certain capabilities develop far better through genuine collaboration, including with the private sector, NGOs, and the federal government. Let’s require more of it.
  • Instead of spreading money far and wide, let’s establish specific communities of practice to essentially act as model programs. For a certain priority, allocate funds for a grant opportunity with enough to fund 3-5 initiatives in the nation. Give 2-3 years for these programs to identify and test solutions. These should be rigorously documented so as to analyze information and potentially duplicate, so I suggest that academic institutions also be involved as part of the collaborative effort (see the previous bullet). Once each of the grantees has completed their projects, host a symposium to compare and contrast, and identify best practices. Final recommendations can be used to benchmark other programs around the nation. Once we have a model, then future funding can be allocated to support implementation of that model in other areas around the nation. Having worked with the National Academies of Sciences, Engineering, and Medicine, I think they may be an ideal organization to spearhead the research component of such programs.
  • Recognize that preparedness isn’t just long term, it’s perpetual. While certain priorities will change, the goals remain fundamentally the same. We are in this for the long haul and we need to engage with that in mind. Strategies such as the one in the previous bullet point lend themselves to long-term identification of issues, exploration of solutions, and implementation of best practices.
  • Perhaps in summary of all of this, while every jurisdiction has unique needs, grant programs can’t be so open as to allow every grantee to have a wholly unique approach to things. It feels like most grant programs now are simply something thrown at a wall – some of it sticks, some of it falls right off, some might not even make it to the wall, some slowly drips off the wall, and some dries on permanently. We need consistency. Not necessarily uniformity, but if standards are established to provide a foundational 75% solution, with the rest open for local customization, that may be a good way to tackle a lot of problems.

In the end, while FEMA is the implementing agency, the emergency management community needs to work with them to identify how best to measure preparedness across all levels and how we can best implement preparedness programs. Over the past few years, FEMA has been very open in developing programs for the emergency management community and I hope this is a problem they realize they can’t tackle on their own. They need representatives from across the practice to help chart a way ahead. This will ensure that considerations and perspectives from all stakeholder groups are addressed. Preparedness isn’t a FEMA problem, it’s an emergency management problem. Let’s help them help us.

What thoughts do you have on preparedness? How should we measure it? What are the strengths and areas for improvement for funding? Do you have an ideal model in mind?

© 2020 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC®

Measuring Preparedness – An Executive Academy Perspective

A recent class of FEMA’s Emergency Management Executive Academy published a paper titled Are We Prepared Yet? in the latest issue of the Domestic Preparedness Journal.  It’s a solid read, and I encourage everyone to look it over.

First off, I wasn’t aware of the scope of work conducted in the Executive Academy.  I think that having groups publish papers is an extremely important element.  Given that the participants of the Executive Academy function, presently or in the near future, at the executive level in emergency management and/or homeland security, giving others the opportunity to learn from their insight on topics discussed in their sessions is quite valuable.  I need to do some poking around to see if papers written by other groups can be found.

As most of my readers know, the emphasis of my career has always been in the realm of preparedness.  As such, it’s an important topic to me and I tend to gravitate to publications and ideas I can find on the topic.  The authors of this paper bring up some excellent points, many of which I’ve covered in articles past.  They indicate a variety of sources, including literature reviews and interviews, which I wish they had cited more completely.

Some points of discussion…

THIRA

The authors discuss the THIRA and SPR – two related processes/products which I find to be extremely valuable.  They indicate that many believe the THIRA to be complex and challenging.  I fully agree with this; however, I posit that there are few things in the world that are both simple and comprehensive in nature.  In particular regard to emergency management and homeland security, the inputs that inform and influence our decisions and actions are so varied, yet so relevant, that to ignore most of them would put us at a significant disadvantage.  While I believe that anything can be improved upon, THIRA and SPR included, this is something we can’t afford to overly simplify.

What was most disappointing in this topic area was their finding that only a scant majority of people they surveyed felt that THIRA provided useful or actionable information.  This leaves me scratching my head.  A properly done THIRA provides a plethora of useful information – especially when coupled with the SPR (POETE) process.  Regardless, the findings of the authors suggest that we need to take another look at THIRA and SPR to see what can be improved upon, both in process and result.

Moving forward within the discussion of THIRA and SPR, the authors include discussion of something they highlight as a best practice, that being New York State’s County Emergency Preparedness Assessment (CEPA).  The intent behind the CEPA is sound – a simplified version of the THIRA which is faster and easier to do for local governments throughout the state.  The CEPA includes foundational information, such as a factual overview of the jurisdiction, and a hazard analysis which ranks hazards based upon likelihood and consequence.  It then analyzes a set of capabilities based upon the POETE elements.  While I love their inclusion of POETE (you all know I’m a huge fan), the capabilities they use are a mix of the current Core Capabilities (ref: National Preparedness Goal) and the old Target Capabilities, along with a few consistent with neither, while a number of Core Capabilities are left out entirely.  This is where the CEPA falls apart for me.  It is this inconsistency with the National Preparedness Goal that turns me off.  Any local governments looking to do work in accordance with the NPG and related elements, including grants, then need to cross walk this data, as does the state in their roll-up of this information to their THIRA and SPR.

The CEPA continues with an examination of response capacity, along the lines of their response-oriented capabilities.  This is a valuable analysis and I expect it becomes quite a reality check for many jurisdictions.  This is coupled with information not only on immediate response, but also sustained response over longer periods of time.  Overall, while I think the CEPA is a great effort to make the THIRA and POETE analysis more palatable for local jurisdictions, it leaves me with some concerns in regard to the capabilities they use.  It’s certainly a step in the right direction, though.  Important to note, the CEPA was largely developed by one of the authors of the paper, who was a former colleague of mine working with the State of New York.

The Process of Preparedness

There are a few topic areas within their paper that I’m lumping together under this discussion topic.  The authors make some excellent points about our collective work in preparedness that I think all readers will nod their heads about, because we know them intuitively, but they sometimes need to be reinforced – not only to us as practitioners, but also to other stakeholders, including the public.  First off, preparedness is never complete.  The cycle of preparedness – largely involving assessment, planning, organizing, equipping, training, and exercising – is just that – a cycle.  It’s endless.  While we do a great deal of work in each of these, our accomplishments are really only temporary.

The authors also mention that our information is not always precise.  We base a lot of what we do in preparedness on information, such as a hazard analysis.  While there are some inputs that are factual and supported by science, there are many that are based on speculation and anecdote.  This is a reality of our work that we must always acknowledge.  As is another of their points – there is no silver bullet.  There is no universal solution to all our woes.  We must constantly have our heads in the game and consider actions that we may not have ever considered before.

ICS Improvement Officer

The authors briefly discuss a conceptual position within the ICS Command Staff they call the ICS Improvement Officer.  The concept is fascinating, if a bit out of place in this paper given its other topics of discussion.  Essentially, as they describe this position, it is someone at the Command Staff level who is responsible for providing quality control to the incident management processes and implementations of the organization.  While I’ve only just read this paper and haven’t had a lot of time to digest the concept, I really can’t find any fault with it.  While the planning process itself is supposed to provide some measure of a feedback loop, there isn’t anyone designated in the organization to shepherd that process beginning to end and ultimately provide the quality control measures necessary.  In practice, I’ve seen this happen collaboratively, among members of the Command and General Staff of a well-staffed structure, as well as by the individual who has the best overall ICS insight and experience in an organization – often the Planning Section Chief.  The authors allude to this position also feeding an AAR process, which contributes to overall preparedness.  I like this idea and I hope it is explored more, either formally or informally.

Conclusion

There are a number of other topic areas of this paper which I haven’t covered here, but I encourage everyone to read on their own.  As mentioned earlier, I’d like to see more of the research papers that come from FEMA’s Emergency Management Executive Academy available for public review.  Agree or disagree with their perspectives, I think their discussions on various topics are absolutely worth looking at.  It’s discussions like these that will ultimately drive bigger conversations which will continue to advance public safety.

I’m always interested in the perspectives of my readers.  Have you read the paper?  What do you think of the discussion topics they presented?

© 2017 – Timothy M Riecker, CEDP

Emergency Preparedness Solutions, LLC