2021: Another Horrible National Preparedness Report

FEMA’s Christmas present to us in 2021, as with the past several years, was the National Preparedness Report. Before I dive in, a few reminders. 1) You can find my reviews of the reports of prior years here. 2) To get copies of the reports of prior years, FEMA archives these on the unrestricted side of the Homeland Security Digital Library. 3) Each report is derived from data from the year prior, so this December 2021 report actually covers calendar year 2020.

The 2021 report covers risks and capabilities, as have the reports of past years. It also covers ‘Management Opportunities’ which “the Federal Government, SLTTs (state, local, tribal, and territorial governments), and the private sector could use to build capability and address capacity gaps.” It offers a slightly different perspective than the prior year’s ‘Critical Considerations for Emergency Management’, but fundamentally offers the same type of constructive commentary.

Keep in mind that through much of 2020 the US, as with nations across the globe, was managing the COVID-19 pandemic. An observation from this report is that the word ‘COVID’ comes up 222 times in the document. That is a LOT of focus on one particular hazard. While I’ll grant that it impacted everyone, had a number of cascading impacts, and there are some statements made in the document about other hazards and concurrent incidents, I fear that when nearly every paragraph mentions COVID, we lose a sense of all-hazards emergency management in the document and thus in the state of the nation’s preparedness. What I do appreciate, as with FEMA’s new Strategic Plan and other recent documents, is the acknowledgement and discussion of inequities in disaster relief. This is an important topic which needs to continue getting exposure. Relatedly, they also reference the National Risk Index, released in 2020, which includes indices of social vulnerability. This is a valuable tool for all emergency managers.

The information on Risk included in the 2021 report is much more comprehensive and informative than that in the 2020 report, though they once again miss an opportunity to provide metrics and infographic displays. While words are valuable, well-designed infographics tell an even better story. Most numbers given in this section of the report were buried in seemingly endless paragraphs of text, and there certainly were no deep analytics provided. It’s simply poor storytelling and buries much of the value of this section.

Given that the mention of climate change had been forbidden in the past few reports, I would have expected the 2021 report to have some significant inclusion on the matter. Instead, it’s relegated to two pages covering ‘Emerging Risks’ with very little information given. Climate change isn’t emerging, folks, it’s here.

Capabilities are a significant focus of the Threat and Hazard Identification and Risk Assessment (THIRA) and Stakeholder Preparedness Review (SPR) completed by states, Urban Area Security Initiative (UASI) funded regions, and others. As part of the THIRA/SPR process, stakeholders traditionally identify their own preparedness goals (capability targets) for each of the 32 Core Capabilities outlined in the National Preparedness Goal. For the 2021 report, FEMA limited the capability targets to a given set focused on pandemic-related capabilities. As mentioned earlier, while the pandemic is certainly a principal concern, and many of the capability targets can be leveraged toward other hazards, I think this was a failure of the all-hazards approach. Further, with this focus, the 2021 report fails to provide most of the metrics provided in reports of the past, identifying, in aggregate, where stakeholders assessed their own standing in each Core Capability. This is the most significant gauge of preparedness, and they provide so little information on it in this report that I feel the report fails at its primary goal.

I’ve mentioned in the past that the metrics provided in previous reports are superficial at best and provide little by way of analysis. Unfortunately, the metrics provided in the 2021 report are even more lacking, and what there is only provides a snapshot of 2020 instead of any trend analysis.

What I did appreciate in this section of the document were some infographics compiling information on some of the capability targets that FEMA pre-determined. Unfortunately, they didn’t provide these infographics even for all of the limited set of capability targets, and the information provided is still fairly weak. Again, this severely limits the value of this as a national report on preparedness.

The last major component of the document is Management Opportunities. This section similarly provides seemingly endless paragraphs of text, but does approach these management opportunities like a strategic plan, setting goals, objectives, and (some) possible metrics for each opportunity. These offer valuable approaches, which coincidentally dovetail well with the goals of FEMA’s new strategic plan and will hopefully provide some solid value to emergency management programs at all levels. I think this section is really the most valuable component of the entire report. Unfortunately, it’s the shortest. The opportunities identified in the report are:

  • Developing a Preparedness Investment Strategy
  • Addressing Steady-State Inequities, Vulnerabilities, and a Dynamic Risk Landscape
  • Strengthen Processes Within and Better Connect Areas of the National Preparedness System

Overall, while there are some pockets of good content, this is another disappointing report. FEMA still isn’t telling us much about the state of preparedness across the nation; in fact, this report tells us even less than prior reports, which I didn’t think was possible. They attempt to tell stories through some focused discussion on a few capability targets, which has some value, but provide little to no information on the big picture: not the current state of preparedness and certainly not any analysis of trends. Even the section on Management Opportunities isn’t consistent in identifying metrics for each opportunity.

What remains a mystery to me is why it takes a full year to develop this report. The metrics I allude to throughout my commentary are largely easy to obtain and analyze, as much of this information comes to FEMA as quantifiable data, which also makes trend analysis a rather easy chore. Last year’s report, while still severely lacking, was formatted much better than this year’s, which lacked a vision for storytelling and communication of data.

Simply put, emergency managers and other recipients of this report (Congress?) should not accept this type of reporting. Despite coming in at 94 pages, it tells us so little and in my mind does not meet the spirit of the requirement for a National Preparedness Report (this is defined in Presidential Policy Directive 8). States, UASIs, and others who complete and submit THIRAs and SPRs should be livid that their efforts, while certainly (hopefully) valuable to them, are being poorly aggregated, studied, analyzed, and reported as part of the National Preparedness Report. In fact, I feel that the 2021 report is telling a story that FEMA wants to tell, supported by select data and case studies, rather than actually reporting on the state of preparedness across the nation, as informed by federal, state, local, territorial, tribal, private sector, and non-profit stakeholders.

As always, the thoughts of my readers are more than welcome.

Happy New Year to everyone!

© 2022 Tim Riecker, CEDP

Emergency Preparedness Solutions, LLC®

Measuring Return on Investment Through Key Performance Indicators

Return on investment (ROI) is generally defined as a measurement of performance to evaluate the value of investments of time, money, and effort. Many aspects of preparedness in emergency management offer challenges when trying to gauge return on investment. Sure, it’s easy to identify that m number of classes were conducted and n number of people were trained, that x number of exercises were conducted with y number of participants, that z number of plans were written, or even that certain equipment was purchased. While those tell us about activity, they don’t tell us about performance, results, or outcomes.

More classes were conducted. So what?

We purchased a generator. So what?

The metrics of these activities are easy to obtain, but these are rather superficial and generally less than meaningful. So how can we obtain a meaningful measure of ROI in emergency preparedness?

ROI is determined differently based on the industry being studied, but fundamentally it comes down to identifying key performance indicators, their value, and how much progress was made toward those key performance indicators. So what are our key performance indicators in preparedness?
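To make that definition concrete, here is a minimal sketch of how weighted KPI progress might be rolled into a single ROI-style score. The indicator names, targets, and weights below are entirely hypothetical illustrations, not drawn from any FEMA doctrine or grant guidance:

```python
# Hypothetical sketch: roll weighted KPI progress into a single score.
# Indicator names, targets, and weights are illustrative only.

def kpi_score(indicators):
    """Weighted average of progress toward each KPI, with each KPI capped at 100%."""
    total_weight = sum(i["weight"] for i in indicators)
    progress = sum(
        i["weight"] * min(i["actual"] / i["target"], 1.0) for i in indicators
    )
    return progress / total_weight

program_kpis = [
    # e.g., staff who completed position-specific training this period
    {"name": "staff_trained", "target": 40, "actual": 30, "weight": 2},
    # e.g., corrective actions closed out from last year's exercises
    {"name": "corrective_actions_closed", "target": 10, "actual": 4, "weight": 3},
]

print(f"Weighted KPI progress: {kpi_score(program_kpis):.0%}")
```

The point of the weighting is that not every indicator matters equally: closing exercise corrective actions, for example, may say more about real capability than raw training counts.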

FEMA has recently begun linking key performance indicators to the THIRA. The Threat and Hazard Identification and Risk Assessment, when done well, gives us quantitative and qualitative information on the threats and hazards we face and, based upon certain scenarios, the performance measures needed to attain certain goals. This is contextualized and standardized through defined Core Capabilities. When we compare our current capabilities to those needed to meet the identified goals (called capability targets in the THIRA and SPR), we are able to better define the factors that contribute to the gap. The gap is described in terms of capability elements – planning, organizing, equipping, training, and exercises (POETE). In accordance with this, FEMA is now making a more focused effort to collect data on how we are meeting capability targets, which helps us to better identify return on investment.
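As a toy illustration of the gap idea (my own sketch, not the actual THIRA/SPR methodology), comparing a current self-assessment to a capability target across the POETE elements might look like this. The 1–5 ratings and the single-capability framing are hypothetical:

```python
# Illustrative POETE gap comparison for one Core Capability.
# The rating scale and values are hypothetical, not actual THIRA/SPR data.

POETE = ["planning", "organizing", "equipping", "training", "exercises"]

def capability_gaps(target, current):
    """Return the shortfall per POETE element; 0 means the target is met."""
    return {e: max(target[e] - current.get(e, 0), 0) for e in POETE}

# Hypothetical capability target vs. current self-assessment (1-5 scale).
target = {"planning": 4, "organizing": 3, "equipping": 4, "training": 4, "exercises": 3}
current = {"planning": 3, "organizing": 3, "equipping": 2, "training": 3, "exercises": 1}

for element, gap in capability_gaps(target, current).items():
    print(f"{element}: gap of {gap}")
```

Framing gaps element by element, rather than as a single pass/fail judgment, is what lets investments be tied to specific shortfalls — a training gap calls for a different expenditure than an equipping gap.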

The 2021 Emergency Management Performance Grant (EMPG) requires the collection of data as part of the grant application and progress reports to support FEMA’s ability to measure program effectiveness and investment impacts. FEMA is collecting this information through the EMPG Work Plan. This spreadsheet goes a long way toward helping us better measure preparedness. The Work Plan leads programs to identify, for every funded activity:

  • The need addressed
  • What is expected to be accomplished
  • What the expected impact will be
  • Identification of associated mission areas and Core Capabilities
  • Performance goals and milestones
  • Some of the basic quantitative data I mentioned above
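The fields above could be modeled as a simple record. This structure is my own sketch of the listed fields — the field names and example values are hypothetical, not FEMA’s actual Work Plan column headings:

```python
# Hypothetical record mirroring the EMPG Work Plan fields listed above;
# field names and example values are my own, not FEMA's actual schema.
from dataclasses import dataclass, field

@dataclass
class WorkPlanActivity:
    need_addressed: str
    expected_accomplishment: str
    expected_impact: str
    mission_areas: list[str]
    core_capabilities: list[str]
    performance_goals: list[str] = field(default_factory=list)
    milestones: list[str] = field(default_factory=list)

activity = WorkPlanActivity(
    need_addressed="Outdated EOC activation procedures",
    expected_accomplishment="Revised EOC plan validated in a tabletop exercise",
    expected_impact="Faster, better-coordinated EOC activations",
    mission_areas=["Response"],
    core_capabilities=["Operational Coordination"],
    performance_goals=["Revised plan adopted by Q3"],
    milestones=["Draft complete Q1", "Tabletop exercise Q2"],
)
print(activity.core_capabilities[0])
```

Whatever the exact format, the value is in forcing every funded activity to name its expected impact and its tie to a Core Capability up front, so the "so what?" question is asked before the money is spent.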

This is a good start, but I’d like to see it go further. They should still be prompting EMPG recipients to directly identify what was actually improved and how. What has the development of a new plan accomplished? What capabilities did a certain training program improve? What areas for improvement were identified from an exercise, what is the corresponding improvement plan, and how will capabilities be improved as a result? The way to get to something more meaningful is to continue asking ‘so what?’ until you come to an answer that really identifies meaningful accomplishments.

EMPG aside, I encourage all emergency management programs to identify their key performance indicators. This is a much more results-oriented approach to managing your program, keeping the program focused on accomplishing meaningful outcomes, not just generating activity. It’s more impactful to report on what was accomplished than what was done. It also gives us more meaningful information to analyze across multiple periods. This type of information isn’t just better for grant reports, but also for your local budgets and even routine reports to upper management and elected officials.

What do you think about FEMA’s new approach with EMPG? What key performance indicators do you use for your programs?

© 2021 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC®

FEMA’s 2020 National Preparedness Report – A Review

It seems an annual tradition for me to be reviewing the National Preparedness Report. I’ve endeavored to provide constructive criticism of these documents, which are compilations of data from state and federal agencies, national-level responses, and other sources.

This year’s National Preparedness Report emphasizes that it is based on data from the 2019 calendar year. In looking back on past reports (note: they are no longer on the FEMA site – I was able to find them in the Homeland Security Digital Library) this has been the past practice. Perhaps I never realized it before, but a report talking about data from practically a full year ago seems to hold even less relevance. That means that enacting changes on a national level based on this data may not even begin to occur until two years have passed. Even taking into consideration that states and UASIs are compiling their reports early in a year for the previous year, it still seems a long time to wait for the national level report. This extent of lag is further emphasized by the document’s foreword, written by the FEMA Administrator, which makes many references to COVID-19 and how much different next year’s report will be, while not really speaking at all about the current report. This speaks a lot to how much we, as a practice, are attracted by the shiny objects dangled in front of us, seemingly ignoring all else.

My first pass of the 2020 report brought two primary impressions: 1) The instructive content of the document is some of the best I’ve seen out of FEMA, and 2) There is a considerable lack of data, with a low value for much of what they have included.

In regard to my first impression, the discussion of concepts such as risk (including emerging risk and systemic risk), capabilities, cascading impacts, community lifelines, public-private partnerships, and vulnerable populations has the perfect level of depth and detail. Not only do they discuss each of these concepts, but they also identify how they each connect to each other. This is EXACTLY the kind of consolidation of information we have needed for a long time. This lends itself to truly integrated preparedness and the kinds of information I’ve mentioned many times as being needed, including in the next version of CPG-101. I’m truly impressed with this content, the examples they provide, and how they demonstrate the interconnectedness of it all. I’ll certainly be using this document as a great source of this consolidated information. Now that I’ve extolled my love and adoration for that content, I’m left wondering why it’s in the National Preparedness Report. It’s great content for instructional material and doctrinal material on integrated preparedness, but it really has no place, at least to this extent of detail, in the National Preparedness Report. Aside from the few examples they use, there isn’t much value in this format as a report.

This brings me to my next early observation: that of very little actual data contained in the report. Given the extent to which states, territories, UASIs, and other stakeholders provide data to FEMA each year by way of their Threat and Hazard Identification and Risk Assessments (THIRAs) and Stakeholder Preparedness Reviews (SPRs), along with various other sources of data, this document doesn’t contain a fraction of what is being reported. There are two map products contained in the entire report, one showing the number of federal disaster declarations for the year, the other showing low-income housing availability across the nation. Given the wide array of information provided by states and UASIs, and compiled by FEMA region, surely there must be some really insightful trends and other analysis to provide. There are a few other data sets included in the report showing either raw numbers or percentages – nothing I would really consider analytics. Much of the data is also presented as a snapshot in time, without any comparison to previous years.

Any attempt to view this document as a timely, meaningful, and relevant report on the current state of preparedness in the nation, much less an examination of preparedness over time, is simply an exercise in frustration. The previous year’s report at least had a section titled ‘findings’, even though any real analysis of data there was largely non-existent. This year’s report doesn’t even feign providing a section on findings. To draw on one consistently frustrating example, I’ll use the Core Capability of housing. While this report dances around doctrine and concepts, and even has a section on housing, it’s not addressing why so little preparedness funding or even moderate effort is directed toward addressing the issue of emergency housing, which has arguably been the biggest preparedness gap for time eternal in every state of the nation. Looking broadly at all Core Capabilities, this year’s report provides a chart similar to what we’ve seen in previous years’ reports, identifying how much preparedness funding has gone toward each Core Capability. In relative numbers, very little has changed; even though we know that issues like housing, long-term vulnerability reduction, infrastructure systems, and supply chains have huge gaps. All these reports are telling me is that we’re doing the same things over and over again with little meaningful change.

So there it is… while I really am thoroughly impressed with some of the content of the report, much of that content really doesn’t have a place in this report (at least to such an extent), and for what little data is provided in the report, most of it has very little value. The introduction to the document states that “this year’s report is the product of rigorous research, analysis, and input from stakeholders”. To be blunt, I call bullshit on this statement. I expect a report to have data and various analysis of that data, not only telling us what is, but examining why it is. We aren’t getting that. The National Preparedness Report is an annual requirement per the Post Katrina Emergency Management Reform Act. I challenge that FEMA is not meeting the intent of that law with the reports they have been providing. How can we be expected, as a nation, to improve our state of readiness when we aren’t provided with the data needed to support and justify those improvements?

© 2020 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC®

The Universal Adversary Mindset

Some of you are probably familiar with the concept of the Universal Adversary (UA). From previous Homeland Security Exercise and Evaluation Program (HSEEP) doctrine, UA is “a fictionalized adversary created by compiling known terrorist motivations, doctrine, tactics, techniques, and procedures in live, virtual, and constructive simulations. The UA is based on real, realistic threats … providing participants with a realistic, capabilities-based opponent.” The UA is often portrayed by a Red Team, which serves as an exercise-controlled opposing force for participants.

Over the past few years, I’ve heard less and less about the Universal Adversary concept. DHS used to have a UA Program supporting terrorism-based prevention and response exercises, dating back to the early 2000s, but lately I’ve neither seen nor heard anything about the continuation of the program or capability. (Can any readers confirm the life or death of this capability?)

Regardless, the concept of UA offers a fair amount of opportunity, not only within the Prevention Mission Area, but across all of exercise design and perhaps other areas of preparedness – yes, even across all hazards. Of course, I recognize the difference between human perpetrators and other hazards, but just stick with me on this journey.

The fact of the matter is that we so often seem to have, as the 9/11 Commission Report made the phrase infamous, a failure of imagination in our preparedness. I’m not saying we need to go wild and crazy, but we do need to think bigger and a bit more creatively – not only in the hazards that threaten us, but also in our strategies to address them.

The UA concept is applied based on a set of known parameters, though even that gives me some concern. In the Prevention world, this means that a Red Team will portray a known force, such as ISIS, based upon real intel and past actions. We all know from seeing mutual fund commercials on TV that past performance does not predict future results. While humans (perpetrators and defenders alike) gravitate toward patterns, these rules can always and at any time be broken. The same can be said for instances of human error or negligence (see the recent and terrible explosion in the Port of Beirut), or in regard to someone who we have a love-hate relationship with… Mother Nature. We need to be ever vigilant of something different occurring.

There is the ever-prolific debate of scenario-based preparedness vs capability-based preparedness. In my opinion, both are wrong and both are right. The two aren’t and shouldn’t be set against each other as if they can’t coexist. That’s one mindset we need to move away from as we venture further into this. We need to continue with thinking about credible worst-case scenarios, which will still be informed by previous occurrences of a hazard, where applicable, but we need to keep our minds open and thinking creatively. Fundamentally, as the UA concept exists to foil and outthink exercise participants, we need to challenge and outthink ourselves across all areas of preparedness and all hazards.

A great example of how we were foiled, yet again, by our traditional thinking is the current Coronavirus pandemic. Practically every pandemic response plan I’ve read got it wrong. Why? Because most pandemic plans were based upon established guidance which emergency managers, public health officials, and the like got in line and followed to the letter, most without thinking twice about it. I’m not being critical of experts who tried to predict the next pandemic – they fell into the same trap most of us do in a hazard analysis – but the guidance for many years has remained fairly rigid. That said, I think the pandemic plans that exist shouldn’t be sent through the shredder completely. The scenarios those plans were based upon are still potentially valid, but Coronavirus, unfortunately, started playing the game in another ball field. We should have been able to anticipate that – especially after the 2003 SARS outbreak, which we pretty much walked away from with ignorant bliss.

It’s not to say that we can anticipate everything and anything thrown at us, but a bit of creativity can go a long way. Re-think and re-frame your hazards. Find a thread and pull it; see where it leads you. Be a little paranoid. Loosen up a bit. Brainstorm. Freeform. Improv. Have a hazard analysis party! (I come darn close to suggesting an adult beverage – take that as you will.) We can apply the same concepts when designing exercises. Consider that in the world of natural hazards, Mother Nature is a Universal Adversary. Any time we hope to have out-thought her, she proves us wrong, and with considerable embarrassment. We also try to out-think the frequent stupidity and negligence of our fellow humans… clearly, we’ve not been able to crack that nut yet.

“Think smarter, not harder” is such an easy thing to say, but oftentimes difficult to do. So much of what we do in emergency management is based on traditional practices, most of which have valid roots, but so often we seem reluctant to think beyond those practices. When the media reports that a disaster was unexpected, why the hell wasn’t it expected? Consider that many of our worst disasters are the ones we never thought of. Challenge yourself. Challenge others. It is not in the best interests of this profession or for the people we serve to stay stuck in the same modes of thinking. Be progressive. Break the mold. Do better.

© 2020 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC®

Failures in Preparedness

In May the GAO released a report titled “National Preparedness: Additional Actions Needed to Address Gaps in the Nation’s Emergency Management Capabilities”. I encourage everyone to read the report for themselves and also reflect on my commentary from several years of National Preparedness Reports. I’ll summarize all this though… it doesn’t look good. The National Preparedness Reports really tell us little about the state of preparedness across the nation, and this is reinforced by the GAO report as they state “FEMA is taking steps to strengthen the national preparedness system, but has yet to determine what steps are needed to address the nation’s capability gaps across all levels of government”.

First of all, let me be clear about where the responsibility of preparedness lies – EVERYONE. Whole community preparedness is actually a thing. It’s not FEMA’s job to ensure we are prepared. As also made evident in the GAO report (for those who haven’t worked with federal preparedness grants), most preparedness grants are pretty open, and as such, the federal government can’t force everyone to address the most critical capability gaps. Why wouldn’t jurisdictions want to address the most critical capability gaps, though? Here are some of the big reasons:

  • Most or all funding may be used to sustain the employment of emergency management staff, without whom there would be no EM program in that jurisdiction
  • The jurisdiction has prioritized sustaining other core capabilities which they feel are more important
  • The jurisdiction has decided that certain core capabilities are not for them to address (deferring instead to state or federal governments)
  • Shoring up gaps is hard
  • Response is sexier

The GAO report provided some data to support where priorities lie. First, let’s take a look at spending priorities by grant recipients:

While crosscutting capabilities (Operational Coordination, Planning, and Public Information and Warning) were consistently the largest expenditures, I would surmise that Operational Coordination was the largest of the three, followed by Planning, with Public Information and Warning coming in last. And I’m pretty confident that while these are crosscutting, they mostly lay within the Response Mission Area. Assuming my predictions are correct, there is fundamentally nothing wrong with this. It offers a lot of bang for the buck, and I’ve certainly spoken pretty consistently about how bad we are at things like Operational Coordination and Planning (despite some opinions to the contrary). Jumping to the end of the book, notice that Recovery mission area spending accounts for 1% of the total. This seems like a poor choice considering that three of the five lowest-rated capabilities are in the Recovery mission area. Check out this table also provided in the GAO report:

Through at least a few of these years, Cybersecurity has been flagged as a priority by DHS/FEMA, yet clearly, we’ve not made any progress on that front. Our preparedness for Housing recovery has always been abysmal, yet we haven’t made any progress on that either. I suspect that those are two areas, specifically, that many jurisdictions feel are the responsibility of state and federal government.

Back in March of 2011, the GAO recommended that FEMA complete a national preparedness assessment of capability gaps at each level of government based on tiered, capability-specific performance objectives to enable prioritization of grant funding. This recommendation has not yet been implemented. While not entirely the fault of FEMA, we do need to reimagine the national preparedness system. While the current system is sound in concept, implementation falls considerably short.

First, we do need a better means of measuring preparedness. It’s difficult – I fully acknowledge that. And for as objective as we try to make it, there is a vast amount of subjectivity to it. But I do know that, in the end, I shouldn’t find myself shaking my head or even laughing at the findings identified in the National Preparedness Report, knowing that some of the information there can’t possibly be accurate.

I don’t have all the answers on how we should measure preparedness, but I know this… it’s different for different levels of government. A few thoughts:

  • While preparedness is a shared responsibility, I don’t expect a small town to definitively have the answers for disaster housing or cybersecurity. We need to acknowledge that some jurisdictions simply don’t have the resources to make independent progress on certain capabilities. Does this mean they have no responsibility for it? No. Absolutely not. But the current structure of the THIRA, while allowing for some flexibility, doesn’t directly account for a shared responsibility.
  • Further, while every jurisdiction completing a THIRA is identifying their own capability targets, I’d like to see benchmarks established for them to strive for. This provides jurisdictions with both internal and external definitions of success. It also allows them an out, to a certain extent, on certain core capabilities that have a shared responsibility. Even a small town can make some progress on preparedness for disaster housing, such as site selection, estimating needs, and identifying code requirements (pro tip… these are required elements of hazard mitigation plans).
  • Lastly, we need to recognize that it’s difficult to measure things when they aren’t the same or aren’t being measured the same way. Sure, we can provide a defined core capability, but when everyone has a different perspective on and expectation of that core capability and how it should be measured, we aren’t getting answers we can really compare. Everyone knows what a house is, but there is a considerable difference between a double wide and a McMansion. Nothing wrong with either of them, but the differences give us very different baselines to work from. Further, if we need to identify how big a house is and someone measures the length and width of the building, someone else measures the livable square footage of a different building, and a third person measures the number of floors of yet another house, we may all have correct answers, but we can’t really compare any of them. We need to figure out how to allow jurisdictions to contextualize their own needs, but still be playing the same game.

In regard to implementation, funding is obviously a big piece. Thoughts on this:

  • I think states and UASIs need to take a lot of the burden. While I certainly agree that considerable funding needs to be allocated to personnel, this needs to be balanced with sustaining certain higher tier capabilities and closing critical gaps. Easier said than done, but much of this begins with grant language and recognition that one grant may not fit all the needs.
  • FEMA has long been issuing various preparedness grants to support targeted needs and should not only continue to do so, but expand on this program. Targeted grants should be much stricter in establishing expectations for what will be accomplished with the grant funds.
  • Collaboration is also important. Shared responsibility, whole community, etc. Many grants have suggested or recommended collaboration through the years, but rarely has it been actually required. Certain capabilities lend themselves to better development potential when we see the realization of collaboration, to include the private sector, NGOs, and the federal government. Let’s require more of it.
  • Instead of spreading money far and wide, let’s establish specific communities of practice to essentially act as model programs. For a certain priority, allocate funds for a grant opportunity with enough to fund 3-5 initiatives in the nation. Give 2-3 years for these programs to identify and test solutions. These should be rigorously documented so the results can be analyzed and potentially replicated, so I suggest that academic institutions also be involved as part of the collaborative effort (see the previous bullet). Once each of the grantees has completed their projects, host a symposium to compare and contrast, and identify best practices. Final recommendations can be used to benchmark other programs around the nation. Once we have a model, future funding can then be allocated to support implementation of that model in other areas around the nation. Having worked with the National Academies of Sciences, Engineering, and Medicine, I believe they may be an ideal organization to spearhead the research component of such programs.
  • Recognize that preparedness isn’t just long term, it’s perpetual. While certain priorities will change, the goals remain fundamentally the same. We are in this for the long haul and we need to engage with that in mind. Strategies such as the one in the previous bullet point lend themselves to long-term identification of issues, exploration of solutions, and implementation of best practices.
  • Perhaps in summary of all of this, while every jurisdiction has unique needs, grant programs can’t be so open as to allow every grantee to have a wholly unique approach to things. It feels like most grant programs now are simply something thrown at a wall – some of it sticks, some of it falls right off, some might not even make it to the wall, some slowly drips off the wall, and some dries on permanently. We need consistency. Not necessarily uniformity, but if standards are established to provide a foundational 75% solution, with the rest open for local customization, that may be a good way to tackle a lot of problems.

In the end, while FEMA is the implementing agency, the emergency management community needs to work with them to identify how best to measure preparedness across all levels and how we can best implement preparedness programs. Over the past few years, FEMA has been very open in developing programs for the emergency management community and I hope this is a problem they realize they can’t tackle on their own. They need representatives from across the practice to help chart a way ahead. This will ensure that considerations and perspectives from all stakeholder groups are addressed. Preparedness isn’t a FEMA problem, it’s an emergency management problem. Let’s help them help us.

What thoughts do you have on preparedness? How should we measure it? What are the strengths and areas for improvement for funding? Do you have an ideal model in mind?

© 2020 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC®

NEW: 2020 HSEEP Revision

Earlier today FEMA dropped the latest version of the Homeland Security Exercise and Evaluation Program (HSEEP) doctrine.  Doing a quick comparison between this new version and the previous (2013) version, I’ve identified the following significant changes:

  • They replaced the ‘Elected and Appointed Officials’ mentions within the document with ‘Senior Leaders’. This makes sense, since often the elected and appointed officials simply aren’t involved in many of these activities.  The previous terminology is also exclusionary of the private sector and NGOs.
  • The document specifically references the Preparedness Toolkit as a go-to resource.
  • A big emphasis throughout the document is on the Integrated Preparedness Cycle (see the graphic with this post). The Integrated Preparedness Cycle covers all POETE (Planning, Organizing, Equipping, Training, and Exercising) elements plus Evaluate/Improve.  The graphic also alludes to these activities not necessarily happening in a specific order, as well as the consideration of Preparedness Priorities and Threats, Hazards, and Risks.  Developing a preparedness plan is something I wrote about back in 2016.
  • Going along with the Integrated Preparedness Cycle, they have done away with the Training and Exercise Plan (TEP) and replaced it with the Integrated Preparedness Plan (IPP), which is developed through input obtained during an Integrated Preparedness Planning Workshop (IPPW). I seriously HOPE this shift is successful, as I’ve mentioned in the past how often the training aspect of the TEP was ignored or phoned in.  This approach also does a lot to integrate planning, organizing, and equipping (but ESPECIALLY planning) into the effort.  This is all tied together even more if a jurisdiction has completed a THIRA.  The Integrated Preparedness Cycle and IPP are the things I’m happiest about with the updated document.
  • The new document provides easier-to-find and easier-to-read layouts for information associated with exercise types and each of the planning meetings.
  • For years, HSEEP doctrine has suggested (though thankfully not required) an ICS-based organization for exercise planning. I’ve never used this as I found it awkward at best (though I know others often use it and have success in doing so).  The update provides a different suggestion (better, in my opinion) of a functionally organized planning team organization.  Consider that this is still a suggestion, and that you can use it, or a version of it, or an ICS-based one, or anything else you desire.
  • The update provides better delineation between the planning and conduct needs of discussion-based exercises vs those of operations-based exercises. Those of us who have been doing it for a while know, but for those who are new to exercises this should be very helpful.
  • Lastly, the document suggests making corrective actions SMART, as these are really objectives.

FEMA is hosting a series of webinars (listed on the HSEEP website) to discuss these changes.

I’m very happy with the changes made to the doctrine.  It’s a great continued evolution of HSEEP and preparedness as a whole.  For as much as I’m a champion of the Integrated Preparedness Plan, though, having it (thus far) only included in the HSEEP doctrine makes it easy to miss or dismiss by some.  I’m hopeful broader promotion of this concept, perhaps even including it as an emergency management performance grant requirement, will help adoption of this concept.

What are your thoughts?

© 2020 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC®

Thoughts on How to Improve the Planning Standard

I hope everyone is settling into the new year nicely.  One of the things I started this year doing was going through CPG 101 and providing input to FEMA for the update of this foundational document.  (Note: if you haven’t yet, get your comments in now, as the deadline is fast approaching!)  CPG 101 and its predecessors are time-tested and well-honed in the guidance they provide on the planning process.  While it’s frustrating to see and hear that some people still don’t use it, that’s no fault of the document itself, but rather of human implementation, or lack thereof.

I thought I’d share some of the feedback I sent along to FEMA on what I would like to see in the CPG 101 update.  Looking over my submission, there were two main themes I followed:

  1. Integration of other doctrine and standards
  2. Development of job aids to support use and implementation

I feel that integration of other relevant doctrine and standards into CPG 101 is incredibly important.  We know that preparedness covers an array of activities, but planning is the foundational activity upon which all other activities build.  In past articles I’ve addressed the need to view these various standards collectively: to show that while these are individual activities with their own outputs, they can and should be interconnected, offering greater value when used together.  Things like Community Lifelines, THIRA/SPR, HSEEP, and Core Capabilities need to not only be mentioned often, but with examples of how they interconnect and support planning and even each other.

Job aids are tools that support implementation.  I think job aids can and should be developed and included in the updated CPG 101 for each step of the planning process.  While some of us write plans fairly often, there are many who don’t, or who are going into it for the first time.  These are essentially the ideal conditions for job aids.  They help guide people through the key activities, provide them with reminders, and ultimately support better outcomes.  Not only would I like to see job aids, such as checklists and worksheets, for each step; I also think that something covering the whole process comprehensively, essentially from a project management perspective, would be incredibly helpful to many people.

There were a couple of one-off suggestions that might not fit the categories mentioned above.  One was having more emphasis on the value of data from the jurisdiction’s hazard mitigation plan.  The hazard analysis conducted for hazard mitigation planning is quite thorough, and can provide great information to support a hazard analysis (or even a THIRA for those brave enough) for purposes of emergency planning.  To be honest, this was something I didn’t really learn until about ten years into my career.  Many of the people I learned from in Emergency Management often leaned so far into response that they disregarded the value of things like mitigation or recovery.  I still find this a lot in our profession.  Once I finally took the time to go through a hazard mitigation plan, I realized the incredible amount of information contained within.  In many cases, there is more information than what is needed for the hazard analysis of an emergency plan, as the narrative and analysis in a hazard mitigation plan often go into a measure of scientific detail, but this, too, can certainly have value for emergency planning.  Similarly, I also suggested that FP 104-009-2 (the Public Assistance Program and Policy Guide) be included as a reference in CPG 101.  Jurisdictions will strongly benefit from having plans, such as those on debris management, that meet FEMA’s reimbursement guidelines.

Lastly, I encouraged FEMA to include any content that will support plan writers in developing plans that are simply more useful.  So many plans are just a lot of boilerplate narrative, that in the end don’t tell me WHO is responsible for WHAT and HOW things will get done.  It’s so easy for us to be dismissive of action steps when writing a plan, assuming that people will know who has the authority to issue a public alert or the steps involved in activating an EOC.  CPG 101 should reinforce the need for plans to define processes and actions, identify authority, and assign responsibility.  Flow charts, decision trees, maps, charts, and other graphics and job aids are incredibly helpful to ensure that a plan is thorough while also being useful.

That’s the feedback I provided to FEMA, along with a bit of narrative as to why those things are important for inclusion in an updated CPG 101.  I’m curious to hear about the feedback that others provided.  We all tackle these documents from different perspectives, and that’s why I truly appreciate the efforts FEMA makes in these public calls for comment when they are updating certain key documents.

© 2020 – Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC®

 

An Updated Community Lifelines Toolkit and Relationships to Incident Management

Earlier this year, FEMA released guidance on the Community Lifelines.  I wrote a piece in the spring about integrating the concept into our preparedness and response activities.  Last month, FEMA issued updated guidance for Community Lifeline Implementation through Toolkit 2.0.  In this update, FEMA cites lessons learned from actually applying the Lifeline concept in multiple exercises across the nation, as well as feedback received from stakeholders.  Based on these lessons learned and feedback, they have made adjustments to the toolkit reflecting how they understand, prioritize, and communicate incident impacts; the structure and format of decision-support products; and planning for these impacts and stabilization prior to and during incidents.  They have also made some changes based upon the updated National Response Framework.  The documents associated with the updated Community Lifelines all seem to reflect an inclusion in the efforts of the National Response Framework.  It’s great to see FEMA actually tying various efforts together and seeking to provide grounded guidance on application of concepts mentioned in doctrine-level documents.

The biggest addition to the Community Lifelines update is the inclusion of the FEMA Incident Stabilization Guide.  The ‘operational draft’ is intended to serve as a reference to FEMA staff and a resource to state, local, and tribal governments on how “FEMA approaches and conducts response operations”.  It’s a 77-page document that obviously leans heavily into the Community Lifelines as a standard for assessing the impacts to critical infrastructure and progress toward restoration, not only in response, but also into recovery operations.  It even reflects on bolstering Community Lifelines in resilience efforts, and ties in the THIRA and capability analysis efforts that states, UASIs, and other governments conduct.  I’m not sure the document is really a review of how FEMA conducts operations, as they say, but it does review the ideology of a portion of those operations.  Overall, the document contains some very useful information and references, but this brings me to a couple of important thoughts:

  1. The utility of this document, as with the entire Community Lifelines concept, is only realized when states and localities integrate these concepts into their own preparedness.
  2. We finally have guidance on what ‘incident stabilization’ really entails.

To address the first item… In my first piece on Community Lifelines, I mentioned that if states or communities are interested in adopting the concept, that all starts with planning.  An important early step of planning is conducting assessments, and the most pertinent assessment relative to this initiative is identifying and cataloging the lifelines in your community.  From there, the assessment extends to examining their present condition and vulnerabilities, and to establishing standards for determining their operational condition, aligned with the Community Lifelines guidelines.  I would also suggest identifying resiliency efforts (hopefully these are already identified in your hazard mitigation plan) which can help prevent damages or limit impacts.  As part of your response and short-term recovery lexicon, procedures should be developed to outline how lifeline assessments will be performed, when, and by whom, as well as where that information will be collected during an incident.

As for my second item, the concept of incident stabilization has an interesting intersection with a meeting I was invited to last week.  I was afforded the opportunity to provide input to an ICS curriculum update (not in the US – more on this at a later time), and as part of this we discussed the standard three incident priorities (Life Safety, Incident Stabilization, and Property Conservation).  We identified in our discussions that incident stabilization is incredibly broad and can ultimately mean different things to different communities, even though the fundamental premise of it is to prevent further impacts.  This Incident Stabilization Guide is focused exclusively on that topic.  In our endeavor to make ICS training better, more grounded, less conceptual, and more applicable; there is a great deal of foundational information that could be distilled from this new document for inclusion in ICS training to discuss HOW we actually accomplish incident stabilization instead of making a one-off mention of it.

Going a bit into my continued crusade against the current state of ICS training… I acknowledge that any inclusion of this subject matter in ICS training would still be generally brief, and really more of a framework, as implementation still needs to be grounded in community-level plans, but this document is a great resource.  This also underscores that “learning ICS” isn’t just about taking classes.  It’s about being a professional and studying up on how to be a more effective incident manager.  ICS is simply a tool we use to organize our response… ICS is NOT inclusive of incident management.  Not only are we teaching ICS poorly, we are barely teaching incident management.

While I’ve been away for a while working on some large client projects, I’m looking forward to ending the year with a bang, and getting in a few more posts.  It’s great that in my travels and interactions with colleagues, they regularly mention my articles, which often bring about some great discussion.  I’m always interested in hearing the thoughts of other professionals on these topics.

© 2019 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC®

Preparedness: Integrating Community Lifeline Considerations

Much of preparedness is about getting us ready to conduct situational assessment and prioritization of actions.  We train people and develop resources, such as drones, field-deployed apps, and geographic information systems (GIS) to support situational assessment.  The information we obtain from these assessments help in the development and maintenance of situational awareness and, when shared across disciplines, agencies, and jurisdictions, a common operating picture.  Based upon this information, leaders at all levels make decisions.  These decisions often involve the prioritization of our response and recovery actions.  Ideally, we should have plans in place that establish standards for how we collect, analyze, and share information, and also to support the decision making we must do in prioritizing our actions.  Exercises, of course, help us to validate those plans and practice associated tasks.

One significant hurdle for us is how overwhelming disasters can be.  With just slight increases in the complexity of a disaster, we experience factors such as large geography, extensive damages, high numbers of lives at risk, hazardous materials, and others.  Certainly, we know from Incident Command System training that our broad priorities are life safety, incident stabilization, and property conservation – but with all that’s happening, where do we start?

One thing that can help us with both assessment and prioritization is community lifelines.  From FEMA: “Community lifelines reframe incident information to provide decision-makers with impact statements and root causes.”  By changing how we frame our data collection, analysis, thinking, and decision-making, we can maximize the effectiveness of our efforts.  This shouldn’t necessitate a change in our processes, but we should incorporate community lifelines into our preparedness activities.

The community lifelines, as identified by FEMA, are:

  • Safety and Security
  • Food, Water, and Sheltering
  • Health and Medical
  • Energy
  • Communications
  • Transportation
  • Hazardous Materials

If this is your first time looking at community lifelines, they certainly shouldn’t be so foreign to you.  In many ways, these are identified components of our critical infrastructure.  By focusing our attention on this list of items, we can effect a more concerted response and recovery.

FEMA guidance goes on to identify essential elements of information (EEI) we should be examining for each community lifeline.  For example, the lifeline of Health and Medical includes the EEIs of:

  • Medical Care
  • Patient Movement
  • Public Health
  • Fatality Management
  • Health Care Supply Chain

Of course, you can dig even deeper when analyzing any of these EEIs to identify the status and root cause of failure, which will then support the prioritization of actions to address the identified failures.  First we seek to stabilize, then restore.  For example, within just the EEI of Fatality Management, you can examine components such as:

  • Mortuary and post-mortuary services
  • Transportation, storage, and disposal resources
  • Body recovery and processing
  • Family assistance

Situation reports, particularly those shared with the media, public, and other external partners, might benefit from being organized by community lifelines.  These are concepts that are generally tangible to many people, and they highlight many of the top factors we examine in emergency management.
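To make the idea of lifeline-organized reporting concrete, here is a minimal sketch of how a jurisdiction might structure situation report data by lifeline and EEI. The lifeline and EEI names follow FEMA’s lists above; the status values, root-cause field, and example entries are my own illustrative assumptions, not any FEMA schema.

```python
# Hypothetical sketch: situation report data organized by community lifeline.
# Status vocabulary ("stable" / "disrupted" / "failed") is an assumption,
# not FEMA doctrine; real implementations should follow local plans.

situation_report = {
    "Health and Medical": {
        "Fatality Management": {
            "status": "disrupted",
            "root_cause": "Mortuary services exceeded local capacity",
            "components": [
                "Mortuary and post-mortuary services",
                "Body recovery and processing",
            ],
        },
        "Medical Care": {
            "status": "stable",
            "root_cause": None,
            "components": [],
        },
    },
}

def disrupted_eeis(report):
    """Return (lifeline, EEI) pairs that are not stable, to drive prioritization."""
    return [
        (lifeline, eei)
        for lifeline, eeis in report.items()
        for eei, info in eeis.items()
        if info["status"] != "stable"
    ]

print(disrupted_eeis(situation_report))
```

A structure like this makes it trivial to roll up impact statements and root causes by lifeline for external reports, which is exactly the framing FEMA describes.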

Back in March of this year, FEMA released the Community Lifelines Implementation Toolkit, which provides some great information on the lifelines and some information on how to integrate them into your preparedness.  These can go a long way, but I’d also like to see some more direct application, as an addendum to CPG-101, to demonstrate how community lifelines can be integrated into planning.  Further, while I understand that FEMA is using the community lifeline concept for its own assessments and reporting, the community aspect should be better emphasized; the very FEMA- and IMAT-centric materials on this page should be identified as being mostly for federal application.

Has your jurisdiction already integrated community lifelines into your preparedness?  What best practices have you identified?

© 2019 – Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC®

Reviewing The 2018 National Preparedness Report

The 2018 National Preparedness Report was released last week.  For the past few years, I’ve provided my own critical review of these annual reports (see 2017’s report here).  For those not familiar with the National Preparedness Report (NPR), it is mandated by the Post-Katrina Emergency Management Reform Act (PKEMRA).  The information is compiled by FEMA from the State Preparedness Reports (SPR), including the Threat and Hazard Identification and Risk Assessment (THIRA) data submitted by states, territories, and Urban Area Security Initiative (UASI) – funded regions.  The data presented is for the year prior.  The SPRs and NPR examine the condition of our preparedness relative to the 32 Core Capabilities identified in the National Preparedness Goal.

Overall, the NPR provides little information, certainly nothing that is really shocking if you pay attention to the top issues in emergency management.  Disappointingly, the report only covers those Core Capabilities identified for sustainment or improvement, with no more than a graphic summary of the other Core Capabilities.

Core Capabilities to Sustain

Operational Coordination was identified as the sole Core Capability to sustain in this year’s report.  I’ve got some issues with this right off.  First of all, they summarize their methodology for selecting Core Capabilities to sustain: ‘To be a capability to sustain, the Nation must show proficiency in executing that core capability, but there must also be indications of a potentially growing gap between the future demand for, and the performance of, that capability.’  To me, what this boils down to is ‘you do it well, but you are going to have to do it better’.  I think most EM professionals could add to this list significantly, with Core Capabilities such as Planning; Public Information and Warning; Public Health, Healthcare, and EMS; Situational Assessment; and others.  Distilling it down to only Operational Coordination shows, to me, a severe lack of understanding of where we presently are and of the demands that will be put on our systems in the future.

Further, the review provided in the report relative to Operational Coordination is pretty soft.  Part of it is self-congratulatory, highlighting advances in the Core Capability made last year, with the rest of the section identifying challenges but providing little analysis.  Statements such as ‘Local governments reported challenges with incident command and coordination during the 2017 hurricane season’ are put out there, yet their single paragraph on corrective actions for the section boils down to the statement of ‘we’re looking at it’.  Not acceptable.

Core Capabilities to Improve

The 2018 report identifies four Core Capabilities to improve:

  • Infrastructure Systems
  • Housing
  • Economic Recovery
  • Cybersecurity

These fall under the category of NO KIDDING.  The writeups within the NPR for each of these superficially identify the need, but don’t offer much depth of analysis.  I find it interesting that the Core Capability to sustain has a paragraph on corrective actions, yet the Core Capabilities to Improve do not.  They do, instead, identify key findings, which outline some efforts to address the problems, but these are very soft and offer little detail.  Some of these include programs which have been in place for quite some time and which are clearly having limited impact on addressing the issues.

What really jumped out at me is the data provided on page 9, which charts the distribution of FEMA Preparedness grants by Core Capability for the past year.  The scale of their chart doesn’t allow for any exact amounts, but we can make some estimates.  Let’s look at four of these in particular:

  • Infrastructure Systems – scarcely a few million dollars
  • Housing – None
  • Economic Recovery – Less than Infrastructure Systems
  • Cybersecurity – ~$25 million

With over $2.3 billion in preparedness funding provided in 2017 by FEMA, it’s no wonder these are Core Capabilities that need to be improved when so few funds were invested at the state/territory/UASI level.  The sad thing is that this isn’t news.  These Core Capabilities have been identified as needing improvement for years, and I’ll concede they are all challenging, but the lack of substantial movement should anger all emergency managers.
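To underline how lopsided this is, a quick back-of-the-envelope calculation shows the shares involved. The dollar figures below are my own rough estimates read off the NPR chart (as noted above), not exact data.

```python
# Rough shares of FY2017 FEMA preparedness funding by Core Capability.
# All allocation figures are eyeballed estimates from the NPR chart,
# not authoritative numbers.
total_grants = 2_300_000_000  # over $2.3 billion in 2017 preparedness funding

estimated_allocations = {
    "Infrastructure Systems": 5_000_000,   # "scarcely a few million" (estimate)
    "Housing": 0,                          # none shown on the chart
    "Economic Recovery": 3_000_000,        # less than Infrastructure Systems (estimate)
    "Cybersecurity": 25_000_000,           # ~$25 million (estimate)
}

for capability, dollars in estimated_allocations.items():
    share = dollars / total_grants * 100
    print(f"{capability}: roughly {share:.2f}% of preparedness funding")
```

Even the best-funded of the four, Cybersecurity, works out to roughly one percent of the total, which makes the lack of progress unsurprising.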

I will agree that Housing and Cybersecurity require a significant and consolidated national effort to address.  That doesn’t mean they are solely a federal responsibility, but there is clear need for significant assistance at the federal level to implement improvements, provide guidance to states and locals, and support local implementations.  That said, we can’t continue to say that these areas are priorities when little funding or activity is demonstrated to support improvement efforts.  While certain areas may certainly take years to make acceptable improvements, we are seeing a dangerous pattern relative to these four Core Capabilities, which continue to wallow at the bottom of the list for so many years.

The Path Forward

The report concludes with a two-paragraph section titled ‘The Path Forward’, which simply speaks to refining the THIRA and SPR methodology, while saying nothing of how the nation needs to address the identified shortcomings.  Clearly this is not acceptable.

~~

As for my own conclusion, while I saw last year’s NPR as an improvement from years previous, I see this one as a severe backslide.  It provides little useful information and shows negligible change in the state of our preparedness over the past year.  The recommendations provided, at least those that do exist, are translucent at best, and this report leaves the reader with more questions and frustration.  We need more substance, beginning with root cause analysis and including substantial, tangible, actionable recommendations.  While I suppose it’s not the fault of the report itself that little improvement is being made in these Core Capabilities, the content of the report shows a lack of priority to address these needs.

I’m actually surprised that a separate executive summary of this report was published, as the report itself holds so little substance, that it could serve as the executive summary.  Having been involved in the completion of THIRAs and SPRs, I know there is information generated that is simply not being analyzed for the NPR.  Particularly with each participating jurisdiction completing a POETE analysis of each Core Capability, I would like to see a more substantial NPR which does some examination of the capability elements in aggregate for each Core Capability, perhaps identifying trends and areas of focus to better support preparedness.

As always, I’m interested in your thoughts.  Was there anything you thought to be useful in the National Preparedness Report?

© 2018 – Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC