The 2022 National Preparedness Report – Another Failure in Reporting

As with past years, FEMA gifts us the annual National Preparedness Report for the prior year around the holidays. Some reminders: 1) You can find my reviews of prior years’ reports here. 2) Copies of prior years’ reports are archived by FEMA in the unrestricted side of the Homeland Security Digital Library. 3) Each report is derived from data from the year prior, so this December 2022 report actually covers the calendar year of 2021.

Compared to last year’s report, this year’s follows much of the same format, with sections on risk, capabilities, and management opportunities. They appropriately moved some of the content in this year’s report to appendices, which helps each of the sections get more to the point.

Last year’s report was on a kick of catastrophic risk, devoting what I think was an excessive amount of content to data on large-scale disasters. While we should certainly plan for the worst, most areas do a mediocre job at best of preparing for, mitigating against, responding to, and recovering from mid-sized disasters. If they can’t manage all aspects of these, it’s not realistic to expect them to manage the largest that nature, terrorists, or accidents can throw at us. This year’s report has a much better focus on risk, threats, and hazards, with some reflection on THIRA/SPR data from 2021, grounded realities of climate change, and some time given to cybersecurity and infrastructure. In line with the FEMA strategic plan (and continuing from last year’s report), this year’s report also discusses equity, social vulnerability, and risk exposure, with reference to social vulnerability measures (of which I’m a big fan).

Last year’s report covered risks associated with healthcare systems and the economy; unfortunately, these got barely a mention in this year’s report. The reality of surge and the shortage of hospital beds has been brought to the forefront over the past few years, with little to nothing being done to address it. Similarly, the fragility of organizations has been revealed over the past few years, yet we have not seen as much of a push for continuity of operations as we should have. While thankfully this year’s report doesn’t have the focus on COVID that last year’s did, it seems people want to move on without addressing the glaring lessons learned.

In all, this year’s report spends about half the page volume on risk compared to last year’s report. While this year’s report provides better information, I still think there were some missed opportunities.

Looking into the assessment of capabilities, the first noted issue is that the capability targets for 2021 were the same as those for 2020. While consistency is important for long-term measurement, the lack of any alteration suggests to me that those who establish the capability targets lack some critical awareness of the emergency management landscape. While I don’t necessarily dispute the targets included, I think many of them could use better refinement and specificity. The omission of the cross-cutting Planning Core Capability (which is the foundation of all preparedness) is mind-blowing, as is the omission of the Recovery Mission Area’s Housing Core Capability (considered by many to be our greatest area of fragility). I’d really like to see the data substantiating the THIRA/SPR submissions that indicate such a high achievement of Unified Operations.

Returning to the necessity of long-term measurement, this year’s report offers none at all. This limits our ability to perceive preparedness gains or losses over time. As with last year’s report, which similarly did not provide this information, I feel this report has failed to meet its primary goal. It’s nothing more than a snapshot in time of very limited metrics – certainly not a comprehensive review of the state of the nation’s preparedness.

One particular graphic, identified as Figure 11 on page 24 of the report, is extremely telling. The chart identifies the non-disaster grant investments for FY21 across various grant programs. The grant distribution seems to hardly align with the established capability targets, which is good in some cases (we still need to invest in plans) but bad in others (Fatality Management is an established capability target that received minimal investment). By far, the greatest expenditures are related to planning, as I feel they should be, yet the ground truth is that a lot of horrible plans are still being generated. We have significant gaps in capabilities such as the aforementioned Fatality Management, along with Public Health/Healthcare/EMS, Housing, and Economic Recovery, yet we see minimal investment in these. Lastly, for this section I’ll note that last year’s report highlighted some specific capabilities and provided some narrative and data on each, which, while it needed refinement, was a good direction for this report to take. This year’s report dropped that depth of information completely.

The final section is Management Opportunities. The three opportunities identified in this section are:

  1. Building Community-Wide Resilience to Climate Change Impacts
  2. Reduce Physical and Technological Risks to Critical Infrastructure
  3. Increase Equity in Individual and Community Preparedness

I don’t argue at all with these three items, but the content, as usual, is lacking. What we should see here is a strategic approach to addressing these priority issues. Of course, to best do so, it would need to align with grant funding priorities and other efforts… which is something we’re just not seeing. They do provide some references and data within their analysis, but these do more to make a case for why these are priority issues, and to thump their chest over what they have accomplished, than to lay out a national roadmap for accomplishing these priorities. Reviewing last year’s management opportunities, I don’t recall many external products that really worked toward addressing them, nor does this year’s report reflect on any progress made. Without that, this section is nothing but well-intentioned yet intangible statements.

My last statement pretty much sums up the entirety of the report… nothing but well-intentioned yet intangible statements. This continues a trend of National Preparedness Reports providing a few good data points but certainly NOT reporting on our nation’s preparedness in any meaningful, much less comprehensive, manner. I stand by my statements from last year that we, the emergency management community, should not accept this type of reporting. FEMA receives THIRA and SPR data from states, UASIs, and territories, all of which have years of legacy data. Similarly, FEMA receives regular reports on the grants it provides to jurisdictions, all with metrics that should tie back to a common foundation – the National Preparedness Goal’s Core Capabilities. Yet they fail every year to connect these dots and provide tangible, grounded reports with actionable recommendations. This effort, this investment, and the FEMA Administrator’s endorsement are disappointing and concerning. I continue to feel these reports do not meet the intent of the PPD-8 requirements.

Happy New Year one and all!

© 2023 Tim Riecker, CEDP

Emergency Preparedness Solutions, LLC®

2021: Another Horrible National Preparedness Report

FEMA’s Christmas present to us in 2021, as in the past several years, was the National Preparedness Report. Before I dive in, a few reminders: 1) You can find my reviews of prior years’ reports here. 2) Copies of prior years’ reports are archived by FEMA in the unrestricted side of the Homeland Security Digital Library. 3) Each report is derived from data from the year prior, so this December 2021 report actually covers the calendar year of 2020.

The 2021 report covers risks and capabilities, as have the reports of past years. It also covers ‘Management Opportunities’ which “the Federal Government, SLTTs (state, local, territories, and tribes), and the private sector could use to build capability and address capacity gaps.” It offers a slightly different perspective than the prior year’s ‘Critical Considerations for Emergency Management’, but fundamentally offers the same type of constructive commentary.

Keep in mind that through much of 2020 the US, like nations across the globe, was managing the COVID-19 pandemic. An observation from this report: the word ‘COVID’ comes up 222 times in the document. That is a LOT of focus on one particular hazard. While I’ll grant that it impacted everyone, had a number of cascading impacts, and there are some statements made in the document about other hazards and concurrent incidents, I fear that when nearly every paragraph mentions COVID, we lose a sense of all-hazards emergency management in the document and thus in the state of the nation’s preparedness. What I do appreciate is that, as with FEMA’s new Strategic Plan and other recent documents, there is acknowledgement and discussion of inequities in disaster relief. This is an important topic which needs to continue getting exposure. Relatedly, they also reference the National Risk Index, released in 2020, which includes indices of social vulnerability. This is a valuable tool for all emergency managers.

The information on Risk included in the 2021 report is much more comprehensive and informative than that in the 2020 report, though they once again miss an opportunity to provide metrics and infographics. While words are valuable, well-designed infographics tell an even better story. Most numbers given in this section were buried in seemingly endless paragraphs of text, and there certainly were no deep analytics provided. It’s simply poor storytelling and buries much of the value of this section.

While the mention of climate change had been forbidden in the past few reports, I would have expected the 2021 report to have some significant inclusion on the matter. Instead, it’s highlighted in two pages covering ‘Emerging Risks’ with very little information given. Climate change isn’t emerging, folks, it’s here.

Capabilities are a significant focus of the Threat and Hazard Identification and Risk Assessment (THIRA) and Stakeholder Preparedness Review (SPR) completed by states, Urban Area Security Initiative (UASI) funded regions, and others. As part of the THIRA/SPR process, stakeholders traditionally identify their own preparedness goals (capability targets) for each of the 32 Core Capabilities outlined in the National Preparedness Goal. For the 2021 report, FEMA limited the capability targets to a given set focused on pandemic-related capabilities. As mentioned earlier, while the pandemic is certainly a principal concern, and many of the capability targets can be leveraged toward other hazards, I think this was a failure of the all-hazards approach. Further, with this focus, the 2021 report fails to provide most of the metrics provided in reports of the past, identifying, in aggregate, where stakeholders assessed their own standing in each Core Capability. This is the most significant gauge of preparedness, and they provide so little information on it in this report that I feel the report fails at its primary goal.

I’ve mentioned in the past that the metrics provided in previous reports are superficial at best and provide little by way of analysis. Unfortunately, the metrics provided in the 2021 report are even more lacking, and what is there provides only a snapshot of 2020 instead of any trend analysis.

What I did appreciate in this section were the infographics compiling information on some of the capability targets that FEMA pre-determined. Unfortunately, they didn’t provide these infographics even for all of the limited set of capability targets, and the information provided is still fairly weak. Again, this severely limits the value of this as a national report on preparedness.

The last major component of the document is Management Opportunities. This section similarly provides seemingly endless paragraphs of text, but does approach these management opportunities like a strategic plan, setting goals, objectives, and (some) possible metrics for each opportunity. These offer valuable approaches, which coincidentally dovetail well into the goals of FEMA’s new strategic plan and will hopefully provide some solid value to emergency management programs at all levels. I think this section is really the most valuable component of the entire report. Unfortunately, it’s the shortest. The opportunities identified in the report are:

  • Developing a Preparedness Investment Strategy
  • Addressing Steady-State Inequities, Vulnerabilities, and a Dynamic Risk Landscape
  • Strengthen Processes Within and Better Connect Areas of the National Preparedness System

Overall, while there are some pockets of good content, this is another disappointing report. FEMA still isn’t telling us much about the state of preparedness across the nation; in fact, this report tells us even less than prior reports, which I didn’t think was possible. They attempt to tell stories through some focused discussion of a few capability targets, which has some value, but provide little to no information on the big picture: not the current state of preparedness, and certainly not any analysis of trends. Even the section on Management Opportunities isn’t consistent in identifying metrics for each opportunity.

What remains a mystery to me is why it takes a full year to develop this report. The metrics I allude to throughout my commentary are largely easy to obtain and analyze, as much of this information comes to FEMA as quantifiable data, which also makes trend analysis a rather easy chore. The prior year’s report, while still severely lacking, was formatted much better than this year’s, which lacks a vision for storytelling and communication of data.

Simply put, emergency managers and other recipients of this report (Congress?) should not accept this type of reporting. Despite coming in at 94 pages, it tells us very little and, in my mind, does not meet the spirit of the requirement for a National Preparedness Report (defined in Presidential Policy Directive 8). States, UASIs, and others who complete and submit THIRAs and SPRs should be livid that their efforts, while certainly (hopefully) valuable to them, are being poorly aggregated, analyzed, and reported in the National Preparedness Report. In fact, I feel that the 2021 report tells a story FEMA wants to tell, supported by select data and case studies, rather than actually reporting on the state of preparedness across the nation as informed by federal, state, local, territorial, tribal, private sector, and non-profit stakeholders.

As always, the thoughts of my readers are more than welcome.

Happy New Year to everyone!

© 2022 Tim Riecker, CEDP

Emergency Preparedness Solutions, LLC®

FEMA’s 2020 National Preparedness Report – A Review

It seems an annual tradition for me to be reviewing the National Preparedness Report. I’ve endeavored to provide constructive criticism of these documents, which are compilations of data from state and federal agencies, national-level responses, and other sources.

This year’s National Preparedness Report emphasizes that it is based on data from the 2019 calendar year. Looking back on past reports (note: they are no longer on the FEMA site – I was able to find them in the Homeland Security Digital Library), this has been the past practice. Perhaps I never realized it before, but a report discussing data from practically a full year ago seems to hold even less relevance. That means that enacting changes on a national level based on this data may not even begin until two years have passed. Even taking into consideration that states and UASIs compile their reports early in a year for the previous year, it still seems a long time to wait for the national-level report. This lag is further emphasized by the document’s foreword, written by the FEMA Administrator, which makes many references to COVID-19 and how different next year’s report will be, while not really speaking at all about the current report. This says a lot about how much we, as a practice, are attracted to the shiny objects dangled in front of us, seemingly ignoring all else.

My first pass of the 2020 report brought two primary impressions: 1) The instructive content of the document is some of the best I’ve seen out of FEMA, and 2) There is a considerable lack of data, with a low value for much of what they have included.

In regard to my first impression, the discussion of concepts such as risk (including emerging risk and systemic risk), capabilities, cascading impacts, community lifelines, public-private partnerships, and vulnerable populations has the perfect level of depth and detail. Not only do they discuss each of these concepts, they also identify how each connects to the others. This is EXACTLY the kind of consolidation of information we have needed for a long time. It lends itself to truly integrated preparedness and the kinds of information I’ve mentioned many times as being needed, including in the next version of CPG-101. I’m truly impressed with this content, the examples they provide, and how they demonstrate the interconnectedness of it all. I’ll certainly be using this document as a great source of this consolidated information. Now that I’ve extolled my love and adoration for that content, I’m left wondering why it’s in the National Preparedness Report. It’s great content for instructional and doctrinal material on integrated preparedness, but it really has no place, at least to this extent of detail, in the National Preparedness Report. Aside from the few examples they use, there isn’t much value in this format as a report.

This brings me to my next early observation: there is very little actual data in the report. Given the extent to which states, territories, UASIs, and other stakeholders provide data to FEMA each year by way of their Threat and Hazard Identification and Risk Assessments (THIRAs) and Stakeholder Preparedness Reviews (SPRs), along with various other sources of data, this document doesn’t contain a fraction of what is being reported. There are two map products in the entire report: one showing the number of federal disaster declarations for the year, the other showing low-income housing availability across the nation. Given the wide array of information provided by states and UASIs, and compiled by FEMA region, surely there must be some really insightful trends and other analysis to provide. There are a few other data sets included in the report showing either raw numbers or percentages – nothing I would really consider analytics. Much of the data is also presented as a snapshot in time, without any comparison to previous years.

Any attempt to view this document as a timely, meaningful, and relevant report on the current state of preparedness in the nation, much less an examination of preparedness over time, is simply an exercise in frustration. The previous year’s report at least had a section titled ‘findings’, even though any real analysis of data there was largely non-existent. This year’s report doesn’t even feign providing a section on findings. To draw on one consistently frustrating example, I’ll use the Core Capability of Housing. While this report dances around doctrine and concepts, and even has a section on housing, it’s not addressing why so little preparedness funding or even moderate effort is directed toward the issue of emergency housing, which has arguably been the biggest preparedness gap in every state of the nation since time immemorial. Looking broadly at all Core Capabilities, this year’s report provides a chart similar to what we’ve seen in previous years’ reports, identifying how much preparedness funding has gone toward each Core Capability. In relative numbers, very little has changed, even though we know that areas like housing, long-term vulnerability reduction, infrastructure systems, and supply chains have huge gaps. All these reports are telling me is that we’re doing the same things over and over again with little meaningful change.

So there it is… while I really am thoroughly impressed with some of the content of the report, much of that content doesn’t have a place in this report (at least to such an extent), and for what little data is provided, most of it has very little value. The introduction to the document states that “this year’s report is the product of rigorous research, analysis, and input from stakeholders”. To be blunt, I call bullshit on this statement. I expect a report to have data and various analyses of that data, not only telling us what is, but examining why it is. We aren’t getting that. The National Preparedness Report is an annual requirement per the Post-Katrina Emergency Management Reform Act. I contend that FEMA is not meeting the intent of that law with the reports they have been providing. How can we be expected, as a nation, to improve our state of readiness when we aren’t provided with the data needed to support and justify those improvements?

© 2020 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC®

Failures in Preparedness

In May the GAO released a report titled “National Preparedness: Additional Actions Needed to Address Gaps in the Nation’s Emergency Management Capabilities”. I encourage everyone to read the report for themselves and also reflect on my commentary from several years of National Preparedness Reports. I’ll summarize all this though… it doesn’t look good. The National Preparedness Reports really tell us little about the state of preparedness across the nation, and this is reinforced by the GAO report as they state “FEMA is taking steps to strengthen the national preparedness system, but has yet to determine what steps are needed to address the nation’s capability gaps across all levels of government”.

First of all, let me be clear about where the responsibility of preparedness lies – EVERYONE. Whole community preparedness is actually a thing. It’s not FEMA’s job to ensure we are prepared. As also made evident in the GAO report (for those who haven’t worked with federal preparedness grants), most preparedness grants are pretty open, and as such, the federal government can’t force everyone to address the most critical capability gaps. Why wouldn’t jurisdictions want to address the most critical capability gaps, though? Here are some of the big reasons:

  • Most or all funding may be used to sustain the employment of emergency management staff, without whom there would be no EM program in that jurisdiction
  • The jurisdiction has prioritized sustaining other core capabilities which they feel are more important
  • The jurisdiction has decided that certain core capabilities are not for them to address (deferring instead to state or federal governments)
  • Shoring up gaps is hard
  • Response is sexier

The GAO report provided some data to support where priorities lie. First, let’s take a look at spending priorities by grant recipients:

While crosscutting capabilities (Operational Coordination, Planning, and Public Information and Warning) were consistently the largest expenditures, I would surmise that Operational Coordination was the largest of the three, followed by Planning, with Public Information and Warning coming in last. And I’m pretty confident that while these are crosscutting, these expenditures mostly lay within the Response Mission Area. Assuming my predictions are correct, there is fundamentally nothing wrong with this. It offers a lot of bang for the buck, and I’ve certainly spoken pretty consistently about how bad we are at things like Operational Coordination and Planning (despite some opinions to the contrary). Jumping to the end of the book, notice that Recovery mission area spending accounts for 1% of the total. This seems like a poor choice considering that three of the five lowest-rated capabilities are in the Recovery mission area. Check out this table also provided in the GAO report:

Through at least a few of these years, Cybersecurity has been flagged as a priority by DHS/FEMA, yet clearly, we’ve not made any progress on that front. Our preparedness for Housing recovery has always been abysmal, yet we haven’t made any progress on that either. I suspect that those are two areas, specifically, that many jurisdictions feel are the responsibility of state and federal government.

Back in March of 2011, the GAO recommended that FEMA complete a national preparedness assessment of capability gaps at each level of government, based on tiered, capability-specific performance objectives, to enable prioritization of grant funding. This recommendation has still not been implemented. While not entirely the fault of FEMA, we do need to reimagine the national preparedness system. While the current system is sound in concept, its implementation falls considerably short.

First, we do need a better means of measuring preparedness. It’s difficult – I fully acknowledge that. And for as objective as we try to make it, there is a vast amount of subjectivity to it. But I do know that, in the end, I shouldn’t find myself shaking my head or even laughing at the findings identified in the National Preparedness Report, knowing that some of the information there can’t possibly be accurate.

I don’t have all the answers on how we should measure preparedness, but I know this… it’s different for different levels of government. A few thoughts:

  • While preparedness is a shared responsibility, I don’t expect a small town to definitively have the answers for disaster housing or cybersecurity. We need to acknowledge that some jurisdictions simply don’t have the resources to make independent progress on certain capabilities. Does this mean they have no responsibility for them? No. Absolutely not. But the current structure of the THIRA, while allowing for some flexibility, doesn’t directly account for a shared responsibility.
  • Further, while every jurisdiction completing a THIRA is identifying their own capability targets, I’d like to see benchmarks established for them to strive for. This provides jurisdictions with both internal and external definitions of success. It also allows them an out, to a certain extent, on certain core capabilities that have a shared responsibility. Even a small town can make some progress on preparedness for disaster housing, such as site selection, estimating needs, and identifying code requirements (pro tip… these are required elements of hazard mitigation plans).
  • Lastly, we need to recognize that it’s difficult to measure things when they aren’t the same or aren’t being measured the same way. Sure, we can provide a defined core capability, but when everyone has a different perspective on and expectation of that core capability and how it should be measured, we aren’t getting answers we can really compare. Everyone knows what a house is, but there is a considerable difference between a double-wide and a McMansion. Nothing wrong with either of them, but the differences give us very different baselines to work from. Further, if we need to identify how big a house is and one person measures the length and width of a building, someone else measures the livable square footage of a different building, and a third person counts the number of floors of yet another house, we may all have correct answers, but we can’t really compare any of them. We need to figure out how to allow jurisdictions to contextualize their own needs while still playing the same game.

In regard to implementation, funding is obviously a big piece. Thoughts on this:

  • I think states and UASIs need to take a lot of the burden. While I certainly agree that considerable funding needs to be allocated to personnel, this needs to be balanced with sustaining certain higher tier capabilities and closing critical gaps. Easier said than done, but much of this begins with grant language and recognition that one grant may not fit all the needs.
  • FEMA has long been issuing various preparedness grants to support targeted needs and should not only continue to do so, but expand on this program. Targeted grants should be much stricter in establishing expectations for what will be accomplished with the grant funds.
  • Collaboration is also important. Shared responsibility, whole community, etc. Many grants have suggested or recommended collaboration through the years, but it has rarely been actually required. Certain capabilities develop much better when collaboration is realized, including with the private sector, NGOs, and the federal government. Let’s require more of it.
  • Instead of spreading money far and wide, let’s establish specific communities of practice to essentially act as model programs. For a given priority, allocate funds for a grant opportunity with enough to fund 3-5 initiatives in the nation. Give these programs 2-3 years to identify and test solutions. These should be rigorously documented so the information can be analyzed and the results potentially replicated, so I suggest that academic institutions also be involved as part of the collaborative effort (see the previous bullet). Once each of the grantees has completed their projects, host a symposium to compare and contrast and identify best practices. Final recommendations can be used to benchmark other programs around the nation. Once we have a model, future funding can be allocated to support implementing that model in other areas around the nation. Having worked with the National Academies of Sciences, Engineering, and Medicine, I think they may be an ideal organization to spearhead the research component of such programs.
  • Recognize that preparedness isn’t just long term, it’s perpetual. While certain priorities will change, the goals remain fundamentally the same. We are in this for the long haul and we need to engage with that in mind. Strategies such as the one in the previous bullet point lend themselves to long-term identification of issues, exploration of solutions, and implementation of best practices.
  • Perhaps in summary of all of this, while every jurisdiction has unique needs, grant programs can’t be so open as to allow every grantee to have a wholly unique approach to things. It feels like most grant programs now are simply something thrown at a wall – some of it sticks, some of it falls right off, some might not even make it to the wall, some slowly drips off the wall, and some dries on permanently. We need consistency. Not necessarily uniformity, but if standards are established to provide a foundational 75% solution, with the rest open for local customization, that may be a good way to tackle a lot of problems.

In the end, while FEMA is the implementing agency, the emergency management community needs to work with them to identify how best to measure preparedness across all levels and how we can best implement preparedness programs. Over the past few years, FEMA has been very open in developing programs for the emergency management community and I hope this is a problem they realize they can’t tackle on their own. They need representatives from across the practice to help chart a way ahead. This will ensure that considerations and perspectives from all stakeholder groups are addressed. Preparedness isn’t a FEMA problem, it’s an emergency management problem. Let’s help them help us.

What thoughts do you have on preparedness? How should we measure it? What are the strengths and areas for improvement for funding? Do you have an ideal model in mind?

© 2020 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC®

The 2019 National Preparedness Report, or ‘How Are We Measuring Preparedness?’

FEMA recently released the 2019 National Preparedness Report.  Simply put, I’m confused.  Nothing in the report actually lines up with doctrine.  It leaves me wondering how we are actually measuring preparedness.  So what’s the issue?

While the National Preparedness Report is initially structured around the five mission areas (Prevention, Protection, Mitigation, Response, and Recovery), the only full inclusion of the Core Capabilities in the report is a table on page 9, outlining usage of grant funds per Core Capability.  After this, the Core Capabilities for each mission are listed in the title page for each mission area within the detailed findings for those mission areas.  No detail of progress within these Core Capabilities is provided, however.  With the absence of this analysis, we are not seeing data on the progression of preparedness, which, per the National Preparedness Report, is measured through the lens of each of the Core Capabilities.

This is further confused on pages 45 and 48, in particular, where tables list the Community Lifelines with some sort of correlated ‘capabilities’ (noted with a lowercase ‘c’… thus not the Core Capabilities).  These capabilities are not from any doctrine that I can find or recall, including the components and subcomponents for each Community Lifeline provided in the Community Lifelines Toolkit.  For each of these they provide some analytical data, but it’s unclear what this is based upon.  The methodology provided early in the document does nothing to identify why this change in format has occurred or where these specific data sets come from, much less why they are deviating from the previous format and the standards provided through the National Preparedness Goal.

Some perspective… It would seem logical that the National Preparedness Report would be assessing our national state of preparedness relative to the National Preparedness Goal, as it has since its inception.  The National Preparedness Goal is structured around the five mission areas and the 32 Core Capabilities.  With the emergence of the Community Lifelines and their inclusion in the recent update of the National Response Framework, it makes sense that we will see Community Lifelines further integrated into standards, doctrine, and reports, but they have yet to be integrated into the National Preparedness Goal (the current version is dated 2015).  We have not yet seen a comprehensive crosswalk between the Community Lifelines and the Core Capabilities, but it should be recognized that there are certain aspects, even if you just examine the Response Mission Area, that don’t match up.

In an unrelated observation on the National Preparedness Report, the trend continues of citing after action reports from the year without actually providing any analysis of the lessons learned or how those are being applied across the nation.

Bottom line… while there are some valuable nuggets of information included in this report, I find most of it to be confusing, as it lacks a consistent internal format and is inconsistent with the existing standard of measurement defined by the National Preparedness Goal.  Why is this a big deal?  First, it’s a deviation from the established standard.  While the standard may certainly have room for improvement, the standard must first be changed before the metrics in the reporting can be changed.  Second, with the deviation from the standard, we aren’t able to measure progress over time.  All previous National Preparedness Reports have provided data within the scope of the Core Capabilities, while this one largely does not.  This breaks the possibility of any trend analysis.  Third, no reasoning is provided behind the capabilities (lowercase ‘c’) associated with each of the Community Lifelines in the report.  It’s confusing to the extent that it becomes irrelevant, because the information provided is not within the existing lexicon used to measure practically everything in preparedness.

Simply put, this year’s report is even more disappointing than those provided in previous years.  In fact, since it doesn’t conform with the current standard, I’d suggest it’s not even valid.  This should be far better.

Thoughts?

© 2019 – Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC


Reviewing The 2018 National Preparedness Report

The 2018 National Preparedness Report was released last week.  For the past few years, I’ve provided my own critical review of these annual reports (see 2017’s report here).  For those not familiar with the National Preparedness Report (NPR), it is mandated by the Post-Katrina Emergency Management Reform Act (PKEMRA).  The information is compiled by FEMA from the State Preparedness Reports (SPR), including the Threat and Hazard Identification and Risk Assessment (THIRA) data submitted by states, territories, and Urban Area Security Initiative (UASI) – funded regions.  The data presented is for the year prior.  The SPRs and NPR examine the condition of our preparedness relative to the 32 Core Capabilities identified in the National Preparedness Goal.

Overall, the NPR provides little information, certainly nothing that is really shocking if you pay attention to the top issues in emergency management.  Disappointingly, the report only covers those Core Capabilities identified for sustainment or improvement, with no more than a graphic summary of the other Core Capabilities.

Core Capabilities to Sustain

Operational Coordination was identified as the sole Core Capability to sustain in this year’s report.  I’ve got some issues with this right off.  First of all, they summarize their methodology for selecting Core Capabilities to sustain: ‘To be a capability to sustain, the Nation must show proficiency in executing that core capability, but there must also be indications of a potentially growing gap between the future demand for, and the performance of, that capability.’  To me, what this boils down to is ‘you do it well, but you are going to have to do it better’.  I think most EM professionals could add to this list significantly, with Core Capabilities such as Planning; Public Information and Warning; Public Health, Healthcare, and EMS; Situational Assessment; and others.  Distilling it down to only Operational Coordination shows, to me, a severe lack of understanding of where we presently are and of the demands that will be put on our systems in the future.

Further, the review provided in the report relative to Operational Coordination is pretty soft.  Part of it is self-congratulatory, highlighting advances in the Core Capability made last year, with the rest of the section identifying challenges but providing little analysis.  Statements such as ‘Local governments reported challenges with incident command and coordination during the 2017 hurricane season’ are put out there, yet their single paragraph on corrective actions boils down to ‘we’re looking at it’.  Not acceptable.

Core Capabilities to Improve

The 2018 report identifies four Core Capabilities to improve:

  • Infrastructure Systems
  • Housing
  • Economic Recovery
  • Cybersecurity

These fall under the category of NO KIDDING.  The writeups within the NPR for each of these superficially identify the need but don’t have much depth of analysis.  I find it interesting that the Core Capability to sustain has a paragraph on corrective actions, yet the Core Capabilities to improve do not.  They do, instead, identify key findings, which outline some efforts to address the problems, but these are very soft and offer little detail.  Some of them include programs which have been in place for quite some time and which are clearly having limited impact on addressing the issues.

What really jumped out at me is the data provided on page 9, which charts the distribution of FEMA Preparedness grants by Core Capability for the past year.  The scale of their chart doesn’t allow for any exact amounts, but we can make some estimates.  Let’s look at four of these in particular:

  • Infrastructure Systems – barely a few million dollars
  • Housing – None
  • Economic Recovery – Less than Infrastructure Systems
  • Cybersecurity – ~$25 million

With over $2.3 billion in preparedness funding provided in 2017 by FEMA, it’s no wonder these are Core Capabilities that need to be improved when so few funds were invested at the state/territory/UASI level.  The sad thing is that this isn’t news.  These Core Capabilities have been identified as needing improvement for years, and I’ll concede they are all challenging, but the lack of substantial movement should anger all emergency managers.

I will agree that Housing and Cybersecurity require a significant and consolidated national effort to address.  That doesn’t mean they are solely a federal responsibility, but there is clear need for significant assistance at the federal level to implement improvements, provide guidance to states and locals, and support local implementations.  That said, we can’t continue to say that these areas are priorities when little funding or activity is demonstrated to support improvement efforts.  While some areas may certainly take years to reach acceptable levels of improvement, we are seeing a dangerous pattern relative to these four Core Capabilities, which have wallowed at the bottom of the list for so many years.

The Path Forward

The report concludes with a two-paragraph section titled ‘The Path Forward’, which simply speaks to refining the THIRA and SPR methodology, while saying nothing of how the nation needs to address the identified shortcomings.  Clearly this is not acceptable.

~~

As for my own conclusion, while I saw last year’s NPR as an improvement from years previous, I see this one as a severe backslide.  It provides little useful information and shows negligible change in the state of our preparedness over the past year.  The recommendations provided, at least those that do exist, are translucent at best, and this report leaves the reader with more questions and frustration.  We need more substance, beginning with root cause analysis and including substantial, tangible, actionable recommendations.  While I suppose it’s not the fault of the report itself that little improvement is being made in these Core Capabilities, the content of the report shows a lack of priority to address these needs.

I’m actually surprised that a separate executive summary of this report was published, as the report itself holds so little substance that it could serve as the executive summary.  Having been involved in the completion of THIRAs and SPRs, I know there is information generated that is simply not being analyzed for the NPR.  Particularly with each participating jurisdiction completing a POETE analysis of each Core Capability, I would like to see a more substantial NPR which examines the capability elements in aggregate for each Core Capability, perhaps identifying trends and areas of focus to better support preparedness.

As always, I’m interested in your thoughts.  Was there anything you thought to be useful in the National Preparedness Report?

© 2018 – Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC

2017 National Preparedness Report – A Review

With my travel schedule, I missed the (late) release of the 2017 National Preparedness Report (NPR) in mid-October.  Foundationally, the findings of the 2017 report show little change from the 2016 report.  If you are interested in comparing, you can find my review of the 2016 NPR here.

The 2017 NPR, on the positive side, provided more data and more meaningful data than its predecessor.  It appeared to me there was more time and effort spent in analysis of this data.  If you aren’t familiar with the premise of the NPR, the report is a compilation of data obtained from State Preparedness Reports (SPRs) submitted by states, territories, and UASI-funded regions; so the NPR, fundamentally, should be a reflection of what was submitted by these jurisdictions and regions – for the better or worse of it.  The SPR asks jurisdictions to provide an honest analysis of each of the core capabilities through the POETE capability elements (Planning, Organizing, Equipping, Training, and Exercising).

From the perspective of the jurisdictions, no one wants to look bad.  Not to say that any jurisdiction has lied, but certainly agendas can sway subjective assessments.  Jurisdictions want to show that grant money is being spent effectively (with the hopes of obtaining more), but not with such terrific results that anyone would think they don’t need more.  Over the past few years the SPRs, I believe, have started to normalize and better reflect reality.  I think the authors of the NPR have also come to look at the data they receive a little more carefully and word the NPR to reflect this reality.

The 2017 NPR (which evaluates 2016 data from jurisdictions) identified five core capabilities the nation needs to sustain.  These are:

  • Environmental Response/Health and Safety
  • Intelligence and Information Sharing
  • Operational Communications
  • Operational Coordination
  • Planning

I’m reasonably comfortable with the first two, although they both deal with hazards and details that change regularly, so keeping on top of them is critical.  It’s interesting that Operational Communications is rated so highly, yet is so commonly seen as a top area for improvement in after-action reports of exercises, events, and incidents.  To me, the evidence doesn’t support the conclusion in regard to this core capability.  Operational Coordination and Planning both give me some significant concern.

First, in regard to Operational Coordination, I continue to have a great deal of concern in the ability of responders (in the broadest definitions) to effectively implement the Incident Command System (ICS).  While the implementation of ICS doesn’t comprise all of this core capability, it certainly is a great deal of it.  I think there is more room for improvement than the NPR would indicate.  For example, in a recent exercise I supported, the local emergency manager determined there would be a unified command with him holding ‘overall command’.  Unfortunately, these false interpretations of ICS are endemic.

I believe the Planning core capability is in a similar state of inadequacy.  Preparedness rests, fundamentally, on proper planning and the assessments that support it.  While I’ve pontificated at length about the inadequacy of ICS training, I’ve also seen far too many plans with gaps you could drive a truck through.  I recently exercised a college emergency response plan that provided no details or guidance on critical tasks, such as evacuating a dormitory and supporting the evacuated students.  The plan did a great job of identifying who should be in the EOC, but gave no information on what they should be doing or how they should do it.  The lack of plans that can be operationalized and implemented is staggering.

The NPR identified the top core capabilities to be improved.  There are no surprises in this list:

  • Cybersecurity
  • Economic Recovery
  • Housing
  • Infrastructure Systems
  • Natural and Cultural Resources
  • Supply Chain Integrity and Security

Fortunately, I’m seeing some (but not all) of these core capabilities getting some needed attention, but clearly not enough.  These don’t have simple solutions, so they will take some time.

Page 10 of the NPR provides a graph showing the distribution of FEMA preparedness (non-disaster) grants by core capability for fiscal year 2015.  Planning (approx. $350m) and Operational Coordination (approx. $280m) lead the pack by far.  I’m curious as to what specific activities these dollars are actually being spent on, because my experience shows that it’s not working as well as is being reported.  Certainly there has been some positive direction, but I’m guessing that dollars are being spent on activities that either have negligible impact or actually have a negative impact, such as funding the development of some of the bad plans we’re seeing out there.

I’m curious as to what readers are seeing out in real life.  What capabilities concern you the most?  What capabilities do you see successes in?  Overall, I think everyone agrees that we can do better.  We can also get better and more meaningful reports.  This NPR was a step in the right direction from last year’s, but we need to continue forward progress.

© 2017 – Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC

2016 National Preparedness Report Released

The fifth National Preparedness Report has been released by FEMA.  The National Preparedness Report is based upon, as the report states, input from more than 450 data sources and 190 stakeholders, including 66 non-federal organizations (which would account for state preparedness report submissions and information from Urban Area Security Initiative regions).  The report is intended as a summary of where the nation stands in regard to each of the 32 Core Capabilities outlined in the National Preparedness Goal.

As mentioned, this is the fifth National Preparedness Report to hit the streets.  While these reports have some value and demonstrate that the data collected is actually collated, I feel that through the years they are offering less meat and more potatoes.  I appreciate the highlighting of best practices for each mission area, but, to me, there is a missed opportunity if a report simply provides data without recommendations.  While it’s understood that the goal of the National Preparedness Report is not to provide recommendations (it would take longer to publish the report, and the people pulling the data together likely do not have the expertise to create recommendations), I’d like to see FEMA and stakeholders follow up with efforts to provide recommendations in each mission area and not miss this valuable opportunity to apply the findings and look forward.

Below, I’ve included their overall findings with a bit of my own commentary.  Overall, I will say that there is nothing eye opening in this report for anyone who pays attention.  It’s pretty easy to guess those Core Capabilities which are at the top and those which are at the bottom.

  • Planning; Public Health, Healthcare, and Emergency Medical Services; and Risk and Disaster Resilience Assessment are the three Core Capabilities in which the Nation has developed acceptable levels of performance for critical tasks, but that face performance declines if not maintained and updated to address emerging challenges.
    • My commentary: BULLSHIT.  If these Core Capabilities are at ‘acceptable levels’, then our standards must be pretty low.  Planning is the one that disturbs me most.  We continue to see plenty of poor plans that are not realistic, can’t be operationalized, and are created merely to meet requirements (which are typically met with formatting and buzzwords).  Have we improved?  Sure.  But I wouldn’t say we are at ‘acceptable levels’.  As for Public Health, Healthcare, and Emergency Medical Services, we are struggling in certain areas simply to keep our heads above water.  While we are fairly solid in some areas of public health, one only needs to look at the Ebola incident to see how fragile our state of readiness is.  The findings for Planning and Public Health, to me, are nothing but shameful pandering, and we need to get realistic about where we are and the challenges we face.  Gold stars won’t stand up to the next disaster.  As for Risk and Disaster Resilience Assessment, I admittedly have less personal experience.  I do know that we have some pretty incredible tools available that can help us determine the impacts of various hazards for any given area under a variety of conditions, which is an amazing application of technology.  My concerns here are that there are still many who don’t know about these tools, don’t use them, and/or don’t follow the findings from these tools in their hazard mitigation actions.
  • Cybersecurity, Economic Recovery, Housing, and Infrastructure Systems remain national areas for improvement. Two additional Core Capabilities – Natural and Cultural Resources, and Supply Chain Integrity and Security – emerged as new national areas for improvement.
    • My commentary: NO KIDDING. While we have made a great deal of progress on Cybersecurity, we are still far behind the criminal element in most respects.  It also needs to be fully recognized in the National Preparedness Goal that Cybersecurity is a Core Capability common to all five mission areas.  Economic Recovery will always be a challenge, as every community impacted by an incident has a certain way it heals, essentially along the lines of Maslow’s Hierarchy.  A strong local economy is important to this healing, ensuring that the community has access to the resources it needs to rebuild and a return to normalcy.  While I’m sure studies have been done, we need to examine more closely how the economic recovery process evolves after a disaster to identify how it can be best supported.  Housing is the absolutely most challenging Core Capability in the National Preparedness Goal.  While I don’t have a solution for this, I do know that our current approaches, philosophies, and ways of thinking haven’t moved us an inch toward the finish line on this one.  We need to change our current way of thinking to be successful.  As for Infrastructure Systems, I could go on for days about this.  I’ve written previously, several times, (as have many others) on the critically fragile state of our infrastructure.  It’s no big secret.
  • States and territories continue to be more prepared to achieve their targets for Response Core Capabilities, while they are least prepared to meet their targets in the Recovery Mission Area.
    • This is another NO KIDDING. While we must always have a greater focus on Response, as that’s where lives are saved and the immediate danger is addressed, we can’t lose sight of Recovery.  Some recovery activities are more clear cut than others, and FEMA often muddies the waters more by inadvertently intimidating state and local governments when it comes to disaster recovery, as the focus becomes centered more on reimbursable activities vs doing what needs to be done.  The report included some interesting findings (take a look in the Recovery Mission Area drop down on the web site) on ‘mixed trends in exercising recovery capabilities’.  Again, this is nothing earth shattering, but it’s nice to see the matter addressed.  Yes, we clearly need to exercise Recovery Mission Area Core Capabilities better and more often.

These reports are always worth looking through, even though much of the information is generally known by those of us in the profession.  There are always little nuggets of learning available, and data from the report may be used to support your own endeavors for additional funding or resources for your own program.

As always, I’m interested in your insights and thoughts on this post and the National Preparedness Report.

© 2016 – Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC – Your Partner in Preparedness

What’s the (next) Big Idea in Emergency Management?

Innovation.  It seems to be what everyone clamors for.  In emergency management we see people striving for it across the board: in government and in education we try to build the better emergency management mousetrap.  We establish think tanks to find new solutions, and the private sector looks for better ways to protect their investments.  But what is it that we are looking for?  What systemic problems do we still face in emergency management that require change?

There is plenty out there that needs to be improved upon.  There always will be.  Until we can prepare for, prevent, and mitigate disasters to the point that little to no response is ever needed and no loss of life occurs we will continue to strive for better ways of doing things.  I’m guessing that day is a long way off, so we have plenty of work to do.  Before we can innovate, however, we must find cause.  Necessity, as they say, is the mother of invention.  So what needs exist that must be corrected? 

Certainly our after action reports (AARs) identify areas of needed change.  But those generally only show us gaps in local systems.  The Threat and Hazard Identification and Risk Assessment (THIRA) likewise shows gaps in local systems.  Does this information ever get fed to higher levels?  Of course it does… in some measure, but only some of the time.  States assemble State Preparedness Reports (SPRs) which, in current practice, analyze each core capability through each of the POETE elements (planning, organizing, equipping, training, and exercising).  These in turn inform the National Preparedness Report (NPR).  The 2014 NPR was released by FEMA earlier this month, identifying areas for improvement in several of the core capabilities.  This is certainly a resource to help us identify needs, but none of these resources or mechanisms are perfect.  What is missing?  How do we improve them?

Interestingly enough, some opine that we aren’t examining the right data.  The Congressional Research Service suggests that we might need better measures of preparedness, according to their report and this article from FierceHomelandSecurity.com.  The report gives no answers, but poses several questions.  Overall, what can we do better?

Returning to innovation, where do the gaps truly exist?  How do we validate those gaps?  Can we address those gaps with current systems or do we need to create new systems (innovations)?  If it is with current systems, what are the barriers to getting the gaps addressed in the short term?  If it is not with current systems where does the innovation come from? 

Despite having worked in emergency management for over fifteen years and having seen, felt, and experienced the myriad changes that have occurred – especially since 9/11 and through every administration subsequent to the attacks – I really hadn’t sat and considered those changes until recently.  I’m about halfway through an amazing book by John Fass Morton called Next-Generation Homeland Security: Network Federalism and the Course to National Preparedness.  The first 200 pages or so of the book provide a thorough review of civil defense/emergency management/homeland security through the decades and over a dozen presidential administrations.  The gravity of it all has left my head spinning.  So many changes – and most simply for the sake of politics.  Much of it seems like wasted effort, but Mr. Morton connects the dots brilliantly and identifies that D certainly could not have happened if not for A, B, and C… even though C and A were essentially the same.  It seems that through these years so much has occurred, but so little has actually changed.  I would argue that the practice of emergency management is in a better place now than ever, but what will emergency management look like tomorrow?  Will our continued evolution be through measured change or through innovation?  What makes that determination?

© 2014 – Timothy Riecker