Failures in Preparedness

In May the GAO released a report titled “National Preparedness: Additional Actions Needed to Address Gaps in the Nation’s Emergency Management Capabilities”. I encourage everyone to read the report for themselves and also reflect on my commentary from several years of National Preparedness Reports. I’ll summarize all this though… it doesn’t look good. The National Preparedness Reports really tell us little about the state of preparedness across the nation, and this is reinforced by the GAO report as they state “FEMA is taking steps to strengthen the national preparedness system, but has yet to determine what steps are needed to address the nation’s capability gaps across all levels of government”.

First of all, let me be clear about where the responsibility of preparedness lies – EVERYONE. Whole community preparedness is actually a thing. It’s not FEMA’s job to ensure we are prepared. As also made evident in the GAO report (for those who haven’t worked with federal preparedness grants), most preparedness grants are pretty open, and as such, the federal government can’t force everyone to address the most critical capability gaps. Why wouldn’t jurisdictions want to address the most critical capability gaps, though? Here are some of the big reasons:

  • Most or all funding may be used to sustain the employment of emergency management staff, without whom there would be no EM program in that jurisdiction
  • The jurisdiction has prioritized sustaining other core capabilities which they feel are more important
  • The jurisdiction has decided that certain core capabilities are not for them to address (deferring instead to state or federal governments)
  • Shoring up gaps is hard
  • Response is sexier

The GAO report provided some data to support where priorities lie. First, let’s take a look at spending priorities by grant recipients:

While crosscutting capabilities (Operational Coordination, Planning, and Public Information and Warning) were consistently the largest expenditures, I would surmise that Operational Coordination was the largest of the three, followed by Planning, with Public Information and Warning coming in last. And I’m pretty confident that while these are crosscutting, they mostly lay within the Response Mission Area. Assuming my predictions are correct, there is fundamentally nothing wrong with this. It offers a lot of bang for the buck, and I’ve certainly spoken pretty consistently about how bad we are at things like Operational Coordination and Planning (despite some opinions to the contrary). Jumping to the end of the book, notice that Recovery mission area spending accounts for 1% of the total. This seems like a poor choice considering that three of the five lowest rated capabilities are in the Recovery mission area. Check out this table also provided in the GAO report:

Through at least a few of these years, Cybersecurity has been flagged as a priority by DHS/FEMA, yet clearly, we’ve not made any progress on that front. Our preparedness for Housing recovery has always been abysmal, yet we haven’t made any progress on that either. I suspect that those are two areas, specifically, that many jurisdictions feel are the responsibility of state and federal government.

Back in March of 2011, the GAO recommended that FEMA complete a national preparedness assessment of capability gaps at each level of government based on tiered, capability-specific performance objectives to enable prioritization of grant funding. This recommendation has not yet been implemented. While not entirely the fault of FEMA, we do need to reimagine that national preparedness system. While the current system is sound in concept, implementation falls considerably short.

First, we do need a better means of measuring preparedness. It’s difficult – I fully acknowledge that. And for as objective as we try to make it, there is a vast amount of subjectivity to it. I do know that in the end, I shouldn’t find myself shaking my head or even laughing at the findings identified in the National Preparedness Report, though, knowing that some of the information there can’t possibly be accurate.

I don’t have all the answers on how we should measure preparedness, but I know this… it’s different for different levels of government. A few thoughts:

  • While preparedness is a shared responsibility, I don’t expect a small town to definitively have the answers for disaster housing or cybersecurity. We need to acknowledge that some jurisdictions simply don’t have the resources to make independent progress on certain capabilities. Does this mean they have no responsibility for it? No. Absolutely not. But the current structure of the THIRA, while allowing for some flexibility, doesn’t directly account for a shared responsibility.
  • Further, while every jurisdiction completing a THIRA is identifying their own capability targets, I’d like to see benchmarks established for them to strive for. This provides jurisdictions with both internal and external definitions of success. It also allows them an out, to a certain extent, on certain core capabilities that have a shared responsibility. Even a small town can make some progress on preparedness for disaster housing, such as site selection, estimating needs, and identifying code requirements (pro tip… these are required elements of hazard mitigation plans).
  • Lastly, we need to recognize that it’s difficult to measure things when they aren’t the same or aren’t being measured the same way. Sure, we can provide a defined core capability, but when everyone has a different perspective on and expectation of that core capability and how it should be measured, we aren’t getting answers we can really compare. Everyone knows what a house is, but there is a considerable difference between a double wide and a McMansion. Nothing wrong with either of them, but the differences give us very different baselines to work from. Further, if we need to identify how big a house is and someone measures the length and width of the building, someone else measures the livable square footage of a different building, and a third person measures the number of floors of yet another house, we may all have correct answers, but we can’t really compare any of them. We need to figure out how to allow jurisdictions to contextualize their own needs, but still be playing the same game.

In regard to implementation, funding is obviously a big piece. Thoughts on this:

  • I think states and UASIs need to take a lot of the burden. While I certainly agree that considerable funding needs to be allocated to personnel, this needs to be balanced with sustaining certain higher tier capabilities and closing critical gaps. Easier said than done, but much of this begins with grant language and recognition that one grant may not fit all the needs.
  • FEMA has long been issuing various preparedness grants to support targeted needs and should not only continue to do so, but expand on this program. Targeted grants should be much stricter in establishing expectations for what will be accomplished with the grant funds.
  • Collaboration is also important. Shared responsibility, whole community, etc. Many grants have suggested or recommended collaboration through the years, but rarely has it been actually required. Certain capabilities lend themselves to better development potential when we see the realization of collaboration, to include the private sector, NGOs, and the federal government. Let’s require more of it.
  • Instead of spreading money far and wide, let’s establish specific communities of practice to essentially act as model programs. For a certain priority, allocate funds for a grant opportunity with enough to fund 3-5 initiatives in the nation. Give 2-3 years for these programs to identify and test solutions. These should be rigorously documented so the information can be analyzed and the efforts potentially replicated, so I suggest that academic institutions also be involved as part of the collaborative effort (see the previous bullet). Once each of the grantees has completed their projects, host a symposium to compare and contrast, and identify best practices. Final recommendations can be used to benchmark other programs around the nation. Once we have a model, future funding can be allocated to support implementation of that model in other areas around the nation. Having worked with the National Academies of Sciences, Engineering, and Medicine, I think they may be an ideal organization to spearhead the research component of such programs.
  • Recognize that preparedness isn’t just long term, it’s perpetual. While certain priorities will change, the goals remain fundamentally the same. We are in this for the long haul and we need to engage with that in mind. Strategies such as the one in the previous bullet point lend themselves to long-term identification of issues, exploration of solutions, and implementation of best practices.
  • Perhaps in summary of all of this, while every jurisdiction has unique needs, grant programs can’t be so open as to allow every grantee to have a wholly unique approach to things. It feels like most grant programs now are simply something thrown at a wall – some of it sticks, some of it falls right off, some might not even make it to the wall, some slowly drips off the wall, and some dries on permanently. We need consistency. Not necessarily uniformity, but if standards are established to provide a foundational 75% solution, with the rest open for local customization, that may be a good way to tackle a lot of problems.

In the end, while FEMA is the implementing agency, the emergency management community needs to work with them to identify how best to measure preparedness across all levels and how we can best implement preparedness programs. Over the past few years, FEMA has been very open in developing programs for the emergency management community and I hope this is a problem they realize they can’t tackle on their own. They need representatives from across the practice to help chart a way ahead. This will ensure that considerations and perspectives from all stakeholder groups are addressed. Preparedness isn’t a FEMA problem, it’s an emergency management problem. Let’s help them help us.

What thoughts do you have on preparedness? How should we measure it? What are the strengths and areas for improvement for funding? Do you have an ideal model in mind?

© 2020 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC®

It’s Not Too Late To Prepare

The phrase I’ve been using lately when I speak to people has been “It’s not too late to prepare”.  Many people perceive that in the middle of a disaster we are unable to prepare.  Quite the contrary, we have the potential to integrate all of our preparedness steps into a response.  Because we have problems in front of us that need to be addressed, we have an opportunity to continuously improve, ensuring that organizationally we are offering the very best we can. 

There is a reason why there isn’t a mission area for preparedness in the National Preparedness Goal.  This is because preparedness is ongoing.  It’s not a separate or distinct activity.  Rather, it is composed of activities that support all mission areas, no matter when they are actioned.  Preparedness is continuous.

Assessment

Assessment is a key activity within preparedness.  In fact, assessment is foundational in understanding what’s going on.  During a disaster, good management practices dictate that we should be monitoring our response and adjusting as needed.  What exactly should we be monitoring?  Similar to evaluating an exercise, consider the following:

  • What was the effectiveness of deliberate planning efforts? 
    • Were planning assumptions correct?
    • Was the concept of operations adequate in scope and detail? 
    • What was lacking?
    • What worked well?
  • What was the effectiveness of plan implementation?
    • If aspects of plan implementation need improvement, what was the reason for the shortfall?
      • A poor plan
      • Lack of job aids
      • Lack of/poor/infrequent training
      • Lack of practice
      • Lack of the proper resources or capabilities
      • The plan wasn’t followed
  • Did resources and capabilities meet needs?  If not, why?

Planning

While some planning gaps will require a longer time period to address, I’m aware of many jurisdictions and organizations which have been developing plans in the midst of the pandemic.  They recognized a need to have a plan and convened people to develop those plans.  While some of the planning is incident-specific, many of the plans can be utilized in the future as well, either in the form they were written or adjusted to make them more generally applicable without the specific details of this pandemic.  I’d certainly suggest that any plans developed during the pandemic be reviewed afterward against the same points listed above under ‘Assessment’ before they are included in your organization’s catalogue of plans.  Also consider that we should be planning for contingencies, as other incidents are practically inevitable.

Training

Training is another fairly easy and often essential preparedness activity which can be performed in the midst of a disaster.  Many years ago FEMA embraced the concept of training during disasters.  FEMA Joint Field Offices mobilize with training personnel.  These personnel not only provide just-in-time training for new personnel or to introduce new systems and processes, but also provide continuing training on a variety of topics throughout response and recovery, building a more knowledgeable workforce.  I’ve seen some EOCs around the country do the same.  Recently, my firm was contracted to provide remote training for the senior leadership of a jurisdiction on topics such as continuity of operations and multi-agency coordination, which are timely matters for them as they continue to address needs related to the pandemic.

Exercises

While assessments, planning, and training are certainly activities that may take place during a disaster, exercises are probably less likely, but may, if properly scoped and conducted, still have a place.  Consider that the military constantly conducts battle drills, even in active theaters of war, to ensure that everyone is familiar with plans and protocols and practiced in their implementation.  Thinking back on new plans that are being written in the midst of the pandemic, it’s a good idea to validate each plan with a tabletop exercise.  We know that even the best written plans will still have gaps that, on a blue-sky day, we would often identify through an exercise.  Plans written in haste during a crisis are even more prone to gaps, simply because we probably don’t have the opportunity to think everything through and be as methodical and meticulous as we would like.

A tabletop exercise doesn’t have to be complex or long, but it’s good to do a talk-through of the plan.  Depending on the scope of the plan and the depth of detail (such as a new procedure), conducting a walk-through of the major movements of that plan (that’s a drill) can help ensure the validity of the plan and identify any issues in implementation.  While you aren’t likely to go to the extent of developing an ExPlan, an evaluator handbook, or exercise evaluation guides (yes, that’s totally OK), it’s still good to lay out a page of essential information, including objectives and methodology, since taking the time to write these things down is one more step toward ensuring that the validation is effective.  Documentation is still important, and while it can be abbreviated, it shouldn’t be cut out entirely.  It’s also extremely important to isolate the exercise, ensuring that everyone is aware that what is being performed or discussed is not yet part of the response activity.
Evaluators should still give you written observations and documented feedback from participants.  You probably don’t need a full AAR, especially since the observations are going to be put into an immediate modification of the plan in question, but the documentation should still be kept together as there may still be some observations to record for further consideration. 

Evaluation and After Action

Lastly, incident evaluation is something we shouldn’t be missing.  We learn a lot about incident evaluation from exercise evaluation.   I’ve written on it before, which I encourage you to look at, but the fundamentals are ensuring that all actions and decisions are documented, that a hotwash is conducted (or multiple hotwashes to capture larger numbers of people or people who were engaged in very different functions), and that an after action report is developed.   Any incident should provide a lot of lessons learned for your organization, but the circumstances of a pandemic amplify that considerably.  Ensure that everyone in your organization, at all levels, is capturing observations and lessons learned daily.  Ensure that they are providing context to their observations as well, since once this is over, they may not recall the details needed for a recommendation. You may want to consider putting together a short form for people to capture and organize these observations – essentially identifying the issue, providing context, and putting forth a recommendation to address the issue. Don’t forget to encourage people to also identify best practices.  In the end, remember that if lessons learned aren’t actually applied, nothing will change. 

I welcome any insight on how we can continue to apply preparedness in the midst of a disaster. 

Be smart, stay safe, stay healthy, and be good to each other. 

©2020 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC

Operational Readiness in Emergency Management

Back in 2017 I wrote a piece on defining operational readiness.  It’s a topic that, after some recent discussion with a colleague, I think bears revisiting and expanding upon.  Specifically, how we apply it in emergency management, or not.  Readiness is really a final, comprehensive perspective of preparedness.  That is, once you have reached a certain level of preparedness, you can be ready, but being prepared doesn’t necessarily make you ready.  Preparedness is generally perceived as an ongoing process, whereas a state of readiness is typically a snapshot in time.

It struck me that the military tends to have more of a focus on readiness, while emergency management has a focus on preparedness.  While you will find both concepts within the doctrine of both emergency management and the military, the actual applications are considerably skewed.  After my discussion, I began to wonder why this difference exists and what we can learn from it.

Having worked a considerable amount with various National Guard elements, I’ve come to highly respect their processes and their endeavor for readiness.  Not that we don’t have similar rigor in emergency management, but the focus seems to be more on the processes of preparedness rather than a state of operational readiness.  Sometimes the differences are so subtle that I have to sit back and think them through, but they are certainly there, and they are meaningful.  Given the military’s focus on operational readiness, they serve as a good source of information, though it needs to be properly filtered for application to emergency management.

As I’ve applied more thought to this, I’ve assembled a refined definition of readiness as it applies to emergency management, that being:

[Readiness is the nexus of benchmark outcomes of preparedness matched with the needs of a specific kind and type of response. A state of operational readiness is achieved when all applicable preparedness benchmarks are met and the organization is willing and able to adequately leverage the resulting capabilities against a corresponding threat or hazard.]

I’ve put together a graphic I think reasonably represents this relationship below.  Readiness is represented by a cloud because, as I explore further in this writing, it is itself rather amorphous and complex.

Readiness

To explain the components of my definition…  Readiness comes from a culmination of outcomes from preparedness activities, but only when each of these outcomes achieves a specific benchmark state.  The achievement of benchmarked preparedness activities defines a measure of capability.  These capabilities are associated with a specific threat(s) or hazard(s).  As such, that state of readiness is only applicable to a specific kind (threat or hazard) and type (size and complexity) of incident.  To help illustrate my points, here are a couple of examples using field response scenarios:

We can assume that a volunteer fire department is prepared to handle a room and contents fire.  They should have all the elements needed to do so, and in fact, these elements have standards (benchmarks) defined by the NFPA and state fire marshals.  Does this mean they have achieved readiness?  Hopefully yes, but perhaps not.  Given the rather extensive crisis of low membership in volunteer fire departments, the department in question may not have adequate staff to respond to this fire if it occurs, for example, in the middle of a weekday.  This gives them a measure of degraded, or even negligible, readiness.

Similarly, if we take the same fire department, having accomplished the benchmarks of preparedness for response to a room and contents fire, and even given adequate staff to do so, they may not have a state of readiness to fully address a hazardous materials incident.  While many of the elements of preparedness apply to both types of incidents, there are some critical differences which they would have to overcome to establish a state of readiness for a different type of incident.  Likewise, we could revert back to the room and contents fire and make it bigger – say a fully involved structure fire. While the department might have operational readiness to address the room and contents fire, they may not have the operational readiness to address a structure fire.

I think it’s fair to say that we can be prepared for something without having operational readiness for it.  Years ago, when there was a planetary ‘near miss’ by a meteor, a news outlet contacted our state OEM PIO.  They asked if we had a plan for a meteor strike.  The PIO acknowledged that we didn’t have a plan specific to that, but we did have a comprehensive emergency management plan, through which, and supported by various functional annexes, we were prepared to respond to such an incident and its effects should it occur.  Was the PIO wrong?  Not at all.  Assuming the other elements of preparedness were reasonably in place (and they were), it would be fair to say we were generally ‘prepared for anything’.  Were we ready, however?  Absolutely not.  The operational readiness needs for such an extraordinary, high impact incident are near-impossible to achieve.

When we examine this, it’s important to identify that a state of readiness can wax and wane, based on our ability to apply the identified preparedness measures to the incident in question. Considering the first example of the fire department and the room and contents fire, the department has a state of operational readiness when, as included in the definition I gave, all the preparedness benchmarks are met and they are willing and able to adequately leverage the resulting capabilities against a corresponding threat or hazard.  Changes in capability and/or the willingness or ability to apply those capabilities will result in degradation of readiness.  Depending on the factor in question, it may fully disqualify their readiness, or it may decrease their readiness by some measure.

So why is readiness important?  Readiness is the green light.  If we accomplish a state of operational readiness, we increase our chances of success in addressing the threat or hazard in question.  If we haven’t achieved readiness, we still can obviously be successful, but that success may come at a greater cost, longer period of time, and/or increased error.

How do we achieve readiness?  The current approach we have in emergency management certainly isn’t enough.  While some efforts may culminate in operational readiness, there is, as a whole, a significant lack of focus on operational readiness.  This seems to largely be a cultural issue to overcome.  In general, we seem to have the attitude that preparedness equates to readiness, and that preparedness itself is an end state. Even though we intuitively, and doctrinally, know that preparedness is a cycle, we seem to take comfort in ‘completing’ certain tasks among the preparedness elements – planning, organizing, equipping, training, exercises, and improvement – and then assuming readiness.  Readiness itself is actually the end state, though it is a dynamic end state; one that we can easily lose and must constantly strive to maintain.  To accomplish and maintain operational readiness, it is imperative that we aggressively and rigorously pursue activity in each of the elements of preparedness.  We must also continually monitor our ability to execute the capabilities we are preparing.  That ability, ultimately, is our measure of readiness.

The scale and unit of measuring readiness is something I’m not exploring in depth here (it really warrants its own deliberate effort), but expect to revisit in the future.  I surmise that the factors may be different based upon the various capabilities, and types and kinds of threats/hazards we are trying to address.  We need to examine capability requirements at a granular (task) level to truly assess our current state of readiness and identify what we need to address to increase our readiness.  I also assume that there is a somewhat intangible factor to readiness, one that likely revolves around the human factor. Things like leadership, decision-making, confidence, and ability to improvise. The measure of readiness may also involve certain external factors, such as weather.  The measurement of readiness certainly is complex and involves numerous factors.

I do know that practice is a significant factor in operational readiness.  Earlier I mentioned my experience with the National Guard.  Much of that revolves around exercises, which is one of the best (though not the only) measures of readiness.  Operational military units seem to constantly exercise.  Sometimes small scale, sometimes large.  They exercise different aspects, different scenarios, and different approaches.  It’s the regular repetition that builds competence and confidence, along with identifying shortfalls within the capability, such as planning gaps, equipment failures, and the need to anticipate and prepare for certain contingencies.  While we exercise a fair amount in emergency management, we still don’t exercise enough.  I see a lot of people in emergency management leadership develop a complacency and virtually declare that ‘close enough is enough’.  It’s absolutely not enough to exercise a plan or capability once a year, which is often the most we see.

Preparedness is not something we achieve, it’s something we do; but through it we strive to achieve and maintain readiness.

It’s interesting to note that at the level of federal doctrine, we have a National Preparedness Goal.  We need to recognize that preparedness isn’t the goal – Readiness is.  A possible starting point for change would be the assembly of a blue-ribbon panel, likely by FEMA, to explore this topic and provide recommendations on a unified way ahead for emergency management to recognize the need for operational readiness, including purposeful changes in doctrine and culture to emphasize this desired end state.  We need a solid definition, means of measurement, guidelines for implementation, and an identification of the barriers to success with recommendations on how to overcome them (yep, I already know money and staff are the big ones).

I hope I’ve given some food for thought in regard to readiness.  The simple act of writing this and the bit of associated reading and thinking I’ve done on the topic certainly has me thinking about things differently.  As always, I’m curious to hear your thoughts on operational readiness, what it means to you, and what we can do to achieve it.

© 2020 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC®


The 2019 National Preparedness Report, or ‘How Are We Measuring Preparedness?’

FEMA recently released the 2019 National Preparedness Report.  Simply put, I’m confused.  Nothing in the report actually lines up with doctrine.  It leaves me wondering how we are actually measuring preparedness.  So what’s the issue?

While the National Preparedness Report is initially structured around the five mission areas (Prevention, Protection, Mitigation, Response, and Recovery), the only full inclusion of the Core Capabilities in the report is a table on page 9 outlining usage of grant funds per Core Capability.  After this, the Core Capabilities are simply listed on the title page of each mission area within the detailed findings.  No detail of progress within these Core Capabilities is provided, however.  In the absence of this analysis, we are not seeing data on the progression of preparedness, which, per the National Preparedness Report, is measured through the lens of each of the Core Capabilities.

This is further confused on pages 45 and 48, in particular, where tables list the Community Lifelines with some sort of correlated ‘capabilities’ (noted with a lowercase ‘c’… thus not the Core Capabilities).  These capabilities are not from any doctrine that I can find or recall, including the components and subcomponents for each Community Lifeline provided in the Community Lifelines Toolkit.  For each of these they provide some analytical data, but it’s unclear what this is based upon.  The methodology provided early in the document does nothing to identify why this change in format has occurred or where these specific data sets come from, much less why they are deviating from the previous format and the standards provided through the National Preparedness Goal.

Some perspective… It would seem logical that the National Preparedness Report would be assessing our national state of preparedness relative to the National Preparedness Goal, as it has since its inception.  The National Preparedness Goal is structured around the five mission areas and the 32 Core Capabilities.  With the emergence of the Community Lifelines and their inclusion in the recent update of the National Response Framework, it makes sense that we will see Community Lifelines further integrated into standards, doctrine, and reports, but they have yet to be integrated into the National Preparedness Goal (the current version is dated 2015).  We have not yet seen a comprehensive crosswalk between the Community Lifelines and the Core Capabilities, but it should be recognized that there are certain aspects, even if you just examine the Response Mission Area, that don’t match up.

In an unrelated observation on the National Preparedness Report, the trend continues of citing after action reports from the year without actually providing any analysis of lessons learned and how those are being applied across the nation.

Bottom line… while there are some valuable nuggets of information included in this report, I find most of it to be confusing, as it both lacks a consistent internal format and is inconsistent with the existing standard of measurement as defined by the National Preparedness Goal.  Why is this a big deal?  First, it’s a deviation from the established standard.  While the standard may certainly have room for improvement, the standard must first be changed before the metrics in the reporting can be changed.  Second, with the deviation from the standard, we aren’t able to measure progress over time.  All previous National Preparedness Reports have provided data within the scope of Core Capabilities, while this one largely does not.  This breaks the possibility of any trend analysis.  Third, there is no reasoning provided behind the capabilities (lowercase ‘c’) associated with each of the Community Lifelines in the report.  It’s simply confusing to the extent that it becomes irrelevant, because the information provided is not within the existing lexicon used for measurement of practically everything in preparedness.

Simply put, this year’s report is even more disappointing than those provided in previous years.  In fact, since it doesn’t conform with the current standard, I’d suggest it’s not even valid.  This should be far better.

Thoughts?

© 2019 – Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC

CDC Releases New Public Health Emergency Preparedness and Response Capabilities

The CDC recently released its updated Public Health Emergency Preparedness and Response Capabilities.  While this is certainly important for public health preparedness personnel, these are something that most emergency management professionals should also be aware of.  Public Health is an incredibly integral partner in emergency management and homeland security.  Last year I did a review of the new HHS ASPR Health Care Preparedness and Response Capabilities and also included the previous version of the CDC Public Health capabilities in my discussion.

The new CDC standards, at a glance, are the same as the previous version.  All 15 capabilities have been continued.  Upon closer examination, there has certainly been some refinement across these capabilities, including some adjustments in the functions, or primary activities, associated with each capability; as well as a better look at preparedness measures for each.  As with the previous version, they front-load some guidance on integrating the capabilities into preparedness and response activities.

For those keeping track from the previous version, each capability narrative includes a summary of changes which were adopted from lessons learned over the past several years.  Similar to the previous version, each capability is broken into functions and tasks, with suggested performance measures.  For those of you who remember the old Target Capabilities List and Universal Task List, it’s a similar, although more utilitarian, concept.

So what do emergency managers need to know?  Fundamentally, be aware that these capabilities are what public health will be primarily focused on rather than the National Preparedness Goal’s 32 Core Capabilities.  These aren’t mutually exclusive, though.  In fact, the new CDC document references the National Preparedness Goal.  There are some public health capabilities that crosswalk pretty easily, such as Fatality Management.  The public health capability, however, has a strong focus on the public health aspects of this activity.  Some public health capabilities don’t necessarily have a direct analog, as many of them would be considered to be part of the Public Health, Healthcare, and Emergency Medical Services Core Capability.

My recommendation is to have a copy of this document handy.  Review it to become familiar with it, and, depending on how heavy your involvement is with public health, you may be making some notes on how these capabilities compare with and interact with the 32 Core Capabilities.

© 2018 – Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC℠

Reviewing The 2018 National Preparedness Report

The 2018 National Preparedness Report was released last week.  For the past few years, I’ve provided my own critical review of these annual reports (see 2017’s report here).  For those not familiar with the National Preparedness Report (NPR), it is mandated by the Post-Katrina Emergency Management Reform Act (PKEMRA).  The information is compiled by FEMA from the State Preparedness Reports (SPR), including the Threat and Hazard Identification and Risk Assessment (THIRA) data submitted by states, territories, and Urban Area Security Initiative (UASI)-funded regions.  The data presented is for the year prior.  The SPRs and NPR examine the condition of our preparedness relative to the 32 Core Capabilities identified in the National Preparedness Goal.

Overall, the NPR provides little information, certainly nothing that is really shocking if you pay attention to the top issues in emergency management.  Disappointingly, the report only covers those Core Capabilities identified for sustainment or improvement, with no more than a graphic summary of the other Core Capabilities.

Core Capabilities to Sustain

Operational Coordination was identified as the sole Core Capability to sustain in this year’s report.  I’ve got some issues with this right off.  First of all, they summarize their methodology for selecting Core Capabilities to sustain: ‘To be a capability to sustain, the Nation must show proficiency in executing that core capability, but there must also be indications of a potentially growing gap between the future demand for, and the performance of, that capability.’  To me, what this boils down to is ‘you do it well, but you are going to have to do it better’.  I think most EM professionals could add to this list significantly, with Core Capabilities such as Planning; Public Information and Warning; Public Health, Healthcare, and EMS; Situational Assessment; and others.  Distilling it down to only Operational Coordination shows, to me, a severe lack of understanding in where we presently are and the demands that will be put on our systems in the future.

Further, the review provided in the report relative to Operational Coordination is pretty soft.  Part of it is self-congratulatory, highlighting advances in the Core Capability made last year, with the rest of the section identifying challenges but providing little analysis.  Statements such as ‘Local governments reported challenges with incident command and coordination during the 2017 hurricane season’ are put out there, yet their single paragraph on corrective actions for the section boils down to the statement of ‘we’re looking at it’.  Not acceptable.

Core Capabilities to Improve

The 2018 report identifies four Core Capabilities to improve:

  • Infrastructure Systems
  • Housing
  • Economic Recovery
  • Cybersecurity

These fall under the category of NO KIDDING.  The writeups within the NPR for each of these superficially identify the need, but don’t have much depth of analysis.  I find it interesting that the Core Capability to sustain has a paragraph on corrective actions, yet the Core Capabilities to improve don’t.  They do, instead, identify key findings, which outline some efforts to address the problems, but are very soft and offer little detail.  Some of these include programs which have been in place for quite some time and which are clearly having limited impact on addressing the issues.

What really jumped out at me is the data provided on page 9, which charts the distribution of FEMA Preparedness grants by Core Capability for the past year.  The scale of their chart doesn’t allow for any exact amounts, but we can make some estimates.  Let’s look at four of these in particular:

  • Infrastructure Systems – scarcely a few million dollars
  • Housing – None
  • Economic Recovery – Less than Infrastructure Systems
  • Cybersecurity – ~$25 million

With over $2.3 billion in preparedness funding provided in 2017 by FEMA, it’s no wonder these are Core Capabilities that need to be improved when so few funds were invested at the state/territory/UASI level.  The sad thing is that this isn’t news.  These Core Capabilities have been identified as needing improvement for years, and I’ll concede they are all challenging, but the lack of substantial movement should anger all emergency managers.

I will agree that Housing and Cybersecurity require a significant and consolidated national effort to address.  That doesn’t mean they are solely a federal responsibility, but there is clear need for significant assistance at the federal level to implement improvements, provide guidance to states and locals, and support local implementations.  That said, we can’t continue to say that these areas are priorities when little funding or activity is demonstrated to support improvement efforts.  While certain areas may certainly take years to make acceptable improvements, we are seeing a dangerous pattern relative to these four Core Capabilities, which continue to wallow at the bottom of the list for so many years.

The Path Forward

The report concludes with a two-paragraph section titled ‘The Path Forward’, which simply speaks to refining the THIRA and SPR methodology, while saying nothing of how the nation needs to address the identified shortcomings.  Clearly this is not acceptable.

~~

As for my own conclusion, while I saw last year’s NPR as an improvement from years previous, I see this one as a severe backslide.  It provides little useful information and shows negligible change in the state of our preparedness over the past year.  The recommendations provided, at least those that do exist, are translucent at best, and this report leaves the reader with more questions and frustration.  We need more substance beginning with root cause analysis and including substantial, tangible, actionable recommendations.  While I suppose it’s not the fault of the report itself that little improvement is being made in these Core Capabilities, the content of the report shows a lack of priority to address these needs.

I’m actually surprised that a separate executive summary of this report was published, as the report itself holds so little substance, that it could serve as the executive summary.  Having been involved in the completion of THIRAs and SPRs, I know there is information generated that is simply not being analyzed for the NPR.  Particularly with each participating jurisdiction completing a POETE analysis of each Core Capability, I would like to see a more substantial NPR which does some examination of the capability elements in aggregate for each Core Capability, perhaps identifying trends and areas of focus to better support preparedness.

As always, I’m interested in your thoughts.  Was there anything you thought to be useful in the National Preparedness Report?

© 2018 – Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC

Project Responder and DHS’ Inability to Follow Standards

I was recently made aware of Project Responder, a publication sponsored by the DHS Science and Technology Directorate, which examines emergency response capability needs within the scope of current operational requirements, threats, and hazards; with an ultimate focus on the identification of needs and correlating these with technological fixes.  The project description states that ‘the findings from the project can inform the US Department of Homeland Security’s decisions about investments in projects and programs to promote capability enhancement…’.  Project Responder 5 was published in August of this year.  Prior to this edition, I was not familiar with the project, which started in early 2001.

The executive summary of the document states that ‘the document describes 37 capability needs identified by emergency responders…’ <record scratch>.  Hold on a moment… I thought DHS defined 32 Core Capabilities.  Yep, they still do.  The first page of Project Responder 5 includes a footnote that states ‘For purposes of this document, a capability is defined as “the means to accomplish one or more tasks under specific conditions”’.  So in other words, DHS can’t follow its own standards.  In many of my articles I’ve regularly remarked about the continual need to streamline our emergency management processes so we can make easier comparisons between these processes, efforts, and activities without having to establish cross walks or translations.  By working from the same standards, we can easily move between mission areas, which don’t always have boldly marked lines between them, and have an ability to define target results and measure progress.  The Core Capabilities established by the National Preparedness Goal go a long way toward accomplishing this standardization.  It seems the folks in the Science and Technology Directorate don’t think they are that important, and this infuriates me.

The document outlines the 37 capability needs within nine capability domains.  These are:

  • Risk Assessment and Planning
  • Communication and Information Sharing
  • Command, Control, and Coordination
  • Training and Exercise
  • Responder Health and Safety
  • Intelligence and Investigation
  • Logistics and Resource Management
  • Casualty Management
  • Situational Awareness

Some of these appear to have direct correlation to some of what we know as the 32 Core Capabilities, while others seem to combine, redefine, or create new ones.  As the gaps within each domain are discussed, they reference applicable standards.  Interestingly enough, certain standards you would expect to see aren’t present, such as NIMS in the Command, Control, and Coordination capability, and HSEEP in the Training and Exercise capability.  Regardless of what technology applications are used to support these areas, these standards are fundamental.

It’s not that the data and analysis that comes out of Project Responder is entirely bad.  It isn’t.  But it’s not great either.  It seems to fall short consistently throughout the document.  The information also needs to be organized within the current lexicon, allowing the reader to make direct correlations to what we are familiar with.  I’m guessing that the project team who did the research and pulled the document together actually knows very little about emergency management or homeland security.  Their inability to communicate context and work within established standards seems to demonstrate this.  It’s fine that the document has a focus on technology implementations that can address gaps, but the fundamentals within the field of practice can’t be ignored.  I don’t see why this project could not have been conducted within the established industry standards.

Perhaps I’ve given a more soap-boxish post than I usually do.  I’m frustrated to see so much wasted time, effort, and dollars in something that could have been more impactful.  Please take a look through the document and let me know what your impressions are.  Also, if you happen to have any insight on this publication which I have missed or am not aware of, I’d love to hear it.

Thanks for reading and be safe this holiday season.

© 2017 – Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC

Reviewing Health Care and Public Health Capabilities

Most in emergency management and homeland security are aware of the National Preparedness Goal’s 32 Core Capabilities, but are you aware of the Health Care and Public Health capabilities promulgated and published by the HHS/ASPR and the CDC?

Recently updated, the 2017-2022 Health Care Preparedness and Response Capabilities are assembled by the US Department of Health and Human Services (HHS) Assistant Secretary for Preparedness and Response (ASPR).  According to ASPR, these capabilities are intended to ‘describe what the health care delivery system must do to effectively prepare for and respond to emergencies that impact the public’s health’.  The health care delivery system includes health care coalitions (HCCs), hospitals, and EMS.  These consist of four capabilities:

  1. Foundation for Health Care and Medical Readiness
  2. Health Care and Medical Response Coordination
  3. Continuity of Health Care Service Delivery
  4. Medical Surge

The Centers for Disease Control and Prevention (CDC) (also part of HHS) publishes the Public Health Preparedness Capabilities.  The current version of the Public Health capabilities is dated 2011, with the CDC anticipated to begin updating the document in late summer of 2017.  The CDC’s Public Health Preparedness Capabilities help to establish standards for state and local public health preparedness through 15 capabilities, which are:

  1. Community Preparedness
  2. Community Recovery
  3. Emergency Operations Coordination
  4. Emergency Public Information and Warning
  5. Fatality Management
  6. Information Sharing
  7. Mass Care
  8. Medical Countermeasure Dispensing
  9. Medical Material Management and Distribution
  10. Medical Surge
  11. Non-Pharmaceutical Interventions
  12. Public Health Laboratory Testing
  13. Public Health Surveillance and Epidemiological Investigation
  14. Responder Safety and Health
  15. Volunteer Management

Similar to the use of the Core Capabilities in emergency management and homeland security broadly, I see the ASPR and CDC sets of capabilities as providing an opportunity to identify capabilities which are functionally focused.  Aside from the three common Core Capabilities (Planning, Public Information and Warning, and Operational Coordination), there is only one public health/health care-specific Core Capability: Public Health, Health Care, and Emergency Medical Services.  It makes sense for these areas to need to further identify and refine their own capabilities.  It might be interesting to see other subsets of public safety, such as fire and law enforcement, do the same relative to the Core Capabilities they each heavily participate in.  Or it might send us down a rabbit hole we don’t need to jump down…

That said, I always champion opportunities for synergy and streamlining of existing systems and doctrine, and I’m rather disappointed that has not been done.  There is clearly overlap between the ASPR and CDC capabilities as compared to the Core Capabilities; that being apparent in even the titles of some of these capabilities addressing topics such as operational coordination, mass care, and public information and warning.

Corresponding to the recent release of ASPR’s updated Health Care Preparedness and Response Capabilities, I sat through a webinar that reviewed the update.  The webinar gave an opportunity for me to ask if there was any consideration given to structuring these more similarly to the National Preparedness Goal’s Core Capabilities.  In response, ASPR representatives stated they are working with the Emergency Preparedness Grant Coordination Working Group, which consists of ASPR, CDC, Health Resources and Services Administration, DHS/FEMA, US DOT, and the National Highway Traffic Safety Administration.  This working group has developed an interim crosswalk, applicable to the current documents, and expected to be updated with the CDC’s update to the Public Health Preparedness Capabilities.  While a crosswalk helps, it still acknowledges that each are operating within their own silos instead of fully coordinating and aligning with the National Preparedness Goal.  The world of preparedness is dynamic and made even more complex when efforts aren’t aligned.

Regardless of the lack of alignment, these are great tools.  Even if you aren’t in public health and health care, you should become familiar with these documents, as they represent important standards in these fields.  Similar to the Core Capabilities, grants and preparedness activities are structured around them.  If you interface with public health and health care, you have even more reason to become familiar with these – as they are likely referenced in multi-agency discussions and you should be aware of the similarities and differences between these and the Core Capabilities.

© 2017 – Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC

New and Timely Cyber Security Information

October is National Cyber Security Awareness Month.  With it, the DHS Private Sector Office has provided a number of resources to help organizations get involved in cyber security awareness.  These include weekly themes, such as Stop. Think. Connect., information on a weekly Twitter Chat series, and other information.

Perhaps released intentionally during National Cyber Security Awareness Month is the call for public comment on the National Cyber Incident Response Plan.  From their website, DHS’ National Protection and Programs Directorate and FEMA’s National Integration Center are leading the development of this document in coordination with the US Department of Justice, the Secretary of Defense, and other partners.  This plan is intended to provide a nation-wide approach to cyber incidents, incorporating roles for the private sector and all levels of government (TR – similar to the National Planning Frameworks, which this document rather heavily references).  The National Engagement Period ends on October 31, so be sure to review the document and provide feedback.  There are also a series of webinars referenced on the website.

In my initial and very cursory review of the plan, I was pleased to see the references to the National Preparedness Goal and National Planning Frameworks.  I’ve mentioned before that we need to strive to align and integrate all preparedness efforts along these lines and I’m thrilled to see it happening.  It’s even more encouraging to see this occurring with something that could be considered a bit fringe to traditional emergency management.  The plan directly references a number of Core Capabilities.  They take an interesting approach with this.  Instead of identifying which Core Capabilities the plan organizes under, they instead align certain Core Capabilities within what they call Lines of Effort.  These Lines of Effort include Threat Response, Asset Response, and Intelligence Support.  For each Core Capability they define the Core Capability, a la the National Preparedness Goal, and describe how that Core Capability applies to the Line of Effort, along with listing associated critical tasks. (inserted is Table 2 from the plan which shows this alignment)

What I find even more interesting is the array of Core Capabilities they identified for their Lines of Effort.  While this plan is oriented toward response, the Core Capabilities they identify come from the Mission Areas of Prevention, Protection, Response, and Mitigation, along with including the three common Core Capabilities.  This further reinforces the thought that the Cyber Security Core Capability should also be included as a common Core Capability.  This is an interesting document which I look forward to reviewing in more detail.

© 2016 – Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC – Your Partner in Preparedness

2016 National Preparedness Report Released

The fifth National Preparedness Report has been released by FEMA.  The National Preparedness Report is based upon, as the report states, input of more than 450 data sources and 190 stakeholders, including 66 non-federal organizations (which would account for state preparedness report submissions and information from Urban Area Security Initiative regions).  The report is intended as a summary of where the nation stands in regard to each of the 32 Core Capabilities outlined in the National Preparedness Goal.

As mentioned, this is the fifth National Preparedness Report to hit the streets.  While they have some value and demonstrate that the data collection that is done is actually collated, I feel that through the years they are offering less meat and more potatoes.  I appreciate the highlighting of best practices for each mission area, but, to me, there is a missed opportunity if a report is simply providing data and not recommendations.  While it’s understood that the goal of the National Preparedness Report is not to provide recommendations (it would also take longer to publish the report, and the people pulling the data together do not likely have the expertise to create recommendations), I’d like to see FEMA (and stakeholders) have follow up efforts to provide recommendations in each mission area and not miss this valuable opportunity to then apply the findings and look forward.

Below, I’ve included their overall findings with a bit of my own commentary.  Overall, I will say that there is nothing eye opening in this report for anyone who pays attention.  It’s pretty easy to guess those Core Capabilities which are at the top and those which are at the bottom.

  • Planning; Public Health, Healthcare, and Emergency Medical Services; and Risk and Disaster Resilience Assessment are the three Core Capabilities in which the Nation has developed acceptable levels of performance for critical tasks, but that face performance declines if not maintained and updated to address emerging challenges.
    • My commentary: BULLSHIT.  If these Core Capabilities are at ‘acceptable levels’, then our standards must be pretty low.  Planning is the one that disturbs me most.  We continue to see plenty of poor plans that are not realistic, can’t be operationalized, and are created to meet requirements (which are typically met by formatting and buzzwords).  Have we improved?  Sure.  But I wouldn’t say we are at ‘acceptable levels’.  As for Public Health, Healthcare, and Emergency Medical Services, we are struggling in certain areas to simply keep our heads above water.  While we are fairly solid in some areas of public health, one only needs to look at the Ebola incident to see how fragile our state of readiness is.  The findings for Planning and Public Health, to me, are nothing but shameful pandering and we need to get realistic about where we are and the challenges we face.  Gold stars won’t stand up to the next disaster.  As for Risk and Disaster Resilience Assessment I have admittedly less personal experience.  I do know that we have some pretty incredible tools available that can help us determine impacts of various hazards for any given area under a variety of conditions, which is an amazing application of technology.  My concerns here are that there are still many who don’t know about these tools, don’t use them, and/or don’t follow the findings from these tools in their hazard mitigation actions.
  • Cybersecurity, Economic Recovery, Housing, and Infrastructure Systems remain national areas for improvement. Two additional Core Capabilities – Natural and Cultural Resources, and Supply Chain Integrity and Security – emerged as new national areas for improvement.
    • My commentary: NO KIDDING. While we have made a great deal of progress on Cybersecurity, we are still far behind the criminal element in most respects.  It also needs to be fully recognized in the National Preparedness Goal that Cybersecurity is a Core Capability common to all five mission areas.  Economic Recovery will always be a challenge, as every community impacted by an incident has a certain way it heals, essentially along the lines of Maslow’s Hierarchy.  A strong local economy is important to this healing, ensuring that the community has access to the resources it needs to rebuild and a return to normalcy.  While I’m sure studies have been done, we need to examine more closely how the economic recovery process evolves after a disaster to identify how it can be best supported.  Housing is absolutely the most challenging Core Capability in the National Preparedness Goal.  While I don’t have a solution for this, I do know that our current approaches, philosophies, and ways of thinking haven’t moved us an inch toward the finish line on this one.  We need to change our current way of thinking to be successful.  As for Infrastructure Systems, I could go on for days about this.  I’ve written previously, several times, (as have many others) on the critically fragile state of our infrastructure.  It’s no big secret.
  • States and territories continue to be more prepared to achieve their targets for Response Core Capabilities, while they are least prepared to meet their targets in the Recovery Mission Area.
    • This is another NO KIDDING. While we must always have a greater focus on Response, as that’s where lives are saved and the immediate danger is addressed, we can’t lose sight of Recovery.  Some recovery activities are more clear cut than others, and FEMA often muddies the waters more by inadvertently intimidating state and local governments when it comes to disaster recovery, as the focus becomes centered more on reimbursable activities vs doing what needs to be done.  The report included some interesting findings (take a look in the Recovery Mission Area drop down on the web site) on ‘mixed trends in exercising recovery capabilities’.  Again, this is nothing earth shattering, but it’s nice to see the matter addressed.  Yes, we clearly need to exercise Recovery Mission Area Core Capabilities better and more often.

These reports are always worth looking through, even though much of the information is generally known by those of us in the profession.  There are always little nuggets of learning available, and data from the report may be used to support your own endeavors for additional funding or resources for your own program.

As always, I’m interested in your insights and thoughts on this post and the National Preparedness Report.

© 2016 – Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC – Your Partner in Preparedness