CDC Forgot About Planning

In late February, CDC released the highly anticipated notice of funding opportunity (NOFO) for the 2024-2028 Public Health Emergency Preparedness (PHEP) grant. The general concept of the grant wasn’t a big surprise, as they had been promoting a move to their Response Readiness Framework (RRF). The timing of the new five-year grant cycle seems ideal to implement lessons learned from COVID-19, yet they are falling short.

I’ve reflected in the past on the preparedness capability elements of Planning, Organizing, Equipment/Systems, Training, and Exercises (POETE). I also often add Assessing to the front of that (APOETE). These preparedness elements are essentially the buckets of activity through which we categorize our preparedness activities.

In reviewing the ten program priorities of the RRF, I’m initially encouraged by the first priority: Prioritize a risk-based approach to all-hazards planning. Activity-wise, what this translates to in the NOFO is conducting a risk assessment. Solid start. Yet nowhere else is planning overtly mentioned. Within the NOFO some of the other priorities reflect on ensuring certain things are addressed in plans, such as health equity, but there is otherwise no direct push for planning. Buried within the NOFO (page 62) is a list of plans that must be shared with project officers upon request (under the larger heading of Administrative and Federal Requirements) but the development of any of these plans does not correlate to any priorities, strategies, or activities within the document.

As for the rest of APOETE, there is good direction on Organizing, Equipment and Systems, Training, and Exercises. While that’s all great, planning is the true foundation of preparedness, and it is so obviously left out of this NOFO. Along with my general opinion that most emergency plans (across all sectors) are garbage, the vast majority of findings from the numerous COVID-19 after-action reports I’ve written (which included two states and several county and local governments) noted a significant need for improved emergency plans. Further, the other preparedness elements (OETE) should all relate back to our plans. If we aren’t developing, improving, and updating plans, then the other activities will generally lack focus, direction, and relevance.

I understand that this is the first year of a five-year grant cycle and that some changes and clarification will occur as the cycle progresses, but because planning is a foundational activity, it should be immediately and directly tied to the results of the assessment this year’s grant calls for. The findings of those assessments are largely meaningless if we aren’t taking action and developing plans to address them. This leaves us with a significant gap in preparedness. Someone at CDC didn’t think this through, and that gives me a great deal of concern, especially in the aftermath of the COVID-19 response.

What are your thoughts on this?

© 2024 Tim Riecker, CEDP

Emergency Preparedness Solutions, LLC®

Properly Applying ICS in Function-Specific Plans

As with many of my posts, I begin with an observation of something that frustrates me. Through much of my career, as I’ve reviewed function-specific plans (e.g., shelter plans, point of distribution plans, debris management plans, mass fatality incident management plans), I’ve seen a lot of organization charts inserted into those plans. Almost always, the org chart is an application of a ‘full’ incident command system (ICS) org chart (Command, Command Staff, General Staff, and many subordinate positions). That is obviously suitable for a foundational emergency operations plan (EOP), an emergency operations center (EOC) plan, or something else comprehensive in nature where an organization of that size and scope would be used, but function-specific plans are not that. This, to me, is yet another example of a misinterpretation, misunderstanding, and/or misuse of the principles of the National Incident Management System (NIMS) and ICS.

Yes, we fundamentally have a mandate to use ICS, which is also an effective practice, but not every function and facility we activate within our response and recovery operations requires a full organization or an incident management team to run it. The majority of applications of a function-specific plan occur within a greater response (such as activating a commodity POD during a storm response). As such, the EOP should already have been activated and there should already be an ‘umbrella’ incident management organization (e.g., ICS) in place – which means you are (hopefully) using ICS. Duplicating the organization within every function isn’t necessary. If we truly built out organizations according to every well-intentioned (but misguided) plan, we would need several incident management teams just to run a Type 3 incident. This isn’t realistic, practical, or appropriate.

Most function-specific plans, when activated, would be organized within the Operations Section of an ICS organization. There is a person in charge of that function – depending on the level of the organization in which they are placed and what the function is, there is plenty of room for discussion on what their title would be, but I do know that it absolutely is NOT Incident Commander. There is already one of those and the person running a POD doesn’t get to be it. As for ‘command staff’ positions, if there is really a need for safety or public information activity (I’m not even going to talk about liaison) at these levels, these would be assistants, as there is (should be) already a Safety Officer or PIO as a member of the actual Command Staff. Those working within these capacities at the functional level should be coordinating with the principal Command Staff personnel. As for the ‘general staff’ positions within these functions, there is no need for an Operations Section as what’s being done (again, most of the time that’s where these functions are organized) IS operations. Planning and Logistics are centralized within the ICS structure for several reasons, the most significant being an avoidance of duplication of effort. Yes, for all you ICS nerds (like me) there is an application of branch level planning (done that) and/or branch level logistics that can certainly be necessary for VERY complex functional operations, but this is really an exception and not the rule – and these MUST interface with the principal General Staff personnel. As for Finance, there are similarly many reasons for this to be centralized within the primary ICS organization, which is where it should be.

We need to have flexibility balanced with practicality in our organizations. We also need to understand that personnel (especially those trained to serve in certain positions) are finite, so it is not feasible to duplicate an ICS structure for every operational function, nor is it appropriate. The focus should be on what the actual function does and how it should organize to best facilitate that. My suggestion is that if you are writing a plan, unless you REALLY understand ICS (and I don’t mean that you’ve just taken some courses), find someone who (hopefully) does and have a conversation with them. Talk through what you are trying to accomplish with your plan and your organization; everything must have a purpose so ask ‘why?’ and question duplication of effort. This is another reason why planning is a team sport and it’s important to bring the right people onto the team.

© 2024 Tim Riecker, CEDP

Emergency Preparedness Solutions, LLC®

Developing Incident After-Action Reports

Incident and event after-action reports (AARs) are extremely important for identifying the successes and challenges we faced in our efforts. Just like our evaluation efforts in exercises, many valuable lessons can be learned and effective practices identified from incidents and events. Yet for as much as incident and event AARs are encouraged, there are often problems with how they are developed.

While the quality of exercise after-action reports is often not up to par, a defined process of exercise evaluation, along with a suggested AAR format, has been available to us and ingrained in emergency management practice for a long time via the Homeland Security Exercise and Evaluation Program (HSEEP). While some concepts of exercise evaluation can be utilized for incident and event evaluation, we need a very different approach to be most effective.

FEMA has been promoting a Continuous Improvement concept for emergency management for several years. Incident and event evaluation is part of continuous improvement, though continuous improvement is intended to permeate much more of our daily and incident operations. While FEMA’s program has some good information that applies to incident and event evaluation, there are some important things I feel are missing.

Perhaps the most significant difference between incident and event evaluation and exercise evaluation is the evaluation focus. Exercises, from the very first steps of design, are built explicitly for evaluation. The identification of capabilities and exercise objectives gives direction to our design and directly informs our evaluation of the exercise. Essentially, the intent and focus of evaluation is baked in from the start. For incidents and events, it is not.

Because evaluation is not a primary intent of incidents and events, we generally need to determine our evaluation strategy afterwards. The development of our evaluation strategy absolutely must begin with the identification of what we want to evaluate. This is a critical element not included in FEMA’s Continuous Improvement guidance. Without determining the focus of the evaluation, the discovery process lacks direction and will likely explore areas of incident/event operations that are lower priority to stakeholders. Determining what the evaluation effort will focus on is similar to developing objectives, and as such the focus areas should be specific enough to give proper direction to the evaluation effort. For example, having done numerous COVID-19 AARs, I can say it’s not enough to state that we will evaluate ‘vaccination’. Vaccination is a very broad activity, so we should determine specific aspects of vaccination to focus on, such as equity of distribution or vaccine point of dispensing (POD) operations. Obviously, multiple focus areas can be identified based upon what is most important to stakeholders. And no, incident objectives should not serve as your focal points. These are operational objectives that have nothing to do with evaluation, though your evaluation will likely take the incident objectives (and associated actions) into consideration.

FEMA’s Continuous Improvement guidance provides a lot of great insight for the discovery process. The most common tools I use are focus groups, interviews, document reviews, and surveys. Focus groups and interviews allow people to tell their experiences from their own perspectives. These offer a lot of insight and include facts as well as opinions, both of which are valid in the AAR process as long as they are handled properly, since discerning between the two is important.

Document reviews are also important. Typically I look at documents developed before the incident (mostly plans) and those developed during the incident (such as press releases, incident action plans, situation reports, and operational plans). While documents developed during the incident typically tell me what was done or what was intended to be done, the documents developed prior to the incident typically provide me with a standard from which to work.

There are a couple of important caveats with this:

1) Many plans are operationally inadequate, so they may not have been able to be followed.

2) Many organizations don’t reference their plans, regardless of quality.

As such, a big part of my document review is also determining the quality of the documents and if they were referenced during the incident or event. It may very well be that the actions taken were better than what was in the plans.

Surveys… there is so much to say about surveys that they probably deserve their own blog post. Surveys can be great tools, but most people tend to design poor ones. They should be succinct and to the point. You will want to ask a lot of questions, but resist the urge to do so. The more questions you ask, the lower the rate of return on surveys. So focus on a few questions that will give you great data.

We then go to writing, which involves the organization of our information, formation of key observations (by focus area), a narrative analysis for each observation, and development of one or more recommendations for each observation. The analysis is where many AARs, including those for exercises, miss the mark. The analysis needs to contextualize the observation and justify the recommendations. It should provide sufficient detail for someone not knowledgeable in that observation (or of the incident) to have a reasonable understanding of the associated issues. Remember that an AAR may be referenced for years to come and can also be used to support budgets, grant applications, and obviously the corrective actions that are identified. A good analysis is necessary and should certainly be more than a couple of sentences. Be sure to identify strengths and effective practices, not just lessons learned and challenges.

I do not advocate using the HSEEP AAR template for incident and event evaluations. Beyond being an awkward fit for some of the ‘fill-in-the-box’ information, the overall structure does not support what an incident or event AAR needs to include. I suggest writing the AAR like a professional report. I’d include an executive summary, table of contents, research methodology, observations/analysis/recommendations, an incident or event timeline, and a summary of recommendations (I do still like to use the traditional HSEEP improvement plan matrix for this). I tend to have a lot of citations throughout the document (typically as footnotes). Citations can include standards, such as NIMS, references (plans), media articles, and more.

A couple of notes: 1 – When planning our management of an event, we can be more proactive in evaluation by including it as a deliberate component of our efforts. 2 – Incident evaluation can begin during the incident by tasking an incident evaluator.

Incident and event evaluation can be daunting to approach. It requires endorsement from the highest levels to ensure cooperation and access to information. Honesty is what is needed, not sugar coating. Far too many AARs I’ve seen for exercises, incidents, and events are very soft and empty. Remember that we aren’t evaluating people; rather, we are evaluating plans, processes, systems, and decisions. The final AAR should be shared with stakeholders so they can learn and apply corrective actions that may be relevant to them. Given most state public information laws, the AAR may need to be made available to the public, which is all the more reason to ensure that it is professionally written and that observations have quality analysis, as members of the public may require context. I’ve also seen many elected and appointed officials (and legal counsels) be reluctant to have written reports, or reports with much detail, because of freedom of information laws. While I understand that accountability and transparency can create challenges, we must remember that government works on behalf of the people, and the acknowledgement of mistakes and shortcomings (as well as successes) is important to the continuous improvement of the services we provide.

What is your approach with incident and event AARs? Where do you see that we need to improve this important process?

© 2024 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC®

Stop Exercising Bad Plans

We know that the purpose of most exercises in emergency management (ref HSEEP) and related fields is to validate plans. That concept, though, is built on a fragile premise: that the plans are good.

Over the years, the more plans I see from various jurisdictions, the more disappointed I become, practically to the point of losing all faith in our profession’s ability to develop quality plans. Most emergency plans out there are crap. Garbage. Not worth the effort that has been put into them. Typically, they don’t have enough detail. Not that they need procedure-level detail (though those procedures should be found somewhere), but they are often written at such a high level that they are merely conceptual or policy-esque.

The premise that exercises are intended to validate plans would indicate a belief that the plans themselves serve as quality standards of practice for the organization(s) they are built for. The sad truth is that they are not. So, what are our exercises proving?

Gaps in exercise evaluation are a significant hurdle, often rooted in poor evaluation practices, poor AAR writing, and/or the assumption of quality plans. I find many AARs to be very superficial. They provide observations and recommendations, but no analysis. Without analysis we have no context for the observation and no examination of root cause or other contributing factors. Absent this analysis, the AARs aren’t able to truly identify what needs to be addressed. So, with the superficial come the obvious statements and recommendations: communication needs to be improved, more ICS training is needed, etc.

What I don’t see enough of are observations, ANALYSIS, and recommendations that indicate:

  1. Plans need to be drastically improved (updated and/or developed)
  2. Responders need to actually be trained in their roles to support implementation of the plans (ICS does NOT teach us how to implement plans… in fact ICS training largely ignores the importance of existing plans)

What of the AARs that are better and actually do recommend improved plans? This leads us to the next potential point of failure: implementation of corrective actions. So many organizations are simply bad at this. They seem content to exercise over and over again (typically at the expense of taxpayer dollars) and come up with the same results. They largely aren’t fixing anything, or perhaps just the proverbial low-hanging fruit (e.g., more ICS training), but they aren’t tackling the harder, yet more impactful, development of quality plans.

We need to stop assuming our plans are good. Exercising bad plans has little value to us and is typically more wasteful than beneficial.

Just like the potential causes identified above, there are numerous issues to be addressed. First of all, we need to recognize that not every emergency manager has the acumen for writing plans. The development of emergency plans is a hybrid of art and science. It includes hard and soft skillsets such as technical writing, systems thinking, organization, research, collaboration, and creativity. We have standards for developing plans, such as CPG 101, which overall is a good standard (though it could be improved to help people use it). We have some training available in how to develop emergency plans, but there are some issues:

  • The G-235 Emergency Planning course (now IS-235) was a great course, but the big push 15-20 years ago to put so many classroom courses online to make them more accessible and to save costs largely resulted in decreased learning outcomes.
  • The classroom training in emergency planning has largely been replaced by the E103 Planning: Emergency Operations course, which is part of the Emergency Management Basic Academy. This is a pretty good course, but being part of the Basic Academy (which is a great concept) also limits access for some people, as the general practice is (understandably) to give registration preference to those who are taking the entire academy. Sure, the entire academy makes for more well-rounded EMs, but if someone wants to focus on emergency planning, some of the other courses, while complementary, constitute a larger investment of time and possibly money.
  • Finally, FEMA has the Planning Practitioner Program, which is a more intensive experience and certainly provides some improved learning outcomes, but with the expectation that a huge percentage of emergency managers (and those in related professions) will be proficient in emergency planning, this program simply isn’t available enough. (Note re training: yes, there is an abundance of other planning-related courses out there… I just highlighted these as examples.)

I’ll also say that simply taking some classes does not make you a proficient emergency planner. Because there is art and science to it, it can’t simply be taught. It needs to be learned and experienced. Practice and mentorship are key – which is something else most EMs don’t have access to or even seek out. Training is not the only solution.

So, while this article started out with identifying the fallacy often seen in our exercise practices, I end up, once again, pointing out what I think is the biggest gap in the entirety of emergency management – bad plans. Plans are the foundation of our practice, yet we can’t seem to get it right. We are too dismissive of the necessity and process of plan development and upkeep. We are too accepting of inadequate plans that are not implementation ready. We don’t do enough to build personnel capability in plan development. So many of those who are writing plans, be they civil servants, consultants, or others, are simply bad at it. And while some have potential that is underdeveloped, others simply don’t have the acumen for it.

And the worst part about it all… we, as a practice and professional culture, are accepting it!

Many of my posts through the years have ended with a similar statement… we are treating emergency management like a game of beer league hockey. We aren’t taking it seriously enough. We need to do better and demand better. So what are you doing to support improved emergency planning practices?

© 2024 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC®

2023 National Preparedness Report

Every year at this time, FEMA delivers the National Preparedness Report. Much like that one relative who is always a horrible gifter around the holidays, the infamous legacy of a long line of NPRs persists, reinforcing the waste of time, effort, and money through lack of value. It truly pains me to be so negative about these documents, but the disappointment they bring pains me more. The development of the NPR is a great opportunity to provide analysis of meaningful information, yet it is consistently inconsistent in style and format from year to year and falls severely short of the potential this document could have. That said, each report has a couple of shining moments; if only they could embrace those and use them every year! If you would like a summary of the abysmal history of NPRs through the years, you can find my previous posts here.

The 2023 NPR (which is developed from 2022 data) kicks off in a laughable fashion in the Introduction and Executive Summary, which identifies four key findings:

  1. Increasing Frequency, Severity, and Cost of Disasters
  2. High Community-Level Risk
  3. Ongoing Individual and Household Preparedness Gaps
  4. Lack of Standardized Building Code Adoption

This is followed immediately by three recommendations:

  1. Target Investments Towards Particular Core Capabilities and Mission Areas
  2. Reduce All-Hazards Challenges Through Targeted Actions and Increased Coordination
  3. Address National Gaps to Prepare for Catastrophic Disasters

Following the Introduction and Executive Summary, the report is structured with information on Risks, followed by what they claim are ‘trends’ in Capabilities, Focus Areas of certain Core Capabilities, and a conclusion. Let’s take a quick look at each.

A formatting issue that immediately struck me as I explored the sections was that they carried through the numbering of subsections that began in the Introduction. It seems minor, but it’s awkward and made me think, in the first (Risk) section, that I had missed something when the first numbered subsection (three pages into the section) started with 4. Overall, the section on Risks provides some good summaries and graphics that emphasize the increasing frequency, severity, and cost of disasters, providing both annual trend information (I like this!) as well as information specific to 2022. Page 10 of the document provides an interesting graph derived from national 2022 THIRA/SPR data that lists hazards of concern. The top five hazards of concern listed are:

  1. Cyber Attack
  2. Pandemic
  3. Flood
  4. Active Shooter (can we PLEASE universally adopt the term Active Shooter/Hostile Event??)
  5. Earthquake

Wanting to see if and how the dots were connected, I read ahead a bit on these. In the Focus Areas section, Cybersecurity is prominently identified within the discussion on the Public Health, Healthcare, and EMS Capability as a threat to the healthcare sector. While this is true, the cybersecurity threat permeates every other sector, which is only vaguely alluded to in the discussion on the Long-Term Vulnerability Reduction Capability. The Public Health, Healthcare, and EMS Capability did reinforce pandemic preparedness needs, though the Active Shooter and Earthquake concerns had virtually no mention in the document beyond the threat/hazard discussion.

While I do appreciate the mention of the National Risk Index in this section (it’s a great tool), they miss the opportunity to really contextualize and cross reference threats and hazards of concern.

The section on Capabilities highlighted something I found both interesting and confusing…

In the Response mission area, communities report low levels of grant investment and lower target achievement in Mass Care Services and Logistics and Supply Chain Management. Communities also consider Mass Care Services a high priority capability. These capabilities and three of the four Recovery Core Capabilities fall within these ranges and may warrant increased grant investments.

My commentary: If communities are identifying Mass Care Services to be a high priority, why are they investing lower levels of grant funds into that capability?

The first subsection of the Capabilities section is Individual and Household Preparedness. While clearly an important area of discussion, it’s not a Core Capability, nor does the report associate any Core Capabilities with this topic. The next subsection, on Community Preparedness, does make some connections to Core Capabilities. It’s in this subsection that the updated chart of Grant Funding by Core Capability is provided. Yet again, the Housing Core Capability is among the lowest funded, with no sign of that ship being steered onto the proper course. I find it interesting to note that Supply Chain Integrity and Security, and Economic Recovery, are also among the lowest investments, despite some severe lessons learned from COVID-19 in those areas.

Among the leaders in Grant Funding by Core Capability are Planning, Operational Coordination, and Operational Communications. All that money spent, yet those areas continue to be consistently among the highest areas for improvement in after-action reports. I’d love to see an audit detailing more precisely what activities that money is being spent on within these Core Capabilities and what the outcomes of those activities are, as I suspect we are spending a whole lot of money with little resulting value. I’ll also note that this is only 2022 data. Every year I’ve written about the NPR I’ve suggested the need for multi-year analysis so we can actually identify trends, progress, and gaps over time. Single year snapshot-in-time data has such limited value.

The last subsection in the Capabilities section is National Preparedness. Much of the information in this section is provided in a table on National-Level Capability Gaps and Recommendations. The table is organized by POETE but also includes areas on Capacity and Coordination (I’d suggest that the items contained in these two areas could have been placed within POETE). The introduction to this table states that the table summarizes high-level gaps and recommendations at the national level across all Core Capabilities. While in essence this is something I’ve suggested in my commentary on previous years’ reports, this is TOO high level. It’s so high level that it is completely devoid of any context or detail that would make it meaningful. I’m also left wondering (doubting, really) if future grant funding will target any of these recommendations.

The next section is Focus Areas. This section highlights four specific Core Capabilities:

  1. Fire Management and Suppression
  2. Logistics and Supply Chain Management
  3. Public Health, Healthcare, and EMS
  4. Long-Term Vulnerability Reduction

While the reason for covering these four specifically is pretty evident based upon associated risks, threats, hazards, and needed improvements, I’m still left wondering why only these four, especially when significant gaps were identified in so many other Core Capabilities, and given the lack of progress I noted earlier on other Core Capabilities despite extraordinary investment.

Each of these Core Capabilities is organized by a discussion of associated risk (which included some quality identification of trends, costs, and impacts), capability gaps, and management opportunities. Overall, the content in these areas is fine, but nothing really earth-shattering. The Management Opportunities, which are mostly corrective actions, range in focus from federal, to SLTT, to NGO and Private Sector, to Individuals and Households. Some good ideas are listed, similar to last year’s approach, but as with the previous section, I’m still left wondering if any of these actions will become funded priorities.

I noted in the Conclusion that the report does include an email address for feedback. I don’t think I ever saw this before, but I’ll be sending my collected commentary from this year and previous years to hopefully spur some changes to make the report more valuable than a superficial summary.

© 2023 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC®

Gaps in ICS Doctrine and Documents

Last month I got to spend several days with some great international colleagues discussing problems and identifying solutions that will hopefully have a meaningful and lasting impact across incident management and emergency response. No, this wasn’t at an emergency management conference; this was with an incredible group of ICS subject matter experts convened by ICS Canada, with a goal of addressing some noted gaps in ICS doctrine, training, and other related documents. While the focus was specific to the documents under the purview of ICS Canada, most of these matters directly apply to ICS in the United States as well.

Overall, our doctrine, curriculum, etc. (collectively, documents) across ICS is a mess. Broadly, the issues include:

  • Poor definitions of key concepts and features of ICS.
  • Lack of proper emphasis or perspective.
  • Lack of inclusion of contemporary practices. (management concepts, social expectations, moral obligations, even legal requirements, etc.)
  • Lack of continuity from doctrine into supporting documents and curriculum. Everything needs to point back to doctrine; not every tool needs to be explicitly included in the doctrine, but they should all be based upon consistent standards.
  • A need to support updated training to improve understanding and thus implementation.

As the group discussed this and I continued to think about it, I realized that ICS, as it relates to the US (NIMS), has very little doctrine, spread across a few NIMS documents (the core NIMS doctrine, National Qualification System documents, and a few guidance/reference documents, which aren’t necessarily doctrine). In the US, via the National Wildfire Coordinating Group (NWCG), we used to have a whole array of documents which could be considered ICS doctrine (in the days of NIIMS <yes, that’s two ‘eyes’>). When the responsibility for the administration of ICS (for lack of better phrasing) shifted to DHS, these documents were ‘archived’ by the NWCG and not carried over or adopted by the NIMS Integration Center (NIC) in DHS, which now has responsibility for NIMS oversight and coordination. The NIC has developed some good documents, but in the 20 years since the signing of HSPD-5 (which created and required the use of NIMS), it seems the greatest progress has been on resource typing and little else.

Looking at current NIMS resources, I note that some are available from the core NIMS site https://www.fema.gov/emergency-managers/nims while others are available from EMI at https://training.fema.gov/emiweb/is/icsresource/. All these documents really need to be consolidated into one well-organized site, with doctrine identified separately from other resources and documents (e.g., job aids, guidance, etc.).

I thought it might be fun to find some examples, so I opened up the ICS 300 instructor guide, flipped through some pages, and looked at a few concepts identified therein that might not have much doctrinal foundation. Here are a few I came up with:

  • Formal and Informal Communication
    • These concepts aren’t cited anywhere in NIMS documents. While superficially they seem pretty straightforward, we know that communication is something we constantly need to improve (see practically any after-action report). As such, I’d suggest that we need inclusion and reinforcement of foundational communications concepts, such as these, in doctrine to ensure that we have a foundation from which to instruct and act.
  • Establishing Command
    • This is mentioned once in the core NIMS doctrine with the simple statement that it should be done at the beginning of an incident. While often discussed in ICS courses, there are no foundational standards or guidance for what it actually means to establish command or how to do it. Seems a significant oversight for such an important concept.
  • Agency Administrator
    • While this term comes up several times in the core NIMS doctrine, they are simple references with the general context being that the Agency Administrator will seek out and give direction to the Incident Commander. It seems taken for granted that most often the Incident Commander needs to seek out the Agency Administrator and lead up, ask specific questions, and seek specific permissions and authorities.
  • Control Objectives
    • Referenced in the course but not defined anywhere in any ICS document.
  • Complexity Analysis
    • The course cites factors but doesn’t reference the NIMS Incident Complexity Guide. Granted, the NIMS Complexity Guide was published in June 2021 (after the most recent ICS 300 course material), but the information in the Complexity Guide has existed for some time and is not included in the course materials.
  • Demobilization
    • Another big example of the tail wagging the dog in NIMS. Demobilization is included across many ICS trainings, but there is so little doctrinal foundation for the concept. The core NIMS doctrine has several mentions of demobilization, even with a general statement of importance, but there is no standard or guidance on the process of demobilization beyond what is in curriculum – and training should never be the standard.

For ICS being our standard, we haven’t established it well as a standard. A lot of work needs to be done to pull this together, fill the gaps, and ensure that all documents are adequately and accurately cross-referenced. This will require a significant budget investment in the National Integration Center and the formation of stakeholder committees to provide guidance to the process. We need to do better.

What doctrine and document gaps do you see as priorities in NIMS?

© 2023 Tim Riecker, CEDP

Emergency Preparedness Solutions, LLC®

The Texas Emergency Management Academy

Continuing the recent theme of discussing standards and training in emergency management, I came across a timely article released with the latest Domestic Preparedness Journal Weekly Brief. The article (written by Dr. Michael Valiente, Senior Training Officer for TDEM) tells of the first Texas Emergency Management Academy, an eight-month program developed by the Texas Division of Emergency Management that provides training in a variety of topics. Though I don’t know their class schedule, at eight months it’s certainly longer than the FEMA Basic Academy and seems quite intensive. There is even some indication of FEMA Basic Academy courses being included in the program. The article mentions starting with 20 cadets and graduating 17, which is an excellent graduation rate.

The program covers the expected topics of preparedness, mitigation, response, and recovery with some specific content identified from FEMA, TDEM, and seemingly some guest instructors from other agencies, which I think really enriches the learning experience (emergency management isn’t only performed by emergency management agencies, after all). I’m hopeful there was quality training in how to write various types of emergency plans. I’m just more and more discouraged nearly every day by the plans I’m seeing out there… but that’s a different topic.

Beyond the four fundamental areas, there are some notable additions. One of which is a basic EMT course. I’m kind of scratching my head on this one. As I’ve espoused before, I certainly have no issues with people getting additional training or professional certifications – especially in life saving skills, but EMS is not EM. I can certainly hear in my head a lot of the justifications people would use for this, and while I understand them, I just don’t know that I can agree with the inclusion of an EMT course into an EM program.

Having a program of extended duration such as this offers some great opportunity to build in some external activities, such as conferences, training, and exercises sponsored or conducted by other partners, which they absolutely did. Of course, they included training from the Texas A&M Engineering Extension Service (TEEX) which I’ve always found to be fantastic. They also had a capstone exercise which was held at the TEEX facility in College Station (highly recommend, by the way, for those who have never been). Certainly, a great opportunity to utilize a terrific resource in your back yard.

Another noteworthy addition was an emergency management job fair which was preceded by classes on resume building, interview techniques, and other skills. I think this is brilliant and incredibly valuable for participants.

Overall, this seems a good and valuable program, though from what I read, given the inclusion of the EMS training, the response courses, the field training (which included a lot of response activity), and the (response) capstone exercise, it leans very heavily toward response. Sure, the ‘pointy end of the stick’ for many emergency managers comes down to the high-consequence crisis that must be managed – and as such these trainings and experiences hold great value – but so much of what emergency managers do is in the time before and after disasters, much of which is administrative and collaborative. I’m just hoping there was a lot of great content, activities, and opportunities supporting these things as well that simply weren’t highlighted as much in the article.

In the discussions that have been had as of late on standards in emergency management, an academy-style program like this could certainly be a standard. There are pros and cons, but certainly things to be considered. I’m curious about what TDEM learned from this first academy that they expect to change for the next. Would love to hear from TDEM folks (and others) involved in the program, as well as graduates!

© 2023 Tim Riecker, CEDP

Emergency Preparedness Solutions, LLC®

The 2022 National Preparedness Report – Another Failure in Reporting

As with past years, FEMA gifts us the annual National Preparedness Report for the prior year around the holidays. Some reminders: 1) You can find my reviews of the reports of prior years here. 2) To get copies of the reports of prior years, FEMA archives these in the unrestricted side of the Homeland Security Digital Library. 3) Each report is derived from data from the year prior, so this December 2022 report actually covers the calendar year of 2021.

Compared to last year’s report, this year’s follows much of the same format, with sections on risk, capabilities, and management opportunities. They appropriately moved some of the content in this year’s report to appendices, which helps each of the sections get more to the point.

Last year’s report was on a kick of catastrophic risk, devoting what I think was an excessive amount of content to data on large-scale disasters. While we should certainly plan for the worst, most areas do a mediocre job at best of preparing for, mitigating against, responding to, and recovering from mid-sized disasters. If they can’t manage all aspects of these, it’s not even realistic for them to be able to manage the largest that nature, terrorists, or accidents can throw at us. This year’s report has a much better focus on risks, threats, and hazards, with some reflection on THIRA/SPR data from 2021, the grounded realities of climate change, and some time given to cybersecurity and infrastructure. In line with the FEMA strategic plan (and continuing from last year’s report), this year’s report also discusses equity, social vulnerability, and risk exposure, with reference to social vulnerability measures (of which I’m a big fan).

Last year’s report covered risk associated with healthcare systems and the economy, which didn’t get much of a mention in this year’s report; I think that’s unfortunate. The reality of surge and the shortage of hospital beds has been brought to the forefront over the past few years, with little to nothing being done to address it. Similarly, we’ve also had the fragility of organizations revealed over the past few years, yet have not seen as much of a push for continuity of operations as we should have. While thankfully this year’s report doesn’t have the focus on COVID that last year’s did, it seems people want to move on without addressing the glaring lessons learned.

In all, this year’s report spends about half the page volume on risk compared to last year’s report. While this year’s report provides better information, I still think there were some missed opportunities.

Looking into the assessment of capabilities, the first noted issue is that the capability targets for 2021 were the same as those for 2020. While consistency is important for long-term measurement, the lack of any alteration indicates to me that those who establish the capability targets are lacking some critical awareness of the emergency management landscape. While I don’t necessarily dispute the targets included, I think many of them could use some better refinement and specificity. The lack of inclusion of the cross-cutting Planning Core Capability (which is the foundation of all preparedness) is mind-blowing, as is the lack of the Recovery Mission Area’s Housing Core Capability (considered by many to be our greatest area of fragility). I’d really like to see the data substantiating the THIRA/SPR submissions that indicate such a high achievement of Unified Operations. Reflecting back on the necessity for long-term measurement, this year’s report offers none at all. This limits our ability to perceive preparedness gains or losses over time. As with last year’s report, which similarly did not provide this information, I feel this report has failed to meet its primary goal. It’s nothing more than a snapshot in time of very limited metrics – certainly not a comprehensive review of the state of the nation’s preparedness.

One particular graphic, identified as Figure 11 on page 24 of the report, is extremely telling. The chart identifies the non-disaster grant investments for FY21 across various grant programs. The grant distribution seems not to align at all with the established capability targets, which is good in some cases (we still need to invest in plans) but bad in others (fatality management is an established capability target that had minimal investment). By far, the greatest expenses are related to planning, as I feel they should be, yet the ground truth is that there are still a lot of horrible plans being generated. We have significant gaps in certain capabilities such as the aforementioned Fatality Management, along with Public Health/Healthcare/EMS, Housing, and Economic Recovery, yet we see minimal investment in these. Lastly, for this section I’ll note that last year’s report highlighted some specific capabilities and provided some narrative and data on each, which, while it needed refinement, was a good direction for this report to go in. This year’s report dropped that depth of information completely.

The final section is Management Opportunities. The three opportunities identified in this section are:

  1. Building Community-Wide Resilience to Climate Change Impacts
  2. Reduce Physical and Technological Risks to Critical Infrastructure
  3. Increase Equity in Individual and Community Preparedness

I don’t argue at all with these three items, but the content, as usual, is lacking. What we should see here is a strategic approach to addressing these priority issues. Of course, to best do so, it would need to align with grant funding priorities and other efforts… which is something we’re just not seeing. They do provide some references and data within their analysis, but these do more to make a case for why these are priority issues and to thump their chest about what they have accomplished than to lay out a national roadmap for accomplishing these priorities. Reviewing last year’s management opportunities, I don’t recall many external products that really worked towards addressing them, nor does this year’s report reflect on any progress made. Without doing so, this section is nothing but well-intentioned yet intangible statements.

My last statement pretty much sums up the entirety of the report… nothing but well-intentioned yet intangible statements. This continues a trend of previous National Preparedness Reports providing a few good data points but certainly NOT reporting on our nation’s preparedness in any meaningful, much less comprehensive, manner. I stand by my statements from last year that we, the emergency management community, should not be accepting this type of reporting. FEMA receives THIRA and SPR data from states, UASIs, and territories, all of which have years of legacy data. Similarly, FEMA receives regular reports on the grants they provide to jurisdictions, all with metrics that should tie back to a common foundation: the National Preparedness Goal’s Core Capabilities. Yet they fail every year to connect these dots and provide tangible, grounded reports with actionable recommendations. Given the effort and investment involved, and the FEMA Administrator’s endorsement, this is both disappointing and concerning. I continue to feel these reports do not meet the intent of the PPD-8 requirements.

Happy New Year one and all!

© 2023 Tim Riecker, CEDP

Emergency Preparedness Solutions, LLC®

Federal Coordination of All-Hazard Incident Management Teams

A few months ago the FEMA administration decided that the US Fire Administration (USFA) would discontinue their management of the All-Hazards Incident Management Team (AHIMT) program, which they have developed and managed for years. While I was never in favor of the USFA managing the program (AHIMTs are not fire-service specific), the staff assigned to the program did an admirable job of growing the AHIMT concept to what we have today.

The All-Hazards Incident Management Team Association (AHIMTA), which has been a vocal proponent of the development of AHIMTs, has thankfully been working to make people aware of this change. As part of their advocacy, they also wrote to FEMA Administrator Deanne Criswell regarding their concerns with the dissolution of this formal program. Administrator Criswell responded to AHIMTA, indicating that despite successes, the AHIMT program has “not been able to establish a sustainable or robust AHIMT program with long-term viability.” She did indicate that the USFA will continue providing related training to state, local, tribal, and territorial (SLTT) partners (though she specified that USFA training efforts will apply to fire and EMS agencies) and that she has directed the USFA to collaborate with the FEMA Field Operations Directorate to continue support to AHIMTs.

This change, and some of the wording in the Administrator’s response, is obviously very concerning for the future of AHIMTs. I first question the Administrator’s statement about the AHIMT program not being sustainable long-term. Not that I’m doubting her, but I’m curious what measures of sustainability she is referring to. I’m guessing most of the issue is funding, along with this never having fully been part of the USFA’s mission. Everything really does boil down to funding, but how much funding can a small program office really need? I’m also concerned about the USFA continuing with the AHIMT training mission (as I always have been), and even more so with the Administrator’s specification of fire and EMS (only?) being supported. While I have no issue at all with the USFA, and think they have done a great job with IMT and related training, their primary focus on fire and EMS (even absent the Administrator’s statement) can be a barrier (real or perceived) to other disciplines obtaining or even being aware of the training.

I firmly believe that a federal-level program office is necessary to continue managing, promoting, and administering a national AHIMT program. I do not think it should remain in the USFA, however, as their mission is not comprehensive in nature. It’s a program that should be managed properly within FEMA, though not by the FEMA Field Operations Directorate, which is primarily charged with FEMA’s own field operations. While this does include FEMA’s own IMATs, their focus is internal and serves a very different purpose. My biggest inclination is for the program to be placed within the NIMS Integration Center, which already does a great deal of work that intersects with AHIMTs. On the training side of things, I’d like to see AHIMT training moved to FEMA’s Emergency Management Institute (EMI) to emphasize the inclusion of SLTT participants regardless of discipline. Incident management, as a comprehensive response function, is inclusive of all hazards and all disciplines and practices, just like emergency management.

The dissolution of the AHIMT program at the federal level makes no sense to me at all. The absence of a program office not only degrades the importance of incident management teams, but of incident management as a concept and a skillset – which I think also needs some vision beyond the current IMT model to support local incident management capabilities. I’m appreciative of the AHIMTA and their advocacy for a federal AHIMT program office, and I’m hopeful that they will be able to convince FEMA of this need and that a program office is properly restored.

© 2022 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC®

NIMS Change – Information and Communications Technology Branch

FEMA recently released a draft of the National Incident Management System (NIMS) Information and Communications Technology (ICT) guidance, providing a framework for incorporating ICT into the Incident Command System (ICS). The draft guidance in many ways formalizes the functional changes ICS practitioners have been incorporating for quite a while.

Essentially, the guidance creates an ICT branch within the Logistics Section. That branch can include the traditional Communications Unit as well as an Information Technology (IT) Service Unit. They also make allowances for a Cybersecurity Unit to be included in the branch – not as an operational element for a cyber incident, but largely in a network security capacity. The creation of an ICT branch is also recommended for emergency operations centers (EOCs), regardless of the organizational model followed.

The IT Service Unit includes staffing for a leader, support specialists, and a help desk function, while the Cybersecurity Unit includes staffing for a leader, a cybersecurity planner, a cybersecurity coordinator, and a cyber support specialist. The position descriptions and associated task books are already identified pending final approvals and publication of this guidance, with the Cybersecurity and Infrastructure Security Agency (CISA) seemingly ready to support training needs for many of the new positions.

I’m fully in support of this change. FEMA is accepting feedback through October 20, 2022, with instructions available on the website provided previously.

Not being a communications or IT specialist myself, I’m interested in the perspectives of others on this.

© 2022 Tim Riecker, CEDP

Emergency Preparedness Solutions, LLC®