2024 National Preparedness Report – Another Missed Opportunity

The annual National Preparedness Report (NPR) is a requirement of Presidential Policy Directive 8, which states that the NPR is based on the National Preparedness Goal. The National Preparedness Goal, per the FEMA website, is “A secure and resilient nation with the capabilities required across the whole community to prevent, protect against, mitigate, respond to, and recover from the threats and hazards that pose the greatest risk.” The capabilities indicated in the National Preparedness Goal are specifically the 32 Core Capabilities.

The 2024 NPR is developed to reflect data and information from 2023. As with previous NPRs, I have a lot of concern about the ultimate value of the document. While I’m sure a lot of time, effort, and money was spent gathering an abundance of data from across the nation to support this report, this year’s report, following the unfortunate trend of its predecessors, doesn’t seem to be worth the investment. As with the others, this report falls short on adequate scope, information, and recommendations. Certainly, there is an acknowledged challenge in not only gathering a massive quantity of information from across the country but also examining and reporting that information in aggregate, as most federal reports are burdened to do. That said, I see little excuse for not providing a meaningful report.

Following the introductory materials in this year’s report is a section on risks, which is largely a reflection of the high-impact disasters of 2023 seen across the US; the most challenging threats and hazards; and the intersections of risk and vulnerability. All in all, this is an adequate snapshot of these topics in summary, with some solid points and a level of analysis that I would expect through the rest of the document, including trends over time and identification of factors which influence the findings. There are several maps and charts which provide good data visualization, and several mentions of bridging data between agencies such as FEMA, NOAA, and the CDC. A good start.

The next section is Capabilities. This section has two areas of narrative – community preparedness and individual and household preparedness. Given the significant efforts to bolster capabilities throughout the federal government and in state, tribal, and territorial governments, those levels are obviously missing if we are to suggest that all local governments are simply communities, so I’m not sure why this is specifically titled community preparedness. Does it not include the efforts of states or others? Page 18 of the report provides a chart, similar to what we’ve seen in previous reports, which shows how much money was spent on each capability (in communities… again, what does this include or exclude?) for 2023. The chart also indicates the percentage of communities achieving their capability targets.

As with previous years’ reports, I ask: So what? This is a snapshot in time, lacking context. A trend analysis accounting for at least the past several years would be quite insightful, as would some description of what the funds within each capability were primarily spent on – broadly planning, organizing, equipping, training, and exercises, but I’d like to see even more specifics. There are a few random examples in the narrative, but a lot is still lacking. I’d also like to see some analysis of the relative success or value of these investments. In regard specifically to the percentage of communities who feel they have achieved their capability target, I have to roll my eyes a bit, as this is often the most subjective (and sometimes smoke-and-mirrors) aspect of the Threat and Hazard Identification and Risk Assessment (THIRA). This chart currently has little value other than a ‘gee whiz’ factor of seeing how much money is spent on each capability.

I’ll also include a specific observation of mine here: the Core Capability of Mass Care Services, which in the previous year’s report was indicated as a high-priority capability, continues a trend (I’m only aware of the trend from looking back at previous reports since trend data is not included in the report) of having one of the lowest achievement percentages and investments. I’m hopeful that’s why it’s included in the next section as a focus area.

The other area of narrative in the Capabilities section is individual and household preparedness. All in all, the information presented here is fine and even includes a slight bit of trend analysis, though in 2023 a much more comprehensive reporting of this information was provided under separate cover. I think an improved version of something like the 2023 report should be incorporated into the NPR.

The next section of the NPR is Focus Areas, which includes the Core Capabilities of Mass Care Services, Public Information and Warning, Infrastructure Systems, and Cybersecurity. Each focus area includes narrative on risk, capabilities and gaps, and management opportunities – which all provide great information. There is a brief mention of how these focus areas were selected. While I’m fine with having a deeper analysis of certain focus areas, I think the NPR should still provide a comprehensive review of all Core Capabilities.

While the management opportunities listed for each of the four focus areas are essentially recommendations, the report itself only provides two recommendations which are labeled as such. These recommendations are identified in the document’s introduction with a bit of narrative (and in the conclusion with none) that thankfully provides some suggestions for actionable implementation. Still, I was left feeling both surprised and disappointed that the National Preparedness Report, which really should provide an analysis of all 32 Core Capabilities that serve as the foundation for the nation’s preparedness goal, offers only two recommendations for improving our preparedness. Two. That’s it. There should be an abundance of recommendations. This is the information that emergency managers and decision-makers in the field of practice need, within federal agencies and state, local, tribal, and territorial (SLTT) governments. Another missed opportunity to provide value.

The 2024 NPR is extremely similar to the past several years in format and general content, and as such I’m not surprised by the lack of value. I continue to stand by my statement across these past several years in regard to this report: the emergency management community should not be accepting this type of reporting. While I recognize that PPD-8 defines the audience for this report as the President and the Secretary of Homeland Security, the utility of such a report can and should have a much broader reach across all of emergency management, and ideally to taxpayers as well, who should be able to access better information on how their tax dollars are spent within preparedness – which impacts everyone. States, UASIs, and other entities who submit information annually for this report should also be disappointed that this is what is published about their hard work, and the emergency management membership organizations should be demanding better. This report has the potential to be meaningful, insightful, and influential, yet FEMA misses the opportunity every single year. The data exists, and the stories of the activities, accomplishments, and gaps can all be told. With the application of some reasonable analysis and recommendations, the document could be much more impactful.

It’s been said by many that emergency managers are notorious for not marketing well, and this document is proof positive of that. Those of us working in this profession know there is so much more to be examined and described that can tell not only of what we have accomplished but also of the work to be done. We find ourselves in a time where the purpose and value of FEMA is being questioned by a number of people; a time where some inefficiencies, missteps, and even failures are being put under a very critical microscope and seemingly being used to fuel a suggestion of eliminating FEMA. Greater efficiencies can certainly be identified and gaps addressed, but our reluctance to tell the stories of what we do clearly lends to the misunderstandings and severe lack of awareness that exist about our field of practice – one in which there is no organization of greater prominence and importance than FEMA. While the NPR is not at fault for these shortcomings, it is a contributor. When reports like this miss opportunities to do more and be more year over year, it snowballs and becomes a much greater issue. We need to do better.

© 2024 – Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC®

Five Domains of Incident Management

Earlier this summer, RAND, under contract to CDC as part of a five-year project related to examining and assessing incident management practices in public health, developed and released the Incident Management Measurement Toolkit. Overall, I think the tool developed is a solid effort toward standardizing the evaluation of incident management. The tool guides a depth of examination into incident management practices. It can be a bit daunting at a glance, but the methodology of evaluation is generally what I’ve been practicing over the past several years for developing incident and event AARs. I’d also suggest that it’s scalable in application.

I feel it’s important to note that incident management teams involved in non-public health applications were also engaged in the research. The outcomes of the project and the inclusion of non-public health incident management practices in the research indicate to me that this tool can be applied broadly and not limited to public health applications.

Serving as a foundation for the assessment tool and methodology are five Domains of Incident Management that the project team identified. Along with their key activities, these include:

  1. Situational Awareness and Information Sharing – Perception and characterization of incident-related information to identify response needs.
  2. Incident Action and Implementation Planning – Ongoing articulation and communication of decisions in coherent incident action plans.
  3. Resource Management and Mobilization – Deployment of human, physical, and other resources to match ongoing situational awareness, identification of roles, and relevant decisions.
  4. Coordination and Collaboration – Engagement and cooperation between different stakeholders, teams, and departments in managing the incident.
  5. Feedback and Continuous Quality Improvement – The need for ongoing evaluation and refinement of incident management processes.

In consideration of these domains, I think the activities inherent within them are fairly agnostic of the type of incident management system (e.g. ICS) used. I also think these same domains can be applied to recovery operations, again regardless of the system or organization being utilized, as well as the principal practice at work (public health, emergency management, fire service, law enforcement, etc.).

I’ve been intending to write about these domains for a while, but each time I considered them, something stood out to me as being a bit askew. I finally realized that these really aren’t domains that encompass all of incident management. Rather, these domains are better associated with an incident management system, such as the Incident Command System (ICS). The first three domains very clearly apply directly to an incident management system, and the fourth is the general concept of multiagency coordination, which is common to incident management systems. The last domain is simply quality management, which is certainly integral across various incident management systems.

While I don’t believe my view undermines the tool’s value, it highlights the need for a clearer understanding of its limitations. An incident management system, like ICS, is just one part of incident management and doesn’t cover all related activities. Some tasks in incident management, such as setting priorities, decision-making, troubleshooting, and dealing with political and social issues, are often not directly related to the tactical management systems we use. Additionally, many important aspects fall within leadership and aren’t covered by NIMS doctrine or the Planning P. Although organizing resources is a central part of incident management, there are many other activities not addressed in a tactical response that may influence tactical applications but are not part of a defined incident management system. While one could argue these activities fit into the five identified domains, I feel this analysis doesn’t provide a complete picture of a complex response. More information would be needed.

That said, I really like this toolkit. I think it provides a structured mechanism for evaluating common practices of incident management systems, which itself can provide a foundation for a more comprehensive assessment of incident management. That comprehensive assessment, beyond the incident management system, is also more anecdotal and often requires persons experienced in asking the right questions and clarifying perspectives and opinions – things that ultimately can’t be done (or at least done easily) with an assessment tool.

So regardless of what the nature of your incident is, consider using the Incident Management Measurement Toolkit as part of your AAR process.

What are your thoughts on the RAND tool? Have you used it? What do you think of the five domains they have identified?

©2024 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC®

ICS: Problems and Perceptions

Oddly enough, I’ve recently seen a spate of LinkedIn posts espousing the benefits of the Incident Command System (ICS). Those who have been reading my material for a while know that I’m a big proponent of ICS, though I am highly critical of the sub-par curriculum that we have been using for decades to teach ICS. The outcome is an often poorly understood and implemented system resulting in limited effectiveness.

Yes, ICS is a great tool, if implemented properly. Yet most implementations I see aren’t properly conducted. To further muddy these waters, I see emergency plans everywhere that commit our responders and officials to using ICS – this is, after all, part of the National Incident Management System (NIMS) requirement that many have – yet they don’t use it.

So why isn’t ICS being used properly or even at all? Let’s start with plans. Plans get written and put up on a proverbial shelf – physical or digital. They are often not shared with the stakeholders who should have access to them. Even less frequently are personnel trained in their actual roles as identified and defined in plans. Some of those roles are within the scope of ICS while some are not. The bottom line is that many personnel, at best, are only vaguely familiar with what they should be doing in accordance with plans. So, when an incident occurs, most people don’t think to reference the plan, and they flop around like fish out of water trying to figure out what to do. They make things up. Sure, they often try their best, assessing what’s going on and finding gaps to fill, but without a structured system in place, and in the absence of (or failure to reference) the guidance that a quality plan should offer, efficiency and effectiveness are severely decreased, and some gaps aren’t even recognized or anticipated.

Next, let’s talk about ICS training. Again, those who have been reading my work for a while have at least some familiarity with my criticism of ICS training. To be blunt, it sucks. Not only does the content of the courses fail to align with the course objectives, but the curriculum overall doesn’t teach us enough of HOW to actually use ICS. My opinion: We need to burn the current curriculum to the ground and start over. Course updates aren’t enough. Full rewrites – a complete reimagining of the curriculum and what we want to accomplish with it – need to take place.

Bad curriculum aside… For some reason people think that ICS training will solve all their problems. Why? One reason, I believe, is that we’ve oversold it. Part of that is most certainly due to NIMS requirements. Not that I think the requirements, conceptually, are a bad thing, but I think they cause people to think that if it’s the standard we are all required to learn, it MUST be THE thing we need to successfully manage an incident. I see people proudly boasting that they’ve completed ICS-300 or ICS-400. OK, that’s great… but what can you actually do with that? You’ve learned about the system, but not so much how to actually use it. Further, beyond the truth that ICS training sucks, it’s also not enough to manage an incident. ICS is a tool of incident management. It’s just one component of incident management, NOT the entirety of incident management. Yes, we need to teach people how to use ICS, but we also need to teach the other aspects of incident management.

We also don’t use ICS enough. ICS is a contingency system. It’s not something we generally use every day, at least to a reasonably full extent. Even our first responders only use elements of ICS on a regular basis. While I don’t expect everyone to be well practiced in the nuances and specific applications of ICS, we still need more practice at using more of the system. It’s not the smaller incidents where our failure to properly implement ICS is the concern – it’s the larger incidents. It’s easy to be given a scenario and to draw out on paper what the ICS org chart should look like to manage it. It’s a completely different thing to have the confidence, and the ego in check, to make the call for additional resources – not the tactical ones, but people to serve across a number of ICS positions. Responders tend to have a lot of reluctance to do so. Add to that the fact that most jurisdictions simply don’t have personnel even remotely qualified to serve in most of those positions. So not only are we lacking experience in using ICS on larger incidents, we also lack experience ‘ramping up’ the organization for a large response. An increase in exercises, of course, is the easy answer, but exercises require time, money, and effort to implement.

One last thing I’ll mention on this topic is about perspective. One of the posts I read recently on LinkedIn espoused all the things that ICS did. While I understand the intent of their statements, the truth is that ICS does nothing. ICS is nothing more than a system on paper. It takes people to implement it. ICS doesn’t do things; PEOPLE do these things. The use of ICS to provide structure and processes to the chaos, if properly done, can reap benefits. I think that statements claiming all the things that ICS can do for us, without inserting the critical human factor into the statement, lends to the myth of ICS being our savior. It’s not. It must be implemented – properly – by people to even stand a chance.

Bottom line: we’re not there yet when it comes to incident management, including ICS. I dare say too many people are treating it as a hobby, not a profession. We have a standard, now let’s train people on it PROPERLY and practice it regularly.

©2024 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC®

NIMS Intel and Investigations Function – A Dose of Reality

Background

Soon after the initiation of the National Incident Management System (NIMS) as a result of Homeland Security Presidential Directive 5 in 2003, the Intelligence and Investigation (I/I) function was developed and introduced to NIMS, specifically to the Incident Command System (ICS). While we traditionally view I/I as a law enforcement function, there are other activities which guidance indicates may fall within I/I, such as epidemiology (personally, I’d designate epidemiology as a specific function, as we saw done by many during the COVID-19 response), various cause and origin investigations, and others. Integration of these activities into the response structure has clear advantages.

The initial guidance for the I/I function was largely developed by command personnel with the New York City Police Department (NYPD). This guidance offered several possible locations for the I/I function within the ICS structure, based on anticipated level of activity, needed support, and restrictions on I/I-related information. The four possible ways of organizing the I/I function per this guidance are:

  1. Placement as a Command Staff position
  2. Organized within the Operations Section (e.g. at a Branch level)
  3. Developed as its own section
  4. Included as a distinct unit within the Planning Section

These concepts have been included in the NIMS doctrine and have been supported within the NIMS Intelligence/Investigations Function Guidance and Field Operations Guide, though oddly enough, this second document ONLY addresses the organization of an I/I Section and not the other three options.

The Reality

Organization of I/I can and does certainly occur through any one of these four organizational models, though my own experiences and experiences of others as described to me have shown that very often this kind of integration of I/I within the ICS structure simply does not occur. Having worked with numerous municipal, county, state, federal, and specially designated law enforcement agencies, I’ve found that the I/I function is often a detached activity which is absolutely not operating under the command and control of the incident commander.

Many of the sources of I/I come from fusion centers, which are off-scene operations, or from agencies with specific authorities for I/I activities that generally have no desire or need to become part of the ICS structure, such as the FBI conducting a preliminary investigation into an incident to determine if it was a criminal act, or the NTSB investigating cause and origin of a transportation incident. These entities certainly should be communicating and coordinating with the ICS structure for scene access and operational deconfliction, but are operating under their own authority and conducting specific operations which are largely separate from the typical life safety and recovery operations on which the ICS structure is focused.

My opinion on this is that operationally it’s completely OK to have the I/I function detached from the ICS structure. There are often coordination meetings and briefings that occur between the I/I function and the ICS structure which address safety issues and acknowledge priorities and authorities, but the I/I function is in no way reporting to the IC. Coordination, however, is essential to safety and mutual operational success.

I find that the relationship of I/I to the ICS structure most often depends on where law enforcement is primarily organized within the ICS structure and who is managing that interest. For example, if the incident commander (IC) is from a law enforcement agency, interactions with I/I activities are more likely to be directly with the IC. Otherwise, interactions with I/I are typically handled within the Operations Section through a law enforcement representative within that structure. Similarly, I’ve also seen I/I activity interact with an emergency operations center (EOC) through the EOC director (often not law enforcement, though having designated jurisdictional authority and/or political clout) or through a law enforcement agency representative. As such, compared to the options depicted on an org chart through the earlier link, we would see this coordination or interaction depicted with a dotted line, indicating that authority is not necessarily inherent.

I think the I/I function is more likely to be organized within the ICS structure when a law enforcement agency has significant responsibility and authority on an incident, and even more likely when a law enforcement representative is the IC or represented in a Unified Command. I also think the size and capabilities of the law enforcement agency are a factor, as it may be their own organic I/I function that is performing within the incident. As such, it would make sense that a law enforcement agency such as NYPD, another large metropolitan law enforcement agency, or a state police agency leading or heavily influencing an ICS structure would be more likely to bring an integrated I/I function to that structure. Given this, it makes sense that representatives from NYPD would have initially developed these four possible organizational models and seemingly excluded the possibility of a detached I/I function, but we clearly have numerous use cases where these models are not being followed. I’ll also acknowledge that there may very well be occurrences where I/I isn’t integrated into the ICS structure but should be. This is a matter for policy and training to address when those gaps are identified.

I believe that NIMS doctrine needs to acknowledge that a detached I/I function is not just possible, but very likely to occur. Following this, I’d like to see the NIMS Intelligence/Investigations Function Guidance and Field Operations Guide updated to include this reality, along with operational guidance on how best to interact with a detached I/I function. Of course, to support implementation of doctrine, this would then require policies, plans, and procedures to be updated, and training provided to reflect these changes, with exercises to test and reinforce the concepts.

What interactions have you seen between an ICS or EOC structure and the I/I function? What successes and challenges have you seen from it?

© 2024 Tim Riecker, CEDP

Emergency Preparedness Solutions, LLC®

Culture of Preparedness – a Lofty Goal

September is National Preparedness Month here in the US. As we soon head into October, it’s a good opportunity to reflect on what we’ve accomplished during the month, or even elsewhere in the year. While National Preparedness Month is an important thing to mark and to remind us of how important it is to be prepared, over the past several years I’ve come to question our approaches to community preparedness. What are we doing that’s actually moving the needle of community preparedness in a positive direction? Flyers and presentations and preparedness kits aren’t doing it. While I can’t throw any particular numbers into the mix, I think most will agree that our return on investment is extremely low. Am I ready to throw all our efforts away and say it’s not making any difference at all? Of course not. Even one person walking away from a presentation who makes changes within their household to become better prepared is important. But what impact are we having overall?

Culture of preparedness is a buzz phrase used quite a bit over the past several years. What is a culture of preparedness? An AI-assisted Google search tells me that a culture of preparedness is ‘a system that emphasizes the importance of preparing for and responding to disasters, and that everyone has a role to play in doing so.’ Most agree that we don’t have a great culture of preparedness across much of the US (and many other nations) and that we need to improve it. But how?

People love to throw that phrase into the mix of a discussion, claiming that improving the culture of preparedness will solve a lot of issues. They may very well be correct, but it’s about as effective as a doctor telling you that you’ll recover from the tumor they found once a cure for cancer is discovered. Sure, the intent is good, but the statement isn’t helpful right now. We need to actually figure out HOW to improve our culture of preparedness. We also need to recognize that, in all likelihood, it will take more than one generation to actually realize the impacts of deliberate work toward improvement.

The time has come for us to stop talking about how our culture of preparedness needs improvement and to actually do something about it. There isn’t one particular answer or approach that will do this. Culture of preparedness is a whole community concept. We rightfully put a lot of time, effort, and money into ensuring that our responders (broad definition applied) are prepared, because they are the ones we rely on most. I’d say their culture of preparedness is decent (maybe a B-), but we can do a lot better. (If you think my assessment is off, please check out my annual reviews of the National Preparedness Report and let me know if you come to a different conclusion). There is much more to our community, however, than responders. Government administration, businesses, non-government organizations, and people themselves compose the majority of it, and unfortunately among these groups is where our culture of preparedness has the largest gaps.

As with most of my posts, I don’t actually have a solution. But I know what we are doing isn’t getting us to where we want to be. I think the solution, though, lies in studying people, communities, and organizations and determining why they behave and feel the way they do, and identifying methodologies, sticks, and carrots that can help attain an improved culture of preparedness over time. We must also ensure that we consider all facets of our communities, inclusive of gender identity, race, culture, income, citizenship status, and more. We need people who know and study such things to help guide us. The followers of Thomas Drabek. The Kathleen Tierneys* of the world. Sociologists. Anthropologists. Psychologists. Organizational psychologists.  

A real, viable culture of preparedness, in the present time, is little more than a concept. We need to change our approach from using this as a buzz phrase in which everyone in the room nods their heads, to a goal which we make a deliberate effort toward attaining. A problem such as this is one where we can have a true union of academia and practice, with academics and researchers figuring out how to solve the problem and practitioners applying the solutions, with a feedback loop of continued study to identify and track the impacts made, showing not only the successes we (hopefully) attain, but also how we can continue to improve.

*Note: I don’t know Dr. Tierney personally and it is not my intent to throw her under the proverbial bus for such a project. I cite her because her writing on related topics is extremely insightful. I highly recommend Disasters: A Sociological Approach.

© 2024 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC®

ICS Training Sucks – Progress Inhibited by Bias

It’s been a while since I’ve written directly toward my years-long rally against our current approach to Incident Command System (ICS) training. Some of these themes I’ve touched on in the past, but recent discussions on this and other topics have gotten the concept of our biases interfering with progress stuck in my head.

It is difficult for us, as humans, to move forward, to be truly progressive and innovative, when we are in a way contaminated by what we know about the current system we wish to improve. This knowledge brings with it an inherent bias – good, bad, or otherwise – which influences our vision, reasoning, and decisions. On the other hand, knowledge of the existing system gives us a foundation from which to work, often with awareness of what does and does not work.

I’m sure there have been some psychological studies done on such things. In my continued rally against our current approach to ICS training, I’ve certainly thought about what that training could look like if we tasked individuals who had never seen the current training with developing something new. Sure, the current training has a lot of valuable components, but overall it’s poorly designed, with changes and updates through decades still based upon curriculum that was poorly developed, though with good intentions, so long ago.

In recent months, I’ve had discussions with people about various things across emergency management that require improvement, from how we assess preparedness, to how we develop plans, to how we respond, and even the entire US emergency management enterprise itself. In every one of these discussions, as we tried to imagine what a new system or methodology could look like, every one of us (myself included) was influenced by an inherent bias that stemmed from what is. Again, I’m left wondering: what would someone build if they had no prior knowledge of what currently exists?

Of course, what would be built wouldn’t be flawless. At some solutions, those of us in the know may even shake our heads, saying that certain things have already been tried and proven to fail (though perhaps under very different circumstances which may no longer be relevant). Some solutions, however, could be truly innovative.

The notion, perhaps, is a bit silly, as I’m not sure we could expect anyone to build, for example, a new ICS curriculum, without having subject matter expertise in ICS (either their own or through SMEs who would guide and advise on the curriculum). These SMEs, inevitably, would have taken ICS training somewhere along their journey.

All that said, I’m not sure it’s possible for us to eliminate our bias in many of these situations. Even the most visionary of people can’t shed that baggage. But we can certainly improve how we approach it. I think a significant strategy would be having a facilitator who is a champion of the goal and who understands the challenges, who can lead a group through the process. I’d also suggest having a real-time ‘red team’ (Contrarian?) element as part of the group, who can signal when the group is exercising too much bias brought forth from what they know of the current implementation.

In the example of reimagining ICS training, I’d suggest that the group not be permitted to even access the current curriculum during this effort. They should also start from the beginning of the instructional design process, identifying needs and developing training objectives from scratch, rather than recycling or even referencing the current curriculum. The objectives really need to answer the question: ‘What do we want participants to know or do at the completion of the course?’ Levels of training are certainly a given, but perhaps we need to reframe to align with what is used elsewhere in public safety, such as the OSHA 1910.120 standard, which uses the levels of Awareness, Operations, Technician, and Command; or the DHS model, which uses Awareness, Performance, and Management & Planning. We also need to eliminate other biases we bring with us, such as the concept of each level of training consisting of only one course. Perhaps multiple courses are required to accomplish what is needed at each level? I don’t have the answers to any of these questions, but all of these things, and more, should be considered in any real discussion about a new and improved curriculum.

Of course, any discussions on new and improved ICS curriculum need to begin at the policy level, approving the funding and the effort and reinforcing the goal of having a curriculum that better serves our response efforts.

How would you limit the influence of bias in innovation?

© 2024 Tim Riecker, CEDP

Emergency Preparedness Solutions, LLC®

Mixing Exercise Types

As with many things, we are taught exercises in a rather siloed fashion. First by category: discussion-based and operations-based. Then by type. That kind of compartmentalization is generally a necessity in adult education methodology. Individually, each exercise type has its own pros and cons. Rarely, however, do we see or hear of combining exercise types within one initiative.

The first time I did this was several years ago. My company was designing a series of functional exercises to be used at locations around the country. While the exercises were focused on response, one goal of our client was to include some aspects of recovery in the exercise. At about six hours, the exercises weren’t long. Time jumps can be awkward, and for the small amount of time dedicated to recovery in the exercise, the disruption from a time jump may not net a positive result. Add to that the time it would take to provide the quantity of new information needed to make a recovery-oriented functional exercise component viable.

Instead of trying to shoe-horn this in, we opted to stop the functional component of the exercise at an established time and introduce a discussion on disaster recovery. With the proper introduction and just a bit of information to provide context in addition to what they had already been working on, the discussion went smoothly and accomplished everything with which we were charged. The participants were also able to draw on information and actions from the response-focused functional component of the exercise.

We’ve recently developed another exercise that begins with a tabletop exercise to establish context and premise, then splits the participants into two groups, each challenged with some operations-based activity: one deploying to a COOP location to test functionality (a drill), the other charged with developing plans to address the evolving implications of the initial incident (a functional exercise). Following the operations-based exercises, the two groups will reconvene to debrief on their activities and lessons learned before going into a hotwash.

Making this happen is easy enough. Obviously we need to ensure that objectives align with the expected activities. You also want to make sure that the dual exercise modalities are appropriate for the same participants. While I try not to get hung up on the nuances of documentation, it is important, especially when it comes to grant compliance and ensuring that everyone understands the structure and expectations of the exercise. If we are mixing a discussion-based exercise and an operations-based exercise, one of the biggest questions is likely which foundational document to use – a Situation Manual (SitMan) or an Exercise Plan (ExPlan). Generally, since operations-based exercises can have greater consequences regarding safety and miscommunication, I’d suggest defaulting to an ExPlan, though be sure to include information that addresses the needs of the discussion-based component in your ExPlan as well as in the player briefing.

In running the exercise, be sure to have a clear transition from one exercise type to the other, especially if there are multiple locations and/or players are spread out. Players should be given information that prepares them for the transition in the player briefing. Exercise staff (controllers/facilitators and evaluators) should likewise be prepared for the transition through clearly communicated expectations at the C/E briefing and in C/E documentation.

I’d love to hear other success stories from those who may have done something similar.

© 2024 Tim Riecker, CEDP

Emergency Preparedness Solutions, LLC®

Preparing for Community Lifelines Implementation

With all great ideas, the devil, as they say, is in the details. Implementing new concepts often requires preparations to ensure that the implementation goes smoothly. We often rush to implementation, perhaps excited for the results, perhaps not thinking through the details. Without proper preparation, that implementation can fail miserably. Integrating and implementing the Community Lifelines is no exception.

Just like everything else we do in preparedness, we should turn to the capability elements of planning, organizing, equipping, training, and exercises (POETE) to guide our preparedness for Community Lifeline implementation.

Planning and Organizing

I’m coupling these two capability elements together as they so strongly go hand-in-hand. Determining how you want to use Community Lifelines is an important early step. I’d suggest developing a Community Lifeline Implementation Plan for your jurisdiction that not only identifies how you will use them in response and recovery operations, but details how their use fits within your response and recovery management structure, how information will flow, who is responsible for what, how information is reported, and to whom it is reported. The Implementation Plan should also outline the preparedness steps needed and how and where information will be catalogued.

I’ve seen several Community Lifeline integrations across local, county, and state jurisdictions, these mostly being visual status displays, but there can be some complexity in how we even get to that display.

We all know from CPG-101 that forming a planning team is the first step of emergency planning. While not itself really the capability element of Organizing, the stakeholders assembled for this will extend across all capability elements and into response and recovery operations.

Before identifying stakeholders, we need to examine each Community Lifeline down to the sub-component levels, which first necessitates determining which components and sub-components are applicable to your jurisdiction. For example, within the Transportation Community Lifeline, if your jurisdiction has no Aviation resources or infrastructure, you may choose to not include that component.

Once you have determined what components and sub-components of each Community Lifeline will be included, it’s now time to form your planning teams for each. Depending on the size of your jurisdiction, you could form teams at the Community Lifeline level, the component level, or the sub-component level. You could even use different approaches for each (e.g., the Community Lifeline of Water Systems may only involve a few stakeholders to address all components and sub-components, whereas Health and Medical may require distinct teams for each component). Since much of the Community Lifelines construct centers on or strongly relates to critical infrastructure, many of our stakeholders will be from the private sector. Hopefully these are partners you have engaged with before, but if not, this is a great opportunity to do so.

In meeting with each of these stakeholders/stakeholder groups, providing them with an orientation to the Community Lifelines concept will be important. Be sure to talk about this within the contexts of whole-community preparedness, public-private partnerships, critical infrastructure, and the five mission areas. This should include the expectation for these to be long-term working groups that will provide information updates before, during, and after a disaster. It will be important to obtain from each the following information (at minimum) for each function and/or facility:

  • Legal owners and operators
  • Primary and alternate points of contact (and contact info for each) (Note that these should be emergency/24 hour contacts)
  • Existing emergency plans
  • Protection activities
  • Prevention activities
  • Mitigation activities
  • Preparedness activities
  • Response and recovery priorities
  • Critical continuity and supply chain issues
  • Sensitive information concerns
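For jurisdictions that plan to catalogue this information digitally (in a GIS, a spreadsheet, or a simple database), here’s a minimal sketch of what a single record could look like. The field names below are purely illustrative assumptions on my part, not a FEMA standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class LifelineFacilityRecord:
    """Illustrative record for one facility or function under a Community
    Lifeline. Field names are hypothetical, not a prescribed standard."""
    lifeline: str               # e.g., "Water Systems"
    component: str              # e.g., "Potable Water"
    facility_name: str
    owner_operator: str         # legal owner/operator
    primary_contact: str        # 24-hour emergency contact
    alternate_contact: str
    existing_plans: list = field(default_factory=list)
    mitigation_activities: list = field(default_factory=list)
    response_priorities: list = field(default_factory=list)
    supply_chain_issues: list = field(default_factory=list)
    sensitive_info: bool = False  # flag records needing handling restrictions
```

However you store it, the point is consistency: every stakeholder group provides the same categories of information, so it can be aggregated and maintained over time.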

Processes will need to be mapped to identify how information will be obtained in an incident from the owners/operators of each facility or function, what information will be expected, in what format, and how often. Internal (EOC) procedures should identify how this information will be received, organized, and reported, and how it will influence operational priorities for response and recovery. Since the visual representation of the Community Lifelines is the face of the system, you should also determine the benchmarks within each Community Lifeline, component, and sub-component for differentiating between statuses (e.g., what failures will bring status from green to yellow, and from yellow to red) and how the status of one may influence the status of others.
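To make the benchmark idea concrete, here’s a minimal sketch of a status rollup in code. The worst-case aggregation rule and the component names are my own illustrative assumptions; a jurisdiction would define its own benchmarks and could just as easily weight components differently:

```python
# Status ordering: higher value = more degraded condition.
SEVERITY = {"green": 0, "yellow": 1, "red": 2}

def roll_up(component_statuses):
    """Aggregate component statuses into one lifeline status.

    Uses a worst-case rule: any red component turns the whole
    lifeline red. This rule is an assumption for illustration,
    not a FEMA-prescribed method.
    """
    return max(component_statuses, key=lambda s: SEVERITY[s])

# Example: a Transportation lifeline with three components
statuses = {
    "Highway/Roadway": "green",
    "Mass Transit": "yellow",
    "Railway": "green",
}
print(roll_up(statuses.values()))  # prints "yellow"
```

Whatever rule you choose, documenting it in the Implementation Plan is what matters: everyone reading the display should know exactly what a yellow or red lifeline means.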

Equipment (and Systems)

It is important to catalogue the information you obtain from preparedness activities as well as in implementation. Consider GIS integrations, as there is an abundance of information that involves geolocation. I’ll make a special shout out here to the Community Lifeline Status System (CLSS) project, which is funded by the DHS Science and Technology (S&T) Directorate and is being developed under contract by G&H International. When rolled out, the CLSS will be available at no cost to every jurisdiction in the US to support Community Lifeline integration. Having been fortunate enough to get a private in-depth tour of the system, I’m thoroughly impressed. The CLSS is based on ArcGIS and provides a lot of customizable space to store all this preparedness information.

Using a system such as CLSS to display and share Community Lifelines information is also a benefit. While most displays I’ve seen simply show the icon and status color for each Community Lifeline, an interactive dashboard type of system can help provide additional context and important information. This is something CLSS also provides.

Training and Exercises

As with any new plans or processes, training is an important part of supporting implementation. Training audiences will include:

  • EOC personnel
  • Owners/Operators of Community Lifelines infrastructure
  • Officials who will receive Community Lifelines information

Proper training requires that each audience receive training tailored to address its specific needs.

Similarly, exercises should purposely test these processes, and use of Community Lifelines should be incorporated into exercises often, with Community Lifelines status and information engaged to inform and support decision making.

///

If you already know the benefits of the Community Lifelines, hopefully you see the advantages of adequate preparedness to get the most out of them. The stakeholder groups you assemble to support planning should be everlasting, as information on their vulnerabilities, capabilities, and activities is likely to change over time. Beyond direct Community Lifeline applications, these are all great partners for a variety of emergency management activities to support the whole community. The preparedness efforts, and maintenance thereof (sorry, but it’s not just a one-time thing), are a significant investment (and could likely be a full-time job for even a moderately sized jurisdiction), but they should pay incredible dividends over and over again.

Are you using Community Lifelines? What have you learned about the need to prepare for their use?

© 2024 Tim Riecker, CEDP

Emergency Preparedness Solutions, LLC®

CDC Forgot About Planning

In late February, CDC released the highly anticipated notice of funding opportunity (NOFO) for the 2024-2028 Public Health Emergency Preparedness (PHEP) grant. The general concept of the grant wasn’t a big surprise, as they had been promoting a move to their Response Readiness Framework (RRF). The timing of the new five-year grant cycle seems ideal to implement lessons learned from COVID-19, yet they are falling short.

I’ve reflected in the past on the preparedness capability elements of Planning, Organizing, Equipment/Systems, Training, and Exercises (POETE). I also often add Assessing to the front of that (APOETE). These preparedness elements are essentially the buckets of activity through which we categorize our preparedness activities.

In reviewing the ten program priorities of the RRF, I’m initially encouraged by the first priority: Prioritize a risk-based approach to all-hazards planning. Activity-wise, what this translates to in the NOFO is conducting a risk assessment. Solid start. Yet nowhere else is planning overtly mentioned. Within the NOFO some of the other priorities reflect on ensuring certain things are addressed in plans, such as health equity, but there is otherwise no direct push for planning. Buried within the NOFO (page 62) is a list of plans that must be shared with project officers upon request (under the larger heading of Administrative and Federal Requirements) but the development of any of these plans does not correlate to any priorities, strategies, or activities within the document.

As for the rest of APOETE, there is good direction on Organizing, Equipment and Systems, Training, and Exercises. While that’s all great, planning is the true foundation of preparedness and it is so obviously left out of this NOFO. Along with my general opinion that most emergency plans (across all sectors) are garbage, the vast majority of findings from the numerous COVID-19 after-action reports I’ve written (which included two states and several county and local governments) noted the significant need for improved emergency plans. Further, the other preparedness elements (OETE) should all relate back to our plans. If we aren’t developing, improving, and updating plans, then the other activities will generally lack focus, direction, and relevance.

Understanding that this is the first year of a five-year grant cycle, some changes and clarification will occur as the cycle progresses, but as planning is a foundational activity, it should be immediately and directly tied to the results of the assessment this year’s grant calls for. Otherwise, the findings of the assessments are generally meaningless if we aren’t taking action and developing plans to address them. This leaves us with a significant gap in preparedness. Someone at CDC didn’t think this through, and it leaves me with a great deal of concern, especially in the aftermath of the COVID-19 response.

What are your thoughts on this?

© 2024 Tim Riecker, CEDP

Emergency Preparedness Solutions, LLC®

Properly Applying ICS in Function-Specific Plans

As with many of my posts, I begin with an observation of something that frustrates me. Through much of my career, as I review function-specific plans (e.g., shelter plans, point of distribution plans, debris management plans, mass fatality incident management plans), I see a lot of organization charts inserted into those plans. Almost always, the org chart is an application of a ‘full’ Incident Command System (ICS) org chart (Command, Command Staff, General Staff, and many subordinate positions). This is obviously suitable for a foundational emergency operations plan (EOP), an emergency operations center (EOC) plan, or something else that is very comprehensive in nature, where this size and scope of an organization would be used, but function-specific plans are not that. This, to me, is yet another example of a misinterpretation, misunderstanding, and/or misuse of the principles of the National Incident Management System (NIMS) and ICS.

Yes, we fundamentally have a mandate to use ICS, which is also an effective practice, but not every function and facility we activate within our response and recovery operations requires a full organization or an incident management team to run. The majority of applications of a function-specific plan are within a greater response (such as activating a commodity POD during a storm response). As such, the EOP should have already been activated and there should already be an ‘umbrella’ incident management organization (e.g., ICS) in place – which means you are (hopefully) using ICS. Duplicating the organization within every function isn’t necessary. If we truly built out organizations according to every well intentioned (but misguided) plan, we would need several incident management teams just to run a Type 3 incident. This isn’t realistic, practical, or appropriate.

Most function-specific plans, when activated, would be organized within the Operations Section of an ICS organization. There is a person in charge of that function – depending on the level of the organization in which they are placed and what the function is, there is plenty of room for discussion on what their title would be, but I do know that it absolutely is NOT Incident Commander. There is already one of those, and the person running a POD doesn’t get to be it.

As for ‘command staff’ positions, if there is really a need for safety or public information activity (I’m not even going to talk about liaison) at these levels, these would be assistants, as there is (should be) already a Safety Officer or PIO as a member of the actual Command Staff. Those working within these capacities at the functional level should be coordinating with the principal Command Staff personnel.

As for the ‘general staff’ positions within these functions, there is no need for an Operations Section, as what’s being done (again, most of the time that’s where these functions are organized) IS operations. Planning and Logistics are centralized within the ICS structure for several reasons, the most significant being an avoidance of duplication of effort. Yes, for all you ICS nerds (like me) there is an application of branch-level planning (done that) and/or branch-level logistics that can certainly be necessary for VERY complex functional operations, but this is really an exception and not the rule – and these MUST interface with the principal General Staff personnel. As for Finance, there are similarly many reasons for this to be centralized within the primary ICS organization, which is where it should be.

We need to have flexibility balanced with practicality in our organizations. We also need to understand that personnel (especially those trained to serve in certain positions) are finite, so it is not feasible to duplicate an ICS structure for every operational function, nor is it appropriate. The focus should be on what the actual function does and how it should organize to best facilitate that. My suggestion is that if you are writing a plan, unless you REALLY understand ICS (and I don’t mean that you’ve just taken some courses), find someone who (hopefully) does and have a conversation with them. Talk through what you are trying to accomplish with your plan and your organization; everything must have a purpose so ask ‘why?’ and question duplication of effort. This is another reason why planning is a team sport and it’s important to bring the right people onto the team.

© 2024 Tim Riecker, CEDP

Emergency Preparedness Solutions, LLC®