The EOC is About Bureaucracy, not Response

Take a deep breath. It’s going to be OK. Really.

The National Incident Management System (NIMS) gives us the definition of an Emergency Operations Center (EOC) as “locations where staff from multiple agencies typically come together to address imminent threats and hazards and to provide coordinated support to incident command, on-scene personnel, and/or other EOCs.”

I’ll agree with this, but I’ll also suggest this isn’t a complete definition.

EOCs are really about building a bridge between emergency needs and our daily bureaucracy. In the context of response, we have a given set of agencies and organizations, such as first responders, that operate in that arena almost exclusively. Their bureaucracies are actually built around their response missions: they are generally built for speed, with procedures and policies to support it. But during a large emergency or disaster, the needs of the incident exceed the capabilities of these response forces, requiring other, less traditional agencies and organizations to provide not only support, but services as well. While we have seen an increase in these ‘non-traditional’ responders becoming involved, it doesn’t occur with enough frequency to make it a standard of practice. Rather, the focus in these agencies and organizations is still their daily missions, the vast majority of which are not disaster related. Their bureaucracies are built for the day-to-day, not for speed. This is all OK. Bureaucracy isn’t a four-letter word, but in the world of emergency and disaster management we need to understand why certain bureaucracies are built the way they are and figure out how to flex them to shorten reaction time.

Enter the EOC – a room (physical, virtual, or hybrid) where the intent is for bureaucracy and speed to awkwardly coexist. While our traditional response agencies and their counterparts (often at higher levels of government) will always be needed to contribute, the EOC isn’t built just for them. The EOC is also built for those who don’t respond with lights and sirens, but are just as important to supporting our communities during times of disaster.

We need to consider that disasters offer extraordinary circumstances with problems that can’t be solved by traditional means. We need to be creative. We also need to recognize how interconnected all facets of our community lifelines are. In order to conquer the extraordinary, we need everyone. We need to identify the capabilities and capacities held by agencies and organizations we might not typically see involved in an incident. Take a look through the list of departments your own city, village, or county has. The Clerk’s Office? The Planning Department? The Purchasing Department? The Office for Mental Health? The IT Department? Office for the Aging? Child and Family Services? Human Resources? Weights and Measures? There are so many more.

The intent of the EOC is to bring together representatives of these agencies and organizations to help streamline their assistance and support. The EOC should cut the proverbial red tape, but key to that is ensuring that each organization is properly represented. With no disrespect intended toward middle managers, as they often are the ones who really run an organization, EOCs require representation from executive-level leadership of these agencies and organizations. The EOC needs the people who have authority to cut through red tape when required.

So how do we approach this in emergency management? APOETE.

Assessing – Seek first to understand. Identify what agencies and organizations may be needed and when. What do they have? What can they do? What are their limitations?

Planning – Integrate them into emergency planning and encourage them to develop their own emergency plans that address how they can work within their own bureaucracies.

Organizing – Meet with them individually and collectively. Bring representatives onto working groups focused on preparedness, response, and/or recovery (consider the Community Lifelines as a place to start). This promotes mutual understanding and inclusiveness.

Equipping and Systems – Ensure that all partners have access to the systems used to support incident management.

Training and Exercises – Broaden the invite lists for training and exercises to help these partners gain knowledge and become more involved.

In the end, it’s about working together toward a common cause, aka unity of effort. To maximize the utility of our EOCs, we need to stop looking at an EOC through the lens of the first responder. Flip that perspective and begin looking at the EOC through the lens of government bureaucracy. Consider what these partners need to be successful. How and when can we streamline? Don’t try to turn them into first responders – that’s the wrong expectation. Rather, we need to meet them where they are, respect what they do, and understand why they have certain protocols in place. That will give us a foundation of understanding to work from.

End note: I’ll also suggest that this reality is another reason why ICS-based organizational models for EOCs are less than effective. The organization of an EOC needs to serve a different purpose than what we often try to force it into. Check out the Incident Support Model as a great alternative.

© 2025 Tim Riecker, CEDP

Emergency Preparedness Solutions, LLC®

Revisiting POETE

One of my most popular posts has been my original post on POETE from July 2014. In the 11+ years that have passed since that post, I continue leveraging the concept in every way I can. In case you’ve not heard of the concept, I certainly urge you to click the link above and read my original post. Briefly, POETE stands for Planning, Organizing, Equipping, Training, and Exercises. These are collectively known as the elements of preparedness. POETE is more than a checklist to me. It’s a strategic lens for realistically building and sustaining capabilities. Whether you are building a new emergency operations plan, launching a new public health preparedness initiative, or refining multiagency coordination activities, POETE offers a structured way to think through what it takes to help ensure these endeavors are implementation-ready.

While I’ve written on these in the past, my continued and diverse application of POETE has broadened my perspective on application, so here are some fresh thoughts.

Assessing – On occasion, I throw an A in front of the acronym for Assessing. While assessments are an early activity of Planning, there are also plenty of stand-alone assessment activities which should be regarded in their own right. Assessments can and should inform everything else we do in preparedness. Good assessments can provide us with justification for certain activities and investments and can often give us a data-driven approach. Along with many of the risk assessments common across emergency management, like the Threat and Hazard Identification and Risk Assessment (THIRA), I’d also suggest that (well written) after-action reports (AARs) can do the job. A well-developed AAR for an incident, event, or exercise can provide objective analysis of observed activities or discussions. When writing an AAR, we should always keep in mind that part of achieving the goal of improvement may involve requests and justifications for funding.

Planning – I’ve written a lot on the topic of emergency planning through the years. Overall, my take on most emergency plans is that they suck. Horribly. They aren’t worth the time, money, or effort invested in writing them. So many people go about it wrong. A true plan needs to be a blueprint for action. Good plans are operationally-focused, guiding decisions and actions. They should not just be theory and policy, as so many are. At best, I’d call something like that a framework, but it’s certainly not a plan.

Organizing – Organizing is largely about structure, roles, and responsibilities, but you can’t even get there without first building relationships and partnerships. Everything we do in emergency management is about relationships. It’s about knowing who has the thing you need – be it a physical resource, specialized knowledge, or specific authority. Last week I wrote a new piece on Community Lifelines. The central activity of doing anything with Community Lifelines is building relationships. Once those relationships are in place, then other activities will follow.

Equipping – I’ve always been very big on tools matching the mission. Equipment in this context means any and all resources available to us. The key aspect of this is alignment. Are the tools we use matching up to our threats, our people, and our procedures? While it’s understandable to have to update procedures to match a new resource, we should be very cautious about the resource dictating procedure. Our resources need to work for us, not the other way around.

Training – I feel like we have been gradually moving away from compliance being the center of the training universe. Yes, there is still plenty of training that is required for various purposes – there should be and there will always be. But I’ve been getting more requests from clients to develop custom training because they realize that little to no training exists to meet their needs. More people are realizing, for example, that ICS training is absolutely not the right fit for EOC staff. Similarly, they are realizing that existing EOC training might begin to approach their needs, but the implementation of their specific EOC model really requires customized training. Overall, training needs to be role-based. We need to train people in what we want them to do. We need to give them the knowledge to succeed, not just generalized training for a broad group in hopes that people will be able to ascertain what pertains to them and what does not. We also need to realize that, since most training in emergency management is response-oriented, the things people are being trained to do are things they don’t do often and/or don’t do under pressure. So frequency of training and job aids are essential to their success.

Exercises – The thing I do the highest volume of. Luckily, I love to do them! Exercises are about testing our plans and capabilities before they are tested for real. Pay attention to good exercise design and never forget that the end product is a worthwhile AAR. I still see so many softball AARs out there – AARs that pat people on the back for a job well done while only acknowledging the superficial opportunities to improve, oftentimes because they don’t want to hurt anyone’s feelings. I don’t ever write an AAR for the purpose of offending anyone, but if we don’t expose what doesn’t work, the chances of it ever being addressed are so much lower than if we had documented it.

While we have the acronym of (A)POETE, it’s important to keep in mind that it’s not intended to be a linear process. It’s iterative and constantly in need of attention. Each component is informed by the others. While I generally believe that Planning is still the foundation of preparedness and it should heavily influence all other elements, those other elements can still influence Planning. POETE activities should be used to build our capabilities. These activities help us prepare with purpose, focus, and intent.


Replacing ESFs with Community Lifelines

I’ve written previously about my concerns with using Emergency Support Functions in many state, local, territorial, and tribal (SLTT) emergency operations centers. The ESF structure was never intended for SLTT use, and while it may have some successes with the largest of the states and metropolitan areas, it’s generally not a good fit.

I’ve also written previously about Community Lifelines and the benefits thereof. Consider, however, that Community Lifelines, while designed originally for the organization of information by FEMA regional offices when they monitor a disaster, can have much broader applicability. We can and should be using Community Lifelines across every phase / mission area of Emergency Management.

Lately I’ve been having more and more conversations about Community Lifelines with clients, at conferences, and with others who are interested in learning more about them and how to use them. Across emergency management we often find or are provided with approaches to problems that are single-use. We should regularly explore opportunities to expand those single-use applications, increasing the utility of the concept at hand. Given the shortcomings of ESFs for most jurisdictions and the much broader applicability of Community Lifelines for every jurisdiction under the sun, I suggest that Community Lifelines can not only be operationalized as a viable replacement for ESFs, they can do so much more. Here are my arguments in support of replacing ESFs with a Community Lifelines-driven organization in SLTT EOCs as well as emergency management programs as a whole:

  1. Community Lifelines are community-focused and more comprehensive of the needs of a community, whereas ESFs are driven by functions which may have limited capabilities or capacities in any given jurisdiction.
  2. Community Lifelines can be operationalized just like ESFs, with primary and support agencies and organizations.
  3. Community Lifelines are focused on stabilizing critical services with built-in mechanisms for assessing impacts and establishing priorities.
  4. Community Lifelines more directly support the inclusion of the private sector, along with government, NGOs, and quasi-government owners/operators.
  5. Community Lifelines better support preparedness and resilience initiatives.
  6. Community Lifelines provide us with a basis for measuring progress across all phases or mission areas. The only thing we can measure in an ESF is what we might have available to leverage in a response.
  7. Community Lifelines connect resilience, response, and recovery since they are the common focal point. While the National Response Framework and National Disaster Recovery Framework still have national relevance, the transition from ESFs to Recovery Support Functions (RSFs) is challenging at best.
  8. The inclusion of Community Lifelines in our EOC structure is easy and agnostic to the organizational model used in the EOC. ESFs include functions that are part of the typical overhead management of an EOC, such as ESF 5 (Information and Planning), ESF 7 (Logistics), and ESF 15 (External Affairs), which is an awkward integration.
  9. Community Lifelines lend to better partnerships and preparedness. The ESF plans of most jurisdictions are truly little more than a general scope of the ESF with a list of participating agencies and organizations.

We need to change our mindset of emergency management being centered on response. Yes, response is the big shiny thing. It’s the thing we practice for and anticipate. A more holistic and comprehensive approach is available to us, however, by using Community Lifelines as the foundation of our work. I suggest that jurisdictions develop Community Lifeline Implementation Plans, which are fundamentally strategic plans identifying how Community Lifelines can be used in Prevention/Protection/Mitigation, Preparedness, Response, and Recovery. Consider how the relationships forged with the owners/operators of Community Lifeline partners can support each of those phases and activities, increasing the resiliency of our community as a whole by making each partner more resilient; and by understanding and preparing for the response and recovery needs of our community through the collective effort of Community Lifeline partners.

Emergency management is more than response. It is a comprehensive effort to support our communities before, during, and after disaster.


Cutting Grant Funds Cuts National Practices

As some rumors become reality for the current fiscal year and budget memos are leaked for the coming fiscal year, one thing is clear – states, local governments, tribal governments, and territories (SLTTs) will be receiving significantly less federal grant funding for preparedness. While some programs are expected to be outright eliminated, others are being reorganized and refocused with significant budget cuts. While not all change is bad, there is a significant shift in preparedness priorities that is largely politically motivated and lacking foundations in reality. I wrote last month on the Future of the US Emergency Management System, which focuses mostly on FEMA-centric topics, but we are also seeing and expecting major cuts to public health emergency preparedness (PHEP) grant funds, the elimination of certain PHEP programs, and indirect impacts to PHEP from cuts to other public health programs.  Similar cuts are also expected with the Hospital Preparedness Program (HPP). While I don’t think preparedness funds will be completely cut, the impacts will be significant until SLTTs are able to adjust their own budgets to address what priorities they can.  

Grant funding, however, is not only to the direct advantage of the recipients. Compliance with grant rules has long supported national standards (note that I use this term loosely. See this article for more information). FEMA preparedness grants, PHEP grants, and HPP grants, among others, have required the adoption of the National Incident Management System (NIMS), the use of the Homeland Security Exercise and Evaluation Program (HSEEP), national focus on certain threats or hazards, and reasonable consistency in building and sustaining defined capabilities. Grants have been the proverbial carrot that encouraged compliance and participation. While some of the results have been poorly measured (see my annual commentary on the National Preparedness Reports), the benefits of others have been much more tangible. Keeping things real, compliance with many of these requirements by some recipients may have been lackluster at best. Enforcement of these requirements has been practically non-existent (despite rumors of the “NIMS Police” circulating for years), which I think is a shame. That said, I think most recipients worked to meet most requirements in good faith; perhaps partly because someone’s signature attested to it, but I think mostly because many of these requirements were viewed as best practices. As such, even if the requirements go away along with the carrot of grant funding, I think many jurisdictions will continue implementation.

All this, however, looks at past requirements. But what of new practices that would benefit from nation-wide implementation? I fear that without practices being required as part of a grant, adoption will be minimal. We would have to count on several factors for adoption to take place.

1) Emergency managers would need to be informed of the practice and the benefits thereof. Let’s be honest, most emergency managers are not well informed of new practices and concepts. Often, they simply don’t have the time to do any more than what they are doing, but unfortunately some may not care. Agencies like FEMA have also been notoriously bad at circulating information on new programs, practices, and concepts.

2) Emergency managers would have to agree that the practice can be beneficial to them.

3) Emergency managers would need the resources (time, staff, funds, etc.) to actually implement the practice.

4) In a multi-agency environment, partner agencies are more willing to support activities if they are told it’s a grant requirement – even if it’s not their own grant requirement. They may be reluctant to commit resources to something that is simply perceived as a good idea.

There are certainly a number of challenges ahead for emergency management in the broadest of applications. What I discuss here only scratches the surface. Let’s not lose sight of the benefits of best practices and standards, even if no one is telling us we need to adhere to them. That’s a hallmark of professionalism. We need to collectively advocate for our profession and the resources necessary to perform the critical functions we have. We need to take the time to advocate and to be deliberate in our actions. We need to secure multiple funding streams from every level of government possible. We need to identify efficiencies and leverage commonalities among partner agencies. Yes, lend your voice to the national organizations, but know that it’s up to you to advocate in your municipality, county, and state – and those efforts are now more important than ever.


Taking the Reins

Through the past several years of my blog, the central theme of my posts has really been to ask ‘why?’. Why do we do the things we do in emergency management? Why do we accept things as they are? Why haven’t we endeavored to change, update, or improve upon some of these things that range from, at best, mediocrity to, at worst, absolute crap?

A boss of mine many (so many) years ago taught me the concept of ‘ask why five times’ if you want to get to the root of anything. Of course, you need to seek the proper people to ask or sources to conduct your research, but the concept still stands – often we can’t just ask ‘why’ once and expect that one answer to explain everything for us.

Our field of practice is filled with so many things which can be considered standards. They may be true standards, such as NFPA 1660, or simply a de facto standard – something that has become widely accepted in practice, such as CPG 101.

Standards are a double-edged sword. On the better side, they give us commonality. We can expect that, if reasonably applied, the outputs will have substantial similarity and will, at minimum, meet a base-line expectation. Consistency is generally viewed as good and beneficial in largely any application. On the other hand, standards can stifle innovation. They can encourage laziness. They often promote shortcuts like templates, which, while there are benefits, largely remove the inclination of critical thinking from the work that is done and assume that all applications can fit within someone else’s concept of how things should be.

As we face a significant possibility of a number of de facto standards from FEMA no longer being maintained due to changes in focus and reduction in force – things like the Homeland Security Exercise and Evaluation Program (HSEEP), CPG 101, and even the National Incident Management System (NIMS) – how will things be done in what may become a new era of emergency management?

There are some who are shilling the downfall of emergency management. While I don’t think this extreme is quite realistic, there will most certainly be some significant changes and impacts to which we must adapt. In the realm of standards (and likely other gaps created), I feel the profession will realize the need to take care of itself, taking a path of self-determination and filling a role that has, to date, been most successfully filled by FEMA. Early on, in the absence of a central coordinating entity (FEMA) maintaining these de facto standards, we will see several disparate efforts of upkeep, with results likely following a bell curve of quality – most will be deemed reasonable, though outliers will exist on both ends of the spectrum, with one side being garbage and the other fairly inspired and progressive. Here enters opportunity. Opportunity for improvement, innovation, different perspectives, and simply seeking better ways of doing things. Though this process raises some questions – Whose version will reign supreme? What authority does the author have to publish any given standard? And is some measure of authority even required for a standard to be, even unofficially, adopted by the profession?

I feel that regardless of this circumstance, we must periodically examine our standards of practice. Ask ‘why?’ five times (or really however many times is necessary). This can range from asking the same question over and over until you get to the foundational answer you are seeking, to asking a chain of related questions that poke at different sides of the standard. Consider questions like ‘Why does the standard exist?’, ‘Why does the standard exist as it is?’, ‘How did this standard evolve?’, ‘What are the strengths of the current standard?’, ‘What are the weaknesses of the current standard?’, and ‘What can we do better and how?’.

There has been some effort lately (also spearheaded by FEMA) toward the concept and implementation of continuous improvement. Standards should also fit within this movement. Standards need to evolve and change and support the practice, though they should be constructed in such a fashion that does not limit a range of application (i.e. can it be used by states as well as small towns? Does it need to be?) or stifle innovation. And while evolution is necessary, I’ll also caution against wholesale change – unless a truly better way is developed and validated. Standards should not change based simply on someone’s good idea, a different perspective, or political influence. Standards (true or de facto) or any part thereof and in any industry should be peer developed and peer reviewed. Changes need to be carefully considered, but also not feared. While I feel FEMA has been a good steward of our standards of practice, that time may be coming to an end, at least for a while. The standards of practice across emergency management must be maintained if this disruption comes to fruition. This is a challenge. This is an opportunity. This is a necessity. We must rise to the occasion.


2024 National Preparedness Report – Another Missed Opportunity

The annual National Preparedness Report (NPR) is a requirement of Presidential Policy Directive 8, which states that the NPR is based on the National Preparedness Goal. The National Preparedness Goal, per the FEMA website, is “A secure and resilient nation with the capabilities required across the whole community to prevent, protect against, mitigate, respond to, and recover from the threats and hazards that pose the greatest risk.” The capabilities indicated in the National Preparedness Goal are specifically the 32 Core Capabilities.

The 2024 NPR is developed to reflect data and information from 2023. As with previous NPRs, I have a lot of concern about the ultimate value of the document. While I’m sure a lot of time, effort, and money was spent gathering an abundance of data from across the nation to support this report, this year’s report, following the unfortunate trend of its predecessors, doesn’t seem to be worth the investment. As with the others, this report falls short on adequate scope, information, and recommendations. Certainly, there is a challenge to be acknowledged of not only gathering a massive quantity of information from across the country but also examining and reporting this information in aggregate, as most federal reports are burdened to do. That said, I see little excuse to not provide a meaningful report.

In this year’s report, following the introductory materials, is a section on risks, which is largely a reflection of the high impact disasters of 2023 seen across the US; the most challenging threats and hazards; and the intersections of risk and vulnerability. All in all, this is an adequate snapshot of these topics in summary, with some solid points and a level of analysis that I would expect through the rest of the document, which includes trends over time, and identification of factors which influence the findings. There are several maps and charts which provide good data visualization and several mentions of bridging data between agencies such as FEMA, NOAA, and CDC. A good start.

The next section is Capabilities. This section has two areas of narrative – community preparedness and individual and household preparedness. Given the significant efforts to bolster capabilities throughout the federal government and in state, tribal, and territorial governments, those levels are conspicuously missing if ‘communities’ is taken to mean only local governments, so I’m not sure why this is specifically titled community preparedness. Does it not include the efforts of states or others? Page 18 of the report provides a chart similar to what we’ve seen in previous reports which shows how much money was spent on each capability (in communities… again, what does this include or exclude?) for 2023. The chart also indicates the percentage of communities achieving their capability targets.

As with reports from previous years, I ask: So what? This is a snapshot in time, lacking context. A trend analysis accounting for at least the past several years would be quite insightful, as would some description of what the funds within each capability were primarily spent on – broadly planning, organizing, equipping, training, and exercises, but I’d like to see even more specifics. There are a few random examples in the narrative, but a lot is still lacking. I’d also like to see some analysis of the relative success or value of these investments. In regard specifically to the percentage of communities who feel they have achieved their capability target, I have to eye roll a bit, as this is often the most subjective (and sometimes smoke and mirrors) aspect of the Threat and Hazard Identification and Risk Assessment (THIRA). This chart currently has little value other than a ‘gee whiz’ factor of seeing how much money is spent on each capability.

I’ll also include a specific observation of mine here: the Core Capability of Mass Care Services, which in the previous year’s report was indicated as a high-priority capability, continues a trend (I’m only aware of the trend from looking back at previous reports since trend data is not included in the report) of having one of the lowest achievement percentages and investments. I’m hopeful that’s why it’s included in the next section as a focus area.

The other area of narrative in the Capabilities section is individual and household preparedness. All in all, the information presented here is fine and even includes a slight bit of trend analysis, though in 2023 a much more comprehensive reporting of this information was provided under separate cover. I think an improved version of something like the 2023 report should be incorporated into the NPR.

The next section of the NPR is Focus Areas, which includes the Core Capabilities of Mass Care Services, Public Information and Warning, Infrastructure Systems, and Cybersecurity. Each focus area includes narrative on risk, capabilities and gaps, and management opportunities – which all provide great information. There is a brief mention of how these focus areas were selected. While I’m fine with having a deeper analysis of certain focus areas, I think the NPR should still provide a comprehensive review of all Core Capabilities.

While the management opportunities listed for each of the four focus areas are essentially recommendations, the report itself provides only two recommendations labeled as such. These recommendations are identified in the document’s introduction with a bit of narrative (and in the conclusion with none) that thankfully provides some suggestions for actionable implementation, but I was left feeling both surprised and disappointed that the National Preparedness Report, which really should provide an analysis of all 32 Core Capabilities that serve as the foundation for the nation’s preparedness goal, has only two recommendations for improving our preparedness. Two. That’s it. There should be an abundance of recommendations. This is the information that emergency managers and decision-makers within federal agencies and state, local, tribal, and territorial (SLTT) governments need. Another missed opportunity to provide value.

The 2024 NPR is extremely similar to the past several years in format and general content, and as such I’m not surprised by the lack of value. I continue to stand by my statement across these past several years in regard to this report: the emergency management community should not be accepting this type of reporting. While I recognize that PPD-8 defines the audience for this report as the President and the Secretary of Homeland Security, the utility of such a report can and should have a much broader reach across all of emergency management, and ideally to taxpayers as well, who should be able to access better information on how their tax dollars are spent on preparedness – which impacts everyone. States, UASIs, and other entities who submit information annually for this report should also be disappointed that this is what is published about their hard work, and the emergency management membership organizations should be demanding better. This report has the potential to be meaningful, insightful, and influential, yet FEMA misses the opportunity every single year. The data exists, and the stories of the activities, accomplishments, and gaps can all be told. With the application of some reasonable analysis and recommendations, the document could be much more impactful.

It’s been said by many that emergency managers are notorious for not marketing well, and this document is proof positive of that. Those of us working in this profession know there is so much more to be examined and described that can tell not only of what we have accomplished but also of the work to be done. We find ourselves in a time when the purpose and value of FEMA is being questioned by a number of people; a time when some inefficiencies, missteps, and even failures are being put under a very critical microscope and seemingly being used to fuel a suggestion of eliminating FEMA. Greater efficiencies can certainly be identified and gaps addressed, but our reluctance to tell the stories of what we do clearly lends to the misunderstandings and severe lack of awareness that exist about our field of practice – one in which there is no organization of greater prominence and importance than FEMA. While the NPR is not at fault for these shortcomings, it is a contributor. When reports like this miss opportunities to do more and be more year over year, the effect snowballs and becomes a much greater issue. We need to do better.

© 2024 – Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC®

Five Domains of Incident Management

Earlier this summer, RAND, under contract to CDC as part of a five-year project examining and assessing incident management practices in public health, developed and released the Incident Management Measurement Toolkit. Overall, I think the tool developed is a solid effort toward standardizing the evaluation of incident management. The tool guides a depth of examination into incident management practices. It can be a bit daunting at a glance, but the methodology of evaluation is generally what I’ve been practicing over the past several years for developing incident and event AARs. I’d also suggest that it’s scalable in application.

I feel it’s important to note that incident management teams involved in non-public health applications were also engaged in the research. The outcomes of the project and the inclusion of non-public health incident management practices in the research indicate to me that this tool can be applied broadly and not limited to public health applications.

Serving as a foundation for the assessment tool and methodology are five Domains of Incident Management identified by the project team. Summarized with their key activities, these include:

  1. Situational Awareness and Information Sharing – Perception and characterization of incident-related information to identify response needs.
  2. Incident Action and Implementation Planning – Ongoing articulation and communication of decisions in coherent incident action plans.
  3. Resource Management and Mobilization – Deployment of human, physical, and other resources to match ongoing situational awareness, identification of roles, and relevant decisions.
  4. Coordination and Collaboration – Engagement and cooperation between different stakeholders, teams, and departments in managing the incident.
  5. Feedback and Continuous Quality Improvement – The need for ongoing evaluation and refinement of incident management processes.

In consideration of these domains, I think the activities inherent within them are fairly agnostic of the type of incident management system (e.g. ICS) used. I also think these same domains can be applied to recovery operations, again regardless of the system or organization being utilized, as well as the principal practice at work (public health, emergency management, fire service, law enforcement, etc.).

I’ve been intending to write about these domains for a while, but each time I considered them, something stood out to me as being a bit askew. I finally realized that these really aren’t domains that encompass all of incident management. Rather, these domains are better associated with an incident management system, such as the Incident Command System (ICS). The first three domains are very clearly applied directly to an incident management system, and the fourth is the general concept of multiagency coordination, which is a common concept of incident management systems. The last domain is simply quality management which is certainly integral across various incident management systems.

While I don’t believe my view undermines the tool’s value, it highlights the need for a clearer understanding of its limitations. An incident management system, like ICS, is just one part of incident management and doesn’t cover all related activities. Some tasks in incident management, such as setting priorities, decision-making, troubleshooting, and dealing with political and social issues, are often not directly related to the tactical management systems we use. Additionally, many important aspects of incident management fall within leadership and aren’t covered by the NIMS doctrine or the Planning P. Although organizing resources is a central part of incident management, there are many other activities, not addressed in a tactical response, that may influence tactical applications but are not part of a defined incident management system. While one could argue these activities fit into the five identified domains, I feel this analysis doesn’t provide a complete picture of a complex response. More information would be needed.

That said, I really like this toolkit. I think it provides a structured mechanism for evaluating common practices of incident management systems, which itself can provide a foundation for a more comprehensive assessment of incident management. That comprehensive assessment, beyond the incident management system, is also more anecdotal and often requires persons experienced in asking the right questions and clarifying perspectives and opinions – things that ultimately can’t be done (or at least done easily) with an assessment tool.

So regardless of the nature of your incident, consider using the Incident Management Measurement Toolkit as part of your AAR process.

What are your thoughts on the RAND tool? Have you used it? What do you think of the five domains they have identified?

©2024 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC®

ICS: Problems and Perceptions

Oddly enough, I’ve recently seen a spate of LinkedIn posts espousing the benefits of the Incident Command System (ICS). Those who have been reading my material for a while know that I’m a big proponent of ICS, though I am highly critical of the sub-par curriculum that we have been using for decades to teach ICS. The outcome is an often poorly understood and implemented system resulting in limited effectiveness.

Yes, ICS is a great tool, if implemented properly. Yet most implementations I see aren’t properly conducted. To further muddy these waters, I see emergency plans everywhere that commit our responders and officials to using ICS – this is, after all, part of the National Incident Management System (NIMS) requirement that many have – yet they don’t use it.

So why isn’t ICS being used properly or even at all? Let’s start with plans. Plans get written and put up on a proverbial shelf – physical or digital. They are often not shared with the stakeholders who should have access to them. Even less frequently are personnel trained in their actual roles as identified and defined in plans. Some of those roles are within the scope of ICS while some are not. The bottom line is that many personnel, at best, are only vaguely familiar with what they should be doing in accordance with plans. So, when an incident occurs, most people don’t think to reference the plan, and they flop around like a fish out of water trying to figure out what to do. They make things up. Sure, they often try their best, assessing what’s going on and finding gaps to fill, but without a structured system in place, and in the absence of (or failure to reference) the guidance that a quality plan should offer, efficiency and effectiveness are severely decreased, and some gaps aren’t even recognized or anticipated.

Next, let’s talk about ICS training. Again, those who have been reading my work for a while have at least some familiarity with my criticism of ICS training. To be blunt, it sucks. Not only does the content of courses fail to align with course objectives, but the curriculum overall doesn’t teach us enough of HOW to actually use ICS. My opinion: We need to burn the current curriculum to the ground and start over. Course updates aren’t enough. Full rewrites – a complete reimagining of the curriculum and what we want to accomplish with it – need to take place.

Bad curriculum aside… For some reason people think that ICS training will solve all their problems. Why? One reason, I believe, is that we’ve oversold it. Part of that is most certainly due to NIMS requirements. Not that I think the requirements, conceptually, are a bad thing, but they lead people to believe that if it’s the standard we are all required to learn, it MUST be THE thing we need to successfully manage an incident. I see people proudly boasting that they’ve completed ICS300 or ICS400. OK, that’s great… but what can you actually do with that? You’ve learned about the system, but not so much how to actually use it. Further, beyond the truth that ICS training sucks, it’s also not enough to manage an incident. ICS is a tool of incident management. It’s just one component of incident management, NOT the entirety of incident management. Yes, we need to teach people how to use ICS, but we also need to teach the other aspects of incident management.

We also don’t use ICS enough. ICS is a contingency system. It’s not something we generally use every day, at least to a reasonably full extent. Even our first responders only use elements of ICS on a regular basis. While I don’t expect everyone to be well practiced in the nuances and specific applications of ICS, we still need more practice at using more of the system. It’s not the smaller incidents where our failure to properly implement ICS is the concern – it’s the larger incidents. It’s easy to be given a scenario and to draw out on paper what the ICS org chart should look like to manage it. It’s a completely different thing to have the confidence – and the ego in check – to make the call for additional resources; not the tactical ones, but people to serve across a number of ICS positions. Responders tend to have a lot of reluctance to do so. Add to that the fact that most jurisdictions simply don’t have personnel even remotely qualified to serve in most of those positions. So not only are we lacking experience in using ICS on larger incidents, we also don’t have experience ‘ramping up’ the organization for a large response. An increase in exercises, of course, is the easy answer, but exercises require time, money, and effort to implement.

One last thing I’ll mention on this topic is about perspective. One of the posts I read recently on LinkedIn espoused all the things that ICS did. While I understand the intent of their statements, the truth is that ICS does nothing. ICS is nothing more than a system on paper. It takes people to implement it. ICS doesn’t do things; PEOPLE do these things. The use of ICS to provide structure and processes to the chaos, if properly done, can reap benefits. I think that statements claiming all the things that ICS can do for us, without inserting the critical human factor into the statement, lends to the myth of ICS being our savior. It’s not. It must be implemented – properly – by people to even stand a chance.

Bottom line: we’re not there yet when it comes to incident management, including ICS. I dare say too many people are treating it as a hobby, not a profession. We have a standard, now let’s train people on it PROPERLY and practice it regularly.

©2024 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC®

NIMS Intel and Investigations Function – A Dose of Reality

Background

Soon after the initiation of the National Incident Management System (NIMS) as a result of Homeland Security Presidential Directive 5 in 2003, the Intelligence/Investigations (I/I) function was developed and introduced to NIMS, specifically to the Incident Command System (ICS). While we traditionally view I/I as a law enforcement function, guidance indicates that other activities may fall within I/I, such as epidemiology (personally, I’d designate epidemiology as a specific function, as we saw done by many during the COVID-19 response), various cause and origin investigations, and others. Integration of these activities into the response structure has clear advantages.

The initial guidance for the I/I function was largely developed by command personnel with the New York City Police Department (NYPD). This guidance offered several possible locations for the I/I function within the ICS structure, based on anticipated level of activity, needed support, and restrictions of I/I related information. These four possible ways of organizing the I/I function per this guidance are depicted here, and include:

  1. Placement as a Command Staff position
  2. Organized within the Operations Section (i.e. at a Branch level)
  3. Developed as its own section
  4. Included as a distinct unit within the Planning Section

These concepts have been included in the NIMS doctrine and have been supported within the NIMS Intelligence/Investigations Function Guidance and Field Operations Guide, though oddly enough, this second document ONLY addresses the organization of an I/I Section and not the other three options.

The Reality

Organization of I/I certainly can and does occur through any one of these four organizational models. However, my own experiences, and those described to me by others, have shown that this kind of integration of I/I within the ICS structure very often simply does not occur. Having worked with numerous municipal, county, state, federal, and specially designated law enforcement agencies, I’ve found that the I/I function is often a detached activity which is absolutely not operating under the command and control of the incident commander.

Many of the sources of I/I come from fusion centers, which are off-scene operations, or from agencies with specific authorities for I/I activities that generally have no desire or need to become part of the ICS structure, such as the FBI conducting a preliminary investigation into an incident to determine if it was a criminal act, or the NTSB investigating cause and origin of a transportation incident. These entities certainly should be communicating and coordinating with the ICS structure for scene access and operational deconfliction, but are operating under their own authority and conducting specific operations which are largely separate from the typical life safety and recovery operations on which the ICS structure is focused.

My opinion on this is that operationally it’s completely OK to have the I/I function detached from the ICS structure. There are often coordination meetings and briefings that occur between the I/I function and the ICS structure which address safety issues and acknowledge priorities and authorities, but the I/I function is in no way reporting to the IC. Coordination, however, is essential to safety and mutual operational success.

I find that the relationship of I/I to the ICS structure most often depends on where law enforcement is primarily organized within the ICS structure and who is managing that interest. For example, if the incident commander (IC) is from a law enforcement agency, interactions with I/I activities are more likely to be directly with the IC. Otherwise, interactions with I/I are typically handled within the Operations Section through a law enforcement representative within that structure. Similarly, I’ve also experienced I/I activity to have interactions with an emergency operations center (EOC) through the EOC director (often not law enforcement, though having designated jurisdictional authority and/or political clout) or through a law enforcement agency representative. As such, compared to the options depicted on an org chart through the earlier link, we would see this coordination or interaction depicted with a dotted line, indicating that authority is not necessarily inherent.

I think that the I/I function organized within the ICS structure is more likely to happen when a law enforcement agency has significant responsibility and authority on an incident, and even more likely if a law enforcement representative is the IC or represented in a Unified Command. I also think that the size and capabilities of the law enforcement agency are a factor, as it may be their own organic I/I function that is performing within the incident. As such, it would make sense that a law enforcement agency such as NYPD, another large metropolitan law enforcement agency, or a state police agency leading or heavily influencing an ICS structure would be more likely to bring an integrated I/I function to that structure. Given this, it makes sense that representatives from NYPD would have initially developed these four possible organizational models and seemingly excluded the possibility of a detached I/I function, but we clearly have numerous use cases where these models are not being followed. I’ll also acknowledge that there may very well be occurrences where I/I isn’t integrated into the ICS structure but should be. This is a matter for policy and training to address when those gaps are identified.

I believe that NIMS doctrine needs to acknowledge that a detached I/I function is not just possible, but very likely to occur. Following this, I’d like to see the NIMS Intelligence/Investigations Function Guidance and Field Operations Guide updated to include this reality, along with operational guidance on how best to interact with a detached I/I function. Of course, to support implementation of doctrine, this would then require policies, plans, and procedures to be updated, and training provided to reflect these changes, with exercises to test and reinforce the concepts.

What interactions have you seen between an ICS or EOC structure and the I/I function? What successes and challenges have you seen from it?

© 2024 Tim Riecker, CEDP

Emergency Preparedness Solutions, LLC®

Culture of Preparedness – a Lofty Goal

September is National Preparedness Month here in the US. As we soon head into October, it’s a good opportunity to reflect on what we’ve accomplished during the month, or even elsewhere in the year. While National Preparedness Month is an important observance that reminds us how important it is to be prepared, over the past several years I’ve come to question our approaches to community preparedness. What are we doing that’s actually moving the needle of community preparedness in a positive direction? Flyers and presentations and preparedness kits aren’t doing it. While I can’t throw any particular numbers into the mix, I think most will agree that our return on investment is extremely low. Am I ready to throw all our efforts away and say it’s not making any difference at all? Of course not. Even one person walking away from a presentation who makes changes within their household to become better prepared is important. But what impact are we having overall?

Culture of preparedness is a buzz phrase that has been used quite a bit over the last several years. What is a culture of preparedness? An AI-assisted Google search tells me that a culture of preparedness is ‘a system that emphasizes the importance of preparing for and responding to disasters, and that everyone has a role to play in doing so.’ Most agree that we don’t have a great culture of preparedness across much of the US (and many other nations) and that we need to improve it. But how?

People love to throw that phrase into the mix of a discussion, claiming that improving the culture of preparedness will solve a lot of issues. They may very well be correct, but it’s about as effective as a doctor telling you that you will be fine from the tumor they found once a cure for cancer is discovered. Sure, the intent is good, but the statement isn’t helpful right now. We need to actually figure out HOW to improve our culture of preparedness. We also need to recognize that in all likelihood it will take more than one generation to actually realize the impacts of deliberate work toward improvement.

The time has come for us to stop talking about how our culture of preparedness needs improvement and to actually do something about it. There isn’t one particular answer or approach that will do this. Culture of preparedness is a whole community concept. We rightfully put a lot of time, effort, and money into ensuring that our responders (broad definition applied) are prepared, because they are the ones we rely on most. I’d say their culture of preparedness is decent (maybe a B-), but we can do a lot better. (If you think my assessment is off, please check out my annual reviews of the National Preparedness Report and let me know if you come to a different conclusion). There is much more to our community, however, than responders. Government administration, businesses, non-government organizations, and people themselves compose the majority of it, and unfortunately it is among these groups that our culture of preparedness has the largest gaps.

As with most of my posts, I don’t actually have a solution. But I know what we are doing isn’t getting us to where we want to be. I think the solution, though, lies in studying people, communities, and organizations and determining why they behave and feel the way they do, and identifying methodologies, sticks, and carrots that can help attain an improved culture of preparedness over time. We must also ensure that we consider all facets of our communities, inclusive of gender identity, race, culture, income, citizenship status, and more. We need people who know and study such things to help guide us. The followers of Thomas Drabek. The Kathleen Tierneys* of the world. Sociologists. Anthropologists. Psychologists. Organizational psychologists.  

A real, viable culture of preparedness, in the present time, is little more than a concept. We need to change our approach from using this as a buzz phrase in which everyone in the room nods their heads, to a goal which we make a deliberate effort toward attaining. A problem such as this is one where we can have a true union of academia and practice, with academics and researchers figuring out how to solve the problem and practitioners applying the solutions, with a feedback loop of continued study to identify and track the impacts made, showing not only the successes we (hopefully) attain, but also how we can continue to improve.

*Note: I don’t know Dr. Tierney personally and it is not my intent to throw her under the proverbial bus for such a project. I cite her because her writing on related topics is extremely insightful. I highly recommend Disasters: A Sociological Approach.

© 2024 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC®