EM Engagement with the Public

Emergency management is notoriously bad at marketing. People have a much better idea of what most other government agencies do, or at least that they exist. Establishing awareness and understanding of emergency management, not only among the people you serve but also among those you work with, can go a long way toward meeting your goals.

As with any message, everything is about the audience. Emergency management has a variety of audiences. While we have some programs and campaigns oriented toward individuals, much of our work is with organizations, including non-profits, other government agencies, and the private sector. All in all, most emergency managers are pretty good at interfacing and coordinating with organizations. It’s the public that we still struggle with. Emergency management inherited the burden of individual and family preparedness from the days of civil defense. Things were different then. Civil defense focused on one threat, that threat was persistent, and the calls to action were tangible and even practiced with the public in many communities.

And yes, I said that our present engagement with the public is a burden. Can it make a difference? Sure. Does it make a difference? Sometimes. While some can argue that any measurable difference we make is good, we all need to acknowledge that campaigns and programs for the public are often a huge source of frustration for emergency managers across the nation and elsewhere. We feel compelled to do it, but so often we can’t make that connection. While I think it is a worthwhile mission and there are successes, the usual rhetoric is stale (i.e., make a plan, build a kit, be informed, get involved) and our return on investment is extremely low.

We need to do more than hand out flyers at the county fair. Some communities have been able to find success through partner agencies or organizations that actually do work with the public on a regular basis, which I think is a better formula for success. These agencies and organizations already have an in with a certain portion of the population. They have an established presence, rapport, and reputation. Given that different agencies and organizations reach different audiences, it is best to engage more than one to ensure the best coverage throughout the community.

As mentioned, our usual rhetoric also needs to change. With continued flooding here in the northeast US, I saw a message from a local meteorologist on Twitter recently giving some information on the flooding and saying to ‘make a plan’. Fundamentally, that’s good. Unfortunately, this message is pretty consistent with what we put out most of the time in emergency management. Yes, it’s a call to action, but an incredibly non-specific one. Should I plan to stay home? Should I plan to evacuate? Should I plan to get a three-week supply of bread and milk? I’ll grant that Twitter isn’t really the best platform for giving a lot of detail, but I think we can at least tell the public what to make a plan for and provide a reference to additional information.

Should EM disengage with the public at large? No, absolutely not. But we do need to find better ways to engage, and I think that really requires a keen eye toward marketing: analyzing our audiences to determine what kinds of messaging will work best, how to reach them, and what is important to them. Two messages a year about preparedness don’t cut it. Neither does a bunch of messages giving the FEMA hotline after a disaster. It needs to be consistent. It needs to be fun. It needs to be engaging. It should be multimodal – social media, speaking at local meetings, articles in the town newsletter, etc. Don’t be boring, don’t be technical, don’t be doom and gloom. Make it clear, make it interesting (to them… not you), and make it brief. Essentially, don’t be so ‘government’ about it. (The same applies to any corporate emergency management program as well.)

I’ll also add that having a presence with the public in your community is, in a practical sense, a presence with voters. While emergency managers often talk about the need for emergency management to be politically neutral, there are a lot of interests that align with emergency management that are clearly partisan, giving cause for us to be political. For context (because ‘politics’ has become such a bad word), I’m not talking about campaigning for someone, attending a rally, or spewing political rhetoric, but rather being engaged in political processes, of which a huge part is having a regular and strong presence. Even with partisan issues aside, emergency management requires funding and other resources to be effective, and that often requires a degree of political engagement and support. We need to actively and regularly promote what we do and what we accomplish. No, it’s not usually as sexy as putting out a big fire or building a bridge, but most fire and highway departments don’t miss an opportunity to get that stuff in the news. That’s why people know them.

Given the fairly universal benefits to emergency managers everywhere, I’d love to see FEMA engage with a marketing firm to produce a broad range of reusable content. TV and radio spots; website and social media graphics; customizable newsletter articles and handouts; speaking points for meetings (no PowerPoint necessary, please), interviews, and podcasts; etc. This also can’t be done every 10 or 15 years. It’s something that should be refreshed every two years to stay relevant, fresh, and meaningful, and with the input of actual emergency managers and public information officers. Speaking of PIOs, if you think your only work with emergency management is during a disaster, think again. PIOs, even if not within EM, should absolutely be engaged in these efforts.

FEMA has produced some material in the past, as have some states for use by local governments, but we need more and we can’t hold our breath for this to be done. Emergency management is, however, a great community of practice. If you have a successful practice or message, please share it! Bring it to your networks or even provide information in a comment to this post.

© 2021 Tim Riecker, CEDP

Emergency Preparedness Solutions, LLC®

When to AAR

A discussion with colleagues last week, both on and off social media, on the development of after-action reports (AARs) for the COVID-19 pandemic identified some thoughtful perspectives. To contextualize, the pandemic is arguably the longest and largest response the world has ever faced. Certainly, no one argues against the necessity for organizations to develop AARs, as there has been an abundance of lessons learned that transcend all sectors. Thankfully, it’s not often we are faced with such a long incident, but in these circumstances we need to reconsider our traditional way of doing things, which has generally been to develop an AAR at the conclusion of the incident.

One central aspect of the discussions was the timing of the AARs. When should we develop an AAR for an incident? I certainly think that with most incidents, we can safely AAR when the incident is complete, particularly given that most incidents don’t last as long as the pandemic has. The difficulty with the pandemic, relative to AARs, is time. The more time goes on, the more we focus on recent concerns and the less we remember of the earlier parts of the response. This likely remains within tolerable limits for an incident that lasts several weeks or even up to a few months, but eventually we need to recognize that the longer we go without conducting the after-action process, the more value we lose. Yes, we can recreate a lot through documentation, but human inputs are critical to the AAR process, and time severely erodes those. Given this, I suggest the ideal practice in prolonged incidents is to develop interim AARs to ensure that chunks of time are being captured.

Another aspect related to this is determining what measure we are using for the end of the incident. The vast majority of AARs focus mostly on response, not recovery. This is an unfortunate symptom of the response-centric mentality that persists in emergency management. We obviously should be conducting AARs after the response phase, but we also need to remember to conduct them once the recovery phase is substantially complete. Given that recovery often lasts much longer than the response, we certainly shouldn’t wait until recovery is complete to develop a single AAR for the incident; rather, we should be developing an AAR, at a minimum, at the substantial completion of response and another at the substantial completion of recovery.

Yet another complication in this discussion is that timing is going to be different for different organizations. I presently have some clients for which the pandemic is much less of a concern operationally than it was a year ago, especially with a vaccinated workforce. So much less of a concern, in fact, that they have largely resumed normal operations, though obviously with the continuation of some precautionary measures. Other organizations, however, are still in a full-blown response, while still others are somewhere in the middle. This means that as time goes on, the pandemic will largely be over for certain organizations and jurisdictions around the world, while others are still consumed by the incident. While the WHO will give the official declaration of the conclusion of the pandemic, it will be over much sooner for a lot of organizations. Organizations should certainly be developing AARs when they feel the incident has substantially ended for them, even if the WHO has not yet declared the pandemic to be over.

Consider that the main difference between evaluating an exercise and evaluating an incident is that we begin the exercise with the goal of evaluation. As such, evaluation activities are planned and integrated into the exercise, with performance standards identified and staff dedicated to evaluation. While we evaluate our operations for effectiveness during a response and into recovery, we are generally adjusting in real time to this feedback rather than capturing the strengths and opportunities for improvement. Be it during the incident or after, we need to deliberately foster the AAR process to not only capture what was done, but to help chart a path to a more successful future. I’ve been preaching about the value of incident evaluation for several years, and have been thankful to see that FEMA has developed a task book for such.

Given the complexity and duration of the pandemic, I started encouraging organizations to develop interim AARs more than a year ago, and in fact supported a client in developing their initial response AAR just about a year ago. FEMA smartly assembled an ‘Initial Assessment Report’ of their early response activity through September of 2020, though unfortunately I’ve not seen anything since. There was a question about naming that came up in the discussions I had, suggesting that the term ‘AAR’ should be reserved for after the incident and a different term used for any other reports. I partially agree. I think we should still call it what it is – even if it’s done in the midst of an incident, it is still an after-action report, that being an analysis of actions we’ve taken within a defined period of time. After all, it’s not called an ‘after-incident report’. That said, I do think that any AARs developed during the incident do warrant some clarification, which can be as simple as including a descriptor such as ‘interim’ or ‘phase 1, 2, 3, etc.’, or whatever is most suitable. I don’t think we need anything standardized so long as it’s fairly self-explanatory.

Have you already conducted an AAR for the pandemic? Do you expect to do another?

© 2021 Tim Riecker, CEDP

Emergency Preparedness Solutions, LLC®

Metrics and Data Analytics in Emergency Management

I’ve lately seen some bad takes on data analytics in emergency management. For those not completely familiar, data analytics is a broad-based term applied to all manner of data organization, manipulation, and modeling to bring out the most valuable perspectives, insights, and conclusions which can better inform decision-making. Obviously, this can be something quite useful within emergency management.

Before we can even jump into the analysis of data, however, we need to identify the metrics we need. This is driven by decision-making, as stated above, but also by operational need, measurement of progress, and reporting to various audiences – from our own common operating picture, to elected officials, to the public. In identifying what we are measuring, we should regularly assess who the audience is for that information and why the information is needed.

Once we’ve identified the metrics, we need to further explore the intended use and the audience, as that influences what types of analysis must be performed with the metrics and how the resultant information will be displayed and communicated.

I read an article recently from someone who made themselves out to be the savior of a state emergency operations center (EOC) by simply collecting some raw data and putting it into a spreadsheet. While this is the precursor of pretty much all data analysis, I’d argue that the simple identification and listing of raw data is not analytics. It’s what I’ve come to call ‘superficial’ data, or what someone on Twitter recently described to me as ‘vanity metrics’. Examples: number of people sheltered, number of customers with utility outages, number of people trained, number of plans developed.

We see a lot of these kinds of data in FEMA’s annual National Preparedness Report and the Emergency Management Performance Grant (EMPG) ‘Return on Investment’ report generated by IAEM and NEMA. These reports provide figures on dollars spent on certain activities, assign numerical values to priorities, and state how much of a certain activity was accomplished within a time period (e.g. x number of exercises were conducted over the past year). While there is a place for this data, I’m always left asking ‘so what?’ after seeing these reports. What does that data actually mean? They simply provide a snapshot in time of mostly raw data, which isn’t very analytical or insightful. It’s certainly not something I’d use for decision-making. Both of these reports are released annually, so there is no excuse not to provide some trends and comparative analysis over time, not to mention across geography. Even within a snapshot-in-time type of report, there is a lot more analysis that could be conducted but simply isn’t.

The information we report should provide us with some kind of insight beyond the raw data. Remember the definition I provided in the first paragraph… it should support decision-making. This can be for the public, the operational level, or the executive level. Yes, there are some who simply want ‘information’ and that has its place, especially where political influence is concerned.

There are several types of data analytics, each suitable for examining certain types of data. What we use can also depend on whether our data are categorical (i.e. we can organize our data into topical ‘buckets’) or quantitative. Some data sets can be both categorical and quantitative. Some analysis examines a single set of data, while other types support comparative analysis between multiple sets of data. Data analytics can be as simple as common statistical analysis, such as range, mean, median, mode, and standard deviation, while more complex data analysis may use multiple steps and various formulas to identify things like patterns and correlation. Data visualization is then how we display and communicate that information, through charts, graphs, geographic information systems (GIS), or even infographics. Data visualization can be as important as the analysis itself, as this is how you convey what you have found.
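
As a minimal illustration of the simpler end of that spectrum, here’s a short sketch in Python using only the standard library; the shelter occupancy figures are invented for the example.

```python
import statistics

# Hypothetical daily shelter occupancy counts over a ten-day period
occupancy = [120, 135, 160, 210, 305, 290, 240, 180, 150, 150]

# The simpler end of data analytics: basic descriptive statistics
print("Range:", max(occupancy) - min(occupancy))
print("Mean:", statistics.mean(occupancy))
print("Median:", statistics.median(occupancy))
print("Mode:", statistics.mode(occupancy))
print("Standard deviation:", round(statistics.stdev(occupancy), 1))
```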

Metrics and analytics can and should be used in all phases of emergency management. It’s also something that is best planned for in advance, which establishes consistency and your ability to efficiently engage in the activity. Your considerations for metrics to track and analyze, depending on the situation, may include the following (a brief illustration follows the list):

  • Changes over time
    • Use of trend lines and moving averages may also be useful here
  • Cost, resources committed, resources expended, status of infrastructure, and measurable progress or effectiveness can all be important considerations
  • Demographics within the data, which can describe populations or other distinctive features
  • Inclusion of capacities, such as with shelter data
  • Comparisons of multiple variables in examining influencing factors (e.g. loss of power influences the number of people in shelters)
    • Regression modeling, a more advanced application of analytics, can help identify what factors actually do have a correlation and what the impact of that relationship is.
  • Predictive analytics help us draw conclusions based on trends and/or historical data
    • This is a rabbit you can chase for a while, though you need to ensure your assumptions are correct. An example here: a hazard of a certain intensity occurring in a certain location can be expected to produce certain impacts (which is much of what we do in hazard mitigation planning). But carry that further. Based on those impacts, we can estimate the capabilities and capacities that are needed to respond and protect the population, and the logistics needed to support those capabilities.
  • Consider that practically any data that is location-bound can and should be supported with GIS. It’s an incredible tool for not only visualization but analysis as well.
  • Data analytics in AARs can also be very insightful.
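
To make the trend-line and regression bullets above a bit more concrete, here is a hedged sketch in Python (standard library only; the correlation and regression helpers require Python 3.10+). The outage and shelter figures are hypothetical.

```python
import statistics

# Hypothetical daily figures for the same incident period
customers_without_power = [500, 4200, 9800, 15000, 12000, 7600, 3100, 900]
people_in_shelters = [15, 140, 310, 450, 380, 260, 110, 40]

# Changes over time: a simple 3-day moving average of shelter occupancy
moving_avg = [
    round(statistics.mean(people_in_shelters[i - 2 : i + 1]), 1)
    for i in range(2, len(people_in_shelters))
]
print("3-day moving average of shelter occupancy:", moving_avg)

# Comparing variables: correlation and a simple regression between
# power outages and sheltering (Python 3.10+)
r = statistics.correlation(customers_without_power, people_in_shelters)
model = statistics.linear_regression(customers_without_power, people_in_shelters)
print(f"Correlation coefficient: {r:.2f}")
print(f"Estimated additional shelter occupants per 1,000 customers without "
      f"power: {model.slope * 1000:.1f}")
```

Even this much starts to answer ‘so what?’ – the relationship between outages and sheltering is something a decision-maker can actually act on.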

As I mentioned, preparing for data analysis is important, especially in response. Every plan should identify the critical metrics to be tracked. While many are intuitive, there is a trove of Essential Elements of Information (EEI) provided in FEMA’s Community Lifelines toolkit. How you will analyze the metrics will be driven by what information you ultimately are seeking to report. What should always go along with data analytics is some kind of narrative not only explaining and contextualizing what is being shown, but also making some inference from it (i.e. what does it mean, especially to the intended audience).
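
Below is a small, purely illustrative sketch of what pre-defining critical metrics in a plan might look like; the metric names, lifeline labels, audiences, and reporting cycles are hypothetical and not drawn verbatim from the Community Lifelines toolkit.

```python
# Illustrative only: a plan annex might pre-define critical metrics,
# their audiences, the analysis to perform, and the reporting cycle.
critical_metrics = [
    {
        "metric": "people in shelters",
        "lifeline": "Food, Water, Shelter",  # illustrative label
        "audience": ["EOC common operating picture", "elected officials"],
        "analysis": ["daily change", "3-day moving average", "capacity remaining"],
        "reporting_cycle": "each operational period",
    },
    {
        "metric": "customers without power",
        "lifeline": "Energy",  # illustrative label
        "audience": ["EOC common operating picture", "public"],
        "analysis": ["trend over time", "percent of service area affected"],
        "reporting_cycle": "every 4 hours",
    },
]

for m in critical_metrics:
    print(f"{m['metric']}: {', '.join(m['analysis'])} ({m['reporting_cycle']})")
```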

I’m not expecting that everyone can do these types of analysis. I completed a college certificate program in data analytics last year and it’s still challenging to determine the best types of analysis to use for what I want to accomplish, as well as the various formulas associated with things like regression models. Excel has a lot of built-in functionality for data analytics, and there are plenty of templates and tutorials available online. It may be useful for select EOC staff as well as certain steady-state staff to get some training in analytics. Overall, think of the variables which can be measured: people, cost, status of infrastructure, resources… And think about what you want to see from that data now, historically, and predicted into the future. What relationships might different variables have that can make the data even more meaningful? What do we need to know to better support decisions?

Analytics can be complex. It will take deliberate effort to identify needs, establish standards, and be prepared to conduct the analytics when needed.

How have you used data analytics in emergency management? What do you report? What decisions do your analytics support? What audiences receive that information and what can they do with it?

© 2021 Tim Riecker, CEDP

Emergency Preparedness Solutions, LLC®

EOC Management Platforms

Some recent social media discussion on EOC management systems has prompted far too many thoughts for me to write in small character quantities…

I’ve been fortunate to have been involved in prospecting several systems for EOC management and other workflow management needs, along with obviously having used a multitude of these systems in EOCs and other capacities. I certainly have my preferences among different systems, as well as those I’m really not a fan of, which I’m not going to get into here, though I will say there are some smaller but very successful vendors with great products and excellent track records. I think everyone needs to spec these out for themselves. One note, before I speak a bit about that process, is that you don’t necessarily need a proprietary system. Current technology facilitates file sharing, task management, accessible GIS, and other needs, though full integration and intentional design are just some of the benefits of using a proprietary system. A lot of organizations learned over the past 16 months or so of the pandemic that they can get considerable mileage out of applications like Microsoft Teams and Smartsheet. There are also benefits to systems that users work with on a more regular basis, as opposed to an EOC management platform that may only be used during incidents, events, and exercises.

I’ll say there is no definitive right way to spec out a system, but there are a lot of wrong ways to approach it. My observations below are in no way comprehensive, but they are hitting a lot of the big things that I’ve seen and experienced. Also, my observations aren’t highly techy since that’s not my forte (see my first item below).

Form a Team: (Yeah, we are starting this out like CPG 101.) Bring the right stakeholders together for this. Consider the whole community of your EOC. However you organize your EOC, ensure that elements of the entire organization are represented. Don’t forget agency representatives, finance, GIS, and PIO/JIC. Also include disaster recovery, legal, and obviously IT.

Understand the Situation: (Gosh, CPG 101 has so many uses!) Understand your needs. If you don’t understand your own needs, an outsider certainly won’t. A lot of people and organizations may think they understand their needs, and very likely do, but there is a big difference between stating it and actually digging into it. Also, this should be done BEFORE you meet with vendors (they will likely suggest otherwise, as they want to influence your perspective). It’s important that you do this first so you can establish a standard and see how each vendor/product can meet those needs. NEVER let a vendor define your need.

You may want to take an opportunity to solicit input from other users as well. Talk to them, build a survey, etc. to see what features they might want and what they don’t want. This also helps build buy-in for eventual implementation, which can be very important.

There are certain fundamentals to be established and decisions that will need to be made up front by your organization.

  • On the IT side, will this solution be self-hosted or vendor hosted?
  • How many users would be ‘normal’ for an incident? What would be a surge number of users?
    • Who are these users (generally: your organization, other organizations)?
    • Are role-based user profiles preferred?
  • Does your organization want to maintain it or will maintenance and updates be part of the contract?
  • What legal information retention requirements exist?
  • What’s your budget?

Most of the needs to be identified are functional. These are the main things you want to use the system for. Start with big items such as the ability to develop collaborative EOC action plans and situation reports, dashboard displays, resource tracking, mission tracking, financial tracking and forecasting, etc. Then examine each one more closely, going through a workflow and task analysis (a brief sketch of this breakdown follows the lists below). In this, you break each item into tasks and consider:

  • who is responsible for each,
  • who contributes to each,
  • who needs to be aware of each,
  • and who are decision-makers for each;
  • as well as what information is needed for each,
  • what information is tracked, and
  • what outputs or reports are produced (and the key data sets associated with each)

Each task may need to be analyzed deeper as it may have several sub-tasks.

  • Does information need to be routed, and to whom?
  • Are there multiple reviews or approvals?
  • Is anyone outside the organization involved? (e.g. someone who would not have access to the system but would require information from the system)
  • TIP: It’s often helpful to run a short simulation with the people who actually perform the tasks so that what they do can be observed in real time.
  • Be aware of how you are presently limited or influenced by the current technology you use and flag this. It may not be a necessary part of your workflow if it’s dictated by that technology.
  • This is also a good time to question why certain processes are conducted the way they are. And remember: ‘Because we’ve always done it that way’ is not a good answer.
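
As a purely hypothetical sketch of what this breakdown can look like once captured, here is one function (a situation report) expressed as structured data; the roles, tasks, and outputs are invented for illustration and should reflect your own analysis.

```python
# Hypothetical breakdown of one functional need (a situation report) into
# tasks, with the roles and information identified through the analysis above.
situation_report_workflow = {
    "function": "Develop and publish the EOC situation report",
    "tasks": [
        {
            "task": "Collect status updates from EOC sections",
            "responsible": "Situation Unit",
            "contributors": ["Operations", "Logistics", "Finance/Admin"],
            "informed": ["Planning Section Chief"],
            "decision_maker": "Planning Section Chief",
            "information_needed": ["lifeline status", "resource commitments"],
            "outputs": ["draft situation report"],
        },
        {
            "task": "Review and approve the situation report",
            "responsible": "Planning Section Chief",
            "contributors": ["PIO"],
            "informed": ["EOC Manager"],
            "decision_maker": "EOC Manager",
            "information_needed": ["draft situation report"],
            "outputs": ["approved situation report", "dashboard update"],
        },
    ],
}

for task in situation_report_workflow["tasks"]:
    print(f"{task['task']} -> responsible: {task['responsible']}, "
          f"approved by: {task['decision_maker']}")
```

Capturing each function in a consistent structure like this gives you something concrete to hand to vendors and to compare against demos later.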

Consider how information can/should be displayed, including geocoding information for GIS use.

Keep in mind that the technology you eventually obtain should support these processes and tasks, inputs, outputs, and users. The technology implementation may streamline your workflow, but shouldn’t dictate it.

You may also want to talk to colleagues to see what systems they are using; what their opinions are of those systems; and lessons learned with the system, vendor, implementation, etc. They might even give you access to poke around in their system a bit.

Determine Goals and Objectives: Here is where you identify your specifications based on your outcomes above. Once you have specifications you can start approaching vendors. Your IT department should know how to put together a technology specifications package.

Talk to Vendors: Get your specifications out to vendors, see who is interested, and meet with them to discuss. Depending on your organization, this may be a formal process or can be informal. If it’s formal, be sure that everyone understands this is not yet the invitation to bid. This is still an information gathering step.

Product demos can be great opportunities for your team (yes, your whole team should participate) to meet with vendors to see how they can address your needs and to learn what options are out there. Every team member should have the spec sheet with them so they can independently assess how the demonstrated solution meets needs. This also allows vendors to identify additional value and features they can provide as well as suggesting different approaches to some of your needs. (Again, your team should keep in mind that the product should never dictate the process; it may streamline the process, but it shouldn’t sacrifice any essential components you have defined.) They may also offer best practices they have experienced in their work with other clients.

Get an understanding of how the system is administered and what help desk support looks like. How much in-house administration is your organization able to do, such as adding new users, creating profiles/roles, and resetting passwords?

This interface with vendors is important. I’ve had vendors simply pitch their off-the-shelf solution with little to no regard for the spec sheet, which is obviously a great reason to ask them to leave. You aren’t likely to get a fully custom-built product (though it’s possible), but in all likelihood what you will be offered is a customized version of their off-the-shelf standard. I generally think this is the best option. Along with being cost effective, you also have reasonable assurances that the foundation of their platform is good since it’s what they are building from.

Speaking of this foundation, ask them who their clients are and ask for references of current users. It’s also totally fair to ask why their solution is better than that of other competitors (name them!), and why some customers may choose to go to another platform.

It’s also good to ask about the future. What does the service contract look like? How are updates done? What time period do you have with the vendor for no-cost adjustments once the system is in place? Is incident support available? How can you be made aware of innovations and how are those prices structured for implementation? Will they put together a training package and will they support the first round of training? How about self-guided training?

Purchase: Based on your vendor demos, you might have some modifications to make to your spec sheet. That’s OK. You’ve seen what’s out there and have had an opportunity to learn. I’d also suggest identifying what you consider to be firm requirements vs features which are desired but not required. Once you’ve gotten all the information you can, conduct your purchasing process as you need to.

Implementation and Maintenance: Implementation can be a considerable challenge, and this is where even the better EOC management platforms fail, either in actual practice or in the opinion of users. I could write an entirely separate post on this alone. The most important things to realize are that:

  1. Change is hard
  2. The system isn’t (likely) used daily

Some people will be excited about the system; others will be stuck in the mud of change. Get people oriented to it and train them. Remember that since this system isn’t used on a regular basis, their skills (even their recall of their login credentials) will atrophy. So training should be a recurring thing for all stakeholders. Everyone needs to know how to log in and navigate the main screen, but not everyone needs to know how to build a situation report. You should also have a just-in-time (JIT) training program, since inevitably people will walk into your EOC cold when a disaster occurs. Consider training that is modular.

Get an exercise in early. This is a great way to reinforce familiarity for users and also to work the kinks out of the system. Ideally, a vendor rep should be present for the exercise so they can see potential issues, even if they can’t fix them on the spot.

Moving into the future, exercise regularly to maintain proficiency and always keep an eye out for opportunities to improve. Recognize that processes evolve over time and technology is obsolete after a couple of years, so your platform should be evolving to avoid stagnation. Maintain a relationship with your vendor but keep an eye on what else is out there. Challenge your vendor to rise to the occasion, be innovative, and to continue meeting your needs. If they can’t, it’s time to look elsewhere.

Through the entire process, interfacing with vendors can be challenging if the right people aren’t involved. My preference is to not be working with someone who has never worked in an EOC or someone who is a ‘typical and generic’ salesperson. Ideally, vendor representatives will have some EOC experience so they can relate to your needs. Your own representative, ideally, should also be able to meet the vendor halfway, being an emergency manager with some tech savvy.

What experiences do you have with EOC management platforms? Any words of wisdom to share about the process?

© 2021 Tim Riecker, CEDP

Emergency Preparedness Solutions, LLC®

ESFs Aren’t for Everyone

Through the years I’ve had numerous conversations with states, cities, and others about organizing their emergency operations plans (EOPs) around Emergency Support Functions (ESFs). In every conversation I’ve suggested against the use of ESFs. Why?

Let’s start with definitions. One definition of ESFs provided by FEMA states that ESFs ‘describe federal coordinating structures that group resources and capabilities into functional areas most frequently needed in a national response’.  Another states that ESFs are ‘a way to group functions that provide federal support to states and federal-to-federal support, both for Stafford Act declared disasters and emergencies and for non-Stafford Act incidents.’ The National Response Framework (NRF) states that ESFs are ‘response coordinating structures at the federal level’.

The key word in these definitions is ‘federal’. ESFs are a construct originally of the Federal Response Plan (FRP), which was in place from 1992 to 2004. The FRP was a signed agreement among 27 Federal departments and agencies as well as the American Red Cross that outlined how Federal assistance and resources would be provided to state and local governments during a disaster. The ESFs were carried into the National Response Plan in 2004 and the National Response Framework in 2008.

While the NRF, CPG 101, and other sources indicate that other levels of government may also organize their response structure utilizing ESFs, I think any attempts are awkward and confusing at best.

Jumping to present day, the following ESFs are identified in the NRF:

  1. Transportation
  2. Communications
  3. Public Works and Engineering
  4. Firefighting
  5. Information and Planning
  6. Mass Care, Emergency Assistance, Temporary Housing, and Human Assistance
  7. Logistics
  8. Public Health and Medical Services
  9. Search and Rescue
  10. Oil and Hazardous Materials Response
  11. Agriculture and Natural Resources
  12. Energy
  13. Public Safety and Security
  14. Cross-Sector Business and Infrastructure
  15. External Affairs

The ESFs work for the Federal government by providing organizations to address the legal, regulatory, and bureaucratic coordination that must take place across various agencies. These organizations are utilized before (preparedness), during (response and recovery… though ultimately most of these transition to the Recovery Support Functions per the National Disaster Recovery Framework), and after (AAR) a disaster as a cohesive means of maintaining relationships, continuity, and operational readiness. Each of the ESFs maintains a lead agency and has several supporting agencies which also have capabilities and responsibilities within the mission of that ESF.

Where does this fall apart for states and other jurisdictions? First of all, I view Emergency Support Function/ESF as a branded name. The ESF is a standard. When someone refers to ESFs, it’s often inferred that they are speaking of the Federal constructs. ESFs are defined by the Federal government in their current plans (presently the NRF). When the term is co-opted by states or other jurisdictions, this is where it first starts to fall apart. It creates a type of ‘brand confusion’ – i.e., which ESFs are we speaking of? This is further exacerbated if the names and definitions of their ESFs aren’t consistent with what is established by the Federal government.

Further, the utilization of ESFs may simply not be the correct tool. The same agencies may be responsible for transportation as well as public works and engineering. So why have two teams comprised of personnel from the same agencies, especially if bench depth is small in those agencies? Related to this, I’ll say that many jurisdictions (which may even include smaller states, territories, or tribes) simply don’t have the depth to staff 15 ESFs. This is why an organization should be developed for each jurisdiction by each jurisdiction based on their needs and capabilities. It’s simply silly to try to apply the construct utilized by our rather massive Federal government to a jurisdiction much smaller.

Next, I suggest that the integration of ESFs into a response structure is simply awkward. I think in many ways this holds true for the Federal government as well. Is ESF 7 (Logistics) an emergency support function or is it a section in our EOC? The same goes for any of the other ESFs which are actually organizational components often found in response or coordination structures inspired by the Incident Command System.

All that said, the spirit of ESFs is valuable and should be utilized by jurisdictions at other levels of government. These are often referred to as Functional Branches. Similar to ESFs, they can be used before, during, and after a disaster. Your pre-disaster planning teams become the core group implementing the plans they developed and improving the plans and associated capabilities after a disaster. As functional branches, there is no name confusion with ESFs, even though there is considerable similarity. You aren’t constrained to the list of Federal ESFs and don’t have to worry about how they define or construct them. You can do your own thing without any confusion. You are also able to build the functional branches based on your own needs and capabilities, not artificially trying to fit your needs into someone else’s construct. I’ve seen a lot of states use the term State Support Function or SSF, which is certainly fine.

I will make a nod here though to a best practice inspired by the ESFs, and that is having certain standing working groups for incident management organizational elements (i.e. communications, logistics, information and planning, and external affairs) that may not be organized under the operations section or whatever is analogous in your EOC. Expand beyond these as needed. Recall that the first step in CPG 101 for emergency planning calls for developing a planning team. There is a great deal of benefit to be had by utilizing stakeholder teams to establish standard operating guidelines, job aids, etc. in these functions or others in your EOC or other emergency organizational structure. Often it’s the emergency manager or a staff member doing this, expecting others to simply walk in and accept what has been developed. If people want to work in a Planning Section for your jurisdiction, let them own it (obviously with some input and guidance as needed).

I think ESFs are a valuable means for the US Federal government to organize, but don’t confuse the matter or develop something unnecessary by trying to carbon copy them into your jurisdiction. Examine your own needs and capabilities and form steady state working groups that become functional entities during disaster operations.

© 2021 Tim Riecker, CEDP

Emergency Preparedness Solutions, LLC®

EOCs and IMTs

The world of incident management is foggy at best. There are rules, sometimes. There is some valuable training, but it doesn’t necessarily apply to all circumstances or environments. There are national models, a few of them in fact, which makes them models, not standards. Incident management is not as straightforward as some may think. Sure, on Type 4 and 5 incidents the management of the incident is largely taking place from an incident command post. As you add more complexity, however, you add more layers of incident management. Perhaps multiple command posts (a practical truth, regardless of the ‘book answer’), departmental operations centers, emergency operations centers at various levels of government, and an entire alphabet soup of federal operations centers at the regional and national levels with varying (and sometimes overlapping) focus. Add in operational facilities, such as shelters, warehouses, isolation and quarantine facilities, etc. and you have even more complexity. Trying to map out these incident management entities and their relationships is likely more akin to a tangle of yarn than an orderly spiderweb.

Incident Management Teams (IMTs) of various fashions are great resources to support the management of incidents, but I often see people confusing the application of an IMT. Most IMTs are adaptable, with well-experienced personnel who can pretty much fit into any assignment and make it work. That said, IMTs are (generally) trained in the application of the Incident Command System (ICS). That is, they are trained in the management of complex, field-level, tactical operations. They (usually) aren’t specifically trained in managing an EOC or other type of operations center. While the principles of ICS can be applied to practically any aspect of incident management, even if ICS isn’t applied in the purest sense, it might not be the system established in a given operations center (in whatever form it may take). While IMTs can work in operations centers, operations centers don’t necessarily need an IMT, and while (formal) IMTs are great resources, they might not be the best solution.

The issue here certainly isn’t with IMTs, though. Rather, it’s with the varying nature of operations centers themselves. IMTs are largely a defined resource. Trying to fit them to your EOC may be a square peg/round hole situation. It’s important to note that there exists no single standard for the organization and management of an EOC. NIMS provides us with some optional models, and in practice much of what I’ve seen often has some similarity to those models, yet has deviations which largely prevent us from labeling what is in practice with any of the NIMS-defined models in the purest sense. The models utilized in EOCs are often practical reflections of the political, bureaucratic, and administrative realities of their host agencies and jurisdictions. They each have internal and external needs that drive how the operations center is organized and implemented. Could these needs ultimately be addressed if a single standard were required? Sure, but when governments, agencies, and organizations have well-established systems and organizations (take finance as an example), it simply doesn’t make sense to reorganize. This is why we are so challenged with establishing a single standard or even adhering to a few models.

The first pathway to success for your operations center is to actually document your organization and processes. It seems simple, yet most EOCs don’t have a documented plan or operating guideline. It’s also not necessarily easy to document how the EOC will work if you have rarely or never activated it. This is why we stick to the CPG 101 planning process, engaging a team of people to help determine what will or won’t work, examining each aspect from a different perspective. I also suggest enlisting the help of someone who has a good measure of experience with a variety of EOCs. This may be someone from a neighboring jurisdiction, state emergency management, or a consultant. Either way, start with the existing NIMS models and figure out what will work for you, with modifications as needed. Once you have a plan, you have a standard from which to work.

Once you have that plan, train people in the plan. Figure out who in your agency, organization, or jurisdiction has the knowledge, skills, and abilities to function within key positions. FEMA’s EOC Skillsets can help with this – even if the positions they use don’t totally map to yours, it’s not difficult to line up most of the common functions. Regardless of what model you are using, a foundation of ICS training is usually helpful, but DON’T STOP HERE. ICS training alone, even if your EOC is ICS-based, isn’t enough. I can practically guarantee your EOC uses systems, processes, or implementations unique to your EOC which aren’t part of ICS or the ICS training your personnel received. Plus, well… if you haven’t heard… ICS training sucks. It can be a hard truth for a lot of entities, but to prepare your personnel the best way possible, you will need to develop your own EOC training. And of course to complete the ‘preparedness trifecta’ you should then conduct exercises to validate your plans and support familiarity.

All that said, you may require help for a very large, long, and/or complex incident. This is where government entities and even some in the private sector request incident management support. Typically this incident management support comes from established IMTs or a collection of individuals providing the support you need. The tricky part is that they aren’t familiar with how you are organized or your way of doing things. There are a few ways to hedge against the obstacles this potentially poses. First, you can establish an agreement or contract with people or an organization that know your system. If this isn’t possible, you can at least (if you’ve followed the guidance above) send your plan to those coming to support your needs, allowing them at least a bit of time in transit to study up. Lastly, a deliberate transition, with some overlap or shadowing time between the outgoing and incoming personnel, will help tremendously, allowing the incoming personnel to get a hands-on feel for things (I recommend this last one even if the incoming personnel are familiar with your model, as it will give them an opportunity to become familiar with how you are managing the incident). Of course all of these options will include formal briefings, sharing of documentation, etc.

Remember, though, that there are certain things your agency, organization, or jurisdiction will always own, especially the ultimate responsibility for your mission. Certain internal processes, such as purchasing, are still best handled by your own people. If your operations are technical and industry-specific, such as for a utility, they should still be managed by your own people. That doesn’t mean, however, that your people can’t be supported by outside personnel (ref my concept of an Incident Support Quick Response Team). The bottom line here is that IMTs or any other external incident support personnel are great resources, but don’t set them up for a slow start, or even failure, by not addressing your own preparedness needs for your EOC. In fact, any external personnel supporting your EOC should be provided with a packet of information, including your EOC plan and procedures, your emergency operations plan (EOP), maps, a listing of capabilities, demographics, hazards, org charts for critical day-to-day operations, an internal map of the building they will be working in, and anything else that will help orient them to your jurisdiction and organization – and the earlier you can get it to them the better! Don’t forget to get your security personnel on board (building access cards and parking tags) and your IT personnel (access to your network, printers, and certain software platforms). Gather these packets beforehand or, at the very least, assemble a checklist to help your personnel quickly gather and address what’s needed.

© 2021 Tim Riecker, CEDP

Emergency Preparedness Solutions, LLC®

Measuring Return on Investment Through Key Performance Indicators

Return on investment (ROI) is generally defined as a measurement of performance to evaluate the value of investments of time, money, and effort. Many aspects of preparedness in emergency management offer challenges when trying to gauge return on investment. Sure, it’s easy to identify that m number of classes were conducted and n number of people were trained, that x number of exercises were conducted with y number of participants, that z number of plans were written, or even that certain equipment was purchased. While those tell us about activity, they don’t tell us about performance, results, or outcomes.

More classes were conducted. So what?

We purchased a generator. So what?

The metrics of these activities are easy to obtain, but these are rather superficial and generally less than meaningful. So how can we obtain a meaningful measure of ROI in emergency preparedness?

ROI is determined differently based on the industry being studied, but fundamentally it comes down to identifying key performance indicators, their value, and how much progress was made toward those key performance indicators. So what are our key performance indicators in preparedness?

FEMA has recently begun linking key performance indicators to the THIRA. The Threat and Hazard Identification and Risk Assessment, when done well, gives us quantitative and qualitative information on the threats and hazards we face and, based upon certain scenarios, the performance measures needed to attain certain goals. This is contextualized and standardized through defined Core Capabilities. When we compare our current capabilities to those needed to meet the identified goals (called capability targets in the THIRA and SPR), we are able to better define the factors that contribute to the gap. The gap is described in terms of capability elements – planning, organizing, equipping, training, and exercises (POETE). In accordance with this, FEMA is now making a more focused effort to collect data on how we are meeting capability targets, which helps us to better identify return on investment.
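
As a simple, hypothetical sketch of what comparing current capability against a target can look like quantitatively (the capability names, targets, and current figures below are invented for illustration, not drawn from any actual THIRA/SPR):

```python
# Hypothetical comparison of current capability against THIRA/SPR-style targets.
capability_targets = {
    "Mass Care Services": {"target": 5000, "current": 3200, "unit": "shelter spaces"},
    "Operational Communications": {"target": 12, "current": 9, "unit": "interoperable channels"},
}

for capability, data in capability_targets.items():
    progress = data["current"] / data["target"] * 100
    gap = data["target"] - data["current"]
    print(f"{capability}: {progress:.0f}% of target met "
          f"(gap of {gap} {data['unit']})")
```

Reported over successive years, even a basic gap calculation like this tells a far more meaningful story than counts of activities.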

For 2021, Emergency Management Performance Grant (EMPG) funding requires the collection of data as part of the grant application and progress reports to support FEMA’s ability to measure program effectiveness and investment impacts. They are collecting this information through the EMPG Work Plan. This spreadsheet goes a long way toward helping us better measure preparedness. The Work Plan leads programs to identify, for every funded activity:

  • The need addressed
  • What is expected to be accomplished
  • What the expected impact will be
  • Identification of associated mission areas and Core Capabilities
  • Performance goals and milestones
  • Some of the basic quantitative data I mentioned above

This is a good start, but I’d like to see it go further. They should still be prompting EMPG recipients to directly identify what was actually improved and how. What has the development of a new plan accomplished? What capabilities did a certain training program improve? What areas for improvement were identified from an exercise, what is the corresponding improvement plan, and how will capabilities be improved as a result? The way to get to something more meaningful is to continue asking ‘so what?’ until you come to an answer that really identifies meaningful accomplishments.

EMPG aside, I encourage all emergency management programs to identify their key performance indicators. This is a much more results-oriented approach to managing your program, keeping the program focused on accomplishing meaningful outcomes, not just generating activity. It’s more impactful to report on what was accomplished than what was done. It also gives us more meaningful information to analyze across multiple periods. This type of information isn’t just better for grant reports, but also for your local budgets and even routine reports to upper management and elected officials.

What do you think about FEMA’s new approach with EMPG? What key performance indicators do you use for your programs?

© 2021 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC®

Emergency Management Budgets

Last week there were some posts circulating around Twitter expressing some considerable dismay about emergency management budgets. While I obviously agree that emergency management programs should be better funded, there is some important context to consider when looking at (most) emergency management agency budgets in the US.

While jurisdictions with emergency management programs provide some measure of funding, typically the largest quantity of funding comes from federal grant programs, with the most significant grant for operational expenses being the Emergency Management Performance Grant (EMPG). EMPG is administered by FEMA alongside the Homeland Security Grant Program (HSGP) as one of its preparedness grants and is budgeted each year in the federal budget. States are the grantees of EMPG. While a considerable amount of the funds are retained by states, there is a requirement for a certain percentage to be applied to local emergency management programs. States have different models for how the funds are allocated – some states award funds directly to county/local governments (subgrantees), while others spend the funds on behalf of the subgrantees through the provision of direct services to county/local governments. Many states also use a hybrid of the two models. Those receiving an allocation of EMPG are ideally accounting for it in their published budgets, but we should be aware that some releases of budget information may not include EMPG numbers.

There are also additional grant funds available to county and local governments to support an array of emergency management and emergency management-related programs. These include hazard mitigation grants, the Urban Area Security Initiative (UASI) grant, Securing the Cities, and others. Yes, a lot of these funds are targeted to more ‘homeland security’ types of activities, but we should also recognize the considerable overlap in a lot of EM and HS. I took a small sample of a few mid- to large-sized cities (mostly since they have established and funded emergency management offices), seeing ratios of 1:3 to 1:4 for local share funding compared to grant funding (this did not include COVID-related supplemental funding). Of course, you may see numbers significantly different in your jurisdiction.

I’ll also suggest that activities across many other local government agencies and departments support some measure of emergency management. While a lot of these expenditures may not have the input of an emergency management office, there are a variety of local infrastructure projects (hopefully contributing to hazard mitigation), health and human services investments (mitigation and preparedness), code enforcement (mitigation), and others that do contribute to the greater emergency management picture for the jurisdiction. In fact, some of the funding allocations received by these agencies may be through discipline-specific emergency management grant programs, such as those which may come from US DOT or CDC/HHS.

Overall, emergency management funding tends to be a lot larger than the casual observer may think, though even a budget analyst would require some time to identify how it all comes together, especially for a larger jurisdiction that tends to have larger departments, more complex expenditures, and more grant funding. As mentioned, I’d still love to see more direct funding allocations for emergency management programs, especially as emergency management can hopefully direct efforts where and how they are needed most within their communities. I’m also hopeful that officials leading different programs at the local level are coming together to jointly determine how best to allocate federal funds (obviously within the grant terms and conditions), even if they are coming from different federal and state agencies and being awarded to different local departments, with a goal of addressing local threats, hazards, and capabilities in the best ways possible for communities.

While what I wrote is a broad-brush example of how emergency management funding is allocated across much of the US, different states do administer grants differently. It can be as simple as I’ve outlined, or a lot more complex. We also have a lot of examples of the haves and have-nots, with many smaller jurisdictions being left woefully behind in funding. I’d love to hear what the funding situation looks like for your jurisdiction. Also, for those not in the US, how are your local programs funded?

© 2021 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC®

A Podcast Invitation

Last week I had the honor of being invited to be a guest on the EM Weekly podcast. We had a great discussion about incident management structures and some of the continued challenges of emergency management.

Check it out here:

An Update of Ontario’s Incident Management System

Just yesterday, the Canadian province of Ontario released an update of its Incident Management System (IMS) document. I gave it a read and have some observations, which I’ve provided below. I will say that it is frustrating that there is no Canadian national model for incident management; rather, the provinces determine their own. A number of my friends and colleagues from across Canada have long espoused this frustration as well. That said, this document warrants an examination.

The document cites the Elliot Lake Inquiry from 2014 as a prompt for several of the changes in their system from the previous iteration of their IMS document. One statement from the Inquiry recommended changes to ‘put in place strategies that will increase the acceptance and actual use of the Incident Management System – including simplifying language’. Oddly enough, this document doesn’t seem to overtly identify any strategies to increase acceptance or use; in fact there is scant mention of preparedness activities to support the IMS or incident management as a whole. I think they missed the mark with this, but I will say the recommendation from the Inquiry absolutely falls in line with what we see in the US regarding acceptance and use.

The authors reinforce that ICS is part of their IMS (similar to ICS being a component of NIMS) and that their ICS model is compatible with ICS Canada and the US NIMS. I’ll note that there are some differences (many of which are identified below) that impact that compatibility, though they don’t outright break it. They also indicate that this document isn’t complete and that they have already identified future additions to the document, including site-specific roles and responsibilities, EOC roles and responsibilities, and guidance on resource management. In regard to the roles and responsibilities, there is virtually no content in this document on organizations below the Section Chief level, other than general descriptions of priority activity. I’m not sure why they held off on including this information, especially since the ICS-specific info is reasonably universal.

I greatly appreciate some statements they make on the application of Unified Command, saying that it should only be used when single command cannot be established. They give some clarifying points within the document with some specific considerations, but make the statement that “Single command is generally the preferred form of incident management except in rare circumstances where unified command is more effective” and reinforce that, if Unified Command is implemented, it should be regularly reassessed. It’s quite a refreshing perspective compared to what we so often see in the US, which practically espouses Unified Command as the go-to option. Unified Command is hard, folks. It adds a lot of complexity to incident management. While it can solve some problems, it can also create some.

There are several observations I have on ICS-related organizational matters:

  • They use the term EOC Director. Those who have been reading my stuff for a while know that I’m really averse to this term as facilities have managers. They also suggest that the term EOC Command could be used (this might even be worse than EOC Director!).
  • While they generally stick with the term Incident Commander, they do address a nuance where Incident Manager might be appropriate (they use ‘manager’ here but not for EOCs??). While I’m not sure that I’m sold on the title, they suggest that an incident such as a wide-reaching public health emergency with no fixed site is actually managed rather than commanded. So in this example, the person in charge from the Health Department would be the Incident Manager. It’s an interesting nuance that I think warrants more discussion.
  • The document refers several times to the IC developing strategies and tactics. While the IC certainly may have input to these, strategies and tactics are typically reserved for the Operations Section.
  • There is an interesting mention in the document that no organization has tactical command authority over any other organization’s personnel or assets unless such authority is transferred. This is a really nuanced statement. When an organization responds to an incident and acknowledges that the IC is from another organization, the new organization’s resources are taking tactical direction from the IC. Perhaps this is the implied transfer of authority? This statement needs a lot of clarification.
  • Their system formally creates the position of Scribe to support the Incident Commander, while the EOC Director may have a Scribe as well as an Executive Assistant. All in all, I’m OK with this. Especially in an EOC, it’s a reflection of reality; the Executive Assistant in particular is not granted the authority of a Deputy but is more than a Scribe. I often see this position filled by a Chief of Staff.
  • The EOC Command Staff (? – they don’t clarify what this group is called in an EOC) includes a Legal Advisor. This is another realistic inclusion.
  • They provide an option for an EOC to be managed under Unified Command. While the concept is maybe OK, ‘command’ is the wrong term to use here.
  • The title of Emergency Information Officer is used, which I don’t have any particular issue with. What’s notable here is that while the EIO is (usually) a member of the Command Staff, the document suggests that if the EIO is to have any staff, particularly for a Joint Information Center, they are moved to the General Staff and placed in charge of a new section named the Public Information Management Section. (A frustration here is that the position is called the EIO, but the section is named Public Information Management.) Regardless of what it’s called, or whether or not there is a JIC, I don’t see a reason to move this function to the General Staff.
  • Aside from the notes above, they offer three organizational models for EOCs, similar to those identified in NIMS.
  • More than once, the document tasks the Operations Section only with managing current operations, with no mention of its key role in the planning process to develop tactics for the next operational period.
  • They suggest other functions being included in the organization, such as Social Services, COOP, Intelligence, Investigations, and Scientific/Technical. It’s an interesting callout, though they don’t specify how these functions would be included. I note this because they refer to Operations, Planning, Logistics, and Finance/Admin as functions (which is fine), but calling these activities ‘functions’ as well leads me to think they intend for new sections to be created for them. Yes, NIMS has evolved to make allowances for some flexibility in the organization of Intel and Investigations, but something like Social Services (for victims) is clearly a function of Operations. And while I appreciate their mention of COOP, COOP is generally a very department-centric function. A continuity plan could certainly be activated while the broader impacts of the incident are being managed, but COOP is really a separate line of effort; it should certainly be coordinated with the incident management structure, but I’m not sure it should be part of it, though I’m open to discussion on this one.
  • I GREATLY appreciate their suggestion of EOC personnel being involved in planning meetings of incident responders (ICP). This is a practice that can pay significant dividends. What’s interesting is that the document goes into this measure of detail here, yet is very vague or lacking detail in other areas.

The document has considerable content using different terminology in regard to incidents and incident complexity. First off, they introduce a classification of incidents, using the following terminology:

  • Small
  • Large
  • Major
  • Local, Provincial, and National Emergencies

Among these, Major incidents and Local/Provincial/National Emergencies can be classified as ‘Complex Incidents’. What’s a complex incident? They define it as an incident that involves many factors which cannot be easily analyzed or understood; it may be prolonged, large scale, and/or involve multiple jurisdictions. While I understand that perhaps they wanted to simplify the language associated with Incident Types, even with the very brief descriptions the document provides for each classification, these are very vague. Layering the term ‘complex incident’ over the top of this makes it considerably confusing.

**Edit – I realized that the differentiator between a small incident and a large incident is the number of responding organizations. They define a small incident as a single-organization response, and a large incident as a multi-agency response. So the ‘typical’ two-car motor vehicle accident that occurs in communities everywhere, requiring fire, EMS, law enforcement, and a tow, is a LARGE INCIDENT????? Stop!

Another note on complex incidents… the document states that for complex incidents involving multiple response organizations, common objectives will usually be high level, such as ‘save lives’ or ‘preserve property’, with each response organization developing its own objectives, strategies, and tactics. I can’t buy into this. Life safety and property preservation are priorities, not objectives. And allowing individual organizations to develop their own objectives, strategies, and tactics pretty much breaks the incident management organization and any unity of effort that could possibly exist. You are either part of the response organization or you are not.

Speaking of objectives, the document provides a list of ‘common response objectives’ such as ‘save lives’ and ‘treat the sick and injured’. These are not good objectives by any measure (in fact they can’t be measured) and should not be included in the document as they only serve as very poor examples.

So in the end, there is a lot in this document that is consistent with incident management practices, along with some good additions, some things that warrant further consideration, and some things which I strongly recommend against. There are certainly some things in here that I’d like to see recognized as best practices and adopted into NIMS. I recognize the bias I have coming from the NIMS world, and I tried to be fair in my assessment of Ontario’s model, examining it for what it is and on its own merit. Of course, anyone who has been reading my posts for a while knows that I’m just as critical of NIMS and related documents out of the US, so please understand that my (hopefully) constructive comments are not intended to create an international incident. I’m a big fan of hockey and poutine – please don’t take those away from me!

I’m always interested in the perspectives of others. And certainly, if you were part of the group that developed this document, I’d love to hear about some of your discussions and how you reached certain conclusions, as well as what you envision for the continued evolution of the Provincial IMS.

© 2021 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC®