When to AAR

A discussion with colleagues last week, both on and off social media, on the development of after-action reports (AARs) for the COVID-19 pandemic identified some thoughtful perspectives. To contextualize, the pandemic is arguably the longest and largest response the world has ever faced. Certainly, no one disputes the necessity for organizations to develop AARs, as there has been an abundance of lessons learned that transcend all sectors. Thankfully, it's not often we are faced with such a long incident, but in these circumstances we need to reconsider our traditional way of doing things, which has generally been to develop an AAR at the conclusion of the incident.

One central aspect of the discussions was the timing of the AARs. When should we develop an AAR for an incident? I certainly think that with most incidents we can safely AAR when the incident is complete, particularly given that most incidents don't last as long as the pandemic has. The difficulty with the pandemic, relative to AARs, is time. The more time goes on, the more we focus on recent concerns and the less we remember of the earlier parts of the response. This likely remains within tolerable limits for an incident that lasts several weeks or even up to a few months, but eventually we need to recognize that the longer we go without conducting the after-action process, the more value we lose. Yes, we can recreate a lot through documentation, but human inputs are critical to the AAR process, and time severely erodes those. Given this, I suggest the ideal practice in prolonged incidents is to develop interim AARs to ensure that chunks of time are being captured.

Another aspect related to this is determining what measure of completion we are using for the incident. The vast majority of AARs focus mostly on response, not recovery. This is an unfortunate symptom of the response-centric mentality that persists in emergency management. We obviously should be conducting AARs after the response phase, but we also need to remember to conduct them once the recovery phase is substantially complete. Given that recovery often lasts much longer than the response, we certainly shouldn't wait until recovery is complete to develop a single AAR for the incident; rather, we should be developing an AAR, at a minimum, at the substantial completion of response and another at the substantial completion of recovery.

Yet another complication in this discussion is that timing is going to be different for different organizations. I presently have some clients for which the pandemic is much less of an operational concern than it was a year ago, especially with a vaccinated workforce. So much less of a concern, in fact, that they have largely resumed normal operations, though obviously with the continuation of some precautionary measures. Other organizations, however, are still in a full-blown response, while still others are somewhere in the middle. This means that as we go through time, the pandemic will largely be over for certain organizations and jurisdictions around the world, while others are still consumed by the incident. While the WHO will give the official declaration of the conclusion of the pandemic, it will be over much sooner for a lot of organizations. Organizations should certainly be developing AARs when they feel the incident has substantially ended for them, even if the WHO has not yet declared the pandemic to have concluded.

Consider that the main difference between evaluating an exercise and evaluating an incident is that we begin the exercise with the goal of evaluation. As such, evaluation activities are planned and integrated into the exercise, with performance standards identified and staff dedicated to evaluation. While we evaluate our operations for effectiveness during a response and into recovery, we are generally adjusting in real time to this feedback rather than capturing the strengths and opportunities for improvement. Be it during the incident or after, we need to deliberately foster the AAR process to not only capture what was done, but to help chart a path to a more successful future. I've been preaching about the value of incident evaluation for several years, and have been thankful to see that FEMA has developed a task book for such.

Given the complexity and duration of the pandemic, I started encouraging organizations to develop interim AARs more than a year ago, and in fact supported a client in developing their initial response AAR just about a year ago. FEMA smartly assembled an 'Initial Assessment Report' of its early response activity through September 2020, though unfortunately I've not seen anything since. A question about naming came up in the discussions I had, with some suggesting that the term 'AAR' should be reserved for after the incident and a different term used for any other reports. I partially agree. I think we should still call it what it is: even if it's developed in the midst of an incident, it is still an after-action report, that is, an analysis of actions we've taken within a defined period of time. After all, it's not called an 'after-incident report'. That said, I do think any AARs developed during the incident warrant some clarification, such as the inclusion of a descriptor like 'interim' or 'phase 1, 2, 3, etc.', or whatever is most suitable. I don't think we need anything standardized so long as it's fairly self-explanatory.

Have you already conducted an AAR for the pandemic? Do you expect to do another?

© 2021 Tim Riecker, CEDP

Emergency Preparedness Solutions, LLC®

Metrics and Data Analytics in Emergency Management

I’ve lately seen some bad takes on data analytics in emergency management. For those not completely familiar, data analytics is a broad-based term applied to all manner of data organization, manipulation, and modeling to bring out the most valuable perspectives, insights, and conclusions which can better inform decision-making. Obviously, this can be something quite useful within emergency management.

Before we can even jump into the analysis of data, however, we need to identify the metrics we need. This is driven by decision-making, as stated above, but also by operational need, measurement of progress, and reporting to various audiences, from our own common operating picture, to elected officials, to the public. In identifying what we are measuring, we should regularly assess who the audience is for that information and why the information is needed.

Once we’ve identified the metrics, we need to further explore the intended use and the audience, as that influences what types of analysis must be performed with the metrics and how the resultant information will be displayed and communicated.

I read an article recently from someone who made themselves out to be the savior of a state emergency operations center (EOC) by simply collecting some raw data and putting it into a spreadsheet. While this is the precursor of pretty much all data analysis, I’d argue that the simple identification and listing of raw data is not analytics. It’s what I’ve come to call ‘superficial’ data, or what someone on Twitter recently remarked to me as ‘vanity metrics’. Examples: number of people sheltered, number of customers with utility outages, number of people trained, number of plans developed.

We see a lot of these kinds of data in FEMA's annual National Preparedness Report and the Emergency Management Performance Grant (EMPG) 'Return on Investment' report generated by IAEM and NEMA. These reports provide figures on dollars spent on certain activities, assign numerical values to priorities, and state how much of a certain activity was accomplished within a time period (e.g. x number of exercises were conducted over the past year). While there is a place for this data, I'm always left asking 'so what?' after seeing these reports. What does that data actually mean? They simply provide a snapshot in time of mostly raw data, which isn't very analytical or insightful. It's certainly not something I'd use for decision-making. Both of these reports are released annually, leaving no excuse not to provide some trend and comparative analysis over time, much less across geography. Though even in a snapshot-in-time type of report, there can be a lot more analysis conducted that simply isn't done.

The information we report should provide us with some kind of insight beyond the raw data. Remember the definition I provided in the first paragraph… it should support decision-making. This can be for the public, the operational level, or the executive level. Yes, there are some who simply want ‘information’ and that has its place, especially where political influence is concerned.

There are several types of data analytics, each suitable for examining certain types of data. What we use can also depend on our data being categorical (i.e. we can organize our data into topical 'buckets') or quantitative. Some data sets can be both categorical and quantitative. Some analysis examines a single set of data, while other types support comparative analysis between multiple sets of data. Data analytics can be as simple as common statistical analysis, such as range, mean, median, mode, and standard deviation, while more complex data analysis may use multiple steps and various formulas to identify things like patterns and correlation. Data visualization is then how we display and communicate that information, through charts, graphs, geographic information systems (GIS), or even infographics. Data visualization can be as important as the analysis itself, as this is how you are conveying what you have found.
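To make the simpler end of that spectrum concrete, here's a minimal sketch in Python of the common descriptive statistics mentioned above, using a hypothetical week of shelter-population counts (the numbers are purely illustrative):

```python
import statistics

# Hypothetical daily shelter-population counts for one week (illustrative only)
shelter_counts = [120, 150, 150, 90, 200, 175, 160]

data_range = max(shelter_counts) - min(shelter_counts)  # spread of the data
mean = statistics.mean(shelter_counts)                  # average value
median = statistics.median(shelter_counts)              # middle value when sorted
mode = statistics.mode(shelter_counts)                  # most frequent value
std_dev = statistics.stdev(shelter_counts)              # sample standard deviation

print(f"range={data_range}, mean={mean:.1f}, median={median}, "
      f"mode={mode}, stdev={std_dev:.1f}")
```

Even this much goes beyond a raw list of numbers: the standard deviation, for instance, tells you how volatile the shelter population is from day to day, which matters for staffing and supply decisions.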

Metrics and analytics can and should be used in all phases of emergency management. This is also an activity that is best planned for in advance, which establishes consistency and enables you to engage in it efficiently. Your considerations for metrics to track and analyze, depending on the situation, may include:

  • Changes over time
    • Use of trend lines and moving averages may also be useful here
  • Cost, resources committed, resources expended, status of infrastructure, and measurable progress or effectiveness can all be important considerations
  • Demographics of data, which can be of populations or other distinctive features
  • Inclusion of capacities, such as with shelter data
  • Comparisons of multiple variables in examining influencing factors (e.g. loss of power influences the number of people in shelters)
    • Regression modeling, a more advanced application of analytics, can help identify what factors actually do have a correlation and what the impact of that relationship is.
  • Predictive analytics help us draw conclusions based on trends and/or historical data
  • This is a rabbit you can chase for a while, though you need to ensure your assumptions are correct. An example: a hazard of a certain intensity occurring in a certain location can be expected to produce certain impacts (which is much of what we do in hazard mitigation planning). But carry that further. Based on those impacts, we can estimate the capabilities and capacities needed to respond and protect the population, and the logistics needed to support those capabilities.
  • Consider that practically any data that is location-bound can and should be supported with GIS. It’s an incredible tool for not only visualization but analysis as well.
  • Data analytics in AARs can also be very insightful.

As I mentioned, preparing for data analysis is important, especially in response. Every plan should identify the critical metrics to be tracked. While many are intuitive, there is a trove of Essential Elements of Information (EEI) provided in FEMA’s Community Lifelines toolkit. How you will analyze the metrics will be driven by what information you ultimately are seeking to report. What should always go along with data analytics is some kind of narrative not only explaining and contextualizing what is being shown, but also making some inference from it (i.e. what does it mean, especially to the intended audience).

I’m not expecting that everyone can do these types of analysis. I completed a college certificate program in data analytics last year and it’s still challenging to determine the best types of analysis to use for what I want to accomplish, as well as the various formulas associated with things like regression models. Excel has a lot of built-in functionality for data analytics and there are plenty of templates and tutorials available online. It may be useful for select EOC staff as well as certain steady-state staff to get some training in analytics. Overall, think of the variables that can be measured: people, cost, status of infrastructure, resources… And think about what you want to see from that data now, historically, and predicted into the future. What relationships might different variables have that can make the data even more meaningful? What do we need to know to better support decisions?
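For those starting simple, the moving average mentioned in the list above is one of the easiest smoothing techniques to pick up, whether in Excel or in a few lines of code. A minimal sketch with hypothetical daily counts:

```python
# A 3-day moving average to smooth noisy daily counts
# (hypothetical data, illustrative only)
daily_counts = [10, 14, 9, 22, 18, 25, 30, 21]

window = 3
moving_avg = [
    sum(daily_counts[i:i + window]) / window
    for i in range(len(daily_counts) - window + 1)
]

print([round(v, 1) for v in moving_avg])
# → [11.0, 15.0, 16.3, 21.7, 24.3, 25.3]
```

The smoothed series makes the underlying upward trend easier to see than the day-to-day jumps in the raw counts, which is exactly the kind of insight a decision-maker needs from a trend line.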

Analytics can be complex. It will take deliberate effort to identify needs, establish standards, and be prepared to conduct the analytics when needed.

How have you used data analytics in emergency management? What do you report? What decisions do your analytics support? What audiences receive that information and what can they do with it?


EOC Management Platforms

Some recent social media discussion on EOC management systems has prompted far too many thoughts for me to write in small character quantities…

I’ve been fortunate to have been involved in prospecting several systems for EOC management and other workflow management needs, along with obviously having used a multitude of these systems in EOCs and other capacities. I certainly have my preferences among systems, as well as those I’m really not a fan of, which I’m not going to get into here, though I will say there are some smaller but very successful vendors with great products and excellent track records. I think everyone needs to spec these out for themselves. One note, before I speak a bit about that process: you don’t necessarily need a proprietary system. Current technology facilitates file sharing, task management, accessible GIS, and other needs, though full integration and intentional design are just some of the benefits of using a proprietary system. A lot of organizations learned over the past 16 months or so of the pandemic that they can get considerable mileage out of applications like Microsoft Teams and Smartsheet. There are also benefits to systems that users work in on a more regular basis than an EOC management platform, which may only be used during incidents, events, and exercises.

I’ll say there is no definitive right way to spec out a system, but there are a lot of wrong ways to approach it. My observations below are in no way comprehensive, but they hit a lot of the big things I’ve seen and experienced. Also, my observations aren’t highly technical, since that’s not my forte (see my first item below).

Form a Team: (Yeah, we are starting this out like CPG 101.) Bring the right stakeholders together for this. Consider the whole community of your EOC. However you organize your EOC, ensure that elements of the entire organization are represented. Don’t forget agency representatives, finance, GIS, and PIO/JIC. Also include disaster recovery, legal, and obviously IT.

Understand the Situation: (Gosh, CPG 101 has so many uses!) Understand your needs. If you don’t understand your own needs, an outsider certainly won’t. A lot of people and organizations may think they understand their needs, and very likely do, but there is a big difference between stating it and actually digging into it. Also, this should be done BEFORE you meet with vendors (they will likely advise against this, as they want to influence your perspective). It’s important that you do this first so you can establish a standard and see how each vendor/product can meet those needs. NEVER let a vendor define your need.

You may also want to take the opportunity to solicit input from other users. Talk to them, build a survey, etc., to see what features they want and what they don’t want. This also helps build buy-in for eventual implementation, which can be very important.

There are certain fundamentals to be established and decisions that will need to be made up front by your organization.

  • On the IT side, will this solution be self-hosted or vendor hosted?
  • How many users would be ‘normal’ for an incident? What would be a surge number of users?
    • Who are these users (generally: your organization, other organizations)?
    • Are role-based user profiles preferred?
  • Does your organization want to maintain it or will maintenance and updates be part of the contract?
  • What legal information retention requirements exist?
  • What’s your budget?

Most of the needs to be identified are functional. These are the main things you want to use it for. Start with big items such as the ability to develop collaborative EOC action plans and situation reports, dashboard displays, resource tracking, mission tracking, financial tracking and forecasting, etc. Then examine each one more closely, going to a workflow and task analysis. In this, you break each item into tasks and consider:

  • who is responsible for each,
  • who contributes to each,
  • who needs to be aware of each,
  • and who are decision-makers for each;
  • as well as what information is needed for each,
  • what information is tracked, and
  • identifying any outputs or reports (and the key data sets associated with each)

Each task may need to be analyzed deeper as it may have several sub-tasks.

  • Does information need to be routed, and to whom?
  • Are there multiple reviews or approvals?
  • Is anyone outside the organization involved? (i.e. someone who would not have access to the system but would require information from the system)
  • TIP: It’s often helpful to run a bit of a simulation with the people who actually do the tasks, so that what they do can be observed in real time.
  • Be aware of how you are presently limited or influenced by the current technology you use and flag this. It may not be a necessary part of your workflow if it’s dictated by that technology.
  • This is also a good time to question why certain processes are conducted the way they are. And remember: ‘Because we’ve always done it that way’ is not a good answer.
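One way to capture the workflow and task analysis described above is to keep a simple structured record for each task, noting who is responsible, who contributes, who needs awareness, who decides, and what information flows in and out. The sketch below is one possible shape for such a record; the field names and the situation-report example are my own assumptions, not drawn from any particular EOC platform:

```python
from dataclasses import dataclass, field

# Hypothetical task-analysis record; field names are illustrative assumptions.
@dataclass
class TaskAnalysis:
    name: str
    responsible: str                                        # who is responsible
    contributors: list[str] = field(default_factory=list)   # who contributes
    informed: list[str] = field(default_factory=list)       # who needs awareness
    decision_makers: list[str] = field(default_factory=list)
    inputs: list[str] = field(default_factory=list)         # information needed
    outputs: list[str] = field(default_factory=list)        # reports/products
    requires_external_routing: bool = False  # routed outside the organization?

# Example record for one big-item task from the analysis
sitrep = TaskAnalysis(
    name="Develop situation report",
    responsible="Planning Section",
    contributors=["Operations", "GIS", "Logistics"],
    informed=["EOC Manager", "PIO/JIC"],
    decision_makers=["Planning Section Chief"],
    inputs=["field reports", "resource status", "lifeline status"],
    outputs=["situation report", "dashboard feed"],
)
print(sitrep.name, "->", sitrep.responsible)
```

Filling out one of these per task, before ever talking to a vendor, forces the digging-in that distinguishes stating your needs from actually understanding them, and the completed set doubles as your functional spec sheet.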

Consider how information can/should be displayed, including geocoding information for GIS use.

Keep in mind that the technology you eventually obtain should support these processes and tasks, inputs, outputs, and users. The technology implementation may streamline your workflow, but shouldn’t dictate it.

You may also want to talk to colleagues to see what systems they are using; what their opinions are of those systems; and lessons learned with the system, vendor, implementation, etc. They might even give you access to poke around in their system a bit.

Determine Goals and Objectives: Here is where you identify your specifications based on your outcomes above. Once you have specifications you can start approaching vendors. Your IT department should know how to put together a technology specifications package.

Talk to Vendors: Get your specifications out to vendors, see who is interested, and meet with them to discuss. Depending on your organization, this may be a formal process or can be informal. If it’s formal, be sure that everyone understands this is not yet the invitation to bid. This is still an information gathering step.

Product demos can be great opportunities for your team (yes, your whole team should participate) to meet with vendors, see how they can address your needs, and learn what options are out there. Every team member should have the spec sheet with them so they can independently assess how the demonstrated solution meets needs. This also allows vendors to identify additional value and features they can provide, as well as to suggest different approaches to some of your needs. (Again, your team should keep in mind that the product should never dictate the process; the product may streamline the process, but it shouldn’t sacrifice any essential components you have defined.) They may also offer best practices they have experienced in their work with other clients.

Get an understanding of how the system is administered and what help desk support looks like. How much in-house administration is your organization able to do, such as adding new users, creating profiles/roles, and resetting passwords?

This interface with vendors is important. I’ve had vendors simply pitch their off-the-shelf solution with little to no regard for the spec sheet, which is obviously a great reason to ask them to leave. You aren’t likely to get a fully custom-built product (though it’s possible), but in all likelihood what you will be offered is a customized version of their off-the-shelf standard. I generally think this is the best option. Along with being cost effective, you also have reasonable assurances that the foundation of their platform is good since it’s what they are building from.

Speaking of this foundation, ask them who their clients are and ask for references of current users. It’s also totally fair to ask why their solution is better than that of other competitors (name them!), and why some customers may choose to go to another platform.

It’s also good to ask about the future. What does the service contract look like? How are updates done? What time period do you have with the vendor for no-cost adjustments once the system is in place? Is incident support available? How can you be made aware of innovations and how are those prices structured for implementation? Will they put together a training package and will they support the first round of training? How about self-guided training?

Purchase: Based on your vendor demos, you might have some modifications to make to your spec sheet. That’s OK. You’ve seen what’s out there and have had an opportunity to learn. I’d also suggest identifying what you consider to be firm requirements vs features which are desired but not required. Once you’ve gotten all the information you can, conduct your purchasing process as you need to.

Implementation and Maintenance: Implementation can be a considerable challenge, and this is where even the better EOC management platforms fail, either in actual practice or in the opinion of users. I could write an entirely separate post on this alone. The most important things to realize are that:

  1. Change is hard
  2. The system isn’t (likely) used daily

Some people will be excited about the system; others will be stuck in the mud of change. Get people oriented to it and train them. Remember that since the system isn’t used on a regular basis, their skills (even their recall of their login credentials) will atrophy, so training should be a recurring thing for all stakeholders. Everyone needs to know how to log in and navigate the main screen, but not everyone needs to know how to build a situation report. You should also have a just-in-time (JIT) training program, since inevitably people will walk into your EOC cold when a disaster occurs. Consider training that is modular.

Get an exercise in early. This is a great way to reinforce familiarity for users but to also work the kinks out of the system. Ideally, a vendor rep should be present for the exercise so they can see potential issues, even if they can’t fix them on the spot.

Moving into the future, exercise regularly to maintain proficiency and always keep an eye out for opportunities to improve. Recognize that processes evolve over time and technology is obsolete after a couple of years, so your platform should be evolving to avoid stagnation. Maintain a relationship with your vendor but keep an eye on what else is out there. Challenge your vendor to rise to the occasion, be innovative, and to continue meeting your needs. If they can’t, it’s time to look elsewhere.

Through the entire process, interfacing with vendors can be challenging if the right people aren’t involved. My preference is to not be working with someone who has never worked in an EOC or someone who is a ‘typical and generic’ salesperson. Ideally, vendor representatives will have some EOC experience so they can relate to your needs. Your own representative, ideally, should also be able to meet the vendor halfway, being an emergency manager with some tech savvy.

What experiences do you have with EOC management platforms? Any words of wisdom to share about the process?
