Developing Incident After-Action Reports

Incident and event after action reports (AARs) are extremely important for identifying the successes and challenges we faced in our efforts. Just like our evaluation efforts in exercises, many valuable lessons can be learned and effective practices identified from incidents and events. Yet for as much as incident and event AARs are encouraged, there are often problems with how these are developed.

While the quality of exercise after action reports is often not up to par, a defined process of exercise evaluation, along with a suggested AAR format, has been available to us and ingrained in emergency management practice for a long time via the Homeland Security Exercise and Evaluation Program (HSEEP). While some concepts of exercise evaluation can be applied to incident and event evaluation, we need a very different approach to be most effective.

FEMA has been promoting a Continuous Improvement concept for emergency management for several years. Incident and event evaluation is part of continuous improvement, though continuous improvement is intended to permeate much more of our daily and incident operations. While FEMA’s program has some good information that applies to incident and event evaluation, there are some important things I feel are missing.

Perhaps the most significant difference in our approach to incident and event evaluation vs exercise evaluation is the evaluation focus. Exercises, right from our very first steps of design, are designed explicitly for evaluation. The identification of capabilities and exercise objectives gives direction to our design and directly informs our evaluation of the exercise. Essentially, the intent and focus of evaluation is baked in from the start. For incidents and events, however, it is not.

Because evaluation is not a primary intent of incidents and events, we generally need to determine our evaluation strategy afterwards. That strategy absolutely must begin with identifying what we want to evaluate, a critical element not included in FEMA’s Continuous Improvement guidance. Without a defined focus, the discovery process lacks direction and is likely to explore areas of incident or event operations that are lower priorities for stakeholders. Determining the focus of the evaluation is similar to developing objectives, and the focus areas should be specific enough to give proper direction to the effort. For example, having done numerous COVID-19 AARs, it’s not enough to say that we will evaluate ‘vaccination’. Vaccination is a very broad activity, so we should identify specific aspects to focus on, such as equity of distribution or vaccine point of dispensing (POD) operations. Obviously, multiple focus areas can be identified based upon what is most important to stakeholders. And no, incident objectives should not serve as your focal points. They are operational objectives that have nothing to do with evaluation, though your evaluation will likely take the incident objectives (and associated actions) into consideration.

FEMA’s Continuous Improvement guidance provides a lot of great insight for the discovery process. The most common tools I use are focus groups, interviews, document reviews, and surveys. Focus groups and interviews allow people to tell their experiences from their own perspectives. These yield both facts and opinions, and both are valid in the AAR process so long as they are handled properly; discerning between the two is important.

Document reviews are also important. Typically I look at documents developed before the incident (mostly plans) and those developed during the incident (such as press releases, incident action plans, situation reports, and operational plans). While documents developed during the incident typically tell me what was done or what was intended to be done, the documents developed prior to the incident typically provide me with a standard from which to work.

There are a couple of important caveats with this:

1) Many plans are operationally inadequate, so they may not have been able to be followed.

2) Many organizations don’t reference their plans, regardless of quality.

As such, a big part of my document review is also determining the quality of the documents and if they were referenced during the incident or event. It may very well be that the actions taken were better than what was in the plans.

Surveys… there is so much to say about surveys that it probably deserves its own blog post. Surveys can be great tools, but most people design poor ones. Surveys should be succinct and to the point. You will want to ask a lot of questions, but resist the urge: the more questions you ask, the lower your response rate. Focus on a few questions that will give you great data.

We then go to writing, which involves organizing our information, forming key observations (by focus area), writing a narrative analysis for each observation, and developing one or more recommendations for each observation. The analysis is where many AARs, including those for exercises, miss the mark. The analysis needs to contextualize the observation and justify the recommendations. It should provide sufficient detail for someone not knowledgeable in that observation (or of the incident) to have a reasonable understanding of the associated issues. Remember that an AAR may be referenced for years to come and can also be used to support budgets, grant applications, and obviously the corrective actions that are identified. A good analysis is necessary and should certainly be more than a couple of sentences. Be sure to identify strengths and effective practices, not just lessons learned and challenges.
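As a rough illustration of the structure described above, the write-up can be organized as observations grouped by focus area, each carrying its own analysis and recommendations. The data model below is a sketch of my own; the field names are illustrative, not part of any standard:

```python
from dataclasses import dataclass, field

@dataclass
class Observation:
    """One key observation within an AAR focus area (illustrative model)."""
    focus_area: str          # e.g., "Vaccine POD operations" (hypothetical example)
    summary: str             # the observation itself, a strength or a challenge
    is_strength: bool        # capture effective practices, not just challenges
    analysis: str            # narrative context that justifies the recommendations
    recommendations: list[str] = field(default_factory=list)

def group_by_focus_area(observations: list[Observation]) -> dict[str, list[Observation]]:
    """Organize observations by focus area for the AAR write-up."""
    grouped: dict[str, list[Observation]] = {}
    for obs in observations:
        grouped.setdefault(obs.focus_area, []).append(obs)
    return grouped
```

However the information is actually tracked, the point is that every observation should travel with its analysis and at least one recommendation, sorted under a stakeholder-driven focus area.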

I do not advocate using the HSEEP AAR template for incident and event evaluations. Beyond an awkward fit for some of the ‘fill-in-the-box’ information, the overall structure is not supportive of what an incident or event AAR needs to include. I suggest writing the AAR like a professional report. I’d include an executive summary, table of contents, research methodology, observations/analysis/recommendations, an incident or event timeline, and summary of recommendations (I do still like to use the traditional HSEEP improvement plan matrix for this). I tend to have a lot of citations throughout the document (typically I footnote these). Citations can include standards, such as NIMS, references (plans), media articles, and more.

A couple of notes: 1 – When planning our management of an event, we can be more proactive in evaluation by including it as a deliberate component of our efforts. 2 – Incident evaluation can begin during the incident by tasking an incident evaluator.

Incident and event evaluation can be daunting to approach. It requires endorsement from the highest levels to ensure cooperation and access to information. Honesty is what is needed, not sugar coating. Far too many AARs I’ve seen for exercises, incidents, and events are very soft and empty. Remember that we aren’t evaluating people; rather, we are evaluating plans, processes, systems, and decisions. The final AAR should be shared with stakeholders so they can learn and apply corrective actions that may be relevant to them. Given most state public information laws, the AAR may need to be made available to the public, which is all the more reason to ensure that it is professionally written and that observations have quality analysis, as members of the public may require context. I’ve also seen many elected and appointed officials (and legal counsels) be reluctant to have written reports, or written reports with much detail, because of freedom of information laws. While I understand that accountability and transparency can create challenges, we must remember that government works on behalf of the people, and acknowledging mistakes and shortcomings (as well as successes) is important to the continuous improvement of the services we provide.

What is your approach with incident and event AARs? Where do you see that we need to improve this important process?

© 2024 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC®

A National Disaster Safety Board

You’ve heard of the National Transportation Safety Board (NTSB), right? If not, the nitty gritty of it is that they are an independent federal accident investigation agency. They determine probable cause of the full range of major transportation incidents, typically putting forward safety recommendations. They are granted some specific authorities related to these investigations, such as being the lead federal agency to investigate them (absent criminal aspects) and they maintain a schedule of deployment-ready teams for this purpose.  They can conduct investigative hearings (ever see the film Sully?) and publish public reports on these matters. Overall, I’ve had positive interactions with NTSB representatives and have found their work to be highly effective.

While certainly related to emergency management, the main purpose for my quick review of the NTSB in this post is to provide a starting point of understanding for Congressional legislation urging the formation of a National Disaster Safety Board (NDSB). The draft bill for discussion can be found here. This bill has been put forth with bi-partisan sponsors in both the US Senate and the House of Representatives.

The purpose of the NDSB, per this bill, is:

  1. Reduce future losses by learning from incidents, including their underlying factors.
  2. Provide lessons learned on a national scale.
  3. Review, analyze, and recommend without placing blame.
  4. Identify and make recommendations to address systemic causes of incidents and of loss from incidents.
  5. Prioritize efforts that focus on life safety and injury prevention, especially with regard to disproportionately impacted communities.

To execute this mission, the bill provides that the NDSB will have the authority to review incidents with 10 or more fatalities; may self-determine the need for board review of an incident; and shall have the full ability to investigate, review, and report on incidents.

The bill directs the NDSB to coordinate with all levels of government to identify and adopt standard methods of measuring impacts of disasters to provide for consistent trend analysis and comparisons, and to ensure that these standards are uniformly applied. The bill requires the NDSB to coordinate with all levels of government in their investigations during incident responses, and to participate in the incident command system for coordination of efforts as well as investigative purposes. Affected authorities shall have an opportunity to review the NDSB report 30 days prior to publication.

The NDSB will comprise seven board members, selected by the President from a slate of candidates provided by both houses of Congress, with no more than four members affiliated with the same political party, and with all members having technical and/or professional qualifications in emergency management, fire management, EMS, public health, engineering, or the social and behavioral sciences.

There is a lot of other legalese and detail in the bill, but I’m happy to find that the language supports coordination among and with federal agencies, including FEMA, NIST, NTSB, and others; and also has an emphasis on investigating impacts to disproportionately impacted communities. The bill also charges the NDSB with conducting special studies as they see fit and providing technical support for the implementation of recommendations.

I’m thrilled with this effort and I’m hopeful the bill progresses to law. We have a history of outstanding research from academic institutions and after action reports from government entities, all of which should continue, but it’s significant that the NDSB would establish standards and consistency in how we examine disasters over time. We’ve seen how impactful the NTSB has been since its inception in 1967, and I feel the NDSB could have an even greater impact, examining a broader spectrum of disasters. This is an effort which has long been encouraged by various emergency management related groups. The NDSB, I suspect, will also support a stronger and more defined FEMA, as well as strengthen all aspects of emergency management at all levels.

What thoughts do you have on the NDSB? What do you hope will come of it?

© 2020 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC

It’s Not Too Late To Prepare

The phrase I’ve been using lately when I speak to people has been “It’s not too late to prepare”.  Many people perceive that in the middle of a disaster we are unable to prepare.  Quite the contrary, we have the potential to integrate all of our preparedness steps into a response.  Because we have problems in front of us that need to be addressed, we have an opportunity to continuously improve, ensuring that organizationally we are offering the very best we can. 

There is a reason why there isn’t a mission area for preparedness in the National Preparedness Goal.  This is because preparedness is ongoing.  It’s not a separate or distinct activity.  Rather, it comprises activities that support all mission areas, no matter when they are actioned.  Preparedness is continuous.

Assessment

Assessment is a key activity within preparedness.  In fact, assessment is foundational in understanding what’s going on.  During a disaster, good management practices dictate that we should be monitoring our response and adjusting as needed.  What exactly should we be monitoring?  Similar to evaluating an exercise, consider the following:

  • What was the effectiveness of deliberate planning efforts? 
    • Were planning assumptions correct?
    • Was the concept of operations adequate in scope and detail? 
    • What was lacking?
    • What worked well?
  • What was the effectiveness of plan implementation?
    • If aspects of plan implementation need improvement, what was the reason for the shortfall?
      • A poor plan
      • Lack of job aids
      • Lack of/poor/infrequent training
      • Lack of practice
      • Lack of the proper resources or capabilities
      • The plan wasn’t followed
  • Did resources and capabilities meet needs?  If not, why?

Planning

While some planning gaps will require a longer time period to address, I’m aware of many jurisdictions and organizations that have been developing plans in the midst of the pandemic.  They recognized a need to have a plan and convened people to develop those plans.  While some of the planning is incident-specific, many of the plans can be utilized in the future as well, either in the form they were written or adjusted to make them more generally applicable without the specific details of this pandemic.  I’d certainly suggest that any plans developed during the pandemic be reviewed afterwards against the same points listed above under ‘Assessment’ before they are included in your organization’s catalogue of plans. Also consider that we should be planning for contingencies, as other incidents are practically inevitable.

Training

Training is another fairly easy and often essential preparedness activity which can be performed in the midst of a disaster.  Many years ago, FEMA embraced the concept of training during disasters.  FEMA Joint Field Offices mobilize with training personnel.  These personnel not only provide just-in-time training for new personnel or to introduce new systems and processes, but also provide continuing training on a variety of topics throughout response and recovery, producing a more knowledgeable workforce.  I’ve seen some EOCs around the country do the same.  Recently, my firm was contracted to provide remote training for the senior leadership of a jurisdiction on topics such as continuity of operations and multi-agency coordination, which are timely matters for them as they continue to address needs related to the pandemic.

Exercises

While assessments, planning, and training are certainly activities that may take place during a disaster, exercises are probably less likely, but may, if properly scoped and conducted, still have a place.  Consider that the military constantly conducts what they call battle drills, even in active theaters of war, to ensure that everyone is familiar with plans and protocols and practiced in their implementation.  Thinking back on new plans being written in the midst of the pandemic, it’s a good idea to validate each plan with a tabletop exercise.  We know that even the best written plans will still have gaps that, on a blue-sky day, we would often identify through an exercise.  Plans written in haste during a crisis are even more prone to gaps, simply because we probably don’t have the opportunity to think everything through and be as methodical and meticulous as we would like.

A tabletop exercise doesn’t have to be complex or long, but it’s good to do a talk-through of the plan.  Depending on the scope of the plan and the depth of detail (such as a new procedure), conducting a walk-through of the major movements of that plan (that’s a drill) can help ensure the validity of the plan and identify any issues in implementation.  While you aren’t likely to go to the extent of developing an ExPlan, an evaluator handbook, or exercise evaluation guides (yes, that’s totally OK), it’s still good to lay out a page of essential information, including objectives and methodology, since taking the time to write these things down is one more step toward ensuring the validation is effective.  Documentation is still important, and while it can be abbreviated, it shouldn’t be cut out entirely.  It’s also extremely important to isolate the exercise, ensuring that everyone is aware that what is being performed or discussed is not yet part of the response activity.
Evaluators should still give you written observations and documented feedback from participants.  You probably don’t need a full AAR, especially since the observations are going to be put into an immediate modification of the plan in question, but the documentation should still be kept together as there may still be some observations to record for further consideration. 

Evaluation and After Action

Lastly, incident evaluation is something we shouldn’t be missing.  We learn a lot about incident evaluation from exercise evaluation.   I’ve written on it before, which I encourage you to look at, but the fundamentals are ensuring that all actions and decisions are documented, that a hotwash is conducted (or multiple hotwashes to capture larger numbers of people or people who were engaged in very different functions), and that an after action report is developed.   Any incident should provide a lot of lessons learned for your organization, but the circumstances of a pandemic amplify that considerably.  Ensure that everyone in your organization, at all levels, is capturing observations and lessons learned daily.  Ensure that they are providing context to their observations as well, since once this is over, they may not recall the details needed for a recommendation. You may want to consider putting together a short form for people to capture and organize these observations – essentially identifying the issue, providing context, and putting forth a recommendation to address the issue. Don’t forget to encourage people to also identify best practices.  In the end, remember that if lessons learned aren’t actually applied, nothing will change. 
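The short capture form suggested above (identify the issue, provide context, put forth a recommendation, and flag best practices) could be sketched as a simple record plus a formatter for collation later. This is my own illustrative sketch; the field names are not from any standard:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ObservationEntry:
    """Daily observation capture during an incident (illustrative form)."""
    captured_on: date
    observer: str
    issue: str                   # what was observed
    context: str                 # details the observer may not recall later
    recommendation: str          # proposed way to address the issue
    best_practice: bool = False  # also flag effective practices, not just problems

def format_entry(entry: ObservationEntry) -> str:
    """Render one entry as a plain-text record for the eventual AAR effort."""
    kind = "Best practice" if entry.best_practice else "Issue"
    return (f"[{entry.captured_on.isoformat()}] {entry.observer} - {kind}: "
            f"{entry.issue}\n  Context: {entry.context}\n  "
            f"Recommendation: {entry.recommendation}")
```

Whether it's a paper form, a spreadsheet, or something like this, the essential point is capturing issue, context, and recommendation together, daily, while memories are fresh.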

I welcome any insight on how we can continue to apply preparedness in the midst of a disaster. 

Be smart, stay safe, stay healthy, and be good to each other. 

©2020 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC

Incident Evaluation

I’ve written at length about the importance of quality evaluation of exercises.  Essentially, if we don’t evaluate exercises, and do it well, the benefits of the exercises are quite limited.  Generally, we don’t see a benefit to incidents.  By their very nature, incidents threaten and impact life, property, and the environment, things we don’t view as being beneficial.  However, benefits are often a product of opportunity, and we absolutely should take the opportunity to evaluate our responses.

Many incidents do get evaluated, but through research after the fact.  We retrace our steps, review incident documents (such as incident action plans), interview personnel, and examine dispatch logs.  These efforts usually paint a decent picture of intent and result (things that are often different), but often miss the delta, the difference between the two, as well as other nuances.  When we evaluate an exercise, we do so in real time.  The evaluation effort is best done with preparation.  Our evaluation plans, methodologies, and personnel are identified in the design phase of the exercise.  Just as we develop emergency operations plans and train personnel to respond, we can develop incident evaluation plans and train personnel to evaluate incident responses.

Understandably, a hurdle we might have is the availability of personnel to dedicate solely to evaluation, especially on larger incidents – but don’t be afraid of asking for mutual aid just to support incident evaluation (just be sure to include them in your preparedness efforts).  Just as regional exercise teams should be developed to provide cooperative efforts in exercise design, conduct, and evaluation; incident evaluation teams should be developed regionally.  To me, it makes sense for many of these personnel to be the same, as they are already familiar with how to evaluate and write up evaluations.

In exercises, we often use Exercise Evaluation Guides (EEGs) to help focus our evaluation efforts.  These are developed based upon identified Core Capabilities and objectives, which are determined early in the exercise design process.  While we don’t know the specific objectives we might use in an incident, we can identify these in general, based upon past experiences and our preparedness efforts for future incidents.  Similarly, our emergency planning efforts should be based around certain Core Capabilities, which can help inform our incident evaluation preparedness efforts.  Job aids similar to EEGs, let’s call them incident evaluation guides (IEGs), can be drafted to prepare for incident evaluation, with adjustments made as necessary when an incident occurs.

Evaluating an incident, in practice, is rather similar to how we would evaluate an exercise, which is why the training for these activities is relatively portable.  Evaluation efforts should avoid evaluating individuals, instead focusing on the evaluation of functions and processes.  Don’t reinvent the wheel – evaluate based upon documented (hopefully!) plans and procedures and use the Homeland Security Exercise and Evaluation Program (HSEEP) standards to guide your process. Incident evaluation must be managed to ensure that evaluation gaps are minimized and that evaluation progresses as it should.  Observations should be recorded and, just as we would for an exercise, prepared for and eventually recorded in an after action report (AAR).

I favor honest after action reports.  I’ve seen plenty of after action reports pull punches, not wanting the document to reflect poorly on people.  Candidly, this is bullshit.  I’ve also heard many legal counsels advise against the publication of an after action report at all. Similarly, this is bullshit.  If our actions, and the need to sustain or improve certain actions or preparations, are not properly recorded, necessary changes are much less likely to happen.  If an AAR isn’t developed, a corrective action plan certainly won’t be, which leaves us no trackable means of managing our improvements and disavows our intent to do so.

As a profession, public safety must always strive to improve.  We have plenty of opportunity to assess our performance, not just through exercises, which are valuable, but also through the rigors of incident responses.  Prepare for incident evaluation and identify triggers in your emergency plans for when evaluation will be employed, how, and who is involved.  Begin evaluation as early as possible in an incident – there are plenty of lessons learned in the early, and often most critical moments of our incident response.  Finally, be sure to document lessons learned in an AAR, which will contribute to your overall continuous improvement strategy.

How does your agency accomplish incident evaluation?  If you don’t, why?

Need help with the evaluation of incidents?  We are happy to help!

© 2017 – Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC