Applying What We’ve Learned

The COVID-19 pandemic shattered so many of our planning assumptions: not only assumptions about how a virus would act, spread, and react, but also assumptions about human behavior. Many of our plans accounted for security in the transportation and distribution of vaccines to address theft and violence by people desperate to get their hands on the vaccine (perhaps too many apocalyptic movies led us to that assumption?), yet we also falsely assumed that everyone would want the vaccine. The political divisiveness, faux science, misinformation, disinformation, and members of the public simply not caring enough about each other to take simple actions to prevent spread were largely unanticipated.

I think that had the virus been different, we would have seen things align better with our assumptions. Had the symptoms of the virus been more apparent, and had the mortality rate been higher, I think we would have seen more people wanting to protect themselves and each other. Would this have been fully aligned with our earlier assumptions? No. I think we’ve learned that human behaviors aren’t as easy to generalize as we assumed, and also that the societal and political climate we are in, not just in the US but in many other nations around the world, would still have perpetuated many of the problems we have seen and continue to see during the COVID-19 pandemic.

Where to from here? I’m not a sociologist, but I’m a firm believer that much of what we do in emergency management is rooted in sociology. I’m sure an abundance of papers have already been authored on sociological and societal behaviors during the pandemic, with many more to come. I’m sure there are even some aligned to support and inform the practices of emergency management, with valuable insights we can use in planning and other activities. I look forward to having some time to discover what’s out there (and I always welcome recommendations from colleagues). As for implementation, what I do know is that we shouldn’t necessarily throw away the assumptions we had pre-COVID-19. Most of those assumptions may still be valid under the right circumstances. The challenge is that there are many variables in play that will dictate which assumptions apply. We do need to learn from what we have experienced and are experiencing in the current pandemic, but this doesn’t hit the reset button in any way. It doesn’t necessarily invalidate what we thought to be true; it simply offers an alternative scenario. The next pandemic may yet align with a third set of truths.

While not knowing which assumptions we will see the next time around makes things much more complex, at least we know there is a range of possibilities, and we can devise strategies to address what is needed when it’s needed. What also adds complexity is the reinforced need for plans to be in place for various aspects of a pandemic and to be written to an appropriate level of detail. Most pandemic plans (and other related plans) that were in place prior to COVID-19 simply weren’t written to the level of detail necessary to get the job done. Yes, variables such as assumptions matter, but the fundamental activities largely remain the same. As with many disasters, jurisdictions were scrambling to figure out not only what they needed to do but how, because their plans were written at too high a level. As always, we are challenged to ensure the right amount of flexibility in our plans while still providing enough detail.

© 2022 Tim Riecker, CEDP

Emergency Preparedness Solutions, LLC®

When to AAR

A discussion with colleagues last week, both on and off social media, on the development of after-action reports (AARs) for the COVID-19 pandemic surfaced some thoughtful perspectives. To contextualize, the pandemic is arguably the longest and largest response the world has ever faced. Certainly, no one disputes the need for organizations to develop AARs, as there has been an abundance of lessons learned that transcend all sectors. Thankfully it’s not often we are faced with such a long incident, but in these circumstances we need to reconsider our traditional way of doing things, which has generally been to develop an AAR at the conclusion of the incident.

One central aspect of the discussions was the timing of the AARs. When should we develop an AAR for an incident? I certainly think that with most incidents, we can safely AAR when the incident is complete, particularly given that most incidents don’t last as long as the pandemic has. The difficulty with the pandemic, relative to AARs, is time. The more time goes on, the more we focus on recent concerns and the less we remember of the earlier parts of the response. This likely remains within tolerable limits for an incident that lasts several weeks or even up to a few months, but eventually we need to recognize that the longer we go without conducting the after-action process, the more value we lose. Yes, we can recreate a lot through documentation, but human inputs are critical to the AAR process, and time severely erodes those. Given this, I suggest the ideal practice in prolonged incidents is to develop interim AARs to ensure that chunks of time are being captured.

Another related aspect is determining what milestone we are using to mark the end of the incident. The vast majority of AARs focus mostly on response, not recovery. This is an unfortunate symptom of the response-centric mentality that persists in emergency management. We obviously should be conducting AARs after the response phase, but we also need to remember to conduct them once the recovery phase is substantially complete. Given that recovery often lasts much longer than the response, we certainly shouldn’t wait until recovery is complete to develop a single AAR for the incident. Rather, we should be developing an AAR, at a minimum, at the substantial completion of response and another at the substantial completion of recovery.

Yet another complication in this discussion is that the timing is going to be different for different organizations. I presently have some clients for which the pandemic is much less of an operational concern than it was a year ago, especially with a vaccinated workforce. So much less of a concern, in fact, that they have largely resumed normal operations, though obviously with the continuation of some precautionary measures. Other organizations, however, are still in a full-blown response, while still others are somewhere in the middle. This means that as time goes on, the pandemic will largely be over for certain organizations and jurisdictions around the world, while others are still consumed by the incident. While the WHO will give the official declaration of the conclusion of the pandemic, it will be over much sooner for a lot of organizations. Organizations should certainly be developing AARs when they feel the incident has substantially ended for them, even though the WHO may not have declared the pandemic to have concluded.

Consider that the main difference between evaluating an exercise and evaluating an incident is that we begin the exercise with the goal of evaluation. As such, evaluation activities are planned and integrated into the exercise, with performance standards identified and staff dedicated to evaluation. While we evaluate our operations for effectiveness during a response and into recovery, we are generally adjusting in real time to this feedback rather than capturing the strengths and opportunities for improvement. Be it during the incident or after, we need to deliberately foster the AAR process not only to capture what was done, but to help chart a path to a more successful future. I’ve been preaching about the value of incident evaluation for several years, and have been thankful to see that FEMA has developed a task book for it.

Given the complexity and duration of the pandemic, I started encouraging organizations to develop interim AARs more than a year ago, and in fact supported a client in developing their initial response AAR just about a year ago. FEMA smartly assembled an ‘Initial Assessment Report’ of their early response activity through September of 2020, though unfortunately I’ve not seen anything since. A question about naming came up in the discussions I had, with some suggesting that the term ‘AAR’ should be reserved for after the incident and a different term used for any other reports. I partially agree. I think we should still call it what it is: even if it’s developed in the midst of an incident, it is still an after-action report, that is, an analysis of actions we’ve taken within a defined period of time. After all, it’s not called an ‘after-incident report’. That said, I do think any AARs developed during the incident warrant some clarification, which can be as simple as adding a descriptor such as ‘interim’ or ‘phase 1, 2, 3, etc.’, or whatever is most suitable. I don’t think we need anything standardized so long as it’s fairly self-explanatory.

Have you already conducted an AAR for the pandemic? Do you expect to do another?

© 2021 Tim Riecker, CEDP

Emergency Preparedness Solutions, LLC®

Incident Management Advisors

It’s frustrating to see poor incident management practices. For years I’ve reviewed plans that have wild org charts supposedly based on the Incident Command System (ICS); conducted advanced-level training with seasoned professionals who still don’t grasp the basic concepts; and conducted and evaluated exercises and participated in incident responses in which people clearly don’t understand how to implement the most foundational aspects of ICS. On a regular basis, especially since people know my focus on the subject, I’m told of incident management practices that range from sad to ridiculous.

Certainly not everyone gets it wrong. I’ve seen plans, met people, and witnessed exercises and incidents in which people clearly understand the concepts of ICS and know how to put them into action. ICS is a machine, but it takes deliberate and constant action to make it work. It has no cruise control or autopilot, either. Sometimes just getting the incident management organization to stay the course is a job unto itself.

If you are new here, I’ve written plenty on the topic. Here’s a few things to get you pointed in the right direction if you want to read more.

ICS Training Sucks. This is a series of related posts that serves as a keystone to so much of what I write about.

The Human Factor of Incident Management. This set of related articles is about how ICS itself isn’t the problem; the problem is how people try to implement it.

As I’ve mentioned in other posts, it’s unrealistic for us to expect most local jurisdictions to assemble and maintain anything close to a formal incident management team. We need, instead, to focus on improving the implementation of foundational ICS concepts at the local level, which means we need better training and related preparedness activities to promote this. Further, we know from good management practices, as well as the long-standing practices of incident management teams, that mentoring is a highly effective means of guiding people down the right path. In many ways, I see that as an underlying responsibility of mine as a consultant. Sometimes clients don’t have the time to get a job done, but often they don’t have the in-house talent. While some consultants may balk at the mere thought of building capability for a client (they are nearsighted enough to think it will put them out of work), the better ones truly have the interests of their clients and the practice of emergency management as a whole in mind.

So what do we mentor in this capacity, and how? First of all, relative to incident management, I’d encourage FEMA to develop a position in the National Qualification System for Incident Management Advisors. Not only should these people be knowledgeable in implementations of ICS and EOC management, but they should also be practiced in broader incident management issues. Perhaps an incident doesn’t need a full incident management team, but instead just one or two people to help the local team get a system and battle rhythm established and maintained. One responsibility I had when recently supporting a jurisdiction during the pandemic was mentoring staff in their roles and advising the organization on incident management in a broader sense. They had some people who handled things quite well, but there was a lot of agreement on the value of having someone focus on implementation. I also did this remotely, demonstrating that it doesn’t have to be done in person.

In preparedness, I think there is similar room for an incident management advisor. Aside from training issues, which I’ve written about at length over the years (of course there will be more!), I think a lot of support is needed in the realm of planning. Perhaps a consultant isn’t needed to write an entire plan, but rather an advisor to ensure that the incident management practices identified in planning documents are sound, consistent with best practices, meet expectations, and can actually be implemented. So much of what I see in planning with regard to incident management has one or more of these errors:

  1. Little mention of incident management beyond the obligatory statement of using NIMS/ICS.
  2. No identification of how the system is activated and/or maintained.
  3. As an extension of #2, no inclusion of guidance or job aids on establishing a battle rhythm, incident management priorities, etc.
  4. An obvious misunderstanding or misapplication of incident management concepts/ICS, such as creating unnecessary or redundant organizational elements or titles, or trying to force concepts that simply don’t apply or make sense.
  5. No thought toward implementation and how the plan will actually be operationalized, not only in practice, but also the training and guidance needed to support it.

In addition to planning, we need to do better at identifying incident management issues during exercises, formulating remedies to address areas for improvement, and actually implementing and following up on those actions. I see far too many After Action Reports (AARs) that softball incident management shortfalls or don’t go into enough detail to actually identify the problem and root cause. The same can be said for many incident AARs.

When it comes to emergency management, and specifically incident management, we can’t expect to improve without being more direct about what needs to be addressed and committing to corrective actions. We can do better. We MUST do better.


©2020 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC®

Learning from the 2009 H1N1 Pandemic Response (Guest Post)

Another great article from Alison Poste. Please be sure to check out her blog – The Afterburn – at www.afterburnblog.com.

I’m looking forward to reading about the adaptations to ICS she references in this article.

-TR

~~

Learning from the 2009 H1N1 Pandemic Response

The ICS model remains a universal command and control standard for crisis response. In contrast to traditional operations-based responses, the COVID-19 pandemic has required a ‘knowledge-based’ framework. 

A fundamental element of ICS is the rapid establishment of a single chain of command. Once established, a basic organization is put in place including the core functions of operations, planning, logistics and finance/administration. In the face of a major incident, there is potential for people and institutions to work at cross purposes. The ICS model avoids this by rapidly integrating people and institutions into a single, integrated response organization preserving the unity of command and span of control. Support to the Incident Commander (the Command Staff) includes a Public Information Officer (PIO), a Liaison Officer and a Safety Officer.

In a study done by Chris Ansell and Ann Keller for the IBM Center for the Business of Government in 2014, the response of the U.S. Centers for Disease Control and Prevention (CDCP) to the 2009 H1N1 pandemic was examined in depth. In examining the response, a number of prior outbreak responses were reviewed. Prior to the widespread adoption of ICS, “the CDCP viewed its emergency operations staff as filling an advisory role rather than a leadership role during the crisis” (Ansell and Keller, 2014). This advisory function was the operating principle of the 2003 SARS outbreak response.

ICS was created to coordinate responses that often extend beyond the boundaries of any individual organization’s capacity to respond. Considering the 2009 H1N1 pandemic response, the authors outline three features that complicated the use of the traditional ICS paradigm:

  • The overall mission in a pandemic response is to create authoritative knowledge rather than to deliver an operational response;
  • The use of specialized knowledge from a wide and dispersed range of sources; and 
  • The use of resources to manage external perceptions of the CDCP’s response.

In response to these unique features, the authors of the study have advocated seven adaptations to the ‘traditional’ ICS structure. These adaptations will be examined in depth in a future post.

Notwithstanding the unique challenges of a ‘knowledge-based’ response, the ‘traditional’ ICS structure is well-equipped to adapt and scale to the needs of any incident. While it is true that a ‘knowledge-based’ response differs from an operational one, this is not inconsistent with the two top priorities of the ICS model: #1: Life Safety and #2: Incident (Pandemic) Stabilization. The objectives of the incident will determine the size of the organization. Secondly, the modular ICS organization is able to rapidly incorporate specialized knowledge and expand/contract as the demands of the incident evolve. Finally, assigning resources to monitor external communications will remain the purview of the PIO as a member of Command Staff.

When the studies are written on the use of ICS in the COVID-19 pandemic, what do you think will be the key take-aways? As always, I’m interested to hear your thoughts and ideas for future topics.

Reference

Ansell, Chris and Ann Keller. 2014. Adapting the Incident Command Model for Knowledge-Based Crises: The Case of the Centers for Disease Control and Prevention. IBM Center for the Business of Government. Retrieved August 16, 2020 from http://www.businessofgovernment.org/sites/default/files/Adapting%20the%20Incident%20Command%20Model%20for%20Knowledge-Based%20Crises.pdf 

It’s Not Too Late To Prepare

The phrase I’ve been using lately when I speak to people has been “It’s not too late to prepare”.  Many people perceive that in the middle of a disaster we are unable to prepare.  Quite the contrary, we have the potential to integrate all of our preparedness steps into a response.  Because we have problems in front of us that need to be addressed, we have an opportunity to continuously improve, ensuring that organizationally we are offering the very best we can. 

There is a reason why there isn’t a mission area for preparedness in the National Preparedness Goal: preparedness is ongoing.  It’s not a separate or distinct activity; rather, it comprises activities that support all mission areas, no matter when they are actioned.  Preparedness is continuous.

Assessment

Assessment is a key activity within preparedness.  In fact, assessment is foundational in understanding what’s going on.  During a disaster, good management practices dictate that we should be monitoring our response and adjusting as needed.  What exactly should we be monitoring?  Similar to evaluating an exercise, consider the following:

  • What was the effectiveness of deliberate planning efforts? 
    • Were planning assumptions correct?
    • Was the concept of operations adequate in scope and detail? 
    • What was lacking?
    • What worked well?
  • What was the effectiveness of plan implementation?
    • If aspects of plan implementation need improvement, what was the reason for the shortfall?
      • A poor plan
      • Lack of job aids
      • Lack of/poor/infrequent training
      • Lack of practice
      • Lack of the proper resources or capabilities
      • The plan wasn’t followed
  • Did resources and capabilities meet needs?  If not, why?

Planning

While some planning gaps will require a longer time period to address, I’m aware of many jurisdictions and organizations that have been developing plans in the midst of the pandemic.  They recognized a need to have a plan and convened people to develop those plans.  While some of the planning is incident-specific, many of the plans can be utilized in the future as well, either in the form they were written or adjusted to make them more generally applicable without the specific details of this pandemic.  I’d certainly suggest that any plans developed during the pandemic be reviewed afterward to identify the same points listed above under ‘Assessment’ before they are potentially included in your organization’s catalogue of plans. Also consider that we should be planning for contingencies, as other incidents are practically inevitable.

Training

Training is another fairly easy and often essential preparedness activity which can be performed in the midst of a disaster.  Many years ago FEMA embraced the concept of training during disasters.  FEMA Joint Field Offices mobilize with training personnel.  These personnel not only provide just-in-time training for new personnel or to introduce new systems and processes, but they also provide continuing training on a variety of topics throughout response and recovery, providing a more knowledgeable workforce.  I’ve seen some EOCs around the country do the same.  Recently, my firm has been contracted to provide remote training for the senior leadership of a jurisdiction on topics such as continuity of operations and multi-agency coordination, which are timely matters for them as they continue to address needs related to the pandemic.

Exercises

While assessments, planning, and training are certainly activities that may take place during a disaster, exercises are probably less likely, but they may, if properly scoped and conducted, still have a place.  Consider that the military constantly conducts what they call battle drills, even in active theaters of war, to ensure that everyone is familiar with plans and protocols and practiced in their implementation.  Thinking back on new plans that are being written in the midst of the pandemic, it’s a good idea to validate each plan with a tabletop exercise.  We know that even the best-written plans will still have gaps that, on a blue-sky day, we would often identify through an exercise.  Plans written in haste during a crisis are even more prone to gaps simply because we probably don’t have the opportunity to think everything through and be as methodical and meticulous as we would like.  A tabletop exercise doesn’t have to be complex or long, but it’s good to do a talk-through of the plan.  Depending on the scope of the plan and the depth of detail (such as a new procedure), conducting a walk-through of the major movements of that plan (that’s a drill) can help ensure the validity of the plan and identify any issues in implementation.

While you aren’t likely to go to the extent of developing an ExPlan, an evaluator handbook, or exercise evaluation guides (yes, that’s totally OK), it’s still good to lay out a page of essential information, including objectives and methodology, since taking the time to write these things down is one more step toward ensuring that you are doing everything you need for the validation to be effective.  Documentation is still important, and while it can be abbreviated, it shouldn’t be cut out entirely.  It’s also extremely important to isolate the exercise, ensuring that everyone is aware that what is being performed or discussed is not yet part of the response activity.  Evaluators should still give you written observations and documented feedback from participants.  You probably don’t need a full AAR, especially since the observations are going to be put into an immediate modification of the plan in question, but the documentation should still be kept together, as there may be some observations to record for further consideration.

Evaluation and After Action

Lastly, incident evaluation is something we shouldn’t be missing.  We learn a lot about incident evaluation from exercise evaluation.   I’ve written on it before, which I encourage you to look at, but the fundamentals are ensuring that all actions and decisions are documented, that a hotwash is conducted (or multiple hotwashes to capture larger numbers of people or people who were engaged in very different functions), and that an after action report is developed.   Any incident should provide a lot of lessons learned for your organization, but the circumstances of a pandemic amplify that considerably.  Ensure that everyone in your organization, at all levels, is capturing observations and lessons learned daily.  Ensure that they are providing context to their observations as well, since once this is over, they may not recall the details needed for a recommendation. You may want to consider putting together a short form for people to capture and organize these observations – essentially identifying the issue, providing context, and putting forth a recommendation to address the issue. Don’t forget to encourage people to also identify best practices.  In the end, remember that if lessons learned aren’t actually applied, nothing will change. 
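If it helps, here’s a minimal sketch of what such a capture form might look like if kept electronically. It’s purely illustrative and based on my own assumptions (a simple record exported to a spreadsheet-friendly file); the field names are mine, not any standard.

```python
from dataclasses import dataclass
from datetime import date
import csv

@dataclass
class Observation:
    """One captured observation from an ongoing incident (illustrative fields only)."""
    observer: str                # who captured the observation
    observed_on: date            # when it was observed (capture daily)
    function: str                # e.g., EOC section, program area, or role
    issue: str                   # what happened (or what worked)
    context: str                 # circumstances needed to understand it later
    recommendation: str          # proposed corrective or sustainment action
    best_practice: bool = False  # flag strengths, not just shortfalls

def export_observations(observations, path):
    """Write captured observations to a CSV that can later feed the AAR process."""
    fields = ["observer", "observed_on", "function", "issue",
              "context", "recommendation", "best_practice"]
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(fields)
        for ob in observations:
            writer.writerow([getattr(ob, name) for name in fields])
```

Whether you use a paper form, a spreadsheet, or something like the sketch above, the point is consistency: everyone captures the same elements, and the results can be compiled for the AAR without having to re-interview people months later.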

I welcome any insight on how we can continue to apply preparedness in the midst of a disaster. 

Be smart, stay safe, stay healthy, and be good to each other. 

©2020 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC

Improving the HSEEP Templates

For years it has bothered me that the templates provided for the Homeland Security Exercise and Evaluation Program (HSEEP) are lacking.  The way the documents are formatted and the lack of some important content areas simply don’t do us any favors.  These templates go back to the origination of HSEEP in the early 2000s, and they have seen little change since then.  It concerns me that the people who developed them struggled with concepts of document structuring and didn’t appreciate the utility of these documents.

I firmly believe that the documents we use in exercise design, conduct, and evaluation should be standardized.  Many of the benefits of standardization that we (should) practice in the Incident Command System (ICS) certainly apply to the world of exercises, especially when we have a variety of different people involved in each of these key phases of exercises, entering at different times.  Much like an incident, some people develop documents while others are users.  Both should be able to count on a measure of standardization so they don’t have to figure out what they are looking at and how to navigate it before actually diving into the content.  That doesn’t mean, however, that standards can’t evolve to increase utility and function.

I’ve written in the past about the dangers of templates.  While they are great guides and reminders of certain information that is needed and give us an established, consistent format in which to organize it, I still see too many people not applying some thinking to templates.  They get lost in plugging their information into the highlighted text areas and lose all sense of practicality about why the document is being developed, who the target audience for the document is, and the information they need to convey. 

Some of my bigger gripes…

  • Larger documents, such as ExPlans, SitMans, Controller/Evaluator Handbooks, and After-Action Reports MUST have a table of contents.  These documents can get lengthy and a TOC simply saves time in finding the section you are looking for. 
  • Some exercises are complex and nuanced.  As such, key documents such as ExPlans, SitMans, and Controller/Evaluator Handbooks must have designated space for identifying and explaining those situations.  These could include multiple exercise sites and site-specific information such as different scopes of play for those sites, limited scopes of participation for some agencies, statements on the flow and execution of the exercise, and more.
  • Recognize that the first section of an EEG (Objective, Core Capability, Capability Target, Critical Tasks, and sources) is the only beneficial part of that document.  The next section for ‘observation notes’ is crap.  Evaluators should be writing up observation statements, an analysis of each observation, and recommendations associated with each observation.  The information provided by evaluators should be easily moved into the AAR.  The EEG simply does not facilitate capturing this information or transmitting it to whoever is writing the AAR.
  • The AAR template, specifically, is riddled with issues.  The structure of the document and the hierarchy of headings are horrible.  The template only calls for documenting observations associated with observed strengths.  That doesn’t fly with me.  There should similarly be an analysis of each observed strength, as well as recommendations.  Yes, strengths can still be improved upon, or at least sustained.  It’s a big missed opportunity not to include recommendations for strengths.  Further, the narrative space for areas of improvement doesn’t include space for recommendations.  I think a narrative of corrective actions is incredibly important, especially given the very limited space in the improvement plan; plus, the improvement plan is simply intended to be an implementation tool of the AAR, so if recommendations aren’t included in the body of the AAR, a lot is missing for those who want to take a deeper dive and see specifically which recommendations correlate to which observations, with an analysis to support them.

Fortunately, strict adherence to the HSEEP templates is not required, so some people do make modifications to accommodate greater function.  So long as the intent of each document and its general organization remain the same, I applaud the effort.  We can achieve better execution while also staying reasonably close to the standardization of the templates.  But why settle for sub-par templates?  I’m hopeful that FEMA’s National Exercise Division will soon take a look at these valuable documents and obtain insight from benchmark practitioners on how to improve them.  Fundamentally, these are good templates, and they have helped further standardization and quality implementation of exercises across the nation.  We should never get so comfortable, though, as to let tools such as these become stagnant, as obsolescence is a regular concern.

I’m interested in hearing what you have done to increase the value and utility of HSEEP templates.  How would you improve these?  What are your pet peeves? 

© 2020 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC®

Emergency Management and Public Safety Should Prepare Like a Sports Team

When and how did a once-annual exercise become the standard for preparedness?  I suppose that’s fine for a whole plan, but most plans can be carved into logical components that can not only be exercised to various degrees, but also supported by training that complements each of those components.  There are a lot of elements and activities associated with preparedness.  Consider how sports teams prepare: they are in a constant yet dynamic state of readiness.

Sports teams will review footage of their opponents playing as well as their own games.  We can equate those to reviews of after-action reports, not only of their own performance, but also of others – and with high frequency.  How well does your organization do with this quiz?

  • Do you develop after action reports from incidents, events, and exercises?
  • Are they reviewed with all staff and stakeholders or just key individuals?
  • Are they reviewed more than once or simply archived?
  • Are improvements tracked and reviewed with staff and stakeholders?
  • Do your staff and stakeholders review after action reports from other incidents around the nation?

Planning is obviously important – it’s the cornerstone of preparedness.  Coaches look at standards of practice in the sport, best practices, and maybe come up with their own innovations.  They examine the capabilities of their players and balance those with the capabilities of the opposing team.  They have a standard play book (plan), but that may be modified based upon the specific opponent they are facing.  Their plans are constantly revisited based upon the results of practices, drills, and games.  Plans let everyone know what their role is.

  • Do your plans consider the capabilities of your organization or jurisdiction?
  • Do they truly include the activities needed to address all hazards?
  • Are your plans examined and updated based upon after action reports from incidents, events, and exercises?
  • Are your plans flexible enough for leadership to call an audible and deviate from the plan if needed?
  • Is your organization agile enough to adapt to changes in plans and audibles? How are ad-hoc changes communicated?

Training is a tool for communicating the plan and specific roles, as well as giving people the knowledge and skills needed to execute those roles with precision.  Sports players study their playbooks.  They may spend time in a classroom environment being trained by coaches on the essential components of plays.  Training needs are identified not only from the playbook, but also from after action reviews.

  • Is your training needs-based?
  • How do you train staff and stakeholders to the plan?
  • What training do you provide to help people staffing each key role to improve their performance?

Lastly, exercises are essential.  In sports there are drills and practices.  Drills are used to hone key skill sets (passing, catching, hitting, and shooting) while practices put those skill sets together.  The frequency of drills and practices for sports teams is astounding.  They recognize that guided repetition builds familiarity with plans and hones the skills they learned.  How well do you think a sports team would perform if they only exercised once a year?  So why do you?

  • What are the essential skill sets your staff and stakeholders should be honing?
  • What is your frequency of exercises?
  • Do your exercises build on each other?

I also want to throw in a nod to communication.  Even if you aren’t a sports fan, go attend a local game.  It could be anything… hockey, baseball, soccer, basketball, football… whatever.  It doesn’t necessarily have to be pro.  Varsity, college, or semi-pro would certainly suffice.  Even if you don’t stay for the whole game, there is a lot you can pick up.  Focus on the communication between and amongst players and coaches.  Depending on where you are sitting, you might not be able to hear or understand what they are saying, but what you will notice is constant communication.  Before plays, between plays, and during plays.  Sometimes that communication isn’t just verbal – it might be the tapping of a hockey stick on the ice, clapping of hands, finger pointing, or a hand wave or other silent signal.  Coaches are constantly talking to each other on the bench and with players, giving direction and encouragement.  There is a lot going on… strategy, tactics, offense, defense.  What lessons can you apply to your organization?

Finally, accomplishments should be celebrated.  In public safety, we tend to ignore a lot of best practices, not only of sports teams but also in general employee relations.  Because of the nature of emergency management and other public safety endeavors, it’s easy to excuse getting stuck in the same rut… we get ready for the next incident, we respond to that incident, and we barely have time to clean up from that incident before the next one comes.  Take a moment to breathe and to celebrate accomplishments.  It’s not only people that need it, but also organizations as a whole.

What lessons can you apply from sports teams to your organization?

© 2019 – Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC℠

FEMA’s 2017 Hurricane Season AAR

A few days ago, FEMA published its after action report (AAR) for the 2017 hurricane season.  Unless you’ve been living under a rock, you probably know that last year was nothing short of devastating.  The major hurricane activity revolved around Hurricane Harvey (Texas), Hurricane Irma (Caribbean/South Atlantic coast), and Hurricane Maria (Caribbean), but domestic response efforts were also significantly dedicated to a rough season of wildfires in California.  While each of these major disasters was bad enough on its own, the overlap of incident operations between them is what was most crippling to the federal response.  Along with these major incidents were the multitude of typical localized incidents that local, state, and some federal resources manage throughout the year.  2017 was a bad year for disasters.  I don’t think any nation could have supported disaster response as well as the US did.

No response is ever perfect, however, and there were certainly plenty of issues associated with last year’s hurricane responses. Politicians and media outlets made issues in Texas and Puerto Rico very apparent.  While some of these issues may rest on the shoulders of FEMA and other federal agencies, state and local governments hold the major responsibility for them.

This FEMA AAR contains good information, perspective, and reflections.  There are a lot of successes and failures to address.  I’m not going to write a review of the entire document, which you can read for yourself, but I will discuss a few big-picture items and highlight a few specifics.

First is the overall organization of the document.  The document is organized through reflection on each of five ‘focus areas’.  I’m not sure why this approach was chosen.  The doctrinal approach would be a reflection on Core Capabilities, as outlined in the National Preparedness Goal.  Some of these focus areas seem to align easily with a Core Capability, such as ‘Sustained Whole Community Logistics Operations’, which gives me reason to wonder why Core Capabilities were not referenced.  While we use Core Capabilities as a standard in exercises, the purpose of their being part of the National Preparedness Goal is so that we have a standard of reference throughout all preparedness activities.  Any AAR – incident, event, or exercise – should bring us back to preparedness activities.

The second issue I have with the document is its focus.  While it’s understood that this is FEMA’s AAR, not a holistic federal government AAR, it’s almost too FEMA-centric.  The essence of emergency management is that emergency management agencies are coordination bodies; as such, most of their work gets accomplished through coordinating with other agencies.  While it’s true that FEMA certainly has a significant workforce and resources, the AAR seems to stop at the inside threshold of FEMA headquarters, without taking the additional step of acknowledging follow-on actions from a FEMA-rooted issue that may involve other agencies.

Among the positive takeaways were some of the planning assumptions outlined in the report.  There is a short list of planning assumptions on page 9, for example, that provides some encouraging comparisons between planning assumptions and reality.  This is a great reminder for local and state plans not only to include numbers and percentages in their planning assumptions, which directly lead to identifying capability and resource gaps, but also to reality-check those numbers after incidents.

Page 10 of the report highlights the success of FEMA’s crisis action planning groups.  These groups identified future issues and developed strategies to address them.  This is actually an adaptation of an underutilized function within the ICS Planning Section to examine potential medium- and long-term issues.

Pages 11 and 12 highlight how Threat and Hazard Identification and Risk Assessment (THIRA) data from states and UASIs can inform response.  It’s encouraging to see preparedness data directly inform response.  I hope this is something that will continue to evolve.

Pages 22 and 23 discuss the staffing issues FEMA had with massive overlapping deployments.  Along with their regular full-time workforce, FEMA also deployed a huge volume of their cadre personnel.  They also tapped into a pilot program called State Supplemental Staffing.  While there were some administrative and bureaucratic difficulties, it seems to have been quite successful.

Overall, this is a good document citing realistic observations and recommendations.  While the document is FEMA-centric, the way of FEMA is the way of emergency management in the US, so it’s always worth keeping an eye on what they are doing, as many of their activities reach state and local governments as well as other federal agencies.

What important concepts jumped out at you?

© 2018 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC™

Hurricane Harvey AAR – Lessons for Us All

Harris County, Texas recently released its After Action Report (AAR) for Hurricane Harvey, which devastated the area last year.  I applaud any AAR released, especially one for an incident of this magnitude.  It requires opening your doors to the world, showing some incredible transparency, and a willingness to discuss your mistakes.  Not only can stakeholders in Harris County learn from this AAR, but I think there are lessons to be learned by everyone in reviewing this document.

First, about making the sausage… The AAR includes an early section on the means and methods used to build the AAR, including some tools provided in the appendix.  Why is this important?  First, it helps build a better context for the AAR and lets you know what was studied, who was included, and how it was pulled together.  Second, it offers a great example for you to use for future incidents.  Developing an AAR for an incident has some significant differences from developing an AAR for an exercise.  Fundamentally, development of an AAR for an exercise begins with design of the exercise and is based upon the objectives identified for that exercise.  For an incident, the areas of evaluation are generally identified after the fact.  These areas of evaluation will focus the evaluation effort and help you cull through the volumes of documentation and stories people will want to tell.  The three focus areas covered in the AAR are Command and Control, Operations, and Mass Care and Sheltering.

Getting into the Harvey AAR itself… My own criticism of the formatting is that while areas for improvement in the AAR follow an Issue/Analysis/Recommendation format, identified strengths get only a sentence or two.  Many AAR writers (for incidents, events, or exercises) think this is adequate, but I do not.  Some measure of written analysis should be provided for each strength, giving it context and describing what worked and why.  I’m also in favor of providing recommendations for identified strengths.  I’m of the opinion that most things, even if done well and within acceptable standards, can be improved upon.  If you adopt this philosophy, however, don’t fall into the trap of simply recommending that practices should continue (i.e., keep doing this).  That’s not a meaningful recommendation.  Instead, consider how the practice can be improved upon or sustained.  Remember, always reflect upon the practices of planning, organizing, equipping, training, and exercising (POETE).

As for the identified areas for improvement in the AAR, the following needs were outlined:

  • Developing a countywide Continuity of Operations Plan
  • Training non-traditional support personnel who may be involved in disaster response operations
  • Transitioning from response to recovery operations in the Emergency Operations Center
  • Working with the City of Houston to address the current Donations Management strategy

If anything, for these reasons alone, the AAR and the attached improvement planning matrix should be reviewed by every jurisdiction.  Many jurisdictions that I encounter simply don’t have the POETE elements in place to be successful in addressing these areas.

What is your biggest take away from this AAR?

© 2018 – Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC™