Measuring Return on Investment Through Key Performance Indicators

Return on investment (ROI) is generally defined as a measurement of performance to evaluate the value of investments of time, money, and effort. Many aspects of preparedness in emergency management offer challenges when trying to gauge return on investment. Sure, it’s easy to identify that m number of classes were conducted and n number of people were trained, that x number of exercises were conducted with y number of participants, that z number of plans were written, or even that certain equipment was purchased. While those tell us about activity, they don’t tell us about performance, results, or outcomes.

More classes were conducted. So what?

We purchased a generator. So what?

The metrics of these activities are easy to obtain, but these are rather superficial and generally less than meaningful. So how can we obtain a meaningful measure of ROI in emergency preparedness?

ROI is determined differently based on the industry being studied, but fundamentally it comes down to identifying key performance indicators, their value, and how much progress was made toward those key performance indicators. So what are our key performance indicators in preparedness?

FEMA has recently begun linking key performance indicators to the THIRA. The Threat and Hazard Identification and Risk Assessment, when done well, gives us quantitative and qualitative information on the threats and hazards we face and, based upon certain scenarios, the performance measures needed to attain certain goals. This is contextualized and standardized through defined Core Capabilities. When we compare our current capabilities to those needed to meet the identified goals (called capability targets in the THIRA and SPR), we are better able to define the factors that contribute to the gap. The gap is described in terms of capability elements – planning, organizing, equipping, training, and exercising (POETE). In accordance with this, FEMA is now making a more focused effort to collect data on how we are meeting capability targets, which helps us better identify return on investment.
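
To make that arithmetic concrete, here’s a minimal sketch of how a program might express progress toward a capability target as a KPI, broken out by POETE element. This is purely illustrative – the 0-100 scale, the scores, and the field names are my own assumptions, not anything prescribed by FEMA or the THIRA/SPR methodology.

```python
# Hypothetical example: expressing a capability gap and progress toward a
# capability target per POETE element. All names and numbers are invented.

capability_target = {   # desired level per element, on an assumed 0-100 scale
    "planning": 90, "organizing": 80, "equipping": 85,
    "training": 90, "exercising": 75,
}
current_capability = {  # assessed current level per element (hypothetical)
    "planning": 70, "organizing": 60, "equipping": 80,
    "training": 55, "exercising": 50,
}

for element, target in capability_target.items():
    current = current_capability[element]
    gap = target - current                 # the gap to describe and resource
    progress = 100 * current / target      # percent of target attained: a KPI
    print(f"{element:10s} target={target:3d} current={current:3d} "
          f"gap={gap:3d} progress={progress:.0f}%")
```

Tracked across reporting periods, a measure like this tells us whether investments actually moved the needle on a capability target – which is far more meaningful than counting classes conducted or equipment purchased.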

2021 Emergency Management Performance Grant (EMPG) funding requires the collection of data as part of the grant application and progress reports to support FEMA’s ability to measure program effectiveness and investment impacts. FEMA is collecting this information through the EMPG Work Plan. This spreadsheet goes a long way toward helping us better measure preparedness. The Work Plan leads programs to identify, for every funded activity:

  • The need addressed
  • What is expected to be accomplished
  • What the expected impact will be
  • The associated mission areas and Core Capabilities
  • Performance goals and milestones
  • Some of the basic quantitative data I mentioned above

This is a good start, but I’d like to see it go further. They should still be prompting EMPG recipients to directly identify what was actually improved and how. What has the development of a new plan accomplished? What capabilities did a certain training program improve? What areas for improvement were identified from an exercise, what is the corresponding improvement plan, and how will capabilities be improved as a result? The way to get to something more meaningful is to continue asking ‘so what?’ until you come to an answer that really identifies meaningful accomplishments.

EMPG aside, I encourage all emergency management programs to identify their key performance indicators. This is a much more results-oriented approach to managing your program, keeping the program focused on accomplishing meaningful outcomes, not just generating activity. It’s more impactful to report on what was accomplished than what was done. It also gives us more meaningful information to analyze across multiple periods. This type of information isn’t just better for grant reports, but also for your local budgets and even routine reports to upper management and elected officials.

What do you think about FEMA’s new approach with EMPG? What key performance indicators do you use for your programs?

© 2021 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC®

FEMA’s 2020 National Preparedness Report – A Review

It seems an annual tradition for me to be reviewing the National Preparedness Report. I’ve endeavored to provide constructive criticism of these documents, which are compilations of data from state and federal agencies, national-level responses, and other sources.

This year’s National Preparedness Report emphasizes that it is based on data from the 2019 calendar year. In looking back on past reports (note: they are no longer on the FEMA site – I was able to find them in the Homeland Security Digital Library), this has been past practice. Perhaps I never realized it before, but a report discussing data from practically a full year ago seems to hold little relevance. It means that enacting changes on a national level based on this data may not even begin until two years have passed. Even taking into consideration that states and UASIs are compiling their reports early in a year for the previous year, it still seems a long time to wait for the national-level report. This lag is further emphasized by the document’s foreword, written by the FEMA Administrator, which makes many references to COVID-19 and how much different next year’s report will be, while not really speaking at all about the current report. This says a lot about how much we, as a practice, are attracted by the shiny objects dangled in front of us, seemingly ignoring all else.

My first pass of the 2020 report brought two primary impressions: 1) The instructive content of the document is some of the best I’ve seen out of FEMA, and 2) There is a considerable lack of data, with a low value for much of what they have included.

In regard to my first impression, the discussion of concepts such as risk (including emerging risk and systemic risk), capabilities, cascading impacts, community lifelines, public-private partnerships, and vulnerable populations has the perfect level of depth and detail. Not only do they discuss each of these concepts, but they also identify how they connect to each other. This is EXACTLY the kind of consolidation of information we have needed for a long time. It lends itself to truly integrated preparedness and the kinds of information I’ve mentioned many times as being needed, including in the next version of CPG-101. I’m truly impressed with this content, the examples they provide, and how they demonstrate the interconnectedness of it all. I’ll certainly be using this document as a great source of this consolidated information. Now that I’ve extolled my love and adoration for that content, I’m left wondering why it’s in the National Preparedness Report. It’s great content for instructional and doctrinal material on integrated preparedness, but it really has no place, at least to this extent of detail, in the National Preparedness Report. Aside from the few examples they use, there isn’t much value in this format as a report.

This brings me to my next early observation: the report contains very little actual data. Given the extent to which states, territories, UASIs, and other stakeholders provide data to FEMA each year by way of their Threat and Hazard Identification and Risk Assessments (THIRAs) and Stakeholder Preparedness Reviews (SPRs), along with various other sources, this document doesn’t contain a fraction of what is being reported. There are two map products in the entire report: one showing the number of federal disaster declarations for the year, the other showing low-income housing availability across the nation. Given the wide array of information provided by states and UASIs, and compiled by FEMA region, surely there must be some really insightful trends and other analysis to provide. There are a few other data sets included in the report showing either raw numbers or percentages – nothing I would really consider analytics. Much of the data is also presented as a snapshot in time, without any comparison to previous years.

Any attempt to view this document as a timely, meaningful, and relevant report on the current state of preparedness in the nation, much less an examination of preparedness over time, is simply an exercise in frustration. The previous year’s report at least had a section titled ‘findings’, even though any real analysis of data there was largely non-existent. This year’s report doesn’t even feign providing a section on findings. To draw on one consistently frustrating example, I’ll use the Core Capability of housing. While this report dances around doctrine and concepts, and even has a section on housing, it doesn’t address why so little preparedness funding, or even moderate effort, is directed toward the issue of emergency housing, which has arguably been the biggest preparedness gap, for time eternal, in every state of the nation. Looking broadly at all Core Capabilities, this year’s report provides a chart similar to what we’ve seen in previous years’ reports, identifying how much preparedness funding has gone toward each Core Capability. In relative numbers, very little has changed, even though we know that issues like housing, long-term vulnerability reduction, infrastructure systems, and supply chains have huge gaps. All these reports are telling me is that we’re doing the same things over and over again with little meaningful change.

So there it is… while I really am thoroughly impressed with some of the content of the report, much of that content doesn’t have a place in this report (at least to such an extent), and what little data is provided mostly has very little value. The introduction to the document states that “this year’s report is the product of rigorous research, analysis, and input from stakeholders”. To be blunt, I call bullshit on this statement. I expect a report to have data and various analyses of that data, not only telling us what is, but examining why it is. We aren’t getting that. The National Preparedness Report is an annual requirement per the Post-Katrina Emergency Management Reform Act. I contend that FEMA is not meeting the intent of that law with the reports they have been providing. How can we be expected, as a nation, to improve our state of readiness when we aren’t provided with the data needed to support and justify those improvements?

© 2020 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC®

A National Disaster Safety Board

You’ve heard of the National Transportation Safety Board (NTSB), right? If not, the nitty-gritty of it is that they are an independent federal accident investigation agency. They determine the probable cause of the full range of major transportation incidents, typically putting forward safety recommendations. They are granted some specific authorities related to these investigations, such as serving as the lead federal agency to investigate them (absent criminal aspects), and they maintain a schedule of deployment-ready teams for this purpose. They can conduct investigative hearings (ever see the film Sully?) and publish public reports on these matters. Overall, I’ve had positive interactions with NTSB representatives and have found their work to be highly effective.

While certainly related to emergency management, the main purpose for my quick review of the NTSB in this post is to provide a starting point of understanding for Congressional legislation proposing the formation of a National Disaster Safety Board (NDSB). The draft bill for discussion can be found here. This bill has been put forth with bipartisan sponsors in both the US Senate and the House of Representatives.

The purpose of the NDSB, per this bill, is:

  1. To reduce future losses by learning from incidents, including underlying factors.
  2. To provide lessons learned on a national scale.
  3. To review, analyze, and recommend without placing blame.
  4. To identify and make recommendations to address systemic causes of incidents and loss from incidents.
  5. To prioritize efforts that focus on life safety and injury prevention, especially in regard to disproportionately impacted communities.

To execute this mission, the bill provides that the NDSB will have the authority to review incidents with 10 or more fatalities; may self-determine the need for board review of an incident; and shall have the full ability to investigate, review, and report on incidents.

The bill directs the NDSB to coordinate with all levels of government to identify and adopt standard methods of measuring impacts of disasters to provide for consistent trend analysis and comparisons, and to ensure that these standards are uniformly applied. The bill requires the NDSB to coordinate with all levels of government in their investigations during incident responses, and to participate in the incident command system for coordination of efforts as well as investigative purposes. Affected authorities shall have an opportunity to review the NDSB report 30 days prior to publication.

The NDSB will be composed of seven board members, selected by the President from a slate of candidates provided by both houses of Congress, with no more than four board members having affiliation with the same political party, and with all members having technical and/or professional qualifications in emergency management, fire management, EMS, public health, engineering, or social and behavioral sciences.

There is a lot of other legalese and detail in the bill, but I’m happy to find that the language supports coordination among and with federal agencies, including FEMA, NIST, NTSB, and others; and also has an emphasis on investigating impacts to disproportionately impacted communities. The bill also charges the NDSB with conducting special studies as they see fit and providing technical support for the implementation of recommendations.

I’m thrilled with this effort and I’m hopeful the bill progresses to law. We have a history of outstanding research from academic institutions and after-action reports from government entities, all of which should continue, but it’s incredibly significant that the NDSB will establish standards and consistency in how we examine disasters over time. We’ve seen how impactful the NTSB has been since its inception in 1967, and I feel the NDSB could have an even greater impact examining a broader spectrum of disasters. This is an effort which has long been encouraged by various emergency management related groups. The NDSB, I suspect, will also support a stronger and more defined FEMA, as well as strengthening all aspects of emergency management at all levels.

What thoughts do you have on the NDSB? What do you hope will come of it?

© 2020 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC

COVID-19 Vaccine Administration Preparedness

On September 16, the CDC released the COVID-19 Vaccination Program Interim Playbook for Jurisdiction Operations. This document lays out some fairly realistic expectations of jurisdictions (mostly states) for their distribution, administration, and tracking of COVID-19 vaccinations. That said, even though there continue to be many unknowns about the vaccines to be utilized, dosages, timetable of availability, and how and where vaccines will be delivered to states, there are reasonable assumptions that could have been made and high-probability strategies identified, which the CDC failed to do. Instead, as is a hallmark of many poor managers, they provided a punch list of considerably detailed demands but not the essential information and parameters needed to support good planning. Information is everything.

Garbage in/garbage out is a pretty simple concept: poor or missing information feeding a process yields similarly poor outputs. After reviewing New York State’s COVID-19 Vaccination Plan, I found that concept fully demonstrated. Most sections of the plan are vague at best, saying what they will do but not how they will do it. They do identify some roles and responsibilities, but without delineating the boundaries between functionaries. For example: they will utilize pharmacies, local health departments, and state-run facilities, among others, to accomplish public vaccination. This is a solid and expected strategy, but the responsibilities of each are poorly defined for their own operations, much less for how they will or won’t work together. Many concepts in the plan are underdeveloped and, even lacking more defined federal guidance, should have better detail. A big component of vaccination will be community delivery through local health departments, yet this is barely acknowledged. I would have expected this plan to provide guidance and outline preparedness requirements for local health departments, even if those were communicated separately. I acknowledge this is intended to be a strategic-level plan, but it doesn’t seem to consistently provide even that measure of detail. I’m left with a lot of questions. And while it may be petty, the document itself is poorly written and published – I expect better from state government.

I’ve not looked at the plans of other states, but if this is indicative of the general state of things, ‘shit show’ is the phrase that comes to mind. While we will no doubt improve, there is a long way to go, and I think jurisdictions will find themselves in a bind, poorly prepared when they receive notice of an imminent delivery of vaccines with no detailed plan or assigned resources to get the job done. If anything, we have had plenty of time to prepare for vaccination efforts. There are clearly failures at all levels. While communication between and among federal, state, and local jurisdictions has certainly taken place beyond these documents, the standards and measures need to be more apparent.

We need to do better and be better. Reflecting a bit on the piece I wrote yesterday, we need to be thorough and imaginative in our preparedness efforts without excluding possibilities. Local jurisdictions must be prepared to support vaccinations in their communities. As I’ve written before, most health departments simply don’t have the capacity to do this. Jurisdictions need to engage with their health departments for the best guidance possible and work from that. An 80% solution now is better than a 20% solution later. As with any disaster, local communities are the first stakeholder and the last.

What are you seeing from your states? What do you think is missing in our overall efforts?

© 2020 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC®

Preparedness in the Pandemic Age

Planning, training, and exercises, as the foundational activities of preparedness, shouldn’t be stopping because of the pandemic. Preparedness is an ongoing activity which needs to forge ahead with little disruption – and there is always plenty to do! What must we do, though, to accommodate necessary precautions in the age of the Pandemic?

Let’s talk about planning first. The biggest relevant issue for planning is the conduct of stakeholder meetings. These may be larger group meetings to discuss and get buy-in on broader topics, or detailed small-group meetings to discuss very specific topics. Information, sometimes sensitive, is exchanged, presentations are given, and documents are reviewed. I’ve mentioned in various posts through the years the importance of properly preparing for meetings. Even for traditional in-person meetings, there are important things to consider, such as:

  1. Do you really need a meeting?
  2. Developing an agenda
  3. Having the right people in attendance
  4. Ensuring that all speakers and presenters are prepared
  5. Ensuring that all attendees are prepared to discuss the subject matter
  6. An adequate meeting space and support (technology, dry erase boards, etc)

All of these rules still apply in a virtual world, perhaps with even more emphasis. While we’ve obviously had video meeting technology for a long time, we’ve discovered that many people hadn’t used it much, or at all, until earlier this year. The surge in use has also brought attention to the plethora of tools which can be facilitated through video conference platforms. While the simple sharing of video supports most of our meeting needs, we can share screens, conduct presentations, and use collaborative tools such as whiteboards and shared documents. Pretty much everything we do in an in-person meeting can be accomplished through video conference platforms – but those who arrange the calls need to take the time to become familiar with the tools and functionality, and if there is anything that needs to be done by participants (some of whom are likely to be less tech-savvy), you need to be able to coach them through it. Some of these tools require integrations of other technology, such as cloud document storage or various apps. Remember that meetings should be interactive, so encourage people to use the chat to help queue up questions for presenters. If any documents or information are sensitive, be sure you are taking the appropriate precautions with how the meeting is set up, how participants are invited, and how documents are shared.

My tip… read reviews to determine which platform will best suit your needs and watch some tutorials on YouTube.

When it comes to remote training, much of what I mentioned for stakeholder meetings applies here as well. Being interactive is still incredibly important, as is the ability to integrate other technologies, such as videos, PowerPoint, and shared documents. When designing training that will be delivered remotely, if it helps, don’t think about the platform first – think about how you would do the training in person. Would you have breakout sessions for group work? That can be easily accomplished on video conference platforms, but it takes some preparation. Would you put things on a white board or chart paper? That can also be accomplished. Giving an exam? Having participants complete a survey or feedback form? Yes and yes. It can all be done, but preparation is key. Some instructors, especially in public safety, have gotten too used to simply showing up and delivering their material – not because they are lazy, but because they have done it dozens or hundreds of times. They have a routine. If you want participants to get a similar, or perhaps even better, learning experience, some deliberate thought and preparation is required. Also, make sure you don’t simply become a talking head. Break things up and be dynamic. It’s easy for our own demeanor to breed disinterest. I often stand (using a variable-height standing desk) when giving presentations and conducting training. Being on my feet helps me push more energy into what I’m doing.

Tip… remember to give people breaks, just as you would in face-to-face training.

Lastly, exercises. A lot of this is a combination of the information I gave for planning and training. Exercise planning meetings need to be conducted, and every exercise includes some amount of presentation, with discussion-based exercises obviously placing more emphasis on this. To answer the big question – yes, most exercises can be conducted remotely! Discussion-based exercises are generally the lower-hanging fruit, so they can and should be happening remotely. Remember that exercises are supposed to be interactive experiences, so your exercise design absolutely must account for identifying the means and methods of engagement in the virtual environment. All the things I’ve mentioned already are prime options for this, such as breakout groups, shared documents, live polling, etc. Facilitators and evaluators can be assigned to specific breakout rooms or have access to all of them, allowing them to float from room to room.

What about operations-based exercises? Yes, there are options for conducting operations-based exercises remotely. First, we do need to acknowledge the obvious challenges associated with conducting drills and full-scale exercises via remote environments. Is it impossible? No, but it depends on what the focus of the exercise is. Something like a cyber-security or intelligence exercise may be more naturally brought into a virtual environment, depending on the exercise objectives or tasks. Games may be fully integrated into digital platforms already, which helps, but if they aren’t, these may need to be re-imagined and developed in a virtual environment. This can get expensive, so it really needs to be properly thought through. Functional exercises, such as the typical command post exercise or emergency operations center (EOC) exercise, can absolutely be performed virtually. Many jurisdictions successfully ran their EOCs virtually during the height of the pandemic (many still are). If the actual activity can be performed virtually, it can (and should!) be exercised virtually. Again, preparation is key to ensuring that participants can do what they would normally do, while controllers and evaluators still have full access and visibility. Simulation Cells can be virtually integrated, and most EOC management platforms are web-based. With some thought, we can bring most exercises into a virtual environment and still make them effective experiences while also meeting all HSEEP requirements.

Tip… For a virtual functional exercise, unless the time period of your exercise is set after the initial response, consider including an objective for the participants (and the tech support of their agencies, as needed) to set up everything that is needed in real time during the exercise – just like they would in real life. This would include all their video, file share, data tracking, etc. That setup is a considerable challenge of running a virtual EOC. If you don’t want that activity to distract from your exercise, it also makes a great standalone drill. Don’t let it just be tech support personnel, though, as EOC personnel should be expressing their needs.

Remote work environments have helped many organizations overcome challenges associated with the pandemic. Some organizations were better prepared than others to make it happen, but most seem to have achieved effective operational continuity. Hopefully your preparedness programs haven’t stalled out because people feel these activities can’t be done in a virtual environment. We also can’t use the excuse that we’re too busy with the pandemic to be preparing. While some niche organizations might still be quite busy, the pandemic response, for most, has become an integrated job duty for the medium term. We can’t let things fall by the wayside or we will never get back on track. The time is now!

I’d love to hear how you are using tech platforms to support preparedness efforts.

©2020 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC

When the Solution Becomes the Problem

Ever think a problem was fixed, just to find that the solution was really more of a problem, or a totally different kind of problem? While this can certainly happen in our personal lives, I see it happen a lot in my professional life, and I’m sure you do as well. Through my tenure in emergency management, I’ve seen a lot of ill-informed assessments, poorly written plans, misguided training programs, bad hires or contracts, unwise equipment purchases, and exercises that could really be called damaging. Not only is the time, money, and effort put into developing these wasted (aside from learning how not to do them), they can have ramifications that cause problems in the short term or down the road.

Poorly conducted assessments can result in a lot of problems. If the data, the analysis, or the conclusions are wrong, there can be considerable consequences if that assessment was intended to inform other projects, such as plans, construction, hazard mitigation efforts, staffing, and more. I’ve seen people point to reports with something akin to blind obedience, assuming the data was complete, the analysis unbiased, and the conclusions correct. When an assessment is used to justify spending and future efforts, we need to ensure that the assessment is carefully planned and executed. Similarly, we’ve all seen a lot of decisions based on no assessment at all. This can be just as dangerous.

Bad planning is a problem that has always plagued emergency management, and I fear it always will. Of course, there are some really stellar plans out there, but they seem to be the exception. There is an abundance of mediocre plans in existence, which I suppose are passable, but in the end they aren’t doing anyone any favors: while the plans themselves may be fine, they tend not to include much useful information, specifics on procedure, or job aids to support implementation.

Here’s an example of how disruptive bad plans can be: A few years ago, my firm was hired by a UASI to design, conduct, and evaluate a couple of exercises (one discussion-based, the other operations-based) to validate a new plan written for them by another firm. Being that the exercises were to be based on the plan, I took a deep dive into it. I honestly found myself confused as I read. I forwarded the plan to a member of our project team to review and, quite unsolicited, received a litany of communications expressing how confounded he was by it. At the very best, it was unorganized and poorly thought out. The subject matter lent itself to a timeline-based progression, which they seemed to have started then abandoned, resulting in a scattering of topic-based sections that were poorly connected. After conferring with that team member to develop some very specific points, I approached our client for a very candid conversation. I came to find out that the planning process recommended and established by CPG-101, NFPA 1600, and others was not used at all; instead, the firm that built the plan never conferred with stakeholders and delivered (late) a final product with no opportunity for the client to review and provide feedback. This is a firm that gives other consulting firms a bad name. Working with the client, we restructured our scope of work, turning the tabletop exercise into a planning workshop which we used to inform a full re-write of the plan, which we then validated through the operations-based exercise.

Having been involved in training and exercises for the entire duration of my career, I’ve seen a lot of ugly stuff. We’ve all been through training that is an epic waste of time – training that clearly was poorly written, wasn’t written with the intended audience in mind, and/or didn’t meet the need it was supposed to. For the uninitiated, I’ll shamelessly plug my legacy topic of ICS Training Sucks. Possibly even worse is training that teaches people the wrong way to do things. Similarly, poorly designed, conducted, and evaluated exercises are not only a waste of time, but can be very frustrating, or even dangerous. Don’t reinforce negative behavior, don’t make things more complex than they are, don’t put people in danger, and DO follow established guidance and best practices. Finally, if you are venturing into unknown territory, find someone who can help you.

Equipment that’s not needed, that has different capabilities than what is needed, that is overpurchased, that underperforms, that no one is trained on, that is poorly stored and maintained, readily obsolete, and never used. Familiar with any of this? It seems to happen with a lot of agencies. Much of this seems to stem from grant funding that has very specific guidelines and must be spent in a fairly short period of time. Those who have been around for a while will remember the weapons of mass destruction (WMD) preparedness program that started prior to 9/11 and was bolstered by post-9/11 program funding. The centerpiece of this program was equipment purchases. While some good came from this program, I witnessed a lot of wasted money and misguided purchases: equipment that wasn’t needed, bought for jurisdictions that couldn’t use or sustain it, along with the training and exercises needed to teach people how to use the equipment and keep them proficient. A lot of this circles back to poor (or non-existent) assessments used to inform these purchases, but the real culprit here is the ‘spend it or lose it’ mentality of grant surges like this. Foundational aspects of this program, such as defined need, sustainability, and interoperability, were often skewed or ignored in favor of simply spending the funds that were thrust upon jurisdictions. I really blame the poor implementations I saw and heard of at the state and local levels on the poor structuring of this program at the federal level.

There are so many other examples of poor implementations that cause problems: poorly built infrastructure, misguided hazard mitigation projects, and even poor responses. In the realm of response, I’ll draw on another example that I was involved in. Large disasters really do need to draw on a whole-community approach, which often leads to agencies that aren’t used to large-scale and long-duration incident operations getting in over their heads. In one large disaster, I was hired to help lead a team assembled to fix just such an occurrence, charged with rescuing a functionally necessary program that had been managed into the ground by a well-intentioned but overly bureaucratic agency with high degrees of micromanagement. The time, money, and effort exerted to save this program from itself was fairly extensive and, in implementation, challenging, given the layers and nuances created by the agency that built it. In the end, their biggest issues were not listening to subject matter experts, some of whom were in their own agency, and, ultimately, a failure of executives to deal with very apparent problems.

Most emergency management agencies operate on very slim and limited budgets. Being efficient and effective is of great importance. Don’t waste limited money or limited time of limited staff. Sometimes the things with greatest impact are simple, but if executed poorly the consequences can be high. Think things through and consult the right people. It makes a difference.

©2020 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC

The Universal Adversary Mindset

Some of you are probably familiar with the concept of the Universal Adversary (UA). From previous Homeland Security Exercise and Evaluation Program (HSEEP) doctrine, UA is “a fictionalized adversary created by compiling known terrorist motivations, doctrine, tactics, techniques, and procedures in live, virtual, and constructive simulations. The UA is based on real, realistic threats … providing participants with a realistic, capabilities-based opponent.” UA is often executed by a Red Team, which serves as an exercise-controlled opposing force for participants.

Over the past few years, I’ve heard less and less of the Universal Adversary concept. DHS used to have a UA Program supporting terrorism-based prevention and response exercises, dating back to the early 2000s, but lately I’ve neither seen nor heard anything about the continuation of the program or capability. (Can any readers confirm the life or death of this capability?)

Regardless, the concept of UA offers a fair amount of opportunity, not only within the Prevention Mission Area, but across all of exercise design and perhaps other areas of preparedness – yes, even across all hazards. Of course, I recognize the difference between human perpetrators and other hazards, but just stick with me on this journey.

The fact of the matter is that we so often seem to have, as the 9/11 Commission Report made the phrase infamous, a failure of imagination in our preparedness. I’m not saying we need to go wild and crazy, but we do need to think bigger and a bit more creatively – not only in the hazards that threaten us, but also in our strategies to address them.

The UA concept is applied based on a set of known parameters, though even that gives me some concern. In the Prevention world, this means that a Red Team will portray a known force, such as ISIS, based upon real intel and past actions. We all know from seeing mutual fund commercials on TV that past performance does not predict future results. While humans (perpetrators and defenders alike) gravitate toward patterns, these rules can always and at any time be broken. The same can be said for instances of human error or negligence (see the recent and terrible explosion in the Port of Beirut), or in regard to someone who we have a love-hate relationship with… Mother Nature. We need to be ever vigilant of something different occurring.

There is the ever-present debate of scenario-based preparedness vs capability-based preparedness. In my opinion, both are wrong and both are right. The two aren’t, and shouldn’t be, set against each other as if they can’t coexist. That’s one mindset we need to move away from as we venture further into this. We need to continue thinking about credible worst-case scenarios, which will still be informed by previous occurrences of a hazard, where applicable, but we need to keep our minds open and thinking creatively. Fundamentally, just as the UA concept exists to foil and outthink exercise participants, we need to challenge and outthink ourselves across all areas of preparedness and all hazards.

A great example of how we were foiled, yet again, by our traditional thinking is the current Coronavirus pandemic. Practically every pandemic response plan I’ve read got it wrong. Why? Because most pandemic plans were based upon established guidance which emergency managers, public health officials, and the like got in line and followed to the letter, most without thinking twice about it. I’m not being critical of experts who tried to predict the next pandemic – they fell into the same trap most of us do in a hazard analysis – but the guidance for many years has remained fairly rigid. That said, I think the pandemic plans that exist shouldn’t be sent through the shredder completely. The scenarios those plans were based upon are still potentially valid, but Coronavirus, unfortunately, started playing the game in another ball field. We should have been able to anticipate that – especially after the 2003 SARS outbreak, which we pretty much walked away from with ignorant bliss.

It’s not to say that we can anticipate everything and anything thrown at us, but a bit of creativity can go a long way. Re-think and re-frame your hazards. Find a thread and pull it; see where it leads you. Be a little paranoid. Loosen up a bit. Brainstorm. Freeform. Improv. Have a hazard analysis party! (I come darn close to suggesting an adult beverage – take that as you will.) We can apply the same concepts when designing exercises. Consider that in the world of natural hazards, Mother Nature is a Universal Adversary. Any time we hope to have out-thought her, she proves us wrong, and with considerable embarrassment. We also try to out-think the frequent stupidity and negligence of our fellow humans… clearly, we’ve not been able to crack that nut yet.

“Think smarter, not harder” is such an easy thing to say, but often difficult to do. So much of what we do in emergency management is based on traditional practices, most of which have valid roots, but so often we seem reluctant to think beyond those practices. When the media reports that a disaster was unexpected, why the hell wasn’t it expected? Consider that many of our worst disasters are the ones we never thought of. Challenge yourself. Challenge others. It is not in the best interests of this profession or the people we serve to stay stuck in the same modes of thinking. Be progressive. Break the mold. Do better.

© 2020 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC®

Failures in Preparedness

In May the GAO released a report titled “National Preparedness: Additional Actions Needed to Address Gaps in the Nation’s Emergency Management Capabilities”. I encourage everyone to read the report for themselves and also reflect on my commentary from several years of National Preparedness Reports. I’ll summarize all this though… it doesn’t look good. The National Preparedness Reports really tell us little about the state of preparedness across the nation, and this is reinforced by the GAO report as they state “FEMA is taking steps to strengthen the national preparedness system, but has yet to determine what steps are needed to address the nation’s capability gaps across all levels of government”.

First of all, let me be clear about where the responsibility of preparedness lies – EVERYONE. Whole community preparedness is actually a thing. It’s not FEMA’s job to ensure we are prepared. As also made evident in the GAO report (for those who haven’t worked with federal preparedness grants), most preparedness grants are pretty open, and as such, the federal government can’t force everyone to address the most critical capability gaps. Why wouldn’t jurisdictions want to address the most critical capability gaps, though? Here are some of the big reasons:

  • Most or all funding may be used to sustain the employment of emergency management staff, without whom there would be no EM program in that jurisdiction
  • The jurisdiction has prioritized sustaining other core capabilities which they feel are more important
  • The jurisdiction has decided that certain core capabilities are not for them to address (deferring instead to state or federal governments)
  • Shoring up gaps is hard
  • Response is sexier

The GAO report provided some data to show where priorities lie. First, let’s take a look at the spending priorities of grant recipients, as charted in the report.

While crosscutting capabilities (Operational Coordination, Planning, and Public Information and Warning) were consistently the largest expenditures, I would surmise that Operational Coordination was the largest of the three, followed by Planning, with Public Information and Warning coming in last. And I’m pretty confident that while these are crosscutting, they mostly lay within the Response Mission Area. Assuming my predictions are correct, there is fundamentally nothing wrong with this. It offers a lot of bang for the buck, and I’ve certainly spoken pretty consistently about how bad we are at things like Operational Coordination and Planning (despite some opinions to the contrary). Jumping to the end of the book, notice that Recovery mission area spending accounts for 1% of the total. This seems like a poor choice considering that three of the five lowest-rated capabilities are in the Recovery mission area. Check out the table also provided in the GAO report.

Through at least a few of these years, Cybersecurity has been flagged as a priority by DHS/FEMA, yet clearly, we’ve not made any progress on that front. Our preparedness for Housing recovery has always been abysmal, yet we haven’t made any progress on that either. I suspect that those are two areas, specifically, that many jurisdictions feel are the responsibility of state and federal government.

Back in March of 2011, the GAO recommended that FEMA complete a national preparedness assessment of capability gaps at each level of government, based on tiered, capability-specific performance objectives, to enable prioritization of grant funding. This recommendation has not yet been implemented. While not entirely the fault of FEMA, we do need to reimagine the national preparedness system. While the current system is sound in concept, implementation falls considerably short.

First, we do need a better means of measuring preparedness. It’s difficult – I fully acknowledge that. And for as objective as we try to make it, there is a vast amount of subjectivity to it. But I do know that, in the end, I shouldn’t find myself shaking my head or even laughing at the findings identified in the National Preparedness Report, knowing that some of the information there can’t possibly be accurate.

I don’t have all the answers on how we should measure preparedness, but I know this… it’s different for different levels of government. A few thoughts:

  • While preparedness is a shared responsibility, I don’t expect a small town to definitively have the answers for disaster housing or cybersecurity. We need to acknowledge that some jurisdictions simply don’t have the resources to make independent progress on certain capabilities. Does this mean they have no responsibility for it? No. Absolutely not. But the current structure of the THIRA, while allowing for some flexibility, doesn’t directly account for a shared responsibility.
  • Further, while every jurisdiction completing a THIRA is identifying their own capability targets, I’d like to see benchmarks established for them to strive for. This provides jurisdictions with both internal and external definitions of success. It also allows them an out, to a certain extent, on certain core capabilities that have a shared responsibility. Even a small town can make some progress on preparedness for disaster housing, such as site selection, estimating needs, and identifying code requirements (pro tip… these are required elements of hazard mitigation plans).
  • Lastly, we need to recognize that it’s difficult to measure things when they aren’t the same or aren’t being measured the same way. Sure, we can provide a defined core capability, but when everyone has a different perspective on and expectation of that core capability and how it should be measured, we aren’t getting answers we can really compare. Everyone knows what a house is, but there is a considerable difference between a double-wide and a McMansion. Nothing wrong with either of them, but the differences give us very different baselines to work from. Further, if we need to identify how big a house is and one person measures the length and width of the building, someone else measures the livable square footage of a different building, and a third person counts the number of floors of yet another house, we may all have correct answers, but we can’t really compare any of them. We need to figure out how to allow jurisdictions to contextualize their own needs while still playing the same game.

In regard to implementation, funding is obviously a big piece. Thoughts on this:

  • I think states and UASIs need to take a lot of the burden. While I certainly agree that considerable funding needs to be allocated to personnel, this needs to be balanced with sustaining certain higher tier capabilities and closing critical gaps. Easier said than done, but much of this begins with grant language and recognition that one grant may not fit all the needs.
  • FEMA has long been issuing various preparedness grants to support targeted needs and should not only continue to do so, but expand on this program. Targeted grants should be much stricter in establishing expectations for what will be accomplished with the grant funds.
  • Collaboration is also important. Shared responsibility, whole community, etc. Many grants have suggested or recommended collaboration through the years, but rarely has it actually been required. Certain capabilities develop far better when collaboration is actually realized, including with the private sector, NGOs, and the federal government. Let’s require more of it.
  • Instead of spreading money far and wide, let’s establish specific communities of practice to essentially act as model programs. For a certain priority, allocate funds for a grant opportunity with enough to fund 3-5 initiatives in the nation. Give 2-3 years for these programs to identify and test solutions. These should be rigorously documented so the information can be analyzed and the efforts potentially replicated, so I suggest that academic institutions also be involved as part of the collaborative effort (see the previous bullet). Once each of the grantees has completed their projects, host a symposium to compare and contrast, and identify best practices. Final recommendations can be used to benchmark other programs around the nation. Once we have a model, future funding can be allocated to support implementation of that model in other areas around the nation. Having worked with the National Academies of Sciences, Engineering, and Medicine, I think they may be an ideal organization to spearhead the research component of such programs.
  • Recognize that preparedness isn’t just long term, it’s perpetual. While certain priorities will change, the goals remain fundamentally the same. We are in this for the long haul and we need to engage with that in mind. Strategies such as the one in the previous bullet point lend themselves to long-term identification of issues, exploration of solutions, and implementation of best practices.
  • Perhaps in summary of all of this, while every jurisdiction has unique needs, grant programs can’t be so open as to allow every grantee to have a wholly unique approach to things. It feels like most grant programs now are simply something thrown at a wall – some of it sticks, some of it falls right off, some might not even make it to the wall, some slowly drips off the wall, and some dries on permanently. We need consistency. Not necessarily uniformity, but if standards are established to provide a foundational 75% solution, with the rest open for local customization, that may be a good way to tackle a lot of problems.

In the end, while FEMA is the implementing agency, the emergency management community needs to work with them to identify how best to measure preparedness across all levels and how we can best implement preparedness programs. Over the past few years, FEMA has been very open in developing programs for the emergency management community and I hope this is a problem they realize they can’t tackle on their own. They need representatives from across the practice to help chart a way ahead. This will ensure that considerations and perspectives from all stakeholder groups are addressed. Preparedness isn’t a FEMA problem, it’s an emergency management problem. Let’s help them help us.

What thoughts do you have on preparedness? How should we measure it? What are the strengths and areas for improvement for funding? Do you have an ideal model in mind?

© 2020 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC®

NIMS Guidance – Resource Management Preparedness

Last week FEMA opened a national engagement period for updated NIMS guidance on resource management preparedness. This is the first version of such a document, with most material on the subject matter, to date, being included in the NIMS doctrine and a few other locations. I regularly participate in national engagement periods and encourage others to do so, as I think it’s a great opportunity for practitioners and subject matter experts to provide input.

Some observations:

  1. The footer of the document states that it’s not for public distribution. I’m guessing that was an error.
  2. The phrase ‘resource management preparedness’ rubs me the wrong way. While I understand that there are resource management activities that take place within the preparedness phase of emergency management, we’re not preparing to manage resources. All the activities outlined in the document are actually part of resource management. If they want to put a time stamp on this set of activities, they can refer to them as ‘pre-incident’, but inventorying, typing, etc. are all actually part of the resource management cycle.
  3. I’d prefer to see a comprehensive NIMS Resource Management guide that addresses all aspects of resource management. Considering that resource management is a cycle, let’s actually cover the entire cycle. I think there will be far more value in that. Hopefully that’s eventually where this will go.
  4. The document is too stuck in NIMS. What do I mean by this? More and more people seem to forget that NIMS is a doctrinal component of incident management. While the document is focused on NIMS, it would have greater value if it addressed pre-incident resource management activities that might not be found in the NIMS doctrine (though some are), but are nonetheless best practices in resource management. Many of these practices begin pre-incident.
  • One of the biggest things is resource tracking. Yes, resource tracking is a concept found in NIMS, but it’s not at all addressed here. How many jurisdictions struggle to figure out how to track resources in the middle of an incident? (Answer: most of them.) The best time to figure out the means and methods of tracking resources is before an incident ever occurs. Resource tracking has a fair amount of complexity, involving the identification of what will be tracked, how, and by whom, as well as how changes in resource status are communicated. Data visualization and dashboarding are also big. People want to see maps of where major resources are, charts that depict utilization, and summaries of resource status. All things best determined before an incident. (A minimal sketch of this kind of structured resource data follows this list.)
  • Resource inventories should identify operating requirements, such as maintenance and service. This is vaguely referenced in the guidance, but not well. Before any resource is deployed, you damn well better have the ability to operate and support that resource, otherwise it’s nothing more than a really large expensive paperweight. Do you only have one operator for that piece of equipment? That’s a severe limitation. All things to figure out before an incident.
  • How will resource utilization be tracked? This is important for cost controls and FEMA reimbursement. Figure that one out now.
  • What consumables are stockpiled or will be needed? What is the burn rate on those under various scenarios? (We’ve learned a lot about this in the pandemic)
  • What about resource security? When it’s not being used where and how will it be secured? What if the resource is left unattended? I have a great anecdote I often tell about a portable generator used in the aftermath of a devastating snow storm to power the traffic lights at a critical intersection. The maintenance crew doing their rounds found it to be missing, with the chain cut. Luckily the state’s stockpile manager had GPS trackers on all of them. It was located and re-acquired in little time, and the perpetrators charged. This success was due to pre-incident activity.
  • Resource ordering processes must also be established. What are the similarities and differences in the process between mutual aid, rental, leasing, or purchasing? What are your emergency procurement regulations and how are they implemented? How are the various steps in the ordering process assigned and tracked? This is highly complex and needs to be figured out before an incident.
  5. Resource typing. I honestly think this is the biggest push in emergency management that isn’t happening (perhaps second only to credentialing). Resource typing has been around for a long time, yet very, very few jurisdictions I’ve worked with or otherwise interacted with have done it and done it well. I find that most have either not done it at all, started and gave up, or have done it rather poorly. I’ve been involved in resource typing efforts. It’s tough and tedious. I’ve done it for resources that weren’t yet typed at the national level, leaving agencies and jurisdictions to define their own typing scheme. This can literally devolve into some heated discussions, particularly fueled by the heavy customization we tend to do with resources as technology evolves, which allows resources that fundamentally appear to have similar capabilities to be, in reality, quite different. I’ve also done it for resources that have been typed at the national level. This certainly helps, as you aren’t first having to figure out your own thresholds, but it can still be challenging to pigeonhole resources that, again, may be heavily customized and don’t cleanly fit within a certain pre-defined category. It’s even more frustrating to have developed your own typing scheme in the absence of a national one, only to have national guidance issued a couple of years later, sending you back to those discussions.
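
Since several of the items above come down to structured data that should exist before an incident, here’s a minimal sketch of what that might look like – a typed inventory record, a status field for tracking, and a burn-rate estimate for a consumable. Every field name, category, and number here is my own illustrative assumption; none of it is prescribed by NIMS or the draft guidance.

```python
# Hypothetical pre-incident resource data. Field names and values are
# invented for illustration; NIMS defines the concepts, not this schema.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    kind: str        # NIMS 'kind', e.g. equipment, personnel, team
    rtype: int       # NIMS 'type' within the kind (1 = greatest capability)
    status: str      # e.g. available / assigned / out-of-service
    operators: int   # trained operators on hand; only one is a severe limitation

generator = Resource("Portable generator #3", kind="equipment", rtype=2,
                     status="available", operators=1)

# Tracking is largely disciplined status updates once fields like these exist:
generator.status = "assigned"    # e.g. deployed to power traffic signals

# Burn-rate arithmetic for a stockpiled consumable under a given scenario:
stockpile_units = 20_000         # e.g. respirators on hand (hypothetical)
burn_rate_per_day = 1_500        # scenario-specific consumption estimate
days_of_supply = stockpile_units / burn_rate_per_day
print(f"{days_of_supply:.1f} days of supply before resupply is needed")
```

None of this is complicated, which is rather the point: deciding the fields, the status vocabulary, and the burn-rate assumptions is the pre-incident work, and it’s far easier to do before resources are moving.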

I’m not saying resource typing is bad; in fact, the benefits, both internally and externally, can be incredibly helpful. That said, it’s a time-consuming effort that, in the broader sense of the limited time and other assets available to most emergency managers, is perceived to pay a lesser dividend than other activities such as developing and updating plans, training people on the implementation of those plans, and exercising those plans. It can also be difficult convincing agencies that it should be done. I can’t tell you how many times I get the response of ‘We know what we have’. I know that’s not the point, but that’s how the effort of typing resources is perceived. Even after some explanation of the benefits, most agencies (and I think rightfully so) would rather invest their time and effort into preparedness activities that are seen as more beneficial. It leaves me wondering… is there a better way?

While it’s good to see information on the topic of early resource management steps collated into one document, along with some resources and references I’ve not seen before, this document is missing a lot. I just wrote last night about emergency managers being our own worst enemy. If we are focused solely on implementing NIMS, we will absolutely fail. NIMS is not the end-all, be-all of incident management, but it is fundamentally promoted as such. Yes, the concepts of NIMS are all incredibly important, born of lessons learned and best practices identified through decades of incident management experience. But the documents related to NIMS seem to pick and choose what they will focus on, leaving out things that are highly critical. Perhaps some of these will be covered in future editions of resource management guidance, but they aren’t doing anyone any favors by omitting them from a document on pre-incident activity. We need to think more broadly and more comprehensively. We need to do better.

What are your observations on this document? What feedback do you have on my observations?

© 2020 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC®

It’s Not Too Late To Prepare

The phrase I’ve been using lately when I speak to people is “It’s not too late to prepare.” Many people perceive that in the middle of a disaster we are unable to prepare. Quite the contrary: we have the potential to integrate all of our preparedness steps into a response. Because we have problems in front of us that need to be addressed, we have an opportunity to continuously improve, ensuring that, organizationally, we are offering the very best we can.

There is a reason there isn’t a mission area for preparedness in the National Preparedness Goal: preparedness is ongoing. It’s not a separate or distinct activity. Rather, it comprises activities that support all mission areas, no matter when they are actioned. Preparedness is continuous.

Assessment

Assessment is a key activity within preparedness. In fact, assessment is foundational to understanding what’s going on. During a disaster, good management practice dictates that we monitor our response and adjust as needed. What exactly should we be monitoring? Similar to evaluating an exercise, consider the following:

  • What was the effectiveness of deliberate planning efforts? 
    • Were planning assumptions correct?
    • Was the concept of operations adequate in scope and detail? 
    • What was lacking?
    • What worked well?
  • What was the effectiveness of plan implementation?
    • If aspects of plan implementation need improvement, what was the reason for the shortfall?
      • A poor plan
      • Lack of job aids
      • Lack of/poor/infrequent training
      • Lack of practice
      • Lack of the proper resources or capabilities
      • The plan wasn’t followed
  • Did resources and capabilities meet needs?  If not, why?

Planning

While some planning gaps will require a longer time period to address, I’m aware of many jurisdictions and organizations that have been developing plans in the midst of the pandemic. They recognized a need to have a plan and convened people to develop those plans. While some of the planning is incident-specific, many of the plans can be utilized in the future as well, either in the form they were written or adjusted to make them more generally applicable without the specific details of this pandemic. I’d certainly suggest that any plans developed during the pandemic be reviewed afterward against the same points listed above under ‘assessment’ before they are included in your organization’s catalogue of plans. Also consider that we should be planning for contingencies, as other incidents are practically inevitable.

Training

Training is another fairly easy and often essential preparedness activity which can be performed in the midst of a disaster. Many years ago, FEMA embraced the concept of training during disasters. FEMA Joint Field Offices mobilize with training personnel. These personnel not only provide just-in-time training for new personnel or to introduce new systems and processes, but also provide continuing training on a variety of topics throughout response and recovery, producing a more knowledgeable workforce. I’ve seen some EOCs around the country do the same. Recently, my firm was contracted to provide remote training for the senior leadership of a jurisdiction on topics such as continuity of operations and multi-agency coordination, which are timely matters for them as they continue to address needs related to the pandemic.

Exercises

While assessments, planning, and training are certainly activities that may take place during a disaster, exercises are probably less likely, but they may, if properly scoped and conducted, still have a place. Consider that the military constantly conducts what they call battle drills, even in active theaters of war, to ensure that everyone is familiar with plans and protocols and practiced in their implementation. Thinking back on new plans being written in the midst of the pandemic, it’s a good idea to validate those plans with a tabletop exercise. We know that even the best-written plans will have gaps that, on a blue-sky day, we would often identify through an exercise. Plans written in haste during a crisis are even more prone to gaps, simply because we probably don’t have the opportunity to think everything through and be as methodical and meticulous as we would like. A tabletop exercise doesn’t have to be complex or long, but it’s good to do a talk-through of the plan. Depending on the scope of the plan and the depth of detail (such as a new procedure), conducting a walk-through of the major movements of that plan (that’s a drill) can help validate the plan and identify any issues in implementation.

While you aren’t likely to go to the extent of developing an ExPlan, an evaluator handbook, or exercise evaluation guides (yes, that’s totally OK), it’s still good to lay out a page of essential information, including objectives and methodology, since taking the time to write these things down is one more step toward ensuring the validation is effective. Documentation is still important, and while it can be abbreviated, it shouldn’t be cut out entirely. It’s also extremely important to isolate the exercise, ensuring that everyone is aware that what is being performed or discussed is not yet part of the response activity. Evaluators should still give you written observations and documented feedback from participants. You probably don’t need a full AAR, especially since the observations are going to be put into an immediate modification of the plan in question, but the documentation should still be kept together, as there may be some observations to record for further consideration.

Evaluation and After Action

Lastly, incident evaluation is something we shouldn’t be missing. We learn a lot about incident evaluation from exercise evaluation. I’ve written on this before, and I encourage you to look at that piece, but the fundamentals are ensuring that all actions and decisions are documented, that a hotwash is conducted (or multiple hotwashes, to capture larger numbers of people or people who were engaged in very different functions), and that an after action report is developed. Any incident should provide a lot of lessons learned for your organization, but the circumstances of a pandemic amplify that considerably. Ensure that everyone in your organization, at all levels, is capturing observations and lessons learned daily. Ensure that they are providing context to their observations as well, since once this is over, they may not recall the details needed for a recommendation. You may want to consider putting together a short form for people to capture and organize these observations – essentially identifying the issue, providing context, and putting forth a recommendation to address it. Don’t forget to encourage people to also identify best practices. In the end, remember that if lessons learned aren’t actually applied, nothing will change.
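
As a rough illustration of what that short form might capture, here is a minimal sketch in Python for organizations tracking observations digitally rather than on paper. The field names are my own suggestion, not a prescribed standard, and the example entry is hypothetical:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Observation:
    """One observation captured during the incident (hypothetical field names)."""
    observer: str                  # who made the observation
    function: str                  # the role or function they were performing
    issue: str                     # what was observed
    context: str                   # circumstances needed to make sense of it later
    recommendation: str = ""       # suggested action to address the issue
    best_practice: bool = False    # flag things worth sustaining, not just fixing
    observed_on: date = field(default_factory=date.today)

# Hypothetical example: a daily entry from an EOC logistics lead
entry = Observation(
    observer="J. Smith",
    function="EOC Logistics",
    issue="PPE consumption exceeded the forecast",
    context="Forecast assumed single shifts; operations ran around the clock",
    recommendation="Base consumable forecasts on 24-hour operational periods",
)
```

Whether it’s a paper form, a spreadsheet, or something like the sketch above, the point is the same: capture the issue, its context, and a recommendation while memories are fresh.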

I welcome any insight on how we can continue to apply preparedness in the midst of a disaster. 

Be smart, stay safe, stay healthy, and be good to each other. 

©2020 Timothy Riecker, CEDP

Emergency Preparedness Solutions, LLC