Generative AI in EM

We’ve recently seen a significant increase in discussion about applications of artificial intelligence (AI), generative AI more specifically, in the emergency management (EM) space. In just a few minutes of scrolling through my LinkedIn feed this morning, I came across four user posts and one user-posted article expressing caution and concern about the use of AI across a full range of EM-related work, and one post extolling the advantages of using AI in certain EM applications. The cautionary posts raised topics such as disingenuousness in our work, inaccuracies in AI products, and accountability for its use. The post in favor cited ease of use and efficiency as the principal advantages.

AI can certainly be a tool with many applications to support aspects of our job. It can shortcut a lot of activities for us, saving huge amounts of time. It can generate ideas or create an outline to work from. But it cannot reliably and completely replace what we do. I see it as a complementary tool, one that still requires human input, intervention, and review to be successful. In examining the pros and cons, though, we can’t just look at it superficially. There are concerns about information security, intellectual property, inaccuracy, environmental impact, and ethical accountability to consider.

There are concerns about where generative AI platforms source their data. In essence, it can be seen as a type of crowdsourcing, pulling data from across the internet, similar to how we might when doing research. However, generative AI often does not cite its sources and has been heavily criticized by writers and artists for plagiarism. I’ve actually run a few tests of my own, asking a generative AI tool to write about certain niche topics on which I’m one of the few people writing. While it did cite me as a source on a couple of occasions, it typically did not, even though there were clearly word-for-word phrases pulled from my own writing. Additionally, generative AI is not skilled at discerning truth from misinformation or disinformation, potentially leading to significant inaccuracies. On the flip side, anything you input into a public generative AI platform, such as an emergency plan, can become part of that platform’s dataset, bringing potential security concerns into the discussion.

What has me even more concerned is the cognitive impact on those who habitually use AI to do much of their work. MIT conducted a study that concluded that overuse of AI harms critical thinking. Microsoft partnered with Carnegie Mellon on a study that came to similar conclusions. We should also be aware of the environmental impacts of AI data farms, a significant and growing concern around the world.

In regard to the impacts on critical thinking, I have serious concerns about the need to raise the bar on emergency managers’ knowledge, skills, and abilities (KSAs), not just as a matter of advancing the profession, but because of significant gaps we’ve recently seen identified in after-action reports (AARs), media statements, social media posts, and other releases that demonstrate a huge lack of understanding of key concepts among emergency managers. While the use of generative AI may help support the work involved in various projects, I would argue that it is not promoting or advancing individual KSAs in the field of emergency management (aside from those needed to interface with AI). If unplugging an emergency manager from AI tools results in us no longer having a knowledgeable, skilled, and able emergency manager, we have a major problem.

I say all this not ignorant of the fact that I have friends and colleagues who use generative AI to help them develop content, such as their LinkedIn posts. Largely, these individuals have been transparent about their use of generative AI, indicating that they use it either up front, to help provide structure to an idea which they then flesh out on their own, or at the end of their own creative process, to tighten up their work. Overall, I don’t see much detriment in these approaches, and I’ve even acknowledged that my college students may be using it in these ways, providing them guidance that supports successful use while helping them ensure accuracy and avoid any appearance of plagiarism. It’s when people habitually use generative AI to pass off work as their own, with little to no human input, that I have concern. I also have friends and colleagues working with much broader applications of AI, about which I have concerns as well. While those concerns don’t necessarily amount to opposition, as I clearly see the benefits of these tools and uses for what we do in EM, I still see a lot of potential for eroding KSAs and critical thinking in our field, which is something we cannot afford. Yet I remain cautiously optimistic about a net gain.

For those who choose to use it to generate content and outputs, be ethical and transparent about it. There is no shame in using AI; just consider citing it as you would any other source (because it’s not your work), and obviously be aware of the pros and cons of using generative AI. Generative AI is still a developing technology, a toddler perhaps in terms of relative growth, and I think even proponents should be skeptical, as skepticism can help address many of these concerns. Consider that toddlers can be fun, but they can cause absolute chaos and can’t be left unattended for even a moment.

I’m reminded of a saying with its roots in project management that goes something like this: You can have it cheap, fast, or good, but you can only pick two. Here’s what the options look like:

  • Fast and good won’t be cheap.
  • Good and cheap won’t be fast.
  • Fast and cheap won’t be good.

It seems to me that most people using generative AI are trying to pick ‘fast and cheap,’ which accounts for the majority of the quality and integrity concerns raised in this article. But when we look beyond the superficial, at things like the environmental impacts of AI data farms and the cognitive impacts on high-volume users, the end result certainly isn’t cheap, no matter which options we pick.

©2025 Tim Riecker, CEDP

Emergency Preparedness Solutions, LLC®