Crisis and AI: what happens when “the computer did it”

Critical takeaways

  • Artificial Intelligence (AI) offers significant opportunities for crisis communicators, and it is likely to increase the need for communication and empathy in the workplace.
  • However, AI also poses significant challenges: the speed and complexity of the crises these systems could cause will change how communicators need to prepare.
  • AI may pose the clearest requirement for organizations to transform proactively, as the consequences of lagging behind are considerable.

For the past few weeks, I’ve been on a journey to learn more about artificial intelligence (AI) and machine learning (ML), to better understand the implications that these technologies will have on crisis communications and reputational risk. As you probably know – or can imagine – these are huge topics, but I found several great articles online, and I recommend this paper from McKinsey in particular as a great place to get started.

The McKinsey paper takes a broad view of AI and ML as these relate to business and society but I found several elements that are particularly relevant to crisis communicators.

Before I jump into these in detail, I would stress that the AI genie is out of the bottle and it is already transforming markets and workplaces. What we are seeing today is only the beginning, however, and the potential – both good and bad – is immense and needs much more research. So the main thing we need to take away as communicators is that it is a matter of when, not if, we have to deal with the implications of AI.

So let’s start by looking at the upside of AI for communicators.

The positive effects of AI for communicators

The easiest thing to address is what AI is not. AI is logic, not emotion. Therefore, no matter what advances there are in mimicking human emotions, or how successful AI is at the Turing test, these are still systems of logic and they resemble pure IQ.

Communicators, however, are focused on the ethical, emotional aspects of their organization and its interactions with others. Not only does this highlight an area where AI cannot replace a human, but McKinsey notes that the growth of AI will lead to an increased “demand for social and emotional skills such as communication and empathy”, anticipating that these “will grow almost as fast as demand for many advanced technological skills”. Therefore, rather than being a threat to communicators, AI may increase demand for our ethical understanding and EQ skills.

Another area where AI will benefit us is its ability to manage large, complex amounts of data with relative objectivity. When it comes to pattern identification, media monitoring or tracking social media trends, AI offers significant opportunities to better understand the media landscape because:

  • AI and ML systems process information at much faster speeds than humans.
  • These systems have a much higher chance of accuracy and should provide better answers from big data sets than any individual human could. I say ‘should’ here, however, as there is still a significant chance that the underlying algorithm will reflect the biases of its authors or ‘learn’ bad habits from corrupted data.
  • AI and ML systems help automate tasks, reducing the workload on your communications team and allowing them to focus on the tasks better suited to humans.

However, these benefits are still slanted towards IQ and logic. Again, we still need humans to add a layer of EQ and context to the data, so I think it much more likely that we will see something similar to the ‘cobot’ model in manufacturing, where humans and machines work alongside one another.
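To make the pattern-identification point a little more concrete, here is a minimal, purely illustrative sketch in Python. It is not how a real ML monitoring platform works – those are far more sophisticated – and the function name and data are my own invented example. It simply shows the kind of mechanical spike-spotting over mention counts that a machine can automate, leaving the human to interpret what the spike means.

```python
# Illustrative sketch only: flag unusual spikes in daily media-mention
# counts using a simple z-score threshold. A real monitoring system would
# be far more sophisticated; this shows the kind of pattern detection
# that machines can automate for a communications team.
from statistics import mean, stdev

def flag_spikes(daily_mentions, threshold=2.0):
    """Return indices of days whose mention count sits more than
    `threshold` standard deviations above the mean."""
    if len(daily_mentions) < 2:
        return []
    avg = mean(daily_mentions)
    sd = stdev(daily_mentions)
    if sd == 0:
        return []  # perfectly flat data has no spikes
    return [i for i, count in enumerate(daily_mentions)
            if (count - avg) / sd > threshold]

# Example: a quiet week followed by a sudden surge in coverage.
mentions = [12, 9, 14, 11, 10, 13, 160]
print(flag_spikes(mentions))  # the surge on day 6 is flagged: [6]
```

The machine can flag day six in milliseconds; deciding whether that surge is praise, criticism or an emerging crisis is exactly the EQ-and-context layer that still needs a human.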

AI challenges for communicators

Despite all the benefits that AI offers communicators, this technology will pose significant challenges for us and I believe that we need to start taking steps to address these now.

The first challenge AI poses is due to the opaque nature of how these machines work. Even the engineers who design them don’t always know how they learn the lessons they do, and an AI teaching itself can lead to interesting, possibly alarming, outcomes. This will make it difficult for crisis communicators to explain what is happening during a crisis.

Secondly, systems with a heavy reliance on AI may act in ways that we cannot anticipate or understand. These systems may also be difficult to fix or manage when the underlying processes are not fully understood by the operators. This means that AI systems may create issues and critical moments that we cannot predict or prepare for. So while deploying AI may reduce one set of risks, the same system may also create a new series of risks for the organization.

The final issue is that the speed of AI makes it less likely that we will get any meaningful warning of an AI- or ML-generated incident. Therefore, no matter how good our risk assessments or how sophisticated our pattern recognition, events such as a flash crash can occur and rectify themselves in less than an hour, far faster than any organization can react.

What does this mean for communicators?

  • We need to learn about AI in the same way as we’ve had to learn about Y2K, cyber threats and emerging cultural & social issues so we can talk about these issues with knowledge, compassion and empathy for those affected.
  • We need to get to know our AI and ML ‘risk whisperers’ very well as we will be spending a lot of time together.
  • We need to accept that there will be a growing number of instances where all we can say is “I don’t know” or “The AI did it”. Neither of these is a great option, and we will need a better form of words, but the underlying fact will be that we won’t know why the AI did something.
  • We need to maintain the ability to react to a critical moment with speed.

For me, the biggest take-away is that organizations need to take a proactive approach and transform before an event hits, rather than hoping that they can ride out an AI-generated storm.

Kith: Bullish on AI

After a few weeks digging into AI, I’m starting to understand the complexities and see the possibilities of this technology for business in general and communicators specifically. On balance, I’m a huge fan and I’m excited by its implications for the businesses that I invest in and the companies that we work for.

Reassuringly, I don’t believe that the role of the communicator is threatened by AI. In fact, we will need more communicators to add a layer of EQ onto the logical outputs delivered by these machines. We will also see significant benefits because of AI’s ability to work through large data sets, to spot patterns and to automate tasks for which humans are not well suited. Expect media monitoring and pattern recognition to be enhanced significantly.

But AI will also force us to be well-read on the subject and on how it pertains to our business. We will need to accept that “we don’t know, the AI made the decision” might be the only answer we can give at times. And we will also face faster-paced, more damaging crises that can occur and burn out before humans can react.

AI will probably pose as many new risks as it helps to manage, and probably more to start with.

Therefore, we need to be thinking about these risks as well as critical issues around strengthening consumer data and privacy protections. We need to make sure that there is appropriate oversight and governance. We need to be thinking about unintentional biases that may be built into the algorithms that drive AI and the implications these have on people and communities. 

Most of all, AI demands that we rethink the time-scales of normal business and crises. We will have less time to spot what is approaching and we will never be able to react faster than a machine. Proactive crisis management is going to become more difficult. 

Therefore, above all, AI is a call to action. A call to transform our businesses to adapt to the challenges and opportunities of this technology now before a change is forced upon us. 

After all, once we hear “I’m sorry, Dave. I’m afraid I can’t do that”, it’s too late.