ChatGPT’s Surprising Self-Awareness Shows the Limits of Generative AI for Strategic Communications
Sometimes the difference between fantasy and nightmare is surprisingly slim.
In a fantasy scenario, you’re lying on a beach, gazing out at the rolling blue water, when you give ChatGPT a few modest instructions and poof! – the organization’s new strategic communications plan is written. It’s got all the key components: a communications audit and needs assessment; a section for media outreach with target lists of reporters; stakeholder mapping aligned with key messaging and audience segmentation. There’s a calendar for implementation, too, layering your themes, activities, and content.
Maybe AI has gone a step further and written drafts for web content, including social posts and images or videos tailored to each channel. Better yet, your briefing materials and some zippy pitch emails are already drafted, sitting in your mail app, ready for you to hit send. Well done, you say to yourself, as you take another slurp of a tiny-umbrella drink.
In the nightmare scenario, everything is the same—generative AI has done all that same heavy lifting—except you’re lying on your couch. And you’re not interacting with ChatGPT, because that’s not your job anymore. The work of a half-dozen people has been done in an instant, and you’re one of the half-dozen who’ve been replaced.
Wake up, Neo…
When you wake up from either scenario, you’ll quickly realize it was a dream. Both should feel implausible. Yes, AI is going to change your job in communications. But no, it’s not going to take your job, because it doesn’t know how to. Someone else could tell it what to do, in theory—but it would be malpractice to outsource the value of a professional communicator to a language model.
Don’t believe us? Let’s ask GPT:
“Instead of replacing jobs, AI is more likely to augment communication roles by automating repetitive tasks, providing data-driven insights, and freeing up human professionals to focus on higher-level tasks that require human creativity, empathy, and strategic thinking.”
Couldn’t have said it better ourselves.
In fact, if you ask ChatGPT why this is the case, it serves up more healthy heaps of self-awareness. Below in bold are four things it says it can’t do, along with some (original, StrategyCorp-written) commentary on how these human qualities are critical to those of us in the communications practice:
- **Communicate Complexity.** The world is not just ones and zeroes. Communications pros are expected to thrive in ambiguous situations. OpenAI highlights this limitation in its post introducing ChatGPT: “Ideally, the model would ask clarifying questions when the user provided an ambiguous query,” they write. “Instead, our current models usually guess what the user intended.” Humans don’t have to guess. We can ask, iterate, and make your plan valuable and workable.
- **Creativity and Innovation.** All generative AI is built, to some degree, on things that have already happened. ChatGPT is a large language model trained on texts published prior to January 2022. Humans, on the other hand, are great at creating and inventing because we can predict what a good new idea might be. We can simulate the future impacts of actions and choices using our brain’s prefrontal cortex. The Harvard psychologist Dan Gilbert calls it our “marvellous adaptation,” to “have experiences in [our] heads before [trying] them out in real life. This is a trick that none of our ancestors could do, and that no other animal can do quite like we can.”
- **The Human Touch.** Comms people are at their most effective when they talk to many other people and seek input from both inside and outside an organization; when they understand an organization’s mission and goals; when they pick up on nuances and trends; when they know, assess, and map stakeholders; and when they situate the communications environment within the organization’s political, social, and strategic context. These things will matter to your audience because no matter how much you rely on AI, your audience is always going to be human (we hope).
- **Ethical and Moral Judgments.** Language models can’t make these calls. To take it a step further, they don’t really want to make tough decisions of any kind. ChatGPT is once again aware that it can’t fully understand the necessary context and that it doesn’t have years of well-honed intuition.
We asked whether our (fictional) company should post an online response after a negative news story was published about its recent layoffs. ChatGPT gave us seven considerations, many of which were valid, but when it was time to decide, it didn’t want to weigh in. And how could it? Some samples:
- “Addressing negative media attention regarding layoffs can be a sensitive and challenging task.”
- “A well-crafted and empathetic response could help mitigate reputational damage, but it’s essential to proceed thoughtfully…”
- “Consider the potential impact of your communication on all stakeholders involved.”
All true, but we asked for help with decision-making.
This is a fitting encapsulation of ChatGPT’s impact right now. Good food for thought, but no strategic lens, no alignment with values, and certainly no application of either to provide clear guidance. In everything from communications planning to issue management, AI can add context, but it will never know how to pull the trigger.
We’re past the early adopter stage, but generative AI is still in its infancy. And while it may seem self-serving to say that today’s AI can’t do our communications jobs, it remains true. Think of it like this: ChatGPT at its best may know how to compose written content, but it doesn’t know when or what to write, why to write it, or for whom.
It’s not hard to find simple tasks for AI—look at how much content it fed this very post—but so long as organizations need strategic communications guidance, both the strategic and guidance parts will still require human intelligence.