For researchers, ChatGPT’s true danger is dullness
By Ross Denton, February 2023
Commentators talk about ChatGPT’s threat to jobs, industries, and homework, but the moment I felt its danger was when I read a social media post from someone who had decided to use ChatGPT to write his wedding speech.
In his excitement at his idea, he posted it in full. It thanked each group of attendees for coming, in a way that was helpful, detailed, and conscientious. It was also incredibly dull. There was no specific reference to his wife or anything that conveyed who he was as a person. Even worse, it wasn’t funny.
For those who are not aware, ChatGPT is an interface that lets you interact with a large language model: a set of algorithms trained on vast quantities of text. Through computation and extensive fine-tuning by humans, these algorithms have become remarkably effective at simulating written speech.
Scarily good, in fact. Market researchers have started posting on LinkedIn and in the trade press about how well ChatGPT drafts surveys or summarises research findings. Some have even imagined that it could start writing their reports better than they do. Existential questions soon follow: how do we outcompete software that can produce sensible copy in ten seconds?
While such navel-gazing makes for good headlines to doom-scroll during a Zoom meeting, I would suggest there is a more pertinent question: how do we stop ChatGPT turning us into dull writers?
As our wedding speech outsourcer may find out, the pleasantries only get you so far. As researchers helping clients make sense of ever-increasing volumes of data, it is our job to infuse our advice with commercial nuance, departmental understanding, and empathy. An algorithmically generated summary of findings cannot replicate that context.
A colleague and I were discussing ChatGPT the other day and floated the idea of using the tool to auto-generate the text for a “market background” slide to set a baseline. This would be our challenge: if this is the polite, considered way to explain what is going on, how can we beat it?

But on further reflection I wonder whether this is corrosive. Could repeatedly seeing generic, wordy explanations prime us to express ourselves anonymously, in platitudes, without thought of the desired impact of our work? The “market background” section is actually important: it enables us to position the project in the real world and consider issues specific to the sector. (And as an aside, how would you feel if someone used ChatGPT to write the intro to their proposal for your project? It doesn’t exactly suggest the agency is investing time in the relationship.)
Instead, I would suggest we take a human-first approach to using ChatGPT. If we’re designing questionnaires or reports, we should first think about what we want to achieve and sketch it out. Then we can use ChatGPT to surface new ideas, expose anything we’ve missed, and even free us from some of the tedious formatting. If we let ChatGPT lead, we will be increasingly influenced by its anonymous tone and will avoid the friction of actually thinking creatively.
Research agencies that use ChatGPT intelligently will gain a competitive advantage, moving more efficiently from ideas to delivery and freeing up time to discuss and think. Those that don’t will be left reading notes written by algorithms designed to avoid challenging anyone. And you can bet the guests will be gossiping in the halls about how long that marriage will last.