There’s no doubt that tools like ChatGPT have changed how we work. But in our rush to embrace efficiency, are we sacrificing something more valuable?
Bob and I have been thinking about this a lot lately – especially when it comes to customer interviews and Jobs to be Done research.
Personally, I’m afraid that we’re actually outsourcing thinking.
And I’m not alone in that concern.
The problem with AI-powered interview analysis
When I first saw people on LinkedIn talking about how they used ChatGPT to summarize Jobs to be Done interviews and books, it stopped me in my tracks. The issue isn’t with the tool itself – it’s with how people are using it.
The thing is, AI cannot really summarize a true Jobs to be Done interview, because it’s just going to give you the words. It’s not going to give you the intent, because it can’t actually see the intent.
Bob agrees with me. He points out that these models work by predicting the next most probable word. “It doesn’t actually have logic,” he says. “It’s based on the data that it has at the time.” The result is summaries that might look polished on the surface but miss what matters.
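To make Bob’s point concrete, here’s a deliberately toy sketch in Python. The word pairs and probabilities are invented for illustration (a real model learns patterns from vast amounts of text), but the principle is the same: the model selects likely words, not intended meanings.

```python
# Toy illustration of next-word prediction. The model only knows which
# word is statistically likely to follow; it never sees what the speaker
# actually meant. All probabilities below are made up for this example.

next_word_probs = {
    ("the", "customer"): {"wants": 0.4, "said": 0.3, "journey": 0.2, "struggled": 0.1},
    ("customer", "said"): {"it": 0.5, "the": 0.3, "that": 0.2},
}

def predict_next(prev_two):
    """Return the most probable continuation: frequency, not intent."""
    candidates = next_word_probs.get(prev_two)
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

print(predict_next(("the", "customer")))  # -> wants
```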
You get words, not meaning
The real danger is that AI summaries push everything toward existing belief systems rather than uncovering new insights. They reinforce the assumptions everybody already holds, and they pull the analysis toward the supply side rather than the demand side.
True interview analysis isn’t about counting words or swapping synonyms. It’s about figuring out the intent behind the patterns, and the context in which things were happening. A lot of times people don’t give you the whole context; they give you clues.
That’s where the real thinking happens – putting together a puzzle without the box to see what it looks like. And that’s precisely what AI can’t do.
More efficient, less effective
When someone asks ChatGPT to summarize a book so they don’t have to read it, they might get the gist. But they lose the nuance. The tool might change a word from “easy” to “pleasant,” and suddenly you’ve distorted the original meaning entirely. Those are totally different things.
Bob points to Clayton Christensen’s Disruption Theory as a case in point. “You could see Clay’s struggle towards the end of his life with how people were taking disruption theory and distorting it to fit a context that was not what he was trying to say. I’ll even acknowledge I’m guilty of this myself – we all distort meaning to fit our own narratives.”
If humans already distort meaning through their own interpretations, adding another layer of AI summarization only compounds the problem. If I put Christensen’s book on disruption theory through ChatGPT, I don’t know what it will come up with. But it probably won’t capture his intent.
The same applies to interview transcripts.
Bob saw someone take an interview and break it down into its components – push, pull, anxiety, habit. “But it’s still missing all the details of what they meant by it,” he said. “The raw data itself is sometimes misleading because people use a word over and over again. It seems like, okay, this is a big word – but now it has eight different definitions as opposed to one.”
The question you need to ask yourself
Neither Bob nor I am saying to throw the tool away. What we’re asking is for people to stop and think before they use it.
I want people to ask themselves a few questions before they use it: Why am I using it? What is it going to help me do? If the answer is that it’s going to save me time, I might be okay with that – but you need to unpack how it’s saving you time.
If it’s saving time because you don’t have to think, that’s a problem. If we’re just going to regurgitate stuff, then we stop thinking, and if we stop thinking, we stop humanity.
Bob adds another layer: “What you’re getting back, especially if you’re not an expert in it, you don’t know if it’s any good.”
Something might look like a clear insight at a superficial level, but when you start to unpack it to actually build something, the foundation crumbles.
The real work happens in the debrief
Bob and I both emphasize that the magic of interview analysis happens in the conversation afterwards – the arguing, the clarifying, the unpacking. ChatGPT can help summarize, but it’s not going to connect those dots for you.
“It’s a tool that needs respect,” says Bob. “You need to realize it’s only going to give you so much. It’s only going to regurgitate what’s there. It’s not actually going to help you with new thought.”
The takeaway here isn’t to abandon AI tools altogether. It’s to recognize their limitations and resist the temptation to shortcut the thinking that makes your work valuable in the first place.
We think we’re shortcutting it, but are we actually creating more work for ourselves?
That’s a question worth sitting with – no algorithm required.
Listen to the podcast version of this article, “How to use ChatGPT in interviews.”