
A line in the sand

How do we, as UX researchers, feel about the irony currently unfolding in our discipline? AI platform CEOs and proselytisers promise a revolution in how companies engage with their users, yet the more we use the tools they promote, the more we discover that - maybe to our surprise? - the human element has never been more critical.

Automating donkey work, sure, there’s great value in that (up to a point - more on that in a second), but the judgement, the contextual understanding, the nuance, whatever you want to call it, is miles away from what emerges from the probability engines. Generative AI as a replacement for real people? Synthetic Users is a wonderful bit of marketing attached to a seductive front-end, but even they admit, albeit buried within the marketing copy, that it’s mostly there as a (very expensive) framing device, not a source of the genuine insight you’ll need. There is no new, lived experience being observed here. No mums on three hours’ sleep swearing at a fiddly interaction. No man on a rural wifi connection with failing sight who’s viewing your web app at 300% zoom.

But, yeah. Automation. Creating drafts, or prompts for interesting framing devices, is where I find ChatGPT/Claude deliver entirely new value. Basic analysis and open-ended coding can be sped up. And maybe some of us are running AI moderators for more established problems and finding that their results stack up. But in the main, it feels like we’re all learning that the results don’t.

Yet here’s the actual irony. If these issues are what we’ve been observing and experiencing, they’re not being heard. Or if they are, it’s all couched in terms of our new role being ‘strategic oversight’, where the bulk of the research activity has been automated to leave us more time to concentrate on the higher-value aspects of the job. So either the AI does it all, or we only need 10% of the researchers to do all the work.

But the inaccuracy of what the models produce, allied to the fact that in order to deliver insight you do need to embody the problem, means that, well, I’m not seeing a hell of a lot of time saving out of all this. If you don’t know your research data inside out, you can’t spot when an AI tool produces something plausible but wrong. Knowing how to deliver value from qualitative research - interviews that take you past the generic - remains a stand-out skill, especially if it becomes another space in which we need to evaluate the work of our LLM colleagues. So you’ve still got to put in the hours.

I don’t think I’m being a luddite - or at least, not too much of one. I recognise that the future of our discipline lies in our relationship with AI models, and that we’re only just starting to design that relationship. I also recognise that there’s a huge chunk of work to be done researching generative AI-driven experiences, too - we’re not just studying users interacting with static products; these experiences evolve and learn over time, which could lend itself to AI-driven studies.

But to be too BML - too build-measure-learn - about this feels risky. Something deep in my bones wants to make sure that the learning happens through prototyping, in more controlled circumstances, rather than on the general population. This means more co-design workshops, not fewer. More human involvement, not less.

The cognitive work of connecting business goals to research findings, of thinking through implications while deep in the data, will stay human. Shortcuts in this thinking process don’t reduce billable hours - they reduce the quality of insights.
