Thoughts on using synthetic users for product development
"Thoughts on" articles are about ideas that popped into my head, usually while reading an article or book. They are usually about topics I'm not an expert in, and unvalidated by practice.
TLDR
LLMs can provide information about how specific user groups behaved in the past. Combining a product idea with that knowledge can sharpen the understanding of how the product may help the user. It can suggest features that would make the product more complete for the user, and help identify which features are less important. Best suited for developing MVPs, but it can also be useful for existing products.
Synthetic users are AI-generated personas. They can be asked questions and respond with an approximation of what a real user would say. They come from the UX research space, but while reading this article I had the idea that they may be used for product research and development.
The strength of LLMs is their knowledge and ease of interaction. They encode knowledge of the world, and can easily be chatted with.
This means I can give an LLM a persona (e.g. engineering manager in a software company) and tell it to answer questions as that person would. LLMs have a tendency to revert to the mean, to be average. That tendency actually helps here, because I get the typical behavior of that persona. When talking to real people I'd have to interview quite a few to get that kind of understanding. On the flip side, LLMs are incapable of giving responses specific to a narrow individual (e.g. an engineering manager with a 4-person team at Google).
So, let's say I have an idea for a product, and a rough idea about who may be interested in it. I can then create a synthetic user (AI persona) and ask it about their typical day, their tasks, workflows, and so on. I can find out how an average person does things, and may even ask for multiple ways to achieve the same goal. This is purely information gathering, which LLMs are good at.
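As a minimal sketch of what such an interview setup could look like: the helper below builds a persona-pinned prompt in the common OpenAI-style chat message format, which most LLM APIs accept. The function name, the persona text, and the sample question are all illustrative, not part of any specific library.

```python
# Sketch: constructing a synthetic-user interview prompt.
# The message format follows the widely used OpenAI-style chat
# structure ({"role": ..., "content": ...}); interview_messages
# and the persona text are hypothetical examples.

def interview_messages(persona: str, question: str) -> list[dict]:
    """Build the messages for one synthetic-user interview question.

    The system prompt pins the model to a single, typical persona and
    to factual, past-behavior answers only (the Mom Test rule):
    no predictions, no value judgments.
    """
    system_prompt = (
        f"You are a typical {persona}. "
        "Answer every question as this person would, describing how you "
        "actually work today and how you handled things in the past. "
        "Do not predict your future behavior, and do not judge how "
        "important or annoying a task is."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ]

# A behavioral, information-gathering question -- not an opinion poll.
msgs = interview_messages(
    "engineering manager in a software company",
    "Walk me through how you prepared your last sprint planning, step by step.",
)
```

The resulting `msgs` list would then be sent to whatever chat-completion API is available, and the conversation continued with follow-up questions drilling into specific tasks.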
Based on that information I can judge how the product can help and which features may be useful. Then I can go deeper into these areas and get more details about the specific tasks the product may help with. Using that process I can define (a minimal version of) a product or feature that is more grounded in reality than it would be otherwise.
The key is to stay clear of any kind of value judgments and future predictions. Don’t ask it if a task is annoying, how important it is, how much time it takes, or how the feature and product would change the behavior or workflow. This is where LLMs are even worse than humans.
The core message of the book "The Mom Test" is to ask users about their behavior, about what they did in the past, and to never ask users to predict how they would behave. This is even more critical with LLMs. They will tell you what you want to hear, but by staying factual, by asking how user groups behave, they may provide value.
The use case I'm thinking of for this type of interaction is a software engineer having a product idea. Maybe a side project, maybe something that should become a business. In any case, they want others to use the product. Often they would not do any user research before starting development. By using LLMs it's possible to quickly gather information that may help refine the idea before writing the first line of code.
Note: It goes without saying that contact with real people, either through interviews or by selling the product, is at some point necessary to verify whether the information holds true. But that's a much higher investment, one that is not warranted every time before starting development.