User Research for AI applications
Artificial intelligence offers exciting new possibilities to solve problems, create experiences and make interacting with technology more personal and more powerful. However, building personalised applications that deliver these delights can be complex. One way of making sure that artificial intelligence works for you and your users is to inform your building, design and strategy with research.
Although most of the rules of good research still apply when you’re working with AI, there are some new challenges, and some useful techniques that can be applied. Here we explain some of the thinking behind the research strategies we use at Dovetailed to build better AI.
AI applications don’t mean throwing away the rule book
Existing principles and strategies of human-centred design are still super important when designing applications with AI. If you are building personalised systems you will still want to understand your users, witness their everyday worlds and routines, and build personas and user journeys to capture and translate these insights into your design and development processes.
Understand what your users think is going on
One of the major differences between a heuristic-based (“if this, then that”) solution and an artificial intelligence is that you are unlikely to have a master plan for how each interaction will turn out. That means your user won’t have one either. If they don’t understand what is going on or how decisions are made, they end up questioning the system, or themselves, and giving up. One way to avoid this is to understand the mental model a user has of your system; it is quite likely that this model will not match your own understanding. This model is the baseline from which your users start their experience, so as a designer or developer you need to understand how they are thinking in order to guide them through an experience with your app, product or service. Interviews and visual mapping techniques can help users describe what they think is going on.
Wizard of Oz
Wireframes are not going to cut it when you want to understand how a user reacts to your personalised AI system. A great technique for simulating an ‘AI’ is simply an ‘I’: getting hidden humans to play the role of an artificial intelligence. This technique is known as Wizard of Oz, and it allows teams to quickly prototype an experience that looks and feels real to the research participant but doesn’t require extensive training, building and testing to make it work. Tone of voice, amount of input, level of formality and how the system presents its working-out can all be explored through simulations with real-world users.
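The core of a Wizard of Oz study is simple plumbing: relay each participant message to a hidden operator, and log the exchange for later analysis. Here is a minimal sketch of that relay in Python; the function names and the canned operator are our illustrations, not a standard tool.

```python
def run_session(participant_messages, operator):
    """Relay each participant message to the hidden operator and log
    the (participant, wizard) exchange for later analysis."""
    log = []
    for message in participant_messages:
        reply = operator(message)  # in a live study, a human types this
        log.append((message, reply))
    return log

# In a live study `operator` would be a hidden human at a keyboard
# (e.g. wrapping input()); a canned stand-in shows the logged shape.
canned = lambda msg: "Noted! Tell me more."
session_log = run_session(["I want a quiet route home"], canned)
```

Because the operator is just a function, the same harness works for a chat window, a voice interface with text-to-speech, or any other channel you want to fake before building the real model.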
You design with two behaviours: that of your user and that of your algorithm
An important benefit of a well-designed, personalised AI system is that it allows a relationship to develop between the human and the technology. It helps to understand this as a relationship between two subjects, each with their own behaviours. Thinking about AI technology as a partner frames your research towards understanding how a user would like to interact with that partner. Do they want advice, support, care, motivation or something else? By understanding the desired dynamic we can help construct a fruitful relationship between humans and technology.
Understand the impacts
AI systems often make trade-offs that conflict with the decision the user would prefer; it is not enough simply to improve efficiency in all areas. Take cycle route planning, for example: we might optimise for speed, safety or avoiding air pollution, but what about the experience of riding by the river on a sunny day, or dropping past an old stomping ground to see how it has changed? Understanding which desires should be catered for requires interaction with your users: bodystorming (physically acting out the experiences) and observing the moments of divergence and flights of fancy. On-the-ground ethnography can be super useful for this.
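The cycle-route example above can be sketched as a weighted cost function. Everything here is illustrative (the routes, attributes and weights are invented): the point is that the weights *are* the trade-off, and research is what tells you that scenery deserves a large negative (i.e. rewarding) weight rather than none at all.

```python
def score(route, weights):
    """Lower is better: a weighted sum of each attribute's cost."""
    return sum(weights[k] * route[k] for k in weights)

# Hypothetical candidate routes with made-up attribute values.
routes = {
    "fastest":   {"minutes": 18, "pollution": 0.9, "scenery": 0.1},
    "riverside": {"minutes": 26, "pollution": 0.3, "scenery": 0.9},
}

# Optimising purely for speed ignores everything but minutes...
speed_only = {"minutes": 1.0, "pollution": 0.0, "scenery": 0.0}

# ...while research-led weights penalise pollution and reward scenery
# (scenery is a benefit, so it enters with a negative weight).
research_led = {"minutes": 0.2, "pollution": 4.0, "scenery": -8.0}

def best(weights):
    """Pick the route with the lowest weighted cost."""
    return min(routes, key=lambda r: score(routes[r], weights))
```

With `speed_only` the planner picks "fastest"; with `research_led` the riverside ride wins, even though it takes eight minutes longer.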
Be involved in defining the data – accessible, diverse and equitable
We are all familiar with the ‘rubbish in, rubbish out’ rule for working with AI. What should also be understood is that training data can, and should, be collaborated on through user research and between all members of the design team. The data you train your algorithm on will transfer its prejudices and biases into the experiences and interactions your user has. By opening up the data collection and collaborating on it with your users, you can help avoid negative experiences. Accessible, diverse and equitable systems come from data sourced from a broad base; engage your users to help you achieve this.
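One concrete way to start that collaboration is a simple representation audit before any training happens: count how much of the dataset each group contributes and put the numbers in front of the whole team. This is a sketch under assumed data (the `age_band` field and sample records are invented for illustration).

```python
from collections import Counter

def representation(samples, key):
    """Share of the dataset contributed by each value of `key`."""
    counts = Counter(s[key] for s in samples)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Invented example records: three younger participants, one older.
data = [
    {"age_band": "18-30"}, {"age_band": "18-30"},
    {"age_band": "18-30"}, {"age_band": "60+"},
]
shares = representation(data, "age_band")
# A skew like this (75% vs 25%) is a prompt to recruit a broader
# base before training, not just a statistic to file away.
```

The audit itself is trivial; the research value is in deciding, with your users, which `key`s matter enough to audit in the first place.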