We all love a good story. A gripping plot, twists, turns and surprises are just plain nice to experience. Professional life is no different. Water cooler talks and grumbling after difficult, unproductive meetings are part of our daily routines. We’re more engaged and more motivated if there’s something interesting for us to follow.
These principles apply to reporting insights from research. While we might see value in raw data, it just ain’t gonna hold our attention and motivate us to follow through the way spicy gossip does. Storytelling is a key element of a researcher’s toolkit, and presenting research results should always take it into account.
Yet not all stories are made equal, or, to put it more fairly, not all listeners treat all stories the same. The structure, pacing and emphasis of your data presentation should shift depending on which environment will consume your research products:
- internal — researchers, data governance people, anyone whose primary work duties involve understanding not only the raw data, but also its context and meta-level information
- external — people whom you cannot implicitly expect to have methodological understanding, e.g. business people, designers, marketing, sales, customer support
External data consumption
Let’s first talk about external stakeholders, for it’s more likely you’ll primarily interact with these, or even exclusively so if you’re a team-of-one researcher. You’ll be tempted to “let the data talk for you” and make dry, matter-of-fact reports in order to limit your exposure to criticism and accusations of research incompetence (you’ve been there: “The data says our app is running bad and users are frustrated? Well, maybe there was something wrong with how you set up the recording environment!”). I don’t believe this is the right way of presenting research, no matter how tempting. Concise stories, built around the current business or development context, rich in visualizations — that’s what you’ll win hearts and minds with.
I’ve said before that this external environment is “people whom you cannot implicitly expect to have methodological understanding”. It’s important to remember that this doesn’t preclude them from having that understanding, just that it isn’t in their job description. Do not fall into the trap of explaining things in a way that makes people feel stupid. This most often happens when you provide solitary, out-of-context definitions; that’s when you risk people thinking “Why do you feel you need to explain this to me? Do you think I’m stupid, that I don’t know what these words mean?!”. If you do have to explain something, do so through examples that make sense in the context of your current presentation.
You will face pressure from stakeholders to deliver standardized reports filled with direct, yes-or-no answers, and you will have to deliver them. “Having a seat at the table” and having seniority mean taking responsibility for research products, and that includes making decisions based on them. Do not, however, be tempted to fall into the templatization trap. There’s a crucial difference between building stable expectations and merely filling in text fields. By reducing friction too much you risk creating a cargo cult around metrics like NPS — there’s nothing inherently wrong with such scores, but it’s extremely easy to treat them as the be-all and end-all and discard the context that grants them actual meaning.
Last but definitely not least, prioritize. Some insights will be more interesting, some more mundane, but in the end your work is about providing data for business course correction or a pivot. Judge your conclusions on bottom-line impact and ease of implementation. The latter might seem odd, but in my experience it’s better to push an achievable improvement through than to hit the wall with a super important finding that goes nowhere and never ends up improving the user experience.
Internal data consumption
Let’s move on to reporting research for other researchers and data governance specialists. Storytelling remains an important aspect of your research products; this does not change. In addition, you will need to provide methodological context. Which means what, exactly?
Research should be manageable: easy to search through, possible to replicate, and available for auditing or meta-analysis. Beyond the obvious data, like the participant screener, researcher name, methods used, and in-house lab or remote session context, there are a couple more things you’d want to consider if your organization is research-mature enough to do so (a sketch of how this could all be recorded follows the list):
- Participant characteristics beyond their demographics
- How “easy” they were as participants: did you have to pry comments out of them with a crowbar, or did the conversation flow naturally?
- Did they mention something happening in their life? What was their mental and emotional state that day (to the extent you can know without violating research ethics, of course)? E.g. did they seem defensive or visibly uncomfortable, or did they show a desire to please the researcher and “pass the test”?
- Major socioeconomic, environmental or political events that could have influenced participants on that particular day of research
- Did the researcher follow the research plan, or were corrections necessary during a particular session? What were they, and why did they happen? Were they necessary because of a participant, or was there something wrong with the research process itself? If the latter, how did it change your research methods going forward?
- Where did the research come from? What was the business need, how did it translate into research activity, and what was done based on insights from this study?
- How do the insights from that research session influence the existing knowledge base? Do they confirm existing data, or do they contradict it?
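To make this less abstract, here’s a minimal sketch of what a single session record covering the points above could look like. This is just one possible shape, not a standard; every name in it (`SessionRecord` and all its fields) is hypothetical, and in practice you’d adapt the schema to whatever repository or research ops tooling your organization already uses.

```python
# A hypothetical session record for a research repository.
# Field names are illustrative, not a standard; adapt to your own tooling.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional


@dataclass
class SessionRecord:
    """One research session, with the methodological context
    needed for search, replication, auditing and meta-analysis."""

    # The obvious data
    study_id: str                  # which study this session belongs to
    researcher: str                # who moderated the session
    session_date: date
    methods: list[str]             # e.g. ["moderated usability test"]
    setting: str                   # "in-house lab" or "remote"
    screener_ref: str              # ID or link to the participant screener used

    # Participant context beyond demographics
    participant_traits: list[str] = field(default_factory=list)
    rapport_notes: Optional[str] = None      # crowbar, or natural flow?
    participant_state: Optional[str] = None  # defensive, eager to please, ...

    # Wider context on the day of the session
    external_events: list[str] = field(default_factory=list)

    # Deviations from the research plan, and why they happened
    plan_deviations: list[str] = field(default_factory=list)

    # Provenance and impact
    business_need: Optional[str] = None      # where the research came from
    follow_up_actions: list[str] = field(default_factory=list)
    related_findings: list[str] = field(default_factory=list)  # confirms / contradicts


# Example usage with made-up values:
record = SessionRecord(
    study_id="onboarding-2024-03",
    researcher="J. Doe",
    session_date=date(2024, 3, 14),
    methods=["moderated usability test"],
    setting="remote",
    screener_ref="screeners/onboarding-v2",
    rapport_notes="Conversation flowed naturally after a slow start.",
    external_events=["major service outage in the news that morning"],
)
```

Even a flat record like this keeps the archive searchable and auditable: a future meta-analysis can filter sessions by setting, flag every study with plan deviations, or trace an insight back to the business need that spawned it.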