At the end of 2018 I began a unique two-month project where I was embedded as a UX Researcher on a team at Microsoft. I worked closely with several designers, a product manager, and two program managers, all sharing a mission to tackle important issues facing two key products in the organization. After many conversations with team members and key stakeholders, and after reviewing previous research to better understand the problem space, I developed and executed two key research studies: one generative and one evaluative.
User-value Ranking was the first study I developed and executed.
The goal of this research study was to understand which future features are most valuable to end users. This information would be used to ensure that existing design efforts align with the features end users desire. Additionally, the findings could help prioritize both design and development efforts. Ultimately, I felt it was important to ensure that the team (focused on this product) understood their end users' needs and desires, and that those needs matched the vision the team had planned out.
A survey was sent to 100 participants through the tool UserTesting. In the survey, participants were given a list of future and existing features (20 in total) and asked to allocate money from a budget ($2,000) to the features they wanted to see implemented or improved first. They were also asked to explain why they chose to allocate money to their top three choices. Lastly, they rated the usefulness of each of the same features.
Primary Research Questions
Which features are the most valuable based on allocation amounts and why? Which features are rated most useful and why?
As the lead researcher focused on this specific track, I worked closely with the design director and product manager to develop the list of future and existing features we wanted to present to our participants. To better understand the order in which participants would rank the features, I had participants allocate money to the features they wanted to see implemented or improved first. Allocation lets participants assign a unique value to each feature rather than weighting every choice equally (e.g., a participant could allocate $1,500 to one feature, $250 to another, and $250 to a third—from this, along with their qualitative answers, it is clear that the participant is prioritizing three features and one in particular). Rating the usefulness of the same features gave us another window into the needs and desires of our participants. With both sets of data (average allocation amounts and average usefulness ratings) I was able to plot the future features on the graph below. Visually, it is clear that one feature stood out to participants as highly valuable.
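The aggregation behind that graph can be sketched in a few lines. This is an illustrative example only—the feature names, dollar amounts, and ratings below are hypothetical, not the actual study data—showing how per-participant allocations and usefulness ratings might be averaged per feature and ranked:

```python
# Hypothetical sketch: average each feature's budget allocation and
# usefulness rating across participant responses, then rank features.
# All feature names and values are illustrative placeholders.

responses = [
    # (feature, allocation in dollars, usefulness rating)
    ("Feature A", 1500, 5),
    ("Feature A", 800, 4),
    ("Feature B", 250, 3),
    ("Feature B", 0, 2),
    ("Feature C", 250, 4),
]

# Sum allocations and ratings per feature, counting responses.
totals = {}
for feature, allocation, usefulness in responses:
    alloc_sum, use_sum, n = totals.get(feature, (0, 0, 0))
    totals[feature] = (alloc_sum + allocation, use_sum + usefulness, n + 1)

# Average allocation and usefulness per feature — the two axes of the plot.
averages = {
    feature: (alloc_sum / n, use_sum / n)
    for feature, (alloc_sum, use_sum, n) in totals.items()
}

# Sort by average allocation to surface the standout feature.
ranked = sorted(averages.items(), key=lambda kv: kv[1][0], reverse=True)
for feature, (avg_alloc, avg_use) in ranked:
    print(f"{feature}: avg allocation ${avg_alloc:.0f}, avg usefulness {avg_use:.1f}")
```

Plotting each feature at (average allocation, average usefulness) makes high-value outliers easy to spot at a glance.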
After analysis, the survey results verified that the vision planned out by the design team and the product manager matched many of the features end users found most valuable and desirable. One highly ranked feature was not in development, so I recommended the team explore adding it in the future. I also recommended further research on future features: feedback from key stakeholders helped us see that a more focused survey is needed to validate existing customer feedback and to ensure participants clearly understand the features they are allocating money toward. (Several future features presented to participants had subtle differences that may not have been easily distinguished.)
Several stakeholders shared their excitement and appreciation for the research.
“This is great to see, especially confirming and reaffirming our commitment to landing our critical work [in this area].”
“Tim, excellent job here – this is a great report. Very excited to see all the improvements that [the team] are leading come to life in the near future. Thank you for the hard work on this.”