I've been a User Researcher at Liberty Mutual for more than 2 years. I've moderated and observed hundreds of research sessions for fast-paced agile squads. As a result, I've learned a great deal about the research process and team collaboration.
In my current role, I work with 12 different agile teams that design, develop, and implement projects in quick 2-3 week sprint cycles. That makes prioritizing projects essential.
Earlier in my career, I fell into a scheduling trap: I accepted research projects in the order they were requested. It didn't work. High-priority projects were postponed or squeezed in between two studies, creating back-to-back, all-day work. Teams I didn't have time to work with ran their own research and ran into issues with setup and analysis.
Prioritization is now based on level of importance and urgency. At the beginning of each quarter, I meet with teams to develop a research schedule for the next 2.5 months. I work with designers and product owners to determine the importance of a study and the time frame in which the study needs to be done.
Of course, most designers view their work as "high" priority and want things done ASAP. That's why I ask questions about the prototypes/designs, their questions and goals, and the broader context. This provides insight into why a study is needed and how its results may align with business objectives and goals critical to product and company success.
Prioritization necessitates pushback. It means saying "no" while knowing that will disappoint some people. It also means asking for help and trusting that teams can complete smaller-scale research (like surveys or A/B tests in production) on their own.
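For readers who like to see the idea concretely, the importance-and-urgency ranking above could be sketched as a simple sort. This is a hypothetical illustration only: the 1-5 scales, field names, and example studies are all made up, not my actual scoring system.

```python
from dataclasses import dataclass

# Hypothetical sketch of ranking research requests by importance and urgency.
# The 1-5 scales and the example studies below are illustrative.

@dataclass
class StudyRequest:
    name: str
    importance: int  # 1 (nice to have) .. 5 (critical to business goals)
    urgency: int     # 1 (flexible timing) .. 5 (blocking a sprint)

def prioritize(requests):
    # Highest combined score first; importance breaks ties.
    return sorted(
        requests,
        key=lambda r: (r.importance + r.urgency, r.importance),
        reverse=True,
    )

queue = prioritize([
    StudyRequest("Checkout flow usability", importance=5, urgency=4),
    StudyRequest("Icon preference survey", importance=2, urgency=2),
    StudyRequest("Claims prototype test", importance=4, urgency=5),
])
print([r.name for r in queue])
# → ['Checkout flow usability', 'Claims prototype test', 'Icon preference survey']
```

Even a toy ranking like this makes the trade-off explicit, which is the real point: a request that is important but not urgent can wait for the next quarter's schedule.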
Once projects are prioritized, I coordinate with the stakeholders of the studies to schedule them. I do this while looking at a lot of calendars: the user research calendar, the department calendar, and the portfolio (subdivision of department) calendar. Scheduling requires flexibility and patience, as study rooms and security lists may need to be booked months in advance. I use my best judgment to draft a tentative schedule and confirm the details after the kickoff and as prototypes arrive.
The kickoff is where I put on my detective hat. I use a Microsoft Word template to record information as I ask stakeholders about their goals and objectives, the questions they want answered, and details about the prototype (fidelity, completion level, etc.), and to confirm scheduling details.
During this meeting, I draw on my expertise to determine which methods best suit the study (remote vs. in-person, moderated or unmoderated, online panels or recruited participants). I set expectations at this stage, explaining the time frame and constraints of each method and why I think one approach is best for the study. I also use a working-backwards approach to let stakeholders know by which date I need certain information and deliverables to run the study effectively. For example, if a recruited study needs to be done in 5 weeks, I will need the final prototype a week beforehand to write the moderation guide.
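The working-backwards approach is just date arithmetic, which a short sketch can make concrete. Only the one-week prototype lead comes from the example above; the other lead times, the item names, and the study date are hypothetical placeholders.

```python
from datetime import date, timedelta

# Illustrative "working backwards" calculation: given a study start date,
# derive when each deliverable is due. Only the one-week prototype lead
# reflects the example in the text; the other lead times are hypothetical.
LEAD_TIMES_DAYS = {
    "final prototype": 7,       # one week before, to write the moderation guide
    "recruiting screener": 21,  # hypothetical: panels may book ~3 weeks out
    "kickoff complete": 35,     # hypothetical: ~5 weeks before the first session
}

def deliverable_dates(study_start: date) -> dict:
    """Map each deliverable to its due date, counting back from the study."""
    return {
        item: study_start - timedelta(days=days)
        for item, days in LEAD_TIMES_DAYS.items()
    }

due = deliverable_dates(date(2019, 6, 28))
print(due["final prototype"])  # → 2019-06-21, one week before the study
```

Putting the lead times in one table like this also makes it easy to hand stakeholders a single list of dates at the kickoff instead of negotiating each deadline separately.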
Shortly after the kickoff, I create a checklist of to-dos, with assignments and symbols indicating the status of each task.
I typically run a pilot ~2 days prior to the first day of the study. If I am using an online usability platform, I test with a real participant. For in-person studies, I typically find a co-worker or someone in the building. I have found the pilot to be extremely valuable for identifying technical issues or questions that might trip up participants.
Among our user researchers, I am the advocate for new technologies, and I often use new mixed methods in studies (heatmaps on mobile phones, think-aloud surveys within customer interviews, and other tests-within-tests). Running a quick pilot with 1-2 participants is a great way to learn whether a new methodology or mixed method is a viable option.
I love study days. There is something exhilarating about getting real feedback from users and using that information to drive findings that can improve a product.
In an ideal world, everyone would show up on time and provide plenty of feedback. But that isn't always the case. (I've given several talks on these "worst case scenario" testing situations.)
On test days, I keep a backup plan in case things go wrong: for example, calling on alternates or scheduling an extra session the next day. During sessions, there is typically a dedicated observer/note-taker (usually another researcher) who can step in with advice if a session goes sideways.
On study days, I request that as many agile team members as possible observe the sessions as they happen. This ensures that they dedicate time to hearing from customers, allows them to ask relevant questions in the moment, and ultimately improves their buy-in, since they are the ones who will build the product and implement the recommendations.
Each observer writes notes in a note packet and then records their top 3 findings on sticky notes, using a different color for each study participant.
After the study, I lead a collaborative analysis session, which consists of affinity diagramming the sticky-note findings: removing redundancies (among each participant's notes), calling out themes, grouping participant notes under themes, and voting on the top themes.
After the voting, we go through each theme and I discuss recommendations with the team. The agile team members (developers, scrum masters, product owners) provide great feedback at this stage, thanks to their intimate knowledge of the product and its technical constraints.
Following the analysis session, I create a summary report as well as a links document that provides easy access to relevant files and information from the study. I may present study results to leadership and larger groups.
Collaboration is essential throughout the research process. Being a researcher is sometimes like being an HR professional, analyst, conflict mediator, IT person, planner, interviewer, and workshop facilitator all in one.
Because I receive more research requests than I can handle, I take on the more complex projects and delegate simpler ones to stakeholders. To stay transparent, I communicate deadlines and requests in advance, and I follow up when deadlines slip or critical information is not provided on schedule.
During research sessions, I take time to introduce myself to people I haven't met before, and I listen to their stories about what they do and how they got into their careers. I usually end with a cheeky suggestion that they try moderating a session in the future, to empathize with users and build conversation skills in a low-pressure environment.
I also advise on studies and run workshops and one-on-one trainings to teach others about research methods and software like Validately, Userzoom, Optimal Workshop, and Qualtrics. By doing so, I hope to improve the quality, validity, and reliability of the studies run by non-researchers.
Another initiative I'm starting is User Research Office Hours: dedicated time (2 hours per week) for UX designers, content strategists, product owners, and agile squad members to come in and ask research questions.
Overall, user research can be a rollercoaster and I am enjoying the ride!
Schedule a cadence of meetings to collect requests
Prioritize based on importance, urgency, and business impact
Say "no" to projects and/or push back but explain why
Focus on advising and training teams on user research
Take on projects of high importance and complexity. Trust that teams can complete simpler projects on their own
Use the kickoff to set expectations with teams (address timelines, delegate tasks, describe the expected outputs of the study)
Complete a dry run/pilot
Always write a research report and make it easily accessible to all stakeholders and team members