Please note, to comply with my non-disclosure agreement, I have omitted and obfuscated confidential information. All information in this case study is my own and does not necessarily reflect the views of Premise or its employees.

 

TL;DR

(Too Long; Didn't Read)

Who are the people willing to map points of interest? How might we engage them to complete mapping tasks while ensuring quality data and their safety? This project sought to better understand Contributors to Premise's points of interest data collection (i.e., mapping places) to inform growth and data quality efforts. I completed this project for a qualitative research methods course at the UC Berkeley School of Information. I was consulting for Premise at the time, which allowed me to fulfill my course requirements in a real-world Premise context. Although the project was abbreviated, I learned a lot by studying interview and grounded coding theory and best practices in class while executing this project at Premise in parallel.


Premise is a two-sided marketplace: 1) the Premise mobile app pays Contributors (users) to submit sentiment (surveys), observation (photos), and location-based data; 2) Customers pay Premise for data. Premise data collection includes mapping points of interest (POIs) by tasking Contributors to go to specific points (e.g., a business, facility, or structure), verify the location and operational status, and take some photos.

This research sought to build profiles of the people likely to engage in POI data collection tasks: their motivations for completing tasks and for completing them well. The objective was to understand the experience of engaged Contributors in order to inform the data quality strategy for this type of data collection.

Overview


Timeline

September-October 2021


Role

Lead UX researcher


Research Questions

1. When do engaged Contributors use Premise?

2. Why do engaged Contributors choose to use Premise or other apps for flexible income, if applicable?

3. How do engaged Contributors select which Premise tasks to complete?

4. What challenges do engaged Contributors face in completing POI tasks?


Process

Interviews: I identified engaged Contributors from two key, contrasting markets, with the goal of understanding what makes an ideal Contributor for this data collection across diverse environments. Seven Contributors participated in 30-minute semi-structured interviews focused on their background and experience with Premise, especially in completing POI tasks. With participants' consent, I shared an artifact of their experience with POI tasks (photos they had recently submitted to the Premise app) to prompt more specific recollections. Participants described the context in which they completed each task, from logistics to decision points, and how they felt about the experience.

Analysis & Synthesis: The analysis and synthesis were informed by two cycles of grounded coding of the interview transcripts and the development of an accompanying codebook in MAXQDA. The analysis also drew on participant demographics collected through a survey completed during Premise account creation.

Report & Presentation: Alongside a more traditional paper and deck, I developed four Contributor profiles to communicate the findings in a way that would generate empathy and make the Contributors' experience more memorable. In contrast to personas, the profiles combined similar Contributors' backgrounds based on themes emerging from the interview data and anonymized real stories. The goal was to leverage the empathy-building value of personas while avoiding biased or stereotype-based characterizations, especially with limited data. Each Contributor profile card describes the profile's gender, age, and economic situation; why and how they use Premise; and how they understand the purpose of Premise's data collection, along with a narrative of the Contributor's experience completing a task.

I distributed the Contributor profile cards to cross-functional team members in advance of the group research debrief and brainstorm. During the session, each attendee presented their assigned Contributor and brainstormed from that profile's perspective before adding ideas to address other profiles' experiences.


Findings

In addition to the Contributor profile cards, the report and presentation highlighted the variety of Contributors' experiences across four key, interrelated spectrums, each ranging from low to high: reliance on Premise, risk-taking, tolerance for uncertainty, and grasp of the purpose of the data collection. For example, Contributors with low reliance on Premise tended to take minimal risk while completing tasks (mild awkwardness at most) and had a higher tolerance for the uncertainty of earnings from Premise.

An example Contributor profile card


Impact


  • The storytelling approach of the Contributor cards was effective in building empathy for Contributors. For example, from the perspective of the cross-functional team managing the data collection, data quality problems sometimes appeared to result from a lack of effort by the Contributor. Some Contributors do cut corners, but the stories of dedicated Contributors whose genuine challenges manifested as data quality issues, even when they gave their best, inspired more innovative thinking about how to ensure quality data without penalizing individual Contributors for circumstances outside their control.

  • The Contributors’ experiences also highlighted other priorities for data collection operations, such as clustering POIs for efficient travel and mitigating the impact of a sudden stop in availability of tasks.

  • This project focused on a specific suite of Premise tasks. A year later, another Premise research project leveraged this study's design and interview guide for research on a different suite of tasks.

It’s so easy to fool ourselves into thinking we know how people are experiencing what we’re putting out there—so many unexpected insights!
— Data Science Director following research debrief

Learnings

  • Using artifacts (submission photos) as prompts in interviews was an effective technique for guiding the conversation into specifics, one I've adapted in many interview guides since.

  • Two cycles of grounded coding for each transcript was a more thorough analysis than I've typically had the bandwidth for in industry, but it shaped the way I think about sensemaking with qualitative data. It was the perfect opportunity to practice the most rigorous form of the method and build aptitude for more expedient analysis.

  • I did not allocate much time to advocating for the research after my class assignment was complete, and I think it could have had more impact if I had socialized its ideas more specifically and persistently. While the research built empathy for Contributors, there were lost opportunities to translate that empathy directly into action. In the future, beyond investing time on the tail end, I would put more focus on scoping and engaging partners from the start on how the research could inform decisions.