BrightCove, Boston, MA 


Jenn Green, Maura Robinson, Dan Lopes and Sandra Romero


YouTube, Vimeo and BrightCove are three leaders in the video content monetization industry. One of them asked my school team to evaluate how users would fare while completing scripted tasks in all three video players.

We updated our stakeholder regularly and made them part of the key decisions. At the end of the assignment, we presented our findings and provided recommendations based on the qualitative and quantitative data we gathered.


This was the first time any of us had worked with remote, unmoderated test participants recruited through a third-party service, in this case www.usertesting.com. On one hand, this expedited the recruiting process; on the other hand, we had no control over the quality of the responses we received.

We worked with our stakeholder to create the test script, formulating the language and putting it through a smoke test. Maura played a pivotal role here, while Jenn acted as our team lead.

Through the smoke test, we became aware that presenting the video players in the same order to everyone could introduce bias. Users seemed either to make up their minds early when asked about a preferred player, or to learn from their initial stumbles and become more efficient by the time they tried the third player.

We therefore came up with three different orderings. Here are the sequences:

            key: (B) BrightCove, (Y) YouTube, (V) Vimeo

             BYV (P1), BYV (P2), YBV (P3), YBV (P4), VYB (P5), VYB (P6)
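The assignment above can be sketched programmatically. The snippet below is a minimal illustration, not part of the actual study: it cycles through the three orderings so that each sequence is used by exactly two of the six participants, matching the list above (the function name and participant labels are my own).

```python
# Illustrative sketch: assigning counterbalanced player orderings to participants.
# Key follows the study: B = BrightCove, Y = YouTube, V = Vimeo.
SEQUENCES = ["BYV", "YBV", "VYB"]  # the three orderings used

def assign_sequences(num_participants):
    """Give each participant a sequence, cycling so every ordering
    is used an equal number of times (two participants per ordering here)."""
    return {f"P{i + 1}": SEQUENCES[(i // 2) % len(SEQUENCES)]
            for i in range(num_participants)}

print(assign_sequences(6))
# {'P1': 'BYV', 'P2': 'BYV', 'P3': 'YBV', 'P4': 'YBV', 'P5': 'VYB', 'P6': 'VYB'}
```

With more participants the same rotation keeps the orderings balanced, which is the point of counterbalancing: no single player always benefits from the learning effect.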


Below is the test script we used. We recorded age group and gender. Our tasks revolved around the play button, the video scrubber and the volume control in all three players. 


The output was very compelling, and being able to replay the users' interactions with the different players allowed us to fully surface the issues users were encountering.

I helped synthesize the most significant quotes into a mind map, which would later let us see what the major pain points were.

I also drilled into the data to produce visualizations presenting qualitative and quantitative data side by side, including unexpected findings, external influences that may have shaped the responses, and the determining factors that seemed to split users' preferences. 


Armed with this newfound knowledge, we set out to pack as much content as we could into an information-rich presentation for our stakeholder. Below is part of our final presentation, covering the issues found in each of the players and leading up to a consolidated view of the most overarching issues we suggested our stakeholder take into account across all three players.


Final Recommendations