Along these lines, one of my criticisms of the newly launched BringMeTheNews.com website is that it’s hard to read. To my eyes, the contrast between the type and the brown background is not high enough for comfortable reading. Considering that the site is meant to be read, this seemed like a legitimate problem – at least for me. Here’s an example of a paragraph from BMTN:
Just because something is a problem for me doesn’t mean it’s a problem for everyone, so I set out to measure whether font choices affect a person’s reading speed or comprehension.
To do this, I tapped into one of my hobby businesses, 3rd Party Feedback, to design a test that could quantify this sort of thing. At first thought, it seemed nearly impossible to determine how long it would take someone to read a piece of text. In the end, an indirect measurement of reading comprehension was created: transcription.
The final test asked 10 independent web users to transcribe four snippets of text. There were two snippets screen-grabbed from BMTN, and two snippets with the same copy presented in a black on white color scheme like this:
As I mentioned in my previous critique, I find the latter of the two above examples much easier to read. However, I was curious to find out if I was the only one.
The task 3rd Party Feedback workers were asked to complete was to transcribe one of the four examples. Each task would have 10 workers complete it, and all tasks would be timed. At the end of the project, 3rd Party Feedback had timed results for 40 tasks. When averaged out, the results (surprisingly to me) looked like this:
So what this says is that it took people an average of 40 seconds longer to transcribe the black on white version of the Tim Pawlenty snippet above than the same copy when presented in BringMeTheNews.com’s color scheme. To me, this didn’t seem intuitive. How could brown on brown beat black on white? I pondered the results and discussed them with some friends. Then, it dawned on me that there may be a lurking variable in the results. Typically, when we run tests at 3rd Party Feedback, we include a comments field after the assigned task asking for feedback. The feedback can be very constructive and help improve the way future tasks are worded. However, when running a timed test, this can skew results. The black on white versions generated more comments on average, which would have extended their project completion time:
would you want leagues to be possessive? league’s, right?
I typed it as is, but there was a mistake. “leagues” should be “league’s”
The black on white version drew out the grammar nazis.
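The comparison above boils down to simple per-variant averaging of completion times. Here’s a minimal sketch in Python of that step; the timing numbers are made up for illustration (the real per-worker data isn’t reproduced in this post):

```python
# Hypothetical per-worker completion times (seconds) for one snippet shown
# in two color schemes. These are placeholders, not the actual test data.
times = {
    "brown_on_brown": [310, 295, 330, 305, 320, 300, 315, 290, 325, 310],
    "black_on_white": [350, 360, 340, 355, 345, 365, 335, 350, 360, 340],
}

# Average the 10 timed results for each variant.
averages = {variant: sum(ts) / len(ts) for variant, ts in times.items()}

for variant, avg in sorted(averages.items()):
    print(f"{variant}: {avg:.1f} s average over {len(times[variant])} workers")
```

With numbers like these, a variant that attracts extra unpaid work per task (such as typing a comment) would show a systematically higher average, which is exactly the skew the comments field introduced.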
Having discovered this, we removed the comments field from this task and ran it against a new group of testers. Here are the revised results without the comments field distraction:
That’s what I was expecting from the first test.
Is this a case of tweaking the test to create results that match expectations, or creating a better test? I’m clearly biased, so let me know what you think. As I see it, the latest version of the test shows a 6-9% improvement in transcription times when using black on white vs. BringMeTheNews.com’s default font and color scheme. However, it’s still not a perfect test since more than one variable was changed: the color scheme and the font size were both adjusted.
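The 6-9% figure is just the relative difference between two average times. A quick sketch of that calculation, using hypothetical averages rather than the actual test figures:

```python
def improvement_pct(baseline_s: float, variant_s: float) -> float:
    """Percent reduction in transcription time relative to the baseline."""
    return (baseline_s - variant_s) / baseline_s * 100

# Hypothetical example: a 320 s baseline average vs. a 295.2 s variant average.
print(f"{improvement_pct(320.0, 295.2):.2f}%")
```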
So that’s my little experiment.
A few people asked me how much it costs to run split tests like this through 3rd Party Feedback. Generally, it’s around $1 per response, so a test like this would run $40 (10 respondents for each of 4 tasks). That doesn’t include analysis.
If you’d like to give this a try, here’s a coupon worth $10 that you can use as many times as you’d like between now and the end of 2009 on orders of $50 or more:
Coupon code: FOTD
If you’ve stuck with this post this long, this is something you should be doing. Feel free to pass this coupon code along to friends and family, but not your competition.