Observations on (final) Week 7 of Usability II

August 18, 2013

This week’s assignment for Usability II was a culmination of things we’ve been working on for at least the last two weeks or longer.

Our final eyetracking report was due, based on viewing five separate eyetracking sessions as well as heatmaps and gaze plots. This part of the assignment truly brought home to me the almost Jedi-level competence required to thoroughly and accurately decode eyetracking, heatmap, and gaze plot data. It reminds me very much of what I’ve seen of satellite photo image analysis. They call it analysis, but it seems much more like interpretation to me, with the possibility of huge margins of error (is it a hospital or is it a biological weapons manufacturing plant?).

I felt the same way looking at the eyetracking data, heatmaps, and especially the gaze plots. It’s so much voodoo to me that I feel I have no business even commenting on what I see; I think it takes years of training to get anywhere close to accurate at this type of interpretation/analysis.

But despite all this, I’ve reaffirmed, again and again, that this is the field for me. Even when it’s hard and I don’t feel competent to make a good call, I still enjoy the effort, the readings, and the learning.

I have high standards of accomplishment. I’ve gone through law school and passed the bar exam, and I feel the same sense of intellectual and academic challenge in this class that I felt in law school, with the added (finally!) enjoyment and sense of pleasure in what I do. It doesn’t get much better when it comes to education.

All that glitters

August 11, 2013

I just re-watched Rain Man and realized that a lot of that movie can be viewed as a study in eyetracking. Eyetracking in scenes intended to be real life as opposed to a website, but eyetracking nonetheless. Many of the scenes are montages of what catches the eye of Dustin Hoffman’s character. Many of the images are fascinating for their inhuman, mechanical repetition, speed, and play of light. But this is not the main story, and it takes director Barry Levinson’s focus to include the eyetracking while also building it coherently into the story.

I like what I’ve learned about eyetracking so far. First, it seems that it has to be approached as a real art form in order to be done well. Ever since I read Daniel Pink’s A Whole New Mind, I’ve been on the lookout for things that can’t be learned by rote, are difficult to automate, and require qualitative rather than quantitative analysis. Quantitative analysis may still be a part of eyetracking, but I like how Aaron reminded us in this week’s intro video that statistically valid quantitative results aren’t necessary to learn a lot about a site when using eyetracking.

The biggest problem I see with eyetracking, heatmaps, gaze plots, etc. is the cost. The equipment is expensive, and it’s even more expensive, in terms of additional salary, to hire people who know enough of the “art form” side of these studies to make good use of the technology (assuming you’ve even got someone doing UX hiring who knows what questions to ask in order to find, in Pink’s terms, a whole new mind, or at least a right-brained UX person).

Otherwise, I think you end up with a lot of participants who, while in an eyetracking study (I know I’m like this), act like Rain Man in the Las Vegas casino scene. Everything is very sparkly, definitely very twinkly. And without the skills to know whether the sparkly and the shiny are important or not to the overall story, you may end up with results that go nowhere, just constantly trying to figure out who’s on first.

Gestural interfaces for mobile

August 4, 2013

I like the exposure we’re getting to the usability of mobile devices. I mentioned in a previous post that I didn’t know much about designing for mobile, and now I feel that, although I am by no means an expert, I have a better handle on the topic, some of the problems it presents, and some design methods to deal with them.

It surprised me that, given the current popularity of using mobile devices to access the web, there isn’t already good usability and gestural capture software. Someone will make a lot of money if they solve that problem well.

For now, gestures do seem deeply embedded in the use of mobile devices. Still, it raises the same problem I mentioned in last week’s post: the devices are almost all very limited in how much they allow an average 10 mm fingertip to do.

We had a speaker a while back at Intuit whose team had started building an app literally based on the one from Minority Report. The funny thing was that, luckily before they had put too much time and money into it, while using paper prototypes on a whiteboard to simulate Tom Cruise’s gestural movements, everyone’s shoulders got way too tired, way too quickly, and they had to scrap the idea. I guess even with “simple” gestures, you sometimes have to go too far before you know where the limits are.

Week 4 Blog for Usability II

July 29, 2013

This week’s assignment deals with mobile devices. This topic hits close to home for me. I constantly see jobs in the UX field for those who have expertise with mobile devices. This makes me feel like I need to understand this medium, and the fact is that I don’t. Actually, I do understand it; I just don’t know why it’s in demand.

Given sufficient screen size, the web is currently a fairly full sensory experience: visual (text, graphics, photos), sound, etc. It’s a great medium and I love designing for it. Depending on your technology, settings, and screen size, we can render a Vermeer with a relatively lame level of verisimilitude. Even that lame level drops by orders of magnitude when everything has to be miniaturized.

Although I seem to be fighting a losing battle, I can’t for the life of me figure out why you would even want to try to view a Vermeer on an iPhone. It’s a metaphor, of course, but some things work in some forms of a medium and others don’t. Mobile devices are good for some things, some of the time. But it seems like we’re trying to make everything that works on a laptop, or even an iPad, work on a phone, all the time. Not.

Not unless everything on your iPhone can pop up, hologram Princess Leia-like, and render some sort of image that is WAY better than what’s currently visible on an iPhone or iPad.

Help me Obi-Wan.

Usability II Remote Testing Analysis from Loop 11

July 21, 2013

This week we gathered results from Loop 11 on the survey questions we set up last week, although a few of the people I asked to complete the survey didn’t finish until late last night or today.

Anyway, I’m not sure how Loop 11 works. I had a 0% completion rate, which, as I explained in my .flv recording, I don’t understand.

I have high standards for proficiency, and I don’t feel very proficient with Loop 11. I hadn’t used screencast-o-matic since Usability I, and I didn’t even remember the type of file I’d used (.flv, I discovered after going back to my Usability I course files).

Anyhow, I’m not happy with my Loop 11 results, and I don’t really understand them. I would have asked questions sooner but, having been at the UXPA conference all last week, I came back to two weeks’ worth of work that had to be done in one, and I didn’t look at my results until Saturday morning. Looking forward to having more time for class work this week and to learning more about my Loop 11 results.

Loop 11 and the X Factor in UXPA

July 15, 2013

This week’s assignment was fascinating. We had the opportunity to use Loop 11 to create our own remote usability study (cool all by itself, because it seems like, without student access, this could be very expensive to set up). I was even more excited by the project because we were allowed to choose which websites and which tasks we wanted to test, and one of the suggestions just happened to be kayak.com.

Kayak.com was co-founded by Paul English, a former Intuit employee, and I heard him speak in 2011, when he was invited back to talk to Intuit employees about entrepreneurship and the characteristics he looks for in kayak.com employees. It was a very interesting talk, and I came away with the idea of being an “energy amplifier” for my team. That just made using kayak.com as part of my test even more interesting.

I’m just back from the UXPA conference in Washington, D.C., jet-lagged and really feeling like I need another few days to recover, but it was a great experience and I got a lot out of it. It’s interesting that, despite the name change from UPA to UXPA, the focus at the conference was still very strongly on usability (eyetracking was big, research methods, etc.) and not so much on the “x” of experience.

We still have a ways to go, in my opinion, before we’re fully ready (as a profession) to include everything we’ve learned from usability, and also to include that X factor: experience.

Week 1 of Usability II

July 6, 2013

This week in Usability II, we’re looking at two software packages for remote usability recruiting. Our assignment is to explore the features offered by Ethnio and those by Mechanical Turk, then pick which one we would use for usability recruiting, explaining why we made the selection and why the unchosen option wasn’t sufficient.

I chose Ethnio mostly for its focus on usability recruiting. If you have a very specific task to perform (recruiting usability participants), then it seems sensible to use software designed for that very particular task. I work on TurboTax. You could use Microsoft Excel, for example, to prepare your taxes, but TurboTax is specifically designed for just that purpose. Assuming the cost is not prohibitive, for most people it would make sense to use TurboTax to prepare your taxes, if you’re using software at all. The competition isn’t even worth investigating. Trust me 😉

Likewise with Ethnio. I’m not sure I would want to use Ethnio for the actual testing, but I would definitely prefer their software over Mechanical Turk for the purpose of usability recruiting. Mechanical Turk is simply too generic: the usability equivalent of using Excel to prepare your tax return.

Final Report IAKM 60104 Usability I

June 29, 2013

This was the final week for Usability I. Our Final Report is due by midnight. I’ve already submitted mine. I was, coincidentally, on vacation for the past three days. Good thing, too. Based on the assignment requirements, I didn’t expect it to take long to complete, since a large portion of the report consists of documents we’d already assembled throughout the course. But as my wife strolled on the beach, I found myself in the hotel room for three eight-to-ten-hour days, listening to and watching all the participant videos, trying to decide which tasks to measure and how to document everything.

I really enjoyed it and find it a bit strange and surprising that I didn’t resent missing out on my vacation. I must (finally) be studying the right field.

After watching all of the videos posted by other students and seeing where the pain points were, I decided to try to stick with the same demographic (novice users) that my team had been assigned. I was able to find five novice-user videos and decided that the most important thing for someone who had never ordered pizza online was, well, to be able to place an order online. Papajohns.com’s Order Now button is very well placed and sized for just that task, and participants had no trouble finding and using it.

The second task I chose to analyze was ordering a specialty pizza. This task contrasted well with the first because it was difficult for most participants who tried to follow the instructions literally: there is no label on papajohns.com that says “Specialty Pizzas.” This was a good example of making sure your labels match your customers’ mental models. In this case, I don’t think customers come to the site wanting to order a “specialty pizza”; rather, they have a specific pizza in mind, or they get suggestions from the site itself.

Overall, a really good final assignment that pulled together a lot of what we’ve learned over the past seven weeks.

Quantitative Measurements for Usability

June 9, 2013

This week’s assignment in Usability I was about measuring the results of usability and the difference between quantitative and qualitative measurements.

Despite their power and precision, I have to admit to a deep distaste for most things quantitative. Over the years I’ve taken four courses in statistical methods, two in undergrad and two at the graduate level. I received A’s in all but one of those classes. I have great respect for data and the ability to slice it in different ways, yet it is simply a way of thinking that I do not enjoy.

As part of this week’s assignment we had to choose a quantitative method of measuring the usability of a website. I know that doing so is valuable; I just don’t like it. Working with tax software, I see the same split. We have visual and interaction designers who are left-handed, right-brained folks. I like them. I get along well with them. Then (because it’s tax, after all) we have those who love their calculations. Although I manage to get along with those folks, I typically don’t enjoy being around them.
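
To be fair, the quantitative measures in question are mostly simple arithmetic over session data. Here is a minimal sketch in Python of two of the most common ones, task completion rate and mean time on task; the session numbers are made up for illustration, not taken from my actual study:

```python
# Two common quantitative usability measures, computed
# from hypothetical session data (illustrative only).
import statistics

# One record per participant: did they complete the task,
# and how long did the attempt take (in seconds)?
sessions = [
    {"completed": 1, "time_s": 48.0},
    {"completed": 1, "time_s": 62.5},
    {"completed": 0, "time_s": 120.0},  # abandoned the task
    {"completed": 1, "time_s": 55.0},
    {"completed": 1, "time_s": 71.0},
]

# Completion rate: share of participants who finished the task
completion_rate = sum(s["completed"] for s in sessions) / len(sessions)

# Time on task is usually reported over successful attempts only
success_times = [s["time_s"] for s in sessions if s["completed"]]
mean_time = statistics.mean(success_times)

print(f"Completion rate: {completion_rate:.0%}")            # 80%
print(f"Mean time on task (successes): {mean_time:.1f} s")  # 59.1 s
```

Useful numbers, certainly, for comparing a design before and after a change, but they say nothing about why the one participant abandoned the task, which is the part I actually find interesting.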

I guess I’m glad that usability has room for both types, but if you tell me you’d rather focus on human emotion than data, I’ll probably enjoy being around you.

Moderating a Usability Study

June 2, 2013

I have moderated several usability studies, and watched dozens of other, professional moderators conduct them, before the one I did yesterday for Usability I. So there weren’t a lot of surprises. One was how difficult Papa John’s makes it to report a complaint online. You’re never going to get true customer feedback with a process that difficult. The other thing that surprised both me and my participant was how hungry we were for pizza by the time we were done!

I actually did better than I expected. I am by no means a born moderator. I’m not an extrovert, not much of a “people person,” and unless I’ve had some sort of creative input into what I’m watching or working on with the participant, I find usability sessions a bit dull. This was less true of this particular exercise than of usability at work. The usability joke at work is: What’s more boring than doing your own taxes? Answer: Watching someone else do theirs.

It was fairly easy for me to remain unbiased. I don’t have a lot of emotional investment in Papa John’s one way or the other, and even during parts where my participant was very critical of the site, it wasn’t personal to me. This has not always been true in previous usability studies where the participant is actually criticizing something I worked on, especially if I put a lot of thought and effort into it.

I think I cut my participant off just once and tried to make up for it. For the most part, I think I allowed my participant to speak and tried not to introduce bias in the way I asked the questions. My responses were intended to be encouraging without being judgmental and often were simple acknowledgments like “OK” or “I see.”

Altogether, a pretty good session with some useful insights, especially on the positive side (the motion graphics of adding toppings to your pizza order) and on the not-so-positive side (trying to register a complaint with the corporate office).