Customer Experience Journey Maps - A New Buzzword for an Old UX Practice?

I have a confession to make - when my workplace started to go bananas for customer journey mapping, I didn’t see much of a difference between what a journey map offers and what is provided through traditional UX research techniques. To me, customer experience journey maps were just a new buzzword for an old thing. 

In one respect, that statement is entirely true. A good customer experience journey map begins with a persona, and includes the user's flow. The persona and customer journey map are founded in first-person customer research - the hallmark of good UX research - not in the business's assumptions or generic marketing demographics. Usually this information is gathered from one-on-one interviews with customers, observing the customer using the product or service in the wild, surveys, message board and social media posts, and digging into customer feedback channels and analytics.

From there, the journey map deviates from the typical UX research deliverables. Instead, it is a representation of this persona's interaction with the product or company from end to end - the customer's initial need for a solution, their discovery of the product or service, their decision to purchase it, and what it is like to be a customer or owner for the life of the purchase. UX user flows often assume one thing - the customer is already a user. They ignore the period before adoption, where a lot of decisions are made, and they often don't address all the phases of a user's relationship with the company.

Another aspect of a customer journey that is lacking from traditional UX documentation is the customer's emotional journey, which is illustrated with equal weight alongside the customer's interactions with the product. What is easy or delightful - and what is frustrating or confusing - during the process is called out boldly in a customer journey map. In addition, the product's marketing efforts and customer touch points are overlaid with the same clarity. It's a complete snapshot of the journey, whereas a user flow is just a piece.

The biggest difference, though, is that a customer journey map and persona are agnostic of the technical solution. They are customer need-finding tools that identify opportunities for new products, services, or features. In my 12 years of working in user experience, I've found that UX research generally begins once the product or service offering has already been identified - the client wants a new website, the company needs a tool to do the following, etc. User experience research sets out to create useful and satisfying designs within the framework of a solution, whereas customer experience research sets out to determine what will be a useful or satisfying solution - and therein lies the value.

Liminal Thinking for More Empathetic Customer Research 

I recently finished reading Liminal Thinking by Dave Gray and I can't stop telling people about it. None of the descriptions of this book do it justice (and neither will mine), but it's about how people form beliefs, what happens to human interactions as a result of beliefs, and how to change beliefs (or your understanding of people's beliefs). Like I said, this book defies description. But it's a quick read, so I strongly suggest you read it.

As I was reading, I kept thinking about why customer research is often flawed. If you’re a researcher on a project - maybe making personas or journey maps about an audience’s experience - you’ve already done a lot of initial research, maybe talked to the client and heard their opinions, formed your own judgments, made some assumptions, and already thought of a few solutions. It’s only natural. But these are also the thoughts that will color your research, and perhaps skew it toward your preconceived notions. It is easy to interview people, write surveys, and run focus groups that are unwittingly structured to validate your own assumptions.

How do you avoid skewing your own research? Gray offers some liminal thinking practices as a solution, and I’ve identified three that I think are the most helpful. 

Assume That You Are Not Objective

Gray shares a great anecdote about a boss who shoots the messenger when he hears bad news. Because the boss reacts poorly, his employees stop telling him things. After a while, the boss feels out of the loop and wants to find a way to change it. What the boss doesn’t realize is that his reactions caused this cycle - he needs to change his behavior to get his employees’ behaviors to change.

Now, imagine that you are a UX researcher and you’ve worked on a product for years. You know why things work a certain way. While you’re researching, users keep bringing up issues, but because you’re so close to the product you dismiss what they’re saying because it “had to be that way” or it’s “not in the budget.” Meanwhile, an outsider might find these insights to be the most valuable part of the research - possibly the key to designing a better product.

If you’re part of the system, you need to approach things like an outsider and assume that you are not objective. 

Empty Your Cup 

Emptying your cup is one way to get an outsider perspective. This means that you have to consciously suspend your judgment. Let go of any knowledge you have on the subject and forget your theories, preconceived notions, and assumptions in order to let other people’s thoughts and beliefs in. This is hard, but it is possible! 

How do I do this? While I’m in a research mode, I do not let myself dwell on theories or solutions. When they inevitably come to mind, I control them by writing them down in my notebook so that I’ll have them for later. Then I block the thought out (knowing I have it noted) and continue researching. 

For example, I was working on a project where I was interviewing about a dozen people who were considering an optional medical procedure. At the start of my interviews, it seemed like health insurance coverage was an enormous blocker. I jotted a solution in my notebook that said “make it easier to check insurance - maybe an online tool?” By the time I had talked to everyone on my list, I had learned that determining insurance coverage was a simple phone call that many people easily made. Those who weren’t calling the insurance company weren’t ready to commit to saying yes or no to the procedure, so they procrastinated while they thought more about it. It was a different problem (and a different solution) than the one I had identified earlier. 

If I had clung to my early idea, I could’ve used my interviews to validate the idea. I bet if I asked everyone I talked to, they all would have liked an easy online insurance coverage checker. But my insurance-checker idea probably would not have changed minds or solved the bigger issue of fear and uncertainty around the procedure itself. 

Triangulate and Validate

The story I just told also brings up the practice of triangulating and validating. With this, Gray encourages people not to just assume that they know what's going on. Talk to as many people as you can, and to as many different types of people as possible. For my project, I talked to the client, to the medical office staff, and to people who were considering the procedure, in the process of having the procedure, and those who had recovered from the procedure. I read message boards and Reddit threads, online health information, and more. I tried to examine the research through a lot of different lenses so that I could gain deeper insights.

As difficult as it is, for the best customer insights you have to let go of your own theories and stay flexible about the outcome. If you manage to do this - even just a little bit - you’ll understand your subject more, and you’ll be able to make an empathetic connection with your customers.

Market Research Tools: Using Google Forms for Surveys & Screeners

Whether I'm recruiting usability test participants or planning customer research, one of my go-to survey tools is Google Forms. I use it for participant screeners for usability testing and user interviews, as well as for market research and customer surveys. Plus, Google Forms is free, which matters a lot when your client has a small budget or doesn't need the full capabilities of a paid survey tool.

Though there are a lot of freemium and free survey tools out there, Google Forms offers a handful of valuable features, and I know that the service is reliable - it's not a start-up that is going to be shuttered or sold to someone else a month from now. At the risk of sounding too much like an infomercial, here's a list of the many reasons why I like using Google Forms to conduct online surveys.

Advantages to Using Google Forms for Surveys

  • First and foremost, you’re able to build robust surveys with conditional logic. Really, as much logic as you need. Though the editor portion of the form tool can get unwieldy, you’re able to make a friendlier survey experience for your audience with low effort and at no cost. 

  • You can collect a lot of survey responses. At the time of this writing, you can collect 400,000 responses if you import to a spreadsheet, and unlimited responses if you only use the tool’s built-in reporting.

  • Your data is portable. You can download the responses into a Google Spreadsheet or a CSV file so you can import and analyze the results in whatever stats tool you're comfortable with (see the sketch after this list). Also, you can easily save a copy of the raw survey data outside of the tool for posterity.

  • Lastly, Google Forms can be customized with an image. Adding an image automatically changes the color scheme to match. So, the survey can match your brand if you need it to, which makes the survey feel more credible (and removes some of that "cheap" Google Form aesthetic).
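
For example, once you've downloaded the responses as a CSV, a few lines of Python with pandas can summarize answers or pull out qualified screener participants. This is just a rough sketch - the file name and question wording below are made up, so substitute whatever your form actually asks.

```python
import pandas as pd

# Load the responses exported from Google Forms
# (hypothetical file name and column headings)
responses = pd.read_csv("survey_responses.csv")

# Quick summary of a multiple-choice question
print(responses["How often do you use the product?"].value_counts())

# Pull out screener respondents who agreed to a follow-up interview
qualified = responses[responses["May we contact you for a follow-up interview?"] == "Yes"]
qualified.to_csv("qualified_participants.csv", index=False)
```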

With Google Forms, cost is never a barrier to reaching out to your users or your customers - you will always have a free, highly capable tool to help you collect information and gain insights from your audience. 

User Interviews: Bias and How to Reduce It

Imagine the following scenario. You’re a UX or CX researcher, and you’re working on a website redesign. You’ve had some preliminary meetings with your client and you’ve done a little of your own research about the marketplace. Based on what you know so far, you have a few ideas that you think would be great for the new website. You’ve set up a half-dozen interviews with your client’s customers to understand their needs, and during the interview, you bring up one of your ideas to solicit their feedback. 

Interviewer: “Wouldn’t it be great if you were able to [XYZ] online?” 
Interviewee: “I never thought of that before. I guess so…sure!” 
Interviewer: “It would be quicker and more efficient…”

Do you believe the interviewee really wants this feature? 

If you answered yes, perhaps you shouldn’t have. When you’re conducting user interviews for research, every question that you ask and every topic that you introduce has the potential to skew your results. There are several types of bias that may influence the interviewee’s response to this type of questioning. 

Politeness and Halo Effect

Often, people will be polite in their answers to spare your feelings, to be friendly, or to build a rapport. There's also a chance that your interview subject doesn't want to make waves in their organization, or fears that a negative response will get back to a boss or an important contact.

Alternately, if you're conducting interviews and you've been introduced as an expert in the field, your interview participants may think you have special insight, and your position might cast a positive glow over all of your suggestions, whether they are good ideas or not. This is called the "halo effect."

Tips for remediating politeness and the halo effect: 

  • Downplay your level of expertise and your own investment in the project. “I’m just the researcher…”  

  • Tell them you’re at an early stage in your research, even when you’re not. 

  • Say that negative feedback is encouraged, and is often more helpful than positive feedback. 

Query Effect

The Nielsen Norman Group cautions researchers to be aware of the query effect. They claim that when you ask someone for their opinion, that person will come up with an opinion, even if it is unimportant to them or is about something they have very little information about. 

Tips for reducing query effect: 

  • Ask follow-up questions that dig deeper into their need and prompt for specifics. For example, if they had the proposed feature for 5 days, how many times would they use it based on their current workload? How much time might a feature like this save them personally? Try a technique like repeatedly asking why.

  • Even better, don’t ask if they want a certain feature. Inquire about their current experience and whether or not they have any suggestions for improvement. They might suggest the feature. If possible, observe them doing whatever process it is that you’d like to replace or improve. 

What You See Is All There Is (WYSIATI)

In his book, Thinking, Fast and Slow, Daniel Kahneman proposed a theory called What You See Is All There Is (WYSIATI). Kahneman suggests that your mind is hard-wired to tell a story based on the information you already know and to draw a conclusion from it, forgoing additional research. In the scenario above, the interviewer's findings from her preliminary research made her instinctively identify problems and formulate solutions. This is completely natural, but it can stop you from gathering more information and possibly finding a more appropriate solution. 

When conducting need-finding interviews with users or customers, your primary goal is to deepen your knowledge and to find out what you don’t know. User interviews should not be used to gain confirmation or sign-off for your ideas.

Tips for reducing the influence of WYSIATI: 

  • Be open to new information. Although you will absolutely come to the interview with assumptions and solutions, don’t share them with the participant. Pretend that you are a complete novice. Avoid steering the conversation too much in one direction or inadvertently selling the participant on your idea.

  • Ask questions like “is there anything else that might be useful to us as we research XYZ?”

When in doubt, it’s always best to keep solutions out of need-finding interviews, and instead focus on gathering as much information as possible about the problem. Save coming up with ideas and recommendations for after you’ve completed all your research, when you can properly reflect on more comprehensive findings. 

5S: A Tool for Web Content Management

Too often, website content grows old and outdated and loses its usefulness for visitors and the organization. No one pays much attention to this until it is time to overhaul the website. At that point, content becomes a huge obstacle and a seemingly insurmountable amount of work. The fix for this? Routine housekeeping to make sure the content has life-long value for site visitors and the organization. 

To create a process for routine content management, borrow a practice from Japanese lean manufacturing called 5S. 5S is a workplace organization methodology that consists of five Japanese words that loosely translate to sort, straighten, shine, standardize, and sustain. In manufacturing, applying this methodology reduces errors and improves quality - the same thing you need for your web content. Here’s how to 5S your web content. 

Sort  

The first practice is Sort. For this, you'll want to identify and remove unnecessary content. This can be done by making a content inventory. You can learn all about making a content inventory elsewhere, but the basic idea is to make a spreadsheet that lists page titles and links to all of your content, from pages to PDFs and videos. Examine each piece of content and indicate if anything is redundant, outdated, or trivial (ROT). If it is, delete the content or fix it.
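
If your site publishes a sitemap, you can even jump-start the inventory with a small script instead of copying links by hand. Here's a rough sketch in Python - the sitemap URL and spreadsheet columns are placeholders, so adjust them to match your site and your review process.

```python
import csv
from urllib.request import urlopen
from xml.etree import ElementTree

# Hypothetical sitemap location - most CMSes publish one at /sitemap.xml
SITEMAP_URL = "https://www.example.com/sitemap.xml"

tree = ElementTree.parse(urlopen(SITEMAP_URL))
namespace = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
urls = [loc.text for loc in tree.findall(".//sm:loc", namespace)]

# Start the inventory: one row per URL, with empty columns to fill in during review
with open("content_inventory.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["URL", "Page Title", "Owner", "ROT?", "Action"])
    for url in urls:
        writer.writerow([url, "", "", "", ""])
```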

In conjunction with the content inventory, take a look at your website’s analytics. Specifically, examine the least visited pages. What about these pages makes them so unpopular? Are they ROT, or is there a problem that is causing these pages to be overlooked? Fix the problems if the content is still valuable, or move these pages to the trash bin. 

Straighten

The second practice is Straighten, which is essentially the old aphorism “a place for everything and everything in its place.” Going back to the content inventory, now you’ll assess the organization of the content. Has it grown disorderly? Do the navigation or categories still make sense based on the content contained within them? Are callout features and promotions still relevant to your audience? Identify the top tasks for your website and conduct a usability test to check that everything is still functioning well for your users, or to test any proposed changes to your site’s navigation or workflow to make sure they'll solve the problem.  

Shine 

Next up is the Shine step. In manufacturing 5S, this is cleaning everything and eliminating dirt. For your website, this means cleaning up the small messes that accumulate over time - broken links, typos, outdated dates, and the like.

Standardize

To Standardize is to create and follow best practices for your site and maintain high standards. Start by creating a style guide if you don't already have one, and make sure everyone who is contributing to your website has the right tools to do their best work (such as image and video editing tools). Update documentation and training materials to make sure they are still useful, and make them easily accessible to your content contributors. Re-run trainings if you need to, or if you've never held training before, start now!

Sustain 

This brings us to the final S, Sustain. This step is to "do without being told." Repeat Sort, Straighten, Shine, and Standardize as part of a systematic process to continuously improve. Do it quarterly or monthly - make it a habit! If you do, 5S will help you improve the user experience, quality, and relevance of your content - turning your website into a well-oiled machine.

How to Become a User Experience Designer

I am often asked how to become a user experience designer - it’s actually the number one question that comes in through the contact form. Anyone can learn UX best practices, but what makes a great UX designer are the soft skills - being able to empathize with users, see the big picture, and have good sense. That aside, here is how I would learn the hard skills if I were starting over. 

As you probably know, some people go to undergrad or grad school for Human Computer Interaction or Interaction Design, and there are also various certifications at various price points. You don't really need to do either, but they are out there, depending on your goals and access to the field. If you’re already working in the digital space, you can probably apprentice your way into a position. If not, you might need a degree program or certificate to get your foot in the door. I started in the field as a web copywriter with an English degree, but I worked at a digital design agency where I could learn on the job from other talented UX professionals.

First, I would start with design theory. 

If you aren’t engaged and interested in this stuff after you read about it, you might want to look into related digital careers. These are the two books I would absolutely read: 

The Design of Everyday Things by Don Norman is a foundational usability/design book that everyone talks about. If you grab an old version and are under 30, you may not recognize any of the examples, so spring for the updated version! Don Norman also covers the highlights in a free online class from Udacity.

Change by Design by Tim Brown, who founded IDEO - Another one of those books everybody reads - and by everybody, I mean lots of people in business in general. Using the term "design thinking" has become very buzzy of late, and it originates from this book. This book is also great if you're thinking about experimenting with startup ideas and service design.

Second, learn the practice. 

If you’re video-oriented, Coursera has an interaction design series that results in a certificate. If you decide to pay, it's affordable compared to what else is out there. I tried the first course in HCI a couple years ago, and it was very good. Even if you just watched those videos for free, you'd have a good working foundation of the basics. 

I would also read a selection of books from Rosenfeld Media. They are deep dives into particular subsets of UX, so they will deepen what you learned in the Coursera course. They are also very practical books to skim or use for reference.

UX Team of One by Leah Buley - Read this first! This is going to give you a really good overview of stuff you should know, and techniques for implementation. 

Web Form Design by Luke Wroblewski - The best book about form layout. The internet is made up of forms, and most of them are kind of bad. If you are a coder, reading this will also up your game. 

A Web for Everyone by Horton and Quesenbery - Usability for people with disabilities. Accessibility is fundamental to user experience, and this book will tell you why. I think everyone should read this, because it will change the way you think about 508 compliance or WCAG 2.0.

Prototyping, A Practitioner's Guide by Todd Zaki Warfel - The part where it talks about tools is a little outdated, but the book outlines approaches to UX prototyping, and why you should prototype first.

Which brings me to prototyping and wireframing tools - don’t worry about whether or not you have a well-developed skill in design tools. Learn them as you need them. Anyone can learn how to use Balsamiq, Axure, Omnigraffle, Visio, and the Adobe suite, and plenty of people just draw or even use PowerPoint. Anyone who is hiring based solely on how good you are at a tool is probably offering a UX job that is not worth having.

There are also millions of websites out there with practical information, but you can’t go wrong with these few: 

Good luck - I hope you found what you needed to become a great user experience designer! 

How to Use Google Analytics to Identify Web Accessibility Issues

Unfortunately, in many organizations, a case must be made for adhering to web accessibility best practices, even though it is the right thing to do and is often legally mandated. If you’re in the situation where you have to ask for accessibility to be considered, you might think that Google Analytics will help you make your case. Perhaps there’s reporting that captures screen readers, for instance. However, there’s no screen reader report, and determining what users have disabilities is not clear cut (for good reason - think of privacy implications). 

The best way I've come up with to identify the potential for increased occurrence of disability among users is to focus on the Age demographic information in Google Analytics. Why age? Inelegantly put, the older one gets, the more likely one is to experience vision, hearing, and mobility issues - in other words, reading glasses, hearing aids, and arthritis. All of this affects the user experience, and typically older users are a large and valuable website audience.

Demographics > Age

The first step is to identify the size of your older population. This is as easy as going to the Audience section in Google Analytics and clicking on the Demographics tab. From here, you can access the built-in report for Age, and then view columns for percent of sessions, bounce rate, and session duration. You’ll be able to see how large your older audience is, and get an initial feel for whether they are spending more or less time with your website than the younger audience. If it is not similar to the experience of the younger age segments, this could be your first clue that there is a bad user experience for older populations. 

Using Age as a Secondary Dimension 

Next, you’ll want to view a few key content reports with Age selected as the secondary dimension. 

Exit Pages: Are there any abnormalities to the exits? Are older users behaving differently from other audiences?

Landing Pages: Check out the bounce rate and the average session duration.

  • Do your older age segments have a higher bounce rate than your younger segments?

  • Is your average session duration different for older age segments?

  • Are older users taking longer on the page (struggling?) or are they abandoning the page faster than everyone else? 

All Pages/Content Drilldown: If you have pages with video, and they don't have accessible alternatives, sample a few of the pages against the Age dimension. Are older users leaving these pages or not spending enough time to watch the video? (If you have video plays tagged as Events in Google Analytics, you can apply the Age dimension to that for more accurate insights.) 
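
If you export one of these reports - say, Landing Pages with Age as the secondary dimension - to CSV, a short script can flag the pages where older visitors bounce the most. This is only a sketch: the file name and column names are hypothetical and will depend on how your report is exported.

```python
import pandas as pd

# Hypothetical export: Landing Page, Age, Sessions, Bounce Rate, Avg. Session Duration
report = pd.read_csv("landing_pages_by_age.csv")

# If the export formats bounce rate as text like "42.5%", convert it to a number
report["Bounce Rate"] = report["Bounce Rate"].astype(str).str.rstrip("%").astype(float)

older = report[report["Age"].isin(["55-64", "65+"])]
younger = report[~report["Age"].isin(["55-64", "65+"])]

# Pages with the biggest bounce-rate gap between older and younger visitors
comparison = pd.DataFrame({
    "older_bounce": older.groupby("Landing Page")["Bounce Rate"].mean(),
    "younger_bounce": younger.groupby("Landing Page")["Bounce Rate"].mean(),
})
comparison["gap"] = comparison["older_bounce"] - comparison["younger_bounce"]
print(comparison.sort_values("gap", ascending=False).head(10))
```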

Investigate and Fix the Issues 

If you found that older users are behaving differently from the younger users, it's time to investigate by analyzing pages on your website. Check that images and videos have appropriate accessible alternatives, and evaluate the page for colors and font sizes that don't meet WCAG standards. Check the code. If you find WCAG violations, they might be causing older populations to experience trouble with your website. A great tool for quick accessibility checks is WebAIM's WAVE Chrome extension.

Though none of the information you’ll glean is definitive, the knowledge will help you better serve people with disabilities, or make a case within your organization for the importance of web accessibility. 

Does your Agile process look like Waterfall? Story size and WIP limits can help.

SCRUM-fall, waterfall in little pieces - whatever you call it, it infects Agile teams with alarming frequency. Agile-turned-waterfall seems to follow one of two patterns:

  1. Waterfall within the sprint: This happens when research and additional requirements gathering occur first in the sprint. Next, the stories are handed to development, who then move the stories to testers in the final days of the sprint. You may be experiencing this if your testers are idle, then slammed, and you never seem to have enough time to fix what they've found within the sprint's boundaries. You're often pretending things are error-free during your sprint review demos, or you have trouble closing stories because of unresolved test defects. 
     

  2. Waterfall across sprints: You are experiencing this when you do your requirements and wireframes in one sprint, develop them in the next sprint, and test them in a third sprint. You have sprint reviews with no demos because it was a “requirements sprint.” It often makes sense to do research spikes and design ahead of the sprint, but going too far in that direction can cause the team to deliver a large chunk of working, tested features every two to three sprints instead of delivering smaller chunks of working product at the end of every sprint. This also means that you’re getting less customer feedback, you may lose speed to market, and most critically, you may find yourself unable to quickly pivot. 

I've been on SCRUM teams that experienced both of these dysfunctional manifestations of waterfall within Agile. Recently, I learned why this happens, and how to stop it.

Story Size & WIP Limits to the Rescue

First, the team's stories are probably too large. It might not seem like it, but they are. No single story in the sprint should take longer than a day or two to develop, test, and close - all inclusive. Where I work, this means that nothing should be more than 3 points, with most stories at 1 or 2 points. Smaller stories move through the workflow faster, enabling testers to begin their work earlier in the sprint, and giving developers more time to resolve any issues that were found.

To make this even more effective, the team should institute work-in-progress limits (WIP limits) for each team role. This means that you set a limit on the number of open stories for development and test. If you’re at the WIP limit, one story has to move before a new story can open. For example, if development and testing both have WIP limits of 4, and both roles are at their WIP limit, then development must move one story into test before opening story 5, and testing must resolve a story before they can accept the new story from development. This encourages the team to move work through the pipeline, and also reduces time lost due to task-switching between many open stories.

Though writing smaller stories and adhering to WIP limits will be difficult at first, it’s one of the best ways to break teams out of accidental Agile/Waterfall.  

Measuring Task Time During Usability Testing

I design applications that are used all day, every day in a corporate setting. Because of this, I measure efficiency and time when I do usability studies to make sure that we are considering productivity as part of our design process. 

Although actual times gathered from real interactions via an analytics package are more reliable and quantifiable than those gathered in usability testing, they require you to have a lot of users or a live product. When you're in the design stage, you often don't have the ability to gather that kind of data, especially when you're using mockups or prototypes instead of a live application. Being able to gauge the relative times of actions within a process during usability testing can be helpful, and being able to compare the times of two new design options is also valuable. Gathering information about task times early in the design phase can save money and effort down the road. 

HOW TO CONDUCT A TIME STUDY

During a typical usability study, simply collect the times it took to accomplish a task. The best way to do this is to measure time per screen or activity in addition to the duration of the task, so that you'll be able to isolate which step of a process is taking the most time or adding unnecessary seconds. This can be more illuminating from a usability perspective than simply knowing how long something takes.

Make a video screen recording of the session. Pick a trigger event to start and pause timing, such as clicking a link or a button. Gather the times via the timestamp when you replay the video. Don't try to time with a stopwatch during the actual usability test. You can make a screen recording with SnagIt, Camtasia, or Morae, or through any number of other tools.
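
As a simple illustration, here's how I might turn those trigger-event timestamps into per-screen durations once I've pulled them from the recording. The screen names and times below are invented for the example.

```python
# Trigger-event timestamps (in seconds) noted while replaying one participant's video
events = [
    ("Start task", 12),
    ("Search results", 31),
    ("Product details", 58),
    ("Confirmation", 67),
    ("Task complete", 81),
]

# Time spent on each screen is the gap between consecutive trigger events
for (screen, start), (_, end) in zip(events, events[1:]):
    print(f"{screen}: {end - start} seconds")

print(f"Total task time: {events[-1][1] - events[0][1]} seconds")
```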

When comparing two designs for time, test both designs in the same study and use the same participants. This means you'll have a within-subjects study, which produces results with less variation - a good thing if you have a small sample size. To reduce bias, rotate the order of designs so each option is presented first half of the time.  

COMMON QUESTIONS ABOUT TIME STUDIES

Should you count unsuccessful tasks?

Yes and no. If the user fails to complete the task, or the moderator intervenes, exclude it from the time study. If the user heads in the wrong direction, but eventually completes the task, include it.

What if my participant is thinking aloud and goes on a tangent, but otherwise, they completed the task?

I leave "thinking aloud" in and let it average in the results. If the participant stops what they are doing to talk for an extended period of time (usually to ask a question or give an example), I exclude the seconds of discussion. But, be conservative with the amount of time excluded and make sure you've made a note of how long the excluded time was. 

Should you tell participants they are being timed?

I don't. Sometimes I'll say that we're gathering information for benchmarking, but I generally only give them the usual disclaimer about participating in a usability test and being recorded.

How relevant are these results? 

People will ask if times gathered in an unnatural environment like usability testing or a simulation are meaningful. These times are valuable because some information is better than no information. However, it's important to caveat your results with the methodology and the environment in which the information was collected.

REPORTING RESULTS: AVERAGE TASK TIMES WITH CONFIDENCE INTERVALS

Report the confidence interval if you want to guesstimate how long an activity will take: "On average, this took users 33 seconds. With 95% confidence, this will take users between 20 and 46 seconds."

Report the mean if you want to make an observation that one segment of the task took longer than the other during the study. A confidence interval may not be important if your usability results are presented informally to the team, or you're not trying to make a prediction. Consider the following scenario: you notice, based on your timings, that a confirmation page is adding an average of 9 seconds to the task, which end-to-end takes an average of 42 seconds. Does it matter that the confirmation screen may actually take 4-15 seconds? Not really. The value in the observation is whether you think the confirmation page is worth nearly 1/4 of the time spent on the task, and whether there's a better design solution that would increase speed. 

When you're determining average task time, always take the geometric mean of times instead of the arithmetic mean/average (Excel: =GEOMEAN). This is because times are actually ratios (0:34 of 0:60). If the sample size is smaller than 25, report the geometric mean. If the sample size is larger than 25, the median may be a better gauge (Excel: =MEDIAN).

If you're reporting the confidence interval, take the natural log of the values and calculate the confidence interval based on that. This is because time data is almost always positively skewed (not a normal distribution). Pasting your time values into this calculator from Measuring U is much easier than calculating in Excel. 
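
If you'd rather script it than use Excel or the calculator, here's a minimal sketch of the same idea in Python with SciPy. The task times are invented sample data.

```python
import numpy as np
from scipy import stats

# Task times in seconds for one design (hypothetical sample data)
times = np.array([21, 34, 28, 45, 19, 52, 30, 26, 38, 41])

# Geometric mean: average in log space, then transform back
geo_mean = stats.gmean(times)

# 95% confidence interval computed on the natural log of the times,
# then exponentiated back to seconds (handles the positive skew of time data)
log_times = np.log(times)
ci_low, ci_high = stats.t.interval(
    0.95, df=len(times) - 1, loc=np.mean(log_times), scale=stats.sem(log_times)
)

print(f"Geometric mean: {geo_mean:.1f}s, 95% CI: {np.exp(ci_low):.1f}-{np.exp(ci_high):.1f}s")
```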

REPORTING RESULTS: CALCULATING THE DIFFERENCE BETWEEN TWO DESIGNS

For a within-subjects study, you'll compare the mean from Design A to the mean of Design B. You'll use matched pairs, so if a participant completed the task for Design A, but did not complete the task for Design B, you will exclude both of her times from the results.

There are some issues with this, though. First, I've found it very difficult to actually get a decent p-value, so my comparison is rarely statistically significant. I suspect this is because my sample size is quite small (<15). I also have trouble with the confidence interval. Often my timings are very short, so I will have a situation where my confidence interval takes me into negative time values, which, though seemingly magical, calls my results into question.  

Here's the process (a code sketch follows the steps): 

  1. Find the difference between Design A and B for each pair. (A-B=difference)

  2. Take the average of the differences (Excel: =AVERAGE).

  3. Calculate the standard deviation of the differences (Excel: =STDEV).

  4. Calculate the test statistic.
    t = average of the difference / (standard deviation / square root of the sample size)

  5. Look up the p-value to test for statistical significance.
    Excel: =TDIST(ABS(test statistic), sample size-1, 2). If the result is less than 0.05 (or your chosen alpha), you have statistical significance.

  6. Calculate the confidence interval:

    1. Confidence interval = absolute value of the mean of the differences +/- critical value × (standard deviation / square root of sample size).

    2. Excel critical value at 95% confidence: =TINV(0.05, sample size - 1)
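
For anyone who prefers code to Excel, here's a minimal sketch of the same comparison in Python with SciPy - the matched-pair times below are invented, and in practice my samples are usually smaller than this.

```python
import numpy as np
from scipy import stats

# Matched-pair task times (seconds) for the same participants on two designs
# (hypothetical data - drop any participant who failed either task first)
design_a = np.array([42, 35, 51, 29, 48, 39, 44, 33])
design_b = np.array([36, 31, 45, 30, 40, 37, 38, 29])

# Paired t-test on the differences (covers steps 1-5 above)
t_stat, p_value = stats.ttest_rel(design_a, design_b)

# 95% confidence interval around the mean difference (step 6)
diff = design_a - design_b
ci_low, ci_high = stats.t.interval(
    0.95, df=len(diff) - 1, loc=diff.mean(), scale=stats.sem(diff)
)

print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
print(f"Mean difference: {diff.mean():.1f}s (95% CI {ci_low:.1f} to {ci_high:.1f})")
```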

REFERENCES

Both of these books are great resources. The Tullis/Albert book provides a good overview and is a little better at explaining how to use Excel. The Sauro/Lewis book gives many examples and step-by-step solutions, which I found more user-friendly. 

Measuring the User Experience by Tom Tullis and Bill Albert ©2008

Quantifying the User Experience by Jeff Sauro and James R. Lewis ©2012

Interested in more posts about usability testing? Click here.

Measuring Efficiency during Usability Testing

Recently, most of my work has been developing enterprise software and web applications. Because I'm building applications that employees spend their whole workday using, productivity and efficiency matters. This information can be uncovered during usability testing. 

The simplest way to capture the amount of effort during usability testing is to keep track of the actions or steps necessary to complete a task, usually by counting page views or clicks. Whichever you count should be meaningful and easily countable, either by an automated tool or video playback. 

There are two ways to examine the data - comparing one app to another, and comparing the users' average path to the optimal path.

Compare One App to Another

Use this when you're comparing how many steps it took in the new application vs. the old application. Here, you'll compare the "optimal paths" of both systems and see which one required fewer steps. This doesn't require usability test participants and can be gathered at any time. It can be helpful to present this information in conjunction with a comparison time study, as it may become obvious that App A was faster than App B because it had fewer page views.

Compare the Users' Average Path to the Optimal Path

To do this, you'll compare the average click count or page views per task of all of the users in your usability study to the optimal path for the system. The optimal path should be the expected "best" path for the task. 

More than simply reporting efficiency, comparing average performance to optimal performance can uncover usability issues. For example, is there a pattern of users deviating from the "optimal path" scenario in a specific spot? Was part of the process unaccounted for in the design, or could the application benefit from more informed design choices?

Here's the process I use to calculate efficiency against the optimal path benchmark (a code sketch follows the steps). 

  1. Count the clicks or page views for the optimal path.

  2. Count the clicks or page views for a task for each user.

  3. Exclude failed tasks.

  4. Take the average of the users' values (Excel: =AVERAGE or Data > Data Analysis* > Descriptive Statistics).

  5. Calculate the confidence interval of the users' values (Excel: Data > Data Analysis* > Descriptive Statistics).

  6. Compare to the optimal path benchmark and draw conclusions.

*Excel for Mac does not include the Data Analysis package. I use StatPlus instead. 
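
Here's a small sketch of steps 4 through 6 in Python, in case you'd rather script it than use Excel. The click counts and optimal path value are made-up sample data.

```python
import numpy as np
from scipy import stats

# Clicks per task for each participant who completed it (hypothetical data)
user_clicks = np.array([9, 11, 8, 14, 10, 9, 12, 8])
optimal_clicks = 7  # click count for the expected "best" path

mean_clicks = user_clicks.mean()
ci_low, ci_high = stats.t.interval(
    0.95, df=len(user_clicks) - 1, loc=mean_clicks, scale=stats.sem(user_clicks)
)

# Compare the average (and its confidence interval) to the optimal path benchmark
print(f"Average: {mean_clicks:.1f} clicks (95% CI {ci_low:.1f}-{ci_high:.1f}) "
      f"vs. optimal path of {optimal_clicks} clicks "
      f"({mean_clicks / optimal_clicks:.2f}x the benchmark)")
```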

Reference

Measuring the User Experience by Tom Tullis and Bill Albert ©2008

Read more posts about usability testing.

Inclusive Usability Testing Best Practices

Are you capturing feedback from all of your users with usability testing, or are you leaving out an important segment of the population - people with disabilities? You might think of web users with disabilities as having mobility issues, colorblindness, or severe vision or hearing impairments. A disability doesn't need to be severe or require technology adaptations. For example, aging adults often begin to display low-level or early-stage disabilities (poor vision, slower motor function) and may already be a significant user group. And, disabilities can affect anyone at any time - consider the person with a broken arm, or the employee working through a headache. 

One of the best ways to make sure an application or website is easy for people with disabilities to use is to include them in usability testing. It is often not necessary to plan and conduct special "accessibility" usability tests, as people with disabilities perform the same tasks as those without disabilities.

Usability testing isn't the only way to include people with disabilities in the UX process. People with disabilities should also be included in user research, represented in personas, and invited to be beta testers.  

Recruiting Participants for an Inclusive Usability Test

If you are iteratively usability testing a product, consider including one or two participants with a disability in each round. Over time, you will have generated feedback from people with a wide range of experiences: vision problems, difficulty hearing, motor issues, and cognitive impairments.

Where to find participants:

  • Post an open invitation asking for participants (Craigslist, Facebook, Twitter, community boards)

  • Charitable or support organizations

  • Senior citizen centers

  • University accessibility accommodation offices

  • Use a third-party recruiter

Use screening questions to determine if potential participants use a screen reader, a magnifier, a special mouse or keyboard, or any other adaptations.

Testing with Assistive Technologies or Special Adaptations

People who use assistive technologies or hardware adaptations may have highly personalized setups or expensive equipment. So, it may be best to conduct sessions remotely using the participant's own computer, or as a field study. If you take this approach, note what technology or adaptations are being used.

Should you usability test a prototype or the actual product?

You can test a prototype when you know the population does not use assistive technology or certain adaptations like keyboard-only input. People with vision, hearing, mobility, or cognitive disabilities will still be able to identify usability issues concerning readability, function, and content.

However, if you intend to test participants who use assistive technologies or who only use the keyboard to navigate, you will want to use the real application or website so that any errors related to accessibility actually belong to the product, and not the prototype.

Also, make sure the tasks you are testing are navigable via a screen reader or a keyboard prior to the test. It is not a good use of time to discover that the participant can't make it past the first screen because the product was never coded to be accessible in the first place.

Draw Conclusions with Caution

Be careful about generalizing issues that participants with disabilities found.  No two people are alike, and, as with any usability test participant, the range of experiences is very diverse. Finding any usability issue in a product is likely to impact other users, not just those with a disability.

Additional Resources

WebAIM: Rocket Surgery and Accessibility User Testing

W3C: Involving Users in Evaluating Web Accessibility

IBM Human Ability and Accessibility Center: Conducting User Evaluations with People with Disabilities

A Web for Everyone by Sarah Horton & Whitney Quesenbery

Web Content Accessibility Tips for Writers

Usability Testing Hack: Speed Through Videos in Half the Time

There are two reactions to this usability testing hack: 

  1. Doesn’t everybody do it that way? OR, 

  2. I can’t believe the hours I’ve squandered! 

Ready to find out which side you’re on?

Watch your usability testing videos at a playback speed of 1.5 or 2. An hour-long video will only take 30 to 45 minutes to watch. 

When I usability test, I always record and re-watch each session to make sure that I see all the behaviors that were invisible to me at the time, as well as to back up my own notes and assumptions. (Ever finish the sessions feeling like "everybody" missed something, only to discover that fewer than half actually did? This is why I re-watch.) If you're doing unmoderated remote usability testing through usertesting.com (or similar), you're also faced with hours of video to watch. Re-watching, though valuable to the process, makes usability testing more expensive for the client, and also lengthens your turnaround time for reporting results. It's in everyone's best interest to recover some of this time by adjusting the video's speed.

How to Adjust Playback Speed

Nearly every video player has a playback speed control. On a Mac, I like the VLC video player because it’s not obvious how to change playback speed in iTunes or Quicktime (or maybe it’s not possible anymore). If you’re using Windows Media Player on a PC, you can find playback speed if you right-click the video and click on “Enhancements” (I wish I was making this up). 

A speed somewhere between 1.5 and 2 works well for me to be able to watch and take notes. It’s even possible to grab user quotes at this speed. If I’m grabbing timestamps for a time study, and I have already collected my general usability findings, I’ll set the video to play as fast as possible (8-16x) and only look for the clicks that correspond to what I’m timing.

Once you know about this hack, you’ll find yourself watching YouTube at 1.5, speeding through podcasts, and even taking online classes at warp speed - there are so many applications! 

Interested in more posts about usability testing? Read on.

Ask "Why?" for Stronger Requirements

In a recent post, I listed a few ways that you can accidentally end up with a bad user experience. But there's another way to add bloat to your design: failing to ask "why."

The other day, a business analyst asked me to add a checkbox to one of my screens so that an administrative user could indicate, once every 6 months, that someone reviewed the screen for errors. Yet, we already have proof that the screen's data is being maintained because it's in the change log.

On the surface, adding a checkbox is an easy update to make. But, we don't know why we're being asked to make this update, or how this feature is going to be valuable to users. Simply put, our customer is requesting a solution without indicating what problem it solves. Though it might be a good solution, how do we know it is the BEST solution?

Why ask why?

The next step isn't to drop the checkbox onto a wireframe, write up a few requirements, and head home for the weekend. The next step is to call up the client and ask "why."

  • Why does the client need this feature?
  • What information is the client hoping to collect via this feature?
  • How does the client plan to use this information?
  • Does the client know about the information we're already collecting?  

Once all these questions have been answered (and maybe a few more), we'll know how to proceed.

Needs vs. design

Good requirements describe a user need, but the need is never a "checkbox." The need is bigger. In my example's case, maybe the need is reporting for an audit, or maybe that checkbox is really supposed to notify someone of something. But we'll never know unless we stop and ask why, and solve the real problem.    

How Bad UX Happens

And now, a note about imperfect user experiences and how they happen. 

The late-breaking requirement

The UX is perfect, things are going great, and you just finished usability testing. Then the client calls and asks you to add something. It's not a problem, it's just a small tweak. Easy! 

A few weeks later, you're looking at your screens and wondering how things went so wrong.

Daniel Kahneman would blame it on a lazy controller. Basically, your instinctual mind makes a snap decision based on previous experience and your analytical mind blindly follows. Most of the time, this works well. Other times, not so much. Maybe you were rushed, or didn't quite understand the full impact of the change. Either way, once you look at your design change with a fresh mind, you can see what you would've done differently.

The early design that overstays its welcome

The other good time to unwittingly make a mistake is early on. You make a solid choice based on the information that you have at the time. But as you refine the design and learn more about the project, this "solid choice" becomes irrelevant or even totally unnecessary.

You, and probably the larger team, have gotten used to seeing it, and you have a type of "banner blindness" to your own work. And maybe it's so irrelevant that usability testing won't uncover it, because it literally is useless and unremarkable.

Much later, you or someone on your team notices it and you see it for what it is - clutter.

How to fix it

These types of unintentional mistakes get shipped to customers and handed off to clients all the time. Sometimes, you only notice them after your engagement has ended. If you're still working on the project, offer to clean it up as part of a larger group of updates - the worst they can do is turn you down. There is rarely a good argument against continuous improvement.

If your project is still underway - just suck it up and admit that a part of your design needs rework! Maybe your team will have a better idea, or maybe they will remember something about the requirements that you've already forgotten. Most people understand that it is hard to self-critique, and I think they'll appreciate you more for your willingness to make a change.

How to keep it from happening

Don't work in a silo! Ask for opinions from other smart people as you're designing, whether they are on your team or not.

For those small late-breaking changes, it's tempting to do something quickly, call it done, and post a file. Instead, make the change and let it rest a few hours. Then you'll be able to review it with a more critical eye.

Optimizing Calls to Action

This morning I was reading a post on the Travel 2.0 Blog that hit home. Troy Thompson wrote:

“Recently, I was asked to critique changes to an advertising campaign from a well-known tourism destination. While the creative was fine…amazingly not touting anything and everything…the call to action seemed, cluttered.

Perhaps that was because it featured not only the traditional website address and phone number, but also icons for Facebook, Twitter, YouTube, a blog (disguised as an RSS icon that few will understand) plus a QR code.”

Seven calls to action in one print piece! Thompson points out that watering down a strong call to action with six “extras” doesn’t provide more choice, it muddies the water for the user and scrambles your metrics.

This lesson isn’t just for print. On websites, there’s a tendency to offer everything to everyone at all times. Take the typical higher education website, for example. There’s usually semi-permanent placement of calls to action for applying, visiting campus and requesting information. There may also be callouts to promote social networks. In some sections (or everywhere), the school wants you to “give now.” The alumni section wants you to update your info or join an online community. And let’s not forget the ubiquitous share buttons, begging you to Like, Tweet or +1 every page you visit.

What action do you want your visitors to take? You can make a case for everything, but like the Travel 2.0 post said, seven calls to action is probably too many. So how do you manage your calls to action?

How to Create Focused Calls to Action

It’s simple: let the context and content of the page guide you. Here are three ways to get started.

#1: Target the context for your call to action

In our higher education example, apply, visit and request information callouts should be seen only in prospect sections. Admission tools shouldn’t bubble over into the alumni or current student-focused content. Likewise, you don’t want a prospective student to be asked to donate to your capital campaign. It’s easy to design permanent calls to action that cascade across every single page of your website, but if your calls to action aren’t targeted, they are visual clutter.

#2: Make the connection between information and action

The second trick to focusing your calls to action is to put them close to the body of your content to indicate a relationship between the copy and the action. If a user is on a page describing first-year housing options, a contextual link to schedule a campus visit or view a virtual tour is more in line with what he or she might want to do next, while also fulfilling your own conversion goals. A "Visit Campus" button designed into the header of your website won't have the same contextual relevance as a callout nestled in the copy.

#3: Think "Mobile First"

Finally, think about your mobile site. If you had a limited screen size, what calls to action would you devote space to? Which ones would you cut?

Ready to tackle your calls to action? It’ll be worth it for your visitors—and your conversions.

Originally published as Context and Content Are King: 3 Ways to Focus Your Calls to Action over on the Elliance blog. 

Is Your Website Content Accessible and Universally Usable?

We all start our website projects with the best intentions: most of us plan to adhere to Section 508 and W3C Web Content Accessibility Guidelines. We want to support all website visitors, whether they have a disability, are new to the language, new to the web (born daily!), or are merely aging gracefully. And improving accessibility helps everybody’s usability (and often, your site’s SEO)—so, we implement accessibility guidelines.

But what happens to your accessible website after launch?

While accessibility practices built into the website's foundational code remain relatively evergreen, the new content that you add to your website may not adhere to accessibility best practices. Perhaps content authors forget, lack proper training or oversight, or don't realize its importance. Whatever the reason, a segment of your visitors may suffer because of an inconsistent—and downright frustrating—website experience.

Good news: most content-related accessibility issues are easily repaired, and once your content team is aware of them, ongoing implementation will become a habit. Your content team should build it into their workflow, because every copywriter would surely prefer handcrafting the accessibility language for their pages, images and videos, rather than leaving it up to a developer or CMS manager to correct later.

Content Accessibility Best Practices

The following are some tips for writing accessible content that helps all users, derived from Section 508 standards, W3C guidelines and plain old good sense.

Copy & Links

  • Write in clear, simple language.

  • Include descriptive page titles and headings.

  • Link text should be meaningful and provide context—no “click here” or “learn more” link text.

  • Don’t link characters that you would not want to hear spoken.

  • Avoid jargon.

  • Write out an acronym or abbreviation for its first occurrence on the page. Consider using the ABBR and ACRONYM tags to tell screen readers how to pronounce them (M-o-M-A versus moe-mah).

  • Be aware of common misspellings and zero-result queries in your internal site search logs. What can you do to help poor spellers or people searching for unsupported synonyms?

Images, Video & Audio

  • Provide “alt” and/or “longdesc” attributes to describe the content and purpose of images, graphics, video and audio.

  • Closed-caption videos.

  • Transcribe audio.

Tables

  • Use tables for tabular information, not layout.

  • Remember to use the <th> tag to identify column and row headings for tables.

  • Write meaningful content for the “summary” attribute of your table.
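
If you want to spot-check a page for a few of these items before handing it back to the content team, a small script can flag the obvious misses. This is only a rough sketch using BeautifulSoup - the URL is a placeholder, and no automated check replaces a human review.

```python
from urllib.request import urlopen
from bs4 import BeautifulSoup  # pip install beautifulsoup4

# Hypothetical page to audit
soup = BeautifulSoup(urlopen("https://www.example.com/admissions/"), "html.parser")

# Images with no alt attribute (an empty alt can be fine for decorative images - review by hand)
for img in soup.find_all("img"):
    if img.get("alt") is None:
        print("Image missing alt text:", img.get("src"))

# Data tables without <th> row/column headers
for table in soup.find_all("table"):
    if table.find("th") is None:
        print("Table without <th> headers found")

# Vague link text that gives screen reader users no context
for link in soup.find_all("a"):
    if link.get_text(strip=True).lower() in ("click here", "learn more", "read more"):
        print("Vague link text:", link.get("href"))
```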

Resources

Section 508: Web-based intranet and internet information and applications

W3C Web Content Accessibility Standards

Beyond ALT Text: Making the Web Easy to Use for Users With Disabilities

Accessible Design for Users with Disabilities

It’s Time to Embrace Accessibility SEO

This article was originally published on the Elliance Blog.

Surveys and Interviews: Getting to Know Your Mobile Visitors

Let’s look at how surveys and interviews can help you glean insights about your mobile presence (or lack of) from your site’s visitors.

Why surveys and interviews?

While site analytics gather quantitative behavioral data, online surveys and interviews can also collect more qualitative data, like opinions and self-reported preferences. Though self-reported anecdotes should always be taken with a grain of salt, survey and interview responses can be very helpful for prioritizing ideas, uncovering new insights, and giving a voice to your site’s visitors.

Surveys

Conducting an online survey is a great way to gain insights about your visitors because you can easily collect data and opinions from a large group of people. Surveys can be structured to gather both qualitative and quantitative data. For example, say that you wondered how many of your site visitors were interested in a mobile-optimized experience. You could do a quick pop-up survey on your site to gauge interest. From that, you could say with certainty that a certain percentage of your audience wanted (or didn’t want) a mobile version.

Other insights surveys can provide include:

  • Ranking and prioritizing of features, content and tasks

  • Rating or commenting on ideas and concepts

  • Suggesting ideas or features not captured by the survey choices

  • Self-reported behaviors like how often they use their mobile device to view your site, and where they are when they do (at home, on the go, etc.)

As with any research method, there are drawbacks to surveys. Survey questions are easy to bias with your own leanings, and self-reported behavior is not always actual behavior. Plus, your survey participant may feel like he or she has to be polite or agreeable, even to a survey on a computer. (No joke: researchers at Stanford wrote a book about people being polite to computers.)

Interviews

It’s always great to talk to your site visitors one-on-one or in a focus group because it enables you to probe more deeply into the “why” and “how.” Plus, interviews and focus groups can be conducted face-to-face or over the phone; the only equipment you need is a pen and paper for your notes.

When conducting an interview, you’ll be able to dig into the details. Consider asking:

  • When was the last time you visited X site on your mobile device? What were you looking for? Did you find it? How easy or difficult was it?

  • Can you think of any other times you used X site on your smartphone or tablet? Ask questions about the experience.

  • What apps do you use the most? Ask the interviewee to look at her device and list her most used apps. Ask what she likes about them.

  • Start a discussion about how she would like to use X website on her mobile device. If she doesn’t want to use your site on her mobile device, why?

You’ll collect great information and perspectives that can only help you understand who your mobile visitor is, and what their motivations are.

Naturally, there are drawbacks to interviews. The politeness problem crops up again, and so does error-prone self-reported behavior. You also cannot make generalizations about your visitors based on a few interviews; the number of participants is simply too small, and interviews are purely qualitative. Aside from these drawbacks, interviews are valuable because they humanize your site visitor. Through conducting a handful of interviews, you’ll be able to draw a picture of your actual visitors in greater detail than data alone can provide.

If you take the time to learn a little bit more about your mobile visitors, you’ll be able to better understand your customer. With the knowledge gained from easy research techniques like surveys and interviews, you’ll have the basis for a great mobile experience that your visitors actually want to use.

This article was originally published on the Elliance Blog.

Julie Young
Review: Mindfire by Scott Berkun

If you know me (in person), I've probably emailed you some of Scott Berkun's blog posts. He recently published a collection of "best of" essays in book form, Mindfire, which is an outstanding read. 

In one of my all-time favorite essays, "The Cult of Busy," Berkun calls out the "busy" people. First, there are the people who only "look" like they are busy. In the past, I had a coworker like this: someone who was so stressed at work that she couldn't take on a single extra thing, who couldn't even help a peer out of a jam, yet who somehow acquired an amazing number of Farmville garden implements from 9 to 5 every day.

These "fake busy" folks aside, Berkun makes a rather insightful point about the other "busy" people: the people who are overbooked or who are not using their time wisely.

The phrase “I don’t have time for” should never be said. We all get the same amount of time every day. If you can’t do something, it’s not about the quantity of time. It’s really about how important the task is to you.

How true this is! Yes, you may be too busy to help because this matter doesn't require your attention, or because you can't say "yes" to every party invite (you popular gal, you). But Berkun points out that if you are honest with yourself, you are busy because you are either over-involved, or you are failing at effectively doing whatever it is you do.

I suspect this really means that you are so busy being busy (or pretending to be busy) that you end up being the person who isn't asked for help because you aren't that helpful, who isn't as valuable because you don't share, and who can't see the forest for the trees. So, you miss out on great opportunities and quality working relationships...all because you have a priority problem. An effectiveness issue. You also miss out on free time, because you are busy being busy. You work too much. You become sad. But heck, you can prove you worked a lot of hours! But what is that really worth? 

This essay in particular really got me thinking about how I respond to other people's requests for my time, and frankly, how I use my time. Do I prove my worth by hours worked, rather than projects accomplished? Maybe. Do I become a martyr for time? How can I fix myself so that I use my time more wisely? All of this out of one short, easy-to-read essay!

So, this is why you should read the whole book, and the whole essay (lucky for you, it's included in the preview!). Don't take my word for it. ;) 

Julie Young
Your first idea is not your best idea

The following scenario happens to me on repeat:

When starting a project, I make a quick sketch, see a “perfect” example, or jot down some idea that I think is just the tops.

I misplace it.

Then I become convinced that it was the key to my genius, and I cannot move forward solidly without it.

Then I find it, and it is complete crap. I’m finally free to move forward toward something smarter.

Does this happen to you?

Obviously a lot of good thinking happens between that first moment of conceptualization and the process of planning a feasible, delightful idea. And not all first ideas are bad ideas. Sometimes they are just raw and require cooking.

But in a way, that bad first idea is good. It’s good because it exposes that there is a problem begging for a solution. It’s a note that there is something there to work with, something to improve.

First ideas can also be very, very dangerous. What happens when bad “first ideas” become idealized rather than rejected? Instead, repeated as mantras? What happens when your internal criticism fails you, or you don’t have someone to help you workshop your ideas? Are these the projects you aren’t proud of? 

Julie Young
How to design a highly usable form with the help of a good book

One of the most useful books I’ve read in the past few years is Web Form Design by Luke Wroblewski. The reason I like this book so much is simple: when I’m working on a big form project or reviewing someone else’s work, it’s very easy to grab the book and flip through the best practices at the end of each chapter, using it as a rubric, as well as a source of much-needed inspiration. 

Over time, though, I realized that rather than dig out my ebook, I could just create a checklist to prompt me on some pivotal issues. The actual explanation of what you should do and why, of course, is in Luke W’s book. Which you should buy. Right now. I’m not kidding. Anyway, his book has examples of high-quality patterns and the research to back them up, so you don’t look foolish when you’re questioned about why you are recommending a certain label alignment over another, or why you chose those smart defaults. So, to the checklist…

High-Level Form Organization

Does the form need a start page?

  • The form needs room to explain need, purpose, and incentive, if any.

  • The form requires specific information that should be compiled or found before starting. (Example, tax returns and bank statements)

  • The form requires an investment of time. Warn the user. (Example, long surveys)

Should the form be broken into multiple pages?

  • Does the form contain a large number of question groups that are relatively independent of each other?

  • Can you ask optional questions after a form is completed?

  • Is the form a good candidate for progress indicators?

Have you considered gradual engagement for sign up and registration?

Questions & Form Inputs

  • Have you grouped questions/fields by types (contact, billing info, etc.)?

  • Have you eliminated unnecessary questions/form fields?

  • Have you clearly identified optional or required fields, using a label for whichever is in the minority?

  • Are labels in natural/plain language and consistently formatted?

  • Can you top-align labels? If not, can you right-align them? On mobile, have you top-aligned labels? (See the markup sketch after this list.)

  • Do the field lengths provide meaningful affordances? (Example, a ZIP code field for the U.S. should be 5 characters long, not 27.)

  • Have you set smart defaults?

  • Are there any areas of the form that could be enhanced by hiding irrelevant form controls/using progressive disclosure?
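To make a few of these points concrete, here is a minimal sketch of a field group with top-aligned labels, an "optional" marker on the minority field, a smart default, and a field length that matches the expected input. The field names and country list are hypothetical, not a prescription from the book.

    <!-- Top-aligned labels read as a single column and scan quickly -->
    <p>
      <label for="name">Full name</label><br>
      <input type="text" id="name" name="name" size="30">
    </p>

    <!-- The optional field is labeled because optional fields are in the minority here -->
    <p>
      <label for="company">Company (optional)</label><br>
      <input type="text" id="company" name="company" size="30">
    </p>

    <!-- A smart default preselects the most common answer -->
    <p>
      <label for="country">Country</label><br>
      <select id="country" name="country">
        <option selected>United States</option>
        <option>Canada</option>
      </select>
    </p>

    <!-- The field length hints at the expected input: five characters for a U.S. ZIP code -->
    <p>
      <label for="zip">ZIP code</label><br>
      <input type="text" id="zip" name="zip" size="5" maxlength="5">
    </p>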

Buttons

  • Are your primary actions, such as save or register buttons, aligned with input fields?

  • Can you remove secondary actions like clear and reset?

  • If you use secondary actions, are they visually distinct from the primary action? (See the sketch after this list.)

  • Can users undo secondary actions if accidentally clicked?
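As a rough sketch of the button guidance: the primary action below is a prominent submit button aligned with the input fields, while the secondary action is demoted to a plain link so it is harder to hit by accident. The class names are made up for illustration; the styling itself would live in your CSS.

    <!-- Primary action stands out; the secondary action is quieter and clearly different -->
    <p class="form-actions">
      <input type="submit" class="button-primary" value="Create account">
      <a href="/account/" class="link-secondary">Cancel</a>
    </p>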

Help Text, Error Messages, and Success Pages

  • Do you need that help text? Is it clear and concise? Any input limits or restrictions? (Example, password must contain XYZ.)

  • Are error messages clearly communicated by both a top-level message and a contextual message near the affected field? (See the sketch after this list.)

  • Have you considered in-line validation for questions/fields with high error rates or specific formatting requirements?

  • Have you clearly communicated the successful completion of the form? And it's not a dead end, right?
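Finally, a sketch of how a top-level error message can pair with a contextual message beside the affected field. The wording, IDs, and class names are illustrative; the aria-describedby attribute simply associates the hint with the input for assistive technology.

    <!-- Top-level message: something went wrong, and here is where -->
    <div class="error-summary" role="alert">
      <p>Please correct 1 error before continuing:</p>
      <ul>
        <li><a href="#email">Enter a valid email address</a></li>
      </ul>
    </div>

    <!-- Contextual message sits next to the field it describes -->
    <p class="field-error">
      <label for="email">Email address</label><br>
      <input type="email" id="email" name="email" value="jane@example" aria-describedby="email-error">
      <span id="email-error">Enter a valid email address, like name@example.com.</span>
    </p>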