Prototypes are an effective method for incorporating early and frequent user feedback into the design process.
Even low-fidelity prototypes have been found to be good predictors of the usability problems and ease perceptions uncovered with fully functioning products.
We test client prototypes just about every week here at MeasuringU using our research platform, MUIQ. They range from prototypes for major consumer brands to internal-facing IT apps.
At one time, all high-fidelity prototypes seemed to come from Axure. Over the last couple of years we’ve seen a complete shift to InVision. Regardless of the prototyping solution you use, here are six recommendations we’ve found helpful for teams looking to evaluate prototypes with users.
- Plan for changes. Be sure you have someone with the right knowledge and skills to make changes or fix problems with your prototype. In our experience, the people with the skills to create clickable prototypes are in high demand. Be sure someone who can change both the look and the functionality of the prototype is available before and during evaluation.
- Use caution when comparing prototypes with live sites. Organizations often want to know whether their proposed designs are more effective than their existing websites or products, and running a head-to-head comparison is a natural step. However, by definition, fully functioning websites or apps offer a more complete experience (full content, links, logging in, etc.) than even high-fidelity prototypes. Consequently, we’ve found participants generally rate prototypes lower on perception metrics (including ease and functionality) than their fully functioning counterparts. The best way to minimize this confounding effect is to replicate the germane parts of the full website or product in the prototype. This takes more time but can be worth it when the design differences are subtle.
- Let users know it’s not fully functioning. Prototyping tools can do such a good job of creating a realistic-looking experience that participants may think it’s the real thing. When participants encounter non-functional areas or dead-end links, they get frustrated quickly and often react unfavorably, even though the proposed design may ultimately offer a better experience. We like to set expectations ahead of time: in an unmoderated study, include a simple message explaining that the prototype isn’t fully functioning; in a moderated test, tell participants directly.
- Have a confirmation message on dead ends. Clickable hotspots in prototypes help emulate the full product experience, but you often can’t fully build out the content or features behind every link. Reaching the end of the content can be jarring even when users were told the prototype wasn’t fully functioning, especially in shallow (one- or two-page) prototypes. We recommend including a simple message like the one in the figure below that lets participants know there’s no additional content. The message should also make it clear that they can go back and search elsewhere.
- Code your links with unique URLs. Even in a shallow prototype with only one or two pages, use a unique URL for each key call to action, even when the destination is the same. This lets you easily track where users click, whether in a log file or in a platform like our MUIQ. For example, while all or most links may end at your end-page message (endlink.html), add a hash fragment or other indicator so you know whether users clicked the header or the signup button (e.g., endlink.html#header and endlink.html#button).
- Be sure to turn off comments and meta-navigation. Products like InVision are great for emulating interactions, and they also include features that make it easy to get input from other design stakeholders (such as adding comments or viewing all pages at once). That functionality is not ideal when presenting to participants, who may inadvertently open comment mode, reveal visible hotspots, or switch to the view that shows all pages at once, making the experience less authentic (see the figure below). While such miscues can be corrected in a moderated session, in an unmoderated setup you’ll want to disable these functions (which may not be straightforward) to keep the experience as authentic as possible.
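To make the unique-URL recommendation concrete, here is a minimal Python sketch that tallies clicks by URL fragment from a list of logged URLs. The log format and URLs are hypothetical examples, not output from any particular platform; tools like MUIQ capture and report this kind of click data for you.

```python
from collections import Counter
from urllib.parse import urlparse

def tally_clicks(logged_urls):
    """Count clicks per call to action using the URL fragment
    (e.g., endlink.html#header vs. endlink.html#button)."""
    counts = Counter()
    for url in logged_urls:
        parsed = urlparse(url)
        # Fall back to the bare path when no fragment was recorded.
        label = parsed.fragment or parsed.path
        counts[label] += 1
    return counts

# Hypothetical click log from one study session:
log = [
    "https://proto.example.com/endlink.html#header",
    "https://proto.example.com/endlink.html#button",
    "https://proto.example.com/endlink.html#header",
]
print(tally_clicks(log))  # Counter({'header': 2, 'button': 1})
```

Because every call to action carries its own fragment, the tally tells you which element drove users to the dead-end page, even though all three clicks landed on the same endlink.html.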