Maximum Difference Scaling

Research shows that humans are much better at judging items at extremes than at discriminating among items of middling importance or preference [1]. Because Maximum Difference Scaling (also known as MaxDiff or best-worst scaling) asks respondents to choose among items rather than express strength of preference, researchers get a finer level of discrimination among attributes, and there is no opportunity for scale-use bias. MaxDiff is a statistical method invented by Jordan Louviere in 1987 while on the faculty at the University of Alberta [2]. MaxDiff is rooted in psychology, and like other trade-off analyses it derives utilities for attributes, which can be used to design optimal products, to segment respondents into groups with similar preference structures, or to prioritize strategic product goals.

How Does It Work?

Let's say you are a home builder and want to find out which attributes are most (and least) important to potential homebuyers.

You could go the traditional route and ask several Likert-style rating questions, such as "on a scale of 1 to 5, how important are [upgraded appliances] to you?" This is typically a sound approach, but when you have many attributes to ask about, it can become difficult to disentangle the relative importance of each. In other words, you may end up with similar mean ratings for many attributes, leaving you feeling like you are back at square one. MaxDiff's forced-choice nature can help uncover a clear winner.

Using our homebuilder example, a sample MaxDiff question is shown below:

[Figure: a sample MaxDiff task showing four home attributes, each with a "Most Appealing" and a "Least Appealing" selection]
Respondents would select one attribute as most appealing and one as least appealing and then get another grouping of four attributes. The groupings of attributes are referred to as "tasks." The number of tasks the respondent performs is a function of the number of attributes being tested. As a general rule of thumb, 0.75 times the number of attributes is sufficient to get satisfactory discrimination (e.g., 20 attributes × 0.75 = 15 groupings or "tasks"). Thus, a typical analysis includes 15-16 sets of questions like the above example.

The respondent typically sees each attribute at least three times, and the tasks are designed with the goal that each item appears an equal number of times and that each item appears with every other item an equal number of times. In addition, designs are chosen such that each item appears in each position in the set an equal number of times [3].
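To make the design goal concrete, here is a minimal Python sketch that builds tasks so each attribute appears a fixed number of times with no repeats within a task. This is a simplified randomized design for illustration only; real MaxDiff software uses optimized designs that also balance which attributes appear together and their positions within each task. The attribute names are hypothetical examples.

```python
import random

def generate_tasks(attributes, items_per_task=4, appearances=3, seed=0):
    """Build MaxDiff tasks so each attribute appears `appearances` times.

    Simplified approach: each "round" shuffles the full attribute list and
    chunks it into tasks, so no task repeats an attribute and every
    attribute appears exactly once per round.
    """
    rng = random.Random(seed)
    # Requires the attribute count to divide evenly into tasks.
    assert len(attributes) % items_per_task == 0
    tasks = []
    for _ in range(appearances):
        order = attributes[:]
        rng.shuffle(order)
        tasks += [order[i:i + items_per_task]
                  for i in range(0, len(order), items_per_task)]
    return tasks

# Hypothetical attribute list for the homebuilder example.
attributes = ["Upgraded appliances", "Upgraded cabinetry", "Hardwood floors",
              "Walk-in closets", "Granite countertops", "Energy-efficient windows",
              "Finished basement", "Three-car garage"]
tasks = generate_tasks(attributes)  # 8 attributes x 3 appearances / 4 per task = 6 tasks
```

Note that 6 tasks for 8 attributes matches the 0.75 rule of thumb from above (8 × 0.75 = 6).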

Once the data are collected, the relative importance of each attribute is calculated using advanced analytics software. The software produces "utility functions," or scores, for each attribute. In addition to utility scores, raw counts can be generated, showing the total number of times each attribute was selected as best and as worst.
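The raw-count tally can be sketched in a few lines. The response format below (one best pick and one worst pick per task) and the score definition (times chosen best minus times chosen worst) are illustrative assumptions, not the output format of any particular software package.

```python
from collections import Counter

def count_scores(responses):
    """Tally best/worst picks across tasks.

    `responses` is a list of (best, worst) pairs, one per completed task;
    the score is the count of "best" picks minus the count of "worst" picks.
    """
    best = Counter(b for b, _ in responses)
    worst = Counter(w for _, w in responses)
    items = set(best) | set(worst)
    return {item: best[item] - worst[item] for item in items}

# Hypothetical responses from three tasks.
responses = [("Hardwood floors", "Three-car garage"),
             ("Hardwood floors", "Walk-in closets"),
             ("Granite countertops", "Three-car garage")]
scores = count_scores(responses)
# "Hardwood floors" scores 2 (picked best twice, never worst);
# "Three-car garage" scores -2 (picked worst twice, never best).
```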

Example Output

The chart below shows typical output from a MaxDiff analysis for our fictional homebuilder project. Attribute importance is the major output measure, and it corresponds roughly to the percent of the average respondent's preference that is captured by each attribute.
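One common way to turn estimated utilities into percentage importances that sum to 100 is a simple exp-and-normalize transform. The sketch below assumes logit-scale utilities (so only differences between utilities matter); the specific utility values are made up for illustration, and commercial packages may use other rescaling conventions.

```python
import math

def importance_shares(utilities):
    """Rescale utilities to percentage importances summing to 100.

    Exponentiates each utility and normalizes by the total, the usual
    logit-style share transform.
    """
    exp_u = {attr: math.exp(u) for attr, u in utilities.items()}
    total = sum(exp_u.values())
    return {attr: 100 * v / total for attr, v in exp_u.items()}

# Hypothetical utilities from a MaxDiff estimation.
utilities = {"Hardwood floors": 1.2, "Granite countertops": 0.4,
             "Walk-in closets": -0.3, "Three-car garage": -1.3}
shares = importance_shares(utilities)  # percentages summing to 100
```

Because the transform preserves ordering, the attribute with the highest utility always receives the largest importance share.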



As with most types of analyses, there are limitations. MaxDiff provides relative importance levels, not absolute ones. It is important, then, for the attributes to be chosen by those who have insight or prior knowledge about what the attributes of interest are likely to be.

It is also important to ensure the complete set of options is included in the question set and that the respondent answers consistently. Unlike other conjoint or discrete choice methodologies, interaction terms are not modeled by MaxDiff. That is, the MaxDiff-estimated appeal of Upgraded Cabinetry, for example, does not differ depending on which other attributes of the tested set are included in the product. Additionally, MaxDiff output tends to be very strong on the most important variables, but the weaker variables will not be as clearly differentiated.

The MaxDiff question format can also be very tedious for respondents to complete, resulting in early break-offs and incompletes. Thus, it is important to design the questionnaire in a way that helps respondents understand what to expect. Additionally, the fewer the attributes, the easier it is for respondents to complete the series of tasks.

Once instructions are given, MaxDiff questionnaires are rather easy for respondents to understand. Web survey methodology is ideal for MaxDiff because attributes can be shown visually and attributes selected quickly and efficiently.

Feel free to visit our website for more information on advanced analyses offered by Polaris.

  1. Louviere, J. J. (1993), "The Best-Worst or Maximum Difference Measurement Model: Applications to Behavioral Research in Marketing," The American Marketing Association's 1993 Behavioral Research Conference, Phoenix, Arizona.


  3. Sawtooth Software (2007), "The MaxDiff/Web System Technical Paper," Sawtooth Software Technical Paper Series.

All Content © Copyright 2004-2011. Polaris Marketing Research, Inc. All Rights Reserved.

Send inquiries to or call 1-888-816-8700.