Version Notice: This article covers features in our r9/IS Pro platform. If you're looking for information on this topic related to r8, see Scoring a MaxDiff.
IntelliSurvey's platform lets users score MaxDiffs directly within the software. Scoring can be run several times throughout a survey project's lifecycle; we suggest running MCMC scoring once soft launch is complete, before the data is checked, and again when fieldwork is complete.
The scoring applet uses Bayesian Inference for Marketing/Micro-Econometrics (the bayesm R package) to estimate the individual utilities. MCMC stands for "Markov Chain Monte Carlo," a probability sampling methodology used within many bayesm algorithms.
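For readers new to the technique, the sketch below illustrates the core MCMC idea with a minimal random-walk Metropolis sampler in Python. It is an illustrative toy only, not the hierarchical Bayes routine bayesm applies to MaxDiff data:

    import math
    import random

    def metropolis(log_density, start, steps=10_000, scale=0.5):
        """Minimal random-walk Metropolis sampler: draws from a target
        distribution known only up to a constant via its log-density."""
        x, samples = start, []
        for _ in range(steps):
            proposal = x + random.gauss(0.0, scale)
            # Accept with probability min(1, p(proposal) / p(x)).
            if math.log(random.random()) < log_density(proposal) - log_density(x):
                x = proposal
            samples.append(x)
        return samples

    # Example: sample a standard normal; the mean should be near 0 after burn-in.
    draws = metropolis(lambda x: -0.5 * x * x, start=0.0)
    print(sum(draws[2000:]) / len(draws[2000:]))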
To access the MCMC Scoring applet, select the MCMC Scoring option in the Analytics applet group.
Scoring an exercise
All MaxDiff exercises available for scoring are listed by ID at the top of the screen. First, select the exercise(s) to score. Then, use the Select records options to choose which records to include in the analysis and which field to analyze. These options function similarly to the Report Builder used for reporting applets.
While it is possible to score a MaxDiff with the records filtered, in most cases all completes should be scored at once, unless specific data cuts (e.g., dog vs. cat owners) are needed for meaningful analysis. This is because MaxDiff scores are based on averages across the whole group; if the responses collected after soft launch are scored separately from those collected during soft launch, the value of adding more respondents to the pool is lost.
Below the Select records box, the following options are available, with the default settings shown in the above screenshot:
- Iterations: The default value of 10,000 is recommended for most cases and can be left unchanged. Increasing the number of iterations may slightly improve estimates, particularly for large designs, and may be necessary when a solution does not converge.
- Score MCMC items in parallel: If multiple MaxDiffs are selected for scoring, checking this box scores them concurrently rather than sequentially, which is faster when several MaxDiffs require scoring.
- Email me when complete: MaxDiff scoring usually takes at least a few minutes, and large datasets may take considerably longer. Leave this box checked to receive an email when the scores are ready; if it is unchecked, no notification is sent.
When ready, click Calculate Scores. Rather than waiting and watching the progress bar, you can close the tab entirely; when scoring is complete, the system emails the user who initiated it, as long as the box next to Email me when complete is checked, as shown above.
Note: The MCMC Scoring and TURF Analysis applets require a "leading Q" to score a MaxDiff. As a result, using leading_q: n is not recommended when programming a MaxDiff exercise.
Scores and reports
By default, the scoring applet generates a series of fields for each MaxDiff exercise. Researchers may not use all of these fields, since many exist for logical ordering and design purposes; the two primary groups used are the most important/least important answers and the utility scores. Raw scores can also be provided alongside the rescaled scores if needed.
Tip! To see a full list of all available data fields for your MaxDiff exercise, open any report and search for "MD" in the Field Selector portion of the Report Builder.
Most and least important answers
The most important/least important answers are "rolled up" in a parent variable called QMD_ML, where "MD" is the MaxDiff's ID. These fields record the answers the respondent chose in the MaxDiff.
Each set of statements/items has a "most" and "least" field. For example, set 1 is represented by the fields QMD_1M and QMD_1L. QMD_1M records the item selected as "Most important", and QMD_1L records the item selected as "Least important".
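Before scoring, these raw answer fields can also be tabulated directly for a quick counts-based read of the data. The pandas sketch below is purely illustrative: the MaxDiff ID MD7 and the export file maxdiff_export.csv are assumptions, so your field names and file will differ.

    import re

    import pandas as pd

    df = pd.read_csv("maxdiff_export.csv")  # hypothetical data export

    # Match per-set fields like QMD7_1M / QMD7_1L, skipping parent fields.
    most_cols = [c for c in df.columns if re.fullmatch(r"QMD7_\d+M", c)]
    least_cols = [c for c in df.columns if re.fullmatch(r"QMD7_\d+L", c)]

    # Count how often each item was picked as "Most" and as "Least".
    most = df[most_cols].stack().value_counts()
    least = df[least_cols].stack().value_counts()

    # Simple best-minus-worst counts, sorted from most to least preferred.
    print(most.sub(least, fill_value=0).sort_values(ascending=False))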
Note: The items shown in each set depend on the design file used for your specific MaxDiff exercise. If you need assistance obtaining and understanding this information, please reach out to Support.
Utility scores
The utility scores are rolled up in a parent variable called QMD_scores, where "MD" is the MaxDiff's ID. These fields record the rescaled scores calculated by the MCMC Scoring applet; they will not contain data until after the MaxDiff has been scored.
The scores range from 0 to 100 and reflect each respondent's score for every item in the MaxDiff statement list. The individual fields are labeled QMD_UTIL_N, where "N" is the statement/item ID.
In the example below, QMD7_UTIL_1 will store the rescaled score for the item "Apple" for each respondent. Note that this is an overall score, regardless of how many times the item was presented in a set.
When the scores are reviewed in an Excel output, the file will look something like the example below. For each respondent, the total of the UTIL_X scores should equal 100, as shown in the manually calculated TOTAL column below.
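This check can be reproduced programmatically by totaling the UTIL fields per respondent. A minimal pandas sketch, assuming the same hypothetical export file and a MaxDiff with ID MD7:

    import pandas as pd

    df = pd.read_csv("maxdiff_export.csv")  # hypothetical data export
    util_cols = [c for c in df.columns if c.startswith("QMD7_UTIL_")]

    # Each respondent's rescaled utilities should total 100 (within rounding).
    totals = df[util_cols].sum(axis=1)
    print(totals.describe())
    print((totals.round() == 100).all())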
Raw scores
For some researchers, the raw calculated MaxDiff scores are needed for analysis instead of the rescaled utility scores. By default, a MaxDiff exercise does not store or export the raw scores. To add them to your data output, add the tag raw scores: y to the MaxDiff exercise in the survey's source code.
As with the utility scores, a field called QMD_RAW_N is generated for each item ID in the MaxDiff statement list, where "N" represents the item ID. The raw score fields are rolled up into the QMD_scores parent field.
Negative scores indicate items that are less important than average, and positive scores indicate items that are more important than average. Therefore, you should expect to see both negative and positive numbers in the raw data, as shown in the example Excel output above.
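The platform's exact rescaling formula is not documented here, but a common way to map raw logit utilities onto a 0-to-100 scale that totals 100 per respondent is a softmax-style probability rescaling. The sketch below shows that general approach; it is an assumption, not necessarily IntelliSurvey's method:

    import math

    def rescale(raw_utils):
        """Softmax-style rescaling: exponentiate raw logit utilities and
        normalize so the rescaled scores sum to 100 per respondent."""
        exps = [math.exp(u) for u in raw_utils]
        total = sum(exps)
        return [100 * e / total for e in exps]

    # Raw scores center near zero: negative = below-average importance.
    print(rescale([-1.2, 0.3, 0.9]))  # approx. [7.3, 32.8, 59.8]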
Note: If you have already completed your fieldwork, you can still add raw scores: y to the exercise's code. You will need to republish the source code and rescore the MaxDiff exercise.
RLH (Root likelihood)
In addition to raw and rescaled scores, two other industry-standard metrics are included among the MaxDiff fields: RLH (Root likelihood) and Percent certainty. Both metrics indicate how closely the solution fits the data.
Responses to a MaxDiff exercise fall along a scale of consistency. Some respondents answer haphazardly or hastily, which results in a lower RLH score; others are very thoughtful, which results in a higher one. The majority answer reasonably consistently across the exercise.
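Both metrics are standard functions of the likelihood of a respondent's observed choices under their estimated utilities: RLH is the geometric mean of the probabilities the model assigns to the choices actually made, and Percent certainty compares the fitted log-likelihood to that of a pure-chance model. A minimal Python sketch of these textbook definitions, using hypothetical inputs:

    import math

    def fit_metrics(chosen_probs, n_alternatives):
        """chosen_probs: model probability assigned to each observed choice,
        one per task; n_alternatives: number of options shown per task."""
        n = len(chosen_probs)
        log_lik = sum(math.log(p) for p in chosen_probs)
        rlh = math.exp(log_lik / n)  # geometric mean choice probability
        chance_ll = n * math.log(1.0 / n_alternatives)
        pct_certainty = 1.0 - log_lik / chance_ll  # 0 = chance, 1 = perfect fit
        return rlh, pct_certainty

    # A fairly consistent respondent across four 5-item tasks.
    print(fit_metrics([0.7, 0.8, 0.6, 0.75], n_alternatives=5))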