IntelliSurvey's platform lets users score MaxDiffs directly within the software. Scoring can be run several times throughout the survey project lifecycle. It is suggested to run MCMC scoring when soft launch is complete, before data is checked, and again when the survey fieldwork is complete.
The scoring applet uses Bayesian Inference for Marketing/Micro-Econometrics (the bayesm R package) to estimate the individual utilities.
To score a MaxDiff exercise, select MCMC Scoring from the Analytics button group in the Survey Navigation menu.
Scoring an exercise
All MaxDiff exercises available for scoring are listed by ID at the top of the screen. Select the exercise(s) to score. The following configuration fields are available, with built-in default values as shown:
- Sbeta iterations/Burn-in iterations/Save iterations: The default values are recommended for most cases and can be left unchanged. Increasing the number of iterations may slightly improve estimates, particularly for large designs, and may also be necessary when a solution does not converge. For more information on these fields, see the informational section at the end of this article.
- Score MCMC items in parallel: If multiple MaxDiffs need to be scored, checking this box scores all selected questions concurrently instead of sequentially, which is faster when several MaxDiffs require scoring.
- Email me when complete: In most cases, MaxDiff scoring will take at least a few minutes to complete, and scoring large datasets may take considerably longer. Leave this box checked to be emailed when the scores are ready; if you uncheck it, you will not receive an email.
Use the Record Selector filter to define which records to include (Completed, IDs, Date/Time, Data Cut, or Weights); the Completed box is checked automatically. While it is possible to score subsets of records with these selection methods, all completes should be scored at one time unless specific data cuts (e.g., dog vs. cat owners) are needed for meaningful analysis. This is because MaxDiff scores are based on averages across the whole group; if the responses gathered after soft launch are scored separately from those gathered during soft launch, the value of adding more respondents to the pool is lost.
When ready, click Calculate Scores. In most cases, MaxDiff scoring will take at least a few minutes. Rather than waiting and watching the progress bar, you can close the tab entirely; when scoring is complete, the system emails the user who initiated the scoring, provided the Email me when complete box is checked, as shown above.
Note: The MCMC Scoring and TURF Analysis applets require a "leading Q" to score a MaxDiff. As a result, using leading_q: n is not recommended when programming a MaxDiff exercise.
Scores and reports
The scoring applet generates a series of fields for each MaxDiff exercise by default. Note that researchers may not use all of these fields, since many exist for logical ordering and design purposes. The two primary groups of fields are the most important/least important selections and the utility scores. Additionally, raw scores can be provided alongside the rescaled scores if needed.
Tip! To see a full list of all available data fields for your MaxDiff exercise, open any report, go to the Field Selector, and type in "MD" to see the available options.
Most and least important answers
The most important/least important answers are "rolled up" in a parent variable called QMD_ML. There will be fields QMD_XM and QMD_XL for each set of statements/items shown, where "X" represents the set number. For example, for set 1, you will have the fields QMD_1M and QMD_1L: QMD_1M stores the item selected as "Most important" and QMD_1L stores the item selected as "Least important," out of all the items presented in set 1.
Note that the items shown in set 1 depend on the design file used for your specific MaxDiff exercise. If you need assistance obtaining and understanding this information, please reach out to Support.
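As a quick illustration of this naming pattern, the sketch below enumerates the most/least field-name pairs for a hypothetical exercise; the set count is an assumption for the example only.

```python
# Minimal sketch: enumerating the most/least field-name pairs for a
# hypothetical MaxDiff exercise "QMD". The set count of 5 is an
# assumption for this example only.
NUM_SETS = 5

for set_number in range(1, NUM_SETS + 1):
    most_field = f"QMD_{set_number}M"   # item chosen as "Most important" in this set
    least_field = f"QMD_{set_number}L"  # item chosen as "Least important" in this set
    print(most_field, least_field)
```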
Utility scores
The software also generates a series of fields called "utility scores." These are rescaled scores ranging from 0 to 100, reflecting each respondent's score for every item in the MaxDiff statement list. The rolled-up variable created will be called QMD_scores, and the expanded fields will be named QMD_UTIL_N, where "N" represents the statement/item ID. A separate field is generated for each item ID found in the list used in the MaxDiff exercise.
In the example below, QMD_UTIL_1 stores the rescaled score for the item "Community Job Portal" for respondent X. Note that this is an overall score, regardless of how many times the item was presented in a set.
When reviewing the scores in an Excel output, the file will look something like the example below. For each respondent, the total of the QMD_UTIL_N scores should equal 100, as shown in the manually calculated TOTAL column.
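The exact rescaling formula the platform applies is not documented here, but one common convention is probability-scaling: exponentiate each respondent's raw utilities and normalize so the scores sum to 100. A minimal sketch of that idea, with invented raw values:

```python
import numpy as np

# Minimal sketch of probability-scaling: exponentiate each respondent's
# raw utilities and normalize so the scores sum to 100. This may not be
# the exact formula the platform applies; raw values are invented.
raw_utils = np.array([
    [1.2, -0.4, 0.0, -0.8],   # respondent 1, items 1-4
    [0.3,  0.9, -1.1, -0.1],  # respondent 2, items 1-4
])

exp_utils = np.exp(raw_utils)
rescaled = 100 * exp_utils / exp_utils.sum(axis=1, keepdims=True)

print(rescaled.round(2))
print(rescaled.sum(axis=1))  # each row totals 100, matching the TOTAL column
```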
Raw scores
For some researchers, the raw calculated MaxDiff scores are needed for analysis instead of the rescaled utility scores. By default, a MaxDiff exercise does not store and export the raw scores. To add these to your data output, add the raw scores tag to the MaxDiff exercise in the survey source code and set its value to 'y' (yes): raw scores: y
Similar to the utility scores, a field called QMD_RAW_N will be generated for each item ID in the MaxDiff statement list, where "N" represents the item ID.
The average score for all respondents, across all items, will be '0'. Given that, the individual Topline tiles for all raw scores will show an average of '0', as shown below. Negative scores indicate items that are less important than average, and positive scores indicate items that are more important than average. Therefore, you should expect to see a range of both negative and positive numbers in the raw data.
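A quick way to sanity-check exported raw scores is to confirm this zero-centering property. The sketch below assumes the scores are zero-centered within each respondent and uses invented values:

```python
import numpy as np

# Minimal sketch: raw MaxDiff scores are zero-centered, so the grand mean
# across respondents and items should be (approximately) 0.
# These values are invented for illustration; each row sums to 0.
raw_scores = np.array([
    [ 1.5, -0.7, -0.2, -0.6],  # respondent 1 (QMD_RAW_1 .. QMD_RAW_4)
    [-0.9,  1.1,  0.4, -0.6],  # respondent 2
])

print(raw_scores.mean())        # grand mean, ~0 for zero-centered scores
print(raw_scores.mean(axis=0))  # per-item means: positive = more important
```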
For more information on how to change the display options for this kind of tile, read Open-end numeric tile display options.
Note: If you have already completed your fieldwork, you can still add raw scores: y to the exercise's code. You will need to republish the source code and rescore the MaxDiff exercise.
Summary tiles
Additionally, each MaxDiff creates a summary tile that aggregates scores across all respondents. The tile shows the per-item average of the rescaled (utility) scores and, if enabled, the raw scores.
For more information about manipulating Topline tiles of this type, see Closed-end two-axis (summary) tile display options.
Density charts
In addition to raw and rescaled scores, two other industry-standard metrics are included by default in the Topline Reports applet: RLH (root likelihood) and percent certainty. Both metrics indicate how closely the solution fits the data.
Responses to a MaxDiff exercise can be categorized along a scale of consistency. Some people will answer haphazardly and/or quickly, which will result in a lower RLH score; some will be very thoughtful, which will result in a high RLH score. The majority of respondents will answer reasonably consistently across the exercise. The density charts below show the distribution of this consistency measure across the sample.
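The applet does not expose its formulas, but both metrics have standard definitions: RLH is the geometric mean of the predicted probabilities of the choices a respondent actually made, and percent certainty compares the model's log-likelihood to that of a pure-chance model. A minimal sketch with invented probabilities:

```python
import numpy as np

# Minimal sketch of the standard definitions (the applet's exact
# implementation is not documented here):
#   RLH = geometric mean of the predicted probabilities of the choices
#         a respondent actually made.
#   Percent certainty = 1 - LL(model) / LL(chance), where the chance
#         model assigns equal probability to every alternative.
# Probabilities and set size below are invented for illustration.
choice_probs = np.array([0.55, 0.40, 0.62, 0.48, 0.51])  # one respondent's tasks
alternatives_per_task = 4                                 # items shown per set

rlh = np.exp(np.mean(np.log(choice_probs)))

ll_model = np.sum(np.log(choice_probs))
ll_chance = len(choice_probs) * np.log(1 / alternatives_per_task)
percent_certainty = 1 - ll_model / ll_chance

print(f"RLH = {rlh:.3f}, percent certainty = {percent_certainty:.3f}")
```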
Additional information
MCMC stands for "Markov Chain Monte Carlo," a probability sampling methodology used within many bayesm algorithms.
Originally, the three fields Sbeta iterations, Burn-in iterations, and Save iterations were used to specify a separate "burn-in" period. Further testing by statisticians showed that a distinct burn-in added little over a single total iteration count, and shortly afterward IntelliSurvey switched to the newer MCMC module, bayesm. The current MCMC Scoring applet still divides the inputs, but the three iteration numbers are then combined into the R parameter of the MCMC input in the bayesm module.
Therefore, only in software versions prior to r9 is there technically a difference between entering '1000' for each field versus entering '3000', '0', and '0'; in current versions the two are equivalent.
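As a rough sketch of that combination, assuming the three fields simply sum into bayesm's total-iteration parameter R as described above:

```python
# Minimal sketch of the combination described above: in current versions,
# the three applet fields sum into the single total-iteration parameter
# (R) passed to the bayesm MCMC routine.
def total_iterations(sbeta: int, burn_in: int, save: int) -> int:
    return sbeta + burn_in + save

print(total_iterations(1000, 1000, 1000))  # 3000
print(total_iterations(3000, 0, 0))        # also 3000 -- equivalent input
```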