If a study has a notable amount of missing participant data, check whether the researchers conducted “sensitivity analyses” to see whether the results would have changed under various plausible assumptions (or scenarios) about why the data are missing.
In some cases, researchers do this by trying out several different reasonable scenarios. For example:
1. What would the results look like if all (or 90%, 80%, 70%, etc.) of the missing participants experienced the final outcome?
2. What would the results look like if none (or 10%, 20%, 30%, etc.) of the missing participants experienced the final outcome?
If these scenarios lead to similar overall results for the entire study, then data from the missing participants would not have dramatically altered the results either way. If the scenarios lead to very different results (for example, because so many data are missing that no single assumption about them is realistic), the missing data suggest a high risk of bias.
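The scenario checks above can be sketched in a few lines of code. This is a minimal illustration with hypothetical numbers (the counts and fractions below are not from any real study): it recomputes a study's overall event rate under different assumptions about how many of the missing participants experienced the outcome.

```python
# Best-case/worst-case sensitivity analysis for missing outcome data.
# All counts below are hypothetical, chosen only for illustration.

observed_events = 40   # participants observed to experience the outcome
observed_total = 180   # participants with complete follow-up
missing = 20           # participants lost to follow-up

def event_rate(assumed_fraction):
    """Overall event rate if `assumed_fraction` of the missing
    participants are assumed to have experienced the outcome."""
    assumed_events = assumed_fraction * missing
    return (observed_events + assumed_events) / (observed_total + missing)

# Try the scenarios from the list above: none, some, or all of the
# missing participants experienced the outcome.
for fraction in (0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0):
    print(f"{fraction:>4.0%} of missing had outcome -> "
          f"overall rate = {event_rate(fraction):.3f}")
```

With only 20 of 200 participants missing, the rate ranges from 0.200 to 0.300 across all scenarios; the more participants are missing, the wider this range becomes, which is exactly why large amounts of missing data raise the risk of bias.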
Ideally, study researchers will “impute” (or fill in their best guesses for) participant data that are missing.
This best guess for a given participant’s data is usually based on other information that is available for that participant, information from other participants in the study who are similar to that participant, or both. Many complex statistical algorithms have been developed for making such imputations.
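To make the idea concrete, here is a deliberately simple sketch of one basic imputation strategy: filling each missing value with the mean of observed values from similar participants (here, participants in the same study group). The data are hypothetical, and real studies typically use far more sophisticated methods, such as multiple imputation, implemented in specialized statistical software.

```python
# A minimal sketch of single (mean) imputation using hypothetical data.
# Each missing score is replaced by the mean score of participants in
# the same group -- one simple way of using "similar" participants.

from statistics import mean

participants = [
    {"group": "treatment", "score": 12.0},
    {"group": "treatment", "score": 15.0},
    {"group": "treatment", "score": None},   # missing
    {"group": "control",   "score": 8.0},
    {"group": "control",   "score": None},   # missing
    {"group": "control",   "score": 10.0},
]

def impute(records):
    """Replace each missing score with the mean observed score of
    that participant's own group."""
    group_means = {}
    for g in {r["group"] for r in records}:
        observed = [r["score"] for r in records
                    if r["group"] == g and r["score"] is not None]
        group_means[g] = mean(observed)
    return [
        {**r, "score": r["score"] if r["score"] is not None
                       else group_means[r["group"]]}
        for r in records
    ]

completed = impute(participants)
```

Mean imputation is shown here only because it is easy to follow; it understates uncertainty, which is one reason statisticians have developed the more complex algorithms mentioned above.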