Incomplete outcome reporting

In the last section, we talked about instances where a study might be missing noticeable amounts of data about individual participants.

But what about cases where researchers don’t report entire outcomes, or share only selective information about them?

This is what we call incomplete outcome reporting.

Protects against: Reporting bias (specifically, outcome reporting bias)

If incomplete outcome reporting is found in a study, it suggests that the researchers cherry-picked their results rather than reporting all the outcomes they set out to measure when the study began.
Examples of incomplete outcome reporting

1. The researchers of one study reported only the outcomes for which they found “statistically significant” differences between treatment groups, and left out all other outcomes.

2. Another study’s researchers reported all the outcomes, but omitted a measure of statistical uncertainty (e.g., a confidence interval) for one of them.

3. To save space, some authors will simply report that some outcomes showed “no difference” or “p>0.05,” which does not give reviewers enough information to include these outcomes in a meta-analysis.

How to assess this domain:

Check the protocol! It is good research practice to write and file a research protocol before conducting a study. The protocol specifies the study’s design in advance, including the outcomes the researchers plan to measure and report.

To assess a study for incomplete outcome reporting, compare the study’s protocol with the outcomes the researchers ended up reporting. If an outcome listed in the protocol was omitted or reported only partially, this suggests possible reporting bias. Failure to file a protocol at all may raise a further red flag.
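The comparison described above is essentially a set difference between the protocol’s outcome list and the report’s outcome list. As a minimal illustration (the function name and the outcome names below are hypothetical examples, not from any real study), it could be sketched as:

```python
def find_unreported_outcomes(protocol_outcomes, reported_outcomes):
    """Return the pre-specified outcomes missing from the final report."""
    # Normalize names so trivial differences in case/spacing don't hide a match.
    reported = {o.strip().lower() for o in reported_outcomes}
    return [o for o in protocol_outcomes if o.strip().lower() not in reported]

# Hypothetical example: three outcomes pre-specified, two reported.
protocol = ["Pain score at 6 weeks", "Adverse events", "Quality of life"]
report = ["Pain score at 6 weeks", "Adverse events"]

missing = find_unreported_outcomes(protocol, report)
print(missing)  # ['Quality of life'] — a flag for possible outcome reporting bias
```

In practice the comparison is rarely this mechanical (outcomes may be renamed, redefined, or measured at different time points between protocol and report), so the set difference is only a starting point for a manual judgment.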