Just as when you build a house you want to make sure it’s stable enough to withstand harsh winds, you want to make sure the model behind your meta-analysis result is robust: reliable enough that the result won’t change dramatically if some of your assumptions were incorrect.
To check whether your estimates are robust, you run a “sensitivity analysis.”
A sensitivity analysis looks to see how your answer changes when you alter any of the inputs to your model. Those inputs include the data or the assumptions you made.
There are a lot of different ways to conduct a sensitivity analysis. A basic but useful one is motivated by a simple question:
What would my final answer look like if I removed one of the studies?
To answer this question, you simply conduct your meta-analysis again, except this time you remove one of the studies and record the result.
Then you run your meta-analysis again, this time removing a different study.
You repeat this process over and over again until you’ve run your analysis with each study missing. This gives you an idea of how much influence each individual study has on your overall result.
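The leave-one-out procedure above can be sketched in a few lines of Python. This is a minimal illustration, not the authors’ method: it assumes a simple fixed-effect (inverse-variance weighted) meta-analysis, and the effect sizes and standard errors are made-up numbers chosen purely for demonstration.

```python
# Leave-one-out sensitivity analysis for a fixed-effect meta-analysis.
# All numbers below are hypothetical, for illustration only.

effects = [0.30, 0.45, 0.20, 0.55, 0.35]   # hypothetical study effect sizes
ses     = [0.10, 0.15, 0.12, 0.20, 0.08]   # hypothetical standard errors

def pooled_effect(effects, ses):
    """Inverse-variance weighted (fixed-effect) pooled estimate."""
    weights = [1 / se**2 for se in ses]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

overall = pooled_effect(effects, ses)
print(f"All studies: {overall:.3f}")

# Re-run the meta-analysis once per study, each time leaving that study out,
# to see how much influence each individual study has on the overall result.
for i in range(len(effects)):
    rest_e = effects[:i] + effects[i + 1:]
    rest_s = ses[:i] + ses[i + 1:]
    loo = pooled_effect(rest_e, rest_s)
    print(f"Without study {i + 1}: {loo:.3f} (change {loo - overall:+.3f})")
```

If removing any single study shifts the pooled estimate by a large amount, that study is exerting outsized influence and deserves a closer look.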