This graph shows how the intervention’s effects change over time, using a standardized measure of effect size. Standardizing means the effect is expressed on a common scale, no matter how each study measured the outcome, so we can directly compare effect sizes across outcomes and across interventions. The effect size value shows whether the intervention had a small or medium-to-large effect on the outcome measure. If a study reports multiple findings in a domain at any point in time, the effect size reported on the graph is the average of the effect sizes of those findings.
Solid data points indicate statistical significance for that finding, meaning the result likely occurred because of the intervention rather than by chance. A significant finding means you can feel confident the effect is real, not that you just got lucky (or unlucky) in choosing the sample. The Pathways Clearinghouse considers a finding to be statistically significant if there is less than a 5 percent chance the study could yield that finding if the intervention had no effect. Because that chance is so small, it is likely the intervention, rather than chance, produced the finding. Statistical significance helps us gauge how confident we can be that the intervention would result in similar findings upon replication.
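The 5 percent rule above can be illustrated with a short calculation. This is only a sketch, not the Pathways Clearinghouse’s actual code: it assumes a finding is summarized by a z-statistic and uses the standard normal distribution to get a two-sided p-value.

```python
import math

def two_sided_p(z):
    """Two-sided p-value for a z-statistic under the standard normal distribution."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def is_significant(z, alpha=0.05):
    """A finding is statistically significant if its p-value is below alpha (5 percent)."""
    return two_sided_p(z) < alpha

# A z-statistic of 2.1 gives p of roughly 0.036, below the 5 percent cutoff.
print(two_sided_p(2.1))
print(is_significant(2.1))
```

In other words, a solid data point corresponds to a finding whose p-value falls under the 0.05 threshold.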
If an average effect size is marked as not statistically significant (that is, not solid), it might still include an individual finding that is statistically significant. Let’s say an intervention’s impacts on earnings in the first quarter included two findings, one statistically significant and one not. We average them together and then calculate the significance of that average, so the average could turn out to be statistically significant or not.
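To make this concrete, here is a small illustrative calculation. It is a sketch under stated assumptions, not the Pathways Clearinghouse’s actual pooling method: it assumes each finding comes with an effect size and a standard error, that the two findings are independent, and that the average is a simple arithmetic mean. The effect-size and standard-error values are hypothetical.

```python
import math

def two_sided_p(z):
    """Two-sided p-value for a z-statistic under the standard normal distribution."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Two hypothetical first-quarter earnings findings: (effect size, standard error).
findings = [(0.25, 0.10),   # significant on its own: z = 2.5
            (0.02, 0.10)]   # not significant on its own: z = 0.2

# Average the effect sizes, then test the significance of that average.
avg_es = sum(es for es, _ in findings) / len(findings)
# Standard error of the mean of independent estimates.
avg_se = math.sqrt(sum(se ** 2 for _, se in findings)) / len(findings)
p_avg = two_sided_p(avg_es / avg_se)

print(avg_es)  # 0.135
print(p_avg)   # roughly 0.056: above 0.05, so the average is not significant,
               # even though one of its component findings was.
```

With slightly different inputs the average would clear the 5 percent threshold instead, which is why an averaged data point can be solid or hollow regardless of its individual findings.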