You have been calculating an effect size for each of your CME activities, right? And now you have a database full of activities with corresponding effect sizes for, say, knowledge and competence outcomes. Sound familiar? Anyone…anyone…Bueller? Okay, for the one straggler, here’s a refresher:

  1. What is effect size? (link)
  2. How to calculate effect size (link) (see the sketch just after this list)
  3. Reporting effect size (link)
  4. Effect size – other methodologic/statistical considerations (link)
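
For anyone who wants item 2 in code form, here’s a minimal sketch: one common approach computes Cohen’s d as the difference between mean post- and pre-activity scores divided by the pooled standard deviation. The scores below are hypothetical, and other variants (Glass’s delta, Hedges’ g) use different denominators, so treat this as illustrative rather than definitive:

```python
import statistics

def cohens_d(pre_scores, post_scores):
    """Cohen's d: (mean post - mean pre) / pooled standard deviation."""
    mean_pre, mean_post = statistics.mean(pre_scores), statistics.mean(post_scores)
    sd_pre, sd_post = statistics.stdev(pre_scores), statistics.stdev(post_scores)
    n_pre, n_post = len(pre_scores), len(post_scores)
    # Pool the two sample variances, weighting each by its degrees of freedom
    pooled_sd = (((n_pre - 1) * sd_pre**2 + (n_post - 1) * sd_post**2)
                 / (n_pre + n_post - 2)) ** 0.5
    return (mean_post - mean_pre) / pooled_sd

# Hypothetical percent-correct scores from a pre/post knowledge assessment
pre = [55, 60, 50, 65, 58, 62, 49, 57]
post = [70, 72, 66, 80, 71, 75, 64, 69]
print(f"Cohen's d: {cohens_d(pre, post):.2f}")
```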

Now that we’re all on the same page, let’s move on to the next question…what exactly is a “good” effect size? Well, you would start with Cohen (Cohen J. Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Hillsdale, NJ: Lawrence Erlbaum Associates; 1988), who identified the following general benchmarks: 0.2 = small effect, 0.5 = medium effect, and 0.8 = large effect.
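
To make those benchmarks concrete, here’s a small helper that labels a computed effect size against Cohen’s thresholds. Binning in-between values into the lower band is one reasonable convention, not part of Cohen’s definition:

```python
def interpret_effect_size(d):
    """Label an effect size using Cohen's (1988) general benchmarks."""
    magnitude = abs(d)
    if magnitude >= 0.8:
        return "large"
    if magnitude >= 0.5:
        return "medium"
    if magnitude >= 0.2:
        return "small"
    return "negligible"

print(interpret_effect_size(0.62))  # medium
```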

Although effect size is relatively new to CME, more CME-specific effect size data are thankfully becoming available. Starting with recent literature (specifically, meta-analyses), the following effect sizes have been reported:

It’s important to note that these effect sizes are the product of mixed measurement methods (and measurement approach influences effect size), but they are certainly more relevant than Cohen’s benchmarks. And Cohen wouldn’t take offense: refining effect size expectations through repeated measurement in a given area is exactly what he recommended.

Speaking of repeated measurement, Med-IQ has been measuring knowledge- and competence-level effect sizes across a variety of CME activities for the past four years. In a future post, we’ll publish our effect size results for a range of live and enduring material formats. We’d love to hear how they jibe with your findings.
