An experiment begins with a hypothesis. For example… I suspect that hitting the Netflix “play something” button is a bad idea (denied, by the way – Hunt for the Wilderpeople was outstanding).

A neat and tidy hypothesis for CME outcome assessment might read: I suspect that participants in this CME activity will increase compliance with <insert evidence-based quality indicator here>.

Unfortunately, access to data that would answer such a question is beyond the reach of most CME providers. So we rely on proxy measures, such as knowledge tests or case vignette surveys, through which we hope to show data suggesting that CME participants increased their compliance with <insert evidence-based quality indicator here>.

Although these data are much easier to access, they can be pretty tedious to weed through. Issue #1: How do you reduce the data across multiple knowledge or case vignette questions into a single statement about CME effectiveness? Issue #2: How do you systematically organize the outcomes data to develop specific recommendations for future CME?

For issue #1, I’d recommend using “effect size.” There’s more about that here.
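
For example, here’s a minimal sketch in Python of one common effect-size measure for percent-correct data: Cohen’s h for the difference between two proportions. The question labels and numbers are hypothetical, and averaging per-question values into a single summary number is just one reasonable way to get to “a single statement about CME effectiveness.”

```python
import math

def cohens_h(p_pre: float, p_post: float) -> float:
    """Cohen's h: effect size for the difference between two proportions.
    Rough benchmarks: 0.2 small, 0.5 medium, 0.8 large."""
    return 2 * math.asin(math.sqrt(p_post)) - 2 * math.asin(math.sqrt(p_pre))

# Hypothetical pre/post percent-correct results for three assessment questions
questions = {
    "Q1 (screening interval)": (0.45, 0.80),
    "Q2 (first-line therapy)": (0.62, 0.71),
    "Q3 (referral criteria)":  (0.88, 0.90),
}

for label, (pre, post) in questions.items():
    print(f"{label}: h = {cohens_h(pre, post):.2f}")

# One number for the whole activity: the average per-question effect size
mean_h = sum(cohens_h(pre, post) for pre, post in questions.values()) / len(questions)
print(f"Average effect size across questions: {mean_h:.2f}")
```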

For issue #2, consider organizing your outcome results into the following four buckets (of note, there is some overlap between these buckets; a code sketch for sorting questions into them follows the list):

  1. Unconfirmed gap – pre-activity question data suggest that knowledge or competence is already high (typically defined as ≥ 70% of respondents identifying the evidence-based correct answer OR agreeing on a single answer if there is no correct response). Important note: although we shouldn’t expect every anticipated gap to be present in our CME participants, one cause of an unconfirmed gap (other than a bad needs assessment) is the use of assessment questions that are too easy and/or don’t align with the education.
  2. Confirmed gap – pre-activity question data suggest that knowledge or competence is sufficiently low to warrant educational focus (typically defined as ≤ 70% of respondents identifying the evidence-based correct answer OR agreeing on a single answer if there is no correct response).
  3. Residual gap
    a. Post-activity data only = typically defined as ≤ 70% of respondents identifying the evidence-based correct answer OR agreeing on a single answer if there is no evidence-based correct response.
    b. Pre- vs. post-activity data = no significant difference between pre- and post-activity responses.
  4. Gap addressed
    a. Post-activity data only = typically defined as ≥ 70% of respondents identifying the evidence-based correct answer OR agreeing on a single answer if there is no correct response.
    b. Pre- vs. post-activity data = significant difference between pre- and post-activity responses.
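
To make the sorting concrete, here’s a minimal classification sketch in Python. The 70% cut point comes straight from the definitions above, but everything else is an assumption for illustration: the two-proportion z-test treats pre and post respondents as independent samples (as with anonymous activity surveys), and requiring both post ≥ 70% and a significant pre/post change before declaring a gap “addressed” is just one conservative reading of buckets 3 and 4. All question labels and numbers are made up.

```python
import math

THRESHOLD = 0.70  # typical cut point from the bucket definitions above

def two_proportion_z(p1: float, n1: int, p2: float, n2: int) -> float:
    """z statistic for the difference between two independent proportions
    (assumes pre and post samples are independent, e.g., anonymous surveys)."""
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

def classify(pre: float, n_pre: int, post: float, n_post: int,
             z_crit: float = 1.96) -> str:
    """Assign one question's results to a bucket from the list above."""
    if pre >= THRESHOLD:
        return "1. Unconfirmed gap"
    significant = abs(two_proportion_z(pre, n_pre, post, n_post)) >= z_crit
    if post >= THRESHOLD and significant:
        return "2. Confirmed gap -> 4. Gap addressed"
    return "2. Confirmed gap -> 3. Residual gap"

# Hypothetical results: (pre % correct, n pre, post % correct, n post)
results = {
    "Q1 (screening interval)": (0.45, 120, 0.80, 110),
    "Q2 (first-line therapy)": (0.62, 120, 0.68, 110),
    "Q3 (referral criteria)":  (0.88, 120, 0.90, 110),
}

for label, (pre, n1, post, n2) in results.items():
    print(f"{label}: {classify(pre, n1, post, n2)}")
```

Run against the full question set, a tally of these labels gives you the raw material for issue #2: unconfirmed gaps point back at the needs assessment (or the questions themselves), and residual gaps are the obvious candidates for future CME.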

Most important to note: if the outcome assessment questions do not accurately reflect the gaps identified in the needs assessment, the results of the final report are not going to make any sense (no matter how you organize them).

