Tuesday, July 04, 2006


Stimulating the Evidence Discussion

Copied below is the rather interesting e-mail to Math Panel members from Vice Chair Camilla Benbow regarding evidence, which served as the discussion primer at the June North Carolina meeting of the panel. From this observer's perspective, it sounds a lot like the What Works Clearinghouse standards of evidence, which have been set so high as to be essentially useless for making policy. As usual this early in the game, we'll have to wait and see.


Dear NMP,

Larry and I thought that we needed to arrive at and articulate some standards for the evidence that we will be drawing upon to formulate our recommendations and report. To start the dialogue and to prepare for the meeting, I came up with some suggestions. This is a sacrificial and surely incomplete draft, so please do not hesitate to weigh in.

Our recommendations need to be grounded in evidence drawn from scientific studies. We will not limit ourselves to data from just one type of methodological design, but the design must be scientific and consistent with best practices, and results should be triangulated and replicated.


When comparing curricula or instructional practices:
control or comparison groups are utilized.
pre- and post-testing are part of the design.
differences in relevant student characteristics predictive of learning outcomes are assessed prior to or simultaneously with pre-testing.
the temporal gap between pre- and post-testing is sufficient to ensure appreciable learning.
multiple post-test criteria are utilized to establish confidence in capturing focal constructs through methodological triangulation.

Differences or results are quantified in terms of effect sizes, a standard metric.

Sample sizes are sufficient to establish confidence in statistical conclusions, with the statistical power of the design reported or amenable to calculation.

For findings to be considered anything but suggestive, results must be replicated.

Camilla
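For readers unfamiliar with the statistical jargon in Camilla's list, here is a rough back-of-the-envelope sketch (my own illustration, with entirely made-up numbers, not anything from the Panel) of what an "effect size" and a power calculation look like in practice. Cohen's d is the usual standardized metric, and the power approximation below uses a simple normal approximation to a two-sample, two-sided test at the 0.05 level:

```python
import math

def cohens_d(mean_treat, mean_ctrl, sd_pooled):
    """Standardized mean difference between treatment and control (Cohen's d)."""
    return (mean_treat - mean_ctrl) / sd_pooled

def approx_power(d, n_per_group):
    """Approximate power of a two-sided two-sample test at alpha = 0.05,
    using the normal approximation."""
    # Standard error of the difference, in standardized (d) units.
    se = math.sqrt(2.0 / n_per_group)
    z_crit = 1.959964  # two-sided critical value at alpha = 0.05
    z = d / se - z_crit
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical study: post-test means of 78 vs. 74, pooled SD of 10,
# 50 students per group.
d = cohens_d(78, 74, 10)    # d = 0.4, a small-to-medium effect
power = approx_power(d, 50)  # roughly 0.5 -- a coin flip
```

The illustrative point: with 50 students per group and a small-to-medium effect, the study only has about a 50 percent chance of detecting the difference at all, which is exactly why the e-mail insists that sample sizes be "sufficient to establish confidence in statistical conclusions."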
