I was disappointed to see the results of this study, but not surprised.
The study, Effectiveness of Reading and Mathematics Software Products: Findings From Two Student Cohorts, sponsored by the U.S. Department of Education, reports student test scores from a second year of use of selected software products aimed at 1st grade reading, 4th grade reading, 6th grade math, and algebra I. The researchers looked at 10 software products and found that only one had a statistically significant effect. Given that there were 10 chances to find an effect, even that one result should be considered suspect.
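The multiple-comparisons worry can be made concrete with a quick calculation. Assuming each product was tested independently at the conventional 0.05 significance level (an assumption for illustration; the study's actual procedure may differ), the chance of at least one false positive across 10 tests is substantial:

```python
# Illustrative arithmetic, not figures from the study itself.
alpha = 0.05    # assumed per-test significance level
n_tests = 10    # number of software products evaluated

# Probability that at least one of 10 independent null-effect tests
# appears "statistically significant" purely by chance.
p_false_positive = 1 - (1 - alpha) ** n_tests
print(f"{p_false_positive:.2f}")  # about 0.40
```

In other words, under these assumptions a study of 10 ineffective products would still report one "significant" result roughly 40% of the time, which is why a single positive finding out of 10 carries little weight on its own.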
Until we really understand the details of human learning, we will not be able to build or evaluate effective teaching technology. These broad-brush studies provide such a coarse look at the overall process that we can conclude very little. The study itself ends with a list of caveats that include: “the study preclude direct comparisons of product effects”; “Because districts and schools volunteered to implement particular products, their characteristics differ and these differences may relate to effectiveness”; “The study design does not rule out the possibility that a product the study finds to be ineffective could be effective if implemented by other districts or schools”.
So why did the Department of Education bother to do it?
It would be much better to spend those resources on understanding the learning process with enough rigor to construct educational environments that actually improve learning.