Vol 5 . . . No 4 . . . December, 1995


"Did anybody learn anything?"

Why has there been so little assessment of technology programs?

In this section we will explore a number of hypotheses that might explain the "glass ceiling" that has kept the number of evaluation reports listed annually in ERIC under 40 since 1980. Each hypothesis is followed by a rationale, most of which is conjecture.

Hypothesis #1: Most school districts do not have the expertise or the resources to conduct solid evaluation studies.

Most of the existing studies have been completed by large districts, vendors or universities. Few districts have personnel with formal evaluation skills or the specific assignment to conduct such evaluations. Research is rarely conducted as part of the decision-making process. The collection and analysis of data, a cornerstone of the Total Quality movement, is rare in many school districts. In times of scarce resources, these are the kinds of budgets and projects cut first.

Hypothesis #2: Program proponents have a vested interest in protecting new programs from scrutiny.

Those who push new frontiers and encourage large expenditures are always taking a considerable risk, especially when there is little reliable data available to predict success in advance. Careful program evaluation puts the innovation under a magnifying glass and increases the risk to the pioneers.

Hypothesis #3: Accountability is sometimes counter-culture.

Many school districts have been careful to avoid data collection which might be used to judge performance.

Hypothesis #4: There is little understanding of formative evaluation as program steering.

Since most program evaluation in the past has been summative (Does it work?), few school leaders have much experience using data formatively to steer and modify programs. While this kind of data analysis would seem more useful and less threatening than summative evaluation, lack of familiarity may breed suspicion.

Hypothesis #5: Vendors have much to lose and little to gain from following valid research design standards.

Districts are unlikely to pour hundreds of thousands of dollars into computers and software which will produce no significant gains. Careful research design tends to depress some of the bold results associated with gadgetry and the Hawthorne effect. Amazing first year gains, for example, often decline as programs enter their third year. In some cases, vendors report only the districts or schools with the best results and remain silent about those which are disappointing.

Hypothesis #6: School leaders have little respect for educational research.

Many school leaders joke that you can find an educational study to prove or disprove the efficacy of just about any educational strategy. Studies have shown that such leaders typically consult little research as they plan educational programs.

Hypothesis #7: Technology is often seen as capital rather than program.

Some school leaders do not associate technology with program. They view technology as equipment, which does not require program evaluation. Equipment may be evaluated for speed, efficiency and cost, but not for learning power.

Hypothesis #8: Evaluation requires clarity regarding program goals.

Unless the district is clear about its learning objectives in terms which are observable and measurable, as was done by the RBS study, it will be difficult to design a meaningful evaluation study. In some districts, the technology is selected before a determination is made regarding its uses.

Hypothesis #9: Adherence to evaluation design standards may create political problems.

In addition to increasing risk by spotlighting a program, evaluation can also anger parents when some students are placed in experimental groups while others must put up with the traditional approach. Random selection can anger people on either side of the innovation, participating teachers included. Voluntary participation, on the other hand, immediately distorts the findings.

Hypothesis #10: Innovative programs are so demanding that launching an evaluation at the same time may overload the system.

Many schools are perennially stable and conservative organizations with a preference for first order change (tinkering) over second order change (fundamental change). The need for stability conflicts with innovation, since change is seen as threatening and pain-producing. Because the potential for resistance runs high in such organizations, many leaders may trade away evaluation just to win buy-in for a change.

These hypotheses originally appeared in the September, 1992 issue of From Now On.



