From Now On
The Educational Technology Journal


Vol 7 | No 5 | February 1998

Emerging from the Smog:
Making Technology Assessment Work for Schools
by Jamie McKenzie

 

Making Assessment Work

Stage One - Clarifying Outcomes

When all is said and done, what should your students be able to do?

What will their "PERFORMANCE" look like?

If we give them a set of challenges, how well will they fare?

The first step in creating robust assessment is to clarify expectations and then convert them into something which is observable, palpable and measurable.

Example: The Bellingham Schools had three goals clearly stated in their Technology Plan . . .

  • Communicating
  • Analyzing Data
  • Solving Problems

The outcomes were also quite clear. The district expected that students could join in teams and employ networked information resources to create a solution to a problem. Their solution should be logical, well supported and persuasively presented.

Stage Two - Selecting or Constructing Instruments

Lofty goals and outcomes are not enough. They must be measurable.

The challenge is locating existing instruments which have been field-tested and proven to offer reliability (accuracy) and validity (the ability to measure what you actually care about). Consult the Resources section below for sources.

If you cannot find instruments which match the outcomes you have selected, you must find a way to create them. While this will mean sacrificing the ability to relate findings to national samples, invention allows for customization. You are more apt to measure what you hope to examine if you build your own. This construction process is amply outlined in several publications from SAGE, which provide you with all the tools you need to "grow" your own measures.
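
One way to gauge the reliability of a home-grown instrument is to have two raters score the same student work and compare their judgments. The short Python sketch below illustrates the idea with hypothetical scores on a 1-4 rubric; the raters, scores and scale are invented for illustration and are not drawn from the Bellingham instruments.

    # Minimal sketch: checking inter-rater reliability on a home-grown rubric.
    # The scores are hypothetical: two raters score the same ten student
    # performances on a 1-4 scale.
    rater_a = [3, 2, 4, 3, 1, 2, 4, 3, 2, 3]
    rater_b = [3, 2, 3, 3, 1, 2, 4, 2, 2, 3]

    def percent_agreement(a, b):
        # Share of performances on which the two raters gave the same score.
        return sum(x == y for x, y in zip(a, b)) / len(a)

    def cohens_kappa(a, b, levels=(1, 2, 3, 4)):
        # Agreement corrected for chance: (observed - expected) / (1 - expected).
        n = len(a)
        observed = percent_agreement(a, b)
        expected = sum((a.count(k) / n) * (b.count(k) / n) for k in levels)
        return (observed - expected) / (1 - expected)

    print(f"Exact agreement: {percent_agreement(rater_a, rater_b):.0%}")
    print(f"Cohen's kappa:   {cohens_kappa(rater_a, rater_b):.2f}")

If two raters agree only at chance levels, the rubric language probably needs tightening before the instrument is used more widely.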

Example: The Bellingham Schools convened a group of a dozen librarians and elementary teachers.

"How will we know if the fifth graders can communicate effectively, analyze data and solve problems?"

The group tossed around ideas and settled upon the strategy of confronting students with a choice which they must research and explore on the way to making a decision.

"Which of the following accidents should become the focus of the National Safety Council in the next few years?"

The group created a grid to help the students organize their findings, with accidents down the left side and criteria across the top. Next, they created clear directions explaining which sources each team might consult. Finally, they developed a template for a multimedia presentation which students would employ to make their "case."
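
To make the grid concrete, the sketch below lays out one possible version as a small Python structure: accident types down the left side as rows, decision criteria across the top as columns. The particular accidents, criteria and 1-5 ratings are invented for illustration; the original Bellingham grid is not reproduced here.

    # A hypothetical version of the decision grid: accident types as rows,
    # decision criteria as columns, with invented 1-5 ratings.
    criteria = ["deaths per year", "injuries per year", "cost to society", "preventability"]

    grid = {
        "motor vehicle": {"deaths per year": 5, "injuries per year": 4, "cost to society": 5, "preventability": 4},
        "falls":         {"deaths per year": 3, "injuries per year": 5, "cost to society": 3, "preventability": 3},
        "poisoning":     {"deaths per year": 2, "injuries per year": 2, "cost to society": 2, "preventability": 4},
        "drowning":      {"deaths per year": 1, "injuries per year": 1, "cost to society": 1, "preventability": 3},
    }

    # One way a team might move from grid to decision: total the ratings.
    for accident, ratings in grid.items():
        total = sum(ratings[c] for c in criteria)
        print(f"{accident:15} total rating: {total}")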

Home grown? Decidedly. But heavy on what researchers call "face validity." The group was united in the conviction that success with this challenge would demonstrate that the technology program was making its mark.

Stage Three - Piloting Assessment

The more home grown the instrument, the more important the "piloting" or "field testing" stage.

No matter how well you have tried to anticipate issues and problems, the performance tasks you have devised will need to be refined and improved based upon your observations. The directions accompanying the activities, for example, may require expansion and clarification. Other activities may simply fail to get off the ground. Good intentions do not automatically translate into good design.

The best strategy is to select a small sample and see what happens. Just make sure that some of the observers are "observing the observation" while the others are observing the students.

Example: The Bellingham performance task was launched without a small pilot. Every one of the 12 elementary schools administered the challenge in October of 1995, selecting samples of 16 students (four teams of four) except in the smaller schools. The idea was that the entire Fall administration would serve as the pilot and the real assessment would be the May version. The October results were viewed as "baseline data" to clarify the starting point.

This Fall pilot was flawed in quite a few respects, which led to major changes and improvements prior to the May assessment.

Stage Four - Interpreting Early Findings

What can we learn?

If we have built credible instruments, we can sit down with the numbers to help us ask the questions below (a small scoring sketch follows this list) . . .

  • How often did the students demonstrate the collaborative problem solving behaviors we hoped to see?
  • How did the teams differ on this measure?
  • How did each team measure up on the rubrics we designed to measure the quality of their thinking and analysis?
  • How well did each team present its findings according to its scores on the persuasion rubrics?
  • Which aspects of their performance were worthy of praise?
  • Where are the gaps?
  • What was disappointing?
  • Where do we need to change and improve our technology program?
  • What strategies are most apt to create the effects we desire to see?
  • Which strategies need to be abandoned as ineffectual?
  • Who are the key players? Can we count on all of them?
  • How do we communicate these findings to our colleagues to enlist their support?
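
The sketch below shows, in miniature, what sitting down with the numbers might look like: hypothetical 1-4 rubric scores for four teams on the three outcome areas, averaged and compared against a target so that gaps stand out. Team names, scores and the target are invented for illustration.

    # Hypothetical rubric scores (1-4) for four teams on the three outcome areas.
    scores = {
        "Team 1": {"collaboration": 3, "analysis": 2, "persuasion": 2},
        "Team 2": {"collaboration": 4, "analysis": 3, "persuasion": 2},
        "Team 3": {"collaboration": 2, "analysis": 2, "persuasion": 1},
        "Team 4": {"collaboration": 3, "analysis": 3, "persuasion": 2},
    }
    target = 3.0  # the level the district hopes most teams will reach

    for outcome in ("collaboration", "analysis", "persuasion"):
        average = sum(team[outcome] for team in scores.values()) / len(scores)
        flag = "gap" if average < target else "on target"
        print(f"{outcome:13} average {average:.2f}  ({flag})")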

Example: After the October pilot, some of the schools in Bellingham did not review the data or ask the questions above until a December district-wide meeting made the importance of this analysis clear and the district offered half-day workshops bringing principals, librarians and teachers together in the process. Other schools dove right in and asked the prime questions without prompting, eager to find out how the program might be improved before the May assessment.

Performance assessment was new to the district, as was the technology being assessed. There was still substantial ambivalence about the value of such projects.

The real turning point for many participants was actual involvement in the assessment as participant observers. The biggest mistake of the October assessment was the failure to involve all of the key players in the experience so they could see the value of the effort. Once they participated, many became believers.

As the assessment program continued for the next few years, the level of commitment and interest varied from building to building depending upon the commitment and interest of staff and leaders. Some pursued the data avidly, wishing to see which of their efforts had paid off the most. Others may have viewed technology assessment as just one more Central Office intrusion worthy of lip service.

Stage Five - Modifying Assessment

Having reviewed the data, the next step is to take a new look at the assessment procedures and ask how they might be tightened and improved. There are usually quite a few elements which need to be modified in light of the first few trials.

Example: In the October assessment, Bellingham's failure to include all fourth and fifth grade teachers as participant observers was quickly identified as a strategy in need of change. Several schools had included these teachers even though it was not required, and their experience was so positive that the rest of the schools agreed to make this change a priority for the May assessment. There were quite a few other changes, most of which were relatively minor adjustments in wording, instructions, tasks, formats, etc.

Stage Six - Modifying Program

Robust assessment data may suggest ways to change the delivery of the technology program in order to achieve better results. By noting what is not working and by identifying performance gaps, a team may focus on strategic opportunities and challenges which might otherwise have gone unnoticed.

The best approach is to begin with a list of needs generated during the assessment and then brainstorm possible responses for each of the areas needing attention. The team responsible for creating success should have a major role in determining just how the technology program should change.

Example: Most of the schools in Bellingham noticed that students had considerable difficulty with the thinking and communicating required by the accident problem. Students generated impressive-looking graphs from spreadsheet data that left the decades grouped with the accident data instead of placing them along the X axis as categories. The spreadsheets looked good but made no sense at all. Teachers agreed that students needed practice with the visual representation of data - creating charts to express ideas.
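
The charting lesson can be shown in a few lines. The sketch below uses matplotlib (an assumption, not a tool named in the article) and invented figures to plot accident counts with the decades treated as category labels along the X axis rather than as another column of data.

    # Decades belong on the X axis as category labels, not mixed in with the
    # accident counts as data. Figures are invented for illustration.
    import matplotlib.pyplot as plt

    decades = ["1950s", "1960s", "1970s", "1980s", "1990s"]
    deaths_in_thousands = [35, 47, 52, 46, 42]  # hypothetical accident figures

    plt.bar(decades, deaths_in_thousands)
    plt.xlabel("Decade")
    plt.ylabel("Accidental deaths (thousands, hypothetical)")
    plt.title("Decades as categories on the X axis")
    plt.show()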

Teachers also noted that most teams had little collaborative skill. They agreed to provide students with more opportunities to work in groups, and they created a series of lessons to show students how each of the collaborative sub-skills might work.

Persuasion in front of a group was yet another weakness. Students had little prior experience presenting a case in a persuasive manner. Teachers agreed to give them more practice and help them learn how to produce multimedia presentations which they could "stand and deliver" effectively.

Schools which took the data seriously created new, more powerful partnerships between teachers and library media specialists to provide students with more carefully targeted technology experiences.

Stage Seven - Repeating the Cycle

The most effective assessment programs provide daily and weekly opportunities for teachers (and preferably students) to gather data about performance which might help them alter their course and improve performance. The cycle described here is meant to be ongoing and continuous, as assessment guides learning and steers the program toward good ends.

 

Resources

Columbia's Reform Readings

A collection of articles in LiveText covering dozens of essential topics regarding educational change and restructuring.

http://www.ilt.columbia.edu/k12/livetext/readings/index.html

Criteria for Evaluating Use of Information Technology in K-12 Education . . . National Study of School Evaluation (1996). Technology: Indicators of quality information technology systems in K-12 schools. (Project directed by K. Fitzpatrick and J. Pershing). Schaumburg, IL: Author. Copies may be obtained by phone at (847) 995-9080, by fax at (847) 995-9088, or by mail: The National Study of School Evaluation, 1699 East Woodfield Road, Suite 406, Schaumburg, IL 60173, USA.

http://education.indiana.edu/keyfrick.html

Developing Educational Standards

An excellent annotated list of sites offering educational standards documents prepared by various states and professional organizations.

http://putwest.boces.org/Standards.html

Effectiveness

The Costs and Effectiveness of Educational Technology - November 1995. Data on the benefits of optimal school-wide technology...

http://www.ed.gov/Technology/Plan/RAND/Costs/costs4.html

The Hub: Regional Alliance for Math & Science: Assessment Resources

Extensive listing of assessment resources on the Internet.

http://ra.terc.edu/

Making the Connection

Federal study of technology integration into regular classrooms, commissioned by the now-defunct Office of Technology Assessment.

PDF file: http://www.ota.nap.edu/pdf/data/1995/9541.PDF

New Times Demand New Ways of Learning

This section of a report from NCREL details the indicators that educators and policy makers can use to measure the effectiveness of technology in learning.

http://www.ncrel.org/sdrs/edtalk/newtimes.htm

Report to the President on the Use of Technology

President's Committee of Advisors on Science and Technology, Panel on Educational Technology.

http://www.whitehouse.gov/WH/EOP/OSTP/NSTC/PCAST/k-12ed.html

Teacher/Pathfinder Assessment Resources

An excellent listing of educational assessment sites on the Internet.

http://teacherpathfinder.org/School/Assess/assessmt.html

Technology

NCREL's technology-planning resource page: Technology's Effectiveness in Education.

http://www.ncrel.org/sdrs/areas/te0cont.htm

Technology and School Reform

A wonderfully annotated list of resources provided to you by the folks at Armadillo, one of the great educational lists on the WWW.

http://www.rice.edu/armadillo/About/reform.html

What Does Research Say About Assessment?

More great information about assessment from NCREL.

http://www.ncrel.org/sdrs/areas/stw_esys/4assess.htm


Credits: The icons are from Jay Boersma.
Other drawings, photographs and graphics are by Jamie McKenzie.

Copyright Policy: Materials published in From Now On may be duplicated in hard copy format if unchanged in format and content for educational, non-profit school district use only. All other uses, transmissions and duplications are prohibited unless permission is granted expressly. Showing these pages remotely through frames is not permitted.
FNO is applying for formal copyright registration for articles.




