The Saga of Amy and Susan: A "Future Perfect" ILS
The Great Debate: Are CAI and ILS worth the investment?
An integrated learning system holds great promise in a "future perfect" sense, a phrase borrowed from Stanley Davis's book of that title (1987).
The trouble with much planning for the future, according to Davis, is the tendency of most organizations to move into the future with visions limited by old paradigms and patterns of behavior.
So it has too often been with educational technologies. Instead of introducing ILS systems which truly revolutionize the learning of students, we see systems which emulate smokestack learning systems and premises. Instead of seeing ILS systems which empower students to perform higher level thinking, we see systems tightly correlated to smokestack standardized tests which set low expectations regarding student reasoning. Instead of seeing ILS systems which deliver highly customized, diagnostic and prescriptive instruction, we see systems which move students along relatively crude learning paths in primarily linear fashion with little adaptation to match individual learning styles and preferences, extremely limited corrective interventions and very little branching to meet individual needs. Instead of seeing a system which opens the windows of the classroom to the outside world by virtue of electronic data systems supporting "real time research," we see token gestures such as encyclopedias on CD-ROM.
If we reach out into the future to the year 2000 and ask what an integrated learning system might do for student learning, we end up with a very different system than what we can now see coming to market. Unfortunately, it seems that many vendors have failed to read either Davis' book or the story of Jack and the beanstalk. Both Davis and Jack understood the importance of magic beans to those who wish to reach the sky.
Let's begin by challenging some of the old paradigms which so often seem to hold us back from realizing technology's potential.
The first paradigm is the notion that practice is the equivalent of learning. A remarkably large number of existing smokestack remedial programs are built on this notion. You begin by dividing reading or math into thousands of discrete skills. You then find out on which of those skills the student cannot demonstrate mastery. You purchase thousands of pages of ditto sheets closely tied to those discrete skills, and you condemn the remedial student to a dozen years of practice.
What is the cure rate for remedial students? What percentage leaves the remedial program for successful integration into the regular classroom as opposed to the street?
The basic flaw of this kind of program is the assumption that "practice makes perfect."
Unfortunately, practice is not the same as learning. This paradigm developed over the years as some educators felt it was their job to teach students rules and patterns rather than real problem-solving and reasoning.
Think of these rules and patterns as scripts. The job of schools according to these educators was to teach students scripts which would serve them well in life. By graduation every student should possess a supply of scripts to match every occasion. School was a long exercise in memorizing other people's scripts.
Rote learning and rule learning might have suited an industrial society which left little room for what Toffler calls "brain workers" and those capable of independent thought. Such blind commitment to the learning of scripts might have made sense for a society experiencing limited and predictable change, since scripts might actually work twenty years after learning them, but script learning makes little sense in an Age of Information which will require a very large percentage of our citizens to be brain workers -- individuals capable of writing new scripts to fit rapidly changing conditions.
True remediation equips students with new thinking tools which allow for dramatic shifts in performance. The student learns to think about her or his own thinking patterns. He or she learns to create new scripts when the old ones cease to work well.
Good teachers have always sought to empower students to "make their own meaning," another way of referring to script writing. When a student displays a pattern of errors, the teacher tries to bring the student's thinking out into the open where it may be examined.
"How did you come to that answer?" asks this teacher.
Once the student's thinking emerges, it is possible to figure out if the student is using the wrong script and how the thinking might be shifted to avoid similar errors in the future. Unfortunately, teachers operating under the practice paradigm rarely take the time to encourage such reflection. Wrong patterns often persist. The learning system remains virtually undisturbed.
Related to the practice paradigm is the more is better paradigm: "The more you practice the better you get."
Students who memorize scripts make poor employees and citizens for an Information Age because they have little skill in "thinking on their feet." When "push comes to shove" they are quick to make excuses rather than solutions. "It's company policy," they whine. "It's not my department." "I only work here."
The practice paradigm and the script memorization paradigm have served too long and too powerfully. They bear great responsibility for the poor showing of our students on the tests of reasoning included in the National Assessment of Educational Progress from ETS in Princeton, tests on which fewer than 10 percent of eleventh graders are capable of performing tasks requiring significant reasoning.
An integrated learning system ought to solve this problem for us by teaching students to be script writers and problem solvers rather than script followers with little skill to tackle anything above a two step word problem.
What actually happens when students log onto ILS and CAI now? You might spend some time on computer software pretending to be a student experiencing difficulty. Life is not too bad for the capable student, but see how it feels to be confused. What does the computer do with you? How patient is it really?
Make a wrong response and most programs ask you to "try again."
If you persist in making a wrong answer, many programs supply the right answer and move you on to the next item. Some programs will provide a prompt or clue to guide your thinking.
If you still persist in making a wrong answer, these programs, too, usually give up and send you forward. Once in a while, you may find a program which recognizes the fact that you just do not seem to understand. This program halts the forward progression and teaches you a lesson (script) to match the item with which you are having problems. This lesson is a basic explanation of how your mind is supposed to be working. Once complete, you are back on track with some more practice.
What happens if you still have problems with the practice items? The best I have seen thus far is a second and third exposure to the same explanatory lesson.
A great teacher watches for such patterns of errors and works with the student until some instructional intervention helps the student break the pattern, untie the snarled knot and make meaning. This teacher has a repertoire of instructional strategies which touches all learning styles and includes perhaps dozens of ways to help a student think about some process such as re-grouping. This teacher does not simply repeat the same explanation over and over again louder and louder. The goal is to help the student understand the process, not just memorize it.
That is what computers and ILS and CAI should do.
The philosophical and psychological paradigm underlying this kind of learning is "constructivism." The learner constructs meaning as she or he passes through life. The learner writes the scripts and re-writes the scripts until they work. The teacher helps the learner acquire the script editing and writing skills. Ultimately the learner becomes quite independent, what is often called "a lifelong learner."
How might a "future perfect" integrated learning system support the development of such thinkers and learners?
We might begin by establishing clear goals:
1) It is the job of such software to produce thinkers and script writers rather than automatons.
2) It is the job of such software to equip students with thinking tool-kits which support inventive thinking.
3) It is the job of such software to encourage reflection about thinking and problem-solving.
4) It is the job of the computer to customize the learning experience to match the needs of the learner, providing a rich menu of learning options likely to optimize the speed and depth with which the learner acquires new skills and capacities.
These goals suggest software which would act in the following ways . . .
Time on the computer would be devoted largely to problem-solving and real life applications rather than isolated practice. These problems would flow out of a "whole learning" approach rather than the currently fashionable "paralysis through analysis" approach.
Instead of segmenting skills into thousands of separate pieces, each with their own practice experiences, the student would be asked to apply arrays of skills to more complex problems requiring a balanced and holistic approach. These problems would not step up to be solved in perfectly ordered patterns matching some publisher's conception of appropriate skill development. They would appear with some of the surprise and randomness characteristic of problems from real life. The student's own curiosity might also be a driving force in the selection process.
Many of these problems - as is true of problems in real life - would be somewhat new to the student, though all would have some familiar elements. The software would provide guidance and support to the student as she or he moves toward solution. "Help" would always be an available option.
If the student does not have the foggiest notion of where to begin, she or he might ask for a hint. "Where do I start with this one?" asks Amy.
The computer (who has been named "Susan" by Amy) has been watching this particular student's problem-solving for more than two years, and like any good tutor, the computer remembers recent struggles with understanding problem-solving procedures. Using basic artificial intelligence systems and voice recognition, the computer has learned a great deal about the development of Amy's learning capacities.
"Do you remember Polya's basic problem-solving strategies? Perhaps one of those would help you get started?"
Amy frowns and searches her memory.
"Don't tell me!" she whispers fiercely. "Let's see if I can remember."
Susan, the computer, is wise enough to remain silent.
Amy easily remembers "chunking," her favorite. "I like that one because it makes big problems into little problems."
She asks herself if that "heuristic" would be helpful for this particular problem and ends up shaking her head.
"Draw a picture?" She tries some sketches on Susan's tablet but finds little insight emerging.
"Well, maybe one of the others I've forgotten will help me more. Could you show me the list again to refresh my memory?"
Susan meets the request, placing the two strategies Amy has already remembered at the top of the list.
"Analogous reasoning?" quizzes Amy. "What does that mean? I don't remember that one."
The computer's screen changes to a definition and an example.
"Look for elements in this problem that are something like problems you have encountered in the past and see if there is a strategy that helped you then which might be helpful with this problem."
"Oh yeah, I remember now. That just might work."
Amy proceeds to apply different problem-solving strategies until the knot loosens and a solution comes into view.
"And what did you learn about problem-solving today?" Susan asks.
"Oh really, Susan, do we have to go through that again?"
Susan is again wise enough to remain silent.
"You're remaining silent. Smart! Who says computers are stupid? I know all the arguments about my making up my own mind and learning about learning. It's just that sometimes I wish I could let you give me all the insights and answers like teachers used to in those smokestack schools."
The computer seems to smile. "As a special treat, Amy, I will replay history for you this one time."
The computer takes on a serious, professorial tone.
"Now, class, today we have seen the power of Polya's fourth problem-solving strategy . . . analogous reasoning . . ."
"No! Stop! Enough! I'd rather do it myself."
Amy completes the metacognitive exercise and then selects a learning adventure from a recently upgraded menu.
"I'd like to practice my negotiating skills. Let's go back in history to the time when Columbus was about to launch his second trip to the New World. I want to play the role of Isabella."
She frowns. "No, on second thought, let's try the sale of Manhattan. I want to play the role of the Native Americans and see if I can win a better deal."
The computer asks Amy what information she would like before playing the simulation.
"Let's start with background concerning each of the key players. I especially want to know something about the interests, personalities and culture of the colonists. How do they play this negotiating game? Do they ever play dirty? What are the communication patterns that may prove confusing?"
Susan hesitates for a few nanoseconds while searching several hundred databases, and then the screen fills with the grimly serious face of the leading negotiator. He is standing with an armful of blankets and jackets. The computer begins sharing a profile of this historical figure in a professorial voice.
"Sorry, Susan, but I'd rather read the text of your comments. I can read faster silently and stop to think while I am reading. If I was going to listen to you, I would insist upon a different voice. Really! That stuffy voice is a bore."
Once Amy has done her homework and studied up on her opponents, she asks to see a replay of the actual negotiations - thirty minutes of highlights. This done, she initiates a strategy session.
"I want to develop a strategy, Susan. Can you give me a list of prompts to help me think about this upcoming session? Let's try the Getting to Yes model."
Some of you will recognize glimpses of the Knowledge Navigator video circulated by Apple a few years back. The paradigm is simple: all students can learn, all children can learn to reason, and an integrated learning system can provide the data, information and thinking tutorials to support the development of students' script-writing and independent problem-solving.
How practical is this "future perfect" conception of ILS? We already have most of the pieces required to produce these kinds of learning experiences. Cathleen Wilson of Bank Street developed a learning venture prototype for RCA several years ago called "Palenque" which allows a student to explore the Mayan ruin "at will" as if standing on the grounds themselves.
What we lack is the commitment, the market and the funding. As long as the smokestack paradigms dominate education, the publishers will keep turning out smokestack products, afraid that there will be no market for radical new products. Even Socrates had difficulty selling his community on the benefits of reasoning several thousand years ago. They finally executed him for the crime of teaching his students to ask questions.
ILS is an idea whose time has come, but the horse-and-buggy prototypes currently available fall far short of their promise. Let's see President Bush put some America 2000 dollars behind development of an ILS system which is "future perfect," one which will help produce a generation of brain workers and independent problem-solvers. If we truly wish to become Number One in math and science by the year 2000, we will need script-writers, not script followers.
The Great Debate: Are CAI and ILS Worth the Investment?
A careful review of the research might cause many people to hesitate before investing large sums in integrated learning systems or CAI. While there are dozens of studies claiming miracle results for ILS, it is difficult to find well designed studies which report such significant improvements in student performance.
Henry J. Becker's review of the research is worth quoting at some length here:
In order to prepare a best-evidence synthesis on the effects of computer-based instructional programs on children's learning, we began a search for empirical research about the effects of computer-based approaches on basic curricular categories of learning (math, language arts, writing, science, etc.) in grades 1 through 12. We limited the search to reports produced since 1984 and including achievement measures as outcomes.
The 51 reports that were obtained included 11 dissertations, 13 reports of school district evaluations, 15 published articles, and 12 unpublished papers.
Of the 51 studies, 11 were eliminated because they had no comparison group, and measures of effects were limited to gains on standardized achievement tests over varying periods of time.
Of the remaining 40 studies, eight were excluded from consideration because they did not employ pre-test controls and neither classes nor students were randomly assigned. Lacking both pre-tests and random assignment, it was impossible to equate the computer-using groups and the traditional instruction groups.
Seven more studies were removed because the treatment period was shorter than eight weeks and, finally, we chose not to consider eight studies involving fewer than 40 children, or where each treatment involved only a single class of students and the experimental and control classes were taught by different teachers.
What about the major element of experimental design -- random assignment? Of the remaining studies, only one randomly assigned pupils to classes, which were in turn randomly assigned to computer-assisted or traditional treatments.1
It is not unusual to attend a workshop session at a national meeting during which a proponent of ILS distributes copies of graphs showing dramatic student results from studies which would not pass any of Becker's tests. It is also not unusual for vendors to distribute such graphs as "evidence" to justify huge expenditures.
If experimental design means anything, it is a set of standards designed to ascertain whether study effects actually result from the intervention itself rather than from confounding factors such as the high quality of volunteer instructors, the students themselves or the well-documented Hawthorne effect.
How is it that so many educators and vendors seem relaxed about suspending these standards? At times this scenario seems reminiscent of "The Emperor's New Clothes."
Another meta-analysis of student learning with computer-based instruction (Kulik and Kulik, 1989) claims that students "generally learned more in classes in which they received computer-based instruction. The average effect of computer-based instruction in all 254 studies with examination results was to raise examination scores by 0.30 standard deviations, or from the fiftieth to the sixty-second percentile."2
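The percentile translation in the Kuliks' claim is simply the standard normal distribution at work: a gain of 0.30 standard deviations places the average treated student at the point below which about 62 percent of the control group falls. As a quick check of that arithmetic (my own sketch, not part of the study), using only the Python standard library:

```python
# Check of the Kuliks' percentile arithmetic: a gain of 0.30 standard
# deviations moves an average student from the 50th to roughly the
# 62nd percentile of the control-group distribution.
from statistics import NormalDist

effect_size = 0.30  # mean gain, in control-group standard deviation units

# Standard normal CDF gives the fraction of the control group scoring
# below a student who gained 0.30 SD from the mean.
percentile = NormalDist().cdf(effect_size) * 100

print(round(percentile))  # prints 62
```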
Even though the Kuliks report positive effects, they note several discrepant findings which deserve attention and thought.
1) Study results were more positive for published studies than they were for unpublished studies.
2) Study results were more positive when control and experimental groups were taught by different teachers than when taught by the same teachers.
3) Study results were more positive for short studies than they were for long studies.
They did not find significant differences between true experimental studies and quasi-experimental studies.
They pose four main hypotheses for these findings: editorial gate-keeping, experimental design flaws, novelty effect and instructional quality.
What seems missing from this kind of meta-analysis is research comparing the relative effects of various investments. For example, given a choice of spending a million dollars on ILS systems as opposed to staff training in instructional strategies, which pays the greatest dividends? Which is better for students, access to computer training or access to a human tutor?
In a 1991 update of their research, Kulik and Kulik acknowledge this hole in the research:
Finally, this meta-analysis produced no evidence on what is certainly one of the most important questions of all about CBI: Is it cost effective? An early analysis by Levin, Destner, & Meister (1986) had suggested that the costs of CBI were too great given its record of effectiveness. Levin et al. suggested that nontechnological innovations, such as tutoring, produced results that were just as good at a lower cost. Later reanalyses, such as those by Blackwell, Niemiec and Walberg (1986), have suggested that computer-based instruction is not only a cost effective alternative to traditional instruction but that it is far more cost effective than such non-technological innovations as tutoring.3
Unfortunately, there is hardly any research available to answer these kinds of questions. Most districts seem to go one way or another. Few seem willing to set up a true experiment with some students working with computers, others working with well trained teachers and still others remaining in traditional programs.
Becker and others have called for a fresh approach to research in this regard, and one might expect the Federal Government to take a more active role in organizing such balanced studies. It may be too much to expect objectivity from vendors.
Any district considering adoption of CAI or ILS might try the following strategies to help evaluate the claims made by vendors:
1. Ask for a complete list of districts which have tried the program. When vendors display their victories, they should also reveal their embarrassments. Careful telephone inquiries can help cut through the hype and sales pitches to find out what people really think, although it is often difficult to find administrators willing to admit million dollar mistakes.
2. Ask for written evaluation reports directly from those districts showing student outcomes, staff attitudes, etc.; look for statistically significant changes and read the fine print. Sometimes statistically significant changes may amount to a handful of extra correct responses. Translate the numbers into tangible results.
3. Assess the bias of the principal evaluating team. Do they have a stake in the outcome or are they independent and uninvolved?
4. Check to see if evaluation designs included comparable control groups and meaningful pre- and post-testing. Beware of volunteers, beware of the Hawthorne Effect, and ask which students were involved.
5. Ask for results over time (3-4 years). There is some evidence that early effects fade over time as the novelty wears off. Many reports available from vendors report only a single year.
6. Translate the total annual cost of the program (including everything, such as staff development, maintenance, etc.) into an equivalent number of staff positions or hours of participation in staff development programs to appreciate the true cost.
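The translation suggested in point 6 is simple division. A minimal sketch, with entirely hypothetical dollar figures (the budget, salary and training costs below are illustrative assumptions, not numbers from the article or any vendor):

```python
# Translating an ILS budget into staff equivalents, as point 6 suggests.
# All dollar figures are hypothetical illustrations.
annual_ils_cost = 300_000     # licenses, hardware, maintenance, training
teacher_cost = 50_000         # one position, salary plus benefits
staff_dev_day_cost = 200      # one teacher-day of staff development

positions = annual_ils_cost / teacher_cost
training_days = annual_ils_cost / staff_dev_day_cost

print(positions)       # prints 6.0  (equivalent teaching positions)
print(training_days)   # prints 1500.0  (days of staff development)
```

Seeing the same budget expressed as six teaching positions, or fifteen hundred days of staff development, makes the opportunity cost of an ILS purchase concrete for a school board.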
Given the fact that few districts have all of the equipment they would like to own, the ultimate issue is whether or not CAI and ILS are the best uses of information technologies. If students have limited access, how might they best spend their time? Is reading practice more valuable than learning to compose on the word processor? When does a student learn the information skills Toffler paints as essential to citizenship in the Information Age? If I had several million dollars to spend on hardware, I would vote for tool uses until ILS can deliver reasoning and problem-solving at a higher level.
1. Becker, Henry J. "The Impact of Computers on Children's Learning." Principal, November, 1988.
2. Kulik, James A. and Chen-Lin Kulik. "Effectiveness of Computer-based Instruction." School Library Media Quarterly, Spring, 1989.
3. Kulik, James A. and Chen-Lin Kulik. "Effectiveness of Computer-based Instruction: An Updated Analysis." Computers in Human Behavior, Vol. 7, 1991, p. 91.
Copyright Policy: Materials published in From Now On may be duplicated in hard copy format for educational, non-profit school district use only. All other uses, transmissions and duplications are prohibited unless permission is granted expressly. Showing these pages remotely through frames is not permitted.
FNO is applying for formal copyright registration for articles.