College Quarterly
Winter 2004 - Volume 7 Number 1

New Perspectives on Popular Culture, Science and Technology: Web Browsers and the New Illiteracy

by Elizabeth Charters, M.Ed.

Abstract

Analysts predict that the knowledge economy of the near future will require people to be both computer literate and print literate. However, some of the reading and thinking habits of current college students suggest that electronic media such as web browsers may be limiting the new generation’s ability to absorb and process what they read. Their approach resembles the actions of web browsers in several respects, especially in its lack of discrimination and its treatment of sets of words as decontextualized images divorced from the ideas behind them. One cause may be that, as media theorists McLuhan and Postman have pointed out, technological advances in the way we package information, including the printing press, television, and now computers, have influenced the way people use their minds to take in, structure and store information. This trend has a number of disturbing implications. According to information processing, psycholinguistic and brain theories, this treatment of complex syntactic structures as meaningless image patterns may inhibit brain development and lead to a sub-class that, while technically competent, cannot cope with the vast amounts of information generated by the electronic media.

Web Browsers and the New Illiteracy

Man the tool-making animal, whether in speech or in writing or in radio, has long been engaged in extending one or another of his sense organs in such a manner as to disturb all of his other senses and faculties. But having made these experiments, men have consistently omitted to follow them with observations. (The Gutenberg Galaxy, excerpted in McLuhan, E. & Zingrone, 1995, p. 100)

For the last 20 years, employers, educators and other societal planners have been trumpeting the need for everybody to become computer literate. They have much justification. Most jobs today are computerized in at least some ways, and the range of activities affected by computers is growing all the time. Thus, most would agree with OISE’s Robert Logan (1999) in his article “Proto-internets and the Rise of a Computerate Class” that those who are not computer literate are doomed to be limited in their life and work abilities. Logan, however, assumed that this computer literacy would be built upon a foundation of basic literacy in the more traditional areas of reading and writing. Such a foundation would still seem to be essential, in part because much computer information is transmitted in print form.

Just as literacy created a new social class and a new form of privilege and economic opportunity, the use of computers may do the same… I predict that the middle class will divide into two subsets, one merely literate and the other both literate and computer literate (Logan, 1999, p. 334, italics added).

What Logan did not see is the possible emergence of a third constituency, one that is computer literate but lacking in the ability to read and write effectively. There is evidence that this condition is becoming uncomfortably widespread. This paper will examine the evidence that as computer literacy increases, basic literacy may be diminishing. It will also look at possible reasons, which may in part stem from the influence of the electronic media in general, and the Internet and its browsers in particular. It is now forty years since Marshall McLuhan (see Marchand, 1989; E. McLuhan & F. Zingrone, 1995) pointed out the pervasive and subtle effects of “electric” media such as television on thought. Nonetheless, there has been little research done, in the areas of either media theory or cognitive psychology, on the effects of today’s most prevalent technologies on the way people read, think and interpret the world. A search of the Internet will not find critiques of the information overload it offers and the superficial way of “reading” demonstrated by its browsers—only praise. We benefit so much from the convenience of computers that, as Neil Postman (1992) has warned us, we have ignored the possibility that the computer “has usurped powers and enforced mind-sets that a fully attentive culture might have wanted to deny it” (p. 107). This paper will explore one aspect of these “mind-sets.”

Before discussing the effects of computers on literacy, it is necessary to define literacy, especially as it applies to adults of college or working age. By theoretical definition, reading is a complex action which requires many skills, including letter and word recognition, grapheme-phoneme correspondence, semantic knowledge, syntactic understanding, and comprehension and interpretation (Ely, 2001). For this reason, the standard for literacy varies with age group and purpose. Most would agree that, for individuals of college age and older, literacy should mean more than mere phonetic decoding and word recognition. Literacy at this level should include the ability to both “accurately comprehend the literal meaning of the text” and “reflect on the broader meaning” (Ely, 2001, p. 432). The great Brazilian educational philosopher Paulo Freire (1987) defined “true reading” this way: Reading does not consist merely of decoding the written word or language; rather, it is preceded by and intertwined with knowledge of the world. Language and reality are dynamically interconnected. (p. 29)

Robertson Davies, in a 1990 interview on literacy, went even further, identifying three levels. The lowest level might be called “functional literacy,” as it includes people with the survival skills to read labels on medicine bottles or the basic directions needed for driving a car. The second and most populous level involves people who have fairly complex jobs, can tackle written information connected to them, and may even read newspapers from time to time, but rarely think of reading a book. Davies’ third level includes those who “can define things accurately and…can discuss things intelligently…who enlarge human knowledge” (p. 52). This last group may always be a minority. However, the illiteracy which may be growing in the shadow of the Internet is attacking the largest group, who still safely meet the skill levels for Davies’ first category but may no longer qualify for his second. They are not “true readers” in Freire’s sense because they read only the “word” and do not connect it to the “world.” They are not stupid, unskilled or “information illiterate,” but they are at risk of being manipulated by the machines they are supposed to master. There is much evidence for such a new type of illiteracy, familiar to many college instructors.

In classrooms today, instructors are frustrated by the fact that their students can read in the sense of pronouncing words, but frequently seem unable to comprehend in any depth what they have read. Few can paraphrase ideas by expressing them in different words. Many cannot find implied main ideas in a passage or synthesize several details to recognize a general trend. Of course, such problems are not new. They have always been characteristics of weak or learning disabled students. What is new is their increasing prevalence. Students who are articulate speakers with above average technical skills, who in other respects participate and learn well, may still perform ineffectively as readers. The problem is too widespread, amongst too varied a population of students, to be dismissed as a result of any one factor such as low intelligence, foreign language background or poor motivation. The reasons must be more complex and may be quite controversial. This is why the possible causes of the “new illiteracy” are worth some detailed discussion.

Complaints of growing rates of illiteracy have been common in North America since the 1950s, when Rudolf Flesch (1955) wrote his first attack on the education system: Why Johnny Can’t Read. Flesch and his followers believed that literacy levels declined when schools abandoned intensive drill in phonics. However, it is probably no coincidence that this perceived fall in reading ability coincided with the spread of television. Since that time, the pedagogical pendulum has swung back from “whole language,” and phonics is again included in the elementary school curriculum, but literacy levels continue to sink. It seems worthwhile to ask whether the problem is at least partly due to television, which still dominates our leisure time, or even to the computer applications that have followed it into almost every North American home.

The problem with television is not so much its content but the medium through which it is presented. This was first pointed out by Marshall McLuhan in his pioneering exploration of the ways different communications media distort our senses by favouring one over another (Marchand, 1989). In McLuhan’s words:

A new technology extends one or more of our senses outside us into the social world, [and] then new ratios among all of our senses will occur in that particular culture… And when the sense ratios alter in any culture then what had appeared lucid before may suddenly be opaque. (The Gutenberg Galaxy, excerpted in McLuhan, E. & Zingrone, 1995, p.136)

McLuhan noted that television appeals to the sense of sound as well as that of sight and claimed that this affected people’s emotions rather than their reason. Yet television also deals in visual images, which are concrete, rather than the abstract ideas expressed by written language. It is also what McLuhan described as a “cool” medium. Thus it is a medium which encourages its viewers to absorb sensory impressions passively rather than wrestling with their deeper meaning.

Neil Postman (Postman & Paglia, 1999) went a step further than McLuhan’s focus on sensory input when he wrote: “Television, with its random, unconnected images, works against the linear tradition and breaks the habits of logic and thinking” (p. 288). Camille Paglia agreed: “Watching TV has nothing to do with thought or analysis. It’s a passive but highly efficient process of storing information to be used later” (p. 294). She defined the television experience, as McLuhan did, as sensual not abstract, emotionally involved not detached. Reading, on the other hand, “requires considerable powers of classifying, inference-making and reasoning…to weigh ideas, to compare and contrast assertions, to connect one generalization to another. To accomplish this, one must achieve a certain distance from the words themselves, which is…encouraged by the isolated and impersonal text” (Postman, 1985, p. 51). As McLuhan would define it, print is a “hot” medium that demands an active, intellectual response.

McLuhan’s famous book The Gutenberg Galaxy (originally published in 1962) centred on the earlier technology of the printing press and the revolutionary way it affected how modern people interpret the world around them. Print has been the dominant medium of communication for many generations. We no longer appreciate the way that our phonetic alphabet “dissociates or abstracts, not only sight and sound, but separates all meaning from the sound of the letters, save so far as the meaningless letters relate to the meaningless sounds” (McLuhan, E. & Zingrone, 1995, p. 141). Thus print is a mechanically produced symbol of a symbol (writing) of a symbol (phonetic speech) of thought.

All this disconnection between thought and medium has created a culture that approaches the world in a detached, sequential and fragmented way. “A place for everything and everything in its place is a feature not only of the compositor’s arrangement of his type fonts, but of the entire range of human organization of knowledge and action from the sixteenth century onward” (McLuhan, E. & Zingrone, 1995, p. 285). According to McLuhan, our intellectual disconnection from the world around us is due to writing, not television, but others would argue that this disconnection has proved advantageous. Lewis Mumford (1999) wrote: “Though print undoubtedly accentuated man’s natural eye-mindedness… it also freed the mind from the retarding effects of irrelevant concreteness” (p. 88). In other words, it promoted abstract thought.

College instructors expect their students to display such a capacity for detached, abstract thought. If this skill appears to be rare in today’s classrooms, television may indeed be partly responsible. Yet today’s college generation has been exposed to the print medium of computers almost as much as to the non-print medium of television. Should this not have helped them to become more, not less, capable of literacy and its associated skill of abstract thought? Perhaps, faced with the surfeit of information available on the Internet, they are becoming altogether too detached, unable to process the “data” scrolling past their eyes and convert it to meaningful knowledge (Shenk, 1997). One reason for this may be the form in which computers’ “information” is presented.

Few of those proposing computers as a gateway to higher levels of thinking may be aware that computers, even though they are dominated by the printed word, resemble televisions in some vital ways. Facing a computer screen is a very different experience from reading a book. Robert Logan (1999) noted this about the computer:

Reading from a CRT (or video screen) is an unnatural activity because of the way the brain processes video information. Reading is a left-brain activity, whereas viewing video is a right-brain one. The mosaic pattern of light pulses must be reassembled by the right brain to create an image (pp. 332-333).

Thus words on a computer screen are seen as images, more like the jpg images that may accompany them than like spoken or written language. Because they are made of light patterns, they are “read” in the same way photographs are “read,” described here by Postman (1985) in his book Amusing Ourselves To Death:

The way in which the photograph records experience is also different from the way of language. Language makes sense only when it is presented as a sequence of propositions. Meaning is distorted when a word or a sentence is, as we say, taken out of context; when a reader or listener is deprived of what was said before and after. But there is no such thing as a photograph taken out of context, for a photograph does not require one. In fact, the point of photography is to isolate images from context, so as to make them visible in a different way… (p. 73)

In this sense, then, interacting with a computer screen may be physically and psychologically more similar to watching television than to reading a book, print notwithstanding. And the method of absorbing information from a television screen is to “scan,” in a process Camille Paglia (1999) described as more sensorimotor than cognitive:

It’s like the airline pilot sweeping his eyes across his bank of instruments or the driver cruising down the interstate at high speed, always scanning the field, looking for the drunk, the hot rod, the police…None of these people…is thinking. They’re only reading the field and working by instinct, deciding in an instant where to…steer the jet or car. The decision is made by intuition, not by ratiocination (Postman & Paglia, 1999, p. 294).

Scanning can be a valuable reading skill, if defined as a technique for finding main ideas (Fraser & Brownhill, 1983). The new illiterates, however, have difficulty identifying main ideas in a passage because that would require them to look beyond the surface of the words. What they do is more like skimming, searching for a certain specific detail, usually a key word. For example, if they are asked to answer a question, they will search for key words from the question and then copy the text surrounding the key word as their answer. Some of these lifted passages may answer the question; others may not. The problem is that many of today’s students seem unable to discriminate between a quotation that vividly explains an essential point and one that is largely off topic. In this sense, they are acting like the most familiar “reading” machines, web browsers, rather than readers. They can search for patterns and recognize words, as images, but they seem oblivious to both context and meaning.
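
How mechanical this strategy is can be made concrete with a small illustration. The following Python sketch is hypothetical code written for this paper (it is not taken from any actual browser or search engine); it does everything the “key word and copy” approach does, and nothing more: locate a matching word and lift the words around it, with no judgement about whether the result answers anything.

```python
def browse_for_answer(text, keyword, window=12):
    """Mimic the 'key word and copy' strategy: return the words surrounding
    each occurrence of the keyword, verbatim, with no check of whether the
    lifted passage actually answers the question being asked."""
    words = text.split()
    hits = []
    for i, word in enumerate(words):
        if keyword.lower() in word.lower():   # surface pattern match only
            start = max(0, i - window)
            end = min(len(words), i + window + 1)
            hits.append(" ".join(words[start:end]))
    return hits   # every hit is copied; none is understood


passage = ("Flesch and his followers believed that literacy levels declined when "
           "schools abandoned intensive drill in phonics. However, it is probably "
           "no coincidence that this perceived fall in reading ability coincided "
           "with the spread of television.")

# Asked "Why did literacy decline?", this "reader" returns whatever text
# happens to surround the key word, relevant or not.
print(browse_for_answer(passage, "literacy"))
```

Nothing in the function distinguishes a passage that explains the decline from one that merely mentions the word; that act of discrimination is exactly the step the new illiterates, like the browser, leave out.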

Experienced readers often do skim material, because they believe it is either “familiar” or contains “unnecessary details.” On the other hand, effective readers also monitor their understanding as they read; if they are having difficulty grasping the intent of a certain passage they will make an effort to improve their comprehension by slowing down, looking up words or re-reading earlier passages (Pressley & Afflerbach, 1995). Inexperienced readers in today’s computer-trained generation may skim because they believe, from the example set by computers, that finding the required key words and the phrases around them is enough. If they do not comprehend something they read, they do not recognize the need for alternative reading strategies. The problem with computers and literacy is not so much that computers take time away from reading but that they overload their users with decontextualized, largely irrelevant information at the same time as they model and promote a form of “reading” that is little more than pattern recognition. Perhaps this is one reason why many of today’s college students are unable to read at a mature, literate level. Computers have made their superficial form of reading seem perfectly adequate to them.

Because of what computers commonly do, they place an inordinate emphasis on the technical processes of communication and offer very little in the way of substance…there never has been a technology that better exemplifies Marshall McLuhan’s aphorism ‘The medium is the message.’ The computer is almost all process. (Postman, 1992, p. 118)

A change in our approach to the printed word may be one of the dangers Neil Postman (1992) foresaw in his book Technopoly: The Surrender of Culture to Technology. Ideologically, “the computer redefines humans as ‘information processors’ and nature itself as information to be processed. The fundamental metaphorical message of the computer…is that we are machines” (p. 111). Beniger (1999) likewise found that “because most modern computers process digital information, the progressive digitalization of mass media and telecommunications content begins to blur earlier distinctions between the communication of information and its processing…as well as between people and machines” (p. 314). What is more, the propaganda surrounding high technology implies that people are not only “just” machines but inferior ones. Postman (1992) quoted Marvin Minsky, one of the pioneers in the field of artificial intelligence, boasting that some day computers will be so vastly superior to us that “If we are lucky, they will keep us as pets” (p. 111). Is it any wonder that those growing up surrounded by such claims find it hard to believe that their reading skills have the potential to be much more complex than those of computer programs?

The plain fact is that humans have a unique, biologically rooted, intangible mental life which in some limited respects can be simulated by a machine but can never be duplicated. Machines cannot feel and, just as important, cannot understand… artificial intelligence cannot lead to a meaning-making, understanding and feeling creature, which is what a human being is (Postman, 1992, pp. 112-113).

Postman’s championing of the unique human ability to think and find meaning echoed the theories of cognitive psychologists and psycholinguists. They have tried teaching language to computers, but have failed to create programs that can interpret or produce meaningful sentences (Bialystok & Hakuta, 1995). In order to understand the effect of television, computers and other technologies on literacy, then, it is necessary to go beyond the media-centred theories of McLuhan and Postman and look at some of the psychological and physiological science explaining how the brain and language work—in other words, how people think and how they communicate those thoughts.

“Language is a mirror of mind in a deep and significant sense. It is a product of human intelligence, created anew in each individual by operations that lie far beyond the reach of will or consciousness” (Chomsky, 1975, p. 4). Researchers are barely beginning to decipher the physical and conceptual complexities of the way we think and communicate. Although its critics, such as Neil Postman (1992), dismiss it as overly mechanistic, information processing theory provides a basic framework for a more detailed understanding. The structure outlined by this theory may be familiar: data enters from the senses and is “processed” in working, or short-term, memory before it is stored for later retrieval in our permanent databanks called long-term memory (see Figure 1). However, applied to the reading of the new illiterates, it may be instructive.

Figure 1
Information Processing Theory
(Adapted from Farquhar & Surry, 1995, p. 5)

Readers emulating web browsers scan (or skim) printed pages using the same sense any reader uses: vision. But what happens when those scanned words appear in working memory? In order to learn, readers need to perform such meaning-making mental activities as “arranging a mental organization of information, developing mental hierarchies or categories, producing mental images, and performing mental repetitions; events that assist in the encoding of information to the long-term memory state” (Farquhar & Surry, 1995, p. 5). But a web browser does none of these things. It merely collects and stores texts as a series of meaningless patterns. This is why a key word search will produce listings of tens of thousands of files, most of them useless to the searcher. A search on Google.ca for the key words “web browser” and “literacy” produced 9,370 hits, including websites for programs that boost computer literacy, directions for the use of web browsers, course curricula, book reviews and listings of ESL resources on the Internet. None of the first 50 websites included the subject relevant to this paper: the effect of web browsers on literacy. This is not surprising. The web browser cannot understand the words it searches for and thus cannot discriminate between contexts that are valid or invalid. Likewise, human readers who operate like web browsers also have trouble with discrimination. Furthermore, they are unlikely to remember much of what they read because it is largely meaningless to them. They do not think about it, so it passes through working memory just long enough for them to copy the words. But duplicating a set of words is not the same as constructing the ideas behind them or thinking about them.
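
Why such a search is so indiscriminate can also be sketched in a few lines of hypothetical Python (again written only for illustration; this is not how Google or any real search engine is implemented). To a keyword index, a page advertising a computer-literacy course and a page actually discussing the effect of browsers on reading are indistinguishable, because both are matched on surface strings rather than on the ideas those strings stand for.

```python
# A toy keyword index: each "web page" is treated as nothing but a string.
documents = {
    "course_page": "This course builds computer literacy and teaches the use of a web browser.",
    "howto_page":  "How to use a web browser: a short information literacy tutorial for beginners.",
    "this_paper":  "Web browsers may be changing what literacy means for college readers.",
}

def keyword_score(text, query_terms):
    """Count how many of the query terms occur in the text (case-insensitive).
    Nothing here represents what the text is about."""
    lowered = text.lower()
    return sum(term.lower() in lowered for term in query_terms)

query = ["web browser", "literacy"]
for name, text in documents.items():
    print(name, keyword_score(text, query))
# All three documents score identically (2 of 2 terms found): the index
# cannot tell the course listing or the how-to page from the one document
# that actually discusses browsers and literacy together.
```

The words are found and can be copied, but nothing about their meaning is ever represented.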

This is because words do not have precise meanings but are “generalizations” that involve thought as well as sensation. Their purpose is communication, or as described by Lev Vygotsky, one of the seminal thinkers in cognitive psychology, “understanding between minds” or “shared thought” (1962, pp. 5-6). Their complexity is such that a single word may have very different meanings to two different readers. This is the great strength of human language: its “facility for pulling together many concepts under one symbol—[makes] it possible for people to establish ever more complex concepts and use them to think at levels that would otherwise be impossible” (Damasio & Damasio, 1992, p. 89). The only way for a reader to determine the precise and complex meanings intended by authors, and thus share their thoughts, is to determine “the relationship between text and context” (Freire & Macedo, 1987, p. 29). This sort of “concept creation” should be, Vygotsky argued, “a creative, not a mechanical, passive process” (1962, p. 54), the exact opposite of mechanical, passive browsing, which cannot analyze context or understand the words it scans. As psycholinguist Steven Pinker (1994) wrote in The Language Instinct: “Human communication is not just a transfer of information like two fax machines [or websites] connected with a wire; it is a series of alternating displays of behaviour by sensitive, scheming, second-guessing, social animals” (p. 230). Is the current generation losing the ability to think like living beings in society’s rush to make them behave more like machines?

Brain scientists are only beginning to describe the incredibly complex functioning of the human brain, which far surpasses that of any existing or projected machine. In the “Prologue” to The Gutenberg Galaxy (1962), Marshall McLuhan presaged some of the physiological consequences of media change that are only now beginning to be understood, quoting J.Z. Young’s Doubt and Certainty in Science:

The effect of stimulations, external or internal, is to break up the unison of action of some part or the whole of the brain…the disturbance in some way breaks the unity of the actual pattern that has been previously built up in the brain. The brain then selects those features from the input that tend to repair the model and to return the cells to their regular synchronous beating… . We tend to fit ourselves to the world and the world to ourselves. (McLuhan’s italics, McLuhan, E. & Zingrone, 1995, p. 100)

More recently, scientists have started to map out the cognitive and linguistic functions of the brain, and they are finding that these may indeed vary from individual to individual according to the stimulation the brain receives. Thus brain function may be influenced not only by education but also by communications media.

Damasio and Damasio (1992) identified three areas of the brain that process language: one that represents sensory impressions (between a body and its environment), one that generates and processes the words and rules that transform concepts into language, and one that mediates between the other two. “It can take a concept and stimulate the production of word-forms, or it can receive words and cause the brain to evoke the corresponding concepts” (p. 89). Such a mediation facility may be under-stimulated by a browser approach to reading that favours sensory input (appearance) and neglects concept creation (meaning). And such under-stimulation may have long term consequences. For example, the mediation described above may follow two different routes within the brain. One, the sub-cortical route, seems to involve lower level thinking such as mere word recognition, while the cortical route implies higher-level or “associative” thinking (Damasio & Damasio, 1992). This is the sort of thinking expected of fully literate adults. Yet Damasio and Damasio (1992) speculated that “It is likely that both cortical ‘associative’ and sub cortical ‘habit’ systems operate in parallel during language processing. One system or the other predominates depending on the history of language acquisition and the nature of the item” (p. 93, italics added). Thus reading through the methods modeled by web browsers and other passive electronic media may cause people to avoid their associative or cortical circuit. The lower level (the one involved in pattern recognition tasks such as key word search) becomes the default process they tend to use for all reading tasks. Such a development should indeed cause concern, for it may have wide ranging consequences.

The treatment of sets of words as rigidly fixed units instead of malleable representations of thought is not exclusive to the computer, but goes back to the early days of the printing press. Walter Ong (1999) in “Print, Space and Closure” observed that “alphabet letterpress printing, in which each letter was cast on a separate piece of metal, or type, marked a psychological breakthrough of the first order. It embedded the word itself deeply in the manufacturing process and made it into a kind of commodity” (pp. 97-98). Earlier, in 1958, McLuhan had written: “The fact that print fosters the consumer habit of mind, the readiness to accept completely processed and packaged goods, is a side of print that has been little considered” (McLuhan, E. & Zingrone, 1995, p. 286). It is no coincidence that the concept of copyright—“private ownership of words”—emerged alongside the printing press (Ong, 1999, p. 104). Psychologically, printing a passage of words may make them seem inviolable: unable to be questioned or added to, particularly to inexperienced readers. “The printed text is supposed to represent the words of an author in definitive or ‘final’ form…[since] the text does not accommodate changes…so readily as do written texts” (Ong, 1999, p. 105). Such excessive reverence seems to be disappearing with computers and their ability to copy and alter text with ease. Yet a computer may manipulate the words themselves, but not the meaning behind them, which it cannot comprehend. The resulting attitude toward computer-generated text—that its printed words can be lifted intact but its meaning cannot be questioned or recast—is one that encourages widespread plagiarism. This is harmful not only in its dishonesty to the original authors of the ideas but also to the readers, who thus avoid having to wrestle with meaning. They end up cognitively impoverished, and less and less able to swim through the flood of information provided by the “electric” media in general, and the Internet especially.

In the Foreword to Amusing Ourselves to Death, Postman (1985) discussed the two great 20th century dystopias portrayed in George Orwell’s Nineteen Eighty-Four and Aldous Huxley’s Brave New World. He noted that Orwell had warned of governments that would ban books, but that Huxley was more prescient because he “feared those who would give us so much [information] we would be reduced to passivity and egoism…[so that] the truth would be drowned in a sea of irrelevance” (p. vii). Postman (1985) believed that Huxley’s warning was fulfilled by television, but twenty years later we should consider whether the information overload coming from the Internet also represents a fundamental challenge to our intellectual culture. As David Shenk wrote in 1997, “Eleven billion words and 22 million Web pages bring us more information than ever before and, because of this, less information shared” (p. 125).

At a time when computers are insidiously eroding a generation’s opportunities to develop higher-level thinking, such thinking is more and more necessary to economic survival. Peter Drucker (1993) in his book Post-capitalist Society urged that schools of the 21st century place more emphasis on providing “universal literacy of a high order—well beyond what ‘literacy’ means today” (p. 198). He argued that the emerging “knowledge society” needs people who know “how to learn” and can continue to learn throughout their lives (p. 201). Ironically, he recommended that such enhanced reading ability could best be taught through computers. This is a point that deserves to be questioned and researched with more care. While computer literacy is certainly a necessity in today’s world of work, educators and employers may not be aware of the tendency of computer applications such as web browsers to alter people’s perceptions in subtle ways that undermine their capacity for higher-level thinking. David Shenk (1997), distinguishing between superficial data and knowledge, argued that “the disenfranchised citizens of our country are not in need of faster access to bottomless wells of information. They are in need of education” (p. 211). He feared that in the future only an educated elite would have the skills to sort through the mountains of information provided by the media and reject that which was unreliable or irrelevant. Robert Logan (1999) has predicted that “the control of information…will become more and more the key to success and power” (p. 335) and urged that our schools place more emphasis on building computer literacy. However, his ideal “computerate” class will be a small one unless equal care is given to promoting a more traditional literacy as well, one that enables people to think critically and meaningfully about the information they control, no matter through what medium it is communicated.

References

Bialystok, E. and Hakuta, K. (1995). In other words: The science and psychology of second-language acquisition. New York: Basic Books.

Beniger, J. (1999). The control revolution. In D. Crowley and P. Heyer (Eds.). Communication in history: Technology, culture, society. 3rd Ed. New York: Addison Wesley Longman, pp. 305-314.

Chomsky, N. (1975). Reflections on language. New York: Pantheon Books.

Damasio, A.R. and Damasio, H. (1992, September). Brain and language. Scientific American, 89-95.

Davies, R. (1990). A chat about literacy. In More than words can say: Personal perspectives on literacy. Toronto: McClelland and Stewart, pp. 48-55.

Drucker, P. (1993). Post-capitalist society. New York: Harper Collins.

Ely, R. (2001). Language and literacy in the school years. In J. Berko Gleason (Ed.). The development of language. 5th Ed. Boston: Allyn and Bacon, pp. 409-454.

Farquhar, J.D. and Surry, D.W. (1995). Reducing impositions on working memory through instructional strategies. Performance and Instruction, 34(8), 4-7.

Flesch, R. (1955). Why Johnny can’t read—and what you can do about it. New York: Harper & Row.

Fraser, I. and Brownhill, K. (1983). Reading is not for me. Toronto: Clarke, Irwin & Co.

Freire, P. and Macedo, D. (1987). Literacy: Reading the word and the world. Westport, CT: Bergin & Garvey.

Logan, R. (1999). Proto-Internets and the rise of a computerate class. In D. Crowley and P. Heyer (Eds.). Communication in history: Technology, culture, society. 3rd Ed. New York: Addison Wesley Longman, pp. 331-335.

Marchand, P. (1989). Marshall McLuhan: The medium and the messenger. Toronto: Random House.

McLuhan, E. and Zingrone, F. (Eds.) (1995). Essential McLuhan. Concord, ON: Anansi.

Mumford, L. (1999). The invention of printing. In D. Crowley and P. Heyer (Eds.). Communication in history: Technology, culture, society. 3rd Ed. New York: Addison Wesley Longman, pp. 85-88.

Postman, N. (1985). Amusing ourselves to death: Public discourse in the age of show business. New York: Penguin Books.

Postman, N. (1992). Technopoly: The surrender of culture to technology. New York: Vintage Books.

Postman, N. and Paglia, C. (1999). Two cultures—Television versus print. In D. Crowley and P. Heyer (Eds.). Communication in history: Technology, culture, society. 3rd Ed. New York: Addison Wesley Longman, pp. 288-300.

Pinker, S. (1994). The language instinct. New York: Perennial Classics.

Pressley, M. & Afflerbach, P. (1995). Verbal protocols of reading: The nature of constructively responsive reading. Hillsdale, NJ: Lawrence Erlbaum Associates.

Shenk, D. (1997). Data smog: Surviving the information glut. San Francisco: HarperEdge.

Vygotsky, L.S. (1962). Thought and language (E. Hanfmann & G. Vakar, Eds. and Trans.). Cambridge, MA: MIT Press.


Elizabeth Charters, M.Ed., has been a community college professor since 1989, specializing in courses related to language and communications. She is currently teaching English Communications for Adult Academic Upgrading and College Preparation at Seneca College. Her Master of Education thesis studied the relationship between writing and higher level thinking. She can be reached at (416) 491-5050, ext. 6579.


• The views expressed by the authors are those of the authors and do not necessarily reflect those of The College Quarterly or of Seneca College.
Copyright © 2004 - The College Quarterly, Seneca College of Applied Arts and Technology