Audio-Supported Reading for Students Who are Blind or Visually Impaired
Paper prepared for the National Center on Accessible Instructional Materials
By Richard M. Jackson, Ed.D. (Boston College and CAST)
With support from Ike Presley, M.Ed. (AFB)
The opportunity to access and interact with text, both printed and electronic, continues to be fundamental to education in the information age. It is often stated that students first learn to read so that they can later read to learn, but students who are blind or have low vision are particularly disadvantaged when it comes to working with printed text. While they may readily learn to read, using either braille or magnified print, they may struggle to reach reading rates adequate for achieving their true potential as they move through the educational system and on into employment. While some readers of braille attain rates comparable to those of average print readers, most typically read at rates of one third to one half those of their sighted peers (Ferrell, Mason, Young, & Cooney, 2006; Legge, Madison, & Mansfield, 1999; Morris, 1966; Simon & Huertas, 1998). Readers of enlarged or magnified print fare only slightly better (Corn, et al., 2002). Remarkably, these comparative averages have remained stable for decades, suggesting that they are unrelated to shifts in teaching practices or opportunity to learn (OtL). Rather, these low reading rates are more likely the direct result of sensory limitations occasioned by blindness or visual impairment. Consequently, there is widespread agreement among educators that, while exceptions exist, students with vision impairments take appreciably longer to complete tasks requiring reading than do their sighted counterparts (Bradley-Johnson, 1994; Harley & Lawrence, 1984; Spungin, 2002). This consensus concerning reading rate expectations is used by Individualized Education Program (IEP) teams to justify the use of extended time as a reasonable accommodation on standardized assessments and other routine instructional tasks that require reading.
This practice leaves little room for the possibility that many students who are blind or visually impaired could be doing much better with supports afforded by current technology, again contributing to lowered expectations and a self-fulfilling prophecy.
As students progress through the grades into the upper elementary and middle school years, the demands of reading increase enormously. As the volume of required reading increases, students with visual impairments are encouraged or taught to supplement their reading of braille or printed text with recorded or synthesized speech. Supplementation with speech is, therefore, considered a necessary strategy for increasing access to information, not only to compensate for depressed reading rates but also because braille and large print materials have not always been available when needed. Today, however, advances in the production of digital media and in the design of technology tools have reached a point where they allow braille or print readers to process tactile or visual information at the same time they are engaging with text in an auditory format. At its simplest level, this can be achieved by reading hard copy braille or print while listening to an analog or digital recording of the same material. Newer technologies offer features that allow a user to more efficiently access and manipulate the braille, print, and audio information conveyed. In what is here referred to as audio-supported reading (ASR), the claim is made that the combination of refreshable (paperless) braille or screen magnification and text-to-speech screen reader technology can now enhance the way a blind or visually impaired reader interacts with text and augment the speed with which a reader can acquire information. With these technologies, the task of reading and comprehending text can occur with greater efficiency, thus opening up learning opportunities that will support students in maximizing their educational potential.
The purpose of this paper is to define and elucidate the practice of audio-supported reading (ASR) as a powerful means of accessing and making productive use of text. To begin with, this paper defines ASR as a technology-based approach for augmenting and enhancing access to and use of text, either braille or print. Ordinarily, young students learn to read through the medium of either braille or print. At some time during the acquisition of skills essential for reading, such as decoding or rapid naming of words, technologies like accessible personal digital assistants (PDAs) with a refreshable braille display or electronic print magnification systems are introduced. No set sequence has been agreed upon for integrating or aligning assistive technologies with early literacy skills, but the need to incorporate assistive technologies within educational programs for these children is widely recognized (Cooper & Nichols, 2007; Kapperman, Sticken, & Heinze, 2002; Strobel, Fossa, Arthanat, & Brace, 2006). As students progress further into the upper grades, they are either encouraged or explicitly taught to use audio systems (human readers, talking books, synthesized speech) to supplement their braille or print access to text. Thus, the practice of teaching blind and visually impaired students relies upon separate and distinct curricula in literacy instruction: 1) a curriculum to address early braille or print reading, 2) a curriculum focusing on technology skills, and 3) a curriculum to teach listening skills (Corn & Koenig, 2002).
Given the current state of technologies for displaying refreshable braille or magnified text, paired with speech driven by screen reader technology, students can now access information available in text through multiple modalities. Rather than teaching the skills essential for braille reading, print reading, and listening in isolation as separate pedagogies, the apparent benefits of ASR suggest that a more robust and integrated approach for teaching and acquiring literacy skills will be more advantageous. In today’s era of general curriculum access and high stakes testing, it is imperative that new practices and pedagogies be examined with a view to improving results for these students, many of whom lag far behind their typically seeing counterparts (Wagner, Newman, Cameto, & Levine, 2006).
Audio-Supported Reading: A Theory-to-Practice Perspective
This paper uses a question/answer format for addressing the theoretical underpinnings of ASR as an approach for improving the reading proficiency of students who are blind or visually impaired. Improvement of reading proficiency should translate into greater academic achievement as students enjoy increased opportunity to learn by reading after they have learned to read.
How does the simultaneous presentation of dual-modality information affect text processing and cognitive load?
On the surface it would appear that listening to text while reading text imposes an excessive burden on a reader’s powers of concentration. Comprehending difficult expository text requires effort and engagement, the fruits of which seemingly would be degraded by having to multi-task or pay attention to two distinct modalities simultaneously. Yet the efficacy of ASR does not require simultaneous attention to what is seen or felt at the same time as what is heard. An explanation is in order. Over 50 years ago, Harvard psychologist George Miller observed that humans’ cognitive capacity is constrained by how long information can be retained in immediate memory without rehearsal or refreshment (Miller, 1956). Miller also observed that this time-limited focused attention, or working memory, is constrained by the amount of information that can be maintained before decay or forgetting sets in. While there has been some dissent from this view over the years (Cowan, 2001), it is generally believed that humans can hold up to 7 (+/- 2) “chunks” of information in mind for a relatively brief period of time (Baddeley, 1994). In human information processing, as it has come to be known, information temporarily stored in working memory must be acted upon by some process of encoding so that it can be transferred to long-term or permanent memory for use in future problem-solving behavior. Miller’s aim was to encourage research into the nature of constraints on humans’ capacity to process information (Baddeley, 2007). In the execution of tasks such as those involved in the act of reading, the effort it takes to process units of information such as phonemes, graphemes, morphemes, and syntactic structures (not to mention semantic interpretations) is often referred to as cognitive load (Sweller, 1988).
Seemingly, if these separate units are not adequately chunked for comprehension, cognitive load exceeds working memory capacity and the processing required for proficient reading breaks down (Moreno & Mayer, 2000).
In audio-supported reading, concerns about exceeding one’s cognitive load capacity would be warranted theoretically if all of this chunking had to occur simultaneously in two distinct modalities (i.e., touching braille or seeing print at the same time as listening to speech). Certainly, devoting 100% of load capacity to each of two modalities would mathematically double the load. The subjective experience of audio-supported reading, once analyzed, tells quite a different story. To illustrate, consider devoting 100% of load capacity to audio-supported reading and then dividing the task proportionally depending on relative sensory-perceptual and cognitive demands. When text is informational, requiring a reader to closely examine content for study purposes or otherwise interact with the text in order to highlight or extract, then much of the load will be devoted to tactile or visual processing. On the other hand, when text is highly familiar, redundant, or entertaining, such as a narrative story or a piece of fiction, then the load can be disproportionately auditory. To be sure, this is not an either/or proposition. Rather, the reader will be actively engaged in the process of audio-supported reading and in controlling the distribution of cognitive load depending on the demands of the task and the ultimate purpose for reading.
How does the simultaneous presentation of text in two modalities affect reader attention and task persistence?
Again, common sense might dictate that the combination of looking at or palpating streaming text while listening to speech simultaneously may distract, confuse, or overly stimulate a reader to the point where comprehension is disrupted. To the contrary, ASR’s ability to allow a reader to control the rate at which text is presented and to decide which modality will take precedence during reading tasks allows for sustained engagement and freedom from distraction. Background on the evolution of technologies that support listening may illustrate this point.
Since the 1930s, fixed media for listening to text have undergone numerous transformations, from sound scriber sheets of plastic to vinyl discs, open-reel tape, and audio cassettes. As playback units advanced from phonographs to tape players and cassette players, so did technologies for interacting with the media, such as variable speed control and beep marking or tone indexing. Both features dramatically changed the listening process from passive reception of human recorded speech to active rate-controllable listening and ease of navigation, which more closely mimicked visual reading. A listener could increase the rate of presentation for highly familiar or less demanding content and slow down for more challenging content and note-taking. The benefits of active listening over passive listening were well documented (Nolan & Morris, 1969). Because listeners could actively control the rate of information pick-up, they could persist at study and learning for longer periods of time.
Thus, audio-supported reading further extends, over mere listening, a reader’s opportunity to actively engage in the reading task by enabling tactual or visual monitoring of text. Miscues from phonemes, morphemes, or syntax, obtained from listening, can be verified or checked against the tactual or visual information accompanying the speech. Conversely, uncertainties in tactile or visual recognition can also be verified or checked against auditory information.
How does the addition of screen reader technology to braille or print augment the rate at which information is processed?
The size of printed text and the number of words per line on a page exist within a relatively small range of variation. Outside that range, print can be too small to see and pages too wide to manipulate efficiently. Mature, skilled readers of text master ocular-motor eye movement patterns for efficiently scanning text as well as manual skills for holding materials and turning pages. Compared with typically seeing readers, students who are blind or visually impaired face sensory and motor limitations that negatively impact reading fluency and the adequacy of reading rate. These limitations also adversely affect the relative ease with which students who are blind or visually impaired are able to navigate through and interact with text.
Reading rates for braille and print readers increase with age/grade according to a developmental process and sequence of skill acquisition. The developmental process has been carefully documented (Rayner, Foorman, Perfetti, Pesetsky, & Seidenberg, 2001), in which typical readers are observed to use fewer ocular fixations per line as they advance in reading proficiency. A combination of cognitive anticipatory strategy and peripheral attention to physical features contained in text alerts a reader as to where gaze should be directed in moving across a line of print and down a page of text. An analogous process is also at work with respect to the hand movements involved in the reading of braille. Left and right hands appear to alternate between central or focal attentive processes and peripheral attentive processes, and anticipatory strategy is applied through content familiarity and word knowledge. The precise sequence of skills involved in early reading is well documented, too, by the work of the National Reading Panel (2000). Thus, the mature reader achieves a level of proficiency where making meaning from text appears to be automatic.
Constrained by the absence or diminution of vision function, a student with a vision impairment will likely process information contained in text more slowly and is therefore more likely to experience cognitive overload as newly decoded information is combined with previously decoded information. Ken Goodman’s familiar example of “cowboys jumping on houses” nicely illustrates this point. The likelihood of confusing “horses” with “houses” in Goodman’s example increases as reading rate is constrained because a reader has to hold in working memory the subject of the sentence in anticipation of what is to follow (Goodman, 1967). In the field of reading research, there is a well-established relationship between words read correctly per minute and reading comprehension (Broaddus & Worthy, 2001; National Reading Panel, 2000). Highly fluent readers are generally good comprehenders. Screen-reading software bypasses the sensory and motor skills associated with decoding and rapid word naming, thus allowing an individual to listen to text read aloud at substantially faster rates. These increased rates augment the speed with which information is processed, allowing the reader to devote the full capacity of working memory to comprehending meaning. Augmented reading rate, accomplished through ASR, boosts reading comprehension and shortens the time required to complete academic tasks.
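The link between reading rate and time to complete academic tasks can be illustrated with simple arithmetic. The sketch below is illustrative only: all three rates are hypothetical values, chosen merely to echo the “one third to one half” braille-to-print ratio cited earlier, not measured data.

```python
# Illustrative arithmetic only. The rates below are hypothetical,
# not measured values from the studies cited in this paper.

def minutes_to_read(word_count, words_per_minute):
    """Time (in minutes) to work through a passage at a given rate."""
    return word_count / words_per_minute

chapter_words = 5000
print_rate = 250     # hypothetical sighted silent-reading rate (wpm)
braille_rate = 100   # hypothetical braille rate, ~40% of print
tts_rate = 300       # hypothetical text-to-speech listening rate

print(f"print:   {minutes_to_read(chapter_words, print_rate):.0f} min")    # 20 min
print(f"braille: {minutes_to_read(chapter_words, braille_rate):.0f} min")  # 50 min
print(f"TTS:     {minutes_to_read(chapter_words, tts_rate):.0f} min")      # 17 min
```

Under these assumed rates, the same chapter takes a braille reader two and a half times as long as a sighted print reader, which is precisely the gap that extended-time accommodations address and that audio acceleration through ASR could narrow.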
How does dual-modality input enhance the means by which a reader interacts with informational or expository text?
In the reading of expository text, a skilled reader must take a very active role in controlling reading rate, stopping for rehearsal or interpretation, or scanning backward or forward in search of text that would affirm or disconfirm the reader’s sense of the author’s intended meaning. In ASR, a reader relies on technology to vary the rate (on the fly) at which text is displayed in braille or as magnified print. In more demanding passages, a reader’s primary medium (braille or print) will dominate the reading. For less demanding passages, text-to-speech screen reading technology will take over to accelerate the rate at which meaning is extracted from text. The proportional contribution of braille/print reading and listening through text-to-speech is determined by the reader, who controls the rate at which text is accessed. When a fast rate is called for, a switch to listening will occur; when a slower rate is desired, a switch to tactual or visual reading will be executed. With these facilities afforded by ASR, a reader will become more strategic in how she or he chooses to engage in the task of processing the information contained in text.
How does audio-supported reading align with current and envisioned technology?
The once ubiquitous textbook, long-standing symbol and mainstay of schooling, may soon be a relic of times gone by (Stahl, 2004). Reliance on the textbook as the preferred source for learning is rapidly slipping away in favor of platforms that can deliver rich media, where, for example, hyperlinks provide control over how deep a user chooses to go within content or in what form a user wants content to be represented (Jenkins & Thorburn, 2003). This “new media” can fill in background knowledge or enrich learning experience by connecting with extended or related content. Rich media can go far beyond occasional graphical representations to feature illustrative animations and narrated videos. The textbook, with its fixed format, can now be replaced with a multimedia source rich with choices for displaying a wide range of content representations-as well as multiple levels of content depth and breadth-all in a single, navigable source. With a single portal, learners can access multiple sources of information, multiple representations of content, and multiple levels of complexity (Lievrouw & Livingstone, 2006; Wardrip-Fruin & Montfort, 2003).
New technologies and new media conceptions are enabling this transformation. To be sure, all will benefit, but those with disabilities, for whom educational opportunities have historically been constrained, will stand to benefit the most. Just as typically seeing individuals now access recorded books at their local public library or purchase audio books from outlets such as Audible.com, they will be all the more captivated by the robust capabilities of technologies that enable ASR such as tablets and smartphones. The popularity of audio books among the general population can be partially attributed to the desire for and need to consume information contained in books while engaged in activities that are incompatible with holding and manipulating a physical book. Tablets, smartphones, or netbooks can all display e-books, which greatly enable access and provide portability. Without speech, a user is limited to activities that are usually compatible with visual reading, such as relaxing by a pool or commuting on a train. With ASR, a user can switch from visual access to audio access, or, if preferred, access both visual and audio output as the focus of their activity changes. Why stop reading an e-book when the lawn needs to be cut, the evening meal needs to be prepared, or errands call for driving? ASR-enabled technologies will prevent readers from having to suspend their engagement with an important or otherwise compelling piece of text. Thus, ASR-while arguably of greatest benefit to individuals with visual impairments-will likely be welcomed by the general population as well, since the addition of audio to visual reading experience expands options and flexibility for all.
How does audio-supported reading fit within existing pedagogies for educating blind and visually impaired students?
Teachers of the blind and visually impaired (TVIs) are prepared to teach the reading and writing of literary braille and the use of optical or low-vision devices for purposes of reading. However, the extent to which TVIs are qualified to actually teach reading consistent with the standards and practices recommended by the National Reading Panel Report (National Reading Panel, 2000) has been called into question. Murphy, Hatton, and Erickson (2008), for example, surveyed 192 teachers of young children with vision impairments and reported that very few were teaching phonological awareness skills or providing opportunities for early writing and alphabet experiences. Brownell, Sindelar, Kieley, and Danielson (2010), in tracing the history of special education teacher preparation, concluded that pre-service programs in special education are currently inadequate in their attention to evidence-based literacy practices. Ferrell, Young, and Cooney (2006) conducted a meta-analysis of literacy research on interventions with blind and visually impaired students covering the past 40 years. Their report provides a comprehensive account of what is known and not known about the reading of braille and print. A central conclusion reached by the authors was that, despite enormous change in inclusive placement practices over the past 40 years, the methods applied today to teach visually impaired children to read are essentially the same as those used in the 1950s. Moreover, students with visual impairments are not achieving adequate literacy, and they are not achieving it early enough, when it matters most. As special educators and general educators are increasingly called upon to collaborate in support of literacy and numeracy across the curriculum, special educators, including TVIs, must gain competency in the use of evidence-based literacy practices.
Beyond braille and low vision optical devices, TVIs are also prepared to teach the use of assistive technologies such as electronic magnification systems, screen magnification software, accessible PDAs with refreshable braille displays, and screen reading technologies. However, the extent and type of training varies widely across teacher preparation and professional development programs (Smith & Kelley, 2007). In practice, disproportionate numbers of TVIs report that they feel inadequately prepared to support the technology-related needs of their students (Abner & Lahm, 2002; Kapperman, et al., 2002; Kelly & Smith, 2011). Furthermore, a secondary analysis of a nationally representative database reveals that surprisingly few students with visual impairments actually use technology in their educational programs (Kelly, 2009; Kelly & Smith, 2011) and, for those who do, there is little evidence that technology use is linked to academic achievement (Freeland, Emerson, Curtis, & Fogarty, 2010; Kelly & Smith, 2011). These are troubling findings, causing some to recommend the creation of a new specialization in assistive technology specifically designed to meet the technology-related needs of students with visual impairments (Kapperman, et al., 2002; Smith, Kelley, Maushak, Griffin-Shirley, & Lan, 2009). To this end, Smith, et al. (2009), using a Delphi technique to reach expert consensus, have proposed an exhaustive list of competencies that would define the role and function of such a new specialization.
Today, the majority of students with vision impairments are placed in general education classrooms with support from a TVI who is typically not school-based but travels from school to school on an itinerant basis. Time for specially designed instruction is therefore very limited, necessitating careful prioritization of essential goals for student IEPs. When a decision is made to teach either braille or print reading, instruction will proceed intensively according to that decision. At some point technology may or may not be introduced into students’ programs, and the explicit teaching of listening skills is likely to lose out in competition with braille and print reading instruction (Wolffe, et al., 2002). This combination of limited time available for specially designed instruction and the lack of TVI competencies in specialized assistive technologies presents both policy and pedagogical challenges for educational decision-makers. Interviews with TVIs paired with observations of students using technology on authentic tasks suggest that exemplary practices do exist, albeit on a very small scale (Johnstone, Altman, Timmons, Thurlow, & Laitusis, 2009; Thurlow, Johnstone, Timmons, & Altman, 2009). Deeper examination of the work of these few exemplary practitioners could point the way to an integration of pedagogical practices around literacy, technology, and listening; but this would require systematic inquiry into the design of an instructional system whose efficacy could then be tested on a broader scale.
How can audio-supported reading fit within a comprehensive Learning Media Assessment?
The Learning Media Assessment (LMA) was developed to assist IEP teams with the task of determining a student’s primary learning modality for accessing the curriculum (Koenig & Holbrook, 1995). The LMA is not a test but rather a process occurring over time through which confidence is gained in a decision to teach braille, print, or, in some cases, both braille and print. After arriving at a determination of a student’s primary learning medium, the decision is reviewed annually through the use of a “continuous assessment of literacy media” form. This form prompts the IEP team to review their student’s progress to determine what adjustments might be indicated by a change in vision status or a change in the student’s academic performance. As part of the annual continuing assessment process, the team explores a range of “literacy tools” which, if indicated, may increase independent access to the curriculum. Literacy tools include devices for listening to recorded speech and other technologies to supplement and support reading and writing, such as accessible PDAs with refreshable braille displays and synthesized speech.
The development of the LMA represents a critically important milestone in the history of educating blind and visually impaired students because it provides IEP teams with a series of assessments to ensure that students will receive appropriate literacy instruction in a modality determined (through assessment) to be the most efficient and effective for learning. Today, however, the LMA is sorely lacking in two important areas: 1) the frequency and nature of progress monitoring, and 2) technology integration. Annual progress monitoring is insufficient for making changes in instruction when needed, that is, when a student is not making progress with his or her learning medium. More frequent measures of silent and oral reading fluency, along with checks on reading comprehension drawn from story re-telling, recall of facts, and interpretation of the author’s intent, are also needed so that the LMA can facilitate classroom collaboration for just-in-time instructional planning.
Today, technological advances made since the LMA first appeared require that “literacy tools” be incorporated into primary literacy practices rather than merely supplementing those practices on an as-needed basis. IEP teams need to make decisions about how to align the use of technology tools with typical classroom literacy practices for reading and writing. Claims made here about the advantages of ASR over single-modality reading need to be substantiated with data on students’ relative performance and rate of progress. Advantages such as increased interactivity (e.g., marking up documents) and boosts in reading rate, including reduced time to complete passage reading along with improvements in comprehension, all need to be included in an LMA. Thus, the LMA remains critically important but needs to be updated in order to provide more progress monitoring data to support decisions about the use of technology to support reading through listening.
How can audio-supported reading increase the usefulness of accessible instructional materials (AIM)?
The student or end-user of AIM can now obtain access through four primary alternative formats: large print, braille, audio, and digital text. In years past, as described above, educational practice was to supplement either print or braille with audio after basic literacy skills were established. Limited availability of braille and large print sources, paired with insufficient reading rates for content area learning, warranted audio supplementation. At least two independent studies demonstrated increases in listening rates and improvements in aural language comprehension as a function of practice (Bishoff, 1967; Stocker, 1970), setting the stage for the incorporation of listening skills training as an essential component of the expanded core curriculum in the education of students who are blind and visually impaired. Evans (1997) observed that some students with visual impairments also experience reading difficulties in decoding and word recognition and may therefore benefit from a technique available at that time known as audio-assisted reading. The history and efficacy of audio-assisted reading for struggling readers has been documented by Esteves (2007) and Lesnick (2006). This technique can be achieved with hard copy braille or large print and an audio recording in a format that allows the user to control playback rate and basic navigation. However, students who are blind or visually impaired may find the manipulation of hard-copy materials and audio recordings cumbersome and tedious, which can ultimately lead to an unsatisfying and ineffective practice. A more effective and efficient option would be to use ASR as defined herein with the digital text option for AIM. ASR would rely on the digital text format for displaying magnified print or refreshable braille along with text-to-speech for listening. Certainly, digital formats provide the greatest flexibility. In ASR, digital text is displayed in either braille or on-screen print in sync with speech.
Users accessing visual and audio information simultaneously can have the advantage of seeing each word highlighted as it is spoken via synthesized speech. Working with a DAISY format file, for example, students can access the information contained in the file from a wide variety of devices: players, PCs, tablets, or smartphones. Zabala’s (2002) SETT framework helps illustrate the relative advantages of ASR. The SETT framework is intended to help decision-makers with the consideration of assistive technologies in the IEP planning process as well as with implementation. SETT requires that the (s)tudent, the (e)nvironment, the (t)ask, and the (t)ool be considered together when making decisions. Depending on the needs and preferences of a student (braille or print reader), the environment where instructional materials must be accessed (classroom, home, bus), and the specific task required of the student (homework, group presentation), the most appropriate tool for accessing instructional materials can be determined. If the student has established a level of comfort with direct access to text through braille or print, as well as through speech, then ASR provides maximum flexibility in accessing and working with text.
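To make the word-highlighting idea concrete, the sketch below models the kind of word-level synchronization map a DAISY-style player consumes to decide which word to highlight (or render on a braille display) at any moment of audio playback. The words and timestamps here are invented for illustration; the lookup simply finds the last word whose start time precedes the current playback position.

```python
from bisect import bisect_right

# Hypothetical word-level synchronization map: (word, start_seconds) pairs.
# All timestamps are invented for illustration, not drawn from a real file.
sync_map = [
    ("Students", 0.00),
    ("can",      0.45),
    ("access",   0.62),
    ("the",      0.98),
    ("file",     1.10),
]

starts = [start for _, start in sync_map]

def word_at(playback_time):
    """Return the word to highlight at the given playback time (seconds)."""
    # bisect_right finds the insertion point; the word being spoken is the
    # one whose start time is the latest start <= playback_time.
    i = bisect_right(starts, playback_time) - 1
    return sync_map[max(i, 0)][0]

print(word_at(0.5))  # → "can"
```

In a real player this lookup would run on every audio clock tick, driving the visual highlight and the refreshable braille display from the same map, which is what keeps the tactile, visual, and auditory streams aligned in ASR.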
How does audio-supported reading fit within the universal design for learning (UDL) framework?
UDL is a framework for guiding the development of curriculum that is widely and deeply responsive to the needs of the broadest conceivable range of learners. Relying on three critical design principles, UDL insists that curriculum 1) offer its content through multiple representations (the “what” of learning), 2) provide an array of options for demonstrating what has been learned or can be performed (the “how” of learning), and 3) arrange for various means of connecting with or engaging learners as the curriculum is being enacted (the “why” of learning). In 1997, Congress extended the rights of students with disabilities to include access to, participation in, and the opportunity to make progress within the general education curriculum (Hitchcock, Meyer, Rose, & Jackson, 2002). Prior to the 1997 re-authorization of the Individuals with Disabilities Education Act (IDEA), entitlement applied only to an education individually tailored to address needs arising from a disability. Thus, UDL appeared on the scene as a compelling framework for identifying, removing, and circumventing barriers inherent in the general curriculum, a curriculum that was never intended for students with disabilities in the first place. Universal design meant changing the focus from “student as problem” to the curriculum itself as riddled with unnecessary and irrelevant barriers that impede student progress.
IDEA presumes that students on an IEP because of a vision impairment will be braille readers unless or until the team determines that print is a more efficient and effective alternative for that student. Braille, while essential for students who are functionally blind, is an exclusive medium, limiting a blind student’s written communication with teachers and other students unless they too know braille. Technologies that translate braille into print and print into braille have served to bridge this communication barrier between those who are blind and those who are sighted, in schools and classrooms in particular and in society in general. Today, braille goes a long way toward meeting the age-old challenge of access to the curriculum, but barriers remain with regard to a blind student’s participation in it. Participation or involvement with the curriculum implies keeping pace with classroom instruction and interacting with teachers and student colleagues as instruction transpires. Highly skilled aides or para-educators who know and can work with braille have been employed effectively by school systems to mediate these communication barriers, but at times these individuals may restrict or otherwise interfere with student-to-student and teacher-to-student interactions. Since so much of classroom learning is socially mediated, classroom aides must take care not to supplant opportunities for high-quality instruction. In a school or classroom in which teachers and administrators strive to remove barriers from the curriculum in order to advance inclusive practices, solutions that rely less on aides and more on portable technologies usable in both small- and large-group settings show great promise for bringing the student with visual impairment directly into the instructional situation and making it more likely that he or she will keep pace with classmates as the reading load increases.
These technologies should be flexible enough to display braille and print, provide speech, and support writing or note-taking in braille and print, all of which can contribute to student independence, self-reliance, and self-determination.
Students who are blind or visually impaired were among the first students with disabilities to be educated in the United States. From the early 1800s with residential schools to the early 1900s with public day school classes, these students and their teachers have demonstrated that the absence or curtailment of vision need not preclude academic achievement and entry into the world of work. Over the years, however, these laudable successes have been more often the exception than the rule. Taken in the aggregate, students who are blind or visually impaired do not enjoy academic or employment outcomes even remotely comparable to those of their typically seeing counterparts. Since the passage of IDEA ’97, students with vision impairments, like all other students with disabilities, are expected to access the general education curriculum, be involved with that curriculum, and thus receive the opportunity to make progress within it as measured against challenging academic standards. These heightened expectations and expanded opportunities are today more likely to bear fruit for students with vision impairments because of the availability of digital text and technology tools. However, many of the practices in place today for enacting these opportunities remain obstacles that must be overcome.
With ASR, students, parents, and teachers no longer have to settle for artificially low reading rates and cumbersome tactics for interacting with text. Evidence of ASR’s effectiveness is logically compelling and anecdotally substantiated, but wide-scale adoption will require much empirical investigation. The appeal to logic comes from the literature on learning through listening, which establishes that listening rate significantly exceeds reading rate for both braille and print readers with visual impairments. Logically, if listening gets a reader through text more quickly, then it must be considered more efficient when time is of concern. For this reason, the tools for accessing spoken text and the strategies for comprehending it have long been taught by teachers of students with visual impairments (TVIs), and additional instructional time should be devoted to them, particularly in connection with grade advancement. The challenge of keeping up with the demands of content area learning is somewhat mitigated with listening as a supplement. One can conclude that combining a student’s primary learning medium (i.e., braille or print) with supports for listening would simply add value to the overall effort of reading to learn. Anecdotally, both authors routinely employ ASR in their own literacy activity. Moreover, many friends and colleagues with visual impairments also report success with ASR after familiarizing themselves with technologies that enable this method of accessing and working with text. While such evidence of efficacy is compelling, much research remains to be done before ASR can be adopted on a broad scale in the education of blind and visually impaired students.
Several challenges must be addressed before ASR can be implemented as a preferred pedagogy in the education of blind and visually impaired students, not the least of which is ensuring the provision of the technology necessary to practice ASR. TVIs operate under severe time constraints, which creates great pressure to determine a single most effective literacy medium (braille or print). Assistive technology is considered by IEP teams, but most typically as an add-on. The actual use of technology may or may not be taught by the TVI, but it is rarely incorporated into or integrated with literacy instruction. Finally, learning through listening, while still recognized as part of the expanded core curriculum for educating blind and visually impaired students, currently receives less attention than it has in years past. Consequently, there is a need to bring these separate pedagogies together in a unified instructional design. Learning to read with either braille or print, learning to listen, and learning to use technology must all come together in authentic classroom activities. To move beyond logical appeal and anecdotal evidence, ASR must be subjected to a rigorous program of research carried out in real-world settings where mandated policies are practiced.
References

Special Education Elementary Longitudinal Study, Waves 1, 2, & 3 [CD-ROM database, with accompanying documentation] (produced under contract no. ED-00-CO-0017). (2003). Available from U.S. Department of Education, Office of Special Education Programs.
Abner, G. H., & Lahm, E. A. (2002). Implementation of assistive technology with students who are visually impaired: Teacher readiness. Journal of Visual Impairment & Blindness, 96(2), 98-105.
Baddeley, A. D. (1994). The magical number seven: Still magic after all these years. Psychological Review, 101(2), 353-356.
Baddeley, A. D. (2007). Working Memory, Thought and Action. Oxford: Oxford University Press.
Bishoff, R. W. (1967). The improvement of listening comprehension in partially sighted students. University of Oregon.
Bradley-Johnson, S. (1994). Psychoeducational assessment of visually impaired and blind students: Infancy through high school (2nd ed.). Austin: Pro-Ed.
Broaddus, K., & Worthy, J. (2001). Fluency beyond the primary grades: From group performance to silent, independent reading. The Reading Teacher, 55(4), 334-343.
Brownell, M. T., Sindelar, P. T., Kieley, M. T., & Danielson, L. C. (2010). Special education teacher quality and preparation: Exposing foundations, constructing a new model. Exceptional Children, 76(3), 357-377.
Cooper, H. L., & Nichols, S. K. (2007). Technology and early braille literacy: Using the Mountbatten Pro Brailler in primary-grade classrooms. Journal of Visual Impairment & Blindness, 101, 22-31.
Corn, A. L., & Koenig, A. J. (2002). Literacy for students with low vision: A framework for delivering instruction. Journal of Visual Impairment & Blindness, 96(5), 305-321.
Corn, A. L., Wall, R. S., Jose, R. T., Bell, J. K., Wilcox, K., & Perez, A. (2002). An initial study of reading and comprehension rates for students who received optical devices. Journal of Visual Impairment & Blindness, 96(5), 322.
Cowan, N. (2001). The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behavioral and Brain Sciences, 24, 87-185.
Esteves, K. J. (2007). Audio-assisted reading with digital audiobooks for upper elementary students with reading disabilities. Unpublished doctoral dissertation, Western Michigan University, Kalamazoo.
Evans, C. (1997). Changing channels: Audio-assisted reading: Access to curriculum for students with print disabilities. Texas School for the Blind. Retrieved from http://www.tsbvi.edu/braille-resources/70-changing-channels-audioassisted-reading-access-to-curriculum-for-students-with-print-disabilities.
Ferrell, K. A., Mason, L., Young, J., & Cooney, J. (2006). Forty years of literacy research in blindness and visual impairment. National Center on Low-Incidence Disabilities.
Freeland, A. L., Emerson, R. W., Curtis, A. B., & Fogarty, K. (2010). Exploring the relationship between access technology and standardized test scores for youths with visual impairments: Secondary analysis of the National Longitudinal Transition Study 2. Journal of Visual Impairment & Blindness, 104(3), 170-182.
Goodman, K. (1967). Reading: A psycholinguistic guessing game. Journal of the Reading Specialist, 6(4), 126-135.
Harley, R. K., & Lawrence, G. A. (1984). Visual impairment in the schools (2nd ed.). Springfield, IL: Charles C. Thomas.
Hitchcock, C., Meyer, A., Rose, D., & Jackson, R. (2002). Providing new access to the general curriculum. Teaching Exceptional Children, 35(2), 8-17.
Jenkins, H., & Thorburn, D. (2003). Democracy and New Media. Cambridge, MA: MIT Press.
Johnstone, C., Altman, J., Timmons, J., Thurlow, M., & Laitusis, C. (2009). Field-based perspectives on technology assisted reading assessments: Results of an interview study with teachers of students with visual impairments (TVIs). Minneapolis, MN: University of Minnesota, Technology Assisted Reading Assessment.
Kapperman, G., Sticken, J., & Heinze, T. (2002). Survey of the use of assistive technology by Illinois students who are visually impaired. Journal of Visual Impairment & Blindness, 96(2), 106-108.
Kelly, S. M. (2009). Use of assistive technology by students with visual impairments: Findings from a national survey. Journal of Visual Impairment & Blindness, 103(8), 470-480.
Kelly, S. M., & Smith, D. W. (2011). The impact of assistive technology on the educational performance of students with visual impairments: A synthesis of the research. Journal of Visual Impairment & Blindness, 105(2), 73-83.
Koenig, A. J., & Holbrook, M. C. (1995). Learning media assessment of students with visual impairments: A resource guide for teachers. Austin: Texas School for the Blind and Visually Impaired.
Legge, G. E., Madison, C. M., & Mansfield, J. S. (1999). Measuring braille reading speed with the MNREAD test. Visual Impairment Research, 1(3), 131-145.
Lesnick, J. K. (2006). A mixed-method multi-level randomized evaluation of the implementation and impact of an audio-assisted reading program for struggling readers. Unpublished doctoral dissertation, University of Pennsylvania.
Lievrouw, L. A., & Livingstone, S. (2006). Handbook of New Media. Thousand Oaks, CA: SAGE Publications Ltd.
Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63(2), 81-97.
Moreno, R., & Mayer, R. E. (2000). A coherence effect in multimedia learning: The case for minimizing irrelevant sounds in the design of multimedia instructional messages. Journal of Educational Psychology, 92(1), 117-125.
Morris, J. E. (1966, June 26-30). Relative efficiency of reading and listening for braille and large type readers. Paper presented at the 48th Biennial Conference of the American Association of Instructors of the Blind, Salt Lake City.
Murphy, J. L., Hatton, D., & Erickson, K. A. (2008). Exploring the early literacy practices of teachers of infants, toddlers, and preschoolers with visual impairments. Journal of Visual Impairment & Blindness, 133-146.
National Reading Panel (2000). Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction. Washington, DC: National Institute of Child Health and Human Development.
Nolan, C. Y., & Morris, J. E. (1969). Learning by blind students through active and passive listening. Exceptional Children, 36(3), 173-181.
Rayner, K., Foorman, B. R., Perfetti, C. A., Pesetsky, D., & Seidenberg, M. S. (2001). How psychological science informs the teaching of reading. Psychological Science in the Public Interest, 2(2), 31-74.
Simon, C., & Huertas, J. A. (1998). How blind readers perceive and gather information written in braille. Journal of Visual Impairment & Blindness, 92(5), 322-330.
Smith, D. W., & Kelley, P. (2007). A survey of the integration of assistive technology knowledge into teacher preparation programs for individuals with visual impairments. Journal of Visual Impairment & Blindness, 101(7), 429-433.
Smith, D. W., Kelley, P., Maushak, N. J., Griffin-Shirley, N., & Lan, W. Y. (2009). Assistive technology competencies for teachers of students with visual impairments. Journal of Visual Impairment & Blindness, 103(8), 457-469.
Spungin, S. J. (2002). When you have a visually impaired student in your classroom: A guide for teachers. New York: AFB Press.
Stahl, S. (2004). The promise of accessible textbooks: increased achievement for all students. Wakefield, MA: National Center on Accessing the General Curriculum.
Stocker, C. S. (1970). Methods for improvement of listening efficiency in individuals with visual impairment: Final report. Topeka, Kan.: Division of Services for the Blind and Visually Handicapped, Kansas State Dept. of Social Welfare.
Strobel, W., Fossa, J., Arthanat, S., & Brace, J. (2006). Technology for access to text and graphics for people with visual impairments and blindness in vocational settings. Journal of Vocational Rehabilitation, 24, 87-95.
Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), 257-285.
Thurlow, M., Johnstone, C., Timmons, J., & Altman, J. (2009). Survey of teachers of students with visual impairments: Students served and their access to state assessments of reading. Minneapolis, MN: University of Minnesota, Technology Assisted Reading Assessment.
Wagner, M., Newman, L., Cameto, R., & Levine, P. (2006). The academic achievement and functional performance of youth with disabilities. A final report from the National Longitudinal Transition Study-2. Menlo Park, CA: SRI International.
Wardrip-Fruin, N., & Montfort, N. (2003). New Media Reader. Cambridge, MA: MIT Press.
Wolffe, K. E., Sacks, S. Z., Corn, A. L., Erin, J. N., Huebner, K. M., & Lewis, S. (2002). Teachers of students with visual impairments: What are they teaching? Journal of Visual Impairment & Blindness, 96(5), 293-304.
Zabala, J. (2002). Get SETT for successful inclusion and transition. Retrieved from http://www.ldonline.org/ld_indepth/technology/zabalaSETT1.html.
This report was written with support from the National Center on Accessible Instructional Materials (NCAIM), a cooperative agreement between CAST and the U.S. Department of Education, Office of Special Education Programs (OSEP), Cooperative Agreement No. H327T090001. The opinions expressed herein do not necessarily reflect the policy or position of the U.S. Department of Education, Office of Special Education Programs, and no official endorsement by the Department should be inferred.