With the new K-12 school year under way or on the verge, American elementary and middle school administrators are focused on “proving” that the kids in their districts are learning to read or already “know” how. Several corporate entrepreneurs stand ready to keep making money through mass, data-driven program packages that administrators buy to demonstrate that care is being taken to “prove” kids can think about what they read. Lexile® leveling and Renaissance’s Accelerated Reader program are probably the most widely recognized, both by families and by library staff who are regularly asked to find books that match the profiles these companies create of their students.
Individual student Lexile assessments are drawn from state testing results; the circularity is obvious and has been discussed critically and at length in both scholarly and popular publications. Renaissance’s Star Reading™ assessments are presented as “guiding” developing readers through increasing skill levels by diagnosing their readiness with prepackaged tests. This approach, of course, like Lexiling, has its proponents, as well as an increasingly voluble chorus of professional detractors.
Lexile’s work ignores audiobooks and reading by ear entirely. Renaissance doesn’t test listening comprehension, let alone any other auditory skill demonstration. Data- and score-driven literacy demonstrations, it appears, are geared exclusively to decoding text, a rather quaint approach to judging how well a student’s information gathering, metacognition, and the multimodal literacy skills the 21st century demands are developing.
California’s Department of Education addresses the administrative and pedagogical need to assess and clarify English language development in students who come to school with a home language other than English. The standards developed for both teaching and testing through its California English Language Development Test (CELDT) weight proficiency and progress in aural comprehension and oral expression just as they do text literacy and writing development. English language learners are, of course, not alone in developing their capacity to understand through listening or to express themselves articulately through speaking. Here is a test that clearly ties both metacognitive development and multimodal literacy skills into a complete demonstration of comprehension. Listening matters.
On the question of testing’s usefulness, its easy slide into misuse and mischaracterization of the students tested, and American education’s devotion to data, I have firm (and negative) opinions. On the question of whether listening and audiobooks are as germane to measuring literacy attainment as answering machine-generated questions about the factual details of a printed story (“What color was Bob’s hat?”), I can’t be silent. Connecting with what one reads, whether by eye or ear, is important; understanding why and how a book, whether printed text or audiobook, puts forward what it does is essential to what I, as a reader, can make of it. Bob’s blue hat may be easy to ask about and mark right or wrong, but if I can’t understand the word “blue” when spoken, or if the blueness of Bob’s hat is inconsequential to the fact that Bob is working on his uncle’s onion farm to earn college money, what does spitting back “blue” matter?
Listening is a skill that matters to literacy and that makes meaning of language, just as text and decoding skills do. Here’s hoping the corporate testers continue to ignore audiobooks, and more school districts learn to ignore corporate testing, replacing test-taking time with listening time.