The Next Step: Assessment and the English Language Learner

An English as a Second Language resource teacher at four Anchorage elementary schools, Fredrick will individually test all 52 of her ELL kindergarten students on the four dimensions of language proficiency: listening, speaking, reading, and writing. She'll also give the speaking portion of the proficiency test to 250 first- through sixth-graders. One after another, the students will sit across from Fredrick for 15 or 20 minutes, examine pictures, and orally respond to questions and prompts. For some monolingual youngsters, it will be a grueling ordeal. Other, more fluent, students will breeze through the session, barely breaking a sweat.

"We get a wide range of students," notes Fredrick, who spends half her time at Lake Hood Elementary, a school that's just a stone's throw from the world's largest seaplane airport and where almost a third of the youngsters are non-native English speakers. "Many of our students don't read or write in their first language, but they speak it or at least hear it at home. A majority of them were born here or came young."

The new battery of assessments, which is steeped in academic rather than conversational English, will give Fredrick a clearer picture of students' proficiency, placing each at one of five levels: beginning, high-beginning, low-intermediate, high-intermediate, or advanced. Because the tests are aligned to state standards, they'll also help teachers tailor classroom instruction to each child's needs.

Language learners in the spotlight

Like Alaska, most states are rolling out their language proficiency tests for the first time in spring 2006. As the centerpiece of Title III legislation, the new tests are the second major piece of the No Child Left Behind Act to directly affect English language learners. Together, Title I and Title III have brought unprecedented attention to Limited English Proficient (LEP) students as a distinct subgroup — attention that's spawned the full range of emotions often associated with politically laden educational policy: from hopeful praise to bold criticism, cautious support, and widespread confusion.

According to Wendy St. Michell, the LEP program manager for the Idaho State Board of Education, "The number one issue we have with Title III — other than adequate funding — is to help district and school-level staff members understand the compliance issues involved with the new legislation. Some have not understood the difference between the language proficiency test and the statewide content assessment, and that the language proficiency test is not an optional assessment. It has also been difficult to communicate that the new language proficiency test has its own set of sanctions, which include AYP."

Although Title I requires testing all students on academic content, while Title III requires testing only the English proficiency of ELL students, the two pieces of legislation are inextricably intertwined and are based on the same major elements:

  • Specific grade-level standards and benchmarks. For Title III, standards and benchmarks are for language proficiency rather than academic content.
  • A comprehensive assessment. Title III legislation requires that ELL students be tested in five domain areas: listening, speaking, reading, writing, and comprehension. Comprehension can be determined with the same measurements used for listening and reading — the two receptive skills — and is therefore not assessed as a separate domain on most state tests.
  • Measurable achievement goals that are used to determine adequate yearly progress (AYP). In Title III these goals are called "annual measurable achievement objectives" (or AMAOs) and are separated into three major categories.
    1. The starting point and annual goals — set by the state — for the percentage of ELL students who will make progress on the yearly English language proficiency test
    2. The percentage of students who will attain proficiency and exit out of the program each year
    3. The percentage of students in the ELL subgroup who will meet state AYP goals on the statewide academic content test
  • A system of accountability. If a district fails to meet any one of the above objectives, it is not considered to be making AYP and is subject to the same sanctions as in Title I. This is the most direct link between the Title I and Title III legislation.

The state of the states

The U.S. Department of Education issued the final guidelines for Title III in February 2003, and states have been scrambling to meet the new requirements ever since. Developing the new English proficiency assessments has proven to be the most difficult — and most costly — part of the process. By the end of 2004, 40 states had reportedly developed an assessment, but few had actually administered one. The federal government set a final deadline of spring 2006. According to the latest reports, only a few states — including Montana in the Northwest region — will miss this mark. (For more on Montana's situation, see below.)

Several states used a separate federal grant program to form consortia — partnerships among states, higher education institutions, and educational companies — to collaborate on the development of a common test. One such group, the Mountain West Assessment Consortium (MWAC), originally included Alaska, Idaho, Montana, Oregon, several other western states, and Measured Progress, a nonprofit company that specializes in educational assessment.

While MWAC made some progress toward the development of a single assessment, several states began to question its relevance to their specific standards and needs. Some states felt they could not wait for consortium decisions, and so split off to develop their own assessments. Eventually, the consortium dissolved.

A brief look at the assessment development process in the Northwest reveals the uneasy balance between federal requirements and the states' preference for blazing their own trails. Alaska was one of the first states to split from MWAC and eventually contracted with Ballard & Tighe, a private company best known for its IDEA products, which include English language development programs and assessments. Like many companies, Ballard & Tighe responded to NCLB legislation by developing a package of assessment products that specifically addressed the new Title I and Title III guidelines. Alaska worked with the company to customize the IPT® 2005 (IDEA English Language Proficiency Test) to match the new Alaska state standards for English proficiency.


Idaho was also a member of the MWAC. When the consortium dissolved, Idaho held competitive bidding for an assessment vendor and eventually chose Touchstone Applied Science Associates (TASA), Inc. The company and the state worked together to modify the MWAC items and develop the Idaho English Language Assessment (IELA), which includes separate assessments for kindergarten and the 1-2, 3-5, 6-8, and 9-12 grade spans.

As in many other states, Idaho school districts are now required to hire test administrators using their own Title III funds. While the state pays for the actual test materials, critics say that a big chunk of total Title III funds is consumed by the administration of a single test. According to Wendy St. Michell, "Funding is definitely one of the biggest issues for us. We're trying to figure out how to use the available funds in the best way possible. As a small state, we have to be more creative than some in terms of how we come into compliance."


Montana has a very small LEP population. Native Americans are the largest ethnic group in the state, representing 11.3 percent of the total student population and 84 percent of all Limited English Proficient students. Because Title III is a formula-driven program in which districts receive allocations based on the total number of LEP students they report, Montana receives only a small amount of Title III funds.

A member of MWAC, Montana found itself in a difficult situation as other states split away from the consortium. "The cost of contracting for a statewide test, compared to the amount of funding we qualified for, has made for a real challenge," says Lynn Hinch, Montana's Title III director. "We're still working to form a partnership with another state and have also been engaged in issuing a request for proposals as required by our state administration. We've been in constant touch with the [U.S.] Department of Education about that, and they've been very supportive." Montana expects to administer its first tests in the 2006-2007 school year.


Oregon was also an original member of MWAC but chose to go in a different direction early on. "In a project like [MWAC], sometimes standards get watered down to meet everybody's needs," says Pat Burk, chief policy officer for the Oregon Department of Education. "We felt that the language of the assessment became pretty vague and general, and that ours needed to be more specific than what was emerging in the consortium."

Another factor was that the state wanted to deliver the assessment online. "Putting it online actually controls costs," says Burk, "but the real benefit is that it allows us to get results back to teachers instantly. We felt that was very important."

The state eventually contracted with Eugene, Oregon-based Language Learning Solutions to develop the English Language Proficiency Assessment (ELPA). The ELPA consists of individual tests for grade levels K-1, 2-3, 4-5, 6-8, and 9-12, and is being given online.


Even before NCLB, Washington implemented a uniform statewide assessment for English language learners. Shortly after that, the state began developing English proficiency standards. Because the language proficiency test was selected before standards were established, the state conducted an alignment study.

Partly because of this head start, the state chose not to participate in MWAC or any other consortium. "In some ways we were ahead of the pack, but our development process may have been somewhat out of sequence," admits Mike Middleton, an operations manager in the state assessment department. "We had a statewide assessment, but we had yet to fully articulate specific English language proficiency standards."

Based on the results of the alignment study, the state decided to work toward a test more closely aligned with both state standards and Title III requirements.

They eventually chose Harcourt Assessment, Inc. "We agreed to build an augmented test, with Harcourt's off-the-shelf product — the Stanford ELP — as the foundation," says Middleton. "We needed to fill in gaps so that it better matched our state standards." The augmenting process included material created and reviewed by statewide panels of ELL teachers, using the state's English language development standards (ELDs).

The final product, WLPT-II, is "more closely aligned than either the earlier language proficiency test used by the state or Harcourt's base product," says Middleton. The new test is divided into four separate grade levels: K-2, 3-5, 6-8, and 9-12, and covers all five domains for measuring language proficiency.

The challenges of the law

Besides limited finances and tight deadlines, several details of Title III legislation have drawn criticism from educators and technical assistance providers. For instance, as required by the legislation, most states have built their English proficiency standards around their existing language arts standards. The intent is to ensure that language learners do not fall behind in content. The reality, say critics, is that learning a language and learning more complex, formal uses of that language are two very different things.

"States have complied with the letter of the law," says Gary Hargett, an independent educational consultant who specializes in ELL issues, "but the concern is that it doesn't necessarily help you define what real English language proficiency is. Language arts and English language proficiency are two separate constructs. To combine them in this explicit way is not going to help us understand how to help a student become proficient."

The inclusion of K-1 students in the statewide assessment has also raised some concern. "The law says that all students must be assessed, including K-1," says Hargett. "So, you have to ask: What does English reading and writing look like for the K-1 ELL student? Well, we're not even sure what it looks like for the native English speaker at that age, especially in writing. I think it raises all kinds of theoretical questions that they didn't mean to raise, and those are interesting questions, but the mandate is not for research — the mandate is to develop tests that can be used for accountability."

According to many experts, the focus on a single kind of assessment and on limited measures for determining progress is a major issue. "They've tied language proficiency to content standards," says Frank Hernandez, a program adviser for the Northwest Regional Educational Laboratory, "but I don't know if that is going to be sufficient or appropriate to assess instruction."

The effective assessment of ELL students, say many experts, has little to do with a single, annual test. "You need to include a lot of other factors and measures besides one high-stakes test to tell whether you have an effective program or not," says Hernandez. "The best way to measure how well you're doing with ELL students is to use a number of different criterion-referenced assessments in the actual classroom. They need to be closely tied to your instruction, and they need to be given frequently — bimonthly, monthly, or even weekly, depending on the type of assessment."

Finding the positives

Even critics of NCLB agree that the new legislation has brought much-needed attention to the plight of ELL students. In the past, ELL students were seldom included in statewide testing, and ELL teachers had little clout, a tiny budget, and were often isolated in separate classrooms or even separate buildings. Mainstream teachers, meanwhile, received little professional development in effective instruction for ELLs and were given scant motivation to take the issue personally.

While some may find accountability measures to be a negative kind of motivation, the fact remains that ELL students can no longer be ignored. Educators around the country are already reporting some positive outcomes from this aspect of NCLB, including an increase in collaboration between mainstream teachers and ELL specialists; an increased awareness of the importance of language instruction across the curriculum; and a focus on professional development for mainstream teachers of ELL students.

For Hargett, the increase in high-quality professional development is the most important of these improvements. Although "sheltering" strategies and other language-centered approaches have been garnering support for several years, he says, the results have not always been successful.

"Teachers need help in figuring out how to embed language objectives into their lesson plans," says Hargett. "That's well known. But it's not enough to say you're going to create a language-rich atmosphere in the classroom. I've been in a lot of classrooms where I was told, 'This is a sheltered classroom,' and I couldn't see where the teacher had pulled out explicit attention to a form of the language.

"The key to success isn't the assessment you use," Hargett continues. "The key is the actual services you're providing. You have to take a close look at your program and say: 'Are these programs really addressing the learner's needs? Are they really developing English proficiency? Are they really making content accessible to ELL students during the time that they're still gaining English proficiency?' I think those programmatic questions have to be answered before you can have any meaningful discussion about the assessments. It's really a mistake to think that the key is in the nature of the assessment as opposed to what's being assessed."



