
Design tests to measure priority outcomes

by Walt Gardner

Special to The Japan Times

Uncovering fraud in the administration of any test is reason enough to undermine confidence in the results. That certainly holds true for the disturbing news regarding TOEFL and TOEIC. But there is a larger issue that is given short shrift.

It has to do with the predictive value of these two high-stakes tests. Their designers maintain that they measure the knowledge and skills necessary for success in situations where English is spoken. But their emphasis is less on speaking than on writing English.

It’s hard to believe that students who excel only at translating passages from Japanese into English — or vice versa — possess the wherewithal to master subject matter taught in schools in the United States. That’s because lessons there depend far more on oral participation from students than they do in Japan.

In fact, teachers from Japan who come to the U.S. to teach for a year or so on exchange programs are astounded by how often students raise questions or express opinions about the ongoing lesson. This is the antithesis of the way students in schools in Japan react.

If valid inferences are to be drawn from the results of TOEFL and TOEIC, the tests need to be redesigned to place far greater emphasis on speaking English. This will unavoidably lead to teaching to the test. Yet doing so constitutes sound pedagogy.

Studies have shown that the closer practice mirrors the broad body of knowledge and skills deemed indispensable for success, the greater the likelihood of transfer. Therefore, if Japanese students want to be prepared for classes in the U.S., they’d be well advised to develop their English-speaking skills.

That’s not the same thing as teaching to the actual items on a test, which is unethical. No one benefits if students are drilled repeatedly on a given script. But they will benefit if they are given repeated practice with a variety of real-life situations in which they have to speak English.

In sports, coaches rely on what is known as specificity of training to prepare their athletes for victory. They deliberately provide opportunities in practice that mimic game situations.

When I was working on my master’s degree in journalism at UCLA, I took the Spanish exam in the belief that it would enhance my ability to communicate with the large Spanish-speaking population in Los Angeles. After all, interviewing people is what journalists rely on to gather facts for their reporting in any country. But the exam was virtually useless because it consisted solely of translating a given excerpt from a Spanish newspaper into English.

By the same token, students in Japan are not being well served if the tests measure competencies that have proven only marginally helpful for success in the U.S. TOEFL and TOEIC both contain sections on speaking English, but those sections need to be given far more weight.

Walt Gardner writes the Reality Check blog for Education Week in the U.S.

  • Eric the sceptic (http://www.compellingconversations.com)

    Persuasive analysis of an open secret: both standardized tests of English do a very poor job of actually measuring the ability to speak English. Let’s recall that the TOEFL iBT asks test takers to “speak” with a computer. Is this authentic communication? Does this measure the ability of individuals to hold a genuine conversation in English?
    Given the astonishing significance attached to these very flawed, standardized exams, doesn’t it behoove ETS to create more realistic and authentic measurements of speaking skills?
    After all, the old teaching cliché remains true: what gets tested gets taught.