Japan’s elementary and junior high school students have strong basic academic skills but struggle to solve complex questions, according to this year’s nationwide achievement test results, which were released Monday by the education ministry.

Since its launch in 2007, the test has been drawing attention from schools and local governments, as the 47 prefectures are ranked according to their average scores.

But at the same time, some question the necessity of holding the test every year, considering that recent results remain more or less the same.

Here is a primer about the background and past arguments over the government’s nationwide test:

What is the nature of the achievement test?

The annual achievement assessment, conducted by the education ministry each April on sixth-graders and third-year junior high school students across the nation, is meant to gauge students’ basic knowledge of mathematics and Japanese and their ability to apply those skills to solve complex problems.

The assessment is used to learn about how much knowledge students have gained as well as to spot areas of weakness. The findings are used for improving and bolstering teaching methods in a bid to improve students’ academic skills. The test also includes a survey of students and teachers about their daily habits and learning environments.

From 2019, the education ministry will include English in the assessment as well, in line with the government’s plan to improve students’ English abilities.

The government conducted similar achievement tests between 1956 and 1966. But they were discontinued due to intensified competition among schools. In a bid to push up student scores, some schools held extra classes solely to prepare for the test, and some teachers even helped children cheat in an effort to get higher scores.

Why did the government relaunch the assessment?

The system resumed mainly because of mounting concerns over the deterioration of students’ academic abilities after the government introduced the yutori (relaxed) education policy in the 1990s.

The policy effectively cut Saturday classes and reduced textbook sizes in an effort to nurture creativity rather than rote memorization, easing the pressures of school life.

But the “PISA shock” in 2003 — when Japanese students’ rankings dropped in math and reading in a global survey — prompted the government to revise the relaxed education policy and launch the annual assessment as a means to reverse the decline in academic ability.

The survey was the Programme for International Student Assessment conducted by the Organisation for Economic Co-operation and Development. In the 2003 PISA survey, Japan dropped to No. 6 from the top spot in math and to No. 14 from No. 8 in reading compared with three years before.

What can be learned from this year’s results?

This year's results were similar to last year's: Students continue to have difficulty solving complex problems that require applying their basic knowledge of Japanese and mathematics, though they demonstrated a solid grasp of the basics.

Like in the previous assessments, Akita, Ishikawa and Fukui prefectures topped the charts in most of the subjects, while Okinawa Prefecture ranked at the bottom in all of the assessments for junior high school students.

But the ministry said the gap between the national average and the scores of the prefectures ranked at the bottom has narrowed in recent years, thanks to improvement efforts in the low-ranking prefectures.

What has been a key concern in conducting the assessment?

One major concern has been the disclosure of the test results, which is believed to pressure schools and teachers into focusing excessively on training students simply to achieve high scores.

To avoid triggering excessive competition among schools, the education ministry has been disclosing only the average scores by prefecture and giving individual scores to municipal boards of education, instructing them not to publicize the results.

However, several prefectural governments, including Osaka and Akita, disclosed the average scores by municipality, while Tottori and Saga prefectures went even further by disclosing the average scores of individual schools.

Given this reality, the education ministry changed its policy in 2013, conditionally allowing municipal boards of education to make public the average test scores of individual schools.

Another issue has been several cases of cheating reported in the past decade, including an incident in which an elementary school teacher pointed out a student’s wrong answer during the test. Another recent case took place at a junior high school in Naha, Okinawa, where teachers did not submit the answer sheets of a few students who did poorly to avoid lowering the school’s average score, according to media reports.

How are the test results used?

Analyzing the results has been considered useful in identifying students’ weaknesses and discovering more effective teaching methods to improve their academic skills.

The survey also provides information on the relationships between students’ daily habits and their academic skills. Among such findings: Those who skip breakfast or use smartphones for long hours — or those from low-income households — tend to score low on the test.

But after a decade, pundits say it is becoming meaningless to spend roughly ¥5 billion a year on a test that simply confirms the same results over and over again. This year, ¥4.8 billion was spent on the test, the education ministry said.

Some say it would be more helpful to use the funds to support schools, such as employing more teachers to mitigate the increasing burden on educators.
