Let’s take a closer look at what we are doing and what high-performing nations are doing differently. These are broad generalizations and need to be explored more closely. The first difference relates to accountability and respect. It seems the more autonomy, professional responsibility, and respect teachers are given, the better the results. The trend in the U.S. is in the opposite direction. As trust breaks down, the push for external controls, incentives, and accountability increases. High-stakes testing, standardization, and market management become more important. We are de-professionalizing education and turning schools into businesses. We have discussed these issues at length earlier, and in general we seem to be moving away from what works in other countries. The one exception seems to be class size: many of the high-performing nations have larger classes, but their strong cultures of learning and respect for teachers make this feasible.
However, we could also look at international comparisons by seeing how different U.S. school populations compare. What if we compare American students according to rates of poverty, which have been shown to be highly related to differences in student achievement? Using the 2009 Programme for International Student Assessment (PISA) tests in reading, U.S. schools with fewer than 10 percent of students in poverty ranked first among all nations, while those serving more than 75 percent of students in poverty ranked about fiftieth (Darling-Hammond, 2012).
If we break out the U.S. scores by the percentage of a school’s students in poverty, how would they rank in international comparisons using the Progress in International Reading Literacy Study (PIRLS) and the Trends in International Math and Science Study (TIMSS)?
In the PIRLS, U.S. students in schools with a less than 10% poverty rate, who constituted 13% of all U.S. students, scored the highest; those in the 10%-25% range ranked second; and those in the 25%-50% range ranked behind only Sweden, the Netherlands, and England. In the TIMSS fourth-grade science rankings these same groups ranked first, second, and fourth respectively. (Bracey, 2007, p. 133)
Breaking the data down by poverty rate shows wealthier students in the United States outperforming all other nations. The strong relationship between poverty and test scores seen in the PIRLS data is replicated in the Scholastic Assessment Test (SAT), in the Trends in International Math and Science Study (TIMSS), and in the National Assessment of Educational Progress (NAEP) (Bracey, 2009, p. 4). So perhaps we should be looking to our own high-scoring schools in the U.S. to see what they are doing that we can learn from.
It is the high-poverty schools in the U.S. that need to improve, not the wealthy ones, but the reforms being imposed on these poor schools are the opposite of what is working in the high-scoring nations and of what is used in the most elite and highest-scoring schools in the United States (Evans, 2000). No respectable independent private school in the United States, charging tens of thousands of dollars in tuition, uses the reform policies being legislated for schools serving low-income families. If it did, it would go out of business. What might we learn from such schools?
Though U.S. average scores are middling internationally, many of our schools rank among the best in the world. How do we explain that? Another way of looking at the international assessment data is to consider the actual numbers of students who score well.
A publication from OECD itself observes that if one examines the number of highest-scoring students in science, the United States has 25% of all high-scoring students in the world (at least in “the world” as defined by the 58 nations taking part in the assessment—the 30 OECD nations and 28 “partner” countries). Among nations with high average scores, Japan accounted for 13% of the highest scorers, Korea 5%, Taipei 3%, Finland 1%, and Hong Kong 1%. (Bracey, 2009, pp. 2-3)
Given its larger population, the United States would be expected to account for a larger share, but looking at the highest scorers gives a different impression than looking at national average scores, which include all students. What we have not discussed, but which is an important question, is who and what are being assessed. How should we interpret the results of these assessments? Many questions and concerns can be raised. One final question: though the United States has consistently scored low to middling since 1964, why do these scores become a concern only in times of economic turmoil, and how do we account for times of prosperity with the same scores?