It is impossible to overstate the importance of innovation to a society’s health and prosperity. New technologies and new applications of existing technologies will disrupt every dimension of human activity and endeavor. Companies that master the frontiers of emerging technologies will dominate their industries; first movers will reap enormous economic rewards in these and related fields, and the governments to which those corporate leaders answer will enjoy greater power as well. In theory, that success will confer advantages in both economic competition and the military domain.

This competition is frequently described as a new cold war in which the principal rivals are the United States and China, and each has adopted a different model to help it prevail. China is the standard-bearer for a state-directed approach, the shorthand for which is “industrial policy.” In the last two decades, it has produced several national plans to promote technology development, the most (in)famous of which is “Made in China 2025.”

It declared Beijing’s intent to master 10 strategic industries, comprehensively upgrade Chinese industry and ensure its dominance of key technologies. One priority area is artificial intelligence. China’s State Council released a strategic directive in 2017 announcing that the country would achieve “world-leading levels” in AI by 2030; to that end, it developed the Next Generation Artificial Intelligence Development Plan. Funding, which is critical, is opaque, but experts estimate that China will have spent $70 billion on AI by the end of this year — and that is just a down payment.

The U.S. approach is more laissez-faire, with the private sector driving policy and the government largely staying out of the way. This model reflects the experience of the original Cold War, and its proponents expect it to prevail, as it did in that struggle with the Soviet Union.

The cold war metaphor has its critics. Odd Arne Westad, a noted scholar of that era, acknowledges the echoes from that time but calls the description “a kind of terminological laziness” that “obscures more than it reveals.” I believe that today’s competition is even more intense than the original: It may not be bipolar, but it is systemic in nature, the stakes are just as high, and the rivalry is more complex, playing out across more dimensions of national power. Historians will likely conclude that the first Cold War was a much easier fight.

The chief flaw in the Cold War analogy is the claim that the U.S. government was largely hands-off in that struggle (apart from the direct military competition). The West’s victory in the race for technological leadership, the foundation of its eventual triumph in the larger competition, was very much the product of government intervention.

Eric Schmidt, the former chairman of Google, is worried about revisionism that writes out that role, noting that “many of Silicon Valley’s leaders got their start with grants from the federal government,” and including himself on that list. He credited the National Science Foundation and the Defense Advanced Research Projects Agency, a legendary operation in the Pentagon tasked with identifying and funding technologies that will maintain U.S. military superiority.

Another analysis goes even further. A recent study by the Center for Strategic and International Studies (CSIS) concluded that “Silicon Valley would not exist as it does today without the Cold War-era tsunami of federal defense contracts. Not only did the U.S. government provide vast sums of money to develop computing technologies across various small ‘startup towns,’ it also stood as a ready customer long before these technologies were commercially viable.” Silicon Valley, ground zero for capitalism, iconoclasm and entrepreneurism, “was built on a foundation of private defense contracting.”

The balance of power between the government and the private sector has shifted, however, and the private sector is now in the lead. One result of that shift is a steady decrease in U.S. government investment in research and development; one report noted a 68 percent decline in R&D expenditures as a percentage of the federal budget from 1962 to 2017. Worse, there is growing tension, if not outright friction, between the U.S. public and private sectors on how to support innovation and protect national security.

Ominously, as this shift has occurred, the U.S. has been losing its edge. Every two years, the U.S. National Science Foundation releases a report on science and technology indicators; the 2020 edition is expected to conclude that in 2019 China topped the U.S. in research spending for the first time in history.

The implications of this transition are not clear, but one study of artificial intelligence, the most important of the new technologies given its capacity to influence developments across a range of fields, concludes that the U.S. is well ahead of China today but will be overtaken within a decade.

Of course, money isn’t the only factor that determines innovative capacity; every meaningful index uses a range of metrics. The Bloomberg Innovation Index has dozens of data points across six criteria (R&D, manufacturing, high-tech companies, post-secondary education, research personnel and patents). In its 2020 report, Singapore is No. 1, the U.S. is ninth (it was No. 1 when the index debuted in 2013) and China is 15th. Japan is 12th, having dropped three places from 2019.

Toyo University’s Center for Global Innovation Studies has its Global Innovation Index (GII), which uses 58 indicators to measure innovation across 60 countries. It, too, ranks Singapore No. 1, the U.S. ninth and China 15th. The GII ranks Japan 32nd overall, and sixth among the Group of Seven members. Other indexes generally align, although there are some differences, such as the World Economic Forum survey, which ranks Japan sixth.

Japan’s lackluster showing is proof that national strategies — Tokyo has many — and industrial planning are enablers, not determinants, of success. Also important are mobilization of the public and private sectors, the development of resources, both human and financial, the freedom to proceed in unexpected directions — and the freedom to fail. That last factor may be critical: A culture that punishes failure will discourage the entrepreneurism and risk-taking that drives the development of disruptive technologies and their adoption.

Japan has recognized its shortcomings and developed the Moonshot Program, which aims to “aggressively promote challenging R&D rather than improving conventional technologies, [and] facilitate disruptive innovation through enhancing researchers’ trials and errors.” That sounds good, but disruptive technologies disrupt. A social preference for order and carefully regulated development, with the concomitant protection of vested corporate and political interests, will prove to be an insurmountable obstacle to success in this area.

Generating ideas is not enough. They must be developed, turned into products and services and then spread throughout an economy. That demands multiple infrastructures: legal, financial and educational. The educational component must include not just math and science but also ways to spur and sustain creativity. It sounds very much like a whole-of-government and whole-of-society approach, one that demands attention and direction from the highest levels of national leadership.

Brad Glosserman is deputy director of and visiting professor at the Center for Rule Making Strategies at Tama University as well as senior adviser (nonresident) at Pacific Forum. He is the author of “Peak Japan: The End of Great Ambitions.”
