Technology, Growth, and Development: An Induced Innovation Perspective. Vernon W. Ruttan. 2001. (Textbook)
Part One—Productivity and Economic Growth:
Throughout history, and in today’s developing countries, sustained economic growth achieved through rising productivity has been the exception rather than the rule. Over the last two centuries this changed markedly in Europe and the English-speaking countries, as productivity rose sharply with the Industrial Revolution.
This growth continued due to a series of advances in general purpose technologies that have had a pervasive impact well beyond the industries in which they originated. This sequence included advances in manufacturing (factory assembly of mass produced interchangeable parts), agriculture, sources of energy, particularly electricity, the chemical industry, the aeronautical industry, the digital and computer industry, and the biotechnology industry. In 1957, Solow found that 80% of US worker productivity growth from 1909 to 1949 was from changes in technology. Thus, government may increase the rate of economic growth by pursuing an active technology policy.
The high economic growth of the US and other developed countries slowed during the last quarter of the 20th Century. Six possible explanations include increased energy and raw material prices, changes in capital formation, decreased infrastructure growth, an increased share of labor in services, decreased R & D spending, and measurement and data problems arising from the shift toward harder-to-measure services. Explanations considered and dismissed include the costs of pollution abatement and safety regulations, depletion of natural resources, and a declining work ethic.
Labor has been reallocated from sectors with increasing productivity to the service industry, which lacks gains in productivity. In the US, employment has declined in agriculture from 50% to 2% from 1870 to 1990 and in manufacturing, mining, and construction from over 30% to less than 20% from 1950 to the 1990s, while it has increased in the services to over 75% by the late 1990s. (Fig 1.2)
This “Service Sector Cost Disease” describes how stagnant productivity in one sector, producing rising prices without increased output, dampens growth for the entire economy. It is illustrated by a two-sector simulation over thirty years with productivity increasing 3% annually in manufacturing (automobiles) but 0% in services (education). Thus, the productivity of services must increase for economic growth to continue. (Table 1.1)
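The thirty-year, two-sector simulation described above can be sketched numerically. The 3% and 0% growth rates come from the text; the equal 50/50 split of labor between the two sectors is an illustrative assumption, not from the book.

```python
# Two-sector "cost disease" sketch: productivity grows 3%/yr in
# manufacturing (automobiles) and 0%/yr in services (education) for 30
# years, with labor split 50/50 between sectors (illustrative assumption).
years = 30
g_manu, g_serv = 0.03, 0.0

manu = [(1 + g_manu) ** t for t in range(years + 1)]  # output per worker
serv = [(1 + g_serv) ** t for t in range(years + 1)]

# Economy-wide output per worker is the labor-weighted average.
total_start = 0.5 * manu[0] + 0.5 * serv[0]
total_end = 0.5 * manu[-1] + 0.5 * serv[-1]

print(manu[-1])                  # manufacturing gain after 30 years, ≈ 2.43x
print(total_end / total_start)   # economy-wide gain, only ≈ 1.71x

# If wages equalize across sectors, the relative price of services rises
# by the full manufacturing productivity factor: higher prices, no more output.
rel_price_services = manu[-1] / serv[-1]   # ≈ 2.43
```

The stagnant sector drags the economy-wide growth rate well below the progressive sector's rate, which is the arithmetic behind the conclusion that service productivity must rise for growth to continue.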
Part Two—Sources of Technical Change
The emergence of new general purpose technologies is the major basis for increased productivity from technology. The fusion of science and technology for this transition began with the creation of industrial research laboratories (beginning with Edison in 1888) and publicly funded agricultural research stations and research universities. The 1945 Vannevar Bush report (commissioned by FDR) endorsed public funding of basic research, which became the dominant pattern for several decades. Changes in the relative prices of factors of production also spur innovation, such as the substitution of capital for labor when labor expenses increase.
Institutional innovation to support growth is necessary during changing circumstances. Political leaders may organize collective action for this socially desirable change, but only if they are rewarded by greater prestige and stronger political support. This change may be resisted because of institutional drift to protect vested interests, industries, workers, or intellectuals fearful of technology. In any event, long term resistance to technology has seldom been successful. Also, continuing innovation may be discouraged by path-dependent “lock-in” for an established inferior technology, such as QWERTY or VHS.
Outcomes of institutional change vary greatly according to the power structure of vested interest groups, ideologies, and cultural traditions. English enclosure movements hurt farmers to benefit landowners. Chinese agricultural decollectivization benefited farmers. Argentina’s political dominance by the landed aristocracy impoverished smaller farmers. And the 1862 US Homestead Act created opportunities for small farmers.
Governmental and other nonprofit institutions have been established to advance basic scientific knowledge because benefits may not be profitable for individual firms, universities, or even nations. This public support produced most of the technical advances that led to the computer revolution, major new pharmaceuticals, the biotechnology industry, and many improvements in agriculture.
Government action may be required to supply public goods because special interest “distributional coalitions” make political life divisive, limit the capacity to reallocate resources, and slow technical development and economic growth. Models, such as the tragedy of the commons, the logic of collective action, prisoner’s dilemma game, and mechanism design, are profoundly pessimistic about the ability of individuals, acting alone or in cooperation, to achieve common action.
Part Three—Technical Innovation and Industrial Change
For new ground breaking technologies, the private sector was the major contributor for the mechanization of the industrial revolution and agriculture and for the early discoveries of electricity. However, public support was the major contributor for agricultural science, chemistry, computers, and biotechnology, particularly during the early non profitable stages.
Agriculture: Incentives for private development were better suited for mechanization than for science based techniques that had spill-over of benefits. Hence, private sector mechanization increased productivity by increasing the area cultivated. Largely publicly funded science-based biological and chemical technology increased productivity by increasing output per unit of land area. Beginning in the 19th Century, sources of funding for agricultural science shifted from private to public in England, then in Germany for agricultural research stations, and then in the US for agricultural research stations and research universities. Improved farming practices and the international wheat-breeding system provide powerful evidence for the productivity of publicly supported research.
Light, Power, and Energy: The Industrial Revolution saw greatly increased productivity with the transition of power for manufacturing from water power to steam engines in the mid-1800s and then to electric motors by the 1920s. Sources of energy for this transition progressed from flowing water to coal to oil to natural gas. Long distance transmission became possible after the change from DC to AC and the invention of the steam turbine. Eventually, entire systems of manufacturing were redesigned, first by replacing steam engines in turning long line shafts, then by use of multiple smaller motors and shafts, and finally by replacing shafting altogether. Expected progress toward alternate energy sources in the late 20th Century did not occur due to progress in natural gas production and electricity generation. However, contributions from some of these sources, such as photovoltaic cells and pure hydrogen fuel, will likely increase greatly by the mid-21st Century because of technical progress, pollution from fossil fuels, and the possibility of global warming.
Chemistry: The chemical industry is one of the first modern industries in which technical change depended upon prior scientific research. Governments have played an important role beginning in Germany, and then in the transfer of German technical knowledge to the US and UK after World Wars I & II. By the early 20th Century, German organic chemistry and high pressure chemical reactions dominated world markets for dyes, drugs, and other products. By the 1950s, US polymer-based synthetics and continuous process technology for large scale production positioned it well ahead of Germany. The growth of the petrochemical industry has slowed substantially since the 1970s in association with technical maturity, environmental demands, and decreasing budgets for R&D. It is doubtful that it can ever again play a similar dynamic role in economic growth.
Computers: Almost all technical advances in computers were publicly funded. Since the 1930s, the need for improved electric switching led to a progression from vacuum tubes to transistors, semiconductors, integrated circuits, and microprocessors. Consequently, electronic computing evolved from huge vacuum tube computers to progressively much smaller, much faster, and much cheaper mainframe computers, minicomputers, and microcomputers. The transition to the microcomputer, made possible by Intel’s programmable chip of 1969, led to the development of the PC market by the Apple II and IBM PC in the 1970s and 1980s. This was accompanied by a rapidly expanding independent software industry led by Microsoft, which provided MS-DOS (the Microsoft disk operating system) for the Intel 8088 16-bit microprocessor of the new IBM PC in the 1980s.
The computer, semiconductor, and software industries have been uniquely influenced by public policy. The first computers and semiconductors were developed and supported by procurement from the US military. The DOD and NSF supported fundamental research and graduate education for the development of software. The internet was developed by the Defense Advanced Research Projects Agency (DARPA). The rate of return on public investment in these industries has been high, in the 50-70% range. It is simply not credible to assume that the market could have developed anywhere nearly as rapidly in the absence of the large public support that began in the 1940s.
Biotechnology: Prior to the 1970s, almost all research was conducted by universities and the federal government. This produced four major advances in molecular biology (below the cellular level) that led to the development of biotechnology: 1. Identification of DNA as the physical carrier of genetic information in 1944. 2. Discovery of the helical structure of DNA in 1953. 3. Invention of gene splicing to insert genes from a foreign organism into a host genome in 1973. 4. Invention of hybridoma (fusion) technology to form a hybrid cell with nuclei and cytoplasm from different cells in 1975.
These breakthroughs led to new biotechnology advances: 1. Cell and tissue culture technology that regenerates entire organisms from single cells or tissues. 2. Recombinant DNA (rDNA) technology that joins pieces of DNA from different organisms. 3. Cell fusion technology that combines a myeloma cell and a lymphocyte to produce monoclonal antibodies. 4. Protein engineering to create new proteins with specific catalytic or therapeutic properties.
In the late 1970s, entrepreneur-scientists created new university-industry relationships and genetic engineering start-up companies, such as Genentech. By the mid-1990s, there were 1200 small- to medium-size research-intensive dedicated biotechnology firms in the US. These start-ups lacked the capacity for large volume manufacturing, regulatory clinical testing (which averaged 100 months), and necessary extensive distribution networks. Consequently, a highly complementary, rather than competitive, three-way relationship has evolved between universities, biotechnology firms, and multinational pharmaceutical and agrochemical companies, such as Genentech’s contract with Eli Lilly to develop bacterially produced insulin. Marked growth is expected for the industry in the 21st Century, possibly similar to that of the computer industry in the late 20th Century.
Part Four—Technology Policy
For three decades after World War II, an implicit social contract between the federal government, the scientific community, and universities assured a steady stream of scientific advances that translated into new weapons, new medicine, new materials, new products, and new jobs. Beginning in the 1970s, this arrangement was challenged by the Environmental Movement, conservative anti-government ideology, and the end of the Cold War. Today’s policy choices are whether to continue with widening income differences, environmental destruction, and decreasing world security, or to acquire the vision for sustainable development and convergence of the rich and the poor. Three systems are compared.
The American system of technical innovation: “The American System” of assembly of complex products from mass-produced interchangeable individual parts began with government funded manufacture of firearms in US Armories in the mid-1800s. It then spread to the private sector along with the rise of the machine tool industry that supplied the necessary precision for interchangeable parts. High volume “mass production” increased manufacturing’s share of commodity production from 10% to 50% between 1800 and 1900. This system became most highly developed with the Ford Model T assembly line, which decreased assembly man-hours from 12.5 to 1.5 and decreased price by two-thirds. From 1903 until the last Model T in 1927, the US GNP grew at 7% annually. At the same time (1911), Principles of Scientific Management by Frederick Taylor introduced time and motion studies to increase efficiency.
The Ford era of “classical mass production” ended in 1926, when General Motors introduced the era of “flexible mass production,” characterized by multiple models with annual changes, as well as a system for purchase on credit. However, after World War II, the US automobile industry declined due to markedly decreased innovation and a slowing of productivity growth to 5% annually for 20 years, then to 2.5% after 1965. The ascendancy of a new business school trained managerial elite, with no experience and little appreciation of manufacturing and process technology, was an important source of this loss of US competitive leadership.
In the early 20th Century, science-based technology emerged due to critical institutional innovations, including agricultural experiment stations, industrial research laboratories, research universities, and support for public universities to develop necessary human capital. The US post-World War II lead in the agricultural, electrical, chemical, aircraft, defense, computer, and biotechnology industries was associated with a rapidly growing, largely publicly funded R & D establishment that included the NSF, NIH, DOD, AEC, NASA, and the Department of Agriculture. Additional factors included small start-up firms, venture capital firms, military procurement, intellectual property rules that encouraged cross-licensing, and substantial spillover from military and space technology until the 1970s.
After declining in the 1970s, confidence in American leadership was revived by the 1990s by US dominance of the information revolution. At this time, structural changes in technical innovation included 1) greater reliance by US firms on research in collaboration with federal laboratories, universities, and other firms, 2) increased location of R & D facilities in other countries and by other countries in the US, 3) greater reliance by US universities on industrial funding, and 4) changes in patenting and licensing commercially oriented research.
The Japanese system: Japan was the first nonwestern country to successfully challenge the dominance of western technology. By the 1880s Japan’s economy achieved a “take-off” to sustained economic growth due to the latecomer’s advantage of “catch-up” by technology transfer. This began in the textile industry with low-cost Japanese labor, but progressed to the Japanese national technology system: 1) transfer of technology from abroad, 2) strong public support, 3) rapid adoption of imported technology, and 4) development of the capacity to innovate and manufacture. Other factors included the Ministry of International Trade and Industry (MITI), which targeted a succession of industries for technological catch-up; high rates of saving and investment; low consumption; the Deming “total quality control” system; and “lifetime employment guarantees” that encouraged investment in workers’ skills.
Japanese producers successfully challenged global leadership in a series of industries, including textiles between World Wars I & II, steel and ships in the 1960s, consumer electronics in the 1970s, and automobiles, machine tools, and several areas of computers and semiconductors in the 1980s. Between 1970 and 1980, the market share of auto imports to the US increased from 4.2% to 22%. The Japanese GDP achieved a “miracle” growth rate of 9% per year during the earlier catch-up phase, fell to a still healthy 4% per year during the transition to higher-value products in the 1970s and 1980s, but began a long recession in the 1990s. Several factors are believed to have contributed to this decline: 1) Being a borrower rather than a creator of science and technology. 2) A financial sector that protected inefficient companies. 3) Competition from other East and Southeast Asian economies that adopted the Japanese system. 4) A strong reaction to its protectionist strategy by older industrialized countries.
The German System: In the early 19th Century, German institutional innovations included the modern research university, beginning with Humboldt University in Berlin in 1809, and publicly supported agricultural experiment stations, beginning in 1852. Advances in chemistry, physics, and biology as well as technology transfer from the US and Britain led to industrial dominance for synthetic dyes, heavy chemicals, pharmaceuticals, and electrical machinery by the early 20th Century. After World War II, Germany made a remarkable recovery, with growth only slightly lower than in Japan, mostly from reestablishment of the same institutions and industries that were dominant before World War I.
The automobile industry is the major exception to the science-based development of German industry. The Volkswagen project begun by Hitler in 1937 followed the Ford system with a single model through the 1950s, but with two important differences—an emphasis on technical improvements and a cooperative rather than an adversarial relationship with its trade unions. When Toyota exceeded Volkswagen in exports to the US in 1975, Germany responded with a new class of luxury cars from Daimler-Benz, BMW, and Audi that were smaller, more powerful, and more reliable than US luxury cars.
Germany does have some disadvantages, including the highest paid Western workforce, conservative financing that inhibits venture capital, and a small domestic market. It has dealt with these by a focus on high value products, one of the strongest science and technology bases in the West, and developing the larger market of the European Union.
Discussion of the three systems: In the 1980s, science and technology development was characterized as mission oriented in the US to support defense, space, and computers, diffusion oriented in Germany to spread advances throughout most industries, and both mission and diffusion oriented in Japan to first develop a few target industries and then spread advances across multiple industries. Trajectories of new technologies have been divided into three phases. In the post-World War II era, the US has led in the emergence phase that requires sophisticated R & D and flexible financial institutions to develop new trajectories. Japan has excelled in the consolidation phase that exploits these new trajectories and transfers technologies from one trajectory to another. Germany has established a superior maturity phase that requires highly skilled labor and production engineering. Both Germany and Japan lack venture capital systems comparable to that of the US.
In the half-century since World War II, the US technological leadership severely eroded as it became a world leader in inequality and national debt. This probably reflects some combination of an inherently transient lead until competitors recovered from World War II and new factors like globalization and suboptimal exploitation of technology. Proposed solutions for the US have included addressing low savings rates and high capital costs, formation of a strategic or managed trade policy, and more public and private support for commercial technology development.
In any event, science-based industries represent leading sectors that tend to drive and shape technical change and economic growth. The ability of a high-wage economy to compete in international markets is increasingly dependent on the science-based industries that require strength in scientific education and research. Consequently, government direction and assistance are warranted.
Writing at the end of the 20th Century, the author believes that each of the three systems discussed face major difficulties. The US will be forced to confront its enormous income inequality and large deficiencies in education and health services. Japan will be forced to modernize its largely traditional service economy, particularly its financial markets, and to change its economic policies beyond those of a developing state. Germany (and Europe generally) will be faced with completing the economic and political integration of the European Union.
Technology, Resources, and the Environment: Since World War II, intellectual and populist challenges to science and technology emerged regarding resource depletion, pollution, global warming, and other global environmental changes.
Resource depletion has become less urgent due to induced technical change in resource exploration, backstop technology, and raw material utilization. From 1870 to 1960, the average private cost of extraction, in constant prices, fell for almost all extractive products. Also, material use intensity decreased due to dematerialization (smaller automobiles), substitution, recycling, and waste mining.
Pollution remains problematic because open-access resources such as water, air, and natural environments continue to be undervalued in the price system. Nonmarket solutions were introduced to manage this market failure. Clean air and water laws greatly reduced air pollution from automobile and industrial emissions, acid rain from sulfur dioxide in power plant emissions, and water pollution from various waste products. Nevertheless, these gains need to be protected and progress needs to be extended to other areas. The driving force behind this growth of the environmental technology industry has been regulation or threat of regulation.
Global warming first became a concern in the 1960s when atmospheric carbon dioxide was found to be 25% higher than at the beginning of the Industrial Revolution. By the end of the 20th Century, the balance of evidence supported a discernible human influence on global warming that could increase temperatures from 1.8-6.3 degrees F by 2100. This is a significant threat to human health and agriculture. While unsafe water, inadequate sanitation, and particulate pollution are found in poor populations, high carbon emissions come almost entirely from rich populations. (Fig. 12.6)
Preventionists argue that approaching catastrophe requires immediate action, while adaptationists argue that change will be slow enough to rely on market forces. Limited studies in the 1990s estimate a cost of 2% of GDP for a 50% reduction of US carbon emissions by 2050 and an interval of several decades till benefits exceed costs. Models show that even a modest carbon tax would be a powerful inducement to bias technology in an energy and carbon saving direction that could decrease costs.
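The inducement mechanism those models describe can be illustrated with a toy cost comparison (all prices and tax levels here are hypothetical, not from the text): a tax on carbon content changes which energy technology minimizes cost, biasing investment and R & D toward the carbon-saving option.

```python
# Toy induced-innovation sketch (illustrative numbers only): delivered cost
# per unit of energy = production cost + tax * carbon intensity.
def unit_cost(prod_cost, carbon_intensity, tax):
    return prod_cost + tax * carbon_intensity

coal  = dict(prod_cost=40.0, carbon_intensity=1.0)   # cheap, carbon heavy
solar = dict(prod_cost=55.0, carbon_intensity=0.0)   # dearer, carbon free

for tax in (0, 10, 20):
    ranked = sorted([("coal", unit_cost(**coal, tax=tax)),
                     ("solar", unit_cost(**solar, tax=tax))],
                    key=lambda kv: kv[1])
    print(tax, ranked[0])   # at a tax of 20, the ranking flips to solar
```

Even before the ranking flips, the tax narrows the cost gap, which is the sense in which a modest tax biases the direction of technical change rather than merely raising prices.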
UN conferences in Rio de Janeiro in 1992 and Kyoto in 1997 produced fairly broad, nonbinding agreement on the requirements for progress: 1) Substantial reduction in global carbon emissions, with rich countries transferring resources to poorer countries. 2) Development of technologies, including increased nuclear power, solar power, fossil fuel conversion to hydrogen fuel, and storing carbon underground. 3) Creation of institutions for monitoring and enforcement (the most difficult task). The costs of these changes may be mitigated by new technology and anticipated growth.
US Science and Technology Policy: The US agricultural and industrial preeminence of the late 19th and early 20th Centuries was not a product of science-based technology, although several federal research bureaus were established at that time. The 1945 Bush report (commissioned by FDR) promoted government support for research, particularly basic research, for both national security and commercial applications. Subsequently, US government support for science and technology markedly increased after World War II and during the Cold War.
By the 1980s, a complex, four quadrant relationship between government, science, technology, and industry had developed. These quadrants included government-supported curiosity-inspired basic research, government-sponsored applied science and technology, government and privately-supported use-inspired basic research, and privately funded applied technology. (Fig. 13.1) During World War II and the early Cold War, large flows of government resources into Rickover’s Quadrant for weapons development, atomic energy, and exploration of space led to “Big Science.”
The underinvestment rationale argued that public investment was necessary for a socially optimal result because private firms would underinvest in research that was unprofitable for them but produced beneficial spill-over social goods for other firms and consumers. By the 1990s, multiple studies showed that social rates of return were significantly higher than private rates of return for investment in basic research and even for applied research.
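The gap between private and social rates of return can be illustrated with a toy calculation (all cash flows hypothetical, not from the studies cited): when spill-over benefits captured by other firms and consumers are added to the investing firm's own returns, the internal rate of return on the same R & D outlay roughly doubles, while the firm's investment decision sees only the private figure.

```python
# Private vs social internal rate of return on a hypothetical R&D project:
# the firm spends 100 and captures 30/yr for 5 years; spillovers deliver
# another 30/yr to other firms and consumers (all numbers illustrative).

def irr(cashflows, lo=0.0, hi=2.0, tol=1e-6):
    """Bisection search for the discount rate at which NPV = 0."""
    def npv(r):
        return sum(cf / (1 + r) ** t for t, cf in enumerate(cashflows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid) > 0:   # NPV falls as the rate rises
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

private = [-100] + [30] * 5   # returns captured by the investing firm, ≈ 15% IRR
social  = [-100] + [60] * 5   # including spill-over benefits, ≈ 53% IRR

print(round(irr(private), 3))
print(round(irr(social), 3))
```

A firm deciding on the private 15% may decline projects that society, earning 53%, would want funded, which is the underinvestment argument in miniature.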
Critics of the underinvestment rationale argue that necessary information is rarely available for assisting strategic trade industries or for preventing “lock-in” of less efficient technologies. They argue that greater weight should be given to transfer of technology, since the US has lagged behind foreign competitors in commercial development. Proponents argue that results of basic research cannot be predicted but increase the number of options for technical development and commercial success.
For allocation of government resources for research, only specialized scientists truly understand the prospects for experimental success for projects in their own fields. Hence, they strongly advocate allocation by peer reviews according to two important internal criteria: 1) Is the field ready for exploitation? 2) Are the scientists in the field really competent? However, input by outsiders according to external criteria is also important. Scientists from neighboring fields help determine relative importance and prospects for cross-over benefits. Ultimately, allocation is determined by the political process. Unfortunately, cost-benefit analysis is of limited use for the political process because complexity and necessary assumptions produce fragile and unrealistic results.
Intellectual property policy to encourage development remains controversial. Patents are thought to provide only weak encouragement to research and limited protection against imitation. An evolving primary use for patents is in cross-licensing agreements that will likely become the dominant mode of settling intellectual property conflicts. International cooperation is pursued through organizations and agreements such as the WTO, GATT, and TRIPS (trade-related intellectual property rights).
It is hard to overestimate the role of government policies for military procurement in technology development. In the US, the defense establishment came to dominate R & D expenditures between World War II and the end of the Cold War. Commonly cited spin-offs include jet engines and airframes, insecticides, microwave ovens, satellites (for telecommunications, navigation, or weather forecasting), robotics, medical diagnostic equipment, lasers, digital displays, Kevlar, fire resistant clothing, integrated circuits, and nuclear power.
However, military R & D also carries the opportunity cost for whatever civilian technology development may have been foregone. Also, substantial military innovations of systems management were ineffective in public and private sector environments not conducive to command-and-control. By the 1980s, the role of the military was passing to the civilian economy with transfer to spin-ons of off-the-shelf technologies from civilian to military applications.
Politics of Science and Technology Policy: The process of allocation of R & D funds in the US is quite decentralized. The National Science Foundation (NSF), governed by scientists appointed by the president, was established in 1950 but with a role limited to support for basic research. Prior to its establishment, multiple other federal programs were already allocating funds for research, including the Departments of Agriculture, Interior, Labor, and Commerce, as well as the National Institutes of Health, the Atomic Energy Commission, and the Office of Naval Research. Within months of the Sputnik launch, NASA and ARPA (the DoD Advanced Research Projects Agency) were added. Consequently, the flow of federal resources to its own laboratories, to the private sector, and to universities is exceedingly complex.
During the first two postwar decades, federal R & D support expanded rapidly, initially primarily for the military and atomic energy, then with space exploration added in the 1960s. During the Johnson and Nixon administrations of the 1960s and 1970s, federal R & D expenditures declined as resources were shifted to areas of social needs. R & D expenditures for energy briefly rose then fell in the late 1970s and early 1980s. During the Reagan administration of the 1980s, an increased share of federal R & D for the military, from 48% to 64%, resulted in an overall increase. With the end of the Cold War, expenditures slowed during the Bush administration and declined during the Clinton administration. (Fig. 13.4)
Institutional science advice to the president began with the effective collaboration of FDR and Vannevar Bush as head of the Office of Scientific Research and Development (OSRD). Several reorganizations followed, including formation of the Office of Science and Technology (OST) by President Kennedy in 1962, which added civilian areas to its portfolio and took long term planning from the NSF. In 1972, during a time of tension with academics, Nixon abolished both the OST and the President’s Science Advisory Committee (PSAC) and demoted the advisory function to the NSF. In 1976, the president coordinated R & D with the help of his science advisor and ad hoc panels of scientists and engineers. In the 1980s, Reagan shifted this activity from civilian to military issues.
In response to erosion of US technical leadership, technology policy assumed an increasingly important role in the Bush and Clinton administrations. The 1990 Bromley report by Bush’s OSTP director and 1992 Clinton-Gore report helped to defend the federal role in R & D and emphasized support for commercialization and transfer relative to basic science. Hence, S & T policy continued the shift away from the Cold War focus on the defense, space, and nuclear fields toward areas that would enhance US competitiveness in world markets. A 50-50 split was achieved between military and civilian technology before the end of the decade.
Congress has been more hesitant than the president in establishing institutions for scientific and technical advice. Its only institution has been the Office of Technology Assessment (OTA), which was created in 1973 but assassinated by the fiscal revolutionaries of 1994. Since the demise of the OTA, the National Research Council (NRC) has been the only objective source with the necessary range and depth for major science and technology issues. The NRC is the operating arm of the nonprofit National Academy Complex that includes the National Academy of Sciences (1863) and associated academies. Other nonprofit organizations, such as the Carnegie Corporation, the Brookings Institution, and the Rand Corporation, have been helpful for selected issues. The large population of foundations and think tanks often pursue ideological agendas that result in less useful hortatory and polemic advice.
Issues of Science and Technology Policies: Although a federal R & D budget can be “added up,” it has never been allocated or managed as a coherent whole. With increasing budget constraints in the 1990s, Congress sought better information for allocation of resources for research. Although scientists happily accepted economic analysis showing higher social than private returns for R & D, they feared that cost-benefit analysis by the same methodology would lead to mindless application of incomplete results. Nevertheless, the author maintains that when applied with skill and insight, rate of return analysis has been exceedingly useful.
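The rate-of-return analysis mentioned above can be sketched numerically. The figures below are purely hypothetical (not from the text): a research program with three years of costs followed by ten years of social benefits, evaluated by its internal rate of return (the discount rate at which net present value is zero).

```python
# Hypothetical illustration of rate-of-return analysis for an R & D program.
# All cash-flow figures are made up for the sketch; none come from the text.

def npv(rate, cash_flows):
    """Net present value of cash_flows (year 0 first) at a given discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=1.0, tol=1e-6):
    """Internal rate of return by bisection (assumes NPV falls as rate rises)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Three years of research costs, then ten years of diffuse social benefits.
flows = [-100, -100, -100] + [60] * 10
print(f"social rate of return: {irr(flows):.1%}")
```

A "social" return counts benefits captured by everyone, not just the investing firm, which is why studies of this kind tend to find social returns well above private ones.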
Congress needed answers to two important questions when allocating resources for research: 1) What are the chances of advancing knowledge or technology? 2) What will be the value to society of these advances? The first question can be answered only by scientists or technologists at the leading edge, usually by peer reviews. The judgements of administrators (even former scientists and engineers), planners, and economists are rarely adequate. The answer to the second question requires use of formal analytical methods employed by planners, economists, or other social scientists.
State government contributions to R & D have been much smaller than federal contributions. Local critics have feared “spill over” of benefits to other political jurisdictions and use of funding to serve agendas of faculties other than their own. Nevertheless, state technology development programs have had some successes, such as the North Carolina Research Triangle Park, Massachusetts Route 128 development related to Harvard and MIT, and Silicon Valley related to UC and Stanford.
Prior to World War II, most science was “little science,” sometimes with big engineering, such as for the TVA or Manhattan Project. By the 1960s, “big science” had emerged, with monuments such as huge rockets, high-energy accelerators, and high-flux research reactors. This led to three questions: 1) Is big science ruining science? 2) Is big science ruining us financially? 3) Should we devote a larger part of our scientific efforts to bear more directly on human well-being than big science projects do? Weinberg answered that big science is here to stay but at the same time should not be allowed to trample little science. He added that the US should settle on some figure less than 1% of GNP for federally supported nondefense science.
In the 1990s, the Department of Energy (DOE), which employs 30,000 scientists and engineers, is a source of concern about big science, particularly for fusion research. In other areas, Congress has cut off funding for the Superconducting Supercollider project and reduced funding for the Global Climate Change program. Increasingly, international cooperation and partnerships will be required for major projects like the human genome, fusion power, space exploration, particle physics, or global ecological problems. The author projects that political support is unlikely for any new big science in the US.
The federal share of support for university research has been declining since the 1960s. Public support has been eroded by publicity of ethical issues like environmental controversies, misuse of human and animal research subjects, and isolated scientific misconduct. Critics have complained about inflated indirect cost recovery for items such as libraries and even a Stanford luxury yacht and about bypassing federal peer and merit review systems by lobbying for congressional earmarks. Critics have called for downsizing research universities, which have increased from fewer than 50 after World War II to over 400 by the late 1990s. Of these, 50 accounted for 51% and 100 accounted for 79% of federal research funding.
Three types of US government investment in technology programs are identified as relatively successful. In procurement-related technology, the government has knowledge of its own needs and the ability to communicate them to suppliers; civilian spillover occurs but is not the primary source of legitimacy. In generic technology, the leading example is university-based research largely funded by the NIH. In client-oriented technology, research has been most successful for specific agricultural missions, such as higher yielding crops, animal feed conversion, management of soil and water, and economics of farm operation.
The fourth type of investment is the attempt to pick winners for development in commercial markets. Many regard the results of this as unequivocally negative, such as in housing technology, supersonic transport, and synthetic fuel. Others argue that government support is needed for projects that are unpromising today but may be promising tomorrow, although support is still more effective in generic research than in commercial markets. Critics continue to argue that private property rights improve incentives, private participants make more efficient choices, business cooperation leads to more efficient research, university development of technology for private use should be direct rather than indirect, and large public projects make pork barrel problems difficult to avoid.
A number of points about public funding of R & D have emerged: 1) Public sector support has played an important role in the emergence of every US industry that is competitive on a global scale. 2) The system of intellectual property rights is more efficient for diffusion than generation of technology. 3) As the cold war ended, the presumption that massive defense-related investment was a pervasive source of spin-off commercial technology returned to the more traditional view of a spin-on relationship between commercial and military technology. 4) Skepticism is increased in both public and private sectors that investments in S & T lead directly to commercial development.
The author concludes that significant constraints are likely for big science in the future. He regards the issues of whether the US is investing too much or too little in R & D or producing too many or too few scientists as unresolved. Perhaps the issue of spill-over of US scientific knowledge to the rest of the world should be dealt with by appropriating more of the knowledge generated abroad. A sharp distinction is made between support of target basic research and generic technology and a more narrow policy of “picking winners” in technology development. In any event, a rate of growth of R & D expenditures that exceeds that of productivity and income will not be sustainable over the long run.
Summary of Findings by the end of the 20th Century: A succession of general purpose technologies has served as important vehicles for technical change and economic growth throughout the economy. In the 19th Century, the steam engine powered the industrial revolution. In the early 20th Century, electricity enabled mass production, communication technology, and consumer electronics. Throughout the 20th Century, chemistry led to agricultural and military advances, as well as new fibers, materials, and pharmaceuticals. In the second half of the 20th Century, computers and semiconductors led to advances for manufacturing, services, and consumers. In the late 20th Century, molecular biology led to the emerging biotechnology industry. A consistent feature of these general purpose technologies has been a lengthy period between their emergence and their impact—a century for the steam engine, half a century for electric power and computers.
Government has played an important role in technology development in almost every US industry that has become competitive on a global scale. Examples include research for agriculture, highway infrastructure for automobiles, military research and procurement for computers, and basic research for biotechnology. Three types of public support have been successful: 1) Direct support for areas with strong government involvement, such as development of the internet by DoD's ARPA. 2) Support for generic technology, such as for molecular biology that led to the biotechnology industry. 3) Support for client-oriented technology, such as agricultural research that led to most increases in plant and animal productivity of the last century. Also, the US decentralized research system gives it greater flexibility to adjust to global circumstances.
Even for relatively mature industries, advances in technology, particularly process technology, can be important sources of productivity growth and competitive advantage. Continued growth of agricultural productivity after maturity resulted in decreased costs and decreased workforce share (to less than 2%) that maintained a dominant US position in world markets while allowing transfer of workers to other industries. On the other hand, the US automobile industry was mature and dominant in the mid-20th Century but then lost both jobs and market share when it lagged behind Japan and Germany in productivity growth as management passed from engineers to business school elites.
Prices of factors of production, particularly for labor relative to capital, powerfully influence the rate and direction of technical change. However, these factors are influenced by political as well as economic markets. In particular, the values of environmental resources, formerly regarded as free goods, have begun to rise. Consequently, technical change trajectories with high energy requirements and material consumption have come under attack as threats to human health and the environment. These perceptions have already stimulated substantial innovation in environmental policy and law in developed countries. Nevertheless, very substantial public sector investment in the generation of new knowledge and new technology will be required to achieve sustainability in the 21st Century.
Prospects for Transition to Sustainable Development: Some resource economists maintain that sustainability merely requires the capacity to supply the expanding demand for substitution of resources and commodities on increasingly favorable terms. Some ecologists argue that the present system is unsustainable for the natural balance and should be replaced. A third group argues that sustainability should include social considerations since technical change assaults community values, rural people, and indigenous communities, as well as the environment. In any event, a successful transition must include enhanced consumption for the vast majority of people now living as well as those to be added to the population in the future.
Wealthy societies that already resist redistribution to address present inequality can be expected to resist intergenerational resource transfer as well. Economists have proposed that a sustainable path of development gives future generations equal treatment with the present generation. The strong sustainability rule requires that the stock of natural capital be held constant or enhanced. The weak sustainability rule holds that the form of replacement for natural capital doesn’t matter if it is replaced by constructed capital to maintain the same aggregate capital stock. Even sophisticated models rely on assumptions that may include biases of conventional opinion and the strong temptation to project catastrophe, such as the 1970s projections of ever-higher petroleum prices.
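The strong and weak sustainability rules can be stated as simple comparisons of capital stocks. The sketch below uses stylized numbers chosen for illustration, not figures from the text.

```python
# Illustrative comparison of the strong and weak sustainability rules.
# Capital-stock numbers are stylized assumptions, not from the text.

def strong_ok(natural_before, natural_after):
    """Strong rule: the natural capital stock itself must not decline."""
    return natural_after >= natural_before

def weak_ok(natural_before, constructed_before, natural_after, constructed_after):
    """Weak rule: only the aggregate (natural + constructed) stock must not decline."""
    return (natural_after + constructed_after) >= (natural_before + constructed_before)

# Deplete 10 units of natural capital but build 15 units of constructed capital:
print(strong_ok(100, 90))           # False: natural capital fell
print(weak_ok(100, 50, 90, 65))     # True: aggregate stock rose from 150 to 155
```

The same development path thus passes the weak rule while failing the strong one, which is exactly where the two camps part ways: whether constructed capital is an acceptable substitute for natural capital.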
Three basic scenarios are presented for global change between the 1990s and 2050. The Conventional Worlds scenario continues present trends that lead to a richer but dirtier world with increased conflict, but has the option for improvement by vigorous government action and institutional reform. The Great Transformation scenario substitutes cultural consumption for material consumption to promise sustainability, with the possibility of even more radical decentralization, small scale, and decreased growth (Ecocommunalism). The Barbarism scenario leads to institutional disintegration, economic collapse, and intensified conflict, with the possibility of a Fortress World option from authoritarian response in more developed countries. (Fig. 14.1).
Successful transformation to sustainability will require major cultural and institutional changes from material and energy-intensive to service and cultural-intensive consumption. The more optimistic scenarios posit continued technological change leading to decarbonization and dematerialization. Productivity gains will be required in the growing service sector if growth of consumption is to continue. The author’s sense is that a substantial number of countries will fail to achieve a transition. He doubts that the New Sustainability variant will be more than partly realized or that the Barbarization scenario will be completely eliminated.
Substantial progress has already been made in some transitions, such as from rural to urban human settlement, from low to high agricultural productivity, and from high to low rates of birth and death. Several other transitions appear to be well underway, such as from high to low levels of energy and materials consumption, from low to high levels of literacy and numeracy, and from early to late death. However, transitions remain problematical for the poorest countries, particularly from failure of institutional development. Food demand is expected to double in the next half-century. Less than 10% of health research funding is directed to more than 90% of the world’s preventable deaths. No coherent system is yet in place for substantial institutional changes for environmental concerns.
The issues of substitutability, obligations toward the future, and institutional design are central to the sustainability transition. The sustainability community regards substitutability between natural factors and constructed factors as severely constrained. Even if investment and technology can continuously increase opportunities for substitution, resource constraints could still leave future generations worse off. If opportunities are even more narrowly bounded and cannot exceed some upper limit, catastrophe is unavoidable.
For obligations toward the future, some economists propose discounting costs and benefits by some “real” rate of interest, but critics insist this is dictatorship of the present over the future that will make molehills out of mountains in fifty years. In any event, efforts must involve some combination of high contemporary rates of saving to defer consumption to the future, high investment in human capital, and more rapid technical change, particularly for resource productivity and substitutability. In the short run, even the Conventional Worlds scenario seems sustainable. Over the long run, almost no scenario involving continuing economic growth appears sustainable.
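The "molehills out of mountains" criticism can be seen in a small present-value calculation. The damage figure and real interest rates below are illustrative assumptions, not numbers from the text.

```python
# How discounting shrinks a distant environmental cost: the present value of
# a damage of 1,000,000 (constant dollars, a made-up figure) fifty years out,
# at three illustrative real rates of interest.

def present_value(amount, rate, years):
    """Standard present-value formula: amount / (1 + rate)^years."""
    return amount / (1 + rate) ** years

for rate in (0.01, 0.03, 0.07):
    pv = present_value(1_000_000, rate, 50)
    print(f"real rate {rate:.0%}: present value = {pv:,.0f}")
```

At a 7% real rate the million-dollar mountain shrinks to a few tens of thousands today, which is why the choice of discount rate dominates long-horizon cost-benefit analysis.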
For institutional designs, the costs of actions that generate the negative externalities for the environment must be internalized for households, private firms, and public organizations. Otherwise, technological development will be biased along inefficient pathways. Unfortunately, these pathways are often selected more for their political acceptability or their consistency with ideological commitments than on the basis of objective knowledge. If humankind fails to navigate this transition, it will be due to failure of institutional design rather than constraints of natural resources or technical innovation. This is not an optimistic conclusion.