Is science slowing down? A dissenting opinion.

I was a bit perplexed by Scott Alexander’s post on whether science is slowing down, and if it is, what the causes might be. I’d like to push back on a number of things, but first, I’ll briefly introduce what’s going on.

The Argument.

We are confronted with the fact that Moore’s Law has led to transistor density growing at 35% per year, creating a vast increase in the number of calculations that can be performed per second, for a constant dollar expenditure. Great! Then, we are told there are 18x more people working in transistor-related research today than in 1971. That constant growth has required more and more research input. Here’s Scott:

So apparently the average transistor scientist is eighteen times less productive today than fifty years ago. That should be surprising and scary.

He then cites a similar phenomenon in chemical elements, soy and corn crop yields, car and train speeds, and athletic accomplishments. We find a constant growth rate of performance, but also growing human input. Here’s the kicker: despite a 24x increase in the number of scientific researchers, we have only been able to achieve flat or declining Total Factor Productivity (TFP) growth for the economy.
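
To see the arithmetic behind the “eighteen times less productive” claim, here is a back-of-the-envelope sketch in Python. The 35%-per-year and 18x figures come from Scott’s summary above; treating “output” as the delivered annual growth rate is my own simplification.

    # If the delivered growth rate stays roughly constant while the researcher
    # headcount grows 18x, output per researcher must fall roughly 18x.
    growth_rate = 0.35        # transistor density growth per year, roughly constant
    researchers_1971 = 1.0    # normalized effective researcher headcount in 1971
    researchers_now = 18.0    # roughly 18x more effective researchers today

    productivity_1971 = growth_rate / researchers_1971
    productivity_now = growth_rate / researchers_now
    print(f"productivity today relative to 1971: {productivity_now / productivity_1971:.3f}")
    # prints 0.056, i.e. about 1/18th the output per researcher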

Scott then gives us three dominant explanations for why it takes ever more researcher input to achieve a constant rate of growth. I’ll summarize them and Scott’s responses:

(i) I’ll put this in Kenneth Arrow’s terms, when he talked about weapons development at RAND: “… the bulk of the really significant advances are made by a very small handful, and that the possibilities of substituting larger numbers of less skilled personnel are extremely limited.”

Scott responds that “… population growth should have produced proportionately more geniuses.”

(ii) Deteriorating culture and management within the social system of research has led to decreased effectiveness. “The 1930s academic system was indeed 25x more effective at getting researchers to actually do good research.”

Scott responds that this “… isn’t enough. There would have to be a worldwide monotonic decline in every field (including sports and art) from Athens to the present day.”

(iii) “All the low-hanging fruit has already been picked.”

This was Scott’s preferred explanation. He had a nice example: phosphorus, the first element discovered in modern times, was found by examining urine, while the most recent, element 117, took huge cooperating teams of researchers to stabilize for just a few milliseconds.

Where to begin? I will outline some issues in no particular order:

1. All three explanations are going on, and more.

Just because there are more researchers doesn’t mean we are getting more research. What matters is independence between research efforts — in the conjectures, the styles of experimentation, and so forth. Increasing the number of researchers does little good if they all come from a mono-culture that generates groupthink.

Mono-culture can be a big problem, not only in academia, but in defense requirements and management more generally. We need radically different ideas not only to be formulated, but tried out, so long as technically competent people believe in them and are willing to take the risk. As Deirdre McCloskey argues, technological progress only occurred when it did because liberal concepts provided people an equality of dignity to “have a go”: to try things that other people thought were crazy, stupid, or not commensurate with their status.

And this relates back to how research programs are funded and selected. There are more bureaucratic and justification requirements today; innovation requires all sorts of permissions. This was in part a response to the increasing scale of investment for many 20th century projects, a trend which seems to have reversed in the 21st. Still, to get research funded through government or industry, you often have to make what are in effect business plans, and these plans often require explicit justification of what will be done.

Here’s the problem, expressed by Hans Selye in 1959:

… the more manifestly sensible and practical a research project, the closer it is to the commonplace we already know. Thus, paradoxically, knowledge about seemingly most far-fetched impractical phenomena may prove the likeliest to yield novel basic information, and lead us to new heights of discovery.

This seems to have been common sense in years past. Today, however, these business plans and research universities have indeed become worldwide, and possibly too self-referential. They destroy the initiative of what Eric Schmidt and Jonathan Rosenberg call the “Smart Creatives.”

I haven’t provided any concrete data here, but I wouldn’t simply dismiss talent or culture as having an effect. After all, TFP growth isn’t just research innovation; it also reflects management practices, supply networks, and values more generally.

2. Science is Revolutionary, not Cumulative.

What I immediately thought while reading Scott’s post was, what would Karl Popper say?

Scientific progress is about conjecture, testing, discarding what’s false, learning, and renewed conjecture. Perhaps it sounds empirical, but the secret is in the conjecture. Conjecture is based on pattern recognition, deduction, and theory.

Like ants exploring for food, you want conjectures to explore a diverse space instead of relentlessly circling one area. The more far-fetched the idea, the more likely it is to be revolutionary, which pays back even if the probability of success is lower for each idea tried.
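
As a toy expected-value comparison, consider two hypothetical bets; the probabilities and payoffs below are numbers I invented purely for illustration.

    # Two hypothetical research bets, with made-up numbers: an incremental project
    # that almost always delivers a modest gain, and a far-fetched one that usually
    # fails but occasionally transforms a field.
    incremental_ev = 0.90 * 1.5    # 90% chance of a 1.5x payoff
    far_fetched_ev = 0.02 * 500.0  # 2% chance of a 500x payoff

    print(f"expected payoff, incremental:  {incremental_ev:.2f}")   # 1.35
    print(f"expected payoff, far-fetched: {far_fetched_ev:.2f}")    # 10.00

Even at a small fraction of the success probability, the far-fetched bet dominates on expectation; section 3 below adds the portfolio view.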

And significant steps are made through revolutions, or what Thomas Kuhn calls “paradigm shifts.” They do not come from incremental elaborations on a growing base of knowledge. Karl Popper wrote that “the scientific method is not cumulative… it is fundamentally revolutionary.”

So why does progress look like a straight line of cumulative gains? Where does the apparent design of Scott’s “Gods Of Straight Lines” come from, reflected in constant GDP, technology, or athletic growth?

Well, I don’t have a good answer but I think of it like hindsight bias. You measure computations-per-second-per-constant-dollar in a specific way because of the revolution in transistors. But we would define that progress differently if we had never invented transistors. Or consider what the success of quantum computing would mean. Perhaps we don’t even know the relevant measure of what we mean by “computation” for that or other revolutions.

So increasing effort is required to sustain growth within a paradigm, but that effort is also pushing at different frontiers, which creates new paradigms whose growth surpasses the old with far less effort. The old guard can then shift into other sectors. Over and over again, on many layers. This might be a weak dismissal of the Gods of Straight Lines, so we’ll return to it later.

3. Returns are Proportional to Number of Trials, Not Researchers or Dollars.

Here, I’m mostly restating the first two points using Nassim Taleb’s framework.

Taleb reminds us that in a project we face three parameters: cost, schedule, and technical performance. Each is bounded on one side, so increased uncertainty hurts you with respect to cost and schedule, where overruns are unbounded but savings are not, and benefits you with respect to technical or performance outcomes, where the downside is capped at losing your stake but the upside is open-ended.

So how do you collect the positive black swans but avoid the negative? Taleb argued that payoffs from research follow a

… power-law type of statistical distribution, with big, nearly unlimited upside but, because of optionality, limited downside. Consequently, payoff from research should necessarily be linear to number of trials, not total funds involved in the trials.

Taleb and Benoit Mandelbrot recommend the “1/N” research policy, which can be expressed simply as:

… if you face n options, invest in all of them in equal amounts.

Well, usually your “n options” are not ideas, but people. This is why venture capitalists invest in the people and not the idea. It is also why they make a relatively large number of limited investments. Many projects fail, but VCs hedge their exposure to radical cost and schedule growth while staying positioned to collect the successes that return 1,000x.
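
Here is a minimal simulation sketch of that logic in Python. The failure rate, the Pareto tail exponent, and the “20x hit” threshold are assumptions I picked for illustration, not parameters from Taleb; only the shape of the result matters.

    import random

    random.seed(0)

    # Toy model of research optionality: each trial usually returns nothing, but
    # occasionally returns a fat-tailed (Pareto) payoff whose size does not scale
    # with the money staked on it. The loss on any one trial is capped at its stake.
    def trial_payoff():
        if random.random() < 0.95:        # most trials fail outright
            return 0.0
        return random.paretovariate(1.5)  # rare success with nearly unlimited upside

    runs = 5000                           # simulated portfolios to average over
    for n_trials in (1, 10, 100, 1000):
        portfolios = [[trial_payoff() for _ in range(n_trials)] for _ in range(runs)]
        avg_total = sum(sum(p) for p in portfolios) / runs
        hit_rate = sum(1 for p in portfolios if max(p) > 20) / runs
        print(f"{n_trials:5d} trials: average payoff {avg_total:7.1f}, "
              f"chance of a 20x-plus hit {hit_rate:6.2%}")

The average payoff grows roughly in proportion to the number of independent trials, and so does the chance of catching at least one outsized win, which is the sense in which returns track trials rather than dollars.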

I imagine that VC is about revolutions, not cumulative gains. It is very different from the methodical, stage-gate, business-case approach that is mandatory in the Department of Defense, and often used in industry as well.

To summarize this section: scientific progress isn’t related to the quantity of researchers you put in (or the quantity of dollars invested); it is related to the number of independent, non-consensus ideas pursued. Ultimately it is that which changes the “slope” of our straight-line progress.

4. None of this is new or surprising.

The problem of increasing labor input to achieve constant or diminishing technological gains has been debated seriously in the DOD for 60 years or more. I recently commented on this very issue:

This problem has been recognized for a while now. The 1972 COGP Report provided an illustration of what, over the prior decade, had been referred to as the “technological plateau.” It finds that the marginal benefit of R&D investment decreases in a specific technical approach. It implies that disruptive innovations, new ways of doing things, are the only source of long-run progress.

Below is a chart from that COGP Report. You’ll notice that it’s just another way of presenting what Scott and BRJW (Bloom, Jones, Van Reenen, and Webb, the authors of the study Scott draws on) were showing, with dollars, rather than time, on the x-axis.

Another important difference is that the COGP showed a “specific technical approach,” whereas the BRJW charts cut across technical approaches, such as going from mechanical to relay to vacuum tube to transistor to integrated circuit. And of course it is these changes, as well as fundamental changes happening within each paradigm, that continue the growth and mask the diminishing returns.

In any case, when we talk about ships, aircraft, tanks, and other traditional military platforms, it has been clear for a long time that it takes far more effort to achieve small incremental advances. And with flat or declining labor input to R&D, it has been even harder to progress.

But it has also been understood that you can invent your way out, and these inventions can come in big stair-steps. For example, in the Spanish-American War, only a handful of naval shells hit their mark out of many thousands fired. A few years later, after the adoption of continuous-aim fire, naval gun effectiveness increased literally a thousand-fold or more. That’s like a VC success story. And the thing about it was, the invention itself was simple, little more than a modified gun-elevating gear and sight, so the primary impediment to its implementation was Navy culture.

5. Distinguish the tangible from the intangible.

One thing that nagged at me in the analysis was that tangible or physical outputs are different from ideas and other intangible performance outputs.

If I’m measuring naval fire effectiveness, and I’ve increased it by 1,000x, what the hell does another 1,000x improvement look like? Well, it can’t be accuracy; we’ve basically closed that gap. Believe it or not, you actually hit a physical constraint on the accuracy parameter. You aren’t going to be able to put a shell within the area of an atom, and even if you could, eventually your measuring instruments would no longer be able to validate your claims.

Further, that increased accuracy has diminishing returns to human requirements at human scale. This means you have to think of naval fire effectiveness differently, measure it using other parameters, and move your mind from the tangible object of shells hitting a target to the intangible concept of firepower more generally.

Scott shows human marathon times steadily decreasing. Until we genetically change humans into something very different, that trend cannot continue. What does a person with a one-second mile look like? And then your parameters have changed: it’s no longer humans improving on that line, it is super-humans. Inventing our way out.

The real world has real constraints. Scale matters. Any physical exponential eventually hits negative feedback. Scott’s explanation takes this to heart in some ways. We really don’t know the limit to our exponential progress in the physical world, but those constraints are requiring us to invest exponentially more people. And the number of people is a real constraint. Here’s Scott:

If nothing else stops us, then at some point, 100% (or the highest plausible amount) of the human population will be researchers, we can only increase as fast as population growth, and then the scientific enterprise collapses.

Well, I’m not willing to admit that yet. Our next revolutions might be in computer-human cooperation, or bio-engineering, or, maybe, we meet altruistic aliens who give us an unforeseen jump. Or maybe all it takes is an adaptive culture. We have gains not only from our increasing individual knowledge, but from that knowledge embedded in the network of ideas and relationships between people.

6. Growth of Combinatorial Innovation.

Michael Polanyi famously argued that there is explicit and tacit knowledge, and that tacit knowledge is knowledge we hold but cannot fully articulate.

Tacit knowledge isn’t just the knowledge of our motor skills — we can’t explain how to ride a bicycle, we have to do it — it is also knowledge about what conjectures might have some promise. Just as importantly, it is the knowledge contained in networks like the production and distribution system based on the price mechanism.

I believe that when these human networks grow, the tacit knowledge built into them grows exponentially. More of our knowledge and growth will be held tacitly in the system rather than explicitly in people’s minds or textbooks, and so much of that growth will be unmeasurable except in hindsight. Here’s one way to see why.

When the number of network nodes grows linearly, the number of pairwise interconnections grows quadratically, and the number of possible combinations of nodes grows exponentially. Metcalfe’s Law states that the number of unique connections in a telecommunications network is proportional to the square of the number of connected users.
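
As a quick illustration in Python (the node counts are arbitrary):

    from math import comb

    # Pairwise links (the Metcalfe's Law count) versus the number of possible
    # multi-node combinations, as the network grows linearly in nodes.
    for n in (10, 20, 40, 80):
        pairs = comb(n, 2)        # unique connections: n(n-1)/2, quadratic growth
        groups = 2 ** n - n - 1   # subsets of two or more nodes, exponential growth
        print(f"nodes={n:3d}  pairwise links={pairs:5d}  possible combinations={groups:.2e}")

The pairwise count is what Metcalfe’s Law captures; the combination count is closer in spirit to the space of re-combinations discussed next.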

Even if the early guys had access to the low-hanging fruit, we today have moved out in all sorts of lateral directions, giving us access to an incredible number of different re-combinations that can yield new and unheard of fruit varieties.

Researchers today have extended specialized knowledge into all sorts of domains, many of which do not seem to interact. We have different physical models from elementary particles to human biology to cosmology. Certainly the scales are connected in some way, and we now have so many more opportunities to find novel connections that may revolutionize technology.

It’s like Hal Varian’s concept of combinatorial innovation:

… if you look historically, you’ll find periods in history where there would be the availability of different component parts that innovators could combine or recombine to create new inventions. In the 1800s, it was interchangeable parts. In the 1920s, it was electronics. In the 1970s, it was integrated circuits.

We often see the same thing going on in science. For example, when Claude Shannon was working on the differential analyzer at MIT, he noticed an analogy between switching circuits and the Boolean logic from his mathematics studies. That cross-pollination gave us his landmark work on switching circuits and, later at Bell Labs, information theory, in which Shannon not only asked all the important questions but answered them.

With the growth of our research network, not just specialization but generalization is important, so that these cross-pollinations can occur. And there are exponentially more of them from which we can build unique, never-before-thought-of conjectures to test.

7. Objective uncertainty.

A lot of the discussion has tiptoed around the problem of framing what objective we are measuring.

This uncertainty about which objective to measure is pervasive. For example, increasing train speed may not matter when we’ve created substitutes like the airplane. Or when we’ve created complements like WiFi, so I can watch a movie on the train. Or the train is quieter now, with better seats.

This brings up the point that whenever you make these measurements, you are measuring either some technical parameter, like transistor density; some performance parameter, like computational speed per dollar; or the hardest thing of all, human progress across a broad range of considerations. Often, like the man looking for his keys under the light, we measure the things we have data on. Usually technical is the easiest, then performance, then contribution to humanity.

When we try to pick a single objective, we ignore the constant change going on across a massive number of competing and overlapping technologies, ideas, and values. And these new parameters, growing every day, have real impact on our lives.

For example, if aircraft speed is your measure, then the introduction of stealth is a real improvement not captured by that measure. If you tried to measure stealth, you’d have found an incredible leap, basically from nothing. And if you tried to fold that into aircraft speed, you wouldn’t know how to weight it. They are incommensurable.

And then you recognize that today’s systems have a huge number of parameters along which we can measure progress in some way, and a lot of progress comes from expanding the set of relevant parameters themselves.

The more broadly you define your objective, like computation or transportation speed, the more ways the objective can be satisfied, where each solution has a large number of other relevant attributes that must be traded off. The more narrowly you specify your objective, like transistor density or diesel locomotive speed, the more quickly it becomes irrelevant as technology moves on to different paradigms.

Conclusion.

I’ve given a rhetorical criticism of Scott Alexander’s view that a constant rate of scientific progress requires exponentially increasing numbers of researchers, and that this is primarily a result of the low-hanging fruit having already been picked. I counter that we cannot measure the general progress of technology as though it were some external object. We are embedded within it, and the relevant parameters of progress are continually increasing and changing. Culture matters to growth potential, and there is no reason to believe the broader system of technology development is bounded. So these are my beliefs, though I am in no way married to them.

I’ll close by commenting on the fact that technology seems to have progressed no matter what. There is a literature on multiple discovery, even of revolutionary ideas. The discoveries seem almost inevitable. That inevitability, I think, comes from the tacit knowledge of networks which, to be reliable, must contain redundancy, competition, and conflicting perspectives. We’ve been lucky to have had a relatively stable liberal social order to maintain those networks.

3 Comments

  1. My intuition is that the rate of making scientific discoveries is not slowing down. Paul Romer makes the interesting point that the “growth rate of consumption per person is proportional to the growth rate of the overall stock of ideas.” Because ideas are not rivalrous, the stock of knowledge constantly grows and new combinations of knowledge are always being made; this growth is proportional to the growth of the human population. See: https://web.stanford.edu/~chadj/RomerNobel.pdf

    • In the history of science, progress is generally seen as a series of overlapping S-curves. You get to travel at 12 miles an hour on a horse and then hit a point of diminishing returns. You shift to an internal combustion engine and get to 80 miles per hour. Perhaps you move to telecommunications and have near-real-time images displayed as you talk to people around the globe. I think we are having a problem of moving from one technology curve to another, but this is not a decline of science. The reduction in TFP from, say, 3% to 0.4% is the result (in my view) of growing institutional sclerosis (see The Rise and Fall of American Growth by Gordon). This process was outlined by Mancur Olson in the 1980s. See: https://pdfs.semanticscholar.org/66c6/25b4f03591f5d3ba0d52d95b67c2cf50cd73.pdf More research is needed.

      • Yes, I tend to see both comments as related to the problem of measurement, including the subjective choice of what to measure.

        In the first case, more ideas/innovation leads to higher consumption. But with Total Factor Productivity (TFP) growth seeming to decline, that would imply that our rate of creating new ideas has also slowed down. I don’t think it is too controversial to say that measuring a complex system like a national economy with GDP and its constituents, like TFP, is a dicey proposition. As Arnold Kling says, GDP assumes a single constant-quality good being made by a “GDP factory.”

        In the second case, when we have overlapping S-curves, they often approach the problem differently (e.g., horse and engine). Just measuring a single objective, “ground speed per constant dollar,” perhaps, is insufficient. One of the more confounding effects is that new technologies don’t just solve old problems along the same parameters more efficiently; they address new problems and expand the set of relevant parameters that could be measured.

        (Even if the new parameters could be measured accurately, they cannot be put into some index form with the old parameters because of (1) the incommensurability problem and (2) the “zero to one” issue, where a newly added parameter implies an infinite quality improvement on that attribute.)
