How open is the government to creative destruction?

Arthur Diamond had a great conversation with Russ Roberts about his new book, Openness to Creative Destruction. In the episode, Diamond tells the story of Sidney Farber, a researcher seeking to cure childhood leukemia. His first attempt used folic acid, which turned out to accelerate the disease and kill the children faster. After being disgraced and nearly shut down, he then tried a chemical antagonist of folic acid, which did prolong children’s lives, though not by much.

This success was picked up by another group, led by Emil Freireich, which combined compounds to push the leukemia treatment further. As Diamond explained:

What happened was they actually started to cure childhood leukemia in some cases. Not just remission but cure it. How they did it was these weekly meetings and making adjustments, seeing how things were going and trying nimble trial-and-error experiments.

These experiments were picked up by another researcher, Vincent DeVita, who applied them with some success to Hodgkin’s lymphoma. DeVita noticed, however, that prevailing research methodologies excluded such trial-and-error experimentation, and he argued against those methods. The rationalist approach, by contrast, demanded complete knowledge of means and ends before any test began.

The rationalist approach made its way into regulation, notably at the Food and Drug Administration. These roadblocks were encountered by Steven Rosenberg, a pioneer of immunotherapy, who said that the required methods slowed his research by decades.

Rosenberg wrote a book arguing that the FDA’s drug-approval requirements block exactly the kind of experimentation that had been succeeding. A research specification must be approved in advance and cannot change throughout the trial, and approval has to be granted by layers of bureaucracy before work can start. Here is Diamond again:

He [Rosenberg] said not only did it slow him down, but it was extremely demoralizing. Just the science of what he was trying to do was so hard, that’s enough to discourage a person, but in addition he had to go down and fight this bureaucracy, and he stuck with it, but you have to wonder how many other people would just give up at some point.

The story of innovation in healthcare has strong parallels to defense acquisition. Both deal with human lives, so there is trepidation about experimentation. Any new concept must be vetted by experts and bureaucracies, and so it must conform to their pre-existing biases. Once a project is approved, no further updates based on new information are possible. The original specification is fixed to allow for measurement, though that comes at the expense of correcting errors or exploiting opportunities.

All these difficulties discourage creative researchers. Only the most dogged will persist, and even they will likely have to accept compromises that could prove disastrous (though the disaster cannot be known objectively, because it is a counterfactual). We never see the people who might have been saved by a drug that was never invented, nor the weapon systems that could have been developed but were not.

Diamond makes an interesting argument. The regulations that stamp out trial-and-error experimentation not only favor the large incumbents; they are also the cause of small gains at large cost.

And the incumbents, or so I argue in the rest of the book, are not as likely to do the breakthrough innovations, for a variety of reasons. So, if the only people who can make it through the screen of the FDA’s methodology are the big incumbents, that’s going to result in a lot of small, unimportant-in-general, innovations.

It is to this that Diamond attributes the apparent slowdown in the sciences. It isn’t that we have picked all the low-hanging fruit. If trial-and-error experimentation were allowed, genuine discovery of new phenomena or technologies could make whole new groves of fruit appear low-hanging.

People sometimes think that the reason we’re getting these little innovations is because we picked all the low-hanging fruit or whatever. I think that’s not the reason here. It seems like the incentives are set up so that’s what we’re developing. That doesn’t mean that there aren’t something approximating magic bullets out there to be found. It’s just we’ve set up the incentive structure so that’s not where it makes sense for them to invest their resources.

The incentives that move industry away from trial-and-error learning seem to be the regulations that force new efforts through the bureaucracy. In essence, they require explicit knowledge of the outcomes before significant effort starts. That means giving up on truly novel discoveries. It means we must be able to predict the results, which means relying only on what is already known today. But that invites surprise failure, because if we are limited to what is already known today, and to what sounds good to a slew of laymen, then the result is most likely commonplace.
