Chapter 6. Superseding Institutions in Science and Medicine

Anthony Di Franco

Recently, on a mailing list about open source medical devices, Damon Muma asked the following question: "I am feeling the urge to contribute to this movement, but I’m a bit lost on what has been done/tried/planned or what would be effective and useful. Worried that efforts would be duplicating others or not usefully focused. I’ve had random friends who aren’t even diabetic but into coding offer to help but wasn’t sure what to point them at."

My response began with some fairly concrete observations based on my experiences as a type I diabetic, but it grew to encompass more fundamental issues I know about as a person with an academic background in systems/control theory who works with machine learning techniques as a programmer. I wanted to share those observations here as a way to provoke thought on these questions and broaden the discussion.

Security in Medical Devices

In 2011, the tragically late Barnaby Jack invited me to share the stage with him a couple of times to help him present his insulin pump hacks, in which he demonstrated the near-total lack of security in insulin pumps, permitting remote control of all functions with no prior knowledge of an individual pump. He moved on to similar work with pacemakers. I was very much looking forward to his work putting pressure on manufacturers to secure their products, and more broadly to improve their quality standards and innovate a bit in their products’ function to produce meaningful progress in patient standards of care (none of which they seem to have much interest in now, certainly not considering the enormous profits they reap from selling the products).

Closed-loop Control in Insulin Pumps

A few months ago, Medtronic began marketing an incrementally improved model of its pump as an artificial pancreas. Until now, in every usage I’m aware of, "artificial pancreas" has meant automatic dosing based mainly on continuous sensor readings. This new pump has the same manual dosing as the old models but adds a feature that cuts off insulin if the patient is trending into hypoglycemia. This is no doubt a useful feature for many diabetics, and I am happy to see it come to market, but suggesting that it has much to do with closed-loop blood glucose control is mere hype, an insult to the diabetic community’s intelligence, and, I worry, symptomatic of the broad lack of innovation in megacorporatized medicine.

I suspect this even more because, from my academic background in control theory, I find the control algorithms that appear in the public research I’ve seen on closed-loop blood glucose control to be some of the oldest and least powerful and stable in the field of control. The proportional-integral-derivative (PID) method still appears prominently in the research, even though in the broader field of control, much more powerful and stable approaches such as model-predictive control have been in routine use for decades, and began to appear in closed-loop blood glucose control research during the last 15 years. PID control is distinguished mainly by having origins in the 1890s and being one of the only techniques possible before the invention of practical computers because it fit into the limitations of pneumatic-powered mechanical governor technology. It appears in the early chapters of many textbooks and remains in widespread use in devices such as thermostats—used to solve simple, one-dimensional problems—but it serves mainly as an example of an inadequate approach when there are any subtleties in the problem to overcome. Closed-loop blood glucose control involves many of the most notable of these subtleties, including nonlinearity, long time lags, and very noisy and biased feedback, and would likely benefit from being formulated in a multiple-input, multiple-state, multiple-output way in order to capture the relevant information and reflect the relevant complexities, instead of the one-dimensional formulation that PID is suited to. (For example, a multiple-input formulation would be necessary to use amylin, insulin, and glucagon infusions together to more fully mimic pancreatic function and prevent blood glucose swings at the beginning and end of the insulin dose response.)
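
To make the contrast concrete, here is a minimal sketch of a discrete PID controller in Python. It is not taken from any pump or published artificial-pancreas design; the class, the gains, the setpoint, and the sampling interval are all invented for illustration. The point is only how little such a controller knows about the system it acts on.

    # A minimal discrete PID controller: one error signal in, one actuation out,
    # three hand-tuned gains, and no model of insulin action, meal absorption,
    # sensor lag, or noise. All names and numbers are illustrative only.
    class PID:
        def __init__(self, kp, ki, kd, setpoint):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.setpoint = setpoint
            self.integral = 0.0
            self.prev_error = None

        def step(self, measurement, dt):
            """Return a control action from the latest (possibly noisy) measurement."""
            error = measurement - self.setpoint
            self.integral += error * dt
            derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
            self.prev_error = error
            # The controller reacts only to the current error and its history; it cannot
            # anticipate a meal or account for insulin still acting from earlier doses.
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Hypothetical usage: regulate toward 100 mg/dL, sampling every 5 minutes.
    controller = PID(kp=0.02, ki=0.001, kd=0.05, setpoint=100.0)
    insulin_rate = max(0.0, controller.step(measurement=180.0, dt=5.0))

A model-predictive controller, by contrast, carries an explicit model of insulin and meal dynamics, simulates it forward over a horizon, and optimizes the dosing plan against it at every step, which is how it copes with the lags, constraints, and multiple inputs that PID can only react to after the fact.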

It seems, from the outside, like a poorly supported effort organized more around pursuing minimally viable increments to existing products than around taking the technical problem and the goal of producing excellent outcomes for patients seriously on their own terms. There must be many reasons for this, and I speculate that alongside the petty bureaucratic ones, there are even ones based in concerns I would agree are objectively valid. But it is a problem in itself that this all happens out of sight of the main stakeholders in the results: the diabetic patients—that all that reaches the patients is preposterous hype. And in a context of institutionalized science and research and development, it is hard to imagine any significant change in this situation. Opacity and inertia are the default, and in many senses the main objectives, of this form of organization. Only token concessions can be made against them within the paradigm.

The community of diabetics could work around this apparent bottleneck and aggregate sensor and dose data from patients into an open dataset with which to build models and do offline experiments with proposed algorithms, which is an established methodology in the field. It could then build an ecosystem of open algorithms and hold contests for their improvement and selection. This would become a valuable resource for both institutional and citizen research efforts, and an important resource for checking scientific validity by reproducing results in a field hindered by datasets being mostly proprietary.
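
A sketch of what such offline experimentation might look like follows, assuming a hypothetical aggregated dataset in CSV form. The record fields, the file name, the in-range band, and the toy threshold algorithm are all my own assumptions, not an existing community standard; the point is that replaying logged data through candidate algorithms requires nothing more exotic than this.

    import csv
    from dataclasses import dataclass

    @dataclass
    class Record:
        """One hypothetical row of an aggregated open dataset (fields are assumptions)."""
        timestamp: float          # minutes since the start of the trace
        glucose: float            # CGM reading, mg/dL
        insulin_delivered: float  # units actually delivered in this interval
        carbs: float              # grams announced for this interval, if any

    def load_trace(path):
        with open(path) as f:
            return [Record(float(r["timestamp"]), float(r["glucose"]),
                           float(r["insulin_delivered"]), float(r["carbs"]))
                    for r in csv.DictReader(f)]

    def replay(trace, algorithm):
        """Feed a logged trace to a candidate dosing algorithm and score the trace.

        This is replay, not simulation: the glucose values are historical, so the
        result shows how the algorithm would have reacted, not how the patient
        would have responded to different dosing. Simulation needs a fitted model.
        """
        proposals = [algorithm(record) for record in trace]
        in_range = sum(1 for r in trace if 70.0 <= r.glucose <= 180.0)
        return proposals, in_range / len(trace)

    # A trivial threshold rule standing in for a real candidate algorithm.
    def naive_algorithm(record):
        return 1.0 if record.glucose > 180.0 else 0.0

    # proposals, time_in_range = replay(load_trace("open_dataset.csv"), naive_algorithm)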

Open Source Treatments

Open sourcing everything required to treat diabetes is the most ambitious and difficult goal. However, it synergizes well with the broader DIYbio movement for two reasons. First, because recombinant insulin was the first major commercial success of biotechnology and set the pattern for future development of biologics. Second, because the tools involved are fundamental to biology and medicine and overlap well with the toolset needed for all serious DIYbio research and other community goals: production and purification of biologics, infusion pumps, and in-vivo sensing.

Figure 6-1. The effects of untreated diabetes include blindness, impotence, nerve damage and necrosis in the extremities, kidney failure, cardiovascular disease, coma, and death—all good reasons to assure the broad and consistent availability of insulin with decentralized production. Illustration by Zach Weiner of "Saturday Morning Breakfast Cereal."

Open sourcing treatments is also important because pharmaceuticals in general and insulin pumping specifically suffer from very perverse economic incentives that, at least in part, favor keeping people on the treatments with the most disposable or consumable supplies possible, at the highest price relative to the cost of manufacturing. Open source efforts could develop incentives more attuned to encouraging innovations that improve patient outcomes than to encouraging sitting on profits collected from keeping patients addicted to consumable supplies. Some ideas:

  • Jet injectors are an established insulin-delivery technology that need not, and usually does not, rely on consumable supplies.
  • Radio-frequency spectroscopy of body fluids can measure blood glucose continuously without directly requiring consumable supplies, and it applies more broadly to sensing concentrations of other body-fluid components of interest to medicine in general and to quantified-selfers in particular.

Neither jet injection nor radio-frequency spectroscopy requires any complex hardware or exotic materials to implement, so they would make good candidates for open source experimentation. Tim Cannon’s recent implantation of a smartphone-sized temperature sensor in his arm also shows that DIY tinkering need not remain strictly noninvasive.

Combining all these components into a working system would also result in a platform useful for taking the hypothetical open source closed-loop algorithm research to the point of real tests.

Economics of Effective Health Care and Scientific Research

The financial crisis of 2008 opened discussion of many economic topics of interest to medicine, but I have seen little public discussion of health care take advantage of the insights that arose in that broader debate.

One issue is the kinds of incentives mentioned earlier in relation to consumable supplies. They need to be realigned to provide economic benefits to caregivers based on the health of the patient (the true goal) rather than on the amount or cost of services provided (which clearly should be minimized as long as the goal of patient health is achieved, both to reduce direct cost and to reduce the risk of iatrogenic harm). In his 2008 essay in Harper’s Magazine entitled "Our Phony Economy," Jonathan Rowe described the problem well:

Current modes of economic measurement focus almost entirely on means…The medical system is the same. The aim should be healthy people, not the sale of more medical services and drugs. Now, however, we assess the economic contribution of the medical system on the basis of treatments rather than results. Economists see nothing wrong with this. They see no problem that the medical system is expected to produce 30 to 40 percent of new jobs over the next thirty years. "We have to spend our money on something," shrugged a Stanford economist to the New York Times. This is more insanity. Next we will be hearing about "disease-led recovery." To stimulate the economy we will have to encourage people to be sick so that the economy can be well.

The grim reality may be that we are already many decades deep into doing this—the perverse incentives have prevailed for at least that long.

Perhaps this can be addressed by aligning the interests of patients and caregivers, paying doctors only when their patients are well. This is sometimes said to have been the custom in premodern China and precolonial India. And perhaps it could be done under the structure of lodge practice, a common and successful arrangement for the provision of health care in America before the New Deal.[6]

As for the research itself, in his talk entitled "On Bureaucratic Technologies & The Future as Dreamtime," David Graeber put the problem this way: "Even though we’ve been pouring our research money into medicine we still don’t have a cure for cancer. But we do have Ritalin, and Zoloft, and Prozac, all these things that basically make people not go completely insane despite the intense work regime they are now under." And after noting the decline in the pace of scientific discoveries after the US succeeded the UK as global hegemon and bureaucratic institutions and corporations began to dominate as a result, he stated:

If you look at where did a lot of these discoveries actually come from in the UK, they didn’t come from institutions. A lot of them came from things like rural vicars. You know, sort of eccentrics of society, they put them somewhere where they only had to do something once a week, and they would, like, study the insect life, or work on their strange theories of whatever it might be, and 90% of them were completely crazy, but 10%, that’s where the patents came out of, that’s where discoveries largely came out of.

Graeber then quoted astrophysicist Jonathan Katz’s essay entitled, "Don’t Become a Scientist," where Katz said, "It is proverbial that original ideas are the kiss of death for a proposal because they have not yet been proved to work." Graeber continued:

If you want to actually come up with an unexpected breakthrough…you get a bunch of creative people, you give them whatever they want, whatever resources they need, and you leave them alone for a while. After a while you come back and most of them aren’t going to come up with anything but one or two will come up with something you would have never imagined. If you want to make absolutely sure that innovative breakthroughs never happen, what you do is you say, OK, none of you guys get any resources at all unless you spend most of your time competing with one another to convince me that you already know what you’re going to discover.

The case for putting serious money into citizen science turns out to be quite pragmatic.

Indeed, the modern research university was largely an innovation of the early modern French empire, adopted by Frederick William III’s Prussia and further refined into a tool for centralizing control of the Prussian intellectual culture. The totalitarian Prussian regime needed such a tool to serve its agenda of maintaining an obedient and rigidly regimented army to hire out to other warring powers as mercenaries or, that failing, to threaten them directly. Another contemporary innovation of that regime was to require all women to register their menses with the police so that the population’s fertility could be optimized, a fact noted by John Taylor Gatto in his Underground History of American Education.[7] The Americans who imported the research university system to America looked explicitly to such precedents as inspirational successes.[8] With our knowledge of what disastrous results these institutions ultimately contributed to in the German state that succeeded Prussia, we should be able to set a better course for ourselves than the people who committed these errors.

Decision Making Under Uncertainty and the Foundations of the Applied Scientific Method

It turns out that the statistical methods used to evaluate scientific research were born out of many of the same authoritarian, high-modern impulses (to use the terminology of Yale anthropologist James C. Scott) that drove the bureaucratization and corporatization of the economy. The two people who contributed the most to the currently dominant way of statistically testing hypotheses, Ronald Fisher and Karl Pearson, were ardent eugenicists, and though it is only a circumstantial observation, it is worth noting that eugenics’ pretenses of rationality are based entirely on selective and specious interpretations of data, on grossly oversimplified and unjustifiably optimistic beliefs about the effects of destructive interventions, and on mistaking correlation for causation as a matter of course. Even so, Fisher warned that his test of significance should not be used to come to scientific conclusions, but rather as a tool to guide the intuition and as a possible first step before formulating a more rigorous analysis specific to the problem at hand and more sensitive to ambiguities inherent in real situations. Fisher feuded with the proponents of a rival methodology, Jerzy Neyman and Egon Pearson (Karl’s son), about the correct way to draw statistical inferences until the end of his life in 1962. A hybrid of the two methods that none of them intended became the version generally taught and used. It came under criticism from Bayesian and other perspectives, and the controversy over this methodological question and many others continues in journals in several fields alongside the results supposedly vetted by the controversial methods.[9]

More recently, in 2005, John Ioannidis of Stanford surveyed the situation and explained "Why Most Published Research Findings Are False" in his publication of the same name, considering medicine specifically and commenting that, "for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias." Starting around 2010, Stephanie Seneff of MIT found that statin treatment for high cholesterol may be counterproductive overall and may carry extreme iatrogenic risks, and she identified fallacious reasoning and failures to reproduce in many key research findings in favor of treatment.[10] However, Lipitor, a statin, "is the world’s all-time biggest selling prescription medicine with cumulative sales topping $130 billion," and about 20 million Americans are taking statins now.[11] In 2012, Samuel Arbesman studied the turnover in scientific beliefs and cited research by Poynard et al. finding that the half-life of a fact in the fields of cirrhosis and hepatitis is about 45 years, and therefore about half of the ostensible facts a typical adult could have supposedly known about those fields during the time studied were false.[12] In a 2013 draft, Spyros Makridakis of INSEAD found no benefit, but only risk, in most cases of treatment of hypertension according to prevailing medical guidelines, and no evident overall benefit to life expectancy in treatment across the whole population.[13]
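
The arithmetic behind Ioannidis’s argument is simple enough to carry around as a few lines of code. The sketch below computes the probability that a "significant" finding is true from the pre-study odds that hypotheses in a field are true, the statistical power, and the significance threshold; the particular numbers are illustrative, not drawn from any specific field, and the calculation leaves out the bias and multiple competing teams that the paper goes on to add.

    def positive_predictive_value(prior_odds, power, alpha):
        """Probability that a statistically significant finding is true.

        prior_odds: ratio of true to false hypotheses tested in the field
        power:      probability of detecting a true effect (1 - beta)
        alpha:      significance threshold (false-positive rate)
        """
        true_positives = prior_odds * power
        false_positives = alpha
        return true_positives / (true_positives + false_positives)

    # A field testing mostly long-shot hypotheses with typical power and alpha:
    print(positive_predictive_value(prior_odds=0.10, power=0.8, alpha=0.05))  # ~0.62
    print(positive_predictive_value(prior_odds=0.05, power=0.5, alpha=0.05))  # ~0.33

Adding even modest analytic bias or several teams racing to publish on the same question drives these numbers lower still, which is how "most findings are false" becomes plausible without accusing anyone of outright fraud.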

The litany continues in the introduction to a recent issue of The Economist dedicated to "How Science Goes Wrong":[14]

A rule of thumb among biotechnology venture-capitalists is that half of published research cannot be replicated. Even that may be optimistic. Last year researchers at one biotech firm, Amgen, found they could reproduce just six of 53 “landmark” studies in cancer research. Earlier, a group at Bayer, a drug company, managed to repeat just a quarter of 67 similarly important papers. A leading computer scientist frets that three-quarters of papers in his subfield are bunk. In 2000-10 roughly 80,000 patients took part in clinical trials based on research that was later retracted because of mistakes or improprieties.

Meanwhile, Bart Kosko compares the dominant use of probability models that accommodate easy analytical manipulation, and more broadly, reliance on probability theory itself rather than broader classes of models of uncertainty, to the proverbial drunk who searches beneath the light of the street lamp for lost keys rather than where they were last seen, off in the dark.[15]

From all this, I conclude that we have nothing like a reliable mechanism for drawing useful conclusions from data in any but the simplest of situations, certainly not in those encountered in biology or medicine, and that many mistakes are hidden and frauds are perpetrated by use of distracting and impressive but meaningless rituals. I expect that someday, historians of science, and perhaps any student of the history of our times, will be aghast at all the waste and suffering we have inflicted upon ourselves in this way. The question of what mechanism should be used, if even one should be used, to generate useful scientific results merits much more attention than the negligible amount it currently gets. (But it has a fair amount of mine already.)

Nassim Taleb offers the core of a respectable answer in his advocacy of what he calls, in the terms of his pragmatic philosophy, convex tinkering. He has built the most recent phase of his career on promoting that pragmatism, and the phase before it on predicting, and profiting personally from, the financial collapse of 2008, but the need for his insights might be most urgent in medicine. He stresses the limits to knowledge, drawing distinctions between the knowable and the unknowable that the most commonly applied statistical techniques fail to account for; those techniques assume away such dangers a priori and usually even count unmodeled variation toward confirmation of the hypothesis under consideration. From there, he advocates taking all feasible control over the consequences one derives from unknowable events, making them beneficial if possible and avoiding them otherwise. In research, this amounts to broad tinkering under conditions where successes can be noticed and turned to benefit but failures lead to no great loss, rather than high-overhead, high-stakes, overwrought, top-down research agendas. Science and medicine deserve to be practiced accordingly, as it turns out they consistently have been in their most successful embodiments since ancient times. As Graeber’s observations suggest, citizen science will do so where institutional science has failed.
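
As a toy illustration of the convexity Taleb has in mind, the sketch below compares a portfolio of many small, cheap experiments, each with a capped loss and a small chance of a large payoff, against a single committed program spending the same total budget. Every number in it is invented purely to show the shape of the argument, not to model any real research portfolio.

    import random

    def tinkering_portfolio(n_trials=100, cost_per_trial=1.0,
                            p_success=0.02, payoff=200.0):
        """Many cheap trials: each loses at most its small cost, a few pay off hugely."""
        total = 0.0
        for _ in range(n_trials):
            total -= cost_per_trial
            if random.random() < p_success:
                total += payoff
        return total

    def single_big_bet(budget=100.0, p_success=0.5, payoff=150.0):
        """One committed program: the whole budget rides on a single outcome."""
        return -budget + (payoff if random.random() < p_success else 0.0)

    random.seed(0)
    runs = 10_000
    print(sum(tinkering_portfolio() for _ in range(runs)) / runs)  # about +300 on average
    print(sum(single_big_bet() for _ in range(runs)) / runs)       # about -25 on average

Both strategies can lose at most the same hundred units, but the portfolio’s exposure to rare large payoffs makes its expectation positive, while the committed program carries the same worst case with none of the optionality; the asymmetry, not any forecasting skill, is what does the work.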



[6] See George Rosen’s history of lodge practice, an early milestone in the debate around access to medical care, and advocacy for the idea in our time at http://bit.ly/1cWg6Oi and http://www.freenation.org/a/f12l3.html. The US tax law category that exempts these types of organizations still exists.

[7] Gatto, John Taylor. The Underground History of American Education, New York: The Oxford Village Press, 2003. Chapter 7b.

[9] See Krantz, David H. "The Null Hypothesis Testing Controversy in Psychology," Journal of the American Statistical Association 94, no. 448 (December 1999): 1372, and the class notes prepared by R. Chris Fraley.

[10] See Seneff’s materials at her website.

[13] Makridakis, Spyros. "Hypertension: The Evidence," 2013 draft.