The relentless development of new products, practices, and ideas has transformed everyday life to a degree we scarcely could have imagined a decade ago. This transformation spans entirely new artifacts, such as scarily smart mobile phones; how we perform old tasks, such as checking in for flights and paying tolls and taxes; and even the revival of defunct practices, such as pedaling around cities on rented bicycles. We take for granted that things will continue getting better, faster, and cheaper, with hulking sport utility vehicles providing the fuel efficiency of the small cars of yesteryear.
Recent medical advances have been equally dramatic. Human immunodeficiency virus/acquired immune deficiency syndrome (HIV/AIDS) was tamed in a matter of decades after its sudden outbreak in the 1980s; cholera, leprosy, the plague, tuberculosis, and polio took millennia to cure or control. Gleevec and related drugs have nearly doubled the five-year survival rate for people with chronic myeloid leukemia. Harvoni, a recently approved drug, offers a cure for most hepatitis C patients. Minimally invasive surgery has revolutionized knee replacements, and cataracts are removed and lenses replaced without hospital stays.
But everyday experience suggests—and health statistics corroborate—that advances in health care have not had the same transformative impact on the common person’s life as, for example, information technology. Total deaths from cancer actually have increased, although survival rates for many cancers have gone up, and age-adjusted deaths from cancer have declined. Life expectancy is increasing at a snail’s pace. HIV/AIDS and hepatitis C apart, many new drugs target diseases that afflict relatively few patients. Obvious applications of information technology have not been used to improve the delivery of care: We can renew driver’s licenses online, but we can’t make medical appointments in the same way. And while technology (and technology-enabled supply chains) drives down the cost of clothes, computers, and other goods, thus holding down the overall inflation rate, health-care costs continue to rise in spite of significant public and private investments in medical research and development.
Several plausible explanations can be offered for why progress has been slow over the past few decades. The low-hanging fruit of diseases caused by a single bacterium or virus already has been plucked. The great afflictions of our time, such as cancer and aging disorders—whose causes are murky—are more intractable. Great advances in basic scientific knowledge, such as the mapping of the human genome, cannot be expected to produce breakthrough treatments immediately. Social norms about safety and privacy impede medical innovation to an exceptional degree.
This essay focuses on how limits on pluralism have slowed medical innovation. In other fields, I argue, innovation has become highly democratized and decentralized, and this multiplayer system has fostered a high degree of dynamism. In contrast, Western medicine has long relied on elite researchers with extensive, virtually identical training. Modern approaches to funding and regulating medical research have reinforced this age-old exclusionary tradition and the inherent conservatism of oligarchic governance.
HIV/AIDS stands out as an exception, I argue, in the speed and manner in which the disease was contained. Special circumstances spurred an unusually multifaceted, multiplayer effort. The outbreak of the disease was sudden and widespread, so the impetus for a quick response was strong, with no entrenched paradigm standing in the way of try-anything solutions. Perhaps more importantly, patients included many well-educated and well-placed individuals who forced regulators and other procedure-bound organizations to deviate from their routines. Such potent patient coalitions are unlikely to form again.
Yet exceptions also can serve as beacons, potentially making the success against HIV/AIDS more than just a one-off. Some of the practices and attitudes it catalyzed—such as the willingness of the Food and Drug Administration (FDA) to approve drug “cocktails,” and the skeptical assertiveness of patients—might persist. These and other trends could make medical innovation more open and fast-paced.
I will discuss:
- The nature and role of multiplayer innovation.
- The degree to which advances against HIV/AIDS fit the multiplayer pattern.
- Barriers to ongoing multiplayer medical innovation that make HIV/AIDS an outlier.
- Trends and policy changes that could make innovation in medicine and health care more inclusive.
Inclusive, Multiplayer Innovation
Before the industrial revolution, highly talented, ambitious individuals of undistinguished lineage could shine serving God or their sovereigns as priests, soldiers, or colonizers of distant lands. The first and second industrial revolutions gave exceptionally creative and enterprising individuals who lacked pedigree or formal qualifications the opportunity to accumulate wealth and power by serving consumers. However, while the revolutions were meritocratic, they were also highly selective. Geniuses such as Henry Ford had ample scope to revolutionize transportation and build empires. But popular theories of Scientific Management, or Taylorism, sought to reduce rank-and-file employees to automatons or clerks. At the Ford Motor Company, assembly line workers were well paid, but they also were worked hard and told what to do by a small cadre of industrial engineers.
Eventually, however, big business gave up on extreme forms of centralization. It was demotivating to tell workers exactly what to do—paying high wages did not buy Ford great loyalty. And as Hayek would have predicted, it was wasteful: Workers who had knowledge of specific circumstances weren’t empowered to make adjustments or undertake new initiatives. Therefore, organizations adopted what Peters and Waterman call “loose-tight” controls, giving frontline workers scope to exercise creativity.
Innovation in general became a broad-based multiplayer game during the twentieth century (Bhidé 2008). Neither the personal computer nor the Internet, for instance, has a solitary Alexander Graham Bell. Innumerable entrepreneurs, executives of large companies, standard-setting institutions, scientists, programmers, designers, and even investment bankers, lawyers, and politicians revolutionized computing, communication, and commerce. While some players and their contributions are more visible and celebrated than others, the accretion of small advances is as crucial as the conceptual or scientific breakthroughs. Rosenberg (1982, p. 62), for instance, highlights the importance of the “slow and often invisible accretion of individually small improvements in innovations” that are ignored because of “a preoccupation with what is technologically spectacular.”
The interlacing of small and large advances across different innovators and across time isn’t centrally directed or planned. Although some players participate through organized teams and hierarchies, coordination between teams and hierarchies is ad hoc, with interdisciplinary advances emerging without prior agreement or conscious design. Similarly, although individuals and organizations do plan ahead, because they have limited knowledge of what others are doing or of what will work, the development of new products and technologies involves considerable improvisation, and trial and error.
While entirely indigenous innovation always has been rare, international collaboration and rivalry now play an unprecedented role in developing the science, technologies, design principles, and business concepts undergirding new products and processes. And because ideas now travel so quickly and easily—and intense competition forces producers to cede more than ninety percent of the value of innovations to their consumers, according to Nordhaus’s (2005) estimates—where innovations are widely and effectively used matters more than where the underlying ideas originate.
Widespread, effective use is not automatic, however—the democratization of what I have called “venturesome consumption” also now plays a critical role. Unlike rich hobbyists who bought early automobiles, millions of the not-so-well-to-do scoop up products, such as the Apple iPad, from the get-go. But buying a new product involves a leap of faith: We cannot know in advance whether it will be worth the price. Similarly, using new products effectively often requires resourceful effort. Modern artifacts are rich in features and are complex. Few products, iPads and iPods included, “just work” out of the box; we have to learn about their quirks and nonobvious attributes. Affordable products also are standardized for mass production, and have to be hacked and tweaked by consumers to suit their individual needs.
Greater inclusivity in the twentieth century improved our capacity to develop, make, sell, and effectively use new products and services, even if pessimists say that innovation peaked with the second industrial revolution (see Appendix A). Inventions between 1850 and 1900 include the monorail, telephone, microphone, cash register, phonograph, incandescent lamp, electric train, steam turbine, gasoline engine, and streetcar, as well as dynamite, movies, motorcycles, linotype printing, automobiles, refrigerators, concrete and steel construction, pneumatic tires, aspirin, and x-rays. These may well overshadow inventions credited to the entire twentieth century. But in the late nineteenth and early twentieth centuries, extraordinary products and technologies typically were developed by a few exceptional individuals, and sold to a few wealthy buyers. Alexander Graham Bell invented the telephone with one assistant. Automobile pioneers were one- or two-man shows—Karl Benz and Gottlieb Daimler in Germany, Armand Peugeot in France, and the Duryea brothers of Springfield, Massachusetts. But small outfits couldn’t develop products for mass consumption. Early automobiles were expensive contraptions that couldn’t be used for day-to-day transportation because they broke down frequently, and lacked a supporting network of service stations and paved roads. One or two brilliant inventors couldn’t solve these problems on their own. Rapid, widespread adoption also was hampered by a paucity of venturesome consumption.
Multiplayer innovation continues to deliver the goods in the twenty-first century, though not all is hunky-dory. In my 2015 paper, I argued that the decline in promising, informally financed businesses is an alarming portent for inclusive innovation. And as we will see later in this essay, the problem of limited inclusivity is especially acute in health care, a sector that accounts for a large and growing portion of the economy.
Rolling Back HIV/AIDS: A Multiplayer Success
The story of how HIV/AIDS rapidly flared across the globe and then was contained is a tale of our times. Unlike the diseases behind earlier pandemics, HIV does not itself kill. Rather, HIV infection induces AIDS, which leaves patients vulnerable to other life-threatening diseases.
The disease isn’t age-old. It is thought to have jumped from apes to humans in the 1920s. After the first recorded fatalities in the early 1980s in North America and Europe, infections and deaths grew at fearsome rates. The pandemic wasn’t airborne, waterborne, or vector borne. Rather, the virus spread through bodily fluids transferred in distinctively twentieth-century ways. The retreat of colonialism brought Haitian doctors to Central Africa, and some of them then carried the infection west. Artifacts of the twentieth century—notably syringes, blood banks, and blood products—and growing drug use, anonymous sex, and international travel after the 1970s helped infect diverse groups, including heroin addicts, gays, bisexuals, and hemophiliacs.
But in contrast to older epidemics that took centuries or even millennia to control, the HIV/AIDS epidemic was contained, at least in the West, in just a few decades. In the mid-1990s, the number one cause of death for individuals ages twenty-five to forty-four in the United States was HIV/AIDS-related illnesses. Now, although there is no cure, drugs have become so effective that GlaxoSmithKline foresees that, in about a decade, its AIDS treatment unit, now Glaxo’s most profitable business unit, may no longer have a purpose. According to Glaxo’s chief strategy officer, “The industry has done a fantastic job of taking the fear of the late ’80s, and the death sentence, to one tablet a day.” Glaxo’s and Gilead Sciences’ drugs leave little room for improvement, short of a cure (Staley 2015).
The advances that rolled back HIV/AIDS had a lot in common with the multiplayer game whose features I have summarized. They drew—to a nearly unprecedented degree in medicine—on the contributions of a diverse cast. These included researchers working in government-funded and pharmaceutical-company laboratories, physicians working in hospitals, public health officials, providers of private capital and research grants, and community organizations. The venturesome role of at-risk individuals and patients in mobilizing a multifaceted, multiplayer rollback of HIV/AIDS was pivotal and unprecedented. They were determined, articulate, resourceful, well-educated, and affluent. They formed advocacy groups, and recruited Hollywood stars to lobby for funding research and treatment. When “cocktails” showed therapeutic promise—but the FDA refused to approve their use in the U.S.—advocates discovered a legal loophole for importing cocktails through “buyers clubs.” Eventually, they persuaded the FDA to drop its traditional opposition to cocktails.
The multiplayer character of the effort against HIV/AIDS was evident from the very outset when astute clinical observation, prior advances in immunology, and technologies that enabled rapid sharing of information helped public health officials recognize a pattern across a relatively small number of disparate and geographically dispersed cases. The pattern provided the basis for naming and categorizing the disease before much had been learned about its underlying pathologies.
Similarly, even as a consensus about its name emerged, many actors undertook multifaceted efforts that drew on different knowledge and capabilities to control transmission, test for infections and treat patients.
Initiatives to control transmission focused on modifying behavior and practices, rather than on scientific breakthroughs or technological innovations. Transmission through unprotected sex was controlled by education about the risks, by new rules (such as the closing of bath houses), and by condom-distribution programs. Infection through contaminated syringes was attacked by instituting procedures to protect doctors and nurses from accidental needle sticks, and by distributing clean needles to heroin addicts. Transmission through transfusions of contaminated blood was controlled first by screening donors and later by treating the blood.
Researchers developed tests for detecting HIV infections by building on the paradigms, knowledge, and techniques of virology. Testing was crucial because of the long lag between infection and the appearance of clinical symptoms. Early detection facilitated the control of transmission (since extra precautions could be taken with individuals who tested positive) and increased the effectiveness of treatments (since treatment could be administered before the virus had seriously compromised the patient’s immune system).
Researchers who developed treatments followed a different approach than researchers who developed tests. They did not rely on a scientific paradigm, but simply tried drugs that had shown efficacy in treating other viral infections and boosting immune systems in an ad hoc, “see what helps” way. Development of treatments also was subject to more stringent regulation. In turn, patient groups pressured regulators to modify existing rules and standards.
Progress on all three fronts was accretive, proceeding through many incremental advances informed by novel discoveries and concepts, as well as by large and small disappointments. The first efforts to control transmission through contaminated blood, for instance, were crude: Blood from anyone in a group considered to be at risk was simply rejected. Similarly, early tests could not detect early-stage infections or show how far the infection had progressed. And AZT, the first effective drug to treat HIV/AIDS, had serious side effects, often damaging the liver and causing anemia; patients also quickly stopped responding to AZT treatments.
These advances, including the breakthroughs, also were accretive in not being sui generis: They built on scientific knowledge, instruments, technologies, practices, and organizations that preceded the outbreak of HIV/AIDS.
As is common in contemporary multiplayer innovation, participants were interconnected but not tightly coupled within and across their specializations. Sometimes, they consciously agreed to collaborate; in other instances, they drew on ideas and artifacts developed by strangers; and in yet other cases, they engaged in head-on competition.
International collaboration and rivalry were likewise salient in the HIV/AIDS effort. French scientists first identified the AIDS-inducing virus—debunking a prior hypothesis advanced by the American researcher Robert Gallo—while using a chemical agent that Gallo had developed. Reliable, economically viable tests for the virus were developed by competing multinationals. AZT had first been developed in Detroit to treat leukemia, shelved after it failed to deliver hoped-for results, shown to have antiviral properties by German scientists, and ultimately turned into an anti-AIDS drug by a UK-headquartered pharmaceutical company.
The rhetoric of national competitiveness notwithstanding, who made breakthrough discoveries or pioneered critical innovations, and where they did so, mattered less to the public good than local, on-the-ground effectiveness in exploiting the therapies, tests, and practices derived from high-level scientific or technological breakthroughs. European countries, and North American states and provinces, that made no significant contributions to HIV/AIDS research nevertheless derived tremendous benefits. In contrast, African countries that lacked the capacity to use advances against the disease realized far fewer benefits.
The campaign against HIV/AIDS deviated from the typical multiplayer pattern in one important respect: The participation of private businesses was almost entirely through large, public companies, and a few new and growing businesses that could raise significant amounts of funding from professional venture capitalists or public markets. Informally financed and self-financed ventures did not play the role they often play in nonmedical innovation, in terms of conducting pioneering experiments and diffusing new technologies.
Historically, improvised, informally financed ventures have conducted pioneering experiments, especially in fields where the payoff was small or murky. For instance, Ed Roberts bootstrapped the launch of the first personal computer in 1975, and Paul Allen and Bill Gates quickly provided its first high-level computer language (BASIC), also without outside funding.
Improvised entrepreneurs willing to pursue small opportunities also can play a significant role in the diffusion of new technologies. After the success of IBM’s PC legitimized personal computers, a swarm of self-financed entrepreneurs accelerated the computer’s deployment. This second wave of mostly uncelebrated entrepreneurs encouraged wavering customers to take a chance on the new technology, helped tweak mass-produced hardware and software to suit individual needs, and developed complementary products and extensions. In other words, informally financed entrepreneurs facilitated customers’ willingness to adopt new technology.
In my earlier work (Bhidé 2008), I suggested that FDA rules limit the kind of multiplayer innovation found in other sectors. Here, I will make a broader argument that, before there was an FDA, longstanding traditions dating back to Hippocrates discouraged inclusive innovation in medicine. The FDA and funding agencies such as the National Institutes of Health (NIH) then reinforced these traditions.
Barriers to Multiplayer Medical Innovation: Why HIV/AIDS Is an Outlier
A. Standardized medical training and knowledge. Building on a tradition initiated by Hippocrates nearly 2,500 years ago, the practice of medicine in Europe (and subsequently in America) has over time been restricted to credentialed physicians who undergo lengthy training. Although the training entails much more than book learning, in Europe it long has been based in universities with instruction by professors who are themselves physicians. This practice dates back to a school in Salerno, founded primarily for the study of medicine, that is considered to be the first university of any sort in Europe (Nuland, 2008, p. 70).
Restricting the practice of medicine to physicians who received university educations had (and continues to have) advantages. It helped establish a baseline of competence and create a common body of knowledge and beliefs among physicians. In turn, this helped provide consistent treatment of patients. University-based professors screened new ideas and helped consolidate accepted ones into a canon that would be passed on to succeeding generations. Medical faculties and textbooks also helped disseminate standardized knowledge across geographies. For instance, William Harvey, who discovered the circulation of blood, studied medicine in Padua, Italy, after receiving his bachelor of arts in Cambridge, England. In other fields that relied on masters to train apprentices, knowledge was less codified, and communities that could build on the ideas of past generations were not as cohesive.
There was also a downside to the collectivized canon. As Thomas Kuhn has argued, once a scientific community accepts a paradigm, its foundational ideas and assumptions are not open to question. The medical paradigm was similarly resilient. Galen, a second-century Greek physician, provided a foundation that was almost as unshakable as it was flawed (see box: Galen’s Anatomy).
Galen’s Hippocratic predecessors had only a surface knowledge of anatomy because of “cultural prohibitions against dissecting human bodies,” but they were not overly bothered by their ignorance (Bynum, 2008, p. 11). Galen, who saw himself as “completing” the framework of Hippocrates, included a detailed anatomical account in his texts, but derived his schema entirely from animal dissections—he never saw a human dissection. Worse, even his animal dissections were guided by his religious, creationist imagination.
Galen’s anatomy nonetheless became deeply entrenched. It even survived the introduction of human dissection into medical training because low-status barber/surgeons dissected bodies while professors in elevated chairs recited from Latin Galenic texts to bored students, never looking at the exposed cadavers below (Nuland, 2008, p. 72). It was not until the sixteenth century that Andreas Vesalius, a Brussels-born professor of anatomy at Padua who had studied medicine in Paris and did his own dissections, overthrew the Galenic paradigm. In 1543, Vesalius published the sumptuously illustrated De humani corporis fabrica (On the fabric of the human body), an unvarnished attack on Galen. Galen, Vesalius declared, had been deceived by his dissections of monkeys.
The reductive theory that disease was caused by an imbalance or corruption of the four humors (yellow bile, black bile, phlegm, and blood) survived even longer. The humoral doctrine was “at the heart of Hippocratic physiology and pathology,” although it “was not contained in all the Hippocratic treatises.” The Hippocratic ignorance of anatomy likely encouraged reliance on the simple humoral model, since its “operative elements” were “the bodily fluids” (Bynum, 2008, p. 11). Galen codified the Hippocratic doctrine of the humors, creatively tying it to a complicated—and erroneous—account of physiology and anatomy. Galen’s interpretation “gave humoral medicine such prestige that it dominated medicine until the eighteenth century” (Bynum, 2008, pp. 10, 15).
Standardized medical training also likely contributed to the persistence of treatments and resistance to new therapies. The Hippocratics, for instance, believed in vis medicatrix naturae, the healing power of nature, favoring treatments that would help a patient’s body do its natural work. These included bloodletting, since symptoms such as local inflammations or flushes of fever were regarded as evidence that the body had too much of the blood humor. The practice of bloodletting continued to be a “mainstay of therapeutics until the mid-nineteenth century, and physicians abandoned it only gradually and reluctantly” (Bynum, 2008, p. 13).
Restricting who could practice also restricted who could innovate—or at least whose innovations would be included in the canon. The history of medicine is almost entirely a history of new ideas and techniques developed and advanced by physicians. The title of Sherwin Nuland’s book, Doctors: The Illustrated History of Medical Pioneers, is telling. Individuals who were not physicians faced two obstacles—lack of exposure to patients and treatments, and lack of credibility (see box: Exposure and Credibility).
French physician Rene Laennec’s invention of the stethoscope, a mechanically simple but medically revolutionary device, illustrates the importance of exposure to patients that only practicing “insiders” could have. Before Laennec’s 1816 invention, physicians placed their ears on patients’ chests to diagnose conditions such as heart murmurs. Laennec’s invention apparently was accidental: Because he was uncomfortable putting his ear directly on the chest of a plump young woman, he used a tightly rolled notebook and found he could hear the sounds more clearly. He then crafted a stethoscope out of a hollow tube with a bell at one end and a diaphragm at the other. Over the next three years, Laennec created the vocabulary still used today to describe breath sounds, and correlated several heart and lung diseases with their auditory patterns. Someone who wasn’t a physician would not have stumbled into Laennec’s discovery, or then been able to categorize body sounds that made the stethoscope a ubiquitous diagnostic tool.
The discoveries and ideas of non-physicians also could not easily secure the credibility and audience to be accepted into the canon. Leonardo da Vinci preceded Vesalius in his dissections and anatomical drawings; but, although da Vinci’s drawings are famous now, they had virtually no influence on physicians in his time. Ambroise Paré, the son of a cabinetmaker who could not afford a proper university education, instead apprenticed as a barber/surgeon and served in French military campaigns. Paré’s military service secured him an outstanding reputation, and his books eventually transformed surgery. Yet they were disdained by University of Paris professors because Paré wrote in French, not Latin.
Nineteenth-century French scientist Louis Pasteur risked prosecution for conducting the first human trial of the rabies vaccine on a nine-year-old boy who had been mauled by a rabid dog. Pasteur did not hold the syringe, and the head of the pediatric clinic at Paris Children’s Hospital was present. But because Pasteur wasn’t a licensed physician, his supervision of the vaccination was illegal. As it happens, because the boy was cured, Pasteur was spared prosecution and hailed as a hero.
In the twentieth century, even after the role and training of nurses was professionalized, their lower status in the medical hierarchy severely circumscribed their role in medical innovation. For instance, Dame Cicely Saunders—after securing a degree in Philosophy, Politics, and Economics from Oxford—qualified as a nurse. In the late 1950s, when working as a volunteer nurse, she proposed a specialized hospice. Told that her ideas would never be accepted unless she earned a medical degree, Saunders did just that. Eventually, in 1967, she opened the pioneering St. Christopher’s Hospice and, after another 20 years of trying, persuaded the Royal College of Physicians to recognize palliative care as a medical specialty.
Restrictions on who could practice and innovate were not foreordained. Physicians across much of Asia traditionally were not university trained. Even in Europe, builders of complicated artifacts such as bridges, cathedrals, ships, and aqueducts were not required to have formal university educations. Individuals acquired the necessary knowledge and skills through observation, self-study, and formal or informal apprenticeships. Their ability to undertake technically challenging tasks was assessed not by their diplomas, but by their record of past projects, the quality of their mentors and patrons, and, in some cases, by their membership in a guild. Just as more individuals could practice outside medicine, more individuals, sometimes complete outsiders with spotty educations, could innovate (see box: Autodidacts).
As is well known, many of the storied names in the information technology revolution—including Bill Gates, Steve Jobs, and Mark Zuckerberg—have been college dropouts. They continue a long tradition, outside medicine, of pioneering contributions by the self-taught. John Harrison (1693-1776), for instance, followed his father into carpentry and taught himself to repair and build wood clocks in his spare time. When gridiron pendulum clocks appeared, Harrison’s inventions, such as the nearly frictionless grasshopper escapement, improved their performance. In the 1720s, another clockmaker, Henry Sully, invented a marine clock to help navigators determine longitude, but it worked only in calm weather. Harrison then spent more than four decades creating chronometers that kept accurate time in rough seas, eventually winning a prize from the British parliament after the intervention of the king. Notably, Harrison not only lacked much formal education, he also had difficulty communicating his ideas cogently.
Outside medicine, unschooled innovators whose contributions spoke for themselves flourished in the nineteenth century. George Stephenson, considered the “father of railways” in Britain, was illiterate until age 18, and learned to read and write in night school. He became an engine wright in a coal mine after repairing a pumping engine. In 1815, the year before Rene Laennec’s stethoscope, Stephenson invented a mining safety lamp. Then, after studying the workings of a locomotive used to haul coal, Stephenson constructed his own locomotive in a workshop behind his home.
Thomas Edison, who had just three months of formal schooling, provides another example. Edison’s first inventions derived from a brief stint as a telegraph operator, but he proceeded to rack up more than a thousand patents for inventions ranging from electric lighting, to power generation, to sound recording, to motion pictures. Certainly, autodidacts did invent some medical artifacts (such as Benjamin Franklin’s bifocals), but, as a rule, virtually all medical pioneers were trained, practicing physicians steeped in the prevailing medical paradigm.
B. Patient/physician relationships. Traditions governing the relationship between physicians and patients also have discouraged innovations from outside the field, and restricted venturesome consumption in medicine.
The holistic Hippocratic concept of medicine placed individual patients at the center of diagnosis and treatment, whereas a rival school of the time, the Cnidian, focused on the disease rather than the patient. Accordingly, “the Hippocratic doctor needed to know his patient thoroughly: What his social, economic, and familial circumstances were, how he lived, what he usually ate and drank, whether he had travelled or not, whether he was a slave or free, and what his tendencies to disease were” (Bynum, 2008, p. 7).
Later, even though holism waned (because advances in medical knowledge emphasized diagnosis and treatment of disease), physicians continued to idealize the treatment of patients as unique individuals. Practices pioneered in Parisian hospitals in the nineteenth century made the examination of patients more personal and comprehensive. Physicians began to routinely inspect patients for furriness of tongues and coloration of eyeballs, palpate to feel lumps or enlarged organs, percuss chests and abdomens, and auscultate with stethoscopes. Systematizing the traditional patient-by-patient approach preserved the idealized practice of medicine as artisanal, even as the knowledge and training of the “art” was codified and standardized. As Abraham Verghese of Stanford University’s School of Medicine puts it, “A very important, I would say ministerial, function of being a physician is to be attentive, is to be present, is to listen to that story, is to locate the symptoms on that person of that patient, not on some screen, not on some lab result, but on them.”1 Mass inoculations, mass screenings and the like emerged as specialized “public health” exceptions to the normal practice of medicine.
Much of capitalist enterprise since the nineteenth century has, however, pursued the large profit possibilities of mass production for mass markets. The limited economies of scale in an artisanal system likely curtailed commercial interest in medical advances. Thus, while innovations of the first and second industrial revolutions transformed activity upon activity—plowing fields, slaughtering hogs, freezing and canning produce, preparing breakfasts, making shoes and clothes, lighting streets, transporting goods, traveling for work and pleasure, and performing household chores—few innovators targeted health care. In many lists of great inventions in the second half of the nineteenth century, for instance, only x-rays and aspirin are overtly medical. Of course, the development of electricity, chemical processes and so on did have medical implications. My suggestion is simply that the artisanal approach in medicine (and fields such as law and education) dampened interest in developing and harnessing new technologies for medical care. Matthew Josephson’s (1934) Robber Barons lists magnates from many industries—real estate, steel, copper, railroads, shipping, hotels, coal, sugar, tobacco, and even furs and barbed wire. None of the barons made their fortunes in medicine or medical technologies, or in professional fields such as education, accounting, and law, which preserved artisanal practices.
Another salient feature of the traditional relationship between physicians and patients has been the passivity that the former expected of the latter. The responsibility for diagnosis and treatment has rested with physicians, who were thought to have the necessary training and experience. The first lines of Hippocrates’s Aphorismi observed, “Life is short, and art long, opportunity fleeting, experience perilous, and decision difficult.” How could lay-patients who had not devoted their lives to learning and practicing the medical art make difficult decisions? Medical professionals, therefore, severely discourage the choosing, mixing-and-matching, hacking, and tweaking that help bridge the gap between mass-produced goods on the one hand and our personal circumstances and preferences on the other. The good patient is compliant, not venturesome. Even the diagnostic reliance on symptoms reported by patients has been progressively diluted in favor of objective tests, typically performed on rather than by patients.
C. Church and State. “By about 1900,” writes Pickstone (1996, p. 306), “most Western states were supervising and subsidizing the education of doctors, underwriting the policing of medical misconduct, and protecting the regular professional against false claims to qualifications.” Well before the 1900s, however, the state—and church—had a strong influence on the development and use of medical knowledge.
Theological concerns, Nuland suggests, were among the factors that helped sustain Galen’s claims about human anatomy. Christianity before the Renaissance had “exerted an inhibiting influence” on the study of the human body, writes Nuland. Christian doctrines “diminished the importance of man’s corporeal being as compared to his soul,” and religious authorities were “quite satisfied with the teleological precepts of Galen.” Although Galen’s pagan creator was “quite different from the Judeo-Christian God, the church and synagogue were united in the belief that the Galenic construct accorded with their dogma far better than did any obtrusive efforts of objective research” (Nuland, 2008, p. 71).
The overlap of theology and healing exposed incautious innovators to sanctions imposed by religious authorities. The Spanish physician and theologian—and contemporary of Vesalius—Michael Servetus had corrected Galen’s erroneous account of how blood passes between the heart and lungs. However, Servetus published his thesis (which preceded Harvey’s more general theory of the circulatory system) in a religious anti-trinitarian tract, Christianismi Restitutio (The Restoration of Christianity). Servetus was denounced as a heretic, and in 1553, Calvinists burned him and his book at the stake (with green wood to prolong the agony).
On the positive side, religious orders started hospitals—which later became hubs for the development and dissemination of medical knowledge—to offer refuge (“hospitality”) to pilgrims and the needy. Hospitals, like monasteries and nunneries, also sometimes included infirmaries to tend to the infirm and sick. Hospitals assumed an exclusively medical role in the Middle Ages, many specializing in caring for leprosy and plague patients. Religious support continued as hospitals served patients rather than pilgrims, and the names of storied European hospitals such as Hotel Dieu in Paris, St. Bartholomew’s Hospital in London, and Santa Maria Nuova in Florence reflect their religious origins.
The state, to the extent it was distinct from the church, had a secular interest in medicine. Infectious diseases, especially the plague, could disrupt the public order and infect ruling elites. Wealth and power did not protect against illness. Appointments as court physicians were coveted and could provide valuable platforms for advancing medical ideas. The Greek Galen, for instance, served the Roman Emperor Marcus Aurelius, whose patronage protected Galen from the “vendettas of his rivals” and gave him the freedom to pursue his investigations with “plenty of help in the preparation of his manuscripts” (Nuland, 2008, p. 50). Treating wounded soldiers was of vital interest to the state, and also was a pathway to influence for some innovators and a spur to their innovations (see box: Ministering to the Military).
Serving in military campaigns was a path to influence for some pioneering physicians. Before Aurelius appointed Galen his court physician, the emperor had asked Galen to join his campaign against the hordes of the Marcomanni. Similarly, while barber/surgeons had low status in the medical hierarchy, they were considered indispensable on the battlefield. Paré’s service to French forces in several campaigns of the Italian Wars and the Wars of Religion—which started before his birth in 1510 and ended after his death in 1590—secured the barber/surgeon royal patronage and protection: King Charles IX personally intervened to spare Paré in the St. Bartholomew’s Day massacre of Paré’s fellow Protestant Huguenots.
Changing military technologies also spurred medical innovation. “Each new war demands yet further improvements in medical care,” writes Nuland (2008, p. 96), because it brings “more efficient methods of destruction. The injuries are more complicated, requiring increasingly sophisticated knowledge of the body in order to treat them.” European armies began using small guns and gunpowder in the fourteenth century, and artillery was first extensively used in the Italian Wars. Doctors of the time mistakenly believed that hot oil was necessary to “detoxify” gunshot wounds, providing Paré the opportunity to discover—after his supply of hot oil ran out—that bandages with soothing balms were more effective.
While state patronage might have given a boost to some innovators, the fact that it was restricted to credentialed physicians likely reinforced barriers to pluralistic innovation. Long, expensive educations (including learning Greek and Latin) typically restricted entry to the relatively wellborn. The pioneering anatomist Vesalius was “fifth in his family’s line of distinguished medical men, all of whom had been either scholars or physicians to royalty” (Nuland, 2008, p. 75). Restricted entry also helped elevate incomes and brought “gentlemanly status” to physicians who could “read Latin and dispute the niceties of Galen” (Bynum, 2008, p. 27). Physicians drawn from this milieu who could successfully compete for court appointments would not be expected to be the scrappiest or most innovative of their peers.2
The French Revolution, which initially tried to make medical practice more inclusive, had the opposite effect: It triggered a revolution in “hospital medicine” that systematized treatment and clinical research, reinforcing the specialized nature of medical training and innovation (see box: A Tale of Two Revolutions).
The French Revolution first “swept away” the “institutions of medicine—physicians, surgeons, hospitals, the old academies, and faculties”—and in the “heady” early 1790s, “it seemed best for everyone to be his or her own doctor” (Bynum, 2008, p. 45). The Faculté de Médecine of Paris, which had controlled teaching since the Middle Ages, was dissolved, leaving the training of doctors to individual enterprise, apparently in keeping with the philosophy of the new Republic. But the Revolutionary government’s army needed trained doctors, too, and in 1794, it reopened three medical schools (in Paris, Strasbourg, and Montpellier) primarily to train physicians for the military.
The National Convention established public competitions (concours) to fill the chaired and adjunct professorships it created in the schools. It decided, in a break with tradition, to pay faculty salaries (and allow them an unrestricted private practice) so that professors did not have to depend on tuitions paid by students. The National Convention also appointed a commission to report on how the reopened schools should function (Bynum, 2008, p. 45).
The commission recommended intensely practical training (the student ought to “read little, see much, do much”) based on studying hospitalized patients and their corpses. The forty-eight hospitals in Paris, overflowing with destitute patients, were expected to provide ample opportunity. The report also urged students to be trained both in medicine and surgery, since they would be expected to provide services to the military. Adoption of the recommendations had unexpected, far-reaching consequences. “Teaching hospitals” became centers for systematic research, not just hands-on training. Faculty members could observe and treat more patients in hospitals than they could through house calls. And hospital patients who “were mostly the poor and uneducated, and therefore powerless to have much of a say in the way they were treated” (Bynum, 2008, p. 47) served as unproblematic subjects for research.
Including surgery in the curriculum elevated the standing of surgeons and broadened the nature of medical research. According to Bynum, whereas physicians had been concerned with dysfunctions of the body as a whole (such as might arise through an imbalance of the humors), surgeons dealt with “solid” local problems (such as broken bones and abscesses). The surgical propensity to focus on local solids led to an interest in lesions and pathological changes in organs induced by disease. The opportunity to study many patients with similar symptoms led to pathologico-clinical research.
The clinical side involved recording detailed patient histories that included symptoms reported by patients, as well as the objective examination (inspection, palpation, percussion, and auscultation) pioneered in the French hospital system. Similar signs and symptoms provided a basis for categorizing diseases.
The pathological side involved associating diseases with pathological changes in organs (the lesions) discovered through postmortem examinations. Hospitals provided a natural venue for these postmortems since their autopsy rooms were as full as their beds. Correlating lesions and symptoms helped differentiate diseases rather than cure them or even identify their causes. It did, however, provide a foundation for later advances such as the germ theory of disease that did lead more directly to cures.
Thus, while the hospital system catalyzed by the French Revolution broadened medicine to include surgical knowledge and sensibilities, it also increased the scope and importance of specialized training and research. Reforms intended to reduce the role of lectures and textbooks in medical education actually increased the scope and reliability of codified textbook knowledge. Hospital medicine also then set the stage for further specialization through laboratory research pioneered by Pasteur and the German physician Robert Koch.
While faculty appointments in teaching hospitals were more meritocratic than court appointments, the role of patrons and pedigree did not disappear. The stethoscope inventor, Laennec, for instance, had been a top student, done important research, edited a medical journal, and developed a successful private practice. Yet because Laennec was a devout Catholic and Royalist and had no powerful sponsors, he could not secure an invitation to enter a concours for a faculty position. Laennec could only secure a hospital-and-teaching appointment after Napoleon fell and the monarchy was restored—and even then it took a personal connection. The appointment had a nearly immediate payoff: Within weeks, Laennec’s hospital rounds led to the invention of the stethoscope.
In medicine as in so many other fields, the larger resources commanded by the state in the twentieth century and the broadening of its regulatory reach significantly expanded the influence of the state in shaping innovation. The expansion in the U.S. was pronounced. As the Federal Government increased the share of private incomes it secured through taxes, it became a major source of funding for basic and applied research, especially following the Sputnik scare.
Federal agencies and foundations—including the National Science Foundation, National Institutes of Health, Department of Energy, and Defense Advanced Research Projects Agency—developed a structured process to evaluate grant applications. Unlike private philanthropists, taxpayer-funded agencies had to avoid the perception of caprice or bias. Peer review of thoroughly documented grant applications by other well-established researchers became the norm. The process, in turn, favored projects that addressed problems that could be deduced from the prevailing paradigm (“normal science,” in Kuhn’s terms) or sought solutions along lines suggested by such paradigms. Projects based on inchoate or out-of-the-box hunches, and projects whose steps could not be specified in advance, usually were not funded. Similarly, despite blind peer review, credentialed insiders with impressive curricula vitae and training in preparing proposals had advantages. Plus, because universities received a share of grant funds as overhead, they favored faculty (for promotion and honors) who undertook larger, more expensive projects.
The research and development—and new business development—projects of large, professionally managed corporations, which also became a major feature of innovation in the twentieth century, had a similar bias. Objective, independent scrutiny by bosses and committees could help reassure diffused shareholders that their funds were being judiciously used, just as peer review of grant proposals might comfort taxpayers. Here, too, the approval process favored projects whose risks and returns could be objectively investigated, whose execution could be systematically planned, and that were proposed by credible advocates. The fixed overhead of review and ongoing oversight encouraged large corporations to undertake a few large projects rather than many small ones.
In many fields, systematic big-ticket research financed by governments and large corporations did not displace smaller scale and more ad hoc efforts. Autodidacts and freelancers continued to invent and innovate in homes and garages, even in technical fields such as light polarization and instant photography (pioneered by Edwin Land, who dropped out of college to further his technological interests), xerography (refined in Chester Carlson’s kitchen), and personal computing. As previously discussed, synergies between organized and ad hoc initiatives made innovation more heterogeneous and multiplayer. New cryptographic techniques today are developed both by researchers working at behemoths such as Google and by offbeat individual programmers such as Moxie Marlinspike, whose simple encryption programs are used by the likes of Facebook (Yadron, 2015).
Heterogeneity was not universal, however. In some industries, such as nuclear energy and aircraft manufacture, significant government research, funding, and procurement—in conjunction with high industry concentration sometimes maintained by regulatory barriers to entry—gave innovation a uniform character. Innovative projects had objectively verifiable prospects (low “Knightian” uncertainty), and large capital requirements. They were carefully planned and undertaken by experienced, credentialed personnel. This was also the pattern in medicine. Increased government funding and regulation in the twentieth century expanded the prior role of the state in promoting innovation by credentialed specialists working in research institutions, rather than in homes and garages.
The National Institutes of Health (NIH) is an important case in point. It evolved from traditional government concerns with the care of military personnel and with epidemics, and it originated in the Marine Hospital Service (MHS) first charged with providing medical care to serving and retired Navy personnel. When the U.S. Congress asked the MHS to investigate epidemics, such as cholera and yellow fever, it established a lab to study bacteria. In 1930, that lab was designated as the NIH.
In 1967, the NIH created a division to fund research on noninfectious diseases, notably cancer, strokes, and heart disease, which had historically been of lesser interest to governments than infectious diseases. (The NIH already had started supporting cancer research in the 1920s through a partnership with Harvard Medical School and, in 1937, had taken over the previously independent National Cancer Institute (NCI).)
After President Nixon declared “war on cancer,” Congress passed the National Cancer Act of 1971, greatly increasing the NCI’s (and thus the NIH’s) budget and responsibilities. In the 1980s, the resources of the NCI were used in the campaign against HIV/AIDS (discussed earlier). In the 1990s, the NIH increased its emphasis on basic genetic research not tied to a specific disease, joining with international partners to launch the Human Genome Project. Overall, the budgets of the NIH increased more than 500-fold3 in the latter half of the twentieth century.
The NIH now undertakes research in twenty-seven of its own institutes and centers (such as the NCI) that employ more than 1,000 principal investigators and more than 4,000 postdoctoral fellows, making it the largest biomedical entity in the world. The NIH also disburses funds, amounting to four-fifths of its total budget, to researchers at universities, medical schools, and research institutions such as the Mayo Clinic.4 There is dispute about whether the NIH favors prestigious researchers and organizations. The NIH, which likely is sensitive to the need to maintain support in Congress, points out on its website that it funds research at more than 2,500 institutions spread across every state in the union. But however broadly the funds might be disbursed, it is a virtual certainty that nearly all of its grantees have doctoral degrees and institutional affiliations. In medical research, freelance Moxie Marlinspikes do not apply for or receive government funds. Even researchers from prestigious institutions can be shut out if they challenge the prevailing paradigm (see box: Swimming Against the Tide).
Obstacles faced by researchers who want to study the body’s immune response to cancer illustrate the difficulty of securing funding for research that questions the prevailing paradigm.
In the early 1890s, Dr. William Coley, a prominent New York surgeon, noticed that cancer patients who contracted acute bacterial infections experienced spontaneous remissions. Acting on the hunch that the infections had induced the remissions, Dr. Coley audaciously injected bacteria into a patient with an inoperable tumor to induce a “virulent infection.” When the patient recovered completely, Dr. Coley developed a bacterial mixture, known as “Coley’s mixed bacterial toxins,” for treating cancer patients.
But Coley’s approach of stimulating the body’s immunological response was overshadowed by radiology and chemotherapy, and his work was forgotten. After Coley’s death, his daughter, Helen Coley Nauts, found records of her father’s “toxin treatment” as she was going through his papers. For the next twelve years, Mrs. Nauts, a housewife with no medical training, “taught herself oncology, immunology, and record keeping,” tracked down 896 patients who had been treated with Coley toxins, and published findings showing the beneficial effects. Nauts also secured a $2,000 grant from Nelson Rockefeller to start the Cancer Research Institute (CRI) in 1953.
In 1971, the CRI recruited Dr. Lloyd Old, a physician/researcher, as its medical director, and started a fellowship program to “attract outstanding young scientists to immunology.”5 According to Don Gogel, who has served on CRI’s board since 1981, the fellowships stimulated “basic research that provides the foundation of today’s immunotherapies. The researchers we funded now form the core of the new wave of cancer immunology leaders. But for my first 20 years on the CRI board, I saw a sustained lockout of immunotherapies from the mainstream of funding and research blessed by NIH. The prevailing paradigm of chemotherapy and radiation treatment was favored. We stayed the course largely because of the conviction of Dr. Lloyd Old, who also served as chair of the Department of Immunology at Memorial Sloan Kettering.”
Lone-wolf innovators acting on hunches or challenging the prevailing views of their peers are not altogether absent in the medical sphere. Dr. Charles Kelman, an ophthalmologist in private practice in New York, invented the cryoprobe, an instrument to freeze and extract cataracts, in 1962. The following year, he developed freezing techniques to repair retinal detachments. Most of Robin Warren’s and Barry Marshall’s paradigm-defying work on how bacterial infections cause duodenal ulcers was done by the two Australian physicians—“after hours or at home,” according to Warren (2006)—without a research grant. When Marshall presented his findings in October 1982, the response was “mixed.” The “standard teaching,” according to Warren (2006), was that “nothing grows in the stomach.”
After Marshall was denied renewal of his hospital contract in Perth, he resumed research at a hospital in Fremantle in the face of continuing skepticism. He writes that “most of my work was rejected for publication, and even accepted papers were significantly delayed. I was met with constant criticism that my conclusions were premature and not well supported. When the work was presented, my results were disputed and disbelieved, not on the basis of science, but because they simply could not be true. I was told that the bacteria were either contaminants or harmless commensals” (Marshall 2006).
In 1950, Ewing Marion Kauffman, who was working as a salesman for a pharmaceutical company, started Marion Laboratories in his basement. The business—selling calcium supplements made from ground oyster shells—produced revenues of $36,000 in its first year and a net profit of $1,000. When Merrell Dow bought the company in 1989, Marion Laboratories had grown to become a diversified health care giant with nearly $1 billion in sales.
Where these “unfunded” exceptions make their mark is noteworthy. They usually succeed in areas that are not of primary interest to the NIH or to grant applicants from mainstream research establishments. Thus, while developers of new surgical techniques or diagnostic equipment (or business models, as in the case of Ewing Kauffman) face greater difficulty in securing grants than researchers doing cutting-edge genetic research, they also are less handicapped by their inability to secure them. Similarly, the development of treatments for conditions such as cataracts or ulcers—which people have learned to live with, or for which they have resigned themselves to available treatments—offers more opportunities to unfunded innovators than diseases such as cancer, which the NIH prioritizes.
Ample opportunities to innovate outside the NIH’s and the mainstream research community’s interests should, in principle, produce considerable freelance innovation by outsiders. But outsiders face disadvantages beyond their inability to secure research grants. One disadvantage, as previously discussed, is the restriction of medical practice to trained physicians. Like Laennec and other pre-industrial-revolution innovators, Drs. Charles Kelman, Robin Warren, and Barry Marshall developed their novel ideas in the course of caring for patients. Just as importantly, FDA rules pose formidable barriers to the kind of frugal development that innovators can undertake in, for example, cryptography or smartphone applications.
The FDA became a formidable force in the twentieth century; before that, there were few federal laws regulating the production and sale of food or pharmaceuticals. The agency’s foundational legislation was the Pure Food and Drug Act of 1906, passed to control the adulteration and “misbranding” that had increased along with high-volume production and interstate sales. The U.S. Department of Agriculture’s Bureau of Chemistry, which was tasked with enforcing the act, mounted an aggressive campaign but found its authority checked by the courts. In 1911, for instance, the Supreme Court ruled that the 1906 act did not cover false claims of therapeutic efficacy. Congress responded by expanding the definition of “misbranding” to include “false and fraudulent claims,” but the courts again limited enforcement by setting high standards of proof for fraudulent intent.
In 1938, Congress passed the Food, Drug, and Cosmetic Act after an elixir formulated with a toxic solvent had claimed more than 100 lives. The 1938 legislation gave the FDA (as the Bureau of Chemistry had by then been renamed) sweeping powers. The law not only mandated premarket review of the safety of all new drugs (which could have prevented the elixir tragedy); it also allowed the FDA to ban false therapeutic claims without proving fraudulent intent, authorized it to inspect manufacturing facilities, and brought medical devices (defined broadly enough to cover toothbrushes) under the FDA’s authority. Courts after the New Deal also were more helpful to the FDA as it enforced its new powers against drug manufacturers that made unsubstantiated claims. For instance, a 1950 court decision (in Alberty Food Products Co. v. United States) held that omitting the intended use of a drug from its label did not provide a safe harbor against “false therapeutic claims.” However, the FDA could not yet prevent the introduction of ineffective drugs; it could only force recalls.
A 1962 amendment to the 1938 act authorized the FDA to put the onus of providing “substantial evidence” of efficacy on developers before they could market a new drug or device. The act defined substantial evidence as comprising adequate and well-controlled investigations, including clinical investigations, by experts qualified by scientific training and experience to evaluate effectiveness. The FDA (1998, p. 2) further took the position that, in passing the 1962 legislation, Congress had “intended to require at least two adequate and well-controlled studies, each convincing on its own, to establish effectiveness.” Moreover, in 1962, the “prevailing efficacy study model” had been “a single institution, single investigator, relatively small trial with relatively loose blinding procedures, and little attention to prospective study design and identification of outcomes and analyses.” Over time, the FDA required efficacy studies to be “multicentered, with clear, prospectively determined clinical and statistical analytic criteria” (FDA, 1998, p. 12).
There was an obvious downside to tougher efficacy requirements. As the 1998 FDA document acknowledges, “the demonstration of effectiveness represents a major component of drug development time and cost. The amount and nature of the evidence needed can, therefore, be an important determinant of when and whether new therapies become available to the public.” Congress ameliorated the problem of the delays faced by developers of new drugs by passing the Hatch-Waxman Act of 1984. This legislation extended patent protection given to new drugs, tying the extension to the time required to secure FDA approval. But, while the extension increased the incentive for pharmaceutical companies to continue developing new drugs, it did little to speed up the process. It also did nothing to reduce the costs of regulatory compliance and thus the prices consumers had to pay.
High costs of efficacy requirements also limit innovation to established companies and the relatively few new businesses that can secure funding from professional venture capitalists. According to DiMasi, Hansen, and Grabowski (2002, p. 162), the mean cost (in 2002 dollars) incurred during Phase I trials (which provide the first screen for safety) for drugs approved in the 1990s was $15.2 million; the Phase II cost (in which new drugs are tested for safety and efficacy on as many as a few hundred patients) was $23.5 million; and the Phase III cost (which involves large-scale randomized and blinded testing of thousands of patients) was $86.3 million. These sums are outside the scope of informally financed new businesses. For instance, most founders of companies on Inc.’s list of the 500 fastest-growing companies in the U.S. (which exemplify exceptionally successful, informally financed ventures) start their businesses with less than $20,000. Unsurprisingly, very few “Inc. 500” companies develop products that require FDA approval.6
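A back-of-the-envelope sum, using only the DiMasi, Hansen, and Grabowski means quoted above (and ignoring preclinical and other costs), makes the mismatch concrete:

\[
\$15.2\text{M} + \$23.5\text{M} + \$86.3\text{M} \;\approx\; \$125\text{M},
\]

more than 6,000 times the $20,000 or less with which most Inc. 500 founders start their businesses.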
FDA and intellectual property rules also discourage continuous iterative innovation, even by well-capitalized enterprises (including venture capital-backed companies), which is the hallmark of innovation outside the biomedical sector. As I have previously argued (Bhidé, 2008), the FDA has developed the sensibilities of researchers in the natural sciences (who try to discover universal, parsimonious laws) rather than those of engineers or technologists (who try to develop artifacts that solve specific problems and are often complex).7 In pharmaceutical regulation, “science-mindedness” apparently has engendered a strong preference for simple, single-molecule drugs whose therapeutic effects on a specific indication can be more easily isolated than with cocktails or mixtures. The scientific orientation of the FDA also is reflected in rules requiring well-specified, controlled experiments to test the efficacy and safety of molecules (and medical devices).
This regulatory posture limits the scope for “try it, fix it” innovation. Once a compound has been submitted for approval, and the FDA has approved a testing protocol, the developer is in the same position as someone conducting a science experiment: the compound and the protocol are fixed. A skilled chef can make up for a missing ingredient by modifying the recipe, but if a compound is discovered to have an unexpected toxicity, the FDA’s single-molecule approach makes it impossible to effect compensatory adjustments by adding something to offset the toxicity. In fact, because FDA-approved protocols (like good science experiments) are more or less cast in stone, the developer cannot adjust the dosage or other aspects of how the drug is administered to patients once a trial is under way. Developers of devices, especially ones implanted in or attached to the body, are afforded only modestly greater flexibility to make changes during trials.
Developers of new surgical techniques do not seem to be significantly constrained in making alterations, however. There are, in fact, no explicit federal regulations governing innovative surgery, save general Department of Health and Human Services Institutional Review Board (IRB) guidelines covering research on human subjects. According to a survey by Reitsma and Moreno (2002), surgeons experimenting with new techniques rarely seek IRB review, and few are even familiar with how research on human subjects is defined.
The rules discourage ongoing development after a new product has made it to market, as well. In pharmaceutical products, the costs of establishing safety and efficacy represent an obvious barrier to developing new applications or versions. In addition, according to one expert, pharmaceutical companies that discover that an approved drug produces better results in a different dosage, or has utility in treating a different condition, are reluctant to go through FDA trials (for the different dosage or indication) because they are afraid the trials might produce data that will lead the FDA to reexamine the drug already being sold. At the same time, strong intellectual property rights limit the competitive pressures that induce companies like Intel to introduce advances on regular “tick-tock” cycles (improving the manufacturing process on the “ticks” and the architecture of chips on the “tocks”).8 As long as the patent hasn’t expired, pharmaceutical companies typically focus on ways to increase sales (by persuading more doctors to prescribe or insurance companies to reimburse) rather than on ongoing improvements.
Barriers to incremental advances are somewhat lower in devices than in drugs, although the intellectual property incentives are similar. The FDA reduces the evidentiary burden for incremental advances; it clears devices after reviewing a premarket notification (known as a 510(k)) showing that the device “is substantially equivalent to a device that already is legally marketed for the same use.” New devices, in contrast, must be “approved” rather than “cleared.”9 Therefore, ongoing incremental improvements may be a more routine feature in medical devices, as Gelijns, Rosenberg, and Dawkins (1995) find. The true extent of incrementalism is hard to pin down, however, because device developers have an incentive to seek FDA clearance rather than go through the onerous “approval” process for new products. Regardless, even in devices, there is no equivalent of the weekly or monthly updates that companies such as Microsoft can issue (without any regulatory approval). With both drugs and devices, regulatory (and intellectual property) rules also preclude third parties from adding value by making small changes or developing complements.
Additionally, the FDA isn’t a friend of venturesome consumption. As mentioned, consumers make leaps of faith in deciding whether new, nonmedical products will be worth the price and risk. They often mix and match or hack standardized, mass-produced products to suit their idiosyncratic needs. But the FDA has been mandated to make choices about safety and effectiveness on everyone’s behalf. And as a practical matter, the FDA’s evaluations pertain to standardized interventions. The agency deems something to be safe and effective when “used as directed” where the “as directed” matches the precisely specified conditions under which preapproval trials were conducted. As in the traditional physician/patient relationship, the FDA’s approach favors patients who comply with the “use as directed” injunction rather than making choices of their own.
One manifestation of the tension between venturesome consumption and the FDA’s mandate and modus operandi is in the area of at-home and direct-to-consumer testing. The FDA treats all home-use testing equipment—and tests sold directly to the consumer—as medical devices. Therefore, manufacturers of testing equipment (or providers of tests) have to establish that the tests are safe, reliable, and properly labeled (with warnings prescribed by the FDA), and that they satisfy the agency’s good manufacturing practice requirements as specified in its quality systems regulations. In addition, the FDA “requires the results to be conveyed in a way that consumers can understand and use.” In particular, the FDA requires that “user comprehension” studies “must obtain values of ninety percent or greater user comprehension for each comprehension concept.”
If the agency deems consumers incapable of understanding the results of a test, it requires that the results be channeled through a “licensed practitioner.” The cost of satisfying these requirements can make at-home testing commercially unviable, hindering the ability of venturesome individuals to take charge of tweaking interventions and therapies—an outcome that the FDA perhaps prefers.
Promoting Multiplayer Innovation
There is considerable concern about the nature and cost of medical innovation. Critics argue that new therapies now target diseases of the few rather than of the many, provide small incremental benefits (adding only a few weeks to the lives of the terminally ill, for instance), or control rather than cure chronic conditions. Cheap, effective treatments are underutilized because of inadequate incentives for their widespread adoption. Meanwhile, spending on medical research—and on healthcare overall—continues to rise even as prices in much of the rest of the economy remain steady or even fall.
Typical remedies seem to focus on increasing the effectiveness of specialized researchers. One such approach is to promote “translational” and “interdisciplinary” research. In 2006, for instance, the NIH created the Clinical and Translational Science Award (CTSA) program, which had expanded to about 60 academic medical institutions in the U.S. by 2015. Similarly in 2005, the NIH launched an Interdisciplinary Research (IR) program to “change academic research culture such that interdisciplinary approaches and team science spanning various biomedical and behavioral specialties are encouraged and rewarded.” The program’s components included interdisciplinary research consortia, training programs, a “Multiple Principal Investigator (Multi-PI) Policy,” and the fostering of new “interdisciplinary Technology and Methods.”10
The history of innovation inside and outside medicine suggests, however, that the relationship between knowledge developed through basic research, typically undertaken without regard to its practical use, and the practical application of such knowledge is difficult to predict and control. In some instances, researchers have been able to systematically and successfully apply basic research; in other cases, applications have been discovered serendipitously (the use of the transistor principle in transistor radios) or after frustrating lags (as in the effort, started in the 1980s, to apply knowledge of nano-molecules, which is only now bearing fruit). In yet other cases, practical knowledge and inventions have led the development of scientific knowledge. As L.J. Henderson observed, “Until 1850, the steam engine did more for science than science did for the steam engine.”
In medicine, too, clinical practice often has preceded scientific understanding (Nelson et al., 2011), and long lags between scientific discovery and treatments have been commonplace. Harvey’s revolutionary discovery that blood circulates had virtually no impact on treatments (including the treatment of Harvey’s own patients) for nearly a century. The practical consequences of the many disease-pathology correlations discovered in French hospitals in the nineteenth century took just as long to materialize. Linus Pauling and his colleagues demonstrated in 1949 that sickle-cell disease occurs as a result of an abnormality in the hemoglobin molecule. This was a milestone in the history of molecular biology. Yet the disease remains incurable. How much an organized effort can accelerate the translation of findings in genomics research into treatments—or whether a systematic process can be developed to shorten lags between medical science and medical treatment—remains to be seen.
Similarly, while many important medical and nonmedical innovations have resulted from the cross-pollination or integration of ideas across fields, the process often has been serendipitous. As often as not, innovators have borrowed ideas from other domains without any formal collaboration. For instance, Kelman was inspired to develop phacoemulsification (removing cataracts after pulverizing them with ultrasound) as he was having his teeth cleaned by a dentist. There are certainly examples of successful, structured collaborations, which are integral to modern design-thinking approaches to organized innovation. Some of the important advances in cataract treatments that followed Kelman’s serendipitous insight resulted from purposeful multidisciplinary effort. But where and how structured multidisciplinary innovation works better than the ad hoc cross-fertilization of ideas remain open questions.
The success of multiplayer innovation outside medicine and in the campaign against HIV/AIDS suggests consideration of a different regulatory approach—that of trying to make medical innovation more pluralistic, decentralized, and continuously accretive. Granted, such a change would run counter to a very long tradition. As we have seen, medicine long has favored specialization and an epistemological monoculture that induces centralization by standardizing knowledge and training. There are, nonetheless, signs of increased openness.
Outside players have increased their roles. As I suggested earlier, medicine was on the periphery of the first and second industrial revolutions, perhaps because physicians and patients could effectively resist changes to artisanal practices. However, many large companies that emerged from the industrial revolutions did later diversify into the medical sphere. Kodak started making x-ray products in 1896 and, after the Second World War, produced equipment and film for tuberculosis screening. General Electric, Philips, and Siemens developed large medical-equipment businesses. Similarly, after the success of Genentech, venture capitalists who previously had specialized in nonmedical technologies, such as computers and software, started investing in biotechnology, medical-device, and medical-services companies. Companies such as IBM and Google now are seeking to apply their big-data analytics and cloud computing capabilities to health care.
As Nelson et al. (2011) observe, many technologies that weren’t developed for medical purposes have nonetheless been incorporated into medical devices. They point out that lasers, which had “no connection with research aimed to understand disease,” became “a central component” of many “effective medical treatments.” Similarly, computerized tomography (CT) scanners “drew heavily on advances in computers and mathematics, ultrasound had its origins in submarine warfare, and magnetic resonance imaging (MRI) had originated in the work of experimental physicists.” We may further note that large industrial companies and venture capital-backed businesses—not traditional medical researchers—drove much of this cross-fertilization.
Widespread online information-sharing has encouraged venturesome consumers to take many medical matters into their own hands. Online information-sharing started modestly with the formation of Usenet groups at the University of North Carolina and Duke University in 1980. Through the 1980s, Usenet membership was restricted to the few individuals who had access to the Internet and the technical skills necessary to participate in a group. Since then, Internet connectivity has become ubiquitous, and posting or retrieving information has become mundane. In addition, channels for information-sharing have multiplied to include online forums and social networks, such as Facebook and YouTube. Easy information-sharing has, in turn, prompted individuals to investigate and try to solve problems in domains—including medicine—where they have no training or experience. For many, a web search has become a complement to—and in some cases a substitute for—consulting a physician.
The FDA has cautiously changed some of its efficacy requirements. In the mid-1980s, AIDS activists had pressured the FDA to allow several thousand patients access to AZT before it had been approved. In 1987, the FDA formalized the conditions under which patients could get new drugs that were still in trials. Under “treatment” Investigational New Drug (IND) rules, doctors could prescribe the drugs to patients who were not enrolled in a clinical trial only if the patients had advanced life-threatening diseases for which no other treatment was available. The rules also required the drugs’ developers to “diligently” pursue normal trials, and refrain from promoting or otherwise commercializing not-yet-approved drugs. By August 1994, twenty-nine drugs had been granted treatment INDs, of which twenty-four had received normal approval by the end of that year (Flieger, 1995). In the 1990s, the FDA began “priority” reviews for applications that might produce major advances. It also began accepting evidence of proxy effectiveness—for instance, approving drugs that reduce cholesterol on the premise that reducing cholesterol reduces the risk of heart disease.
At the same time, the FDA increased the scope of its regulation to cover new trends in self-diagnosis and self-testing by individual consumers. As mentioned, the FDA now regulates home-testing devices and tests sold directly to consumers. In one celebrated instance, the FDA forced 23andMe to stop marketing its $99 Personal Genome Service, which provided consumers who mailed in a saliva sample with more than 200 health reports. The FDA’s 2013 warning letter complained that the tests could “produce false positive or false negative assessments for high-risk indications,” and that patients who did not adequately understand the test results might use them to “self-manage,” possibly even abandoning necessary treatments. (In early 2015, the FDA allowed 23andMe to offer a single test, the Bloom syndrome carrier test, after the company conducted two separate studies to demonstrate the accuracy of the test—one “usability study” to show that consumers could adequately follow instructions about how to submit saliva samples, and another study to show that consumers could understand the results.11)
Similarly in 2013, the FDA declared that apps on smartphones and other such mobile devices would be regulated as medical devices if they were “used as an accessory to a regulated medical device,” or if they “transform[ed] a mobile platform into a regulated medical device.” It encouraged “app developers to contact the FDA—as early as possible—with questions about mobile apps, their level of risk, and whether a premarket application is required.”12
As it happens, consumers aren’t always mistaken in questioning accepted treatments. Radical mastectomy was the standard treatment for breast cancer until quite recently. The few doctors who questioned its effectiveness would have been ignored had it not been for a patient revolt against the drastic surgery. And FDA regulation hasn’t been foolproof in terms of safety or effectiveness. Recent products that had to be recalled after passing safety tests included the anti-inflammatory drug Vioxx and Guidant’s defibrillators and pacemakers.
Overestimates of effectiveness are arguably even more commonplace. The use of antibiotics to treat ulcers (after research by Marshall and Warren) displaced treatments of dubious utility that, nonetheless, had received regulatory approval. Even if completely useless treatments rarely make it through FDA scrutiny, efficacy in actual use often tends to be much lower than reported in earlier randomized, blind trials. This so-called “decline effect,” according to Lehrer (2010), is “extremely widespread” in medicine, affecting therapies such as cardiac stents, antidepressants, Vitamin E, and antipsychotic drugs.
Some libertarians might see these shortfalls as a reason to shut down the FDA. Believers in the FDA’s current mission—and in the improvability of randomized trials—would see them as reason to provide more funding for safety and efficacy trials. The analysis of this essay suggests consideration of a third, retro-radical alternative: require the FDA to return to its foundational mission of controlling safety, while confining its testing for effectiveness to those interventions (such as vaccines) where lack of effectiveness poses obvious public health risks (or where the lack of efficacy also unambiguously jeopardizes safety). As the costs of satisfying regulatory efficacy requirements are reduced, the intellectual property protections offered to innovators also should be scaled back. In other words, the implicit bargain reflected in the Hatch-Waxman Act—more patent protection to offset longer efficacy trials—could be reversed. Licensing under “fair, reasonable, and nondiscriminatory” terms, now required by standard-setting bodies, similarly could be adapted for medical patents, especially those originating in publicly funded research.
Tough safety rules and little to no regulation of efficacy have become the norm outside medicine. Regulators did not subject transformative nonmedical advances—including steam engines, light bulbs, cars, computers, and the World Wide Web—to randomized tests for effectiveness or comprehensibility of operating instructions. It is only in medicine (and increasingly in foreign aid to third-world countries) that “evidence based” is considered synonymous with controlled randomized testing. Evidence of effectiveness or value typically is collected quite differently. For instance, in September 2014, Microsoft launched its Windows Insider Program to collect continuous feedback about its operating system as it was being developed. By the end of the year, nearly 1.5 million people had signed up. They did not at all comprise a random sample. Nor did Microsoft test features in successive releases in any structured or controlled manner. Similarly, competing products or technologies are subjected to a pluralistic, Darwinian sort of selection that isn’t blind, standardized, or centralized. Rather, in the multiplayer game, many buyers decide whether to take a chance on new offerings, using their own objective and subjective standards.
In some cases, this trial by the many may lock everyone into a poor choice (such as, allegedly, the “Qwerty” keyboard or VHS videotapes). As a rule, however, decentralized consumer choice supports the diversity of innovators and their offerings by protecting innovators against the prejudice or bias of a few expert judges. (Many industry experts, it may be recalled, panned the iPhone when it was first introduced, but the device nonetheless secured a fanatical following). Moreover, unstructured pluralistic testing produces much more data (by millions of testers in the Windows Insider Program, compared to the thousands enrolled in FDA trials). Potentially, to the extent a large number of testers encompasses greater diversity, pluralistic testing better matches products and features with the heterogeneous problems and preferences of users.
At the same time, the U.S. government, like governments abroad, has significantly expanded its regulatory powers to promote personal and public safety. For instance, the National Highway Traffic Safety Administration’s New Car Assessment Program encourages manufacturers to build safe vehicles. The Federal Aviation Administration has an elaborate process to oversee the design, manufacture, and maintenance of aircraft to ensure that they meet “the highest safety standards.”13 The EPA seeks to control pollution in the air, land, and sea. The Federal Communications Commission screens new computers and other digital devices for “harmful interference” with “police, ambulance, and fire communications” and “air traffic control operations” (Federal Communications Commission, 1996).
In medicine, too, we should expect that an FDA that made safety its primary focus would reduce the incidence of dangerous drugs or devices brought to market. The FDA could devote more money and time to safety. Plus, safety trials that don’t have to be randomized to establish efficacy could enroll more patients.
Meanwhile, scaling back the regulation of efficacy (or making FDA approvals of efficacy, like Good Housekeeping seals, voluntary) promises two important benefits:
- Sharply reducing the costs of regulatory compliance should foster some of the hectic, frugal innovation that we find in so many other fields.
- Replacing centrally supervised, randomized trials with more pluralistic evaluations (by medical associations, insurers and other third-party payers, and online consumer communities) should improve the matching of treatments and patients.
Useless treatments might increase with more innovations coming to market. But, as is the case outside medicine, widespread sharing of diverse experiences about actual use might yield more knowledge of what works best and under what circumstances. We could sip a little more of the holy grail of personalized medicine on the cheap simply by allowing more ad hoc user experimentation.
We certainly should not suppress science, disdain biotech and Big Pharma, or replace trained physicians with barefoot, Maoist doctors. But we could be less credulous about imminent research breakthroughs, and offer more scope for nurse practitioners and even completely uncredentialed outsiders to innovate. Placing ever-larger bets on exclusive innovation is a poor remedy for its debilities. Harnessing the enterprise and ingenuity of the many and for the many should be the way ahead.
I wrote this paper for the 2015 Kauffman Foundation New Entrepreneurial Growth conference. The key section about HIV/AIDS relies heavily on an ongoing project to compile a large collection of case histories about medical innovation. I am very grateful to David Roux, whose most generous gift made the project possible; to Katherine Stebbins McCaffrey, research associate at Harvard Business School who is diligently and thoughtfully writing up the case histories; and to Srikant Datar, my valued collaborator and partner on the project. Srikant also provided valuable comments on this draft. I am solely responsible for any and all factual inaccuracies, questionable inferences and sweeping over-generalizations contained in this essay.
About the Author
Amar Bhidé is the Thomas Schmidheiny Professor of International Business, a member of the Council on Foreign Relations, editor of Capitalism and Society, and a founding member of the Center on Capitalism and Society. He is the author of A Call for Judgment: Sensible Finance for a Dynamic Economy (Oxford, 2010) and The Venturesome Economy: How Innovation Sustains Prosperity in a More Connected World (Princeton, 2008). In addition, he has written numerous articles in the Harvard Business Review, the Wall Street Journal, The New York Times, BusinessWeek, and Forbes. Bhidé was previously the Glaubinger Professor of Business at Columbia University and served on the faculties of Harvard Business School and the University of Chicago’s Graduate School of Business. Bhidé earned a DBA and MBA from Harvard Business School with High Distinction and a B.Tech from the Indian Institute of Technology.
Amar Bhidé, The Demise of U.S. Dynamism Is Vastly Exaggerated – But Not All Is Well, SSRN, http://ssrn.com (January 26, 2015).
Amar Bhidé, The Origin and Evolution of New Businesses (New York, Oxford University Press, 2000)
Amar Bhidé, The Venturesome Economy: How Innovation Sustains Prosperity in a More Connected World (Princeton, N.J., Princeton University Press, 2008)
William Bynum, The History of Medicine: A Very Short Introduction (Oxford, Oxford University Press, 2008)
Daniel Carpenter, Reputation and Power: Organizational Image and Pharmaceutical Regulation at the FDA (Princeton: Princeton University Press, 2010)
Ken Flieger, FDA Consumer Special Report, FDA Finds New Ways to Speed Treatments to Patients (January 1995) (downloaded July 16, 2015)
Federal Communications Commission, Understanding the FCC Regulations for Computers and Other Digital Devices (OET Bulletin 62, 1996) (downloaded July 17, 2015)
Food and Drug Administration, Guidance for Industry: Providing Clinical Evidence of Effectiveness for Human Drugs and Biological Products (1998) (downloaded July 12, 2015)
Annetine Gelijns, Nathan Rosenberg, and Holly V. Dawkins, Sources of Medical Technology: Universities and Industry, Medical Innovation at the Crossroads, vol. 5, Institute of Medicine (U.S.), Committee on Technological Innovation in Medicine (Washington, D.C., National Academy Press, 1995), 67.
Robert J. Gordon, “The Demise of U.S. Economic Growth: Restatement, Rebuttal, and Reflections” (NBER Working Paper 19895, February 2014)
Robert J. Gordon, “A New Method of Estimating Potential Real GDP Growth: Implications for the Labor Market and the Debt/GDP Ratio” (NBER Working Paper 20423, August 2014)
Jonah Lehrer, “The Truth Wears Off,” The New Yorker, December 13, 2010, (downloaded July 16, 2015)
Barry J. Marshall, Autobiographical Essay (Nobelprize.org, 2006) (downloaded July 11, 2015)
Robert Merges, Richard R. Nelson, “On Limiting or Encouraging Rivalry in Technical Progress: The Effect of Patent Scope Decisions,” Journal of Economic Behavior & Organization 25(1):1-24. DOI: 10.1016/0167-2681(94)90083-3 (1994)
Richard R. Nelson, Kristin Buterbaugh, Marcel Perl, Annetine Gelijns, “How Medical Know-How Progresses,” Research Policy, Volume 40, Issue 10 (December 2011) 1339-1344, ISSN 0048-7333, http://dx.doi.org/10.1016/j.respol.2011.06.014
Neil Osterweil, “Medical Research Spending Doubled Over Past Decade” MedpageToday (September 20, 2005) (downloaded July 10, 2015)
John Pickstone, “Medicine, Society and the State” in Roy Porter (ed.), Cambridge Illustrated History of Medicine (Cambridge, Cambridge University Press, 1996), 304-341.
Nathan Rosenberg, Perspectives on Technology (New York, Cambridge University Press, 1976)
Nathan Rosenberg, Inside the Black Box: Technology and Economics (New York, Cambridge University Press, 1982)
A.M. Reitsma, J.D. Moreno, “Ethical Regulations for Innovative Surgery: The Last Frontier?,” Journal of the American College of Surgeons 194(6), (June 2002) 792-801
Robert Solow, “Building a Science of Economics for the Real World,” prepared statement for House Committee on Science and Technology Subcommittee on Investigations and Oversight, July 20, 2010
Oliver Staley, “Success at Glaxo’s HIV Unit May Mean Having to Call It Quits,” BloombergBusiness (July 9, 2015)
Sherwin Nuland, Doctors: The Biography of Medicine (New York, Alfred A. Knopf, 1988)
Sherwin Nuland, Doctors: The Illustrated History of Medical Pioneers (Black Dog and Leventhal, 2008)
Walter G. Vincenti, What Engineers Know and How They Know It (Baltimore, Johns Hopkins University Press, 1990)
Robin J. Warren, Autobiographical Essay (Nobelprize.org, 2006) (downloaded July 11, 2015)
Danny Yadron, “Moxie Marlinspike: The Coder Who Encrypted Your Texts,” Wall Street Journal, July 10, 2015, B1
Robert Gordon (2014a, 2014b) offers a pessimistic assessment of innovation during the past forty years and a discouraging prognosis for what we can expect in the future.
The golden age of innovation drew to a close nearly a century ago, according to Gordon, with the end of the second industrial revolution. The great advances of that revolution sustained continued improvements in productivity growth for another fifty years. But because there have been fewer technological breakthroughs, productivity growth—which Gordon and other economists say is the best possible measure of innovation—has been in a slump since the early 1970s.
I long have argued the opposite (starting with Bhidé, 2000 and more extensively in Bhidé, 2008, chapter XV)—that our innovative capacity improved in the twentieth century. More recently (Bhidé, 2015), I also have used the notion of inclusive innovation to buttress well-known criticisms about the utility of abstracted estimates of productivity. I have argued that the essence of multiplayer innovation—the relentless changing of what, how, and by whom it is produced and priced—is assumed away in models used to estimate productivity.
Consider, for instance, Gordon’s (2014a) dismissal of the consumer surplus that goes uncounted in standard measures of the output that innovations help increase. He writes that “real GDP measures have always missed vast amounts of consumer surplus since the dawn of the first industrial revolution almost three centuries ago,” but provides no evidence for why this vast miss should be constant over time. To the contrary, we should expect that changes in industry concentration, product and capital market fashions (buoyant stock markets encourage pricing for market share), the proportions of tangible goods and intangible services, and the government’s role in providing and purchasing goods and services will change the consumer’s share of the innovation surplus in an irregular, unpredictable way.
Estimates of productivity themselves are as implausible as their underlying assumptions. Robert Solow (2010), who, according to Gordon, was instrumental in establishing total factor productivity (TFP) as the main measure of technological advances, criticizes DSGE models for failing the smell test. The model Solow pioneered to estimate productivity also produces results that would fail many people’s olfactory standards. A model showing that productivity—and, by implication, innovation—was much higher during the Depression than in the Roaring Twenties beggars belief. So, too, do results showing that there has been little improvement in the productivity of the service sector, even after cutthroat competitors have invested trillions of dollars, and have visibly transformed banking, retailing, transportation, and even government-provided services.
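For readers unfamiliar with how such estimates are constructed, the following is a minimal sketch of the conventional growth-accounting identity behind TFP; it is the standard textbook formulation, not a formula taken from Gordon’s or Solow’s own papers, and the symbols are the usual ones.

\[
Y_t = A_t K_t^{\alpha} L_t^{1-\alpha}
\quad\Longrightarrow\quad
\underbrace{\Delta \ln A_t}_{\text{TFP growth}} \;=\; \Delta \ln Y_t \;-\; \alpha\,\Delta \ln K_t \;-\; (1-\alpha)\,\Delta \ln L_t ,
\]

where \(Y\) is measured output, \(K\) and \(L\) are measured capital and labor inputs, and \(\alpha\) is capital’s share of income. Because TFP is a residual, anything that escapes measured output, such as uncounted consumer surplus, escapes the productivity estimate as well.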
The great advances in controlling and treating HIV/AIDS discussed in this paper also underscore the pointlessness of relying on conventional estimates of economic output and productivity growth to measure innovation. Practices that have prevented untold infections are reflected in the national accounts, and thus in productivity estimates, mainly in terms of the value of artifacts—condoms, needles, test kits, and the like—that are used or distributed, or in salaries paid to public health workers. The value of prevention itself, which is not a commodity sold in a market, goes uncounted. The situation with accounting for the value of treatments is a bit better because drugs sold to treat infections—and counted as an economic output—are expensive. Even so, there is a wide, uncounted gap between the value to the patient and the cost of the drugs. And, as drugs became more expensive and effective, we cannot tell whether the increase in their prices—which are not set in a classically competitive market—exceeds their greater value to patients, and thus whether there is any increase in real economic output. Yet who would argue that advances against the disease are anything but significantly, viscerally, real, or that there will be an actual decrease in useful economic output—and a fall in productivity—when the AIDS drugs go off patent and their prices fall?
- The thesis of pluralistic, multiplayer innovation is not novel and should not be controversial. Richard Nelson has been emphasizing its importance, and the policy implications thereof, for decades. (See, for instance, Merges and Nelson, 1994). Nathan Rosenberg’s (1976 and 1982) arguments about accretive incremental advances implicitly attribute a pivotal role to pluralistic innovation. Unfortunately, Schumpeter’s stirring rhetoric about great innovators who “found kingdoms” does not. And the “sharp disjunction” Schumpeter posits between “the high level of leadership and creativity involved in the first introduction of a new technique as compared to the mere imitative activity of subsequent adopters” has helped obscure the value of multiplayer innovation.
- This section draws heavily on a case history written by Katherine Stebbins McCaffrey for a project on which Srikant Datar and I are collaborating.
- The historical material in this section relies mainly on Nuland (1988) and Nuland (2008). I have taken the liberty, however, of giving their stories and analyses a multiplayer framing.
- The restriction, which advanced at different rates in different countries, came relatively late in the U.S. According to Pickstone (1996, p. 305), U.S. medical practitioners in the 1840s did not have to be licensed. They could train, if they chose to, in a “variety of competing medical schools, attached to different brands,” including herbal medicine and homeopathy. Reliance on “credentialed” physicians also varied, with the wealthy and powerful more likely to require formally trained physicians.
- Many discoveries in the nonmedical sciences, such as geology and astronomy, also were made in the seventeenth, eighteenth, and nineteenth centuries by gentlemen of leisure. Their accomplishments would earn them membership in bodies such as the Royal Society, even though they lacked any university credential.
- The burning of witches also may have unwittingly suppressed the development and use of folk remedies and un-credentialed medical practice (as Elisabeth Barsk pointed out to me).
- Kelman did, however, receive a $250,000 grant to develop his breakthrough phacoemulsification technology (introduced in 1967) and secured a clinical appointment at New York University.
- Carpenter (2010) provides the definitive account of the history of the FDA, its influence on medical innovation, and on the structure of industries it regulates.
- As with the 1938 legislation, the 1962 amendment is widely thought to have been catalyzed by an outcry about drug safety, namely the birth defects induced by thalidomide, rather than by a shortfall in efficacy. The only nexus between the expanded role of the FDA in premarket trials and the thalidomide injuries was that they occurred during safety trials whose design the FDA then lacked the authority to regulate. The FDA’s (1998) own account of the 1962 legislation makes no mention of the thalidomide tragedy. Rather, it asserts that “the original impetus for the effectiveness requirement was Congress's growing concern about the misleading and unsupported claims being made by pharmaceutical companies about their drug products coupled with high drug prices.”
- Until 2004, a company seeking FDA approval for a therapy based on an herbal extract, for instance, had to identify the single active ingredient that is doing the job—and prove its safety and efficacy. In June 2004, the FDA did, however, issue guidelines that would make it easier to secure approval for botanical drugs that have not been “purified” to a single molecule.
- Some people I interviewed for my previous research, whose companies developed medical devices, said they first sought approval in Europe, where regulators were more tolerant of the need for ongoing adjustments.
- According to Nelson et al. (2011), however, “the cumulative result of a series of more incremental advances in medical knowledge often is a major improvement in ways of treating patients.” This raises the question of how the incremental advances cited by Nelson et al. (in treating coronary disease, cataracts, and diabetes, and in angioplasty) overcame the regulatory barriers. Did they, for instance, pertain to improvements that are not regulated by the FDA? Did they involve use of “clearance” rather than “approval” rules for medical devices?
- To cite a personal example, my physician told me to take Vitamin D supplements to bring my Vitamin D levels to normal. But because there is no way to know how much additional Vitamin D would do the job, the obvious solution would be to experiment with different amounts and monitor the effects. Unfortunately, there are no home tests that would permit such monitoring, even though Vitamin D deficiency is a common condition. Similarly, I have been unable to find cheap and reliable tests to monitor, and thus help control, cholesterol, blood sugar, and sleep apnea. There is no technical reason why, in this day and age, there should not be such tests, much as there are for pregnancy and HIV infections.
- In 2003, for instance, I had a bout of nausea and giddiness. An emergency room doctor performed an “Epley maneuver” on me, which immediately resolved the problem that apparently had been caused by benign paroxysmal positional vertigo. Many years later, when the vertigo reappeared, I looked up a video of the maneuver on YouTube, avoiding another visit to a doctor. An online search also allowed me to figure out why I had periodically suffered from cramps and how to solve the problem—something diligent and competent physicians had been unable to do for more than a decade.
- Maurice Mason, an airline industry veteran (and son and brother of physicians) observes, “Safety had always been used in aviation as the excuse not to deregulate, but deregulation broke the regulatory capture dynamic and, in my view, has significantly helped improve overall safety levels.”
- Interview transcript, “PBS NewsHour” (downloaded July 16, 2015)
- Similarly, the state’s interest in controlling plagues or treating wounded soldiers likely reduced the resources available for widespread noninfectious diseases.
- Congress, which appropriated $28 million for the NIH in 1949, increased that amount more than 500-fold in the next 50 years, to $15.6 billion in 1999. Appropriations doubled again to $30.5 billion in 2009, but have leveled off thereafter.
- In 2003, NIH grants paid for twenty-eight percent of the $94.3 billion spent on biomedical research in the U.S. (Osterweil 2005).
- The material in this box and the quotes are from CRI’s “About Us,” http://www.cancerresearch.org (downloaded July 26, 2015)
- The high costs of satisfying regulatory requirements also arguably encourage developers to piggyback off NIH-funded research. This incentive also tends to narrow the scope of development to areas favored by the NIH, and also favors individuals (particularly NIH grantees) and organizations plugged into NIH research.
- Vincenti’s 1990 book, What Engineers Know and How They Know It, provides an excellent analysis and persuasive examples of the differences.
- Intel’s description of its tick-tock process.
- “What Does It Mean When FDA ‘Clears’ or ‘Approves’ a Medical Device?”, (downloaded July 23, 2015)
- Summarized from an overview, http://commonfund.nih.gov (downloaded July 15, 2015)
- FDA press release (February 19, 2015), http://www.fda.gov (downloaded July 16, 2015)
- See http://www.fda.gov (downloaded July 16, 2015)
- See https://www.faa.gov (downloaded July 17, 2015)