E.A. Fitzgerald, Ph.D.
Merton C. Flemings–SMA Professor of Materials Engineering, MIT
When we think about how to get more benefit from scientific research at our
universities, we usually focus on the back end of the research pipeline: on how
to move new technologies "out of the labs" and into the marketplace. We have
created an entire infrastructure for this purpose, from technology transfer
offices to startup incubators, venture funds, and more. Certainly these efforts
are useful, but the yield is still often lower than expected. Perhaps it is time
to ask what is seldom asked:
How do we know that university scientists are working on the best
possible research projects to begin with? The ones with the greatest chances of
bearing the most fruit?
In fact, we do not have good mechanisms for seeing that promising research is
pursued while blind alleys are avoided. Mechanisms that once existed have
atrophied, as the structure of research in this country has changed. And though
we haven't regressed to a stage where mad scientists are wasting their time
trying to transmute lead to gold, our research ecosystem needs better methods of
reality testing and reality attunement.
The Great Shift
University research has grown tremendously since World War II as federal
funding for it has ballooned to about $30 billion per year. This is far less
than private-sector companies spend on research and development (R&D). But
university research has come to have great strategic
importance, due to the shifting nature of corporate R&D.
Here's why: Big firms once did a lot of basic research, the kind that might
not show results for ten to fifteen years, if ever, but could produce
fundamental advances. In the first few postwar decades, AT&T's Bell
Laboratories developed (among other things) the transistor, the first modern
solar cell, and the UNIX operating system. Companies at that time could afford
to invest in basic research because firms like IBM, Xerox, and Kodak held
near-monopolies in their industries for years.
Then came a more competitive economy, with new foreign and domestic entrants.
Pressure on profits drove firms to shift their focus to applied R&D, with
shorter time horizons: three to five years, or even two to three. More and more,
basic research migrated to the universities, where the federal funds and the
labs were growing. What did not survive the migration were the reality-sensing
mechanisms.
The Non-Reality Loop
Since the mid-1980s, I have had a front-line view of this shift, first as a
graduate student in university research, then working for six years at Bell
Labs, and since 1994 on the faculty at MIT, where I have kept in touch with
industry through startups and related activity. Following are some things I
have observed.
Under the old system, basic research at companies was grounded in
practicality. The R&D lab was embedded in an organization embedded in the
market. Within the lab there would be a subgroup of people doing only basic
research and thinking ten to fifteen years into the future, but they were
surrounded by applied research people thinking and working shorter-term. From
this, scientists could gain a good sense of what it took to literally "apply" a
lab-grade technology—and of what kinds of factors signaled whether a technology
was likely to be feasible and marketable.
Industry researchers also transmitted signals to the rest of the research
community. They sat on the boards of professional societies and attended
conferences, along with university scientists. Their very presence served as a
check on hazy thinking: a young professor who overstated the potential of a new
research technology might find a senior scientist from an industry lab standing
up to tell everyone why it would never fly.
Federal funding seemed to respond to the signals, too. When I was at Bell
Labs, I noticed that after the Labs made a big discovery, the government would
follow with BAAs (broad agency announcements) for funding in that area.
Government program officers used these discoveries as indicators of emerging
fields that were likely to grow, and would thus require more research and
skilled students from the campuses.
Today, with the decline of basic research in industry, this has all changed.
It is mostly academic scientists who direct the research societies and attend
scientific conferences. And funding for basic science is often driven by a sort
of university-government feedback loop. Professors who have a new line of
research will create momentum for it via the Internet and other sales channels,
such as conferences. The government uses the resulting excitement as an
indicator, releasing BAAs. As the funds begin to flow, the work expands and
momentum builds further.
The danger is that this self-reinforcing loop can become a non-reality loop.
More than once, in my own contacts with industry, I have mentioned the latest
university research that's supposed to change the world and gotten little more
than a laugh and a head-shake. But the knowledgeable skeptics are now outside
the loop, and research that may never fly can spend years trying to.
Unrealistic thinking can also have harmful ripple effects, even when the
research is promising. For instance, an important channel for moving university
technology to market is the venture-funded spinout company, but often, research
that is far from ready is pushed to market prematurely. This happened in my
discipline, materials science, when nanotechnology became a hot new field.
Professors rushed to start companies, persuading venture investors that all
manner of commercial nano-materials were just around the corner. Basic research
that might well pay off in a decade or so was now expected to pay returns in a
couple of years.
Of course, many venture funds got burned. And after repeated burnings, many
funds now invest mainly in more proven, later-stage companies, with the result
that venture capital isn't really "venture" any more. Worthy startups may find
it harder to get funding; a key part of the ecosystem has been weakened.
Solutions and Caveats
What can be done? The single most useful measure—however it can be
implemented—is simply for university scientists to have ongoing, one-to-one
interactions with people in industry. This means contact with people who know
what's involved in making and using things, from cost and competitive factors to
the many practical constraints (and opportunities!) that can arise when turning
ideas into reality.
One caveat: Any attempt to make research more "practical" must not drive
scientists toward shorter-term thinking, aimed only at incremental advances. We
need long-term, visionary thinkers. The trick is to provide these highly
creative people with the signals, and the knowledge, that will enable them to
envision more intelligently.
And there is an underlying need for a change of mindset in the science
community. We've fallen too much under the spell of the limitless possibilities
of science. Our funds and our human capital, though bountiful, are limited.
Every problem we might want to solve, from making a more efficient solar cell to
curing a disease, has many possible research approaches. We can never explore
them all. With more choices than resources, we need to think and act like wise
investors—placing some bets on long shots, but trying to build a balanced
portfolio in which the investment in each line of research is proportional to
the risks and rewards.
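To make the investor analogy concrete, here is a minimal sketch, in Python, of
the arithmetic behind a balanced portfolio. None of it comes from the essay:
the project lines, success probabilities, and payoffs are invented for
illustration, and real funding decisions rest on far more than expected value.

    # Toy sketch: allocate a fixed research budget across hypothetical
    # project lines in proportion to their expected payoff.
    # All names and numbers are invented for illustration only.

    # (project line, estimated probability of success, payoff if successful)
    projects = [
        ("long-shot basic research", 0.05, 100.0),
        ("promising mid-term line",  0.30,  20.0),
        ("incremental improvement",  0.80,   3.0),
    ]

    budget = 30.0  # a notional $30 billion, echoing the figure above

    # Expected value of each line: probability of success times payoff.
    expected = [(name, p * payoff) for name, p, payoff in projects]
    total = sum(ev for _, ev in expected)

    # Fund each line in proportion to its expected value, so long shots
    # still get real money but the portfolio stays balanced overall.
    for name, ev in expected:
        print(f"{name}: ${budget * ev / total:.1f}B")

On these made-up numbers the mid-term line draws the largest share while the
long shot still gets a meaningful stake; change the estimates and the balance
shifts, which is precisely the sensitivity a portfolio view forces us to
confront.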
At present, scientists often behave more like interest groups, and government
acts more as a "supporter" of science than as a demanding customer or investor. We
must do better. Despite my criticisms, the new research environment has great
virtues. Today's open innovation model—in which the whole chain from research to
application doesn't have to take place within a firm—is indeed highly open to
ideas from many players, at all stages. If we can keep the research well
focused, this new system will be powerful. If not, the torch of innovation may
pass to others.