6/15/2011 6:30:45 AM
--Updated Post--June 15, 2011
I wanted to repost my comments from a June 2010 National Academies conference on how to increase innovation in federal statistics, as the National Academies' final report from that event is now available. They picked up on some of my comments on user innovation, as did some of the agencies in the room. I am pleased to report that some of the concepts I raised at the event have been implemented in different agencies, most concretely the idea of putting notices of opportunities to comment closer to where users are getting their data (the Business R&D and Innovation Survey from NSF and Census is the latest example).
These are small but good changes. I still worry that while the concreteness of that recommendation came through, some of its spirit might not have. I'm going to be mulling over some of these concepts for an upcoming presentation I am contributing to, coincidentally at the National Academies, so I will be thinking more on the subject.
As for the final report, I think there are some good things in it, but I must admit I was very disappointed in the event. It was a discussion of innovation led almost entirely by people who are already heavyweights in the system, and that made the range of the discussion very narrow. I didn't see much of an outsider perspective. The statistical system's real concern with innovation should be that it has so few interested parties in the room. At that event a few of us from outside the system appeared as speakers, but that was about it; the room was packed with agency representatives. If the meeting had been truly relevant, every major industry association and business group would have been paying attention, because it would have meant the system was trying to find ways to stay relevant. As it was, the meeting was a discussion of how we can get better at things we probably should have been better at ten years ago, not how we can build an innovative statistical system that measures the pulse of the U.S. and global economies and sets the tone for what innovation means in statistical systems around the world.
--Original Post--June 28, 2010
Tuesday I will be a discussant at the National Academies' Committee on National Statistics workshop in Washington, DC, looking at how to facilitate innovation in federal statistics. It's a topic I am passionate about, and unfortunately it's an area of interest where we often feel like soloists rather than members of a choir. As we raise our calls for improvement and for the need to better measure business activity, we need more charitable foundations, academics, businesses, and local economic development officials at the table. If it weren't for the tireless work of groups like the Association of Public Data Users (APDU), I fear that the meager levels of innovation which currently exist in the system would be next to nonexistent.
Desire for Internal Innovation.
In our experience with entrepreneurship and innovation measurement, I've seen a lot of personnel within specific agencies who want to see more innovation in their products. Sadly, too often these personnel have no forums or opportunities to explore possible improvements and few incentives that would encourage innovation. More than any funds we've spent, our progress in spurring improvements in federal data on entrepreneurship and innovation has come through convening symposiums and workshops that allow agencies to present their plans for improvement, debate with academic and agency experts, and see what other exciting developments are occurring in academic and agency environments. The returns to this dialogue only seem to increase over time, and the approach seems well suited to topic-based statistical discussions.
Recommendation: Create more opportunities for agency representatives to present and receive feedback on their plans for the future before plans are finalized.
Users Can Drive Innovation.
When you provide basic information, as the statistical agencies often do, one of the saddest parts of the process is that much of the information goes on to uses of which you never become aware. As a major producer of data at Kauffman, I know how excited I get to see our data used in an analysis or something new, but too often the public doesn't circle back with this material. We've seen in our own experience that if you create events, online platforms, and other vehicles to show your support of users, it can result in a positive feedback loop and improved data quality. Through the Kauffman Firm Survey, we've undertaken an annual open call for suggested improvements in the data we collect, which over the years has led to major improvements in the usability of the data for research and analysis. Had we not tried to interact with the users of our data, we would have missed significant improvements in the data collected on the finances of new firms, improvements which are going to greatly strengthen our analysis of the current recession.
Sadly, most agencies know few of the users of their data, and even where they know them, they are often hesitant to interact with them and hear their needs and feedback.
Recommendation: Create opportunities for the statistical agencies to create communities of high-end users for specific products in an open, online environment.
Interaction, Interaction, Interaction.
I am not sure how many ways I can say it, but I'll say it one more time: interaction. What I find most disappointing about the current statistical system is that individuals within agencies often want more interaction with each other and with outside experts on their topical collections, but for the most part this does not happen. Money to travel to conferences is very limited, and the interagency groups and agencies overseeing some of this process only have opportunities for interaction at very high levels, not in more granular areas of interest and need. At Kauffman we try to bring academics into the conversation, but we know from trying that it's very tough. Key constituencies like state and sub-state representatives don't have formal bodies through which the agencies can receive feedback on possible data publications, so the agencies don't hear what these groups actually want.
Recommendation: The Office of Management and Budget (OMB) should create a council of state data representatives to provide a sounding board for new state-level data development, to proactively develop a list of priorities in state-level data development, and to allocate additional funds to agencies for priority projects.
Push Ourselves First.
As someone who takes the time to read most notices about data collection in the Federal Register, I applaud the few efforts that actually make meaningful requests for feedback and remain open to the feedback submitted. In reality these processes are largely a formality, hidden away so that as few people as possible find them. Even comments submitted according to the outlined procedures can be dismissed with “it’s too late in the process.” Let’s stop this self-perpetuating process of mediocrity. If we are going to seek comment, seek comment.
Recommendation: Look for feedback and post prospective notices of data to be collected alongside the actual web pages where data is disseminated and start to create opt-in user groups.
I’ve offered a lot of criticism here, but I recognize that we at Kauffman still have a lot to learn in this regard. One of the things we are beta testing is the concept of a Data Wiki for the Kauffman Firm Survey. It wasn’t easy for us to do, and it opens us up to user feedback, positive and negative, in a much greater way than we’ve done before. But this is also an area where we believe what we learn can help inform the statistical agencies.
In my opinion, until OMB and the agencies become more genuine in their efforts to draw data users into the conversation, we have little room to complain about lack of funding. These are things within their power to change. We cannot afford to idly hope for improvements in national statistics. I hope others out there agree with me and will add their voices to the discussion.
6/29/2010 8:00:00 AM
The National Science Foundation (NSF) has an exciting new survey in the works on microbusiness innovation (which was featured at our recent Kauffman Interagency Forum on Entrepreneurship and Innovation Data). They are seeking interested contributors to the development of this work.
The Science Resources Statistics (SRS) Division of the National Science Foundation is planning a survey of microbusinesses (fewer than five employees). The microbusiness survey will collect data on R&D, innovation and related activities (such as sales of significantly improved goods and services; operating agreements and licensing activities; technology transfer; patents and intellectual property; and sources of technical knowledge), and measures of entrepreneurial effectiveness.
As we move forward in designing the survey we will be conducting workshops to help (1) gain a better perspective on data user needs and priorities of needs among users and (2) understand how microbusiness data will be used. Potential users include, but are not limited to, government officials at the federal, state, and local levels; international users; businesses and trade associations; and academic researchers. In addition, there are likely to be other categories of users that have not been specifically identified, as this is a new area of study.
If you are interested in contributing to the microbusiness discussion please forward your name to Audrey Kindlon at email@example.com.
6/24/2010 3:00:00 PM
Few would dispute the need to invest in research and development activities, but measuring the impact of such investments can in most cases feel rather unsatisfying. The Chronicle of Higher Education, in its June 10 issue, highlighted one new project piloted as a result of last year's federal economic stimulus program, with leadership from the National Science Foundation and other funding agencies. Star Metrics is a program that creates a base of knowledge for future impact studies by allowing university HR systems to directly tag individuals being paid under federally funded research projects. These underlying records are then aggregated automatically into a database of the names of scientists (and emerging scientists) who work on specific research projects. With this backbone, it should become much easier (and mostly automated) to match against citation databases (publication, patent, and other) and other output measures (such as companies formed and disclosures of inventions) in future years.
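To make the idea concrete, here is a minimal sketch of that kind of linkage: payroll records tag people to federal awards, and a shared person identifier lets output records (here, publications) be rolled up to the award level. All identifiers, field names, and records below are hypothetical illustrations, not the actual Star Metrics schema.

```python
from collections import defaultdict

# Hypothetical university HR records: who was paid under which federal award
payroll = [
    {"person_id": "P1", "name": "A. Researcher", "award": "NSF-0001"},
    {"person_id": "P2", "name": "B. Postdoc",    "award": "NSF-0001"},
    {"person_id": "P1", "name": "A. Researcher", "award": "NIH-0042"},
]

# Hypothetical external output database keyed by the same person identifier
publications = [
    {"person_id": "P1", "title": "Paper on X"},
    {"person_id": "P2", "title": "Paper on Y"},
    {"person_id": "P1", "title": "Paper on Z"},
]

# Step 1: aggregate payroll records into the set of people per award
people_by_award = defaultdict(set)
for rec in payroll:
    people_by_award[rec["award"]].add(rec["person_id"])

# Step 2: index outputs by person
pubs_by_person = defaultdict(list)
for pub in publications:
    pubs_by_person[pub["person_id"]].append(pub["title"])

# Step 3: attribute outputs to awards via the people paid on them
pubs_by_award = {
    award: sorted({t for p in people for t in pubs_by_person[p]})
    for award, people in people_by_award.items()
}

print(pubs_by_award["NSF-0001"])  # all papers by people paid on that award
```

The appeal of the approach is visible even in this toy: once the person-to-award tagging exists in the HR system, attribution to awards is pure automated joining, with no extra reporting burden on the researchers themselves.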
This is a direction that makes a lot of sense to me, as it appears less burdensome on universities than many possible alternatives while potentially giving us much more detail on the people working on specific projects. Some of these people will be postdoc researchers, a nebulous part of the scientific human capital pipeline that is all too often missed in measurements of the scientific workforce. We are funding some work in this area through the STARS Database being developed by Lynne Zucker and Michael Darby, but it is not yet available to researchers.
June 9, 2010
Tracking the Value (at Least in Jobs) of Federal Research Spending
By Paul Basken
Chronicle of Higher Education
More than five years ago, the then-White House science adviser, John H. Marburger III, asked researchers a seemingly simple question: Given the billions of tax dollars they get each year, why don't they have good data on the value of what they produce?
That question may finally be getting answered.
Sparked by job-tracking requirements in the $787-billion economic-stimulus bill approved last year by Congress, the government's major science-financing agencies have been working with universities to devise a way to bring scientific precision to the explanation of how their expenditures help the national economy.
The organizers of the effort, known as Star Metrics, reached a milestone this month by declaring the success of what they consider a critical element—their ability to create a low-cost system for universities to easily compile and assemble their records of federal grants, including the faculty members and students employed by them, in a way that the information can be automatically fed into the overall nationwide database.
Their method, tested at seven universities, took the institutions only about 12 to 15 hours of staff time to set up, and virtually no time to run after that, said Julia I. Lane, a program director at the National Science Foundation.
Users agree. "It was literally a nonissue in terms of the administrative burden," said Susan Wyatt Sedwick, associate vice president for research and director of the Office of Sponsored Projects at the University of Texas at Austin, one of the participating institutions.
The idea, eventually, is to create a comprehensive system that relies largely on existing computerized databases—such as science-journal publications, patent registrations, venture-capital expenditures, and employment histories—to give policy makers a precise idea of what they are getting for their more than $31-billion of annual federal spending on university-based research.
"There is huge pressure" from the White House and Congress, Ms. Lane said, "to show the value of investments."
Benefits and Concerns for Institutions
Universities are largely cheering the project. For one thing, Ms. Sedwick said, the administrative burden alone on the university might be reduced, because current federal data requirements translate into more than two hours of work for each of the 160 or more grants that Austin received in the past year.
And, said Tobin L. Smith, associate vice president for federal relations at the Association of American Universities, institutions surely will be helped by the ability to tell taxpayers with greater precision the benefits of their expenditures. "It's all in all a pretty good thing," Mr. Smith said.
The organizers of Star Metrics are emphasizing to universities the voluntary and cooperative nature of the project, particularly its reliance on low-cost sources of existing data, Ms. Lane said.
The idea grew out of a 2005 speech by Mr. Marburger, then serving as director of the Office of Science and Technology Policy in the administration of President George W. Bush, to the American Association for the Advancement of Science. Mr. Marburger pointed out that systems for measuring the effectiveness of science spending were already nearly three decades old. A National Research Council report, he said, called them "often ill-suited for the purposes to which they have been employed."
The effort took a more concrete form after the economic-stimulus measure, containing more than $15-billion in federal money for science research, required recipients to report back to the government on the jobs being created by their projects.
The resulting Star Metrics effort doesn't come without some concern and uncertainties. For one thing, Ms. Lane said, the voluntary nature of the effort leaves unclear its ultimate comprehensiveness. And the final level of detail is also unclear at this stage, she said, with many questions remaining about how smoothly all the anticipated sources of data will be linked together.
Also, some colleges have objected in other contexts to the inclusion of personally identifiable data in nationwide research projects. In the case of Star Metrics, Ms. Lane said, all faculty members and students involved in federally sponsored research are expected to be assigned some form of identification number, to allow tracking of their outside activities such as journal publications, patents, and jobs.
And some had greeted Mr. Marburger's original idea with suspicion, tied to their unhappiness with the Bush administration's level of support for science spending. Mr. Marburger may largely be trying, with the new measures, "to make it impossible to assess the Bush record relative to past spending," Sally T. Hillsman, executive officer of the American Sociological Association, wrote at the time.
Advocates of research spending also recognize the possible political risks of letting the government science budget be judged primarily for its job-producing qualities, even though they largely see that trade-off as positive.
New Incentives for Students
At the Polytechnic Institute of New York University, which was not one of the seven institutions that tested the data-collection system, the initiative raises hope of creating badly needed new incentives for students, said the institution's president, Jerry M. Hultin.
Research students, including prospective new faculty members, have broad ambitions for applying their research expertise toward solving problems in the commercial marketplace, and they too often feel "boxed in" by current measures of academic success that are tied to traditional pathways, such as journal-publication citations, Mr. Hultin said.
"This is an unboxing of young faculty," he said, "and letting them head to some of the places they really want to go."
In fact, one startup enterprise that originated from the institute, he said, could be part of the solution. The company, ChubbyBrain, is compiling a directory of private companies that contains detailed information in such areas as their financing sources, mergers and acquisitions, and customers.
That's exactly the type of easily obtainable information that organizers of Star Metrics may hope to incorporate into their network, Mr. Hultin said. Ms. Lane agrees. "The notion here is to leverage existing data," she said.
The assembled information could eventually form the basis for reports that reviewers, at agencies such as the NSF and the National Institutes of Health, consider when they decide which grant applications to approve, Ms. Lane said.
But that doesn't mean the potential for job creation will be the only or even the primary measure for future grant applications, she said. Although the impetus from the economic-stimulus measure centered on job creation, Star Metrics is being designed to include the effects of federal science spending in three other broad areas, including the generation of basic scientific knowledge and improvements in long-term health and environmental conditions.
A classic example that the developers of Star Metrics keep in mind, Ms. Lane said, is that of Sergey M. Brin, the co-founder of Google Inc. whose research at Stanford University—helped by a graduate fellowship from the National Science Foundation—led to the development, in a rented garage, of the world's most popular Internet search engine.
"One grant," she said, "doesn't typically lead to one particular outcome."
5/24/2010 3:00:00 PM
The Economic Development Administration (EDA) at the U.S. Department of Commerce has issued a call for proposals for the “Mapping Regional Innovation Clusters Project.” I am still reading over the details and thinking about some of its implications, but in seeking proposals in the $1 million/year range for three years of support, this should bring out a large and diverse set of applicants.
The short stated intent of the project:
…EDA, pursuant to its Research and Evaluation program, solicits applications for an economic development research project aimed at developing a replicable method for identifying and mapping regional innovation clusters, providing resources on best practices, and providing recommendations on metrics for the evaluation of regional innovation clusters.
For further details: http://www.grants.gov/search/search.do?mode=VIEW&oppId=54670
My comments here will be fairly simple.
- Missed opportunity. It’s a real shame that significant efforts like this aren’t better coordinated across agencies by groups like the Office of Management and Budget, the Office of Science and Technology Policy, or informal task forces. This effort would be so much better if it were coming on the tail of a one- or two-year research effort across U.S. statistical agencies to develop new and relevant regional innovation statistics from the underlying microdata. Instead, whoever wins will be forced to use many of the same fairly worn sets of indicators. So much could be done in this regard at Census and BLS, at a minimum, but it takes effort, time, and some funding. The U.S. statistical agencies are becoming increasingly aware that they need to produce better regional statistics (BEA is really taking the lead here, but only after some rough years).
- Web visuals aren’t so different. Having just gone through my first project in online data visualization with the Kauffman Index of Entrepreneurial Activity, I can say that online data visualization is not cheap, and visualizations are only as good as the traditional analysis behind them. Unfortunately, I don’t feel that the science behind regional clusters, or what is actually important to measure when looking at regional strengths and weaknesses, is specified well enough to offer a fully coherent base of knowledge for visualization.
The solicitation also specifically identifies a couple of prior EDA-funded projects that the agency wants to be a component of the new project.
5/18/2010 8:00:00 AM
A lot of times I find out about new data sources through working papers or conference presentations. In this case, Ben Hallen at the University of Maryland and Rory McDonald have a working paper on super angel investors that uses a new database, CrunchBase, and Ben seemed enthusiastic about the data, so I thought I’d take a closer look. (Incidentally, keep an eye out for the updated version of the paper; it was really interesting, but for now it is not posted online and interested scholars should contact the authors directly.)
CrunchBase, which advertises itself as the “free tech company database,” is a great concept and one that can only become more powerful as more users see it and use it. It’s essentially technology company data collected via wiki. Here were the overview stats as of 5/14/2010:
Companies - 39,866
People - 54,684
Financial Organizations - 4,705
Service Providers - 2,305
Funding Rounds - 14,944
Acquisitions - 2,996
While many researchers will have concerns about data gathered via a bottom-up process, I suspect the data is actually much more accurate than we would expect. That isn’t to say the data should be taken exactly as shown; even CrunchBase acknowledges the following on its FAQ web page:
You do not know if the data is accurate. As multiple people edit CrunchBase profiles of companies, financial organizations and people, some mistakes might be added. Information might also be out of date. If you notice anything that needs changing you can go ahead and edit the page.
Most large data sets (even government data) have a significant amount of error at the individual firm level, which, if random, washes itself out as the data is aggregated up. The true test of CrunchBase as a research tool will be whether it closes the loop with researchers providing corrections and receiving updates back. In my experience, scholars are great at taking data, complaining about it, and spending tons of time cleaning it, but rarely do they go to the next step of showing data producers where there were errors or things that could be improved. I hope for CrunchBase’s sake that this paradigm begins to change.
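A quick toy simulation (entirely invented numbers, nothing to do with actual CrunchBase records) illustrates why zero-mean random errors at the record level largely cancel out in an aggregate:

```python
import random

random.seed(42)  # deterministic for reproducibility

# True funding amounts ($M) for 10,000 hypothetical firms
true_values = [random.uniform(0.5, 5.0) for _ in range(10_000)]

# Observed values carry zero-mean random reporting error of up to +/-30%
observed = [v * (1 + random.uniform(-0.3, 0.3)) for v in true_values]

true_total = sum(true_values)
obs_total = sum(observed)

# Worst single-record error vs. error in the aggregate
max_rec_error = max(abs(o - t) / t for o, t in zip(observed, true_values)) * 100
agg_error_pct = abs(obs_total - true_total) / true_total * 100

print(f"worst record error: {max_rec_error:.1f}%")
print(f"aggregate error:    {agg_error_pct:.2f}%")
```

Individual records here can be off by close to 30 percent, yet the aggregate total lands within a fraction of a percent of the truth, which is the "washes itself out" intuition. The caveat, of course, is that this only holds when errors really are random; systematic bias (say, firms uniformly overstating funding) would survive aggregation intact.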
In any case, for those looking at technology companies, CrunchBase definitely seems worth further exploration. I hope that those of you who have explored this data further than I have will offer comments on how good or bad CrunchBase is at curating the data to allow for longitudinal analysis.
1/5/2010 9:00:00 AM
I am not at the American Economic Association (AEA) meeting this year, as I recently became a father and won't be traveling for a while, but that doesn't mean there aren't some really exciting sessions and papers being presented on new advances in measuring innovation and entrepreneurship. Ken Jarboe at the Athena Alliance did a great post on some of the papers focusing on intangible assets, so I'll simply defer to Ken on that topic, but there are some other data papers worth a review:
Michael R. Darby (University of California-Los Angeles & NBER)
Lynne G. Zucker (University of California-Los Angeles & NBER)
John E. Jankowski (National Science Foundation)
Lynda Carlson (National Science Foundation)
Peter Gibson (U.S. Census Bureau)
Richard Hough (U.S. Census Bureau)
Ronald Lee (U.S. Census Bureau)
Brandon Shackelford (Twin Ravens Consulting)
Raymond Wolfe (National Science Foundation)
Jonathan Haskel (Imperial College Business School)
Alicia Robb (Beacon Economics)
John Haltiwanger (University of Maryland)
There are a couple of other great sessions on the agenda that don't have papers listed, like one on measuring broadband impact; I am trying to gather more info on those and hope to post more in the coming week. For those at the AEA meeting whom I've missed, I hope it's going great!
12/2/2009 9:00:00 AM
According to a new Intel/Newsweek survey, consumers remain confident in the role that innovation will play in driving future economic growth. While I suspect this is a survey that knew the storyline it wanted to tell before even going into the field, I did find some aspects of it, on perceptions of future technological innovation, to be interesting. Here are a couple of paragraphs from their press release:
However, even as Americans see technological innovation as a key growth driver, they have significant doubts about their country's ability to hold on to global leadership. Despite many nations giving the United States credit for leadership in technology innovation today, only one-third of Americans saw themselves leading over the next 30 years.
Americans are not alone in their belief that they risk losing the mantle of innovation leadership. A large majority of Europeans gave technology innovation a nod for improved quality of life and positive economic impact. However, Europeans are even less optimistic than Americans about their own ability to be innovative long-term. Only 14 percent saw a European country leading on technology innovation in the future, and the rest ceded future leadership to nations such as China, Japan and India. In contrast, China shows strong confidence in its future strength as 54 percent of Chinese people predicted that their country will pioneer the next society-changing technology and overtake the United States in the next 30 years.
We have asked a similar question before (not that that makes me any more confident in its predictive nature). Think back to the perceptions created of Japan in the 1980s. As such, I'd take this with a grain of salt. Media can certainly influence attitudes significantly when it comes to a country's confidence, but I can also recognize how self-fulfilling defeatist attitudes can become. That is part of why we engaged in Global Entrepreneurship Week: to try to positively affect the images of, and opportunities in, entrepreneurship that young people can find. Perhaps next year's theme should try to take on perceptions of our abilities to innovate technologically?
11/24/2009 5:00:00 PM
Update 11/24/2009: The Heritage Foundation has posted proceedings (including video and PPTs) from the recent event I spoke at. It was a really nice event with a diverse crowd, and it was good to see Brookings and Heritage coming together to focus on developing better data. I most enjoyed the morning's speakers, who offered a wide range of comments on the state of current economic statistics and needed improvements.
Original Post 11/6/2009: The Brookings Institution and the Heritage Foundation are hosting an event on "Measuring Innovation and Change During Turbulent Economic Times"
on November 17, 2009, 9:30 a.m. - 4:30 p.m. The topics on the agenda go well beyond innovation measurement into many different aspects of change, so it will be interesting to see how it all comes together. Oh, and I will be serving as a discussant for one of the sessions, so if you want to say hi, you can catch me at Heritage.
10/21/2009 9:45:14 AM
I will be attending a mini-conference on user innovation which the National Science Foundation is putting on next month. It strikes me as something others might be interested in.
The Current Paradigm Shift from Producer Innovation to Open User Innovation
Monday, 16 November 2009
National Science Foundation
4201 Wilson Boulevard, Room 110
Arlington, VA 22230
Ever since Schumpeter (1934) promulgated his theory of economic development, economists and policymakers have assumed the dominant mode of innovation is a “producers’ model.” That is, it has been assumed that most important designs for innovations would originate from producers and be supplied to consumers via goods that were for sale. This long-held view of innovation has, in turn, led to public policies based on a theory of producer incentives.
Recently, however, innovation theory has been going through a paradigm shift: it is increasingly recognized that open and collaborative user innovation dominates the traditional pattern of producer innovation under a wide range of conditions. Research now needs to explore and develop this new path, and related policy changes must be considered and assessed.
During this small, half-day workshop, a first session will compactly review what we currently know about open user innovation. A second session will provide interested meeting participants with a roundtable opportunity to discuss ideas and possible activities for a set of next steps in research and measurement on the user innovation topic.
Session I 1:00 to 2:30 pm
Prof. Eric von Hippel, Sloan School of Management, MIT
Fred Gault, Professorial Fellow, UNU MERIT, and OECD
Prof. Jeroen de Jong, EIM and Rotterdam University, The Netherlands
TOPIC: Open User Innovation
What is it, what do we know about it, why is it driving out producer-centered innovation under many conditions? What are the important measurement and policy issues?
- General story of and evidence for the paradigm shift from closed, producer-centered innovation toward open, user innovation. Economic reasons for these changes.
- Data: Canada and Netherlands surveys on the frequency of user innovation among firms; UK survey of product modification and development by end users/consumers
- Status of measurement today: What we can measure reasonably well now; what are the key statistical indicator and data collection shortcomings?
- What are we likely to gain from better understanding and measurement of the user innovation phenomena? (business/economic opportunities, organization management, public policy, etc.)
Session II 2:45 to 4:30 pm
Session chair: Science Resources Statistics, NSF (to be announced)
TOPIC: Research and Policy Implications of Open User Innovation
To be conducted as a roundtable discussion among interested meeting participants. What are possible targets for the next stage of research on the topic?
- Participant reactions to and comments on Session I presentations
- Group perspective on where user innovation ought to fit in the larger scheme of research on innovation and innovation policy analysis
- Discussion of what a next phase of user innovation research activities might most usefully look like.
- Discussion of next steps and action items.
Workshop Wrap-up and Close by 4:45 pm
(For questions about this conference, contact Mark Boroush, Div. of Science Resources Statistics, National Science Foundation, 703.292.8726, firstname.lastname@example.org)