Thank you: Dave Baiocchi, Dmitry Khodyakov, and Jon Cairns. Also, thank you to all of the panelists for participating.
Funding acknowledgment: Science of Science and Innovation Policy (SciSIP) Program, National Science Foundation, U.S.A.
Identifying Metrics to Fully Assess the Impacts of
Federal R&D Investments
Daniel Basco
RAND Corporation
Santa Monica, California, USA
Objective
To identify a consensus among U.S. Federal Government officials and U.S. researchers on the most useful metrics for measuring the impacts of Federal research and development (R&D) investments.
Rationale
In the United States, there is little agreement on the specific metrics needed to comprehensively evaluate the impacts of various Federal R&D programs. Without agreement, agencies cannot:
1. Develop data collection instruments to fully capture the results of R&D programs
2. Compare the effects and impacts of R&D programs within and across agencies and scientific areas
3. Use evaluations to determine how to optimally structure future R&D investments to encourage the greatest returns.
While there is general agreement that R&D influences academia, government, the economy, and society, no comprehensive set of metrics had been identified or agreed upon. This project aimed to identify the full range of impacts and develop a consensus on a core set of metrics.
Approach
This project utilized the Delphi method, an iterative consensus-building approach to decision making, to explore whether agreement could be reached on a core set of impact metrics. Using an online platform, a panel of Federal Government policymakers and a separate panel of science policy researchers participated in a three-round modified Delphi process:
In round one, participants rated the usefulness of 58 metrics and provided rationales for their decisions. Proposed metrics addressed impacts on academia, government, the economy, and society.
The modified Delphi process comprised three steps:
1. Share views and knowledge: rate metrics on potential usefulness in future R&D program evaluations.
2. Feedback and discussion: engage with other experts, compare answers, and share perspectives through online discussion.
3. Reassessment of views: revise original responses based on group feedback and discussion.
Findings
Round three results were analyzed to identify useful metrics. To be characterized as "useful," a metric had to meet two criteria:
1. The group's median rating was in the top tertile (between 7 and 9).
2. No disagreement was identified, where disagreement was defined as 1/3 or more of ratings in the bottom tertile (1-3) AND 1/3 or more of ratings in the top tertile (7-9).
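The two criteria above amount to a simple computation over one metric's ratings. A minimal Python sketch (the function name and structure are illustrative, not from the study):

```python
from statistics import median


def is_useful(ratings):
    """Classify one metric's panel ratings as "useful" per the two criteria.

    ratings: list of integer ratings on the 1-9 scale.
    """
    n = len(ratings)
    bottom = sum(1 for r in ratings if 1 <= r <= 3)  # bottom tertile (1-3)
    top = sum(1 for r in ratings if 7 <= r <= 9)     # top tertile (7-9)

    # Criterion 1: the group's median rating falls in the top tertile.
    median_in_top = 7 <= median(ratings) <= 9

    # Criterion 2 (negated): "disagreement" means at least 1/3 of ratings
    # land in the bottom tertile AND at least 1/3 in the top tertile.
    disagreement = bottom >= n / 3 and top >= n / 3

    return median_in_top and not disagreement
```

For example, a panel rating of [7, 8, 8, 9, 7, 8] passes both criteria, while [1, 1, 1, 8, 8, 9, 9, 9, 9] has a median of 8 but is polarized (3 of 9 ratings in the bottom tertile, 6 in the top), so it is excluded as disagreement.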
About the panels
Two panels of participants were recruited for this study.
Government Panel
• 26 panelists began round one
• 19 panelists completed more than 50% of round three questions
• Panelists were U.S. Government employees from the agencies listed below
Researcher Panel
• 30 panelists began round one
• 17 panelists completed more than 50% of round three questions
• Panelists were U.S.-based researchers from the institutions listed below
Panelist Ratings
Each panelist rated a standard set of impact metrics that were identified prior to the Delphi panels through a literature review and discussions with subject matter experts. Panelists rated 58 standardized metrics across 4 categories:
• 18 metrics on Academia
• 11 metrics on Government
• 14 metrics on the Economy
• 15 metrics on Society
Panelists could also propose new metrics to add to the list during round one or round two. For each metric, panelists were asked:
"How useful would information on this metric be in evaluating the impact of a broad range of R&D programs?"
Panelists answered each question on a 1-9 rating scale from ‘Not at all useful’ to ‘Extremely useful’.
Endorsed by Both Panels (16 metrics)
• Usage of new methods developed from R&D
• Usage of software developed from R&D
• Improvements in efficiency or effectiveness of public services or programs
• Citations in new local, state, or Federal policies or regulations
• New firms developed from R&D
• New products developed
• Improved existing products
• External (non-Federal) funding secured for additional R&D
• Improved business processes
• Government funding secured for additional R&D
• Improvements in environmental conditions or sustainability
• Morbidities prevented from new information, products, or technologies developed
• Lives saved from new information, products, or technologies developed
• Improvements in quality of life from new information, products, or technologies developed
• Changes in public behavior
• New clinical or safety practices developed
Endorsed by Gov't Officials Only (7 metrics)
• Usage of new databases developed from R&D*
• Citations in new local, state, or Federal legislation
• Operational use of products of R&D by government organizations*
• Number of private sector jobs created from results of R&D*
• New licensing agreements
• Post-graduate students entering STEM professions
• Graduate students entering STEM professions
Endorsed by Researchers Only (10 metrics)
• New fields or sub-fields of research developed
• R&D studies that were successfully replicated
• Influencing new local, state, or Federal legislation
• Influencing new local, state, or Federal policies or regulations
• Influencing local, state, or Federal programs
• Citations in state or Federal judicial cases
• Cost savings*
• General public knowledge of research increased
• New human capabilities enabled*
• Patents filed or granted
In total: 16 metrics endorsed by both panels + 7 endorsed by Government officials only + 10 endorsed by researchers only = 33 overall useful metrics across the panels, spanning the Academia, Government, Economy, and Society categories.
Government Panel agencies: NIST, DOD, DOE, Congress, OSTP, NSF, NIH, NOAA, Ed, EPA, USGS, USDA, DOT
Researcher Panel institutions: private universities, public universities, scientific associations, national laboratories, think tanks, research institutions, foundations
* Added by panelists in round two and therefore not rated by the other panel.
In round two, participants viewed the ratings and rationales of the entire panel and used discussion boards to anonymously debate differences in perspectives and ratings. Participants also recommended new metrics.
In round three, participants re-rated metrics, including the new metrics proposed in round two, to ultimately identify a core set of metrics that the entire panel agreed would be useful to U.S. policymakers.