Improving the Practice of Assessing Research Portfolios

With the support of public and private organizations, extensive and often costly research is conducted on a wide range of topics, including health, education, and production practices. Such research may be basic science or more applied in nature.

Determining whether research is producing tangible benefits is a complex problem. There can be a long lag between the completion of research and the appearance of its impact, and it is often difficult to attribute observed results to the underlying research. Evaluating a study can be even more difficult when it is part of a portfolio, i.e., a collection of projects or programs implemented over a period of years.

Evaluating a research portfolio must overcome the problems associated with evaluating individual projects while also accounting for how the projects in the portfolio are interconnected and what synergies arise among them.

The former Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury and their successor organizations, now part of the U.S. Department of Defense, asked the RAND Corporation to help them understand how other organizations evaluate the performance of research portfolios.

To answer this question, the RAND research team examined the research evaluation practices of 34 major federal and private agencies and organizations known for developing, conducting, and evaluating different types of research (basic, applied, and translational).

The work included a literature review, a document review, and interviews with representatives of selected research organizations. From the information gathered, the team created a taxonomy of evaluation metrics organized by the steps (input, process, output, outcome, and impact) of a general logic model [1].
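As a concrete illustration, the following is a minimal, hypothetical sketch of such a taxonomy. The five logic-model steps come from the brief; the example metrics and all identifiers are illustrative assumptions, not the team's actual instrument.

```python
# Illustrative sketch only: organize evaluation metrics by logic-model step.
# The five steps are from the brief; the example metrics are assumptions.
from dataclasses import dataclass

LOGIC_MODEL_STEPS = ("input", "process", "output", "outcome", "impact")

@dataclass(frozen=True)
class Metric:
    name: str
    step: str  # must be one of LOGIC_MODEL_STEPS

    def __post_init__(self):
        if self.step not in LOGIC_MODEL_STEPS:
            raise ValueError(f"unknown logic-model step: {self.step}")

# Hypothetical metrics, one per step for brevity
metrics = [
    Metric("research funding awarded", "input"),
    Metric("milestones met on schedule", "process"),
    Metric("peer-reviewed publications", "output"),
    Metric("changes to clinical practice guidelines", "outcome"),
    Metric("population-level health improvement", "impact"),
]

# Group the metrics by step to view the taxonomy
taxonomy = {s: [m.name for m in metrics if m.step == s] for s in LOGIC_MODEL_STEPS}
for step, names in taxonomy.items():
    print(f"{step}: {names}")
```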

The team found that research funders typically use three types of portfolio-level metrics: 1) aggregate measures rolled up from individual project data, 2) narrative assessments, and 3) global indicators (e.g., population-level measures) that may be only loosely tied to project-level data. Each type of portfolio-level metric has its own advantages and disadvantages (see the table below).

Each type of portfolio-level metric has advantages and disadvantages

  • Aggregation of project-level data
    Advantage: Easy to aggregate, report, and compare.
    Disadvantage: Lacks nuance and has limited ability to capture value added at the portfolio level.
  • Narrative assessment
    Advantage: Well suited to addressing attribution problems.
    Disadvantage: Can be costly to produce and difficult to compare.
  • Global indicators (e.g., population-level measures)
    Advantage: Use widely available, comparable, and easily understood data.
    Disadvantage: May be only weakly linked to the underlying research.
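
As a minimal illustration of the first metric type, the sketch below rolls hypothetical project-level records up into portfolio-level aggregates. The project names, field names, and figures are invented for illustration and are not taken from the brief.

```python
# Illustrative sketch only: roll hypothetical project-level data up into
# portfolio-level aggregates (the first metric type in the table above).
projects = [
    {"name": "Project A", "funding": 1_200_000, "publications": 4, "completed": True},
    {"name": "Project B", "funding": 800_000, "publications": 1, "completed": False},
    {"name": "Project C", "funding": 2_500_000, "publications": 9, "completed": True},
]

portfolio_summary = {
    "total_funding": sum(p["funding"] for p in projects),
    "total_publications": sum(p["publications"] for p in projects),
    "completion_rate": sum(p["completed"] for p in projects) / len(projects),
}
print(portfolio_summary)
# Such roll-ups are easy to report and compare, but, as the table notes,
# they lack nuance and miss portfolio-level value added such as synergies.
```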

Because research funders have different interests and each type of metric has advantages and disadvantages, there is no ideal approach or single best set of metrics. Rather, a mixed approach that accounts for the context and research design is likely to strike the best balance. Nevertheless, drawing on the taxonomy of metrics and other elements of the literature review and interviews, the research team developed a set of high-level, broadly applicable recommendations (listed under Recommendations below).

These findings may be useful to other U.S. Department of Defense (DoD) units involved in supporting service member health, well-being, and readiness that take a similarly applied and translational approach to their research. The taxonomy of metrics is also relevant to any funder of research portfolios. The conclusions and recommendations are intended to help anyone seeking an effective framework for evaluating research portfolio performance, tailored to the specific needs of each institution or organization.

Conclusions

  • While considerable data are typically collected on upstream measures (inputs and processes), less is collected on downstream measures (especially outcomes and impacts), where measurement resources could often be put to better use.
  • Key stakeholders expressed concerns about reporting burden and provided positive examples of the use of centralized information systems.
  • The nonmilitary agencies examined appear to have done more to measure research outcomes and impacts than the military agencies examined, and the military agencies have the potential to measure outcomes and impacts more systematically.
  • Implementing a new impact-measurement framework at full scale may not be feasible all at once; it may be useful to pilot options first.
  • Amid pressures to demonstrate research performance and meet measurement requirements, organizations can make deliberate choices about which metrics to use. This matters because metrics involve trade-offs, for example in data availability, reliance on expert judgment, and specificity.


Recommendations

  1. Review the data currently collected on upstream measures (inputs and processes) to determine whether the current level of data collection is still useful and whether the benefits of the data exceed the costs of collecting them.
  2. Identify opportunities to harmonize reporting requirements and activities.
  3. Integrate appropriate performance and impact measures into monitoring and evaluation processes.
  4. Consider incremental monitoring and measurement of outcomes and impacts.
  5. Establish a balanced set of indicators and determine how baseline data will be collected.
