
Agribusiness Perspectives Papers 1999

Paper 21
ISSN 1442-6951


A Simplified Benefit-Cost Calculator to Foster Research Evaluation and Monitoring

Barry White *
AGEC Consulting,
P.O. Box 916, Indooroopilly, Q 4068.


A Simplified Benefit-Cost Calculator to Foster Research Evaluation and Monitoring

Abstract
Introduction
Decisions Without BCA
Research on Decision Making
The Market Potential of BCA
1. Relative Advantage
2. Complexity
3. Compatibility
4. Perceived Risk
5. Trialability
6. Communication
Opportunities in BCA Practice
A BCA Calculator
Conclusion
References
Tables
Table 1 Sensitivity of the BC Discount Factor to the Annual Pattern of Costs
Appendix
Benefit Cost Calculator
BC Discount factor


ABSTRACT

Increasing use of Benefit-Cost Analysis (BCA) in ex ante research evaluation has not been accompanied by consensus on the increased returns flowing from the evaluation effort. There is likely to be a wide range of views among researchers, agricultural economists and research managers on how evaluation can be made more effective. The paper draws on the author's experience in the evolution of BCA in research evaluation to propose how the current state of the art might be advanced.

A BCA calculator is presented which enables a BC ratio to be simply and accurately determined. Inputs are total costs and their duration, potential benefits, and the adoption lag and rate. The adoption data, for example, are used in a table to look up the appropriate factor for the sum of discounted benefits. The calculator is likely to be seen as a somewhat oblique approach by those now accustomed to using the same inputs in a spreadsheet. However, its more likely role is as an introductory tool for researchers and research managers which is easier to understand, convenient and transparent. Further, the calculator shows that the effort in BCA calculations is trivial compared to that required to determine the input data. The calculator could also add to an evaluation approach which is competitive with, and indeed superior to, more intuitive and less accountable approaches to resource allocation decisions. More widespread use of ex ante BCA, coupled with ongoing monitoring, is seen as the essential priority for research evaluation. Monitoring can convert estimates of key inputs such as adoption patterns from an art to a science.

* This paper draws on the author's experience whilst undertaking a contract on research evaluation for the Grains R&D Corporation, and also as an R&D Coordinator for the Land and Water Resources R&D Corporation. The views are the author's and are not to be construed as indicative of GRDC or LWRRDC views or policy.


INTRODUCTION

This decade has featured an increased effort in agricultural research evaluation and much questioning of how evaluation is best done. Questions range over the level of resources for evaluation, the target (for example, ex post or ex ante), and how evaluation is best used. These questions have been well canvassed in workshop reports (Daniel, 1997; Brennan and Davis, 1996) and in 'Regae News' (the Newsletter of the AARES Research Evaluation Group for Agricultural Economists). The range of planning and evaluation approaches used in priority setting in R&D Corporations has been reviewed by Lack (1996). That ground will not be recovered in this paper. The emphasis is on ex ante evaluation in the context of the challenging task facing research managers (including Committees etc) - selecting projects for funding. The paper will take a market perspective to consider how research evaluation can add value.

The stimulus to adopt a user perspective came from the virtual absence of the 'without' situation from research evaluation debates. This of course parallels experience at the project level, where an investment can only be evaluated against a baseline of likely developments that would have occurred without the project. For example, discussion of the contribution that even a simple Benefit-Cost Analysis (BCA) could make to improving decision-making is more often framed against the contribution a more rigorous BCA might have made. The obvious and logical approach can only be to compare the value of decision-making with a simple BCA to decision-making without one.

The following section of the paper will consider some of the issues in decision making for research funding. The extra contribution ex ante BCA can make to decision making will then be assessed using a general framework for the market value of an innovation. The final section of the paper develops a modified approach based on the assessment of market value.

DECISIONS WITHOUT BCA

In this section the decision process will be reviewed to provide a baseline for the contribution BCA might make. Given that BCA or its components are now more widely used to complement other sources of information, an example will be used which was typical of practice during the 1980s (Wohlers and Vlastuin, 1990). In the then Wool R&D Corporation, a Committee, as part of the annual funding cycle, assessed some 220 proposals with a view to funding the top proposals (about one half) in terms of their potential contribution to the goals of wool R&D. A Committee of 12 developed ratings based on referee reports and other information. Using a variety of statistical approaches, the Wool study showed that the information provided had an impact on decisions. The decisions appeared to be biased against proposals rated stronger on industry benefits (demand driven) compared with proposals strong on science (supply). Further, a discriminant analysis showed that the available information predicted the decision to fund or not in about 80% of cases.

The Wool R&D study does demonstrate that it is possible to study some of the decision process with some rigour. (The study was used to amend the decision process to ensure a better balance between supply and demand aspects.) The next step, to provide a baseline, would be a study of the ultimate success of the funded projects. This is feasible, at least to some extent, after most projects have completed their research phase. An early indicator of project success in terms of outcomes could be compared to the ratings. The critical question in determining the resources required for ex ante evaluation is the extent to which future returns are predictable. A high correlation between ex ante and ex post (high heritability in a sense) would increase the return to improved decision processes.

To illustrate the scope for improved decision processes, the following example is developed based on a typical operational two-stage system. Preliminary proposals are first sought, and then a proportion of those are selected to be developed as full proposals for a final round of assessment. Two of the possible standards this process might be assessed against are (1) an alternative, more intensive decision process on a sample, done as a research trial, or (2) in the medium or long term, but only for successful projects, preliminary ex post outcomes from the research. Against such a standard, if 70% accuracy at each stage of the operational process is assumed, then the final group of funded projects will include about one half of the top projects (0.7 x 0.7 = 0.49, where accuracy is the probability of a top project being selected at each stage). Thus if 20 projects were funded from 100 preliminary proposals, the final 20 would include about 10 of the top 20 projects in the original 100. (This accuracy should be compared with the base rate of 4 out of 20 expected from random selection.)

The returns from improved decision processes will depend on the returns foregone from the top proposals not funded compared to those from the lower-return projects selected. If the return on top projects is about double, then the two-stage process with 70% accuracy is performing at about 75% of optimal. But a random process would achieve 60%, so the process has achieved about 40% of the difference between no skill and perfect skill. Thus there may be considerable scope for an improved decision process. The value of the improvement will be reduced if the standard is a poor predictor of ex post returns. The value will be increased if top projects are substantially superior to the others. For example, some ex post studies appear to exhibit 80/20 behaviour whereby 80% of benefits arise from the top 20% of projects.
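To make the arithmetic explicit, a minimal sketch follows (Python), using the illustrative figures above: 100 preliminary proposals, 20 funded, 70% accuracy at each stage, and top projects returning about double the others. All figures are the paper's illustrative assumptions rather than data.

    # Illustrative sketch of the selection arithmetic above (assumed figures:
    # 100 preliminary proposals, 20 funded, 70% accuracy per stage, top
    # projects returning twice the others).
    n_proposals = 100
    n_funded = 20
    n_top = 20                 # "top" projects among the preliminary proposals
    stage_accuracy = 0.70      # probability a top project survives each stage

    p_selected = stage_accuracy ** 2                    # two-stage process: 0.49
    top_funded = p_selected * n_top                     # about 10 of the top 20
    top_funded_random = n_funded / n_proposals * n_top  # base rate: 4 of 20

    def portfolio_return(n_top_in_portfolio, size=n_funded):
        # top projects return 2 units each, the remainder 1 unit each
        return 2 * n_top_in_portfolio + (size - n_top_in_portfolio)

    optimal = portfolio_return(n_top)              # all top projects funded: 40
    actual = portfolio_return(top_funded)          # about 30, i.e. ~75% of optimal
    random_ = portfolio_return(top_funded_random)  # 24, i.e. 60% of optimal

    skill = (actual - random_) / (optimal - random_)
    print(actual / optimal, random_ / optimal, skill)  # ~0.75, 0.60, ~0.4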

RESEARCH ON DECISION-MAKING

The large field of research comparing intuitive versus analytical decision making provides another perspective on the decision process (Dawes, 1988). What the research shows conclusively is that analytical approaches outperform intuitive approaches in accuracy, though not usually in speed. The factors that contribute to this include numerous biases, notably the widespread overconfidence that experts have in their decisions. Selective feedback amplifies this overconfidence. Not surprisingly then, the findings on the poor performance of intuition have been as much ignored as they are conclusive.

The research on decision-making has obviously been done where outcomes are available and performance can be compared. The research subjects generally used the same information base for intuitive and analytical approaches. The analytical approach uses a simple tool or decision rule to integrate the information. This research demonstrates that the real skill of experts is in knowing the important criteria to be integrated and developing the decision rule. Intuition is prone to a wide range of biases, which can of course be useful shortcuts.

In decisions on funding of research projects, there is a continuum from intuitive to analytic modes. An R&D example of a largely intuitive approach is where individual committee members use available information to arrive at an overall project ranking on a scale (such as 1 - 5). A more analytic approach would be to use a scoring system (provided it is developed by the Committee and is based on criteria related to outcomes).

Performance of experts' intuitive decision-making compared to an analytic approach varies with the situation. For example, weather forecasters and soil assessors do better than stockbrokers and clinical psychologists (Shanteau, 1990). Intuitive approaches appear to perform relatively more poorly where decision outcomes depend more on human behaviour. Performance also appears poorer where expectations are higher.

Kleinmuntz (1990) reviewed 'Why we still use our heads instead of formulas' and concluded with a recommendation to use both. There are many issues which predispose one or the other. Time is obviously a key one. Greater complexity in terms of cues required, their ease of measurement, and multiple objectives all actually favour intuitive approaches. Analytic approaches are clearly favoured when there is an accepted organising principle, for example Bayesian statistics for rare events compared to biases in intuitive notions of probability. A traditional example in the literature is the value of groceries in a shopping trolley. It is preferable to go through the check-out rather than estimate the total. However, intuition might be a better starting point for a rapid assessment of the nutritional value of the contents.

THE MARKET POTENTIAL OF BCA

The varying extent to which R&D Corporations make use of BCA was reviewed by Wilson (1996). At that time, most of at least the major R&D Corporations had introduced, or planned to introduce BCA to some extent. Although no formal follow-up survey has been done, a range of approaches are still likely to be in use, as are a range of views on the value of BCA in project selection. At a minimum, the BCA information provided by the researcher is one further item of information which adds to, or complements, other information in the proposal. Research management then has the task of using available information, coupled with their perspective of the industry, to form an overall assessment of the likely returns from the proposal.

The potential value of BCA in decision-making would need to take into account the specific application, and how BCA was aligned with broader research management requirements. A general assessment can however be made by a comparison between decision-making with and without BCA information. The framework developed by O'Keefe and Mansfield (1995) provides a basis for comparison. The framework is a general approach to assessing the potential and adoption prospects for an innovation from a market perspective. The following summarises the possible advantages of BCA for the six components, with the author's score for each criterion shown in parentheses.

1. RELATIVE ADVANTAGE

 

1.1 Increase performance or reduction of problem (5/10)

BCA provides an evaluation framework directly aligned to the goal of increasing returns to investment in R&D. There is thus a potential advantage in accountability in terms of outcomes rather than of inputs and processes. BCA also has a potentially important role in project monitoring and feedback on R&D management generally. In the absence of direct evidence on improved performance, the evidence comes from general comparisons showing the improvement from using more analytical rather than more intuitive approaches.

1.2 Increase profitability or cost savings (4/10)

Whilst there are clearly increased costs from BCA, the increased returns are problematic but possibly substantial (as in 1.1). Whilst research on selection decisions ex post could provide some indication of existing selection skill, it would be a more complex task to determine the increased skill and profitability possible with BCA.

1.3 Product quality/reliability (6/10)

Some issues that contribute to lower-return projects, such as the difficulty of defining the 'without the project' situation and the market failure justification, are relevant to either approach. However, the BCA process may make the issue more explicit and more likely to be taken into account. General comparisons of intuitive versus analytical approaches rate analysis as giving more confidence in the process and less in the answer. BCA may also be seen to discriminate against projects which are more difficult to evaluate. Whilst such projects could be seen as to some extent riskier, they may be an important component of a balanced portfolio.

1.4 Increased convenience of use/time savings (0/10)

No obvious advantage from BCA, particularly if more rigorous approaches are used.

2. COMPLEXITY

 

2.1 Technical complexity (2/5)

BCA is typically seen as complex, particularly if more rigorous approaches are used.

2.2 Difficult to learn to use (2/5)

BCA is typically seen as difficult for researchers not trained or motivated in its use and without access to skilled support.

3. COMPATIBILITY

 

3.1 Compatible with current practices (0/5)

BCA is not generally used in researchers' host institutions. There is little understanding of key inputs such as likely adoption rates for innovations with various characteristics.

3.2 Compatible with social values and norms (0/5)

The culture of peer review has generally been seen as appropriate for research funding allocations. Because of the time lags and the current evaluation requirements, there is little demand for a strong emphasis on outcomes, other than for samples of successful projects. Greater farmer involvement in research management is one factor changing accountability requirements.

4. PERCEIVED RISK

 

4.1 Investment required (3/5)

The major requirement is the resources to ensure credible and consistent information. Whilst the investment is likely to be a low proportion of the total R&D budget, it would be competing with other opportunities in an R&D management budget.

4.2 Product failure/loss (4/5)

A change in the composition of the portfolio resulting from the greater use of BCA may impact on both the average returns and their spread across projects. However, the BCA approach can make risk aspects more explicit and thus more manageable. In any case, the difficulty of evaluation of a particular project is an indicator of risk. In the longer term, greater use of BCA in project management and monitoring should improve the quality of information and reduce risk.

5. TRIALABILITY

 

5.1 Small scale trial feasibility (3/5)

Some aspects of BCA are readily trialled in the short term. However, evaluation in the long term of ex post success would be difficult and other factors would be involved. In the short term, a trial might show advantages in process, for example improved quality and balance of research proposals.

5.2 Adoption easily reversible (5/5)

BCA can readily be deleted as a requirement, and previous practice (or a modification) re-introduced.

6. COMMUNICATION

 

6.1 Benefit evaluated pre-use (2/5)

There is unlikely to be any specific information on the advantages of BCA in terms of increased returns for a portfolio. Stakeholders may perceive benefits in terms of accountability.

6.2 Benefit evaluated by others during use (2/5)

Again, the long time scale required and difficulties of attribution do not allow any demonstration in the short term at the portfolio level.

An overall assessment for maximum adoption can be determined from the scores on the first four components. Using the author's estimates, this assessment suggests only a medium level of adoption by the target market. The rate of adoption is assessed from the final three components in conjunction with the score for maximum adoption. The scores suggest a low rate of adoption.

Over the last decade there has been relatively rapid adoption of ex ante BCA by R&D Corporations, but much less so by other research agencies, which contribute a substantial proportion of R&D funding overall. Whilst a number of factors are involved, external drivers, particularly those related to accountability, are one factor contributing to more rapid use of BCA by R&D Corporations (Lack, 1996). However, the difficulties in demonstrating relative advantage cast doubt on whether the adoption achieved is sustainable. The volume of projects handled also constrains feedback on BCA information provided by researchers, and is another issue contributing to a decline in the quality of information.

BCA practice is likely to continue to evolve before a degree of convergence on an effective role emerges. For example, GRDC changed its requirements in 1998: it now requires only the components of a BCA in proposals, but with considerable justification for the major assumptions on benefit per unit, scale and adoption. Previously, software was provided to enable researchers to submit a completed BCA. The change should result in better quality information on components, leaving the overall assessment of the proposal to GRDC.

OPPORTUNITIES IN BCA PRACTICE

The scope for improved performance in project selection processes is clearly difficult to define. Whilst the high rates of return historically are comforting, they do not usually allow analysis of the potential. However, for those organisations using mainly intuitive assessments based on rankings etc., experience in decision-making research generally has been that more formal approaches using the same information usually perform better.

Although BCA is the logical integrating framework for more formal approaches, practice in the past has seen the BCA as an add-on to the more traditionally styled proposal concentrating on the science.

Thus, one direction is a BCA approach which can more effectively complement current approaches, and aim to be incremental. The current system is typically based on research management assessing a variety of information supplied by researchers and other sources, and then subjectively combining their estimates, comparisons and judgements on the main components into an overall assessment. This type of global judgement is shown by decision research to be inferior to a formal decision framework. A formal framework retains the judging of the components but provides a more powerful means to integrate across components.

Thus, an alternative for research management is to use a simple BCA framework, using the same estimates and information, as an integrator. The intuitive approach and this BCA approach would share the same general problems besetting project evaluation. Those include the importance of the without-the-project situation as a base to define benefits, as well as how to incorporate benefits which are difficult to quantify. A major advantage of even a simple BCA using research managers' estimates is the potential for feedback on performance of the selection process.

A BCA CALCULATOR

In typical and simple applications of BCA, a spreadsheet is used to calculate discounted benefits and costs. The benefits are usually determined from an adoption pattern, the benefit per unit, and the number of units.

Given the widespread use of scoring systems a preliminary investigation was undertaken to develop a hybrid approach combining BCA and the usual components of scoring systems. This was seen as a more incremental means of introducing BCA in an integrated way. Decision-making research, for example Dawes (1988), has shown that simple scoring systems can perform well and are not very sensitive to the weighting scheme. However, the approach was not generally effective with BCA data such as adoption rates because the scale is only linear over a small range.

One hazard with scoring systems is they are likely to include too many interrelated factors and involve double counting. A BCA framework can in principle incorporate all potential benefits and costs, and can integrate an evaluation of the science using the probability the research will be successful based on the task and the resources to be applied. The starting point for the calculator was the following form of expression for the ratio of discounted benefits and costs:

B/C = [Probability of Success (%/100) x Potential Benefits x % Benefits Achieved (%/100)] / [Project Costs / BC Discount Factor]

Probability of Success: 

Defined as at the end of the research phase. The expected value of the BCA is then calculated from the two scenarios: (1) where there are no costs or benefits after the research phase, and (2) costs and benefits as specified.

Potential Benefit: 

Defined conventionally as the product of the maximum size of the industry or area to benefit from the research and the annual benefit per unit.

% Benefits achieved: 

Defined by the adoption pattern: the discounted benefits as a proportion of the potential discounted benefits, as given by the following identity:

% Benefits Achieved = Total Discounted Benefits / (Potential Annual Benefit x Series Discount Factor)

where the Series Discount Factor is the factor for the Present Value of a Uniform series discounted at 8% over 20 years (9.82).

Project Costs : 

Defined in this formula as total undiscounted costs for all years. (The BC Discount Factor allows approximately for their timing; see Table 1.)

BC Discount Factor : 

Combines factors for (1) the timing of project costs (2) the uniform series factor (transferred from the denominator), and (3) the factor for probability of success dependent on the duration of the costs for the project.
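Written out as a small function, the expression reads as follows. This is a minimal sketch (Python); the percentages and the two factors are supplied by the user from the Appendix look-up tables rather than computed.

    # Minimal sketch of the Calculator expression; the percentages and the two
    # factors are read from the Appendix look-up tables, not computed here.
    def bc_ratio(prob_success_pct, potential_annual_benefit, pct_benefits_achieved,
                 total_project_costs, bc_discount_factor):
        numerator = ((prob_success_pct / 100) * potential_annual_benefit
                     * (pct_benefits_achieved / 100))
        denominator = total_project_costs / bc_discount_factor
        return numerator / denominator

    # The worked example developed later in the paper:
    print(bc_ratio(100, 100_000, 47, 300_000, 12))   # 1.88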

Apart from rounding or interpolation errors, the only source of error in using the calculator arises from the assumption on the pattern of costs. The following Table shows an example where project costs are incurred over the first four years. The discount factor applied to total costs assumes that annual costs in the final half of the project are at half the rate in the first half. Such a pattern approximates many projects where the final years are a development and extension phase. As the Table shows, the error for a major departure from that pattern will only be a few percent.

TABLE 1 Sensitivity of the BC Discount Factor to the Annual Pattern of Costs

Year   Discount Factor      Annual Cost Pattern ($,000)
       (present value)      Decreasing (as used)   Constant   Increasing

1      0.9259               100                    75         50
2      0.8573               100                    75         50
3      0.7938               50                     75         100
4      0.7350               50                     75         100

Total Cost                          300            300        300
Total Cost Discount Factor (TCDF)   0.849          0.828      0.807
BC Discount Factor (9.82*/TCDF)     11.6           11.8       12.2

* Note: 9.82 is the Present Value Factor (8%) for a uniform series of 20 years
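The factors in Table 1 can be reproduced directly. The sketch below (Python) discounts each cost pattern at 8%, assuming end-of-year cash flows, and recovers the Total Cost Discount Factor and the BC Discount Factor.

    # Reproduces the Table 1 factors: 8% discount rate, costs at the end of
    # years 1-4, and the 20-year uniform-series factor of 9.82.
    rate, horizon = 0.08, 20
    series_factor = (1 - (1 + rate) ** -horizon) / rate   # ~9.82

    cost_patterns = {                       # $,000 per year, as in Table 1
        "decreasing (as used)": [100, 100, 50, 50],
        "constant":             [75, 75, 75, 75],
        "increasing":           [50, 50, 100, 100],
    }

    for name, costs in cost_patterns.items():
        pv = sum(c / (1 + rate) ** t for t, c in enumerate(costs, start=1))
        tcdf = pv / sum(costs)              # Total Cost Discount Factor
        bc_factor = series_factor / tcdf    # BC Discount Factor
        print(f"{name}: TCDF = {tcdf:.3f}, BC Discount Factor = {bc_factor:.1f}")
    # Matches Table 1 to within rounding.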

An example of the Calculator (see Appendix 1) follows, using the cost pattern from Table 1. Probability of Success is assumed to be 100%. Benefits of $10/ha are assumed for a new variety which is expected to be adopted over 10,000 ha at maximum adoption. The potential annual benefit is thus $100,000. Adoption starts from zero in Year 5, reaches the maximum in Year 10, and continues with no disadoption to Year 20. The % Benefits Achieved of 47% is interpolated from the Calculator. The BC Discount Factor is 12, given 4 years of costs and 100% Probability of Success.

B/C = (100/100 x 100,000 x 47/100) / (300,000 / 12) = 1.88

The value calculated from the series of discounted costs and benefits is 1.84. As concluded from Table 1, errors are unlikely to be more than a few percent.
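This cross-check can be reproduced with a full discounted cash flow. The sketch below (Python) assumes end-of-year cash flows and a linear adoption ramp from zero in Year 5 to the maximum in Year 10; under those assumptions it gives a B/C close to the 1.84 quoted, against 1.88 from the Calculator.

    # Full discounted cash-flow check of the worked example (assumed: end-of-year
    # cash flows; linear adoption ramp from zero in Year 5 to the maximum in Year 10).
    rate, horizon = 0.08, 20
    max_benefit = 10 * 10_000                 # $10/ha over 10,000 ha = $100,000/year
    costs = {1: 100_000, 2: 100_000, 3: 50_000, 4: 50_000}   # Table 1 "decreasing"

    def adoption_fraction(year, start=5, full=10):
        if year <= start:
            return 0.0
        if year >= full:
            return 1.0
        return (year - start) / (full - start)

    pv = lambda amount, year: amount / (1 + rate) ** year
    pv_costs = sum(pv(c, y) for y, c in costs.items())
    pv_benefits = sum(pv(max_benefit * adoption_fraction(y), y)
                      for y in range(1, horizon + 1))
    print(round(pv_benefits / pv_costs, 2))   # ~1.83, close to the 1.84 quoted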

The Calculator can be further used to examine sensitivity to key assumptions. The tables for % Benefits Achieved show the usual relationship for adoption; sensitivity is about twice as great for the Year Adoption Starts as for the Adoption Phase. Rules of thumb can also be developed. For example, for a typical three-year project followed by an adoption phase reaching a maximum in about a decade from the start of the project, potential annual benefits need to be of the order of 20% of Total Costs for the project to break even.

CONCLUSION

There has been a substantial increase in the use of BCA for ex ante research evaluation, by research organisations and researchers. However, there is little evidence of convergence on what is the best approach. For the increased use of BCA to be sustained, research organisations may seek better information on the contribution that BCA can make to improved selection of projects in terms of increased returns to research. For a number of reasons, including the general perception that returns are already high, the advantages in using BCA are difficult to define. Also, little attention has been given to defining the performance of the current system, so the baseline is ill-defined.

The paper introduces a different perspective using general comparisons of decision making based on largely intuitive judgements with more formal approaches using integrating decision rules. In these comparisons, using the same information for decision-making, intuitive approaches have been consistently shown to be inferior: they are subject to numerous biases, produce overconfidence in their value, and seriously limit the value of feedback and learning. Therefore, particularly where there is a relevant integrating framework such as BCA, a more formal decision approach should be superior.

Research managers already make their own intuitive judgements on the components which contribute to project returns. This paper provides a simple and easy to use BCA Calculator which can integrate the judgements. The Calculator achieves considerable economy in presentation without significant loss of computational accuracy.

Appendix 1 BENEFIT COST CALCULATOR

Figure 1

BC Discount Factor (8%) for the number of years with significant costs, by Probability of Success

Figure 2

B/C = [Probability of Success (%/100) x Potential Benefit x % Benefits Achieved (%/100)] / [Project Costs / BC Discount Factor]

% Benefits Achieved (over 20 years, discounted at 8%) for the YEAR ADOPTION STARTS and the ADOPTION PHASE, i.e. the number of years from when adoption starts to maximum adoption

Figure 3
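The entries of the % Benefits Achieved look-up table can be regenerated as follows. This is a minimal sketch (Python) assuming a linear adoption ramp and end-of-year discounting over the 20-year, 8% horizon; the start years and adoption phases printed are illustrative, and the worked example's 47% corresponds to adoption starting in Year 5 with a five-year phase.

    # Regenerates entries of the "% Benefits Achieved" look-up table (8% rate,
    # 20-year horizon; linear adoption ramp and end-of-year discounting assumed).
    rate, horizon = 0.08, 20
    series_factor = (1 - (1 + rate) ** -horizon) / rate   # the 9.82 in the text

    def pct_benefits_achieved(start_year, adoption_phase):
        total = 0.0
        for year in range(1, horizon + 1):
            if year <= start_year:
                frac = 0.0
            elif year >= start_year + adoption_phase:
                frac = 1.0
            else:
                frac = (year - start_year) / adoption_phase
            total += frac / (1 + rate) ** year
        return 100 * total / series_factor    # % of potential discounted benefits

    for start in (3, 5, 7):                   # illustrative start years and phases
        row = [round(pct_benefits_achieved(start, phase)) for phase in (3, 5, 10)]
        print(f"adoption starts year {start}: {row}")
    # Adoption starting in Year 5 with a 5-year phase gives ~47, as in the example.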


REFERENCES

Brennan, J.P. and Davis, J.S. (eds.), 1996. Economic Evaluation of Agricultural Research in Australia and New Zealand. ACIAR Monograph No. 39, Australian Centre for International Agricultural Research, Canberra.

Daniel, P. (ed.), 1997. Economic Evaluation of Research and Extension Activities in Agriculture. Pre-Conference Workshop (Global Agricultural Science Policy for the Twenty-first Century), 20 August 1996, Melbourne. Victorian Department of Natural Resources and Environment, Melbourne.

Dawes, R.M., 1988. Rational Choice in an Uncertain World. Harcourt Brace Jovanovich College Publishers.

Kleinmuntz, B., 1990. Why we still use our heads instead of formulas: toward an integrative approach. Psychological Bulletin, 107(3), 296-310.

Lack, S., 1996. Use of research evaluation in decision-making in R&D Corporations. In Economic Evaluation of Agricultural Research in Australia and New Zealand, eds. Brennan, J.P. and Davis, J.S., ACIAR Monograph No. 39, Canberra.

O'Keefe, M. and Mansfield, F., 1995. Estimating the market potential of innovations. Agricultural Science, 8(3), 36-40.

Shanteau, J., 1990. Psychological characteristics of agricultural experts: applications to expert systems. In Climate and Agriculture: Systems Approaches to Decision Making, ed. A. Weiss. Lincoln: University of Nebraska.

Wilson, T.D., 1996. Software developments for economic evaluation of research. In Economic Evaluation of Agricultural Research in Australia and New Zealand, eds. Brennan, J.P. and Davis, J.S., ACIAR Monograph No. 39, Canberra.
