A Value-Driven Framework For The Evaluation Of Biosurveillance Systems

Authors

  • Victor Del Rio Vilas PAHO, Rio de Janeiro, Brazil
  • M Kocaman London School of Economics, London, United Kingdom
  • Howard Burkom Johns Hopkins APL, Laurel, MD, USA
  • Richard Hopkins University of Florida, Tallahassee, FL, USA
  • John Berezowski Veterinary Public Health Inst., Bern, Switzerland
  • Ian Painter University of Washington, Seattle, WA, USA
  • Julia Gunn Boston Public Health Comm., Boston, MA, USA
  • G Montibeller School of Business and Economics, Loughborough, United Kingdom
  • M Convertino University of Minnesota, Minneapolis, MN, USA
  • L. C. Streichert ISDS, Boston, MA, USA
  • P. A. Honoré LSU School of Public Health, New Orleans, LA, USA

DOI:

https://doi.org/10.5210/ojphi.v9i1.7665

Abstract

Objective

To describe the development of an evaluation framework that allows quantification of surveillance functions and their subsequent aggregation into an overall score for biosurveillance system performance.

Introduction

Evaluation and strengthening of biosurveillance systems is a complex process that involves sequential decision steps and numerous stakeholders, and requires accommodating multiple, conflicting objectives. Biosurveillance evaluation, the initiating step towards biosurveillance strengthening, is a multi-dimensional decision problem that can be properly addressed via multi-criteria decision models. Existing evaluation frameworks tend to focus on "hard" technical attributes (e.g., sensitivity) while ignoring "soft" criteria (e.g., transparency) that are difficult to measure and aggregate. As a result, biosurveillance value, a multi-dimensional entity, is not properly defined or assessed. Not addressing the entire range of criteria leads to partial evaluations that may fail to gather sufficient support across the stakeholder base for biosurveillance improvements. We seek to develop a generic and flexible evaluation framework capable of integrating the multiple and conflicting criteria and values of different stakeholders, and sufficiently tractable to allow quantification of the value that specific biosurveillance projects contribute to the overall performance of biosurveillance systems.

Methods

We chose a Multi-Attribute Value Theory (MAVT) model to support the development of the evaluation framework. The model was developed through online decision conferencing sessions, with expert judgement, an indispensable part of MAVT modelling, provided by surveillance experts recruited from the member pool of the International Society for Disease Surveillance. The surveillance functions, or quality criteria, considered for the framework were initially gathered from a review of the literature, with specific attention to a subset of public health quality criteria (1). Group discussions with the experts led to a final list of functions, which was then reviewed for compliance with the properties of good criteria in decision models. The eleven functions were: sensitivity; timeliness; positive predictive value (PPV); transparency; versatility; multiple utility; representativeness; sustainability; advancing the field and innovation; risk reduction; and actionable information. In addition, 24 scenarios were developed for sensitivity, PPV, and timeliness, since their values may differ with the level of infectiousness of the condition or event of interest, its severity, and the availability of treatment and/or prevention measures.
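The abstract names a Multi-Attribute Value Theory model but does not spell out the aggregation step. The sketch below illustrates the standard additive MAVT form under the assumption that single-criterion value functions (0-100 scale) are combined with swing weights that sum to one; the criterion names, weights, scores, and value functions are hypothetical placeholders, not values elicited in the study (the MACBETH step described next would supply the real value functions).

    # Minimal additive MAVT aggregation (illustrative sketch only).
    # value_functions map raw performance onto a 0-100 value scale;
    # weights are swing weights that sum to 1.
    def overall_value(scores, weights, value_functions):
        """scores: raw performance per criterion, e.g. {"sensitivity": 0.8}."""
        total = 0.0
        for criterion, raw in scores.items():
            total += weights[criterion] * value_functions[criterion](raw)
        return total

    # Hypothetical example with two of the eleven criteria.
    weights = {"sensitivity": 0.6, "transparency": 0.4}
    value_functions = {
        "sensitivity": lambda x: 100 * x,          # linear value function
        "transparency": lambda level: {            # categorical levels
            "none": 0, "partial": 60, "full": 100}[level],
    }
    print(overall_value({"sensitivity": 0.8, "transparency": "partial"},
                        weights, value_functions))  # -> 72.0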
Four or five levels of performance were also developed for each criterion. MACBETH (Measuring Attractiveness by a Category-Based Evaluation Technique) tables were used to elicit the value of the different performance levels from the experts through qualitative pairwise comparisons, which were then converted into numerical values.

Results

To date, two criteria, sensitivity and transparency, have been assessed by more than one expert working on the same scenario. Value functions were generated for each criterion and scenario by taking the median of the values produced by the experts. For both sensitivity and transparency, value functions were mostly linear, indicating similar preferences between levels of performance. For some scenarios, however, experts allocated greater value to increases at the higher end of the performance range.

Conclusions

At the time of writing, new elicitation sessions are planned to conclude the model. Next, we will apply swing weights to support the trade-offs between the different criteria. We will present the baseline model elicited from the experts and demonstrate how to apply portfolio decision analysis to assess the overall performance of biosurveillance systems according to the specific needs of stakeholders and in conjunction with macro-epidemiological models.
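As an illustration of the Results and the planned weighting step, the sketch below shows how per-level values elicited from several experts might be combined into a median value function (with piecewise-linear interpolation between performance levels) and how raw swing weights could be normalised before entering the additive aggregation. All numbers, level definitions, and weights are hypothetical; the study's elicited values are not reproduced here.

    from statistics import median

    # Hypothetical expert valuations (0-100) for four performance levels
    # of one criterion in one scenario; rows = experts, columns = levels.
    levels = [0.25, 0.50, 0.75, 1.00]          # e.g. detection sensitivity
    expert_values = [
        [10, 40, 70, 100],
        [20, 45, 75, 100],
        [15, 50, 80, 100],
    ]

    # Median value per level, as described in the Results.
    median_values = [median(col) for col in zip(*expert_values)]

    def value_function(x):
        """Piecewise-linear interpolation between the elicited levels."""
        if x <= levels[0]:
            return median_values[0]
        for (x0, v0), (x1, v1) in zip(zip(levels, median_values),
                                      zip(levels[1:], median_values[1:])):
            if x <= x1:
                return v0 + (v1 - v0) * (x - x0) / (x1 - x0)
        return median_values[-1]

    # Raw swing weights (hypothetical) normalised to sum to 1 before
    # use in the additive aggregation.
    raw_swings = {"sensitivity": 100, "timeliness": 80, "transparency": 40}
    weights = {k: v / sum(raw_swings.values()) for k, v in raw_swings.items()}

    print(median_values)        # -> [15, 45, 75, 100]
    print(value_function(0.6))  # interpolated value between two levels
    print(weights)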

Published

2017-05-02

How to Cite

Del Rio Vilas, V., Kocaman, M., Burkom, H., Hopkins, R., Berezowski, J., Painter, I., … Honoré, P. A. (2017). A Value-Driven Framework For The Evaluation Of Biosurveillance Systems. Online Journal of Public Health Informatics, 9(1). https://doi.org/10.5210/ojphi.v9i1.7665

Issue

Vol. 9 No. 1 (2017)

Section

System Evaluation and Best Practices