Abstract
Objective

To describe the development of an evaluation framework that allows quantification of surveillance functions and subsequent aggregation towards an overall score for biosurveillance system performance.

Introduction

Evaluation and strengthening of biosurveillance systems is a complex process that involves sequential decision steps and numerous stakeholders, and requires accommodating multiple and conflicting objectives. Biosurveillance evaluation, the initiating step towards biosurveillance strengthening, is a multi-dimensional decision problem that can be properly addressed via multi-criteria decision models.

Existing evaluation frameworks tend to focus on "hard" technical attributes (e.g., sensitivity) while ignoring "soft" criteria (e.g., transparency) that are difficult to measure and aggregate. As a result, biosurveillance value, a multi-dimensional entity, is not properly defined or assessed. Not addressing the entire range of criteria leads to partial evaluations that may fail to gather sufficient support across the stakeholder base for biosurveillance improvements.

We seek to develop a generic and flexible evaluation framework capable of integrating the multiple and conflicting criteria and values of different stakeholders, and which is sufficiently tractable to allow quantification of the value of specific biosurveillance projects towards the overall performance of biosurveillance systems.

Methods

We chose a Multi-Attribute Value Theory (MAVT) model to support the development of the evaluation framework. The model was developed through online decision-conferencing sessions, with expert judgement, an indispensable part of MAVT modelling, provided by surveillance experts recruited from the member pool of the International Society for Disease Surveillance.

The surveillance functions, or quality criteria, considered for the framework were initially gathered from a review of the literature, with specific attention to a subset of public health quality criteria (1).
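An additive MAVT model of the kind used here combines per-criterion value functions with criterion weights into a single overall score. The following is a minimal sketch only: the criteria names, weights, and value-function shapes below are purely illustrative placeholders, not the values elicited from the experts.

```python
# Minimal sketch of an additive MAVT model. All numbers are
# hypothetical; in the actual framework, value functions and weights
# are elicited from surveillance experts.

def mavt_score(performance, value_functions, weights):
    """Aggregate per-criterion values into an overall 0-100 score."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[c] * value_functions[c](performance[c])
               for c in weights)

# Hypothetical value functions mapping a performance level (1-4) to a
# 0-100 value for three of the eleven criteria.
value_functions = {
    "sensitivity":  lambda level: {1: 0, 2: 40, 3: 70, 4: 100}[level],
    "timeliness":   lambda level: {1: 0, 2: 50, 3: 80, 4: 100}[level],
    "transparency": lambda level: {1: 0, 2: 30, 3: 65, 4: 100}[level],
}
weights = {"sensitivity": 0.5, "timeliness": 0.25, "transparency": 0.25}

system = {"sensitivity": 3, "timeliness": 4, "transparency": 2}
print(mavt_score(system, value_functions, weights))  # 0.5*70 + 0.25*100 + 0.25*30 = 67.5
```

The additive form makes the trade-offs explicit: a weight expresses how much a full swing on one criterion is worth relative to the others, which is what the swing-weighting step described below will supply.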
Group discussions with the experts led to a final list of functions, which was then reviewed for compliance with the properties of good criteria in decision models. The eleven functions were: sensitivity; timeliness; positive predictive value (PPV); transparency; versatility; multiple utility; representativeness; sustainability; advancing the field and innovation; risk reduction; and actionable information. In addition, 24 scenarios were developed for sensitivity, PPV, and timeliness, since their values may differ with the level of infectiousness of the condition or event of interest, its severity, and the availability of treatment and/or prevention measures. Four or five levels of performance were also developed for each criterion. MACBETH (Measuring Attractiveness by a Category-Based Evaluation Technique) tables were used to elicit values for the different levels of performance from the experts using qualitative pairwise comparisons, which were then converted into numerical values.

Results

To date, two criteria, sensitivity and transparency, have been assessed by more than one expert working on the same scenario. Value functions were generated for each criterion and scenario by calculating the median of the values produced by the experts. For both sensitivity and transparency, value functions were mostly linear, indicating similar preferences between levels of performance. However, for some scenarios, experts allocated greater value to increases at the higher end of the performance-level distribution.

Conclusions

At the time of writing, new elicitation sessions are planned to conclude the model. Next, we will apply swing weights to support the trade-offs between the different criteria. We will present the baseline model elicited from the experts and demonstrate how to apply portfolio decision analysis to assess the overall performance of biosurveillance systems according to the specific needs of stakeholders and in conjunction with macro-epidemiological models.
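The median-based construction of value functions described in the Results can be sketched as follows; the expert values and performance levels below are hypothetical stand-ins for the elicited data.

```python
# Sketch: combining several experts' elicited values into one value
# function per criterion/scenario by taking the per-level median, as
# described in the Results. All numbers are hypothetical.
from statistics import median

# Values (0-100) elicited for four performance levels of one criterion
# under one scenario; one entry per expert.
expert_values = {
    "level 1": [0, 0, 0],
    "level 2": [30, 40, 35],
    "level 3": [60, 75, 70],
    "level 4": [100, 100, 100],
}

value_function = {lvl: median(vals) for lvl, vals in expert_values.items()}
print(value_function)
# {'level 1': 0, 'level 2': 35, 'level 3': 70, 'level 4': 100}
```

The median is a natural aggregation choice here because it is robust to a single outlying expert judgement, unlike the mean.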