Abstract
Surveillance systems need to be evaluated to understand what the system can or cannot detect. The measures commonly used to quantify detection capabilities are sensitivity, positive predictive value and timeliness. However, the practical application of these measures to multi-purpose syndromic surveillance services is complex. Specifically, it is very difficult to compile definitive lists of what the service is intended to detect and of what was actually detected, and to link the two.
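To make the three measures concrete, the following is a minimal sketch of how they are conventionally computed from counts of true positives, false positives and false negatives, plus detection delays. The function names and the example counts are illustrative, not from the paper.

```python
def sensitivity(tp: int, fn: int) -> float:
    """Proportion of true events the system detected: TP / (TP + FN)."""
    return tp / (tp + fn)

def ppv(tp: int, fp: int) -> float:
    """Proportion of alarms corresponding to true events: TP / (TP + FP)."""
    return tp / (tp + fp)

def mean_timeliness(delays: list) -> float:
    """Average delay (e.g. in days) between event onset and detection."""
    return sum(delays) / len(delays)

# Hypothetical evaluation: 8 of 10 known events detected, with 4 false alarms.
print(sensitivity(8, 2))              # 0.8
print(round(ppv(8, 4), 3))            # 0.667
print(mean_timeliness([1, 2, 3, 2]))  # 2.0
```

As the abstract notes, the difficulty for a multi-purpose service lies less in these formulas than in populating the underlying lists of "true events" and "detections" in the first place.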
First, we discuss issues arising from a multi-purpose system, which is designed to detect a wide range of health threats, and in which individual indicators, e.g. ‘fever’, are themselves multi-purpose. Second, we discuss different methods of defining what can be detected, including historical events and simulations. Finally, we consider the additional complexity of evaluating a service which incorporates human decision-making alongside an automated detection algorithm. Understanding the complexities involved in evaluating multi-purpose systems helps in designing appropriate methods to describe their detection capabilities.