Abstract
Objective
To conduct a field-based assessment of the malaria outbreak surveillance system in Mashonaland East, Zimbabwe.
Introduction
Infectious disease outbreaks, such as the Ebola outbreak in West Africa, highlight the need for surveillance systems that quickly detect outbreaks and provide data to prevent future pandemics.1–3 The World Health Organization (WHO) developed the Joint External Evaluation (JEE) tool to conduct country-level assessments of surveillance capacity.4 However, because outbreaks begin, and are first detected, at the local level, national-level evaluations may miss the capacity gaps that most impede outbreak detection. Gaps in local surveillance processes point to a need for on-the-ground improvements, such as enhanced training or strengthened data transfer mechanisms, which may cost less than traditional surveillance investments such as building new laboratory facilities.5 To explore this premise, we developed a methodology for assessing surveillance systems with special attention to the local level and applied it to the malaria outbreak surveillance system in Mashonaland East, Zimbabwe.
Methods
In a collaboration between the Zimbabwe Field Epidemiology Training Program and the University of Washington, an interview guide was developed based on the Centers for Disease Control and Prevention's (CDC) Updated Guidelines for Evaluating Public Health Surveillance Systems and WHO's JEE tool.4,6 The guide was tailored in-country with input from key stakeholders from the Ministry of Health and Child Care and the National Malaria Control Program. It included questions on outbreak detection, response, and control procedures; surveillance system attributes (preparedness, data quality, timeliness, stability); and system functionality (usefulness). The team used the tool to evaluate surveillance capacity in eleven clinics across Mudzi and Goromonzi, two high-malaria-burden districts of Mashonaland East. Twenty-one interviews were conducted with key informants at the provincial (n=2), district (n=7), and clinic (n=12) levels. Main themes in the interviews were identified using standard qualitative data analysis methods.
Results
The majority of key informants interviewed were nurses, nurse aides, or nurse officers (57%, 12/21). This evaluation identified clinic-level surveillance system barriers that may be driving challenges in malaria outbreak detection and response. Clinics reported little opportunity for cross-training of staff, with 81% (17/21) of interviewees noting that additional staff training support was needed. Only one clinic (9%, 1/11) had malaria emergency preparedness and response guidelines on site, a resource the National Malaria Control Program recommends for all clinics encountering malaria cases. A third of interviewees (33%, 7/21) reported having a standard protocol for validating malaria case data, and 29% (6/21) reported challenges with data quality and validation, such as duplication of case counts. While the surveillance system detects malaria outbreaks at all levels, clinics experience barriers to timely and reliable reporting of cases and outbreaks to the district level. Stability of resources, including transportation and staff capacity, also presented barriers: nearly half (48%, 10/21) of interviewees reported that their clinics were understaffed. Additionally, the assessment revealed that the electronic case reporting system used to report malaria cases to the district (Frontline, an SMS-based application) was not functioning in either district, a failure unknown at the provincial and national levels. To detect malaria outbreaks, clinics and districts use graphs plotting weekly malaria case counts against threshold limit values (TLVs) derived from five-year historical weekly case count averages; because TLVs require five years of historical data, they are only usable at clinics that have existed for at least five years. Only 30% (3/10) of the interviewees asked about outbreak detection graphs reported that their TLV graphs were up-to-date.
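To make the detection rule concrete, the sketch below implements the TLV logic as informants described it, under the assumption that the threshold for a given epidemiological week is the mean case count for that week over the previous five years; the exact TLV formula used in Zimbabwe (for example, whether a standard-deviation or percentile margin is added) was not specified in this evaluation, and all names in the code are hypothetical.

```python
from statistics import mean

def tlv_for_week(history, week):
    """Threshold limit value: mean of case counts for this
    epidemiological week over the previous five years.
    `history` maps year -> {week: case_count}; assumes the
    clinic has at least five full years of data."""
    past_years = sorted(history)[-5:]
    return mean(history[year][week] for year in past_years)

def flag_outbreaks(current_counts, history):
    """Flag weeks whose observed count exceeds the TLV,
    mirroring the weekly graphs clinics use for detection."""
    return {
        week: count
        for week, count in current_counts.items()
        if count > tlv_for_week(history, week)
    }

# Hypothetical example: five years of counts for weeks 1-3.
history = {y: {1: 10 + y % 3, 2: 8, 3: 12} for y in range(2014, 2019)}
current = {1: 25, 2: 7, 3: 13}
print(flag_outbreaks(current, history))  # weeks 1 and 3 exceed their TLVs
```

This framing also makes the clinic-age limitation visible: a clinic with fewer than five years of records has no basis for `tlv_for_week`, which is why TLV graphs are only relevant for longer-established clinics.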
Conclusions
This surveillance assessment revealed several barriers to system performance at the clinic level, including challenges with staff cross-training, data quality of malaria case counts, timeliness of updating outbreak detection graphs, stability of transportation, prevention and treatment supplies, and human resources, and the limited usefulness of TLVs for outbreak detection at newer clinics. Addressing these barriers may improve staff readiness to detect and respond to malaria outbreaks, resulting in timelier outbreak response and decreased malaria mortality. This evaluation has some limitations. First, we interviewed key informants from a non-random sample covering 30% of all clinics in Mudzi and Goromonzi districts; thus, the barriers identified may not be representative of all clinics in these districts. Second, evaluators did not interview individuals who may have been involved in outbreak detection and response but were not present at the clinic when interviews were conducted. Lastly, many of the evaluation indicators were based on self-reported information from key informants. Despite these limitations, convenience sampling is common in public health practice, and we reached thematic saturation with the 21 key informants included in this evaluation.7 By designing evaluation tools that focus on local-level knowledge and priorities, our assessment approach provides a framework for identifying and addressing gaps that may be overlooked by multi-national tools that evaluate surveillance capacity and improvement priorities at the national level.
References
1. World Health Organization. International Health Regulations (2005). 3rd ed. Geneva, Switzerland; 2005.
2. Global Health Security Agenda. Implementing the Global Health Security Agenda: Progress and Impact from U.S. Government Investments. 2018. https://www.ghsagenda.org/docs/default-source/default-document-library/global-health-security-agenda-2017-progress-and-impact-from-u-s-investments.pdf?sfvrsn=4.
3. McNamara LA, Schafer IJ, Nolen LD, et al. Ebola Surveillance — Guinea, Liberia, and Sierra Leone. MMWR Suppl. 2016;65(3):35-43. doi:10.15585/mmwr.su6503a6.
4. World Health Organization (WHO). Joint External Evaluation Tool: International Health Regulations (2005). Geneva; 2016. http://apps.who.int/iris/bitstream/10665/204368/1/9789241510172_eng.pdf.
5. Groseclose SL, Buckeridge DL. Public Health Surveillance Systems: Recent Advances in Their Use and Evaluation. Annu Rev Public Health. 2017;38(1):57-79. doi:10.1146/annurev-publhealth-031816-044348.
6. Centers for Disease Control and Prevention. Updated guidelines for evaluating public health surveillance systems: recommendations from the guidelines working group. MMWR Recomm Rep. 2001;50(RR-13).
7. Dworkin SL. Sample size policy for qualitative studies using in-depth interviews. Arch Sex Behav. 2012;41(6):1319-1320. doi:10.1007/s10508-012-0016-6.