How do we properly P, M&E and do we even want to L?
How do we create good PMEL systems? As a junior planning, monitoring, evaluation and learning (PMEL) expert, this question intrigues me. Looking at trends in PMEL, we seem to have moved from purely quantitative models towards more theory-based and mixed-methods approaches (Barnett & Gregorowski, 2013). However, complexity remains an issue, and if we truly want to understand the impact of our interventions, we might start by challenging the way we frame our 'successes' and 'outcomes'. The question then is: what is meaningful data? Who will benefit from it? And why do we measure?

Maybe you are reading this while currently working in the international development sector. If so, you have probably experienced that PMEL is often used for accountability towards donors. Even though accountability towards both donors and partners remains an important use of PMEL, it shouldn't be the starting point when designing your strategy or projects. In my opinion, PMEL should be learning-driven, intersectional and participatory. Through this approach, I believe we have great opportunities to shift power dynamics and learn. But do we even want to learn?
Effective and participatory PMEL systems are important for drawing conclusions about the impact, efficiency, sustainability, and gender and environmental effects of our projects, and about their contribution to strengthening civil society. Priorities for organizational PMEL processes should be that they are learning-driven and used to check whether we are heading in the right direction; that data is collected as cost-effectively as possible; and that the Theory of Change remains flexible enough to accommodate unpredictability (Coe & Schlangen, 2019).
References
AMID (2022). Inspired by discussions during the AMID lecture with Dr. Luuk van Kempen and group discussions (Huissen, 16 September 2022).
Bamberger, M. (2021). Understanding real-world complexities for greater uptake of evaluation findings. 3ie Blog, July. https://www.3ieimpact.org/blogs/understanding-real-world-complexities-greater-uptake-evaluation-findings
Bamberger, M., Rao, V., & Woolcock, M. (2010). Using mixed methods in monitoring and evaluation: Experiences from international development. World Bank Policy Research Working Paper No. 5245.
Barnett, C., & Gregorowski, R. (2013). Learning about theories of change for the monitoring and evaluation of research uptake.
Blaser-Mapitsa, C. (2022). A scoping review of intersections between indigenous knowledge systems and complexity-responsive evaluation research. African Evaluation Journal, 10(1), 624.
Coe, J., & Schlangen, R. (2019). No royal road. Finding and following the natural pathways in advocacy evaluation. Center for Evaluation Innovation.
Gertler, P. J., Martinez, S., Premand, P., Rawlings, L. B., & Vermeersch, C. M. (2016). Impact evaluation in practice. World Bank Publications.
Gugerty, M. K., & Karlan, D. (2018). Ten reasons not to measure impact – and what to do instead. Stanford Social Innovation Review, 8, 41-47.
Treurniet, M., Bedi, A., Bulte, E. H., Dalton, P., Dijkstra, G., Gunning, J. W., & van Soest, D. (2021). Effectief ontwikkelingsbeleid vergt cultuur van leren en betere evaluaties [Effective development policy requires a culture of learning and better evaluations]. ESB Economisch Statistische Berichten, 106(4801), 436-439.