Implementing an Adaptive Monitoring Framework: Principles and Good Practice

*This is an adapted version of an article I drafted for the SIAP SIAGA program, which can be found here.*


In 2019, I began working on a program that set itself the challenge of implementing a systems-change, adaptive approach: one that could leverage the results of previous programs in the sector while taking into account the increasing complexity of development programming in a middle-income country.

This type of program required a different type of monitoring framework – one that could capture changes in the system resulting from the program, and adapt to adjustments in program implementation over time. With adaptive monitoring still in its infancy in practice (despite the plethora of theoretical information available online), there were few practical experiences and good practices to draw on when designing the monitoring, evaluation and learning (MEL) framework. More resources existed on monitoring systems change, but as with any program, most approaches, tools and practices would need to be adjusted to context.

The design of the MEL framework was a journey that was both adaptive in itself and instructive: while the program may be working within a complex system, to be adaptive, the MEL framework needed to be simple and clear. The journey also highlighted the most important principles on which such a framework needs to be based.

 

1.    Measurement needs to be outcome-focused

As a program adapts to changing priorities based on emerging results and changes in context, pre-conceived outputs and targets will continually change, making analysis at output level difficult and sometimes impossible. What does not change over time (or is very unlikely to) are the outcomes your program is working towards. A good rule is to focus all measurement at the outcome level – this is especially important for systems change programs. Use outputs to organize activities, but focus measurement and analysis on the results of activities (and how they create change over time) to provide evidence at the outcome level. An important tool for this process is your Theory of Change: by deconstructing what change looks like for your program into variables vis-à-vis your program’s intended outcomes, you can design indicators using those variables, then re-evaluate them as implementation progresses to capture emerging pathways to change, or emerging complexity in the operating context.
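
To make this concrete, below is a minimal sketch of how an outcome-focused indicator registry could be structured. This is my own illustration in Python, not the program's actual tooling; the class names, fields and the example outcome are invented.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """A stable, program-level outcome from the Theory of Change."""
    id: str
    statement: str

@dataclass
class Indicator:
    """An outcome-level indicator built from a ToC change variable.

    Indicators reference outcomes, never outputs, so they survive
    changes to activities and targets as the program adapts.
    """
    outcome_id: str
    variable: str        # the deconstructed "what change looks like" variable
    description: str
    active: bool = True  # revised indicators are retired, not deleted

def revise_indicator(registry, old, new_description):
    """Re-evaluate an indicator as implementation reveals new pathways:
    retire the old definition and register the revision, keeping history."""
    old.active = False
    revised = Indicator(old.outcome_id, old.variable, new_description)
    registry.append(revised)
    return revised

# Illustrative only: the outcome, variable and indicator text are invented.
outcome = Outcome("O1", "Sub-national actors lead preparedness planning")
registry = [Indicator("O1", "ownership",
                      "Partners initiate planning steps without program prompting")]
revise_indicator(registry, registry[0],
                 "Partners allocate their own budget to preparedness planning")
```

The design choice worth noting is that revised indicators are kept (deactivated) rather than overwritten, so the history of how measurement evolved stays part of the evidence.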

 

2.    Measurement tools need to accommodate learning

Measurement tools must accommodate reflection and inquiry. They need to be participatory (i.e., measurement is a collective responsibility, not just the role of the MEL team), creating space for reflection on what is working well and what is not, as well as on intangible change such as observations of changing mindsets or shifts away from business as usual. However, the tools also need to reflect the capacity of the team that will be using them.

The process of testing different tools and approaches to determine what worked best for the program was an adaptive journey in itself. We went through phases of being too rigid, then overcomplicating the process, before finally drilling down to what the tools were needed for most: accountability and tracking change. We distilled the measurement tools down to two: 1) collaborative measurement through monthly activity reports (accountability) and 2) quarterly reflection reports (tracking outcome indicator evidence). A second level of reflection is then carried out through two processes of Real Time Evaluation (RTE) and Learning: 1) Quarterly Catch Ups for internal RTE based on the program’s Evaluation Questions and 2) semi-annual Partner Reflection Workshops to reflect on program results, challenges and emerging opportunities. These processes provide significant evidence of the program’s progress toward its outcomes, as well as opportunities to adjust activities based on inputs from partners – whether to pursue emerging pathways for change or to adapt to a changing socio-political context.
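
As a sketch of what this distilled toolset looks like written down, here is a hypothetical catalogue of the instruments and their cadences. The tool labels come from the text above; the data structure and helper function are invented for illustration.

```python
from enum import Enum

class Cadence(Enum):
    MONTHLY = "monthly"
    QUARTERLY = "quarterly"
    SEMI_ANNUAL = "semi-annual"

# The distilled toolset, expressed as a minimal catalogue.
MEASUREMENT_TOOLS = [
    {"tool": "Monthly activity report", "cadence": Cadence.MONTHLY,
     "purpose": "accountability"},
    {"tool": "Quarterly reflection report", "cadence": Cadence.QUARTERLY,
     "purpose": "outcome indicator evidence"},
    {"tool": "Quarterly Catch Up (internal RTE)", "cadence": Cadence.QUARTERLY,
     "purpose": "real-time evaluation against the Evaluation Questions"},
    {"tool": "Partner Reflection Workshop", "cadence": Cadence.SEMI_ANNUAL,
     "purpose": "results, challenges and emerging opportunities"},
]

def tools_due(cadence: Cadence) -> list[str]:
    """List the instruments that fall due at a given cadence."""
    return [t["tool"] for t in MEASUREMENT_TOOLS if t["cadence"] is cadence]

print(tools_due(Cadence.QUARTERLY))
# ['Quarterly reflection report', 'Quarterly Catch Up (internal RTE)']
```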

 

3.     Measurement tools and processes need to be able to track emergent change

We also learned that with systems change/adaptive programs, predetermined targets can quickly become irrelevant and create unnecessary challenges for program and monitoring teams alike. We have an idea of what may change over time based on our ToC (the theory part of the change), but we cannot be sure, as change is emergent. Because measurement is outcome focused, evidence is collected at activity level over time, and initial results shape how the next series of activities is implemented (including adapting activities based on those results), and so forth. Target setting can be detrimental to tracking emergent change – both change you may have predicted and change you may not have. This is why supplementing activity monitoring tools with reflection tools and processes is so important: it creates space to capture not just expected change based on activity results, but also unanticipated emerging changes that will shape how program planning and activities are determined going forward.
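
Here is a minimal sketch of how activity-level evidence might be recorded without targets, assuming a simple Python data model. All names and example values are hypothetical.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ActivityEvidence:
    """One piece of evidence recorded at activity level.

    Note there is no target field: progress is read from the evidence
    accumulating against outcome indicators, not distance to a number.
    """
    activity: str
    indicator_id: str   # links the evidence back to an outcome-level indicator
    observed: str
    recorded_on: date
    anticipated: bool   # False flags emergent, unpredicted change

def emergent_changes(log):
    """Surface unanticipated changes for the next reflection session,
    where they can inform how upcoming activities are adjusted."""
    return [e for e in log if not e.anticipated]

# Illustrative entry; all field values are invented.
log = [ActivityEvidence("Planning workshop", "O1-ownership",
                        "District office requested a follow-up session",
                        date(2021, 3, 15), anticipated=False)]
print(len(emergent_changes(log)))  # 1
```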

 

4.     A clear chain of evidence is still essential

What has not changed in the design of a fully adaptive MEL framework is the need for a clear chain of evidence. In fact, your monitoring needs to be even more rigorous in detailing evidence than in a traditional program, since evidence (especially for systems change programs) comes in many forms:

- changes in policies and regulations;
- observations of changing mindsets documented in meeting minutes;
- feedback from partners;
- the need to shift activities based on results;
- requests from partners that signify increasing ownership/buy-in;
- documentation of a shift away from business-as-usual processes;
- discussions between government actors (horizontally and vertically) that result from the program’s facilitation and technical support.

Your program’s outcome indicators reflect how the program initially described the ‘change we want to see’ vis-à-vis the Theory of Change pathways – change which, in this case, is largely qualitative in nature. This means stringent criteria are needed for documenting evidence of expected and unanticipated change, and how that change contributes to the program’s outcomes, to ensure both the robustness of the analysis and its validity in the eyes of the program’s partners.
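
As an illustration of what a chain-of-evidence record could capture, here is a hedged sketch that enumerates the evidence forms listed above alongside the traceability fields a record might carry. The structure and field names are my own, not the program’s actual system.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class EvidenceForm(Enum):
    """The forms of systems-change evidence named above."""
    POLICY_OR_REGULATION = "change in policy or regulation"
    MEETING_MINUTES = "mindset shift documented in meeting minutes"
    PARTNER_FEEDBACK = "feedback from partners"
    ACTIVITY_SHIFT = "activities shifted based on results"
    PARTNER_REQUEST = "partner request signifying ownership/buy-in"
    BAU_DEPARTURE = "documented shift away from business as usual"
    GOVERNMENT_DIALOGUE = "inter-government discussion facilitated by the program"

@dataclass
class EvidenceRecord:
    """One link in the chain of evidence, traceable from source to outcome."""
    form: EvidenceForm
    source: str          # where the evidence lives (minutes, report, letter)
    collected_on: date
    indicator_id: str    # the outcome indicator this evidence supports
    anticipated: bool    # expected vs. unanticipated change
    contribution: str    # how the change contributes to the outcome

# Illustrative record; all values are invented.
record = EvidenceRecord(EvidenceForm.PARTNER_REQUEST,
                        "Letter from district government, ref. 12/2021",
                        date(2021, 6, 2), "O1-ownership", False,
                        "Partner-initiated request signals growing ownership")
```

Keeping the source document and the contribution statement on every record is what makes the chain auditable: any claim of progress can be traced back to something a partner can verify.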

 

Designing an adaptive MEL framework has been a challenging task, as we reflect on and learn about what works and what does not for the needs of the program (and the program team), as well as what is necessary and what is not. What works and is necessary for one program will differ from the next, which means there are no hard-and-fast ‘rules’ for adaptive MEL, only key principles to guide the design and implementation of adaptive MEL frameworks. Capturing and sharing good practices based on the above principles will be critical to supporting increased understanding and use of adaptive MEL by a wider range of practitioners in a wider range of development programs.

