The Limits of Efficient M&E


I’ve spent years harping on about the need to ensure that development projects design indicators that collect the most necessary data, as opposed to merely interesting data. My mantra, drilled into the heads of many colleagues over the years, has been ‘what do you need to know to know if you are making progress?’ Big on SMART indicators, and on keeping the process as efficient as possible (knowing that monitoring is the least favourite activity of a majority of development practitioners) – it’s what I did.

I was always very rigid and strict about this – ‘no more than four indicators per output!’ – because in my experience, if you gave an inch, you lost the upper hand. All of a sudden, monitoring the change effected was sliding quickly and perilously back to counting how many people attended training X (which tells me precisely nothing beyond that everyone probably had a nice lunch and a good gossip during the tea break).

Recently, though, I’ve found myself advising two separate projects which dealt with similar issues… but were human rights focused. As in, the objective of both projects was to improve the quality of human rights and the respect for human rights and the law in general. Despite my best efforts, and challenging myself to create a monitoring framework with results-based indicators on a very sensitive subject, the management of the first project was less than keen to monitor anything more than their inputs (i.e. activities). We respectfully parted ways, with me thinking that there was still a lot of mindset and behavioural change to be done on M&E in development and human rights.

The other project, however, was keen to demonstrate the change it was effecting. It was great, until I realized that it was not practical to design indicators that partners were simply not going to provide data on. The challenge was to identify other data that could triangulate anticipated results. Except that this meant there was no way I could stick to my four-indicator rule. It just wasn’t going to happen. I watched and twitched as the number of indicators grew and grew – from six to even eight per output. If only the partners could be open enough to simply provide the data we needed, we could have stuck to two or at most three indicators. All of a sudden, the monitoring framework was becoming unwieldy. It felt like nails on a chalkboard. But what could we do? We needed to know if the project was effecting change, and given the nature of the project, we were going to have to take the long route to find out.

This whole process has reminded me that, despite all of our rules and regulations and guidelines and the push for efficiency, development is just not a science. It’s trial and error. We can have rules and guidelines, but sometimes we have to stretch them just to make the most basic aspects of development work. And sometimes we have to accept that, for some people, it’s more about the activities they are doing than the change they are effecting. In this day and age of accountability to beneficiaries, that is everything we are told not to do. But there aren’t necessarily sanctions that can be applied. There is a difference between an individual not toeing organizational policy and an entire organization having a different view of global good practice. What can you do? (Probably wait for sectoral peer pressure to have an impact, but again… not a science.)

We cannot police the practice of M&E the same way we police the financial aspects of development. It’s too open to interpretation. Trying to do so only makes things more complicated and creates an ‘us vs them’ mentality between those who want to really monitor change and keep M&E efficient, and those who just want to tick a box. I cannot see much more change happening anytime soon. So those of us who advise on M&E need to decide which mindsets are worth changing, which projects are worth breaking the rules for, and which ones are just not worth our time.
