Development Jargon 2.0
Development data is a big deal. In fact, there is so much literature and analysis out there on development data and the data 'revolution' that it is simply impossible to keep up. So forgive me if some of the following is dated or lacks robustness - there are only so many hours in a day, and I like to do things that don't look like work on occasion.
I confess, part of the reason I have some difficulty staying on top of the data revolution is that I'm a bit unsure about what is meant by it. In some articles I read, it's all about collecting more complete data. In others, it's about how we'll use the data. Here's the thing, though - have development organizations not been collecting data for years? Has this data not been used (even superficially) to make policy and budget decisions? I'll be the first to say yes, although it's never been perfect (in either case) and likely never will be. But this seems to be the new gung-ho flagship issue in the development world, and so we all have to board that train or become irrelevant (sorry for mixing metaphors).
I have some issues with this process. As an M&E person, data collection and use is the core of my work. Weak data and poorly collected data are the bane of my existence (only just beating out Bangkok's irrational traffic light management system). I regularly receive data from project teams whose focus group discussions failed to separate participants by age or gender, or desk review information on policy processes that simply says 'yes' without any qualifiers. This happens in part because there is a lack of capacity to collect data in a way that allows end users to analyse it, and in part because, quite a lot of the time, the data we seek simply isn't available. Such is life, and we have to make decisions using our intuition as much as hard data analysis (and, you know, have more conversations and hands-on working days with M&E staff about why they need to undertake separate focus groups or provide detailed information on documents used during desk reviews, etc.). It's not like it's rocket science. Or is it?
Keith Shepherd writes here about the popularity of the discourse on development data. My favourite part of his article is where he notes that 'people tend to overestimate the amount of data needed to make a good decision, or misunderstand what type of data is needed.' Ain't that the truth. (Side note: he must be loving the SDG indicator identification process.) So often you see project monitoring frameworks with dozens of indicators aiming to collect data on the issue rather than on the specific objective statement. I kid you not, I once advised a project that had developed a logframe with 400 indicators.
In fact, it was too much to even work with. So I binned it and started from scratch. I asked them (none too politely, admittedly) what force prevailed upon them to list that many indicators. 'It's interesting information to have,' was the reply. Indeed. But then don't you lose the point of what you are trying to achieve - are you trying to find anything that will indicate progress, rather than the one or two things that will definitively tell you whether progress is being made? It's kind of like having 398 backup plans in case you don't like the look of what some data is telling you.
On the flip side are the data providers - generally speaking, government agencies. Personally, I feel that the 'data revolution' is just one extra pressure on developing country governments, and in particular on local governments, which have far less capacity, in an age when development funding is stagnating, if not decreasing altogether. This push for 'development' data is so forceful that I worry it may just be the straw that breaks the camel's back. Are we setting them up for failure? Not only are there technological capacity constraints; you also have to address human resources, geography and cost. Not to mention time. In countries with smaller populations, the workload on government officials tends to be more rather than less. If we reference only the SDGs, the type, quality and quantity of data expected to be collected is mind-boggling. Now, not every country will collect data for every indicator, but it will still be a lot.
Shepherd's article looks at development data collection, analysis and use as something of a scientific process, because that is how data ideally should be used. To date, we collect some information, warp it a little and justify a decision that was likely more or less already made. 'Just needed some evidence to back it up.' I cannot even imagine trying to force development practitioners to take a more clinical or scientific approach. Development is not science. If it were, every project would need a data scientist or statistician on the team.
Data collection to date hasn't been perfect, and there are gaps that we know are there and sometimes admit to. Maybe that's why there's such a push to 'revolutionize' development data. We want to be better at filling those data gaps. That's not a bad thing.
What is bad is that we may start to confuse filling data gaps with simply gathering more data. Will we be so focused on getting all the data that we won't prioritize getting the right or most necessary data? Will we be so amped up on using new technologies and running algorithms that we forget to take a good hard look at what the data is telling us? For example - and it's a good one - think of all the effort that went into tracking primary and secondary school attendance under the MDGs, while failing to collect data on the quality of education. Later we learned that significant numbers of students who attended school left functionally illiterate anyway. That was a lesson well learned. Let us not fall into the trap of having so much data that we don't know how to use it, and, when we do, failing to focus on the most important data for the objective we are trying to assess. As Shepherd points out, 'it is not enough to simply assume that the data revolution will benefit sustainable development.' Definitely not.