November 1, 2013
Developing monitoring and evaluation practice in an organization or coalition is an exercise in patience. It requires incubating a culture of evaluative thinking and instituting new processes. The "what" (the indicators to be measured, the evaluation framework) works best if the organization also has the capacity, processes and appetite to use it.
Knowing what information to collect and how to use it takes time and practice. An advocacy coalition I worked with recently had developed systems to monitor changes in budgets and a constellation of policies. They developed these practices over a seven-year period, constantly testing and refining them. By the time I worked with them, use of these systems was seamlessly integrated with their advocacy processes. They used the information to review progress, test assumptions and refine strategies. They used what they learned to bolster their advocacy arguments. Finally, the systems served as a rich source of data for the external consultants who conducted periodic evaluations.
Certainly, what is measured matters. But how, why and to what end matter as well. While developing indicators or an M&E frame is important, using them in a way that supports analysis and insight may require cultural and operational changes. Many organizations and coalitions have well-thought-out indicators, but all too often they think of those indicators again only when filing donor reports. To make indicators truly useful to the advocates doing the work, attention to internal processes and to developing a culture of evaluative thinking is a worthy investment.
Cartoon source: Scott Chaplowe, International Federation Red Cross and Red Crescent Societies