It’s a new year for SMIP – with much learned behind us and exciting opportunities ahead!

We’ve just finished our annual meeting, where we critically reflected on our progress and experiences and looked ahead to the future. One of the SMIP’ers has promised to put together a post on this, so I won’t jump the gun!

In the meantime, though, there’s some very interesting literature and new knowledge being made available, which I thought I’d share with you…if you haven’t already seen the links.

We look forward to another year of sharing opinions, experiences, lessons and new knowledge with you all!


The Journal of MultiDisciplinary Evaluation has just published its latest issue at <>.

Journal of MultiDisciplinary Evaluation
Vol 6, No 11 (2009)
Table of Contents

The book “Country-led monitoring and evaluation systems. Better evidence, better policies, better development results” is now available for free download at

The book was produced by UNICEF in partnership with the World Bank, the UN Economic Commission for Europe, IDEAS (International Development Evaluation Association), IOCE (International Organization for Cooperation in Evaluation), DevInfo and MICS.

This publication tries to bring together the vision, lessons learned and good practices from twenty-one stakeholders on how country-led monitoring and evaluation systems can enhance evidence-based policy making.

The Institutional Learning and Change (ILAC) Initiative has just launched its new interactive website.
The ‘Resources’ section may be of particular interest; there you can find important methods for the evaluation and impact assessment of collaborative projects. Don’t miss the ILAC Library section, with over 1,200 references related to:
*   participatory research
*   monitoring and evaluation
*   impact assessment
*   organizational learning and much more…

In addition, you can sign up to receive ILAC news.
The link is


And last but certainly not least! Have a look at the new blog, “Practical Evaluation” at

This blog is intended to provide a platform where people can discuss practical evaluation issues, share experiences and support others with their questions.


Further to Elias’ post entitled “Programme management should work beyond influencing or contributing to impact”, I was recently sent the following paper, which I thought might be of interest to others. The paper speaks very much to what we try to do in SMIP and what we’re working towards. (more…)

Further to Thevan’s post on impact evaluations, here’s an excerpt from a very interesting article, “Evaluation Evolution”, posted in The Broker magazine the other day (link to the full article below):

Three approaches to evaluation

Evaluation evolution?

Politicians are calling for evaluations that measure the effects of development cooperation. However, good development cooperation focuses on long-term processes that cannot be measured in terms of cause and effect. Alternative approaches to evaluation are needed.

By Otto Hospes 

Development cooperation is one of the most evaluated areas in public policy. Over the past 30 years, several studies of evaluator types and their approaches have been undertaken.1 But there are three approaches that fundamentally characterize and distinguish types of evaluators: evidence-oriented evaluation, which seeks hard evidence; realistic evaluation, which tests how and why outcomes of policy occur; and complexity evaluation, which focuses on the complexity of social issues and governance (see table).

The evidence-oriented approach is the most dominant in development cooperation evaluation, but it is not necessarily the most illuminating. The realistic approach, meanwhile, is mainly applied in the fields of justice, health and social services in European countries. According to some theorists, however, a shift is occurring away from evidence-oriented to realistic and complexity evaluation. The importance of the social and political contexts in which a policy is employed is increasingly recognized. This context is dynamic, complex and multilayered, with many different agencies and networks involved.

Complexity evaluation is related to the more recent use of complexity theory in social science. This emerging approach may provide useful insights to help overcome serious flaws in current evaluation practice, particularly in developing countries.2


I recently read a paper, “The OECD/DAC Criteria for International Development Evaluations: An Assessment and Ideas for Improvement”. This paper, others like it, and my own experiences all point to the fact that evaluation practice is rapidly evolving. We’re seeing a gradual shift in the dominant paradigms – which includes the need to view things from the point of view of those who are directly benefiting (or not!) from a development initiative, and the need to take into account PROCESS (how we do things) and not simply WHAT we do and WHAT is achieved.

We also see this in a number of aid instruments (such as the Paris Declaration or Sector Wide Approaches) which seem to support a shift towards greater ownership by the recipients of aid (and therefore greater responsibility). But the question is: do they enable both recipients and funding agencies to gain the knowledge and learning they need in order to take up these new responsibilities in a world of shifting paradigms? Robert Chambers’ paper “Poverty Unperceived: Traps, Biases and Agenda” really got me thinking about this.

Have a read and please share your thoughts! 

Until soon,