The African Evaluation Association (AfrEA), the Network of Networks on Impact Evaluation (NONIE) and the International Initiative for Impact Evaluation (3ie) have joined forces to bring to Africa some of the best expertise from all continents on one of the most discussed topics among evaluation and development communities worldwide.

The conference defines impact evaluations as those studies which concern themselves with determining and understanding the short, medium and long term outcomes or impacts of projects, programs and policies. The term is not limited to any specific methodology in any particular discipline. (more…)

More often than not, I come across a type of monitoring I’ve started to refer to as “tick & go” (many thanks to an MfI network member for the expression). This typically begins with a frantic scramble to develop SMART indicators which are sometimes (but rarely) linked to a clearly defined and understood programme logic. Indicators are then duly listed in a neat & tidy table with columns for targets & actuals (annual & cumulative) – preferably expressed as numbers or percentages.
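
For readers who haven’t met this format, here is a minimal sketch of what such a “tick & go” table tends to look like – the indicator names and figures below are invented purely for illustration and are not drawn from any real programme:

```python
# Hypothetical "tick & go" indicator table: each row holds an indicator with
# its annual and cumulative targets and actuals, and the script prints the
# neat table of numbers and percentages described above.
indicators = [
    # (indicator, annual target, annual actual, cumulative target, cumulative actual)
    ("Farmers trained in FFS methods", 400, 320, 1200, 950),
    ("Demonstration plots established", 25, 25, 75, 70),
    ("Households adopting improved seed", 300, 220, 500, 380),
]

print(f"{'Indicator':38} {'Tgt':>6} {'Act':>6} {'CumTgt':>7} {'CumAct':>7} {'% ann':>6}")
for name, tgt, act, cum_tgt, cum_act in indicators:
    print(f"{name:38} {tgt:>6} {act:>6} {cum_tgt:>7} {cum_act:>7} {act / tgt:>6.0%}")
```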

This is all very well and I can most certainly appreciate why it’s so popular. When you’re running around supervising a number of different projects/programmes, have only a couple of days to deliver a report and too many reports to read – all you really want to do is “tick & go”! Unfortunately, though, numbers & ticks can only go so far in helping us understand (and make decisions within) the complex situations most programmes are having to work in. (more…)

Further to Elias’ post entitled “Programme management should work beyond influencing or contributing to impact”, I was recently sent the following paper, which I thought might be of interest to others. The paper speaks very much to what we try to do in SMIP and what we’re working towards. (more…)

The SMIP team is currently in the process of supporting the Zanzibar ASSP & ASDP-L projects to roll out their participatory and learning-oriented M&E Plan. One of the first activities was to try to build the capacity of, and motivate, District Facilitators (extension officers, subject matter specialists, etc.) to facilitate the Farmer Field School (FFS) monitoring sessions. We did this through training consisting of 1½ days in a “classroom setting” and 4½ days in the field working with the farmers. (more…)

I was facilitating a session on “Introduction to Managing for Impact” at a training workshop organized from September 15 to 25, 2008 at Haramaya University. The session dealt with issues related to impact and the need for programmes or projects to organize themselves and put together a framework that will enable them to achieve impact.

In the middle of the session a participant raised an issue: we in programmes and projects are accountable for the activities, outputs and the programme outcome; however, we also believe that through what we do to realize outcomes we influence impact. Therefore, as project managers, we shouldn’t be asked about or held accountable for impact. This remark reminded me of an article titled “M&E as learning: Rethinking the dominant paradigm” by Jim Woodhill, which I had read the day before while preparing to facilitate the session mentioned above.

The article mentioned, (more…)

It is a blistering Friday afternoon in Pabalelo Township in the remote town of Upington in the Northern Cape of South Africa. A young twenty-one-year-old woman sits in the shade outside her makeshift home, watching her children play in the dusty front yard. A 4X4 vehicle drives by and parks outside her home. It has a government logo on the sides of the doors. A well-dressed man walks up to her from the vehicle and introduces himself as being from the department of welfare in the province. He wishes to talk to her about the social and economic changes she has experienced in her household and her community over the past twelve months. ‘I have nothing to say,’ she responds curtly. ‘You government officials pop up here now and then, write your reports and then disappear without involving us in any of your plans that affect our lives! Here we are, still poor and unemployed.’

The government official is left speechless and disturbed by this encounter. He realizes that perhaps the young woman was right: they often do come into the community and set up projects without actually consulting the community or even involving them in the development of projects that are designed to address their plight. No wonder it is so difficult to accurately measure whether there have been any significant and qualitative changes in the lives of the poor, both before and after these projects have been established. (more…)

Dear All,

Last week, I was deeply privileged to be invited to this blog by Mine, and sure enough the blog far exceeded my expectations! I shared with Mine my first impressions soon after skimming through the rich experiences shared on this forum, and my attention was particularly caught by the metaphor of “craftsmanship” / “craftswomanship”.

I was lately reflecting on the metaphor of craftsmanship / craftswomanship, and for me this brought back memories of past times when we (in one of the projects using PRA techniques) were driven by the techniques rather than the development issues / questions. Naively, we churned out maps, chapati diagrams, timelines and all tribes of matrices – certainly exciting, and drawing mammoth crowds from the communities, but at best simply skirting around the real development issues! Thankfully, the project registered some unintended positive gains, but it could have performed better against its objectives.

The talk about craftsmanship also reminds me of glaring mismatches / disparities in some of the publications we have; oftentimes, data on a particular parameter, sometimes from the same source and for the same period, shows outright differences… Four years ago, I juxtaposed data on adult literacy in Uganda and compiled it into a leaflet (developed with reference to “How to Lie with Statistics”).

I was baffled by these “contradictions”, and loudly posed questions about the puzzle: could it be that we view the same parameter from very different planes, so that our definitions / descriptions of the same parameter are extremely different? Are we using very different sampling frames, different sample questions, or generalising from very different sample answers, or could the “problem” be emanating from the use of different statistical tools (e.g. median, mode or mean for the average)? The latter could be partly true, graduating some of our calculations into “manipulations” tailored to a particular audience or a specific request (to political audiences, state the very high literacy rates registered; to potential funders, state much lower figures, etc.).
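
To make the point about “median, mode or mean for the average” concrete, here is a minimal sketch in Python, using entirely hypothetical district literacy figures (not the Ugandan data referred to above), showing how the same data can yield quite different headline rates depending on which “average” is reported:

```python
# Made-up district literacy rates, only to illustrate how the choice of
# "average" (simple mean, population-weighted mean, median) changes the
# national figure reported from the very same data.
from statistics import mean, median

# Hypothetical adult literacy rates (%) and adult populations for five districts.
districts = {
    "A": (85, 1_200_000),
    "B": (62, 300_000),
    "C": (48, 150_000),
    "D": (71, 500_000),
    "E": (55, 250_000),
}

rates = [rate for rate, _ in districts.values()]
weights = [pop for _, pop in districts.values()]

simple_mean = mean(rates)                       # every district counts equally
weighted_mean = sum(r * w for r, w in zip(rates, weights)) / sum(weights)  # large districts dominate
middle_value = median(rates)                    # the "typical" district

print(f"Simple mean:   {simple_mean:.1f}%")     # 64.2%
print(f"Weighted mean: {weighted_mean:.1f}%")   # 73.8%
print(f"Median:        {middle_value:.1f}%")    # 62.0%
```

Three defensible calculations, three different “adult literacy rates” – which is exactly the room for (wo)manipulation described above.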

My contention will always be that, as the craftsmen and craftswomen become more exposed to and equipped with the necessary tools, we still keep the development question as our guide (to determine what combination of tools to use, and not vice versa), and that we do not use the tools for (wo)manipulating the audience… just thoughts stemming from the crafting metaphor (apologies for this digression from your focussed discussions).

On another note, I would like to believe you are already aware of the forthcoming European Evaluation Society Conference in Lisbon – late September to early October 2008. Is anyone on this blog planning to participate? I think it would be of interest, as impact evaluation will be given particular emphasis – with active participation of NONIE, DFID and probably 3IE… In case you need more information (if it is not too late), I will be happy to send / share more.

Once again, I am honoured to be a part of the “family” of this blog and, as I earlier mentioned to Mine, I will certainly be one on the learning curve, always gleaning lessons from the rich experiences that you all have.

Great many thanks and kind regards

 Simon Kisira