Deepti Sastry, PhD

MEL Foundations for Adaptive Programs

It may seem pedestrian to talk about program adaptation in international development programs, and yet the nuts and bolts that are necessary for an adaptive program are often missing.


This blog offers experience-based insights to encourage program managers, donors, and MEL colleagues to revisit the foundations on which program adaptation is based.


I suggest three foundational blocks in this endeavor:


1. Gathering ‘useful’ monitoring data

Monitoring systems and tools typically yield information about whether an expected activity has been completed or a product/output has been delivered. In some instances, projects gather bespoke information for case studies, which are qualitative and (largely) anecdotal in nature. This information is useful for holding programs to account, largely for assessing whether partners have spent money on the things they said they would. However, it offers little insight into whether the program is doing the ‘right’ sorts of things or whether key assumptions in the theory of change hold. For example, it might not tell us whether training is the right activity to deliver institutional change.

Tip: Pay attention to the information that your monitoring system is generating and ask yourself whether it tells you that you’re doing the ‘right’ things to deliver real change. If the answer is ‘no’, reconsider what data your monitoring tools are gathering and amend them.


2. Ensuring the analytical capabilities of data users

What good is fantastic, reliable, granular data if data users are unable to analyze and use it? Most programs commission evaluations, strategic reviews, case studies, learning reviews, and so forth. These exercises yield invaluable data, but we need to find a way to bring this information together with the monitoring data so that program teams can reflect on what is working, why (or why not), and what else they can do to catalyze change. In addition to bringing this information together in digestible formats, we need to consider whether program teams and donors possess the necessary skills to leverage the information and thereby inform decision-making for program adaptation.

Tip: Map out all your data sources and draw out key themes, messages, and insights. Work with program implementation teams to ask the important questions:

(i) What is working and why/why not?

(ii) What else can we do to catalyze change?

(iii) Are our assumptions about the context and motivations/behaviors of actors legitimate?


3. Balancing learning and reporting

MEL systems are often set up primarily to report on what activities money has been spent on and the tangible products that have been delivered. This is not to suggest that MEL systems only focus on accountability, but the importance of reporting (logframes, results frameworks, etc.) means that accountability often steers the MEL system. Unfortunately, when accountability drives the MEL system, learning takes a backseat; there are only so many hats a MEL system can wear! This, in turn, skews the types of tools and methods that a program team is likely to use. For example, a MEL system skewed towards accountability is likely to focus more on determining attribution than on understanding whether activities were appropriate or whether changes are sustainable. There is no silver bullet for balancing these (often) competing agendas, but the recognition that learning is essential must be made explicit and allowed to help drive the design of the MEL system.

Tip: In addition to quantitative indicators (which largely serve accountability), consider qualitative data collection and analysis. You can use methods such as outcome harvesting and contribution analysis as part of the MEL toolset.


Adaptive programming is the holy grail of a MEL system. We can continue to discuss semantics, methods, and value, but there are some first principles whose presence is a sign that program teams and donors are using their MEL systems to prioritize reflection and adaptation.




Dr. Deepti Sastry is a monitoring, evaluation, and learning expert with over 15 years of experience in the international NGO, government, civil society, and private sectors. She has extensive experience working with UK and EU-funded aid programmes, with an emphasis on MEL for programmes in and on fragile and conflict-affected states, private sector development, and impact investing. While a MEL purist, Deepti is passionate both about good-quality, robust MEL tools and processes and about optimizing the value of these tools and processes to generate insights and adaptations. In addition, Deepti is experienced in and uses numerous methodological approaches such as mixed-methods and qualitative evaluation design, appreciative inquiry, the Qualitative Impact Protocol (QuIP), outcome mapping, and qualitative research methods.
