
Turning Evaluation into Learning for Program Design
Background & Challenge
Client: USAID/Uganda (via Q2 Impact)
Sector: International Development
Focus Area: Monitoring, Evaluation, and Learning (MEL) and Collaborating, Learning, and Adapting (CLA) | Strategic Program Design
Role: Director of Performance
USAID/Uganda’s $50 million Monitoring, Evaluation, and Learning (MEL) and Collaborating, Learning, and Adapting (CLA) platform in Kampala, Uganda, was responsible for supporting the performance of all USAID/Uganda programs. Beyond monitoring and evaluation as its core function, the “Uganda Learning Activity” (ULA) played an integral role in strategic planning, policy advocacy, and the design of new USAID-funded programs in Uganda. Serving as the Director of Performance, Jo-Ann led efforts to ensure that evidence generated through monitoring and evaluation was transformed into actionable learning, strategic insights, and adaptive programmatic decisions across the Mission.
USAID’s portfolio in Uganda included numerous large, multi-million-dollar programs operating in dynamic, shifting contexts. While evaluations were routinely commissioned, the findings often arrived in long, technical reports, sometimes dozens or even hundreds of pages in length. Stakeholders and technical teams found the reports difficult to digest, and even harder to apply in a timely or meaningful way. Additionally, many evaluations were based on assumptions and plans that no longer aligned with current realities. As a result, valuable opportunities to adapt existing programs and improve future programming were being lost.
Approach
To bridge the gap between evidence and action, the Performance team led a process to translate evaluation findings into learning opportunities that were timely, relevant, and easy to engage with. Actions included:
Creating digestible, high-impact learning products (briefs, slide decks, and facilitated session materials) tailored to senior leaders and program staff.
Conducting after-action reviews and facilitated learning sessions with Mission teams to reflect on what was working, what was not, and where adaptation was needed.
Distilling evaluation findings into decision-relevant insights, directly tied to ongoing or upcoming program design.
Working with teams to identify course-correction opportunities within active programs and embedding learning moments into routine planning.
For example, the team synthesized findings from multiple sector evaluations into a single cross-cutting learning session to inform the Country Development Cooperation Strategy (CDCS) planning process.
Outcomes & Impact
USAID/Uganda leadership gained clear, concise insights they could act on, many within weeks rather than months.
Evaluations became living documents, used to inform program design, procurement planning, and adaptation within existing projects.
Cross-team collaboration increased, as learning sessions brought together technical, program, and operations staff around shared challenges and evidence.
Lessons learned were directly incorporated into new activity designs, improving alignment with Uganda’s evolving development context.
With a focus on turning data into learning and knowledge, the perception of evaluation shifted: from a compliance exercise to a critical learning and decision-making tool. By re-centering the purpose of evaluation around usability, relevance, and timing, the Mission was better equipped to approach results with an adaptive mindset, design programs grounded in close stakeholder engagement, and ultimately steward resources for greater impact.
Why It Matters - Lessons with Broad Application
While this work took place in an international development context, the principles and practices extend far beyond that sector. This case demonstrates the value of:
Facilitating strategic learning from complex or mixed data sources.
Creating systems that support reflection, adaptation, and timely decision-making.
Engaging diverse teams and stakeholders to surface insights and foster shared ownership.
Connecting evaluation efforts to tangible programmatic improvements and outcomes.
These capabilities are just as relevant to nonprofit organizations, professional associations, and community-based initiatives closer to home. The underlying commitment remains the same: using evidence to drive meaningful, mission-aligned action.
