Much of my work as a monitoring and evaluation (M&E) professional has not simply been practicing M&E, but developing the capacity of others to practice, manage, support and/or use M&E. I am not alone; other evaluation colleagues have echoed similar experiences.
Based on an expert lecture I gave at the American Evaluation Association (AEA) 2017 conference in Washington, D.C., this post identifies ten key considerations for evaluation capacity building (ECB) in organizations.
Before we dive into the top ten recommendations, it is useful to step back and look at what we are diving into. An organization refers to a wide range of groups in which people work together to achieve collective objectives – from national and international agencies to nongovernmental organizations (NGOs), volunteer organizations, private organizations, and community-based organizations (CBOs).
In its broadest sense, capacity can be understood as “the ability of people, organizations and society as a whole to manage their affairs successfully” (OECD, 2006, p.12). Building these capacities involves improving the ability of people to solve problems, achieve objectives, and perform better over time. For evaluation capacity building (ECB), it is useful to revisit one of the earliest and most cited definitions from a volume of the journal, New Directions for Evaluation, devoted to the topic:
“ECB is a context-dependent, intentional action system of guided processes and practices for bringing about and sustaining a state of affairs in which quality program evaluation and its appropriate uses are ordinary and ongoing practices within and/or between one or more organizations/programs/sites” (Stockdill, Baizerman, & Compton, 2002, p. 8).
[Another term often used, typically in international development, is evaluation capacity development (ECD). Some argue that ECB focuses more on individual capacity development, whereas ECD is focused on organizational capacity and institutional readiness. For our purposes, we use ECB to encompass the valuable lessons from ECD (e.g., ECDG, 2014; Horton et al., 2003; Segone & Rugh, 2013; World Bank, 2014).]
Lastly, it is important to remember that evaluation is part of many interrelated processes in an organization’s program management system. Evaluation professionals often find themselves supporting program design, monitoring, data management and reporting. As such, I use ECB broadly to refer to organizational capacity building that includes evaluation and other related processes, especially those involving the collection and assessment of data for reporting and use, as part of a program’s overall management system.
Without any further ado, here are the top 10 tips for ECB:
1. Adopt a systemic (systems) approach to organizational evaluation capacity building (ECB).
ECB does not happen in isolation, but is embedded in complex social systems. Each organization will be distinct in time and place (context), and ECB interventions should be tailored according to the unique configuration of factors and actors that shape the supply and demand for ECB. Supply refers to the presence of evaluation capacity (human and material), and demand refers to the motivations and incentives for evaluation use. It is useful to consider a three-tier conceptualization of capacity building at the micro (e.g., individual), meso (e.g., organizational), and macro (e.g., societal) levels – see diagram below. This levels approach is adopted by others in the ECB and ECD literature (for example, Fukuda-Parr, Lopes, & Malik, 2002; Heider, 2011; Horton et al., 2003; OECD, 2006; Segone & Rugh, 2013).
Although such conceptual diagrams are limited in their representation of reality, this one helps underscore that ECB is embedded in, and interdependent with, a larger dynamic system and should be planned accordingly. Rather than approaching ECB as simply building individual knowledge and skills, a levels analysis broadens the lens, situating capacity building within nested systems whose effects cascade across the levels of analysis. For example, government policies for greater accountability can have a profound impact on organizational demand and support for ECB.
2. Plan, deliver and follow up ECB with attention to transfer.
If organizational ECB is to make a difference, it is not enough to ensure learning occurs; targeted learners need to apply their learning. As Hallie Preskill and Shanelle Boyle (2008, p. 453) state, “Unless people are willing and able to apply their evaluation knowledge, skills, and attitudes [KSA] toward effective evaluation practice, there is little chance for evaluation practice to be sustained.” This vision for transfer needs to be clearly understood by key stakeholders, and should inform the design, delivery and evaluation of organizational ECB programs.
3. Meaningfully engage stakeholders in the ECB process.
ECB will be more effective when it is done with rather than to organizational stakeholders. Stakeholder participation is more than consultation; it is a direct opportunity for people to provide input and become involved in various aspects of ECB. Meaningful engagement helps build ownership to sustain ECB implementation and use. It is especially important to identify and capitalize on ECB champions, and to mitigate adversaries who can block ECB and its uptake. This stakeholder analysis is an important part of the overall systems analysis of the setting in which ECB is to be provided.
4. Systematically approach organizational ECB, but remain flexible and adaptable to changing needs.
ECB is intentional and conducted to meet specific needs. Therefore, it should be systematically planned: gather information and analyze training demand, needs and resources; identify relevant objectives; and design interventions to realistically achieve and evaluate these objectives. However, a systematic approach does not mean a rigid blueprint that is blindly followed. That can inhibit experimentation and the ability of ECB to adapt and respond to changing needs and unanticipated outcomes – whether positive opportunities or challenging obstacles. A flexible approach to ECB recognizes the dynamic nature of the system in which an organization is embedded, which will vary and change over time.
5. Align and pursue ECB in relation to organizational objectives.
A systemic approach to ECB includes careful attention to other organizational objectives and capacity building interventions. ECB does not exist for its own sake, but as a means to an end relative to the organization’s overall mission and strategic objectives. It should not be “siloed,” nor should it duplicate or compete with other capacity building efforts. ECB is ideally planned as part of a coherent strategy that complements and reinforces overall capacity building in an organization. For example, rather than a stand-alone training on monitoring and evaluation (M&E), explore ways to ‘blend’ it into existing program management training.
6. Ensure your ECB strategy is practical and realistic to organizational capacities.
ECB should be realistic given the available time, budget, expertise and other resources. Such considerations include the organizational context, number of staff, their location, their availability, their current level of evaluation understanding, existing training materials and facilities, and the number and experience of people required to support ECB. Just as when planning an evaluation, this entails careful context analysis to ensure ECB objectives are feasible given these real-world practicalities, and candid stakeholder communication to frame realistic expectations.
7. Identify & capitalize on existing resources for ECB.
There are many resources for and approaches to ECB, including face-to-face training, online webinars, communities of practice, discussion boards, self-paced reading, and blogs like this one. These resources can be used alone or blended as part of a capacity building program that supports different learning styles and needs. Indeed, it is important not to ‘reinvent the wheel’ if it can be ‘recycled’; this can save time and money. However, do not fall into the trap of adopting a resource just because it is available—ensure that ECB resources are relevant for the desired capacity building objectives, or can be modified accordingly.
8. Design and deliver learning grounded on adult learning principles.
As our book on M&E training underscores, adults learn better when learning is delivered in an enjoyable and meaningful way. Adults are self-directed learners who bring to training past experiences, values, opinions, expectations and priorities that shape why and how they learn. Principles for adult learning stress a learner-centered approach that is applied, experiential, participatory and builds upon prior experience. They also include good practices for learning regardless of age, such as the use of mixed methods, structured progression, repetition, and feedback. Below we summarize 14 key principles for adult learning, which you can read more about here.
Key Adult Learning Principles for M&E Training
1. Establish a safe and respectful climate – Adults learn better when they feel safe and respected.
2. Respond to the “need to know” (NTK) – Adults prefer to know what, why, and how they are learning.
3. Provide a structured yet flexible progression – Adults prefer learning that is well-organized.
4. Empower with genuine participation – Adults want to share full responsibility for their learning.
5. Incorporate past experience – Adults prefer learning that builds upon their prior experience.
6. Keep it relevant & meaningful – Adults prefer practical learning that meets their needs.
7. Provide direct experience – Adults learn best by doing.
8. Make it active, fun and challenging – Adults learn more when it is engaging and enjoyable.
9. Use mixed/multisensory methods – Adult learners require a mixture of learning approaches.
10. Differentiate instruction – Adult learning is more effective when instruction is tailored to different learners’ needs.
11. Utilize collaborative, peer learning – Adults effectively learn from each other.
12. Include practice and repetition – Adult learning is enhanced by repetition.
13. Design for primacy and recency – Adults remember best what they learn first and last in a sequence.
14. Provide feedback & positive reinforcement – Adults want to know if they are learning, and to be encouraged in the process.
Chaplowe & Cousins (2016). M&E Training: A Systematic Approach. Sage Publications.
9. Uphold professional standards, principles and ethics.
An essential aspect of capacity building is to instill an understanding of and appreciation for ethical conduct and other standards for good practice. Specific guidelines and principles will vary according to context – sometimes specific to the organization itself, other times adopted from industry standards, such as the AEA’s Guiding Principles for Evaluators and Statement on Cultural Competence in Evaluation, and the JCSEE’s Program Evaluation Standards. Whatever the source, it is critical that ECB is informed by and reinforces professional standards, principles and ethics.
10. Monitor and evaluate ECB efforts to learn and adapt.
ECB should practice what it preaches: track and assess ECB efforts in order to adapt, improve and be accountable to ECB objectives and stakeholders. This begins at the design stage, when identifying the ECB objectives to be assessed, so resources can be allocated and steps taken to monitor and evaluate ECB moving forward. It is also important to remember that monitoring and evaluation should not only focus on identified ECB objectives, but should remain alert to unanticipated consequences, whether positive or negative, and other contextual cues (e.g., opportunities and threats), so the ECB strategy can be revised and adapted accordingly – which brings us full circle to tip #1 on adopting a systems perspective for ECB.
(11.) Ancillary Tip. The above top 10 tips are far from exhaustive, and because they concern human organizations and behavior, they are not absolute.
Citations
Chaplowe, S., & Cousins, J. B. (2016). M&E Training: A Systematic Approach. Thousand Oaks, CA: Sage Publications.
Compton, D., Baizerman, M., & Stockdill, S. H. (Eds.). (2002). New Directions for Evaluation: Vol. 93. The art, craft and science of evaluation capacity building (pp. 1–120). San Francisco, CA: Jossey-Bass.
ECDG (Evaluation Capacity Development Group). (2015). Retrieved from http://www.ecdg.net
Fukuda-Parr, S., Lopes, C., & Malik, K. (Eds.). (2002). Capacity for development: new solutions to old problems. New York: United Nations Development Programme (UNDP) with Earthscan Publishing.
Heider, C. (2011). Conceptual framework for developing evaluation capacities: Building on good practices in evaluation and capacity development. In R. Rist, M.-H. Boily, & F. Martin (Eds.), Influencing change: Evaluation and capacity building (pp. 85–110). Washington, DC: The World Bank.
Horton, D., Alexaki, A., Bennett-Lartey, S., Brice, K. N., Campilan, D., Carden, F., . . . Watts, J. (2003). Evaluating capacity development: Experiences from research and development organizations around the world. The Netherlands: International Service for National Agricultural Research (ISNAR); Canada: International Development Research Centre (IDRC); The Netherlands: ACP-EU Technical Centre for Agricultural and Rural Cooperation (CTA).
OECD (Organisation for Economic Co-operation and Development). (2006). The challenge of capacity development: Working towards good practice. Paris: OECD.
Preskill, H., & Boyle, S. (2008). A multidisciplinary model of evaluation capacity building. American Journal of Evaluation, 29(4), 443–459.
Segone, M., & Rugh, J. (Eds.). (2013). Evaluation and civil society. Stakeholders’ perspectives on national evaluation capacity development. UNICEF, IOCE, and EvalPartners.
World Bank. (2014). Evaluation capacity development (ECD). Independent Evaluation Group and the World Bank Group. Retrieved from https://ieg.worldbankgroup.org/evaluations/evaluation-capacity-development-ecd