Chapter II in “INCLUSIVE OR RATIONAL PARTICIPATORY EVALUATION?” Slovenian Evaluation Society’s Working Papers, Spring/Summer 2024, forthcoming

To explore the inclusiveness and collective rationality of design-based approaches in participatory evaluation, this text examines four popular tools: SenseMaker by Cognitive Edge (focusing on collective sensemaking), Outcome Harvesting by Wilson-Grau (identifying unintended outcomes), Most Significant Change by Rick Davies (capturing core changes), and Causal Mapping (Copestake et al., Goddard, Powell; visualizing causal relationships).

The four tools are first presented from their authors’ point of view. The authors design their tools at the intersection of results-based (and, more broadly, realist, objectivist) methodologies and design-based (constructivist, subjectivist) approaches to evaluation. Particular attention is paid to how the tools contribute to the inclusiveness and rationality of participatory evaluation.

Three steps are usually taken in constructivist evaluations. First, gather diverse perspectives and collect comprehensive data from multiple sources, participants and stakeholders, acknowledging subjectivity and context. Second, identify themes, causal factors or patterns in the data, in their commonalities and differences. Third, build a shared understanding of these patterns through a participatory process, constructing a shared narrative that integrates different perspectives while respecting their interpretations and principal differences.

II.1 Most Significant Change (MSC)

This approach was developed by Rick Davies as a participatory method for the monitoring and evaluation of complex development interventions. The MSC provides a simple means of making sense of a large amount of complex information (Davies, Dart, 2005). It designs the evaluation as a means of identifying and aggregating the views of beneficiaries and stakeholders. The approach is best suited to large-scale, open-ended interventions, which would otherwise be difficult to evaluate using traditional methods. It is the most justified approach for the evaluation of interventions where it is not possible to say with any certainty what the outcomes will be or how they will vary across beneficiaries.[1]

The MSC collects qualitative information from the intended beneficiaries of an intervention, in the form of a description of a change (intermediary impact) each considers the most significant. The MSC then seeks an explanation of why respondents see that change as most significant. It also requires external verification, in which the collected stories are checked to see whether the described changes align with what is known about them from neutral external sources. The MSC develops an intuitive understanding of the intervention’s intermediary impacts in conjunction with the hard facts (McDonald, 2021).

The verification process is followed by the use of one or more selection panels in small groups of beneficiaries, each representing one intervention domain, to review the set of collected MSC stories. They hold in-depth discussions and vote on which specific change they consider the most significant of all.[2] At this stage, the evaluator becomes the facilitator who has to manage the debate, give the floor to everyone and be able to handle possible setbacks.

The plenary session takes place when the most significant changes from each group have been selected. The session is attended only by project stakeholders (the program funders or the head office staff). They choose the most significant changes as the ones that best represent the sort of outcomes they wish to fund. The intervention’s stakeholders are also asked to document the reasons for their choice. This enables them to frame a final set of stories that represent the collective view about the most important changes (Davies, Dart).
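The panel stage described above can be illustrated with a minimal sketch. This is not part of the MSC methodology itself; the story identifiers, domain names and vote counts are hypothetical, and the tally simply shows how each domain panel’s votes yield one ‘most significant’ story per domain before the plenary session.

```python
# Illustrative sketch only: a hypothetical tally of MSC selection-panel votes.
# All story IDs, domains and votes below are invented for demonstration.
from collections import Counter

def select_most_significant(votes_by_domain):
    """For each intervention domain, return the story that received the most
    panel votes -- that domain's candidate 'most significant change'."""
    winners = {}
    for domain, votes in votes_by_domain.items():
        tally = Counter(votes)  # votes: list of story IDs cast by panel members
        winners[domain] = tally.most_common(1)[0][0]
    return winners

votes = {
    "health":    ["story_3", "story_1", "story_3", "story_3"],
    "education": ["story_7", "story_7", "story_2"],
}
print(select_most_significant(votes))
# {'health': 'story_3', 'education': 'story_7'}
```

The per-domain winners would then be forwarded to the plenary session, where stakeholders select and document the final set of stories.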

Evaluation is undertaken as an appreciative inquiry, a form of constructivist synthesis that focuses on identifying and amplifying positive aspects of a topic or experience.

The tool encourages the active involvement of various stakeholders in the evaluation process. The MSC is inclusive since it gives voice to beneficiaries as well as to stakeholders of the intervention. The MSC also contributes to objectivity in participatory evaluation by focusing on the actual changes that have been observed by intervention participants. It collects diverse stories directly from beneficiaries – the ones best suited to explain what is achieved with an intervention. This ensures a range of perspectives is considered, contributing to a less biased and subjective participatory evaluation and providing evaluation results that are nevertheless based on evidence.

II.2 Causal Mapping (CM)

A causal map is a concept introduced by Axelrod (1976). The CM emphasizes the importance of understanding complex forms of causation beyond single causes. It elicits the causal beliefs and assumptions of different stakeholders involved in an intervention. The CM explains that the success or failure of an intervention is not due to a single factor, but rather to a complex web of causal relationships, and configurations of causal conditions associated with specific outcomes. When the causal maps of different stakeholders are compared, areas of their agreement and disagreement are identified.

The CM is applied as a specialised evaluative tool (Powell, 2017) since it can identify the key factors that contribute to the change induced by an intervention, facilitate dialogue among the stakeholders and support them in making informed decisions and actions. Powell et al. (2023) present the CM as a tool that lifts the lid on the black box of qualitative analysis, showing how to draw logically consistent and inclusive conclusions from qualitative data. Mapping is especially effective since it enables a graphical representation of causal networks, wherein factors (referred to as nodes or elements) are interconnected by arrows symbolizing evidence for, or beliefs about, causal influences from the initiating factor to the concluding one (GCM).[3] An illustrative instance of a causal map within the realm of policy impact evaluation is the theory of change, a conceptual framework through which decision-makers and evaluators perceive the world (Powell et al.).

Powell et al. (2023) explain that the objective of the CM approach is to expound on stakeholders’ perspectives, elucidating the causal relationships they perceive and the beliefs they hold regarding causation. It is concerned with beliefs rather than empirical facts. Diverging from the conventional scientific exploration of causality, causal mapping seeks to capture comprehensively the causal assertions people convey through narratives, as opposed to deducing causal relationships via statistical analysis or pre-structured inquiries. These maps offer insights into stakeholders’ cognitive frameworks, providing a faithful representation of their reasoning and behaviors (Powell).

The CM is participatory by incorporating diverse perspectives in the identification of causal relationships. More importantly, participants are not merely subjects of the evaluation nor mere informants, but are actively engaged in identifying the inner drivers of intervention-induced change. This is a strong argument in favour of assessing it as inclusive.

The CM nevertheless fosters causal, and thus rational (not constructivist), reasoning, explaining what people think and how they resolve the challenge of causal attribution. It further accomplishes a ‘reality check’ (Copestake et al., 2019) on each participant’s theory of change regarding the main drivers of the intervention’s impacts. Focusing on the actual causal relationships between drivers of change decisively contributes to the objectivity of participatory evaluation.

The CM prescribes a rigorous and transparent approach to eliciting causal statements from the stakeholders. It incorporates multiple sources of data and evidence to reduce potential biases and to verify and refine the causal maps. The process of constructing visual causal maps entails the systematic collection and meticulous analysis of narrative descriptions of change sourced from individuals – the target beneficiaries within the studied population. Evaluators code and analyze transcripts of narratives and available documents to discern stakeholders’ perspectives on causal relationships (GCM). Coding enables the classification of causal claims according to whether they explicitly link outcomes to specified activities, and compares them with the commissioners’ theory of change. Transcripts of interviews are written up in pre-formatted spreadsheets to facilitate thematic analysis.

The results of the analysis develop a causal factor network, wherein elements exert direct and indirect influences on one another along the pathways within the network (GCM). Semi-automated generation of summary tables and visualizations enables interpretation of the evidence (Copestake et al.). Causal mapping organises diverse links in a way that enables the drawing of a unified composite global causal map (GCM). By applying filters and algorithms, the global map can be used to address distinct inquiries, thereby generating distinct sub-maps tailored to different groups of information sources (GCM).
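The mechanics of aggregating coded claims into a global map and filtering it into sub-maps can be sketched minimally. This is only an illustration of the data structure, not the actual GCM software: the factor names and sources are invented, and real coding is done from interview transcripts.

```python
# Illustrative sketch only: a causal map stored as a list of coded causal
# claims (cause -> effect, with the source making the claim). Factor names
# and sources are hypothetical; real CM coding works from transcripts.
def build_global_map(claims):
    """Aggregate individual causal claims into a global map:
    (cause, effect) -> list of sources providing evidence for that link."""
    global_map = {}
    for cause, effect, source in claims:
        global_map.setdefault((cause, effect), []).append(source)
    return global_map

def sub_map(global_map, group):
    """Filter the global map to the links claimed by one source group,
    yielding a sub-map tailored to that group's perspective."""
    return {link: srcs for link, srcs in global_map.items() if group in srcs}

claims = [
    ("training", "new skills", "beneficiary_A"),
    ("training", "new skills", "beneficiary_B"),
    ("new skills", "higher income", "beneficiary_A"),
    ("funding", "training", "commissioner"),
]
gm = build_global_map(claims)
print(sub_map(gm, "commissioner"))
# {('funding', 'training'): ['commissioner']}
```

The count of sources per link stands in for the ‘evidence for or beliefs about’ a causal influence; applying different filters yields the distinct sub-maps the text describes.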

The map informs further investigation (Powell et al.) by stakeholders and commissioners. The pivotal role of the evaluator in causal mapping revolves around the careful collection and accurate visualization of causal evidence, while leaving it to stakeholders to conclude how to proceed from the revealed maps. A final step, the conclusive interpretation of the implications of these maps, already extends ‘beyond the realm of causal mapping per se’ (Powell et al.).

II.3 SenseMaker (SM)

This is a narrative-based approach developed by Cognitive Edge. The SM is based on the assumption that complex social systems cannot be fully understood by traditional quantitative methods. It strives instead to integrate within the evaluation process a wide array of perspectives on these systems from various stakeholders based upon shared collective wisdom. Voices That Count, a collaborative network of experts and practitioners, has developed an evaluative framework described as participatory narrative sensemaking (SM).[4]

The SM’s mission is to structure the unknown (Van der Merwe et al., 2019). It aims to make sense of complex social processes by gathering and analysing narratives, stories, and contextual information. The tool is designed to identify patterns within the collected narratives, providing insights into the different ways groups of people interpret and make sense of a given complex situation. This helps the evaluator uncover patterns of differentiation (oppositions) and overlaps (shared understandings) between social groups, themes, or sectors. The SM has predominantly found application in the delineation of ideational cultures, cognitive frameworks, and attitudinal orientations (Van der Merwe et al.).

The core of this approach is the analysis and interpretation of a series of ‘micronarratives’ on a narrowly specified topic. Participants are initially presented with a selection of distinct prompts designed to elicit narratives about areas of interest. Participants are then tasked with ‘plotting’ their responses quantitatively, thereby indicating their relative importance to predefined signifiers (terms, or criteria) from the perspective of the previously narrated story. This plotting of micronarratives is carried out across three evaluative domains, followed by plotting in ‘dyads’, and finally with a set of separate multiple-choice questions (Bartels et al., 2019). In this way, the qualitative micronarratives are translated into numerical data, thus facilitating statistical analyses and visual presentation of aggregated results. Moreover, quantification aligns responses with specific semiotic markers (Van der Merwe et al.), such as keywords. Through self-signification, participants assign meaning to their micronarratives. This eliminates the need for expert interpretation of obtained results, which reduces researcher bias and provides for more objective evaluative outcomes (Van der Merwe et al.).
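The self-signification step described above can be sketched with a minimal example. This is not the SenseMaker software: the respondent groups, triad weights and dyad scores are invented, and the sketch only shows how plotted responses become numerical data that can be aggregated by group.

```python
# Illustrative sketch only: each micronarrative is 'self-signified' as a
# position in a triad (three weights summing to 1) plus a dyad score.
# Groups, signifier weights and scores are hypothetical.
def mean_triad(responses, group):
    """Average triad position for one group of respondents, revealing a
    collective pattern behind the individual micronarratives."""
    pts = [r["triad"] for r in responses if r["group"] == group]
    n = len(pts)
    return tuple(round(sum(p[i] for p in pts) / n, 2) for i in range(3))

responses = [
    {"group": "youth",  "triad": (0.6, 0.3, 0.1), "dyad": 0.8},
    {"group": "youth",  "triad": (0.4, 0.5, 0.1), "dyad": 0.6},
    {"group": "elders", "triad": (0.1, 0.2, 0.7), "dyad": 0.3},
]
print(mean_triad(responses, "youth"))
# (0.5, 0.4, 0.1)
```

Comparing such group averages is one simple way of surfacing the differentiations and overlaps between social groups that the evaluator then takes into the sensemaking workshops.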

This dual process of quantification and visualization serves to distil collective meanings inherent within the heterogeneous narratives (Deprez, 2021). The SM procedure departs from the conventional approach of conducting an in-depth examination of individual narratives, such as in the case of the MSC. Instead, it focuses on the entire ensemble of narratives. The applied approach discerns patterns as well as instances that deviate from the norm, differentiations across various groups, and correlations among distinct signifiers. This enables the evaluator to identify the main questions, issues and themes that should be further explored during the collective sensemaking workshops (Deprez). These workshops may encompass a diverse spectrum of participants, including envisaged project beneficiaries, program and managerial personnel, and donors (Guijt et al., 2022). The collective sensemaking endeavour engenders the creation of a collective cognitive map (Van der Merwe et al.) elucidating the underlying order behind the disorder.

The SM is inclusive both at the micro level (gathering stories) and at the macro level (sense-making). It involves participants in the data collection process by capturing their stories and perspectives. It also empowers participants to be co-analysts in the sensemaking process. The SM contributes to the objectivity of participatory evaluation by delivering more comprehensive sensemaking that better distils collective meanings from the heterogeneous narratives.

II.4 Outcome Harvesting (OH)

This participatory evaluative tool was formulated by Ricardo Wilson-Grau (2019). It is based on the assumption that the outcomes of an intervention are complex and multifaceted and cannot be fully understood by traditional quantitative methods. The tool invites stakeholders closely associated with the intervention to identify the relevant changes (referred to as ‘outcomes’) that have emerged. Changes in behaviours comprise actions, activities, or practices of the key stakeholders of the intervention that are necessary so that people or the environment benefit (Wilson-Grau). The crux of the approach is centred upon the actors who change, rather than concentrating exclusively on the ultimate beneficiaries of that behavioural change. Traditional evaluation methodologies only seek to assess the level of effectiveness or efficiency and do not manage to capture the complex reality behind the diverse agent changes involved in the implementation of an intervention. Furthermore, traditional evaluations do not deal with human and social behaviours (Peroni Fiscarelli, 2022).

The OH is a particularly useful evaluation approach in situations where outcomes are emergent and may not be easily predicted in advance: in complex, uncertain environments with many unintended outcomes, including negative ones, where relations of causes and effects can be identified but are not fully understood, or when it is not possible to define concretely most of what an intervention aims to achieve (Wilson-Grau). It is likewise suited to assessing interventions for which social change is the purpose or a significant part of what is required for success, or interventions that underwent so much change – in design, external conditions or mechanism of implementation – that it is not useful to assess what they achieved against what was originally planned. Other cases are interventions that centre around the delivery of goods with no interaction with the beneficiaries (subsidies, plans, and vouchers), as well as evaluations of community development programs, in particular those aimed at increasing the social inclusion of individuals (Wilson-Grau).

Technically, evaluators first collect (‘harvest’) evidence of the changes, and then, working backwards, determine whether and how the intervention has contributed to these changes – instead of working forward on measuring progress towards a pre-defined set of goals, as is the case in conventional approaches. The harvested information undergoes several participatory rounds of revision to make it specific and comprehensive enough. It is then validated by comparing it with information gathered from independent, knowledgeable and authoritative sources.

In the synthesis of validated outcome descriptions, the evaluator plays a crucial role. The evaluator classifies all outcomes, usually according to the objectives and strategies of the key stakeholders, then undertakes qualitative data analysis and interprets the analysis results for their significance in achieving a mission, goal, or strategy. These interpretations answer the harvesting questions (Wilson-Grau). The evaluator concludes the study by proposing issues for further discussion between stakeholders, about how they can make use of the verified findings (Wilson-Grau).
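The classification step just described can be sketched minimally. The outcome descriptions, objectives and validation flags below are hypothetical and not drawn from Wilson-Grau’s materials; the sketch only shows how validated outcomes are grouped under stakeholder objectives before interpretation.

```python
# Illustrative sketch only: classifying harvested, validated outcome
# descriptions by the stakeholder objective each was coded against.
# All outcome texts and objectives are invented for demonstration.
def classify_outcomes(outcomes):
    """Group outcome descriptions under the objective each relates to,
    so their collective significance can be interpreted and discussed."""
    by_objective = {}
    for o in outcomes:
        by_objective.setdefault(o["objective"], []).append(o["description"])
    return by_objective

harvest = [
    {"description": "Ministry adopted reporting guideline",
     "objective": "policy change", "validated": True},
    {"description": "Local NGOs formed a joint platform",
     "objective": "networking", "validated": True},
    {"description": "Draft law amended",
     "objective": "policy change", "validated": True},
]
# Only outcomes that passed external validation enter the synthesis.
summary = classify_outcomes([o for o in harvest if o["validated"]])
print(summary["policy change"])
# ['Ministry adopted reporting guideline', 'Draft law amended']
```

The resulting groupings are what the evaluator interprets for their significance and brings back to stakeholders as issues for further discussion.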

The OH is an inclusive approach because it captures a wide array of perspectives of diverse stakeholders in the identification of outcomes and in interpreting their collective implications. Its results also contribute to collective rationality since the OH focuses on real-world outcomes that have been observed by stakeholders, as opposed to relying solely on planned indicators. It also reduces potential evaluator bias associated with preconceived notions.


The four selected tools contribute positively to social inclusion by involving diverse stakeholders in identifying changes or in assigning collective meaning to their dissimilar understandings of complex interventions’ achievements. They also enhance collective rationality by focusing participatory evaluation on the most significant communal issues, providing systemic understanding, or promoting collective interpretation of quantified and visualized complexities. They are ‘groundbreaking’ (Wilson-Grau) since they pursue an inclusive democratic shift towards more robust and informed decision-making in participatory evaluation. At least, this is the intended outcome according to the authors of the four approaches and the advocates of these tools.

A comparative assessment of the four tools against selected criteria will reveal a more nuanced picture in Chapter IV. Before that, the paper must present both criteria: how they are constructed and how they are meant to ensure the realization of the two imperatives of participatory evaluation (Chapter III).

[1] Monitoring and Evaluation NEWS,

[2] Ibid.

[3] Guide to Causal Mapping.
