Key words: SenseMaker, Outcome Harvesting, Most Significant Change, Causal Mapping

The traditional, results-based approach to impact evaluation of policy interventions compares actual achievements with initial plans. However, this method falls short when evaluating complex interventions (Powell et al., 2023; Deprez, 2021; Wilson-Grau, 2019; Davies, Dart, 2005). Complex interventions involve an incomplete understanding of the evaluated object. Their impacts are often immeasurable, incommensurable (lacking a common unit for comparison), or unknown. They produce a multitude of effects that cannot be definitively attributed to specific causes or beneficiaries. This is particularly true for interventions aiming to drive social change, enhance interaction between social groups, or develop internal group dynamics (Wilson-Grau, 2019). In such situations, evaluators and participating stakeholders face limitations in knowledge: they see things only partially, epistemically blind and biased.

Over the past few decades, a new generation of evaluative approaches has emerged (Guba, Lincoln, 1989; Weiss, 1972), moving beyond traditional analytical logic. This new paradigm shifts the focus of evaluation towards more stakeholder-driven and design-based constructivist approaches. Stakeholder involvement becomes central, leading to the development of dialogical forms of policy impact evaluation (Patton, 2011). Stufflebeam (2003) defines participatory evaluation as a collaborative assessment process. It evaluates complex interventions by involving a broad range of stakeholders with their diverse viewpoints and by considering various data types, quantitative and qualitative. This approach strives both to be inclusive of multiple perspectives and to contribute directly to the needs of the communities served. The new approaches enhance democratic and collectively rational policy-making, which connects them to the theoretical concept of collective choice (Condorcet, 1785; Arrow, 1951).

The theory of collective choice tackles the challenges of collective decision-making. It examines how societies, communities, or organisations navigate conflicting options, such as how to resolve a given multifaceted challenge. At the core of collective choice lies the tension between individual autonomy and the common good. This reflects the inherent conflict between democracy’s goal of inclusivity and the need for rational decision-making to identify the best option for the collective. The theory of collective choice faces a fundamental limitation framed by Arrow’s impossibility theorem. This theorem states that no aggregation method can perfectly satisfy both inclusivity (reflecting all individual preferences) and rational choice (selecting the most advantageous option for the collective). This impossibility highlights the serious limitations of democratic decision-making.
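For reference, the standard formal statement of Arrow’s result can be sketched as follows; the notation is introduced here only for illustration and is not drawn from the cited texts.

```latex
% A compact, standard statement of Arrow's impossibility theorem (Arrow, 1951).
% The notation below is illustrative and not taken from the cited texts.
Let $A$ be a set of at least three alternatives and $N=\{1,\dots,n\}$ a finite set of
individuals, each holding a complete and transitive preference ordering $\succeq_i$ on $A$.
A social welfare function
\[
  F : (\succeq_1,\dots,\succeq_n) \;\longmapsto\; \succeq
\]
maps every profile of individual orderings to a collective ordering. Arrow's theorem states
that no such $F$ can simultaneously satisfy:
\begin{itemize}
  \item \emph{Unrestricted domain}: $F$ is defined for every possible profile of orderings;
  \item \emph{Weak Pareto}: if $x \succ_i y$ for every $i \in N$, then $x \succ y$;
  \item \emph{Independence of irrelevant alternatives}: the collective ranking of $x$ and $y$
        depends only on how individuals rank $x$ and $y$;
  \item \emph{Non-dictatorship}: there is no individual $d$ such that $x \succ_d y$ implies
        $x \succ y$ for every profile.
\end{itemize}
```

Read against the framing of this paper, the first and last conditions roughly express inclusivity, while the demand that the output be a single coherent ordering expresses collective rationality; Arrow shows that no aggregation rule can fully reconcile the two.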

Democracy frames efforts to resolve the inherent tension between demos and kratos. The demos is the collective body of citizens, who value inclusivity and a diversity of views. Kratos, on the other hand, refers to the practical governance that enables collective decisions to be made. Kratos seeks to ensure that choices are coherent, just, and functional, sometimes at the expense of the initial inclusivity and diversity. The tension is evident, for instance, in debates around freedom of speech on social media platforms. While the demos values the free exchange of ideas, kratos necessitates measures to prevent the spread of misinformation or hate speech. The democratic legitimacy of political authority depends on finding a middle ground that ever more successfully reconciles the values and aspirations of citizens with the need for efficient and effective governance.

Policy impact evaluation emerged as a critical response to the limitations of conventional approaches to collective choice. Participatory evaluation enables the demos to participate in kratos. It also helps kratos become less restrictive in enforcing unity over diversity, simply through a more connective understanding of the contradictions brought by complexity.

A new generation of participatory evaluation approaches includes numerous tools. To assess how successfully these tools achieve inclusivity and collective rationality, this paper uses two criteria. The inclusivity of the tools will be judged by how they address epistemic blindness (Fricker, 2007) and resolve bias in participatory evaluation. Epistemic blindness is the opposite of epistemic certainty. It may arise from ignorance or a lack of knowledge, resulting in the exclusion of anything that does not match established patterns of understanding (Kahneman, 2011). However, epistemic blindness cannot always be eliminated. Our knowledge is inherently limited, or bounded (Simon), and biased. This is particularly true in complex evaluations, which involve uncertainty.

Uncertain things that involve a void at their core can be understood better by a blindsighted evaluator than by an enlightened scientist. ‘Blindsighted’ means that understanding is articulated from ‘the empty middle’ as indeterminate but not inconclusive.

The second criterion examines how effectively, from the perspective of the collective, these tools aggregate the diverse contributions gathered through the participatory process. Various aggregation procedures exist: micro-level (focusing on individual data points), macro-level (looking at broader concerns), meso-level (focusing on intermediate processes and correlations between sub-aggregates), and meta-level (focusing on the overlap between identified shared concerns). Different aggregation procedures yield strikingly different results (Radej, 2021). This suggests that the method of aggregation is not arbitrary but must closely correspond to the evaluation task, in this case to the complexity of the evaluated interventions.
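To see why the choice of aggregation level matters, consider a deliberately simplified numerical sketch. The data and the Python code below are purely illustrative and do not reproduce Radej’s (2021) procedure; they only show that pooling individual responses (micro-level) and averaging sub-group summaries (meso-level) can point to visibly different collective verdicts.

```python
# Illustrative sketch only: the same participatory data aggregated at different
# levels yields different "collective" results. Groups, scores and scale are invented.

# Hypothetical satisfaction scores (1-5) from three stakeholder groups.
scores_by_group = {
    "residents":      [2, 2, 3, 2, 2, 3, 2, 2],  # large, sceptical group
    "administrators": [5, 4],                    # small, enthusiastic group
    "ngo_partners":   [4, 3],                    # small, mildly positive group
}

# Micro-level aggregation: pool every individual data point with equal weight.
all_scores = [score for group in scores_by_group.values() for score in group]
micro_result = sum(all_scores) / len(all_scores)

# Meso-level aggregation: summarise each sub-group first, then combine the sub-aggregates.
group_means = [sum(group) / len(group) for group in scores_by_group.values()]
meso_result = sum(group_means) / len(group_means)

print(f"micro-level (pooled individuals): {micro_result:.2f}")  # ~2.83
print(f"meso-level (mean of group means): {meso_result:.2f}")   # ~3.42
```

In the sketch, the micro-level figure is dominated by the numerically largest group, while the meso-level figure gives each sub-group an equal voice; which of the two counts as the collective judgement is exactly the choice that the aggregation design has to justify against the evaluation task.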

To explore both the inclusiveness and the collective rationality of design-based approaches, this paper examines four popular tools in participatory evaluation: SenseMaker by Cognitive Edge, Outcome Harvesting by Wilson-Grau, Most Significant Change by Rick Davies, and Causal Mapping (Copestake et al., Goddard, Powell).

The paper begins by introducing the four tools for participatory evaluation. It then presents two key assessment criteria: epistemic blindness (limitations in knowledge) and the aggregation problem (combining diverse viewpoints). The core analysis assesses how well these tools meet the two criteria in participatory evaluation. While the paper acknowledges the tools’ strengths and positive contributions, it ultimately argues that they fail to ensure a more inclusive and collectively rational evaluation of complex interventions. The paper concludes with a call for an antipostmodern epistemic shift in evaluation theory, one that achieves inclusiveness and collective rationality by intersecting them in the empty middle and then reading their messages blindsighted.

The four selected tools were presented and discussed at the 5th Western Balkans Evaluators’ Network (WBEN) conference in Ljubljana in late September 2023. The Slovenian Evaluation Society hosted the event on behalf of WBEN.


This is the opening chapter of a forthcoming working paper.

