Modes of Participatory Evaluation

Within the literature on PE, we can identify at least three different approaches. These approaches to participation are built on different epistemological premises and conceptualize participation in different ways. Their on-the-ground praxis in PE is also dissimilar. One line of thinking is represented by Guba and Lincoln’s (1981, 1989) constructivist approaches. A second line is Patton’s (1986, 1997) arguments for utilization-focused evaluation, and the third line of thought is “empowerment evaluation” (Fetterman, Kaftarian, & Wandersman, 1995).


Egon Guba and Yvonna Lincoln became interested in constructivist evaluation after becoming radically dissatisfied with the usefulness of conventional evaluation. This was a logical outgrowth of their “naturalistic inquiry” perspective; they felt that to make evaluations effective, such evaluations had to be built on the naturalistic paradigm:

A naturalistic paradigm, relying on field study as a fundamental technique, which views truth as ineluctable, that is, as ultimately inescapable. Sufficient immersion in and experience with a phenomenological field yields inevitable conclusions about what is important, dynamic, and pervasive in that field. Ethnography is a typical instance. (Guba & Lincoln, 1981, p. 55)

The canonical text on constructivist evaluation is Guba and Lincoln’s (1989) Fourth Generation Evaluation. In it, they introduce a constructivist approach to evaluation and link it to naturalistic inquiry by arguing that evaluation is a process of construction and reconstruction of realities. This book is a logical follow-up to their earlier works, Effective Evaluation (1981), where the theme is how to make evaluation matter, and Naturalistic Inquiry (1985), which centers on the comprehensive development of a postpositivistic methodological stance for the social sciences. Fourth Generation Evaluation focuses on carving out an epistemological position for constructivist social science and advancing detailed methodological positions for researchers who approach the field in a nonpositivistic manner.

The central theme of Guba and Lincoln’s work is to urge social researchers to engage with people directly to make sense of the evaluation process and results. In this way, they make participation a central element in debates about contemporary evaluation praxis. As Guba and Lincoln say:

The major task for the constructivist investigator is to tease out the constructions that various actors in the setting hold and, so far as possible, to bring them into conjunction—a joining—with one another and with whatever other information can be brought to bear on the issues involved. (1989, p. 142)

The constructivist approach necessarily brings the problem owners to the fore because their views are key to understanding and making sense of the processes and structures being evaluated. This means that the evaluation rests on the participants’ understandings of their own situation and on how they judge the results achieved. The evaluators can support and engage in these hermeneutic processes because the processes will eventually lead to the requisite evaluative insights. The evaluation cannot be completed unless the hermeneutic groundwork is done by the participants.


A more conventional response to the challenge of the efficiency and effectiveness of evaluations is found in the work of Michael Quinn Patton (1986, 1997). For Patton, the central question is how to shape evaluations so that the results matter to the involved stakeholders. In Patton’s view, evaluation is an activity that should be designed to have an impact on the program or activity being evaluated.

Responding to the dilemma of evaluations being ignored by the stakeholders, evaluators like Patton developed participatory approaches in which the evaluator and the evaluands create a closer relationship and open up opportunities for mutual learning. Patton was one of the first to present this different path for evaluation. In his book Utilization-Focused Evaluation (1986), Patton emphasizes the use of evaluation results to improve projects as an imperative in evaluation work:

What fundamentally distinguishes utilization-focused evaluation from other approaches is that the evaluator does not alone carry this burden for making choices about the nature, purpose, content, and methods of evaluation. These decisions are shared by an identifiable and organized group of intended users. (p. 53)

Patton aims to include every stakeholder, as he defines them: stakeholders “are people who have a stake—a vested interest—in evaluation findings” (1986, p. 43). For any evaluation, there are multiple stakeholders—program funders, staff, administrators, clients, and others—with a direct or even indirect interest in program effectiveness. Although much of Patton’s (1986, 1997) attention is paid to the funders, staff, and administrators, the clients of the projects being evaluated also are included in his thinking and evaluation process.

The particular insight that local involvement is necessary to make evaluation results useful leads to an interest in how the clients of the programs being evaluated themselves deal with those results. These clients are in a different position from all other stakeholders as the actors who potentially should benefit most from the evaluation. Their interests are, in many situations, not the same as the interests of the program staff. They are, in a certain sense, the primary actors in any program, simply because the focus of the activity is to do something about their life situations. No other stakeholder group is in such a position, so it is a powerful move to focus attention on ways these primary beneficiaries can use the evaluation.

This is where the participatory approach to evaluation makes its appearance. Participatory evaluation aims to create a learning process for the program clients that will help them in their effort to reach their own desired goals. Participatory approaches to evaluation purposely muddy the distinction between the program activity and evaluation results because the evaluation aims to make a difference by helping program clients achieve their goals better. Such an approach often goes even further, creating a situation in which it is possible to evaluate not just whether the program is doing well what it is supposed to do, but whether what it is doing is the right thing to do, or whether doing something else would meet its objectives better.

A standard practice in PE is to involve the providers and clients of a program or an activity in the process of interpreting evaluation results. The most conventional way to do this is to discuss the collected data with them as a way of making sense of the findings. A more advanced form is to involve participants in the process of designing what to evaluate from the beginning of the project (for example, deciding on the variables and how they are defined), to engage them in the data collection process, and to include them in making sense of the findings.

How this participatory process is structured can differ widely among evaluation practitioners. Each evaluator engages the participants in ways that are comfortable for both parties. Some organize meetings, while others use group dynamics processes (search conference “look-alikes” have been used) or other participatory techniques.

Such processes, however, are not without problems. A key difficulty in using participatory approaches to evaluation for the sole purpose of achieving improved utilization is that it creates an opportunistic situation for the evaluator that easily could lead to a co-optive process in which the evaluator is effectively coaching the program clients on what they should want from a program. This can result in slighting issues of the multiplicity of stakeholder interests and the often laborious process of stakeholder goal setting.


In any participatory process, there is always a tension between participation as an instrumental means of accomplishing something and participation as an end in itself. The larger political settings involving interests and power usually play a minor role in most evaluation practices, and democratization is rarely an element in the conceptual schemes linked to evaluation. In empowerment evaluation, however, these settings are emphasized. For example, Brunner and Guzman’s Participatory Evaluation: A Tool to Assess Projects and Empower People (1989) is an effort to see evaluation as “a methodological component of the educational development project that aims at empowering the dominated groups in a society so that they will be able to join the struggle for a just and egalitarian society” (p. 10). Weiss and Greene (1992), Patti Lather (1991), and Michelle Fine (1996) are other proponents of the empowerment evaluation approach.

Michelle Fine (1996) summarizes this work in the form of five commitments to PE research: building local capacity, evaluation and reform, an ethic of inquiry, evaluation and democratic participation, and rethinking the “products” of evaluation research. Fetterman et al. (1995) define empowerment evaluation as “the use of evaluation concepts, techniques, and findings to foster improvement and self-determination.” They go on to say, “Empowerment processes are ones in which attempts to gain control, obtain needed resources, and critically understand one’s social environment are fundamental” (1995, p. 4).

This is a radical point of departure. Empowerment evaluation is founded on a restructuring of the evaluator role that departs dramatically from the conventional detached, objectivist role, and it is more politically proactive than constructivist and utilization-focused evaluation. The most striking element in empowerment evaluation is the understanding of the evaluator as an interventionist, as an activist. Active political engagement is expected.

The foundation of empowerment evaluation is to teach the participants to conduct their own evaluation. This includes an effort to help participants understand both what evaluation is and how it can be conducted. In empowerment evaluation, the stakeholders themselves are expected to be active and engaged. Here, self-evaluation is conceptualized as having a dual meaning: doing the evaluation yourself and having the evaluation done on your own situation. The professional evaluator then becomes the facilitator who works to enable the participants to commission their evaluation and also sees to it that the necessary learning processes are constructed to support them. In this respect, empowerment evaluation looks quite similar to good cogenerative organizational development processes.

The professional evaluator is also an advocate but is most focused on enabling the participants to conduct their own evaluation. Armed with this evaluation, the professional evaluator becomes a public spokesperson and legitimator of the insights gained through the evaluation process.

The practices of empowerment evaluation pay particular attention to illuminating (eye-opening, revealing, enlightening) experiences that can create the point of departure for a liberating development. Despite this, the broader issues of liberation are generally treated rather softly, as, for example, here: “[Empowerment evaluation] can unleash powerful, emancipatory forces for self-determination” (Fetterman et al., 1995, p. 16). Liberation is seen as a secondary effect that takes place within the empowerment evaluation. Liberation is not the goal per se but a potential outcome that would be good if it happens; it is not a design criterion for the evaluation.

This is an interesting contradiction. If empowerment evaluation is not ultimately meant to achieve liberation, what is its aim? Without clarity about this larger goal, empowerment evaluation can easily degenerate into a co-opted strategy for participation, a process that would have little or no effect on people’s long-term ability to affect their own life situations. Empowerment evaluation is on the verge of falling into the same trap as the empowerment movement in business life did, in which empowerment is generally something “done to” stakeholders rather than actions taken by them.

Source: Greenwood, Davydd J., & Levin, Morten (2006), Introduction to Action Research: Social Research for Social Change, 2nd ed., SAGE Publications, Inc.
