Evaluation experts and scientometricians have more to offer to the practices they analyse. The evaluative inquiry approach presented here understands academic performance or impact as an effect of translations within and between the networks of actors that make up academic research and its environments. The aim is to find out what the central issues or ambitions are, how they are operationalised, what kinds of output this yields, and where that output travels (building on Joly et al. 2015; Molas-Gallart et al. 2015; Matt et al. 2017; Spaapen & Van Drooge 2011). We also move beyond tracing ‘productive interactions’ (Spaapen & Van Drooge 2011) by using the potential of the form(s) evaluation can take. Our approach treats evaluation as a ‘situated intervention’ (Zuiderent-Jerak 2015). It is designed to help organisations or groups develop an overview of their goals and missions and of the ways these are embedded within the organisation (goal > mobilisation > output > reach).
The Evaluative Inquiry Approach, Sarah de Rijcke + SES group, 2018
1. Context counts, so I am going to locate this contribution in the context of the Dutch
science governance system. In the Netherlands, we have a ‘weak’ evaluation system
(Whitley 2007). Such systems tend to emphasize the opportunities for organizational
learning afforded by more interactive peer review formats over interim periods (Youtie
and Corley 2011; Hansson and Monsted 2012).
The design of the Dutch university research evaluation system is structured by one main
guiding document, the Standard Evaluation Protocol. These standardized but also a priori
under-determined guidelines can be used flexibly. Given that the Standard Evaluation
Protocol is designed to govern all types of research activity in Dutch universities, and as
such only lays out very general parameters, the protocol can also be used by those
coordinating and engaging in self-evaluations as a matter of informal learning
(cf. Rushforth & De Rijcke 2017).
2. Now, where we see room to introduce flexibility is here: evaluation in the Netherlands
originally had a strong focus on purely academic work, but in recent years we have
witnessed more emphasis on “societal relevance”.
● This is in principle a desirable development, but…
○ It perpetuates the idea of a divide between “the academic” and “the social”
○ It is often tied to the expectation that everybody has to do everything, with
societal relevance treated as extra work
○ The split between academic and societal relevance is partly an artefact of
reductive evaluation mechanisms; neither pole captures reality. Academic work
frequently ENTAILS engagement with societal actors.
● What I will present is our budding work in which we try to promote:
○ more context-sensitive evaluations
○ an ecological approach that assumes diversity: not everybody has to
do everything at the same time
○ evaluation as a means to stimulate self-reflection and emergent development
(“evaluative inquiry”, Fochler & De Rijcke 2017). This may be harder to do in
strong evaluation systems; perhaps we can pick that up in the discussion.
3. Molas-Gallart (2012) distinguishes three main purposes of evaluation practice:
a distributive, an improvement, and a controlling use. In the Dutch context, I would
argue that we mainly see this:
“An improvement use will focus on deriving lessons from the past experience to adapt
the activities conducted to what evaluation studies will conclude is better practice.
The improvement purpose is therefore relying on the existence of feedback
mechanisms and the operational flexibility needed to function as a learning
organization.”
4. The role bibliometric indicators play in these systems is of course much debated, and
this is a very important issue for CWTS, also in our commissioned work. I do not have
time to get into all the details of this debate. What I do want to say is this:
I think that evaluation experts and scientometricians have more to offer to the
practices they analyse. And perhaps the crux of the matter, and here I refer to Paul
Wouters’ keynote, is how we assemble perspectives and methods in different
evaluative situations.
Our evaluative inquiry approach understands academic performance or impact as an
effect of translations within and between the networks of actors that make up academic
research and its environments. The aim is to find out what the central issues or
ambitions are, how they are operationalized, what kinds of output this yields, and where
the output travels (building on Joly et al. 2015; Molas-Gallart et al. 2015; Matt et
al. 2017; Spaapen & Van Drooge 2011).
→ a combination of methods, depending on what fits the specific evaluation purpose
best
5. But we would also like to go a bit further. We do not only study, map, and trace these
‘productive interactions’ (Spaapen & Van Drooge 2011). And this is the move: we
also use the potential of the form(s) evaluation can take in the Netherlands, and treat
evaluation as a ‘situated intervention’ (Zuiderent-Jerak 2015).
This approach incorporates the participation of the communities under evaluation. It
makes explicit the way we ourselves participate in evaluations by engaging in them as
evaluation experts or scientometricians.
Groups see an opportunity for the direct involvement of social scientists in the practices
they study. Evaluative inquiry is tailored to the department “under study”, and is hence
more experimental, less formalized, and more collaborative. Our involvement can lead to
the production of more situated, more grounded, and hopefully also more relevant
processes and outcomes.
6. Two commissioned projects: one by the University of Protestant Theology, the other
by a department of Catholic theologians.
We consulted the group in the process of gathering information for their “self-
evaluation,” which is part of the Dutch Standard Evaluation Protocol.
Theologians are part of long-standing, deeply connected local/international peer
communities, but also have many societal partners, including the Church, hospitals,
the army, and other social and professional actors.
● Tensions:
○ Questions around the legitimacy of Theology as an academic discipline
○ Questions around the hold of the Church
○ Tense relations between different kinds of spiritual and epistemological
commitments.
● Many commitments: minister training, student training, spiritual care networks,
youth and church, but also digital humanities, theology and art, and multi-faith issues.
6
7. ● Exploratory phase: articulation of relevant issues and questions with the
management of the institute.
● Data gathering phase:
○ Analysis of policy documents and previous (self-)evaluations.
○ Quantitative information, in the spirit of Rafols: “Through contextualisation and
participation, indicators can take into account more diverse assumptions and
values, thus making decision taking more sensitive to local uncertainties and
values (Stirling et al., 2007). Pluralisation of perspectives enriches evaluation.”
And: “design and use of STI indicators should take place not only in ‘secluded
spaces’ such as scientometric laboratories, but with the participation of
stakeholders so as to take into consideration their contexts. ‘Indicators in the
wild’ would be the metaphor for this contextualised and participatory work of
constructing quantitative evidence for decision making.” (A minimal sketch of
this kind of contextualized tallying follows after this list.)
○ Interviews, in order to gain as much information on individual research as
possible.
● Workshop: 1) to test and fine-tune the hypotheses that emerged from the
quantitative information and the interviews, and 2) to pursue a SWOT analysis.
● Data analysis and report writing
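To make the ‘indicators in the wild’ idea concrete, here is a minimal, purely hypothetical
Python sketch of the kind of simple, contextualized descriptive indicator one might
construct together with a department: tallying its outputs by type and by intended
audience. All records and field names are invented for illustration; this is not the actual
CWTS workflow.

    from collections import Counter

    # Hypothetical output records, as they might be gathered in the data
    # gathering phase; every entry here is invented for illustration.
    outputs = [
        {"type": "journal article", "audience": "academic peers"},
        {"type": "monograph", "audience": "academic peers"},
        {"type": "professional guideline", "audience": "spiritual care networks"},
        {"type": "public lecture", "audience": "church communities"},
        {"type": "journal article", "audience": "academic peers"},
    ]

    # Simple descriptive tallies: where does the department's output travel?
    by_type = Counter(o["type"] for o in outputs)
    by_audience = Counter(o["audience"] for o in outputs)

    print("Outputs by type:", dict(by_type))
    print("Outputs by audience:", dict(by_audience))

The point of such a tally is not the numbers themselves, but that the categories (output
types, audiences) are defined together with the evaluated community, so that the
quantitative evidence reflects their context.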
9. Research organizations grapple with changing societal, economic, and political
contexts and expectations.
We will further develop the ‘evaluative inquiry’ approach to help organizations or
groups develop an overview of their goals and missions and of the ways these are
embedded within the organization (goal > mobilization > output > reach).
By using multiple methods, this approach helps trigger meaningful discussions about
the increasing roles of, and demands from, peer communities, professional and societal
partners, governance, and industry, while building on individual and organizational
strengths. (A sketch of the goal > mobilization > output > reach chain follows below.)
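As a purely illustrative aid, the goal > mobilization > output > reach chain can be thought
of as a simple record per goal. The class and the example entries below are hypothetical
and not a formal part of the approach; they only show how the four steps hang together.

    from dataclasses import dataclass, field

    @dataclass
    class GoalChain:
        goal: str  # central issue or ambition
        mobilization: list[str] = field(default_factory=list)  # how the goal is operationalized
        output: list[str] = field(default_factory=list)        # what this yields
        reach: list[str] = field(default_factory=list)         # where the output travels

    # Hypothetical example for a single goal
    chain = GoalChain(
        goal="Improve spiritual care in hospitals",
        mobilization=["joint project with hospital chaplains", "PhD supervision"],
        output=["professional guideline", "journal articles"],
        reach=["spiritual care networks", "academic peers"],
    )
    print(f"{chain.goal} -> reaches: {', '.join(chain.reach)}")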
10. We have seen that our work can serve as a starting point to develop or refine the
narrative of the organization:
● Based on the views and experiences of researchers and users (bottom up)
● Articulating what is already going on (making visible)…
● And identifying new possibilities:
■ New audiences, alongside existing ones
■ Ways of communicating in addition to books and articles
■ A clearer structure for the organization, in terms of programs, centres,
and projects.
11. The concept of "evaluative inquiry", we hope, reveals the epistemic commitments and
community values of local practices. Evaluative inquiry thus essentially approaches
evaluation as a knowledge production process.
It is meant to be, or become, a reflexive approach to evaluation that sees the relevance of
scientific work as an unfolding process in which a variety of academic and non-academic
actors are involved. Our approach emphasizes process and engagement rather than
accounting and ranking.
Crucially, evaluative inquiry identifies values, networks of people, and resources as
collectives. We think this is quite a productive way to help articulate how ‘worlds’ are
created and negotiated in relation to these values.
Inspired by Zuiderent-Jerak (2015): this is a ‘situated normative commitment’, an attached
instead of a detached approach, opening up and broadening out what can be addressed in
evaluations. Rather than holding up a mirror, it treats norms, values, and subjectivities not
as things to leave out or leave unspoken, but turns them into an empirical topic in
assessments. It addresses the fallacy of one-size-fits-all, in favor of empirically specifying
what issue(s) are at stake.