
Authors: Melanie Saumer, Isabelle Freiling, Irina Dietrich, Marie-Charlotte Hasewinkel, Jaroslava Kaňková, Phelia Weiß & Jörg Matthes

Handbook of Youth Citizen Social Science (Borgström, D., Canto-Farachala, P., Hagen, A. L., Norvoll, R., Rådmark, L., & Lorenzen, S. B. (Eds.). (2024). Handbook of Youth Citizen Social Science: Working with Young People and the Local Community for Social Change. Zenodo. https://doi.org/10.5281/zenodo.10566411)

Designing an Evaluation Framework

First of all, the nature of the project as well as the direction of the evaluation need to be determined: Which aspects of the project should be the focus? The bigger the project, the more carefully the evaluation dimensions have to be derived from its general design and aims. Several steps led to the final evaluation design.

Step 1: Decide on the Basic Pillars of Your Evaluation

In the case of YouCount, the general criteria for measuring social change achieved through a hands-on youth citizen social science project all revolved around the principle of co-evaluation, meaning that young citizen scientists are involved in the scientific process of evaluating (e.g. Kieslinger et al., 2018; the CoAct project is a good reference for a strong co-creative implementation: https://coactproject.eu/). More specifically, the main pillars of the evaluation framework are process- and outcome-oriented. Each pillar is clustered along scientific, participation, and social dimensions. This allows for both in-case and cross-case evaluations, which taken together paint a bigger picture of how the project unfolds in real life. The main questions asked were (for more details on how they were derived, see Juricek et al., 2021): How and what should be evaluated? What are the individual, social, and scientific outcomes of youth citizen social science? And how can we measure and evaluate these outcomes?
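To make that structure tangible, the sketch below lays out the two pillars and three dimensions as a simple planning grid. It is purely illustrative: the example criteria are hypothetical placeholders inspired by the text, not the actual YouCount evaluation items.

# Illustrative sketch: the evaluation framework as a pillar-by-dimension grid.
# The example criteria are hypothetical placeholders, not YouCount instruments.
framework = {
    "process": {
        "scientific":    ["quality of co-produced research activities"],
        "participation": ["involvement of young citizen scientists over time"],
        "social":        ["communication with local stakeholders"],
    },
    "outcome": {
        "scientific":    ["knowledge generated by researchers and young citizen scientists"],
        "participation": ["impact, empowerment and self-agency of young citizen scientists"],
        "social":        ["social innovations, citizen education"],
    },
}

# Each case can be evaluated within itself (in-case) and compared
# against the same grid across cases (cross-case).
for pillar, dimensions in framework.items():
    for dimension, criteria in dimensions.items():
        print(pillar, dimension, criteria)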

Step 2: Familiarise Yourself with the Literature

To answer those preliminary questions, the existing literature was closely reviewed and tailored to the social science context, while simultaneously considering the diverse nature of our individual cases. The latter aspect is especially important for finding a balance between criteria that all cases have in common and enough space for individual differences between cases. The literature did not provide a clear picture of multi-dimensional evaluation methods for citizen social science projects, which is why we pieced together our own comprehensive framework based on existing strategies.

(a) Outcome-related frameworks (e.g. Phillips et al., 2018) suggest dimensions for evaluating individual learning outcomes of citizen scientists. This rather quantitative approach was incorporated into our framework through a pre-post survey design that anonymously measured individual-level outcomes among the young citizen scientists, such as scientific knowledge, attitudes towards science, self-agency, and level of project involvement.
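As a purely illustrative sketch (not the YouCount instrument itself), a pre-post design of this kind can be analysed by comparing each participant's anonymised pre- and post-scores on one outcome, for example attitudes towards science. The scores and participant IDs below are invented for illustration only.

# Minimal sketch of a pre-post comparison for one individual-level outcome.
# Scores are invented; a real survey would use validated scales.
from statistics import mean
from scipy import stats  # assumed available for the paired t-test

# Anonymised participant IDs mapped to (pre, post) scores on, e.g., a 1-5 scale
attitudes = {
    "p01": (3.2, 4.0),
    "p02": (2.8, 3.5),
    "p03": (4.1, 4.2),
    "p04": (3.0, 3.8),
}

pre = [scores[0] for scores in attitudes.values()]
post = [scores[1] for scores in attitudes.values()]

print("mean change:", round(mean(post) - mean(pre), 2))

# Paired t-test: did attitudes change between the two measurement points?
t_stat, p_value = stats.ttest_rel(post, pre)
print("t =", round(t_stat, 2), "p =", round(p_value, 3))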

(b) Process-oriented frameworks (such as Ceasar et al., 2017; Fischer et al., 2021) highlight the need to consistently track communication, awareness, relationship and empowerment processes and changes. In the YouCount project, we therefore extended the originally citizen-scientist-based approach by including multiperspectivity to account for dynamics and differing perceptions. The perspectives of professional researchers, citizen scientists, stakeholders and (if present) student assistants were all incorporated equally. This was embedded by designing qualitative interviews with professional researchers, stakeholders and students at three points in time throughout the project. Further, three focus groups held over the course of the project with the young citizen scientists of each case serve as tools for insight into the citizen scientists' perspective. As a third complementary process instrument, we added self-reports to the design. These reports were filled out by each case leader four times throughout the project and consisted of short, focused questions about ongoing processes (e.g. recruitment strategies and challenges, communication channels and obstacles, etc.). Conducting all of these instruments (self-reports, interviews and focus groups) at the beginning, middle, and end of the project provides insight into processes, changes, and (overcoming) challenges from each perspective.
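The multi-instrument, multi-timepoint design described above can also be sketched as a simple schedule. The structure below is a hypothetical planning aid, not actual YouCount tooling; the wave labels (e.g. "early-middle") are invented, since the source only specifies how often each instrument was conducted.

# Illustrative schedule of evaluation instruments across the project timeline.
# Wave labels are hypothetical; only the counts and perspectives follow the text.
schedule = {
    "pre_post_survey": {"perspective": "young citizen scientists",
                        "waves": ["start", "end"]},
    "focus_groups":    {"perspective": "young citizen scientists",
                        "waves": ["start", "middle", "end"]},
    "interviews":      {"perspective": "researchers, stakeholders, students",
                        "waves": ["start", "middle", "end"]},
    "self_reports":    {"perspective": "case leaders",
                        "waves": ["start", "early-middle", "late-middle", "end"]},
}

def instruments_due(wave: str) -> list[str]:
    """Return which instruments are due at a given measurement wave."""
    return [name for name, spec in schedule.items() if wave in spec["waves"]]

print(instruments_due("middle"))  # -> ['focus_groups', 'interviews']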

(c) Process- and outcome-based frameworks like the Citizen Evaluation Framework (e.g. Kieslinger et al., 2018) further suggest focusing on a threefold dimensionality of scientific dimensions (knowledge generation on both the professional researchers' and the young citizen scientists' side), participant dimensions (young citizen scientists' impact and empowerment), and social dimensions (social innovations, citizen education). This is why we incorporated all three dimensions into (almost) all four of the mixed-method measurement instruments: the pre-post surveys and focus groups with the young citizen scientists, the interviews with the professional researchers, stakeholders and students, and the self-reports by the professional researchers all included questions about potential scientific, participant (young citizen scientist), and social outcomes and/or the individual processes related to them.

Step 3: Puzzle Together What Works in Your Project

Above all, what makes citizen science evaluations unique is the concept of co-evaluation. This means actively involving the young citizen scientists in the evaluation process: in designing it, carrying it out, and analysing it.

What we found is that implementing a co-creative evaluation is more challenging than originally expected. This leads us to the next subchapter, where we share some practical implementation insights and necessary adaptations.

Important Considerations

Plan Reasonably

Design is key! For instance, the originally estimated number of interviews (90) and on-site case visits (27) was not feasible from the start. Taking a realistic approach to what really needs to be evaluated in depth, and how this can be done with a mindful use of (personnel and monetary) resources, will benefit the overall smoothness of the project. It is advisable to outline a detailed evaluation deadline plan with all case tasks from the beginning, so that potential obstacles can be managed individually and in time. Leaving some room for flexible adjustments or reprioritisation is hence also strongly advised. This also relates to a potential data overload: Considering different evaluation perspectives (e.g. young citizen scientists vs. professional researchers) is highly recommended; however, be mindful of the risk of data overload. Some of the most valuable aspects are evaluated several times (e.g. learning curves; process focus), while others are sufficiently evaluated with one measurement (e.g. innovations; outcome focus). Too much data will not lead to a better evaluation, but will rather compromise the overall comparative angle.

Consider Potential Barriers on Both Sides

All kinds of obstacles can hinder (young) citizen scientists' participation, not only in case-related tasks but ultimately also in evaluation-related activities. These include language barriers, socio-economic barriers, time constraints, lack of incentives, and general motivational hurdles, but also feasibility struggles, overburdening, and misunderstandings on the professional researchers' side. A good relationship between the professional researchers and the young citizen scientists prevents at least some of these barriers through open communication. Additional resources, e.g. translators/interpreters or student assistants, can also help.

Co-creation Does Not Always Make Sense

Estimating where a co-creational approach is useful and where it is not is another major consideration in citizen social science evaluation. In some instances, designing a co-creational tool is neither necessary nor feasible (e.g. when applying scientifically validated measurement tools or strategies), while in others it is both useful and generates unique additional knowledge (e.g. when brainstorming what kinds of questions to ask the young citizen scientists in questionnaires, interviews, etc.). It is commonly advised to employ a mixed-methods design with quantitative and qualitative instruments, as well as a complementary mixture of co-created and scientifically validated tools.

Balancing Case Foci

Multiperspectivity matters when it comes to the bigger picture of project outcomes. Taking various dimensions into account, such as scientific vs. individual/civic/citizen science outcomes, is a balancing act: emphasising one of them leads to less salience of another. Therefore, we chose to measure some topics (e.g. communication processes) multiple times with different methods from our evaluation tool potpourri; in some instances, it can even be useful to implement core topics into every evaluation instrument. Notably, here again a potential data overload needs to be considered, so these prioritised aspects need to be selected carefully. Lastly, a decent level of case autonomy and individuality needs to be built into the evaluation framework. This requires flexibility: prioritising (and communicating) what each case most definitely needs, and what can be compensated for by other data.

Failure Is Ultimately Productive

In contrast to rather static, quantitative studies and projects, citizen social science (CSS) is inherently dynamic and poses challenges simply because it means doing science with people, instead of only researching about them. Given the unique characteristics inherent in the design, development, and outcomes of each CSS project, obstacles, changes and failures are unavoidable. Yet these can become productive and generate new insights into the work with citizen scientists, which are inherently valuable findings.

 

FURTHER READING

Jordan, R. C., Ballard, H. L., & Phillips, T. B. (2012). Key issues and new approaches for evaluating citizen‐science learning outcomes. Frontiers in Ecology and the Environment, 10(6), 307-309.

Wiggins, A., Bonney, R., LeBuhn, G., Parrish, J. K., & Weltzin, J. F. (2018). A science products inventory for citizen-science planning and evaluation. BioScience, 68(6), 436-444.

Schaefer, T., Kieslinger, B., Brandt, M., & van den Bogaert, V. (2021). Evaluation in citizen science: The art of tracing a moving target. The Science of Citizen Science, 495.
