
Intro to Evaluation: What is it?

Here at the CSU STEM Center, we have been successfully creating and carrying out evaluation plans for over a decade. Our new series, Intro to Evaluation, sheds some light on what we do as evaluators.

Have you heard something about evaluation but don’t know what it is? Do you need to evaluate your program and don’t know where to start? Have you wanted to do your own evaluation but need some more information before you jump in? In this Intro to Evaluation series, I will take you through what it is, what an evaluator does, some types of evaluation models, and when you should find an evaluator. Our goal is to give you an intro to evaluation to make evaluation accessible to anyone! 

What is evaluation?

If you were to look in a dictionary for “evaluation,” you would find something like this: “determination of the value, nature, character, or quality of something or someone.” But in practice, what does this mean? Evaluation is a systematic process. It involves collecting data based on predetermined questions or issues. We do it to enhance knowledge or decision-making about a program, process, product, system, or organization. (From here on out, I’ll talk about programs, but the process is similar for other things you might evaluate.) You’ll notice I bolded some words in that sentence, so let’s pick this apart a bit.

A systematic process 

Evaluation takes pre-planning and forethought. We don’t do evaluation as an afterthought. Rather, we purposefully plan out what our questions are and how we will answer them.


This planning can happen alongside the creation of a new program or project. Or, it may happen well into the life of a project. In any case, it is always a systematic process.

Collecting data from questions or issues

During the systematic planning process, we ask a lot of questions about the program and the needs of the program staff. From a long list, we pick out the questions that focus on issues or requirements. For example, a funding agency might ask you to report how many people participate in a program.


We use these questions to decide what types of data we need to collect, who we need data from, and when it needs to be collected. The questions come first. Data collection comes next.

Knowledge and decision-making

There is always a purpose for conducting an evaluation. That may be to learn something new about the program, to find out whether things are working well, or for some other reason.


Evaluations usually lead to some sort of decision being made about the program. That could be to improve it, expand it, or discontinue it. This means that someone is making a judgment about the merit, worth, or value of what is being evaluated.

Regardless of the reason an evaluation is requested, evaluation results are always meant to be used in some way.

Why is evaluation important?

Have you ever thought about everything you know? Now, think about that in terms of a program you might be developing or one that has been running for a long time. Donald Rumsfeld talked about knowledge in 2002 during a Department of Defense news briefing. His statement (a tongue twister for sure) makes a good framework for thinking about why evaluation is important.

  1. You know what you know – how staff is organized or how many people participate every year. 
  2. You know what you don’t know – why some groups of people decide not to participate in your program. 
  3. You don’t know what you don’t know – things you haven’t even considered about your program.

So how does this help us think about evaluation? It’s simple. As evaluators, we uncover answers to questions you have. We also help you to think about new questions you may have never asked. And, we support you as you use what you know to make decisions.

An Example

You run a STEM program geared toward middle school students. It has a lot of participation every year. The program seems successful, so you want to find funding to expand it to other areas of the U.S.

You know how many students participate every year and what schools those students attend. Other information about the students is also available to you. But, you don’t know how the program might help students become more interested in the topic.


When the evaluator meets with you and your team, she learns that the participants are almost entirely from more affluent families. So, she asks how this relates to the overall program goals. You tell her that the program is intended to serve all students. Through this process, you realize that the evaluator has just uncovered something you had not thought of. You work with the evaluator to come up with questions to figure out why your program isn’t drawing from other communities. By answering the questions, you are able to change the program so that it reaches a more diverse set of students.

So, really, what is evaluation?

Hallie Preskill and Darlene Russ-Eft, at the beginning of their book Building Evaluation Capacity, explain this simply:

Ultimately, evaluation is concerned with asking questions about issues that arise out of everyday practice. It is a means for gaining better understanding of what we do and the effects of our actions in the context of society and the work environment. A distinguishing characteristic of evaluation is that, unlike traditional forms of academic research, evaluation is grounded in the everyday realities of organizations.

Building Evaluation Capacity: Activities for Teaching and Training, p. 2.

Want to know more?


If you are intrigued by evaluation and can’t get enough, that’s great! In this Intro to Evaluation series, I am excited to share more about evaluation, what it’s like to be an evaluator, and some basics on working with an evaluator. I will be posting more about evaluation regularly here on the CSU STEM Center website. Next in our Intro to Evaluation series is What an Evaluator Does.


Disclaimer: The thoughts, views, and opinions expressed in this post are those of the author and do not necessarily reflect the official policy or position of Colorado State University or the CSU STEM Center. The information contained in this post is provided as a public service with the understanding that Colorado State University makes no warranties, either expressed or implied, concerning the accuracy, completeness, reliability, or suitability of the information. Nor does Colorado State University warrant that the use of this information is free of any claims of copyright infringement. No endorsement of information, products, or resources mentioned in this post is intended, nor is criticism implied of products not mentioned. Outside links are provided for educational purposes, consistent with the CSU STEM Center mission. No warranty is made on the accuracy, objectivity or research base of the information in the links provided.


Dr. Laura B. Sample McMeeking

Director – STEM Center

Dr. Sample McMeeking is the Director of the STEM Center. Her primary research focuses on STEM professional development at multiple levels, including preservice and inservice teachers, university undergraduate and graduate students, postdoctoral researchers, and faculty. As part of her work, she collaborates with faculty and staff at CSU and others outside of CSU to develop and implement high-quality research and evaluation in STEM education.