In this edition
- Logic Models: Your Recipe for Success
- Understanding Levels of Measurement
- Step-by-Step Guide to Using Logic Models
About Evaluation Matters
Evaluation Matters is a monthly newsletter published by University of Nevada, Reno Extension. It is designed to support Extension personnel and community partners in building practical skills for evaluating programs, making sense of data, and improving outcomes. Each issue focuses on a key concept or method in evaluation and provides clear explanations, examples, and tools that can be applied to real-world programs.
This issue, published in March 2025, introduces the fundamentals of logic models. It explains how logic models can be used to clarify program structure, align activities with intended outcomes, and guide both planning and evaluation. Whether you're new to evaluation or looking to refresh your understanding, this issue offers straightforward guidance to help you connect your program efforts to meaningful results.
Logic Models: Your Recipe for Success
Understanding logic models through simple cooking analogies.
Running a program without a logic model is like trying to bake a cake without a recipe. You might have all the right ingredients (flour, sugar, eggs), but without clear instructions on how to combine them, you could end up with a dense, undercooked mess instead of the fluffy masterpiece you envisioned. A logic model works the same way, giving you a step-by-step guide to turn your resources and actions into the outcomes you want. It helps ensure that your efforts lead to meaningful results and, just as importantly, tells you how to measure whether your program is actually working. After all, the real test of a recipe isn’t just following the steps... it’s about how good the final dish tastes!
A logic model is like a recipe for your program.
Like any good recipe, a logic model has key components that guide you from start to finish. First, there are the inputs—the ingredients you need to get started. These can include funding, staff, materials, or time. Then come the activities, which are the instructions you follow, like mixing the batter or preheating the oven. These activities lead to outputs, the immediate, concrete results: things you can count, like how many workshops were held or how many participants attended. However, just because you measured your outputs doesn’t mean that your program is a success! That’s where outcomes come in. Outcomes tell you whether the work you put in is making a difference, like whether program participants gained new skills or changed their behavior. Finally, there’s the impact, the long-term change your program aims to create... just like a great meal that continues to nourish even after the last bite.
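To make the chain of components easier to see at a glance, here is a minimal sketch, written in Python purely for illustration, of how a hypothetical workshop program’s logic model might be laid out. Every program detail in it is invented, not a prescribed template.

```python
# A minimal sketch of a logic model for a hypothetical cooking-workshop program.
# All entries are illustrative placeholders, not a required format.
logic_model = {
    "inputs":     ["grant funding", "two educators", "curriculum materials"],
    "activities": ["deliver six cooking workshops", "distribute recipe guides"],
    "outputs":    ["number of workshops held", "number of participants attending"],
    "outcomes":   ["participants report cooking more meals at home"],
    "impact":     ["improved household nutrition over the long term"],
}

# Reading the model top to bottom shows the intended chain:
# inputs -> activities -> outputs -> outcomes -> impact.
for component, items in logic_model.items():
    print(f"{component}: {', '.join(items)}")
```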
In my experience, one of the biggest challenges in understanding logic models is differentiating between outputs and outcomes. If you’re following a recipe, the output might be the finished cake sitting on the counter, while the outcome is how delicious it tastes. You can measure outputs easily (attendance numbers, completed reports, materials distributed), but outcomes require looking at whether those outputs led to tangible change. Just because a cake comes out of the oven doesn’t mean it’s delicious. In the same way, just because 100 people attend a workshop doesn’t mean they learned something meaningful. That’s why logic models push us to focus on outcomes—because success isn’t just about producing something, it’s about making sure it has the intended effect.
Of course, for a logic model to be useful, its goals and outcomes need to be measurable. Having a goal to “bake the best cake ever!” sounds great, but it’s actually really vague. What does “best” mean? Does it mean fluffy texture? Perfect sweetness? A cake that wins a baking competition? Without a clear definition, there’s no way to tell if you succeeded. A better approach might be, “We want to bake a cake that rises properly, has a moist texture, and earns a rating of at least 8/10 in a taste test.” In program evaluation, the same principle applies. A weak outcome might be, “Participants will feel more confident.” That sounds nice, but how do you measure “feeling confident”? A stronger outcome would be, “80% of participants will report increased confidence in advocating for themselves on a post-workshop survey.” The more concrete and specific your measurement, the easier it will be to determine whether your program is working—or if it needs a little more time in the oven.
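To show why a concrete target like that is easier to check, here is a small sketch that scores hypothetical post-workshop survey responses against the 80% threshold described above. The responses and variable names are made up for illustration.

```python
# Hypothetical post-workshop survey: did each participant report increased
# confidence in advocating for themselves? (True = yes, False = no)
responses = [True, True, False, True, True, True, False, True, True, True]

target = 0.80  # the threshold written into the outcome statement
share_reporting_increase = sum(responses) / len(responses)

print(f"{share_reporting_increase:.0%} reported increased confidence")
print("Outcome met" if share_reporting_increase >= target else "Outcome not yet met")
```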
A good recipe makes cooking easier, and a good logic model makes evaluation clearer. It helps you know what to expect, what to measure, and whether you’re on the right track. And just like in cooking, sometimes things don’t turn out exactly as planned. But that’s okay! A logic model doesn’t just tell you whether your program worked; it helps you figure out why it did (or didn’t) so you can adjust and improve next time. So next time you’re designing a program, think of it like baking. Gather the right ingredients, follow the steps, and when it’s done... taste the results to see if it was a success! After all, no one wants to spend all day in the kitchen just to end up with a flop.
Understanding Levels of Measurement
Why data types matter in evaluation.
Not all numbers are created equal. Sure, they might all look like digits on a spreadsheet, but some numbers are really names, others have order, and some let you do fancy math. Understanding the level of measurement of your data (nominal, ordinal, interval, or ratio) helps you make sense of what you can (and can’t) do with it. Think of it like sorting laundry: you wouldn’t toss a red sweater in with your white socks, and you shouldn’t treat all numbers the same way either.
Let’s start with nominal data, which is the simplest type: just labels or categories with no real numerical meaning. Think of eye color, dog breeds, or even your social security number! You can count social security numbers, but you can’t meaningfully say that one social security number is greater or less than another. Similarly, you can’t average a group of social security numbers together. Nominal data is great for counting and even grouping, but you won’t be calculating means or running complex stats here.
Next up is ordinal data, which introduces order but still lacks precise numerical differences. Think of movie ratings (one star to five stars) or finishing places in a race (gold, silver, bronze). You know that five stars is better than three, and first place is better than third, but you don’t necessarily know by how much. Was the five-star movie twice as good as the three-star one? Hard to say. Because of this, ordinal data works well with ranking-based statistics but not with mathematical operations like addition or division.
Then there’s interval data, where numbers are evenly spaced but don’t have a true zero. Classic example? Temperature in Fahrenheit or Celsius. The difference between 30 and 40 degrees is the same as between 80 and 90, but zero degrees doesn’t mean “no temperature.” You can add and subtract these numbers, but you can’t multiply them meaningfully (you wouldn’t say 80°F is twice as hot as 40°F). Interval data lets you do more with statistics, like calculating means and standard deviations, but you still have to be careful with ratios.
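A quick back-of-the-envelope check makes the “twice as hot” point concrete. Converting both readings to Kelvin, a scale that does have a true zero, shows the ratio is nowhere near two to one; the short sketch below is just that arithmetic.

```python
def fahrenheit_to_kelvin(f):
    """Convert degrees Fahrenheit to Kelvin, a scale with a true zero."""
    return (f - 32) * 5 / 9 + 273.15

cool = fahrenheit_to_kelvin(40)   # about 277.6 K
warm = fahrenheit_to_kelvin(80)   # about 299.8 K

# The ratio is roughly 1.08, not 2.0, so "twice as hot" doesn't hold.
print(f"80 F / 40 F in Kelvin: {warm / cool:.2f}")
```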
Finally, we have ratio data, which is the gold standard of measurement. It’s like interval data, but with a true zero, meaning you can make meaningful comparisons. With height, weight, time, or income, zero actually means none of the thing being measured. If you have 10 dollars, you really do have twice as much as someone with 5 dollars. With ratio data, the statistical world is your oyster... you can add, subtract, multiply, and divide to your heart’s content.
So why does all this matter? Because different types of data require different statistical tools. If you’re working with nominal data, you’re looking at frequencies and chi-square tests. Ordinal data calls for medians and rank-based tests. Interval and ratio data let you bust out the big guns, like t-tests, ANOVA, and regression. Not sure what test to use? Check out StatsBee, a tool I developed to help you pick the right statistical test based on your data, at https://chriscopp.com/Statsbee/index.html. Because no one wants to mix up their sweaters and socks when it comes to data analysis.
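For readers who analyze data in Python, here is a brief sketch, using the widely available scipy.stats package, of how the test you reach for changes with the level of measurement. All of the numbers are invented for illustration, and the sketch is not a substitute for checking each test’s assumptions (or for asking StatsBee).

```python
from scipy import stats

# Nominal data: counts per category call for a chi-square test of independence.
# Rows are two hypothetical programs; columns are completed vs. did not complete.
completion_counts = [[40, 10], [30, 20]]
chi2, p_nominal, dof, expected = stats.chi2_contingency(completion_counts)

# Ordinal data: 1-5 satisfaction ratings call for a rank-based test,
# such as the Mann-Whitney U.
ratings_group_a = [5, 4, 4, 3, 5, 4]
ratings_group_b = [3, 2, 4, 3, 3, 2]
u_stat, p_ordinal = stats.mannwhitneyu(ratings_group_a, ratings_group_b)

# Ratio data: hours of practice (a true zero) support an independent-samples t-test.
hours_group_a = [6.5, 7.2, 5.9, 8.1, 6.8]
hours_group_b = [4.1, 5.0, 4.7, 3.9, 5.3]
t_stat, p_ratio = stats.ttest_ind(hours_group_a, hours_group_b)

print(f"Nominal (chi-square): p = {p_nominal:.3f}")
print(f"Ordinal (Mann-Whitney U): p = {p_ordinal:.3f}")
print(f"Ratio (t-test): p = {p_ratio:.3f}")
```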
Step-by-Step Guide to Using Logic Models
Implementing a logic model with clear examples.
When launching a complex program, having a structured plan is essential to ensure that resources are used effectively, activities are carried out as intended, and meaningful results are achieved. A logic model serves as the foundation of your project, mapping out the key components of a program from start to finish. It helps clarify how inputs lead to activities, how activities produce measurable outputs, and how those outputs contribute to broader outcomes and long-term impacts. To illustrate how this works in practice, we’ll walk through the logic model of SnowPacs, a research project that examined water allocation challenges in snow-dominated basins and their implications for food security and agricultural economies. While this example highlights one approach to evaluation, it’s important to remember that your evaluation may look very different, depending on your program’s goals and context.
The first step in constructing a logic model is identifying inputs... the foundational resources that make a project possible. For SnowPacs, this included a team of researchers specializing in institutional and resource economics, hydrology, climate modeling, and governance. Funding was secured to support salaries, travel, and research operations, and partnerships were established with a diverse group of water management stakeholders. Additionally, faculty from a land-grant university provided expertise and research facilities. These inputs represented the investments necessary to ensure the project’s success. In your evaluation, inputs might look completely different. Perhaps your project relies more on community volunteers, local funding, or partnerships with schools or nonprofits. Identifying the unique resources that fuel your program is a key first step.
With inputs in place, the next step was to define the activities that would be carried out using these resources. SnowPacs researchers developed hydrological-climate models to simulate variable water availability, which were then integrated into economic models that analyzed how changes in snowmelt and water storage affected agricultural production. The project also incorporated institutional analysis, examining how water governance structures influenced decision-making in agricultural basins. Your program’s activities might take an entirely different form, perhaps organizing public workshops, implementing pilot programs, or conducting surveys. What matters is ensuring that your activities are clearly linked to your intended outcomes.
Once the activities have been listed, the next step is tracking project outputs, or tangible products resulting from the work. In SnowPacs, this included the creation of hydrological and economic models, documentation of stakeholder perspectives, and the publication of research findings. The project also generated visualizations of water distribution to help illustrate key trends, produced peer-reviewed Extension materials, and maintained a public website to share ongoing updates. These outputs served as immediate evidence that the project was progressing as planned. In your program, outputs might take many different forms. Instead of research models and publications, your outputs might be the number of training sessions held, the number of individuals who completed a program, or the development of new educational materials. The key is that outputs should be measurable and directly tied to your activities... they represent what your program is producing, not yet the change it is creating.
However, outputs alone do not demonstrate a program’s impact. The real measure of success comes from examining outcomes, which reflect changes in knowledge, behavior, or conditions that result from a project. SnowPacs produced several short-term outcomes, including an increase in stakeholder knowledge about water rights, allocation mechanisms, and the effects of climate change on water availability. Researchers also gained valuable insights into stakeholder challenges and the role of interdisciplinary collaboration in addressing complex water management issues. In the medium term, these findings influenced water management decisions, with stakeholders beginning to incorporate co-produced knowledge into their strategies. Over the long term, the project aimed to contribute to more resilient food systems and improved water governance policies that help agricultural communities adapt to changing environmental conditions.
Your program’s outcomes will depend entirely on its goals. If you are working on a community nutrition program, your short-term outcomes might be increased knowledge of healthy eating habits, while your long-term outcomes could be measurable reductions in obesity rates. If your program focuses on workforce development, an outcome could be job placements for participants or improved industry partnerships. The critical point is ensuring that your outcomes are realistic, measurable, and clearly connected to your program’s objectives.
A key part of any logic model is its evaluation component, which ensures that the program is not only progressing as planned but also achieving meaningful results. In SnowPacs, the evaluation approach combined qualitative stakeholder feedback with quantitative tracking of outputs and outcomes. This helped the team assess whether models were effectively capturing real-world dynamics and whether stakeholders were gaining useful insights from the research. Again, your program’s evaluation might look completely different. If you’re running a training program, you might track participant progress through pre- and post-surveys. If you’re implementing a policy initiative, you may rely on data analytics to monitor long-term trends. Regardless of the approach, evaluation is what ties the entire logic model together, helping to determine what’s working, what needs adjustment, and how the program is making a difference.
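If your evaluation does rely on pre- and post-surveys, one common way to summarize change is a paired comparison of each participant’s before and after scores. The sketch below uses made-up scores and scipy’s paired t-test purely as an illustration of that idea.

```python
from scipy import stats

# Hypothetical knowledge scores (0-100) for the same eight participants
# before and after a training program.
pre_scores  = [55, 62, 48, 70, 66, 59, 61, 53]
post_scores = [68, 71, 60, 78, 72, 70, 69, 64]

# A paired t-test asks whether the average within-person change is
# larger than we would expect from chance alone.
t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)

mean_change = sum(post - pre for pre, post in zip(pre_scores, post_scores)) / len(pre_scores)
print(f"Average change: {mean_change:.1f} points (p = {p_value:.3f})")
```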
The SnowPacs logic model demonstrates how a well-structured plan can guide a complex research initiative from concept to impact. However, it’s just one example. Your program might need a very different approach depending on its scope, audience, and intended outcomes. Some evaluations focus on behavior change, while others prioritize policy impact, knowledge dissemination, or economic benefits. The strength of a logic model lies in its flexibility—it can be adapted to fit the unique needs of any program, helping to clarify objectives, track progress, and ensure that efforts are aligned with meaningful results.