Evaluation

Whether done internally or with an external evaluator, the goal of an evaluation is simple: to collect data and insights, learn what works well and what doesn’t, and make improvements. This section shares evaluation tools used by the Northeast Farm to School Institute and tips to help you develop an evaluation approach for your own Institute. Before reading further, here are some helpful things to keep in mind:

  • The Northeast Farm to School Institute's tools represent what has worked well over a decade of refining their evaluation approach. However, these tools may not suit your Institute's needs.

  • Consider what makes your Institute unique and what information will be most valuable to guide your decisions. What data do you need to collect that will inform meaningful learning and improvements at your Institute?

  • Based on your capacity, priorities, and differences from the Northeast Farm to School Institute model, you can adapt elements of these tools to create your own tailored set. By developing your own evaluation tools with your team, you become the owners of your work and will be more likely to digest and use the evaluation’s results. 


Evaluation at the Northeast Farm to School Institute

The Northeast Farm to School Institute uses evaluation to: 

  • understand participants’ needs and plan programming based on those needs.

  • inform decision making for future planning and resource prioritization.

  • measure and increase the Institute’s impact through quantitative data and storytelling.

  • engage participants and Institute partners and facilitate learning (e.g., using interactive activities that collect data and provide opportunities for reflection simultaneously).

  • tell the story of the Institute and its impact on school teams for fundraising and reporting purposes.

Organizers of the Northeast Farm to School Institute engage in evaluation at several points throughout the Institute.

  • A pre-program survey is conducted to gain a more nuanced understanding of the teams, their experience with and capacity for farm to school, their needs, and their interests. Institute organizers use the results to plan and adjust which topics to address and, if needed, to change their facilitation approach.

  • A post-retreat survey is conducted to learn what participants found most useful about the retreat, key ideas and plans they came away with, their experience with the coaches, and needs for support for the rest of the year. Results are used by Institute organizers to inform future retreat logistics and facilitation style as well as the best ways they can support teams during the implementation phase. 

  • A post-workshop survey is conducted after various workshops during the year to capture brief participant reflections on what was most useful and suggestions for improvement. This survey is usually completed in the form of an “exit card,” either on paper or digitally. 

  • A final survey is conducted to document team successes, challenges, and continued needs for support, as well as to collect data to compare with pre-program survey results and understand what changed as a result of participating in the Institute (see the sketch after the questions below for what that comparison can look like). Once the final survey results are in hand, the team takes time to sit down and digest all the data they collected. Some of the questions they ask themselves include:

    • In which areas did teams gain the most skills?

    • What aspects of the Institute can be improved?

    • What steps can we take to help with planning for the next year?

    • What stories can we tell that communicate the impact of the Institute on the teams’ implementation of farm to school programs?
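
If it helps to picture that pre/post comparison, here is a minimal sketch in Python, assuming both surveys asked the same questions on a 1-5 rating scale. The topic names and ratings are illustrative placeholders, not real Institute data.

```python
# Sketch: compare average self-rated capacity before and after the Institute.
# Assumes the pre-program and final surveys asked the same 1-5 rating questions.
# All topic names and ratings below are illustrative placeholders.

pre_survey = {
    "local procurement": [2, 3, 2, 1, 3],
    "classroom integration": [3, 2, 4, 2, 3],
}
post_survey = {
    "local procurement": [4, 4, 3, 3, 5],
    "classroom integration": [4, 3, 5, 4, 4],
}

def average(ratings):
    return sum(ratings) / len(ratings)

for topic in pre_survey:
    before = average(pre_survey[topic])
    after = average(post_survey[topic])
    print(f"{topic}: {before:.1f} -> {after:.1f} (change {after - before:+.1f})")
```

Even this much arithmetic can show where teams gained the most skills.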


A few things to note from past experience:

  • Getting most or all of your Institute participants to complete a short survey, preferably during scheduled time before they leave, will likely be more valuable than getting only a few to participate in more in-depth evaluation. 

  • If the final team gathering takes place virtually, the survey response rate may be low. In this case, consider sending follow-up emails and using incentives to encourage survey completion. If you choose to use incentives, weigh the dollar amount or quantity carefully and consider how it could bias responses.


General Evaluation Tips

Evaluation can be complicated, but it doesn’t have to be! Keep these tips in mind as you design your Institute's evaluation approach.


What is Data?

Data is information collected in an organized, systematic way. Being "systematic" means asking all participants the same questions or collecting data consistently. This makes the data more credible than casual observations.

Data comes in two main forms:

  • Stories (qualitative data): These can include written or spoken perspectives, observations, and feedback. For example, asking Institute participants, “What was the most valuable thing you learned at the retreat?” 

  • Numbers (quantitative data): These can include counts, rating scales, and percentages. For example, asking Institute participants to rate their agreement with a statement like, “The Institute increased my understanding of the 3Cs model.”

Analysis involves making sense of the stories or numbers that were gathered and will very likely not require fancy statistics. The goal is finding useful insights in the data. If you prioritize well, the first few data points are often the most useful.
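
To make “no fancy statistics” concrete, here is a minimal sketch of the kind of tallying that usually suffices for a rating question like the one above. The responses are made-up placeholders.

```python
from collections import Counter

# Sketch: tally responses to a single rating question.
# No fancy statistics required; the responses below are made up for illustration.
responses = [
    "Agree", "Strongly agree", "Agree", "Neutral",
    "Strongly agree", "Agree", "Disagree", "Agree",
]

counts = Counter(responses)
total = len(responses)
for answer, count in counts.most_common():
    print(f"{answer}: {count} ({count / total:.0%})")
```

A count and a percentage per answer is often all the quantitative analysis an Institute evaluation needs.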


Start Small

How much evaluation you can do will depend on your capacity and resources. Remember, some data is better than no data. If your evaluation resources are limited, start by collecting some data in one priority area and build from there in subsequent years. For example, if your team only has capacity for a short survey or a quick interview, focus on the big questions like:

  • Which part of the retreat was most valuable for you and why?

  • How might we improve the retreat to offer a more powerful learning experience next year? 

  • What further support does your team need most right now to work toward the goals in your action plan?

Focus on Stories

Open-ended conversations and feedback often provide the best insights about what works well and what needs improvement. Surveys with open-ended questions can also capture useful perspectives if live interviews are not possible. For small groups, gathering stories is often more enlightening than crunching numbers, since statistics require larger data sets to be meaningful. For a modest-sized group, such as the one you are likely to have at your initial Institute, narrative data will likely be most insightful.

Choose an Evaluation Coordinator

Designate one person to be responsible for managing the evaluation process. Having a dedicated coordinator boosts accountability and organization. If you are working with an external evaluator, then choose a point person on your Institute team to be responsible for engaging the team in the process. This person doesn’t need to do everything; their role is to keep the team organized and moving forward. For example, the point person can:

  • create the evaluation timeline and task list.

  • facilitate development of data collection tools.

  • coordinate data gathering during the Institute.

  • facilitate conversations with the team to interpret the meanings of the data.

  • take the lead on writing up findings to share with funders and stakeholders.

  • be the primary point of contact for external evaluators.


Determine Your Audience

The very first thing to ask yourself when planning your evaluation is, “Who am I doing this for?” It’s not realistic to expect that you can collect data to satisfy all of your stakeholders. The best place to start is with your highest priority stakeholder. Who is this person or group and what important decisions do they need to make? Once you determine this, figure out how you will get the data that helps that one person or group shed light on one important decision they need to make. Consider your core team and participants first, then extend the thinking to “outside” groups.

PEER Associates’ Stakeholder Prioritization Tool, based on the work of Michael Quinn Patton, will walk you through a streamlined process of stakeholder prioritization that will help you clarify who your evaluation matters to so you can best design it to meet their needs.


Logic Models

A logic model is a visual diagram that shows how a program's activities connect to its intended results. It’s essentially a road map for your program, in this case, your Institute. There is no rigid template for developing a logic model. Think of it more as a flexible framework that takes a complicated initiative and presents it as a simple, easy-to-understand visual. It’s a living model that will evolve over time as you grow and adapt your Institute.


Logic models serve three main purposes:

  1. Internal alignment: They build a shared understanding of the work among your team so everyone can see the big picture.

  2. External communication: They clearly communicate to partners and funders what your work looks like and the foundation on which your evaluation is built.

  3. Starting point for measurement: They guide evaluation and data collection decisions to ensure that your Institute is creating the outcomes described in the logic model and that your work is consistent with your overarching Institute values statement.


There are different schools of thought as to what should be included in a logic model. The best logic models are clear, concise, and contain enough detail to be useful for various purposes. At the very least, your Institute logic model should articulate the following (a short sketch after the list shows one way to jot these down):

  • Activities: Activities are the key parts of your Institute. These may be the same as or similar to the Institute’s core components. 

  • Outcomes: Outcomes illustrate how participants change as a result of taking part in the Institute. 

    • What skills and knowledge do they develop? 

    • What connections might they make?

    • How might their beliefs about farm to school or understanding of their roles change?

  • Impacts: Impacts illustrate, in a realistic and localized way, how the world changes as a result of what Institute participants do.

    • How do farm to school programs change? 

    • What happens for educators and school staff who didn’t attend the Institute? 

    • What happens for students? 

    • How does school culture change? 
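
Here is a minimal sketch of one way to jot down those pieces as structured text before designing the visual. Every entry is an illustrative placeholder, not a prescribed model.

```python
# Sketch: the minimum pieces of a logic model as plain structured data.
# All entries are illustrative placeholders for your own Institute's content.
logic_model = {
    "activities": [
        "summer retreat with coaching",
        "year-round workshops",
        "team action planning",
    ],
    "outcomes": [
        "teams gain farm to school skills and knowledge",
        "teams build connections with local producers",
    ],
    "impacts": [
        "school meals feature more local foods",
        "school culture increasingly supports farm to school",
    ],
}

# Print a quick text outline to react to before drawing the diagram.
for column, entries in logic_model.items():
    print(column.upper())
    for entry in entries:
        print(f"  - {entry}")
```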

Tips and tools for getting the most out of working on your Institute logic model:

Build Shared Understanding

Developing a logic model can bring staff and stakeholders together around how your Institute will create change. It allows diverse perspectives to shape a common view of how activities will lead to outcomes. The process of coming to agreement on which words belong on the logic model (and which don’t) will surface any divergent assumptions team members may have been holding and drive your team to resolve them.

Rather than gathering a large group of staff and stakeholders to build a logic model from scratch, you may find it more effective to form a core team to create a draft logic model, which can then be circulated for feedback to a larger group to ensure input from key stakeholders. You may also want to use this Farm to School Institute Logic Model Building Kit to streamline your logic model building process.


Remember that a logic model is always a work in progress!

Guide Evaluation

A logic model provides a roadmap of the outcomes you expect over the short and long term. This framing can help you zero in on the most meaningful indicators to track, allowing you to focus your evaluation resources on a few priority areas and leave other indicators to be supported by assumptions, data from elsewhere, or future evaluation work. Which parts of the logic model to evaluate now should be informed by the stakeholder prioritization process described earlier. 

To use your Institute logic model as a guide for evaluation, consider what is listed in each column and what you might want to know about it. For example: 

  • Inquire about participant experience at the Institute: what worked well and what didn’t?

    • How can coaching (e.g., workshops, learning journeys, or the Institute in general) be improved to better help you achieve your farm to school goals?

    • If you could attend the retreat for only three hours, which part would you attend? Why?

    • Which parts of the retreat, if any, would you like to see more of? 

  • Investigate changes in outcomes. 

    • How did participation in the workshop increase your understanding of how to procure local foods? 

    • How did participation in the retreat increase your team’s confidence in pursuing the goals in your action plan? 

  • Explore the connections between activities and outcomes.

    • Which aspects of the Institute were most effective in building your team’s capacity to integrate the classroom and cafeteria aspects of farm to school?

    • How effective was your coach in supporting your team’s ability to engage community stakeholders?

  • Explore the connections between impacts and outcomes.

    • How did participating in the Institute help your team work toward nutrition outcomes for students? 

Communicate with Different Partner Groups

A logic model creates a simple visual picture of your program's theory of change. This makes it easier to show your team and external partners, such as funders, community leaders, and Institute participants, how your activities will lead to meaningful outcomes. The following tips will help you ensure that your Institute logic model is an effective communication tool.

  • Consider the graphic layout of your model. How can you (sparingly) use shapes, font sizes, and colors to organize information so readers can most easily take in your model?

    • Consider applying the 3-30-3 rule to your logic model’s graphic design (3 seconds to grab a reader's attention, 30 seconds to engage them, and roughly 3 minutes for them to spend reading the content).

  • Keep language simple.

    • Minimize jargon and use everyday terms wherever possible. Aim for a fourth-grade reading level.

  • Try not to overcrowd your logic model.

    • In most cases, it’s more effective to err on the side of simplicity: less is more.