Define the Intent of the Evaluation

The agreement among collaboration partners on the purpose of an evaluation

 

WHY IT MATTERS: Facilitating consensus among partners on the purpose of the evaluation allows the collaboration to move forward with generating insights that all partners agree are relevant, while acknowledging that partners may have differing goals for the evaluative process.

Differing perspectives on the purpose of the evaluation.

Influenced by sector- and organization-specific practices, norms, and interests, partners may have differing goals for the evaluation. Some partners may propose an evaluation focused on the collaborative process so that others can replicate the collaboration’s efforts; others may propose evaluating outcomes in order to report success to their constituencies; still others may propose evaluating both process and outcomes in order to adjust collaboration strategy (assuming the collaboration is ongoing rather than project-specific). Partners must reconcile these and other differing perspectives; a lack of clarity about the purpose of the evaluation creates confusion over what information should be collected and how it should be assessed, ultimately limiting the collaboration’s ability to complete the evaluation.

“Financing Clean Energy in Berkeley”

Launched in 2008, Berkeley FIRST allowed property owners to borrow money from the City’s Sustainable Energy Financing District to install solar panels and repay the costs through their property tax bills over a period of 20 years. From the outset of the cross-sector collaboration that gave rise to Berkeley FIRST, the City of Berkeley and its partners sought to create a standardized and scalable financing model that other cities could easily adopt if it proved successful. They agreed that conducting interim and final evaluations of a pilot program operating at a modest scale would help them determine whether this approach to financing solar installations was feasible. To this end, the Office of Energy and Sustainable Development and the University of California, Berkeley’s Renewable and Appropriate Energy Laboratory conducted initial and final reviews of the program. The initial evaluation in 2009 highlighted program achievements and pitfalls, as well as the motivations and opinions of participants and of residents who were not in the pilot group. It also described why some participants withdrew and decided to finance their solar panels through home equity loans instead, suggesting that the pilot program interested them but that they found home equity loans less expensive. The final evaluation, conducted in 2010, focused on assessing the feasibility of scaling Berkeley FIRST to a statewide Property Assessed Clean Energy (PACE) program. The lessons learned included “adding energy and water efficiency measures to reduce paybacks, increasing the scale of programs to attract new and cheaper sources of capital, and developing uniform rules for first position liens to ensure that the projects result in a reduction in overall housing costs.”
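
To make the repayment mechanism concrete, the short sketch below estimates a fixed annual property-tax repayment for a PACE-style assessment using the standard amortization formula. The 20-year term comes from the case above; the loan amount and interest rate are hypothetical assumptions for illustration, not figures from the Berkeley FIRST evaluations.

  # Illustrative sketch: estimating the annual property-tax repayment for a
  # hypothetical PACE-style assessment. The principal and rate below are
  # assumptions for demonstration, not figures from Berkeley FIRST.

  def annual_payment(principal: float, annual_rate: float, years: int) -> float:
      """Fixed annual payment that fully amortizes the assessment."""
      if annual_rate == 0:
          return principal / years
      factor = (1 + annual_rate) ** years
      return principal * annual_rate * factor / (factor - 1)

  if __name__ == "__main__":
      principal = 28_000    # hypothetical cost of a residential solar installation
      annual_rate = 0.07    # hypothetical financing rate
      years = 20            # repayment term used by Berkeley FIRST
      payment = annual_payment(principal, annual_rate, years)
      print(f"Estimated annual property-tax repayment: ${payment:,.2f}")
      print(f"Total repaid over {years} years: ${payment * years:,.2f}")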

  • What are partners’ differing goals for the evaluation? How will we reconcile those differences to arrive at a clear understanding of the purpose of the evaluation?
  • Do we have internal capacity to conduct the evaluation(s) partners have agreed upon? Or will we enlist a third party to conduct the evaluation(s)?

“Evaluating Collaboratives: Reaching the Potential” from the University of Wisconsin-Cooperative Extension

This comprehensive resource is valuable at every stage of an assessment process. For guidance in defining the intent of an evaluation, see pp. 35-39 of Section 3: Evaluation Practice, which provides helpful explanations and templates for considering the range of questions, uses, and audiences for a potential evaluation. Also see Sources of Information and Methods of Data Collection on p. 43 for ways to gather information for the evaluation, and the example Evaluation Worksheet on p. 44 for a template to record what partners want to evaluate and how they will evaluate it.
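
As a lightweight illustration of the kind of record such a worksheet captures, the sketch below shows one way a collaboration might structure each evaluation question alongside its purpose, audience, and data sources. The field names and the example entry are assumptions for illustration, not the worksheet’s actual columns.

  # Illustrative sketch of a per-question evaluation record, loosely inspired
  # by the worksheet described above. Field names and the sample values are
  # assumptions, not the worksheet's actual columns.
  from dataclasses import dataclass, field
  from typing import List

  @dataclass
  class EvaluationItem:
      question: str                 # what partners want to learn
      purpose: str                  # e.g., improve process, report outcomes
      audience: List[str] = field(default_factory=list)            # who will use the findings
      information_sources: List[str] = field(default_factory=list) # where the data comes from
      collection_methods: List[str] = field(default_factory=list)  # how the data is gathered

  # Hypothetical example entry
  item = EvaluationItem(
      question="Did the pilot's financing terms attract eligible property owners?",
      purpose="Assess feasibility of scaling the program",
      audience=["City of Berkeley", "partner organizations"],
      information_sources=["participant surveys", "program enrollment records"],
      collection_methods=["survey", "document review"],
  )
  print(item)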

“The Partnership Toolkit” from Collaboration Roundtable

“The Partnership Toolkit” is a comprehensive guide to assist organizations in building and sustaining partnerships. See especially Tool 18: Evaluation on pp. 108-112 for step-by-step guidance and activities for developing an evaluation focused on assessing the partnership process rather than outcomes.
