Monitor and Evaluate Your Efforts
Monitoring and evaluation enable you to examine activities and outcomes, engage stakeholders, expand knowledge, and improve your SART’s ability to meet victims’ needs and criminal justice objectives. In addition, demonstrating program effectiveness can be an invaluable tool for obtaining buy-in from potential new partners who may have initially declined to participate on your SART.1 Rather than thinking of monitoring and evaluation as something done to the team, consider them a function of the team.
There is no single "best" approach that works in all situations. It is important to decide why you are monitoring and evaluating your SART, the questions you want to answer, and the methods of collecting and analyzing data that will provide useful and reliable information. It is also important to start planning for your evaluation early; planning during the SART development process can help you establish relevant, measurable goals and objectives.
When monitoring and evaluating your SART’s efforts, consider the following:2
This section reviews—
Monitoring Versus Evaluating
Monitoring and evaluation tell you what works, what doesn’t, and why. The process can help you increase efficiency, improve communication, and measure the benefits of your SART for victims, service providers, funders, policymakers, and communities.
Monitoring and evaluation are intimately connected, but they have separate functions. Doing both can lay a foundation that moves your SART beyond good intentions to an ongoing system that meets diverse and ever-changing needs and conditions. For example, the process can3—
Although monitoring and evaluation are closely linked, "evaluation is not a substitute for monitoring nor is monitoring a substitute for evaluation":4
Evaluating Collaboratives: Reaching the Potential. Provides information on the methods available for evaluating collaboratives and the feasibility that collaboratives will succeed.
A Practical Guide to Evaluating Domestic Violence Coordinating Councils. Describes strategies to use in evaluating an organization’s effectiveness.
A UNICEF Guide for Monitoring and Evaluation. Explains and contrasts monitoring and evaluation processes.
Source: Courtney Ahrens, Gloria Aponte, Rebecca Campbell, William Davidson, Heather Dorey, Lori Grubstein, Monika Naegeli, and Sharon Wasco, Introduction to Evaluation Training and Practice for Sexual Assault Service Delivery, Michigan Public Health Institute and the University of Illinois at Chicago, 1998.
Types of Evaluation
When SART agencies collaborate, "they engage in a range of activities. Some of these activities are internally focused and some are externally focused."9 Internally focused evaluations, therefore, would involve the inner workings of the team, such as the decisionmaking process, team membership, or leadership. Externally focused evaluations would assess factors that affect stakeholders beyond the team itself, such as the frequency of sexual assault reporting, victims’ experiences with medical and legal professionals and advocates on your team, the number of community referrals, how your SART affects victims’ recoveries, and prosecutorial outcomes.
Evaluating how your team functions and evaluating whether it is ultimately meeting its goals are both important. This section reviews these two types of evaluation and also reviews a third, which covers the overall impact of your SART:
There are no hard and fast rules about when to collect data, but timing can be very important. If you want to examine change over time, ideally you would collect initial (baseline) data during the planning stages and again at various points once the SART is established. Always leave enough time between your initial data collection and subsequent collections so that desired changes have a chance to occur. Not doing so could lead you to conclude erroneously that an activity or response is ineffective.
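As a minimal illustration of the baseline-versus-follow-up comparison described above, change over time can be expressed as a simple percentage. The measure and the numbers below are hypothetical, not drawn from any real SART:

```python
# Hypothetical sketch: comparing a baseline measurement collected during
# planning with a follow-up measurement collected after the SART is
# established. Measure names and values are illustrative only.
def percent_change(baseline, followup):
    """Percent change from the baseline to the follow-up measurement."""
    return (followup - baseline) / baseline * 100

# e.g., victims referred to SART services per quarter
baseline_referrals = 40   # collected during the planning stage
followup_referrals = 50   # collected a year after the SART is established
print(f"{percent_change(baseline_referrals, followup_referrals):+.1f}%")  # +25.0%
```

If the follow-up comes too soon after the baseline, a flat or negative figure here may reflect timing rather than an ineffective response.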
Examples of Data Collected for Evaluations
Victim Characteristics: Age, race, socioeconomic status, gender, and level of involvement with criminal justice
SART Services: Number of victims served and types of services
Team Information: SART membership, frequency of team meetings, and whether case reviews are performed
SART Outcomes: Victim satisfaction with services, cases investigated and prosecuted, and victim referrals
SART Impact: Changes in policies or resources
For more information, see—
A process evaluation examines how your team functions. Use it to help you address the quality of your team’s—
Much of the information in this section was adapted from Nicole Allen and Leslie Hagen, 2003, A Practical Guide to Evaluating Domestic Violence Coordinating Councils, Harrisburg, PA: National Resource Center on Domestic Violence, 9–42.
Research suggests that the internal working climate that [teams] foster is an important part of their success. . . . [Teams] handling conflict effectively are more likely to generate creative solutions, encourage needed changes, and avoid ‘groupthink.’10
Evaluating your team’s working environment can include assessing whether your team has a shared interagency mission, the effectiveness of its decisionmaking process, and the ways that it manages conflict. A process evaluation would provide important information about where changes are needed to improve the team’s functioning.
Consider asking team members the following questions when evaluating your team’s working environment:
To what degree has the team developed a shared mission?
To what degree do members feel their input informs team decisionmaking?
To what degree are disagreements handled effectively?
When surveying team members, make sure to maximize confidentiality so that all members feel free to candidly share their beliefs about how the team functions.
There are many ways to handle disagreements. In general, getting to the root of the problem is thought to be the most effective strategy. Compromising, agreeing to disagree, and ignoring conflict are commonly employed by teams, but these approaches are not always effective because they often leave root causes unresolved.
Infrastructure can refer to how your team is organized, who your team members are, and how effective your team leadership is. Assessing infrastructure is important because there is a positive correlation between organizational capacity and team effectiveness.
Determine whether the team is organized effectively by asking team members the following questions: Does the team have—
Determine the adequacy and involvement of the team’s membership by asking members to rate the level of participation by each agency and the level of training and expertise among SART members.
The following table is one example of this type of evaluation.
Agency | Team Member | Level of Participation (1 = inactive and 5 = very active) | Areas of Training/Experience (examples)
Rape crisis center | Advocacy | | Sexual assault trauma and healing, advocacy, criminal justice, civil justice, health care, court monitoring, immigration, bilingual fluency, evaluation, community education, disabilities, care of older adults, substance abuse, community organizing, writing, and so forth
Police department, sheriff’s office, dispatch | Law enforcement | |
Prosecuting attorney’s office | Prosecuting attorney | |
Hospital, forensic examiner organization, EMS | Health care | |
Determine whether leadership is effective by asking team members to rate the following statements regarding team leadership:
In This Toolkit:
Collaboration Factors Inventory. Asks survey questions to help assess factors that influence successful collaboration.
In conducting a process evaluation of your SART’s activities, you will be asking team members to gauge their satisfaction with the team’s activities. For example, you may want to ask your SART members the following questions:
To what degree have team efforts increased collaboration and communication? [Rate responses from 1 to 5.]
To what degree have team efforts increased the exchange of information, resources, and client referrals?
To help answer this question, you could create a survey that tracks the number of times per month that team members communicate across agencies. For example, team members might record and report how many times per month they contacted interdisciplinary agencies, referred victims, or received referrals (see sample survey below).
Name of Agency (completing form) _______________________________
Agency Name | Number of Times I Exchanged Information | Number of Referrals I Gave | Number of Referrals I Received
Rape crisis center | 3 | 5 | 1
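Survey forms like the sample above lend themselves to simple tallying when multiple monthly reports come in. The sketch below is a hypothetical illustration (field names, agency names, and counts are invented, not part of any SART survey instrument) of totaling each measure per partner agency:

```python
# Hypothetical sketch: tallying cross-agency exchange counts from
# monthly survey reports of the kind shown above. All field names
# and numbers are illustrative assumptions.
from collections import defaultdict

# Each record is one agency's monthly report about one partner agency.
survey_rows = [
    {"partner": "Rape crisis center", "info_exchanges": 3, "referrals_given": 5, "referrals_received": 1},
    {"partner": "Police department", "info_exchanges": 4, "referrals_given": 2, "referrals_received": 3},
    {"partner": "Rape crisis center", "info_exchanges": 2, "referrals_given": 1, "referrals_received": 0},
]

def summarize(rows):
    """Total each measure per partner agency across all monthly reports."""
    totals = defaultdict(lambda: {"info_exchanges": 0, "referrals_given": 0, "referrals_received": 0})
    for row in rows:
        for field in ("info_exchanges", "referrals_given", "referrals_received"):
            totals[row["partner"]][field] += row[field]
    return dict(totals)

summary = summarize(survey_rows)
print(summary["Rape crisis center"])  # {'info_exchanges': 5, 'referrals_given': 6, 'referrals_received': 1}
```

Comparing these totals across reporting periods is one way to show whether team efforts have increased the exchange of information and referrals.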
To what degree have team efforts increased service providers’ knowledge of community resources?
You may want to develop a process survey to determine team members’ understanding of available community services. Consider the following services:
[Teams] that establish clear outcomes from the beginning are more likely to construct a way to have those results achieved. [Teams] that don’t establish outcomes from the beginning are left saying ‘what happened?’11
Outcome evaluations look specifically at whether SARTs achieve their goals and whether their activities have the intended effect on victims and service providers. If you don’t have an appropriate survey tool to measure outcomes, consider taking the following actions when creating a new tool.12
Impact evaluations assess long-term intended and unintended outcomes. The basic question that impact evaluations seek to answer is, "What actual results did the outcome produce?"13 For example, your objective may be to expand and enhance community partnerships to support victims, regardless of which agency or organization they initially contact. The outcome from expanded partnerships may be that victims feel more supported and have more of their needs met. The impact of that outcome could mean that victims may be more willing to work with criminal justice providers over an extended time.
You may also find yourself with some unintended outcomes. For example, if you’ve increased the number of cases investigated, you may now see a backlog in the processing of medical forensic examination kits. At first glance, the outcome is positive (more cases investigated and prosecuted), but it can have a ripple effect on limited resources. Even a negative impact (e.g., strained resources) can prove invaluable if you use the results to revise policies or to inform policymakers and funders about evolving needs.
Consider asking team members to rate the following statements regarding the SART’s impact:
Overcoming the Costs of Evaluation
Evaluation can be expensive, but there are steps you can take to keep costs down:14
Activities: What you do to fulfill your mission (e.g., service delivery).
Effectiveness: A measure of your ability to produce a specific desired effect or result that can be qualitatively measured.
Evaluation: The systematic investigation of the merit or significance of your SART. Evaluations answer specific questions about whether current policies, guidelines, protocols, and responses work or do not work and provide the reasons why.
Goals: Future organizational and programmatic directions for your SART.
Impact: Positive and negative long-term effects of your SART’s responses, whether intended or unintended.
Indicator: A quantitative or qualitative measurement that is used to demonstrate whether your SART’s goals and objectives have been achieved. Indicators help to answer key questions, such as where you are currently, where you want to go, and whether you are taking the right path to meet your goals.
Inputs: Any resources dedicated to the SART. Examples are money, staff and staff time, volunteers and volunteer time, facilities, equipment, and supplies.
Monitoring: Tracks your performance against what was planned or expected according to your SART’s policies, protocols, and guidelines.
Objectives: Clear, realistic, specific, measurable, and time-limited statements of action that can move you toward achieving your goals.
Outcome Evaluation: A comprehensive examination of components and strategies intended to achieve a specific outcome. An outcome evaluation gauges the extent of success in achieving the outcome, assesses the underlying reasons for achievement or non-achievement, validates the contributions of SART agencies and allied professionals, and identifies key lessons learned and recommendations to improve performance.
Outcomes: Benefits for victims and the criminal justice system based on coordinated service delivery. For example, victims may be more willing to assist with the investigation and prosecution of their cases because their practical, emotional, psychological, social, and economic needs are prioritized.
Outputs: SART products and services, such as the numbers of community referrals, medical forensic exams, and cases investigated and prosecuted. Outputs are important because they should lead to a desired benefit for participants or target populations.
Performance Measurement: A system that analyzes, interprets, and reports on the achievement of outcomes.
Process Evaluation: Examines whether your SART is operating as intended. A process evaluation helps identify which changes are needed in design, strategies, and operations to improve performance.
Program Outcomes: The benefits of your services to victims, service providers, and the community.
Program Services (also known as activities or outputs): The services that you provide.
Qualitative Evaluation: Describes and interprets how well your SART is working, such as your SART’s relevance, quality of resources (including literacy materials), efficiency of policies and activities, and cost in relation to what has been achieved.
Quantitative Evaluation: Documents how much your SART has accomplished, such as the numbers of individuals served and materials produced, amount of outreach to underserved populations, number of community referrals, number of cases prosecuted, and number of sexual assault reports or prosecutions.
Air University Sampling and Surveying Handbook—Guidelines for Planning, Organizing, and Conducting Surveys
Covers types of surveys, their advantages and disadvantages, survey development, and analysis of data collection plans and surveys.
Basic Guide to Outcomes-Based Evaluation for Nonprofit Organizations with Very Limited Resources
Includes basic outcomes-based planning steps.
Community Tool Box, Section 5. Our Evaluation Model: Evaluating Comprehensive Community Initiatives
Provides information on the what, why, when, who, and how of evaluations; situational examples, tools and checklists; and overheads that summarize major evaluation points.
Evaluation Guidebook for Projects Funded by S.T.O.P. Formula Grants Under the Violence Against Women Act
Addresses reasons to participate in evaluations, explains logic models, and discusses ways to use evaluation results to improve a program’s functioning, performance, and promotion.
Evaluation Plan Workbook
Introduces the concepts and processes of planning a program evaluation.
Helps users develop customized evaluation plans from the ground up.
Getting the Most from Evaluation
Describes how one organization leveraged its evaluation findings in various innovative ways.
Getting To Outcomes 2004: Promoting Accountability Through Methods and Tools for Planning, Implementation, and Evaluation
Provides information on planning, implementing, and evaluating programs. Worksheets to help with action steps are included.
Including Evaluation in Outreach Project Planning
Includes information on developing outcomes-based projects and assessment plans, describes how to use a logic model at different stages of development, and provides sample data resources and evaluation methods.
Provides program planning and evaluation consulting, training, and Web-based tools to nonprofits and funders.
Key Steps in Outcome Management
Reviews the outcome measurement process, identifying specific steps and providing suggestions for examining and using the information.
Performance Measures for Prosecutors: Findings from the Application of Performance Measures in Two Prosecutors’ Offices
Describes a project that developed a performance measurement framework for prosecutors, which identifies measurable goals and objectives linked to possible performance measures.
Planning a Program Evaluation
Includes chapters on creating an evaluation, collecting and using data, and managing evaluations.
Planning a Program Evaluation: Worksheet
Provides a chart to document stakeholders, the focus of evaluations, data collection methods, data analysis and interpretation, and other information.
The Program Manager’s Guide to Evaluation
Describes the reasons to evaluate, who should conduct evaluations, how to hire and manage an outside evaluator, what needs to be included in an evaluation plan, how to get needed information, assessing evaluation information, and reporting evaluation findings.
The Programme Managers Planning, Monitoring and Evaluation Toolkit
Provides information on the purposes of evaluation, stakeholder participation with monitoring, a six-part section on planning and evaluation, and ways to identify program indicators.
Six Keys to Successful Organizational Assessment
Describes the importance of having uniquely tailored organizational assessments to meet program goals.
Taking Stock: A Practical Guide to Evaluating Your Own Programs
Provides guidance on designing and implementing evaluations.
A UNICEF Guide for Monitoring and Evaluation
Explains and contrasts the monitoring and evaluation processes.
Analyzing Outcome Information: Getting the Most from Data
Covers the necessary steps for outcome management and includes guidance on establishing an outcome-oriented measurement process.
Analyzing Qualitative Data
Discusses the analysis process including patterns and connections between topics, process enhancements, and pitfalls.
Collecting and Analyzing Evaluation Data
Provides steps for designing quantitative and qualitative data collection and analysis methods.
Collecting Evaluation Data: Surveys
Describes when surveys are appropriate and how to choose survey methods, plan and implement a survey, get good responses, and interpret survey results. The appendixes include sample mail and telephone surveys and press releases.
Ethical Guidelines for Research with DAWN
Addresses research involving individuals with disabilities.
Evaluating Collaboratives: Reaching the Potential
Reviews the methods and feasibility of collaborative evaluations.
Evaluation Plan Workbook
Offers an introduction to the concepts and processes of planning a program evaluation.
Getting To Outcomes 2004—Promoting Accountability Through Methods and Tools for Planning, Implementation, and Evaluation
Provides information on evaluation assessments, goals and objectives, planning, process and outcome evaluation, quality improvement, and sustainability.
Including Evaluation in Outreach Project Planning
Describes the relationship between planning and evaluation. Chapters include information on developing outcomes-based models and assessment plans and guides. The appendixes include information on logic models, sample resources, evaluation method, worksheets, and checklists.
Managing for Results Guidebook: A Logical Approach for Program Design, Outcome Measurement and Process Evaluation
Describes how STOP, VOCA, and Family Violence subgrantees in Tennessee can meet reporting requirements.
National Victim Assistance Academy Textbook, Chapter 17: Research and Evaluation
Covers how to obtain information about research findings, basic research terms, fundamental research and evaluation methods, and how information and technical assistance to conduct research can be acquired.
Outcome Measures for Sexual Assault Services in Texas
Provides Texas-specific outcome measures to support sexual assault service providers’ evaluation needs and practices.
Outcome-Based Evaluation: A Training Toolkit for Programs of Faith
Explains the key concepts and terms used in outcome evaluation and helps users clarify the purpose of their programs.
A Program Manual for Child Death Review: Strategies to Better Understand Why Children Die, Chapter 13: CDR Program Evaluation
Discusses types and methods of evaluation and the steps involved.
Questionnaire Design: Asking Questions with a Purpose
Describes how to develop questionnaires.
Sexual Violence Surveillance: Uniform Definitions and Recommended Data Elements
Provides recommendations for standardizing definitions and data elements for sexual violence surveillance.
Strategic Planning Toolkit, Section 6: Evaluate
Outlines the reasons for evaluation, evaluation strategies, evaluation plans, data collection, and how to report results.
Using Technology to Enable Collaboration
Describes how to develop and maintain technology-based solutions to serve victims of crime.
County of San Diego Sexual Assault Response Team: Systems Review Committee Report, Five Year Review
Includes demographic information and data on examination outcomes.
Descriptive Analysis of Sexual Assault Nurse Examinations in Bethel: 2005–2006
Examines the characteristics of 105 sexual assault victimizations recorded by sexual assault nurse examiners in Bethel, Alaska. It documents the demographic characteristics of patients, pre-assault characteristics, assault characteristics, post-assault characteristics, exam characteristics and findings, suspect characteristics, and legal resolutions.
The Efficacy of Illinois’ Sexual Assault Nurse Examiner (SANE) Pilot Program
Evaluates the SANE pilot program’s impact on victims of sexual violence. The research findings indicated that the program improved community responses to sexual assault victims and post-assault evidence collection and processing, although it is unclear whether the program resulted in higher prosecution rates.
Evaluating Data Collection and Communication System Projects Funded Under the STOP Program, Executive Summary
Reports the findings of a study that examined the uses of STOP grant funds intended to improve data collection and/or communication systems to address violence against women.
An Evaluation of the Rhode Island Sexual Assault Response Team (SART)
Presents evaluation results regarding the legal effects of receiving services from the Rhode Island SART.
An Evaluation of Victim Advocacy Within a Team Approach: Final Report Summary
Summarizes a study that evaluated victim advocacy services offered to battered women in Detroit and also examined other aspects of coordinated community responses to intimate partner violence.
How SAFE is New York City? Sexual Assault Services in Emergency Departments
Examines services available for rape victims in New York City emergency departments and analyzes hospital adherence to medical care protocol, forensic evidence collection, advocacy, followup care, and quality assurance.
Impact Evaluation of a Sexual Assault Nurse Examiner (SANE) Program
Compares post-rape experiences of victims and services at the University of New Mexico Health Sciences Center before the inception of the SANE program to those received at the Albuquerque SANE Collective after the program’s creation. Among the findings: rape victims treated by SANE nurses received more comprehensive medical attention, had forensic evidence gathered more frequently, and received more referrals than victims who sought medical attention before the SANE program began.
Improving Services to Victims of Sexual Assault: An Evaluation of Six Minnesota Model Protocol Development Test Sites
Presents the findings of an evaluation of the Model Protocol Project’s implementation in six sites in Minnesota; the project was developed to bring multidisciplinary agencies together to help victims of sexual assault.
The Police Interaction Research Project: A Study of the Interactions that Occur Between the Police and Survivors Following a Sexual Assault
Explores what happens between sexual assault survivors and the police officers and detectives of the New York City Police Department.
Process Evaluation of the Kansas City/Jackson County Response to Sexual Assault
Assesses information on and includes recommendations for evidence collection, DNA analysis, multidisciplinary sexual assault responders, medical facilities, the SART and the SANE response, training, and protocol development.
Reporting Sexual Assault to the Police in Hawaii
Presents findings from a study that looked for variables that facilitated or hindered reporting of sexual assault to the police in Hawaii.
A Room of Our Own: Sexual Assault Survivors Evaluate Services
Examines rape survivors’ perspectives on sexual assault services provided by New York City hospitals, rape crisis/victim assistance programs, law enforcement, and the criminal justice system.
Sexual Assault Experiences and Perceptions of Community Response to Sexual Assault: A Survey of Washington State Women
Summarizes research that examines the incidence and prevalence of sexual assault in Washington, including the characteristics of assault experiences and barriers to reporting.
Amherst H. Wilder Foundation
Conducts program evaluations and other research projects.
CDC Evaluation Working Group
Promotes evaluation practices at the Centers for Disease Control and Prevention and throughout the health care system. The Web site links to online publications, evaluation manuals, logic model resources, and planning and performance improvement tools.
Center for Program Evaluation and Performance Measurement (Bureau of Justice Assistance)
Assists users in conducting evaluations and in using the results to improve their programs. The Web site links to various resources (e.g., news, a guide to program evaluation, reference materials) and information about different program areas (e.g., adjudication, law enforcement).
Helps agencies improve their service provision and management systems and promote organizational learning. The Web site links to planning tools, a bibliography, online consulting services, and other resources.
VAWA Measuring Effectiveness Initiative
Measures the effectiveness of Violence Against Women Act grants administered by the Office on Violence Against Women.
Community Organizational Assessment Tool
Helps users evaluate how board members, organizations, and communities are functioning. Can be adapted by SARTs.
In This Toolkit: Evaluation Tools
Includes links to several evaluation tools used by different SARTs throughout the Nation.
In This Toolkit: Intake and Outcome-Based Form (Word)
Includes a victim survey at the end of the document.
Police Response to Rape and Sexual Assault
Covers the initial police response, victim interviews, and investigation followups.
RAND Health: Surveys and Tools
Features surveys and assessment tools on various health care categories such as aging, diversity, HIV/STD, mental health, public health, and quality of care. The surveys can be used and adapted to meet SART objectives.
Wilder Collaboration Factors Inventory
Assesses the factors that influence successful collaboration.
Wyoming Victim Satisfaction Survey
Helps users evaluate victims’ satisfaction with their treatment in the Wyoming justice system.
Common Myths Regarding Outcome Measures
Reviews evaluation myths that could impede the development of evaluation plans.
Program Development and Evaluation (University of Wisconsin-Extension)
Links to a series of evaluation publications that describe how to plan evaluations, design questionnaires, collect and analyze data, and report evaluation results.
Sexual Violence (Centers for Disease Control and Prevention)
Links to fact sheets, statistics, evaluation publications, and prevention education materials.
1 Adapted from Donna Greco, 2006, Evaluating Prevention Programs: A Technical Assistance Guide for Sexual Violence Advocates, Enola, PA: Pennsylvania Coalition Against Rape. Used with permission.
2 Nicole Allen and Leslie Hagen, 2003, A Practical Guide to Evaluating Domestic Violence Coordinating Councils, Harrisburg, PA: National Resource Center on Domestic Violence, 57.
3 Division for Oversight Services, 2004, "Tool Number 2: Defining Evaluation," Programme Manager’s Planning, Monitoring and Evaluation Toolkit, New York, NY: United Nations Population Fund, Division for Oversight Services, 2.
5 Office for Victims of Crime Training and Technical Assistance Center, n.d., "Section 6: Evaluate," OVC-TTAC Strategic Planning Toolkit, Washington, DC: U.S. Department of Justice, Office for Victims of Crime, 6–7.
7 National Center for Child Death Review, 2005, A Program Manual for Child Death Review: Strategies to Better Understand Why Children Die, Okemos, MI: National Center for Child Death Review.
8 Martha Burt, Adele Harrell, Lisa Newmark, Laudan Aron, and Lisa Jacobs, 1997, Evaluation Guidebook for Projects Funded by STOP Formula Grants under the Violence Against Women Act, Washington, DC: Urban Institute.
9 Allen and Hagen, A Practical Guide to Evaluating Domestic Violence Coordinating Councils, 21.
10 Allen and Hagen, A Practical Guide to Evaluating Domestic Violence Coordinating Councils, 9.
11 Performance Results, Inc., n.d., Outcome-Based Evaluation: A Training Toolkit for Programs of Faith, Gaithersburg, MD: Performance Results, Inc., 5.
12 Adapted from Greco, Evaluating Prevention Programs: A Technical Assistance Guide for Sexual Violence Advocates.
13 Courtney Ahrens, Gloria Aponte, Rebecca Campbell, William Davidson, Heather Dorey, Lori Grubstein, Monika Naegeli, and Sharon Wasco, 1998, Introduction to Evaluation Training and Practice for Sexual Assault Service Delivery, Okemos, MI: Michigan Public Health Institute and Chicago, IL: University of Illinois at Chicago.
14 Adapted from Greco, Evaluating Prevention Programs: A Technical Assistance Guide for Sexual Violence Advocates.