SART TOOLKIT: Resources for Sexual Assault Response Teams

Monitor and Evaluate Your Efforts

Monitoring and evaluation enable you to examine activities and outcomes, engage stakeholders, expand knowledge, and improve your SART’s ability to meet victims’ needs and criminal justice objectives. In addition, demonstrating program effectiveness can be an invaluable tool for obtaining buy-in from potential new partners who may have initially declined to participate on your SART.1 Rather than thinking of monitoring and evaluation as being done to the team, consider them a function of the team.

There is no single, "best" approach that can be used in all situations. It is important to decide why you are monitoring and evaluating your SART, the questions you want to answer, and the methods of collecting and analyzing data that will provide useful and reliable information. It also is important to start planning for your evaluation early; planning during the SART development process can help you establish relevant goals and objectives that are measurable.

When monitoring and evaluating your SART’s efforts, consider the questions, sample surveys, and data collection approaches described throughout this section.2

This section reviews—

  • Monitoring Versus Evaluating
  • Types of Evaluation
  • Data Collection
  • Process Evaluation
  • Outcome Evaluation
  • Impact Evaluation
  • Overcoming the Costs of Evaluation
  • Resources

Monitoring Versus Evaluating

Monitoring and evaluation tell you what works, what doesn’t work, and why. The process can help you increase efficiency, improve communication, and measure the benefits of your SART for victims, service providers, funders, policymakers, and communities.

Monitoring and evaluation are intimately connected, but they have separate functions. Doing both can lay a foundation that will move your SART beyond good intentions to an ongoing system that meets diverse and ever-changing needs and conditions. For example, the process can show whether current policies, protocols, and responses are working as intended and why.3

Although monitoring and evaluation are closely linked, "evaluation is not a substitute for monitoring nor is monitoring a substitute for evaluation."4 Monitoring tracks your performance against what was planned or expected, while evaluation investigates whether your responses work and why (see Key Terms under Resources).

The Evaluation Process
  1. Clarify your goals and objectives.
  2. Understand the different types of evaluation.
  3. Select your evaluation tools (e.g., surveys, focus groups).
  4. Create data collection plans.
  5. Develop processes for managing the data you collect.
  6. Identify resources and methods for analyzing and interpreting the data.
  7. Create a plan to report findings.
  8. Implement improvements based on evaluation findings.

Source: Courtney Ahrens, Gloria Aponte, Rebecca Campbell, William Davidson, Heather Dorey, Lori Grubstein, Monika Naegeli, and Sharon Wasco, Introduction to Evaluation Training and Practice for Sexual Assault Service Delivery, Michigan Public Health Institute and the University of Illinois at Chicago, 1998.

Types of Evaluation

When SART agencies collaborate, "they engage in a range of activities. Some of these activities are internally focused and some are externally focused."9 Internally focused evaluations, therefore, would involve the inner workings of the team, such as the decisionmaking process, team membership, or leadership. Externally focused evaluations would assess factors that affect stakeholders beyond the team itself, such as the frequency of sexual assault reporting, victims’ experiences with medical and legal professionals and advocates on your team, the number of community referrals, how your SART affects victims’ recoveries, and prosecutorial outcomes.

Evaluating how your team functions and evaluating whether it is ultimately meeting its goals are both important. This section reviews these two types of evaluation (process evaluation and outcome evaluation), along with a third, impact evaluation, which covers the overall impact of your SART.

Data Collection

There are no hard-and-fast rules about when to collect data, but timing can be very important. If you want to examine change over time, ideally you would collect initial (baseline) data during the planning stages and then again at various points once the SART is established. Always leave enough time between the first data collection and those that follow so that desired changes have a chance to occur; not doing so could lead you to erroneously conclude that an activity or response is ineffective.
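For instance, here is a minimal Python sketch, with entirely hypothetical metrics and values, of how baseline and follow-up data might be compared once both have been collected:

```python
# Minimal sketch: comparing baseline data with later follow-up data.
# The metric names and numbers below are hypothetical illustrations,
# not figures from any actual SART.

baseline = {  # collected during the planning stage
    "victims_served_per_quarter": 18,
    "cases_referred_for_prosecution": 4,
}
followup = {  # collected after the SART has had time to take effect
    "victims_served_per_quarter": 27,
    "cases_referred_for_prosecution": 7,
}

for metric, before in baseline.items():
    after = followup[metric]
    change = after - before
    pct = (change / before * 100) if before else float("nan")
    print(f"{metric}: {before} -> {after} ({pct:+.0f}%)")
```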

Examples of Data Collected for Evaluations

  • Victim characteristics: age, race, socioeconomic status, gender, and level of involvement with the criminal justice system
  • SART services: number of victims served and types of services provided
  • Team information: SART membership, frequency of team meetings, and whether case reviews are performed
  • SART outcomes: victim satisfaction with services, cases investigated and prosecuted, and victim referrals
  • SART impact: changes in policies or resources
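As a hypothetical illustration (not a prescribed schema), the following Python sketch shows one way a SART might structure a case record around these categories so that data are collected consistently:

```python
# Minimal sketch of one way to structure an evaluation record so the
# categories above are collected consistently across cases. Field names
# are hypothetical, and demographic data should be handled according to
# your confidentiality policies (aggregate reporting only).
from dataclasses import dataclass, field

@dataclass
class EvaluationRecord:
    # Victim characteristics (never individually identifying)
    age_range: str = ""
    race: str = ""
    # SART services
    services_provided: list = field(default_factory=list)
    # SART outcomes
    case_investigated: bool = False
    case_prosecuted: bool = False
    referrals_made: int = 0
    # SART impact (policy/resource changes) is tracked at the
    # program level, not per record.

record = EvaluationRecord(
    age_range="25-34",
    services_provided=["advocacy", "medical forensic exam"],
    referrals_made=2,
)
print(record)
```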

For more information, see—

  • In This Toolkit: Collect Data, which describes methods of collecting data in more detail.
  • SANE Program National Database, which collects national victim data to identify SANE programs’ strengths and weaknesses, improve their evidence collection, and enhance prosecution rates in future cases.

Process Evaluation

A process evaluation examines how your team functions. Use it to help you assess the quality of your team’s—

  • interagency working environment,
  • infrastructure, and
  • activities.

Interagency Working Environment

Research suggests that the internal working climate that [teams] foster is an important part of their success. . . . [Teams] handling conflict effectively are more likely to generate creative solutions, encourage needed changes, and avoid ‘groupthink.’10

Evaluating your team’s working environment can include assessing whether your team has a shared interagency mission, the effectiveness of its decisionmaking process, and the ways that it manages conflict. A process evaluation would provide important information about where changes are needed to improve the team’s functioning.

Consider asking team members the following questions when evaluating your team’s working environment:

To what degree has the team developed a shared mission?

___
The mission is supported by all team members.
___
Team members have a shared vision regarding needed changes in the community’s response to sexual violence.
___
Team members have a shared understanding of sexual violence.
___
Team members are working together to achieve a common goal.

To what degree do members feel their input informs team decisionmaking?

___
The input of all active SART members influences team decisions.
___
When making decisions, the team is respectful of all viewpoints.
___
The team does not move forward until all ideas are considered.
___
If a team member disagrees with the group, his or her perspective is valued.

To what degree are disagreements handled effectively?

___
Disagreements among team members are often resolved by compromise.
___
Conflict among team members leads to effective problem solving.
___
When conflict arises, the team ignores it.
___
Conflict has created opportunities for open discussion among members.
___
Disagreements typically stifle the progress of the team.
___
When faced with conflict, team members agree to disagree.
___
The team handles conflict by attempting to get to the root of the problem.
___
Conflicting opinions among team members have led to needed changes.
___
The team has avoided addressing diverse viewpoints on the team.

When surveying team members, make sure to maximize confidentiality so that all members feel free to candidly share their beliefs about how the team functions.
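As a hypothetical illustration, the following Python sketch shows one way to summarize anonymous 1-to-5 ratings so that only item-level averages, never individual answers, are reported (item wording is abbreviated):

```python
# Minimal sketch: summarizing anonymous 1-5 survey ratings per item.
# Responses are collected without names, so no rating can be traced
# back to an individual member. All data below are hypothetical.
from statistics import mean

# Each list holds one survey item's ratings across all respondents.
responses = {
    "Mission supported by all members":     [5, 4, 5, 3, 4],
    "Input influences team decisions":      [4, 4, 3, 5, 4],
    "Disagreements resolved at root cause": [3, 2, 4, 3, 3],
}

for item, ratings in responses.items():
    print(f"{item}: mean {mean(ratings):.1f} (n={len(ratings)})")
```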

The Root Cause

Disagreements can be handled in many different ways. In general, getting to the root of the problem is thought to be the most effective strategy. Compromising, agreeing to disagree, and ignoring conflict are commonly employed by teams, but these approaches are not always effective because they often leave root causes unresolved.

Infrastructure

Infrastructure can refer to how your team is organized, who your team members are, and how effective your team leadership is. Assessing infrastructure is important because there is a positive correlation between organizational capacity and team effectiveness.

Determine whether the team is well organized by asking team members the following questions: Does the team have—

___
Written agendas for team meetings?
___
Agendas distributed prior to team meetings?
___
Written goals and objectives?
___
Regular meetings?
___
Written interagency role descriptions?
___
Subcommittees or workgroups (e.g., evaluation subcommittee)?
___
Established methods for decisionmaking?
___
Established methods for problem-solving and conflict resolution?
___
Established monitoring and evaluation methods?
___
Training opportunities for new and old members?
___
Methods for new member orientation?
___
Methods for including victims’ viewpoints?

Determine the adequacy and involvement of the team’s membership by asking members to rate the level of participation by each agency and the level of training and expertise among SART members.

The following table is one example of this type of evaluation.

Agency | Discipline | Participation Level (1–5) | Areas of Training/Experience (examples)
Rape crisis center | Advocacy | ___ (1 = inactive, 5 = very active) | Sexual assault trauma and healing, advocacy, criminal justice, civil justice, health care, court monitoring, immigration, bilingual fluency, evaluation, community education, disabilities, care of older adults, substance abuse, community organizing, writing, and so forth
Police department, sheriff’s office, dispatch | Law enforcement | ___ | ___
Prosecuting attorney’s office | Prosecuting attorney | ___ | ___
Hospital, forensic examiner organization, EMS | Health care | ___ | ___

Determine whether leadership is effective by asking team members to rate the following statements about the team’s leadership. The leadership—

___
Is committed to the SART’s mission.
___
Has appropriate time to devote to administering the SART.
___
Plans activities effectively and efficiently.
___
Is knowledgeable in the area of sexual violence.
___
Is flexible in accepting different viewpoints.
___
Promotes equality and collaboration among members.
___
Is skilled in organizational management.
___
Works within influential political and community networks.
___
Is competent in negotiating, solving problems, and resolving conflict.
___
Is attentive to individual members’ concerns.
___
Is effective in managing meetings.
___
Is skilled in obtaining resources.
___
Values members’ input.
___
Recognizes members for their contributions.

Activities

The Right Tool

In This Toolkit:

Collaboration Factors Inventory
Asks survey questions to help assess factors that influence successful collaboration.


In conducting a process evaluation of your SART’s activities, you will be asking team members to gauge their satisfaction with the team’s activities. For example, you may want to ask your SART members the following questions:

To what degree have team efforts increased collaboration and communication? [Rate responses from 1 to 5.]

___
Increased the ability to coordinate across disciplines.
___
Increased knowledge about community organizations.
___
Increased respect for multidisciplinary responders.
___
Increased knowledge about sexual assault responders’ roles and limitations.
___
Increased ability to adapt to emerging issues (e.g., funding constraints, staff transitions).
___
Increased ability to work more efficiently.
___
Increased team’s willingness to act on recommendations.

To what degree have team efforts increased the exchange of information, resources, and client referrals?

To help answer this question, you could create a survey that records the number of times per month that team members communicate across agencies. For example, team members might record and report how many times per month they have contacted interdisciplinary agencies, referred victims, or received referrals (see sample survey below).

Name of Agency (completing form): _______________________________

Agency Name | Number of Times I Exchanged Information | Number of Referrals I Gave | Number of Referrals I Received
Rape crisis center | 3 | 5 | 1
SANE program | 5 | 1 | 3
Faith-based organization | 1 | 1 | 0
Law enforcement | 1 | 3 | 1
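If members submit such a form each month, the tallies can be rolled up with a short script. The Python sketch below is a hypothetical illustration that mirrors the sample survey’s columns and totals exchanges and referrals by agency:

```python
# Minimal sketch: rolling up monthly referral-exchange forms.
# Each tuple mirrors the sample survey's columns; all numbers are
# hypothetical, for illustration only.
from collections import defaultdict

# (agency, info_exchanges, referrals_given, referrals_received)
forms = [
    ("Rape crisis center", 3, 5, 1),
    ("SANE program",       5, 1, 3),
    ("Rape crisis center", 2, 4, 0),   # a second month's form
    ("Law enforcement",    1, 3, 1),
]

totals = defaultdict(lambda: [0, 0, 0])
for agency, info, gave, received in forms:
    totals[agency][0] += info
    totals[agency][1] += gave
    totals[agency][2] += received

for agency, (info, gave, received) in sorted(totals.items()):
    print(f"{agency}: exchanges={info}, gave={gave}, received={received}")
```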

To what degree have team efforts increased service providers’ knowledge of community resources?

You may want to develop a process survey to determine team members’ understanding of available community services. Consider the following services:

___
Emergency shelter
___
Counseling
___
Housing
___
Social support
___
Food
___
Transportation
___
Clothing
___
Medical assistance
___
Material goods
___
Order of protection
___
Financial assistance
___
Culturally specific organizations
___
Legal aid/civil legal assistance
___
Childcare
___
Other: ______________________

Outcome Evaluation

[Teams] that establish clear outcomes from the beginning are more likely to construct a way to have those results achieved. [Teams] that don’t establish outcomes from the beginning are left saying ‘what happened?’11

Outcome evaluations look specifically at whether SARTs achieve their goals and whether their activities have had the intended effect on victims and service providers. If you don’t have an appropriate survey tool to measure outcomes, consider developing a new tool tailored to your SART’s goals and objectives.12

Impact Evaluation

Impact evaluations assess long-term intended and unintended outcomes. The basic question that impact evaluations seek to answer is, "What actual results did the outcome produce?"13 For example, your objective may be to expand and enhance community partnerships to support victims, regardless of which agency or organization they initially contact. The outcome from expanded partnerships may be that victims feel more supported and have more of their needs met. The impact of that outcome could be that victims are more willing to work with criminal justice providers over an extended time.

You may also find yourself left with some unintended outcomes. For example, if you’ve increased the number of cases investigated, you may now see a backlog in the processing of medical forensic examination kits. At first glance, the outcome is positive (more investigations and prosecutions), but it can have a ripple effect on limited resources. Even a negative impact (e.g., strained resources) can prove invaluable if you use the results to revise policies or to inform policymakers and funders about evolving needs.
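To anticipate this kind of ripple effect, you can do some simple arithmetic. The Python sketch below, using entirely made-up rates, projects how a kit backlog grows whenever monthly submissions exceed the lab’s processing capacity:

```python
# Minimal sketch: projecting a forensic-kit backlog when submissions
# outpace lab capacity. All rates below are hypothetical.

kits_submitted_per_month = 40   # rises as more cases are investigated
lab_capacity_per_month = 30     # kits the lab can process each month

backlog = 0
for month in range(1, 7):
    backlog += kits_submitted_per_month - lab_capacity_per_month
    backlog = max(backlog, 0)   # a backlog can never be negative
    print(f"Month {month}: backlog of {backlog} unprocessed kits")
```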

Consider asking team members to rate the following statements regarding the SART’s impact:

___
The SART has addressed shortcomings in practices regarding the response to victims of sexual violence.
___
The SART has addressed shortcomings in practices related to the criminal justice response to sexual assault.
___
The SART has influenced policies of allied agencies regarding their responses to sexual violence.
___
The SART has influenced changes in practice to increase offender accountability.
___
The SART has created changes in practice to increase victim healing and satisfaction.
___
The SART has promoted public awareness of sexual assault trauma, healing, and safety.

Overcoming the Costs of Evaluation

Evaluation can be expensive, but there are steps you can take to keep costs down.14 For example, many of the guides, publications, and sample forms listed under Resources below are free and can be adapted for SART use.

Resources

Key Terms

Activities: What you do to fulfill your mission (e.g., service delivery).

Effectiveness: A measure of your ability to produce a specific desired effect or result that can be qualitatively measured.

Evaluation: The systematic investigation of the merit or significance of your SART. Evaluations answer specific questions about whether current policies, guidelines, protocols, and responses work or do not work and provide the reasons why.

Goals: Future organizational and programmatic directions for your SART.

Impact: Positive and negative long-term effects of your SART’s responses, whether intended or unintended.

Indicator: A quantitative or qualitative measurement that is used to demonstrate whether your SART’s goals and objectives have been achieved. Indicators help to answer key questions, such as where you are currently, where you want to go, and whether you are taking the right path to meet your goals.

Inputs: Any resources dedicated to the SART. Examples are money, staff and staff time, volunteers and volunteer time, facilities, equipment, and supplies.

Monitoring: Tracks your performance against what was planned or expected according to your SART’s policies, protocols, and guidelines.

Objectives: Clear, realistic, specific, measurable, and time-limited statements of action that can move you toward achieving your goals.

Outcome Evaluation: A comprehensive examination of components and strategies intended to achieve a specific outcome. An outcome evaluation gauges the extent of success in achieving the outcome, assesses the underlying reasons for achievement or non-achievement, validates the contributions of SART agencies and allied professionals, and identifies key lessons learned and recommendations to improve performance.

Outcomes: Benefits for victims and the criminal justice system based on coordinated service delivery. For example, victims may be more willing to assist with the investigation and prosecution of their cases because their practical, emotional, psychological, social, and economic needs are prioritized.

Outputs: SART products and services. For example, the numbers of community referrals, medical forensic exams, and cases investigated and prosecuted. They are important because they should lead to a desired benefit for participants or target populations.

Performance Measurement: A system that analyzes, interprets, and reports on the achievement of outcomes.

Process Evaluation: Examines whether your SART is operating as intended. A process evaluation helps identify which changes are needed in design, strategies, and operations to improve performance.

Program Outcomes: The benefits of your services to victims, service providers, and the community.

Program Services (also known as activities or outputs): The services that you provide.

Qualitative Evaluation: Describes and interprets how well your SART is working, such as your SART’s relevance, quality of resources (including literacy materials), efficiency of policies and activities, and cost in relation to what has been achieved.

Quantitative Evaluation: Documents how much your SART has accomplished, such as the numbers of individuals served and materials produced, amount of outreach to underserved populations, number of community referrals, number of cases prosecuted, and number of sexual assault reports or prosecutions.

Evaluation Guides

Air University Sampling and Surveying Handbook—Guidelines for Planning, Organizing, and Conducting Surveys
Covers types of surveys, their advantages and disadvantages, survey development, and analysis of data collection plans and surveys.

Basic Guide to Outcomes-Based Evaluation for Nonprofit Organizations with Very Limited Resources
Includes basic outcomes-based planning steps.

Community Tool Box, Section 5. Our Evaluation Model: Evaluating Comprehensive Community Initiatives
Provides information on the what, why, when, who, and how of evaluations; situational examples, tools and checklists; and overheads that summarize major evaluation points.

Evaluation Guidebook for Projects Funded by S.T.O.P. Formula Grants Under the Violence Against Women Act
Addresses reasons to participate in evaluations, explains logic models, and discusses ways to use evaluation results to improve a program’s functioning, performance, and promotion.

Evaluation Plan Workbook
Introduces the concepts and processes of planning a program evaluation.

Evaluation Toolkit
Helps users develop customized evaluation plans from the ground up.

Getting the Most from Evaluation
Describes how one organization leveraged its evaluation findings in various innovative ways.

Getting To Outcomes 2004: Promoting Accountability Through Methods and Tools for Planning, Implementation, and Evaluation
Provides information on planning, implementing, and evaluating programs. Worksheets to help with action steps are included.

Including Evaluation in Outreach Project Planning
Includes information on developing outcomes-based projects and assessment plans, describes how to use a logic model at different stages of development, and provides sample data resources and evaluation methods.

Innovation Network
Provides program planning and evaluation consulting, training, and Web-based tools to nonprofits and funders.

Key Steps in Outcome Management
Reviews the outcome measurement process, identifying specific steps and providing suggestions for examining and using the information.

Performance Measures for Prosecutors: Findings from the Application of Performance Measures in Two Prosecutors’ Offices
Describes a project that developed a performance measurement framework for prosecutors, which identifies measurable goals and objectives linked to possible performance measures.

Planning a Program Evaluation
Includes chapters on creating an evaluation, collecting and using data, and managing evaluations.

Planning a Program Evaluation: Worksheet
Provides a chart to document stakeholders, the focus of evaluations, data collection methods, data analysis and interpretation, and other information.

The Program Manager’s Guide to Evaluation
Describes the reasons to evaluate, who should conduct evaluations, how to hire and manage an outside evaluator, what needs to be included in an evaluation plan, how to get needed information, assessing evaluation information, and reporting evaluation findings.

The Programme Manager’s Planning, Monitoring and Evaluation Toolkit
Provides information on the purposes of evaluation, stakeholder participation with monitoring, a six-part section on planning and evaluation, and ways to identify program indicators.

Six Keys to Successful Organizational Assessment
Describes the importance of having uniquely tailored organizational assessments to meet program goals.

Taking Stock: A Practical Guide to Evaluating Your Own Programs
Provides guidance on designing and implementing evaluations.

A UNICEF Guide for Monitoring and Evaluation
Explains and contrasts the monitoring and evaluation processes.

Evaluation Publications

Analyzing Outcome Information: Getting the Most from Data
Covers the necessary steps for outcome management and includes guidance on establishing an outcome-oriented measurement process.

Analyzing Qualitative Data
Discusses the analysis process including patterns and connections between topics, process enhancements, and pitfalls.

Collecting and Analyzing Evaluation Data
Provides steps for designing quantitative and qualitative data collection and analysis methods.

Collecting Evaluation Data: Surveys
Describes when surveys are appropriate and how to choose survey methods, plan and implement a survey, get good responses, and interpret survey results. The appendixes include sample mail and telephone surveys and press releases.

Ethical Guidelines for Research with DAWN
Addresses research involving individuals with disabilities.

Evaluating Collaboratives: Reaching the Potential
Reviews the methods and feasibility of collaborative evaluations.

Evaluation Plan Workbook
Offers an introduction to the concepts and processes of planning a program evaluation.

Getting To Outcomes 2004—Promoting Accountability Through Methods and Tools for Planning, Implementation, and Evaluation
Provides information on evaluation assessments, goals and objectives, planning, process and outcome evaluation, quality improvement, and sustainability.

Including Evaluation in Outreach Project Planning
Describes the relationship between planning and evaluation. Chapters include information on developing outcomes-based models and assessment plans and guides. The appendixes include information on logic models, sample resources, evaluation methods, worksheets, and checklists.

Managing for Results Guidebook: A Logical Approach for Program Design, Outcome Measurement and Process Evaluation
Describes how STOP, VOCA, and Family Violence subgrantees in Tennessee can meet reporting requirements.

National Victim Assistance Academy Textbook, Chapter 17: Research and Evaluation
Covers how to obtain information about research findings, basic research terms, fundamental research and evaluation methods, and how to acquire information and technical assistance for conducting research.

Outcome Measures for Sexual Assault Services in Texas
Provides Texas-specific outcome measures to support sexual assault service providers’ evaluation needs and practices.

Outcome-Based Evaluation: A Training Toolkit for Programs of Faith
Explains the key concepts and terms used in outcome evaluation and helps users clarify the purpose of their programs.

A Program Manual for Child Death Review: Strategies to Better Understand Why Children Die, Chapter 13: CDR Program Evaluation
Discusses types and methods of evaluation and the steps involved.

Questionnaire Design: Asking Questions with a Purpose
Describes how to develop questionnaires.

Sexual Violence Surveillance: Uniform Definitions and Recommended Data Elements
Provides recommendations for standardizing definitions and data elements for sexual violence surveillance.

Strategic Planning Toolkit, Section 6: Evaluate
Outlines the reasons for evaluation, evaluation strategies, evaluation plans, data collection, and how to report results.

Using Technology to Enable Collaboration
Describes how to develop and maintain technology-based solutions to serve victims of crime.

Evaluation Reports

County of San Diego Sexual Assault Response Team: Systems Review Committee Report, Five Year Review
Includes demographic information and data on examination outcomes.

Descriptive Analysis of Sexual Assault Nurse Examinations in Bethel: 2005–2006
Examines the characteristics of 105 sexual assault victimizations recorded by sexual assault nurse examiners in Bethel, Alaska. It documents the demographic characteristics of patients, pre-assault characteristics, assault characteristics, post-assault characteristics, exam characteristics and findings, suspect characteristics, and legal resolutions.

The Efficacy of Illinois’ Sexual Assault Nurse Examiner (SANE) Pilot Program
Evaluates the SANE pilot program’s impact on victims of sexual violence. The research findings indicated that the program improved community responses to sexual assault victims and post-assault evidence collection and processing, although it is unclear whether the program resulted in higher prosecution rates.

Evaluating Data Collection and Communication System Projects Funded Under the STOP Program, Executive Summary
Reports the findings of a study that examined the uses of STOP grant funds intended to improve data collection and/or communication systems to address violence against women.

An Evaluation of the Rhode Island Sexual Assault Response Team (SART)
Presents evaluation results regarding the legal effects of receiving services from the Rhode Island SART.

An Evaluation of Victim Advocacy Within a Team Approach: Final Report Summary
Summarizes a study that evaluated victim advocacy services offered to battered women in Detroit and also examined other aspects of coordinated community responses to intimate partner violence.

How SAFE is New York City? Sexual Assault Services in Emergency Departments
Examines services available for rape victims in New York City emergency departments and analyzes hospital adherence to medical care protocol, forensic evidence collection, advocacy, followup care, and quality assurance.

Impact Evaluation of a Sexual Assault Nurse Examiner (SANE) Program
Compares post-rape experiences of victims and services at the University of New Mexico Health Sciences Center before the inception of the SANE program to those received at the Albuquerque SANE Collective after the program’s creation. Among the findings: rape victims treated by SANE nurses received more comprehensive medical attention, had forensic evidence gathered more frequently, and received more referrals than victims who sought medical attention before the SANE program began.

Improving Services to Victims of Sexual Assault: An Evaluation of Six Minnesota Model Protocol Development Test Sites
Presents the findings of an evaluation of the Model Protocol Project’s implementation in six sites in Minnesota; the project was developed to bring multidisciplinary agencies together to help victims of sexual assault.

The Police Interaction Research Project: A Study of the Interactions that Occur Between the Police and Survivors Following a Sexual Assault
Explores what happens between sexual assault survivors and the police officers and detectives of the New York City Police Department.

Process Evaluation of the Kansas City/Jackson County Response to Sexual Assault
Assesses information on and includes recommendations for evidence collection, DNA analysis, multidisciplinary sexual assault responders, medical facilities, the SART and the SANE response, training, and protocol development.

Reporting Sexual Assault to the Police in Hawaii
Presents findings from a study that looked for variables that facilitated or hindered reporting of sexual assault to the police in Hawaii.

A Room of Our Own: Sexual Assault Survivors Evaluate Services
Examines rape survivors’ perspectives on sexual assault services provided by New York City hospitals, rape crisis/victim assistance programs, law enforcement, and the criminal justice system.

Sexual Assault Experiences and Perceptions of Community Response to Sexual Assault: A Survey of Washington State Women
Summarizes research that examines the incidence and prevalence of sexual assault in Washington, including the characteristics of assault experiences and barriers to reporting.

Organizations

Amherst H. Wilder Foundation
Conducts program evaluations and other research projects.

CDC Evaluation Working Group
Promotes evaluation practices at the Centers for Disease Control and Prevention and throughout the health care system. The Web site links to online publications, evaluation manuals, logic model resources, and planning and performance improvement tools.

Center for Program Evaluation and Performance Measurement (Bureau of Justice Assistance)
Assists users in conducting evaluations and in using the results to improve their programs. The Web site links to various resources (e.g., news, a guide to program evaluation, reference materials) and information about different program areas (e.g., adjudication, law enforcement).

Performance Vistas
Helps agencies improve their service provision and management systems and promote organizational learning. The Web site links to planning tools, a bibliography, online consulting services, and other resources.

Urban Institute
Evaluates programs, gathers data, and conducts research, among other services. The Web site includes materials and publications on crime and justice and performance measurement.

VAWA Measuring Effectiveness Initiative
Measures the effectiveness of Violence Against Women Act grants administered by the Office on Violence Against Women.

Sample Forms

Community Organizational Assessment Tool
Helps users evaluate how board members, organizations, and communities are functioning. Can be adapted by SARTs.

In This Toolkit: Evaluation Tools
Includes links to several evaluation tools used by different SARTs throughout the Nation.

In This Toolkit: Intake and Outcome-Based Form (Word)
Includes a victim survey at the end of the document.

Police Response to Rape and Sexual Assault
Covers the initial police response, victim interviews, and investigation followups.

RAND Health: Surveys and Tools
Features surveys and assessment tools on various health care categories such as aging, diversity, HIV/STD, mental health, public health, and quality of care. The surveys can be used and adapted to meet SART objectives.

Wilder Collaboration Factors Inventory
Assesses the factors that influence successful collaboration.

Wyoming Victim Satisfaction Survey
Helps users evaluate victims’ satisfaction with their treatment in the Wyoming justice system.

Web Sites

Common Myths Regarding Outcome Measures
Reviews evaluation myths that could impede the development of evaluation plans.

Program Development and Evaluation (University of Wisconsin-Extension)
Links to a series of evaluation publications that describe how to plan evaluations, design questionnaires, collect and analyze data, and report evaluation results.

Sexual Violence (Centers for Disease Control and Prevention)
Links to fact sheets, statistics, evaluation publications, and prevention education materials.

Notes

1 Adapted from Donna Greco, 2006, Evaluating Prevention Programs: A Technical Assistance Guide for Sexual Violence Advocates, Enola, PA: Pennsylvania Coalition Against Rape. Used with permission.

2 Nicole Allen and Leslie Hagen, 2003, A Practical Guide to Evaluating Domestic Violence Coordinating Councils, Harrisburg, PA: National Resource Center on Domestic Violence, 57.

3 Division for Oversight Services, 2004, "Tool Number 2: Defining Evaluation," Programme Manager’s Planning, Monitoring and Evaluation Toolkit, New York, NY: United Nations Population Fund, Division for Oversight Services, 2.

4 Ibid.

5 Office for Victims of Crime Training and Technical Assistance Center, n.d., "Section 6: Evaluate," OVC-TTAC Strategic Planning Toolkit, Washington, DC: U.S. Department of Justice, Office for Victims of Crime, 6–7.

6 Division for Oversight Services, "Tool Number 2: Defining Evaluation," 3.

7 National Center for Child Death Review, 2005, A Program Manual for Child Death Review: Strategies to Better Understand Why Children Die, Okemos, MI: National Center for Child Death Review.

8 Martha Burt, Adele Harrell, Lisa Newmark, Laudan Aron, and Lisa Jacobs, 1997, Evaluation Guidebook for Projects Funded by STOP Formula Grants under the Violence Against Women Act, Washington, DC: Urban Institute.

9 Allen and Hagen, A Practical Guide to Evaluating Domestic Violence Coordinating Councils, 21.

10 Allen and Hagen, A Practical Guide to Evaluating Domestic Violence Coordinating Councils, 9.

11 Performance Results, Inc., n.d., Outcome-Based Evaluation: A Training Toolkit for Programs of Faith, Gaithersburg, MD: Performance Results, Inc., 5.

12 Adapted from Greco, Evaluating Prevention Programs: A Technical Assistance Guide for Sexual Violence Advocates.

13 Courtney Ahrens, Gloria Aponte, Rebecca Campbell, William Davidson, Heather Dorey, Lori Grubstein, Monika Naegeli, and Sharon Wasco, 1998, Introduction to Evaluation Training and Practice for Sexual Assault Service Delivery, Okemos, MI: Michigan Public Health Institute and Chicago, IL: University of Illinois at Chicago.

14 Adapted from Greco, Evaluating Prevention Programs: A Technical Assistance Guide for Sexual Violence Advocates.
