CHAPTER 16

Evaluating Impact of Volunteer Programs

R. Dale Safrit, EdD
North Carolina State University

This chapter introduces and defines the closely related concepts of evaluation, impact, and accountability, especially as applied to volunteer programs. The author discusses four fundamental questions that guide the development and implementation of an impact evaluation and subsequent accountability of a volunteer program.

Evaluation in Volunteer Programs

The concept of evaluation as applied to volunteer programs is not new. As early as 1968, Creech suggested a set of criteria for evaluating a volunteer program and concluded, “Evaluation, then, includes listening to our critics, to the people around us, to experts, to scientists, to volunteers so that we may get the whole truth [about our programs]” (p. 2). This approach to evaluation was well ahead of its time: until the past decade, authors within our profession either addressed the evaluation of holistic volunteer programs only superficially (e.g., Brudney, 1999; Naylor, 1976; O’Connell, 1976; Stenzel & Feeney, 1968; Wilson, 1979) or not at all (e.g., Naylor, 1973; Wilson, 1981). Even in the first edition of this text, fewer than four total pages were dedicated to the topic of evaluation, within chapters dedicated to other traditional volunteer program management topics, including recruiting and retaining volunteers (Bradner, 1995), training volunteers (Lulewicz, 1995), supervising volunteers (Brudney, 1995; Stepputat, 1995), improving paid staff and volunteer relations (Macduff, 1995), monitoring the operations of employee volunteer programs (Seel, 1995), involving board members (Graff, 1995), and determining a volunteer program’s success (Stepputat, 1995).

However, for volunteer programs operating in contemporary society, evaluation is a critical, if not the most critical, component of managing an overall volunteer program and subsequently documenting the impacts and ultimate value of the program to the target clientele it is designed to serve, as well as the larger society in which it operates. As early as 1982, Austin et al. concluded that “Only through evaluation can [nonprofit] agencies make their programs credible to funding agencies and government authorities” (p. 10). In 1994, Korngold and Voudouris suggested the evaluation of impact on the larger community as one phase of evaluating an employee volunteer program.

The critical role of volunteer program impact evaluation in holistic volunteer management became very apparent during the final decade of the twentieth century, and continues today (Council for Certification in Volunteer Administration, 2008; Merrill & Safrit, 2000; Safrit & Schmiesing, 2005; Safrit, Schmiesing, Gliem, & Gliem, 2005). While most volunteer managers understand and believe in evaluation, they most often have focused their efforts on evaluating the performance of individual volunteers and their contributions to the total program and/or organization. In this sense, evaluation has served an important managerial function in human resource development, the results of which are usually known only to the volunteer and volunteer manager. As Morley, Vinson, and Hatry (2001) noted:

Nonprofit organizations are more often familiar with monitoring and reporting such information as: the number of clients served; the quantity of services, programs, or activities provided; the number of volunteers or volunteer hours contributed; and the amount of donations received. These are important data, but they do not help nonprofit managers or constituents understand how well they are helping their clients. (p. 5)

However, as nonprofit organizations began to face simultaneous situations of stagnant or decreasing public funding and increasing demand for stronger accountability of how limited funds were being used, volunteer program impact evaluation moved from a human resource management context to an organizational development and survival context. The volunteer administration profession began to recognize the shifting attitudes toward evaluation, and in the early 1980s the former Association for Volunteer Administration (AVA) defined a new competency fundamental to the profession as “the ability to monitor and evaluate total program results . . . [and] demonstrate the ability to document program results” (as cited in Fisher & Cole, 1993, pp. 187–188). Administrators and managers of volunteer-based programs were increasingly called on to measure, document, and dollarize the impact of their programs on clientele served, and not just the performance of individual volunteers and the activities they contribute (Safrit & Schmiesing, 2002; Safrit, Schmiesing, King, Villard, & Wells, 2003; Schmiesing & Safrit, 2007). This intensive demand for greater accountability initially arose from program funders (public and private) but quickly escalated to include government, the taxpaying public, and even the volunteers themselves. As early as 1993, Taylor and Sumariwalla noted:

Increasing competition for tax as well as contributed dollars and scarce resources prompt donors and funders to ask once again: What good did the donation produce? What difference did the foundation grant or United Way allocation make in the lives of those affected by the service funded? (p. 95)

According to Safrit (2010, p. 316), “The pressure on nonprofit organizations to evaluate the impact of volunteer-based programs has not abated during the first decade of the new [21st] century, and if anything has grown stronger.” With regard to overall volunteer management, evaluation continues to play an important role in the human resource management of individual volunteers; most volunteer managers are very familiar and comfortable with this aspect of evaluation in volunteer programs. However, today’s volunteer managers are less knowledgeable, skilled, and comfortable with the concept of impact evaluation as only the first (if important) step in measuring, documenting, and communicating the effects of a volunteer program, immediately on the target clientele served by the organization’s volunteers and ultimately on the surrounding community.

A Symbiotic Relationship: Evaluation, Impact, and Accountability

In the overwhelming majority of both nonformal workshops and formal courses I have taught, participants will inevitably use three terms almost interchangeably in our discussions of evaluating volunteer programs. The three concepts are symbiotically linked and synergistically critical to contemporary volunteer programs, yet they are not synonymous. The three terms are evaluation, impact, and accountability.

Evaluation

Very simply stated, evaluation means measurement. We “evaluate” in all aspects of our daily lives, whether it involves measuring (evaluating) the outside temperature to determine if we need to wear a coat to work, measuring (evaluating) the current balance in our checking account to see if we can afford to buy a new piece of technology, or measuring (evaluating) the fiscal climate in our workplace to decide if it is a good time to ask our supervisor for a salary increase. However, for volunteer programs, “evaluation involves measuring a targeted program’s inputs, processes, and outcomes so as to assess the program’s efficiency of operations and/or effectiveness in impacting the program’s targeted clientele group” (Safrit, 2010, p. 318).

The dual focus of this definition on a volunteer program’s efficiency and effectiveness is supported by contemporary evaluation literature. Daponte (2008) defined evaluation as being “done to examine whether a program or policy causes a change; assists with continuous programmatic improvement and introspection” (p. 157). Royse, Thyer, and Padgett (2010) focused on evaluation as “a form of appraisal…that examines the processes or outcomes of an organization that exists to fulfill some social need” (p. 12). These definitions each recognize the important role of evaluation in monitoring the operational aspects of a volunteer program (i.e., inputs and processes), yet ultimately emphasize the program’s purpose of engaging volunteers to help bring about positive changes in the lives of the program’s targeted audience (i.e., outcomes). These positive changes are called impacts.

Impact

Contrary to popular belief, volunteer programs do not exist for the primary purpose of engaging volunteers merely to give the volunteers something to do, or to supply an organization with unpaid staff to help expand its mission and purpose. Rather, volunteer programs ultimately seek to bring about positive impacts in the lives of the targeted clientele the volunteers are trained to support, either directly (through direct service to individual clients) or indirectly (through direct service to the service-providing organization). The latter statement does nothing to discount or demean the critical involvement of volunteers, but instead challenges a volunteer manager to continually focus and refocus the engagement of volunteers on the ultimate mission of the sponsoring organization and the outcomes it seeks to bring about. In other words, it forces volunteer managers to identify and focus on the volunteer program’s desired impacts.

According to Safrit (2010):

Impact may be considered the ultimate effects and changes that a volunteer-based program has brought about upon those involved with the program (i.e., its stakeholders), including the program’s targeted clientele and their surrounding neighborhoods and communities, as well as the volunteer organization itself and its paid and volunteer staff. (p. 319)

This inclusionary definition of impact focuses primarily on the organization’s raison d’être, and secondarily on the organization itself and its volunteers. Thus, it parallels and complements nicely the earlier definition of evaluation as being targeted first toward the volunteer program’s targeted clientele, and second toward internal processes and operations. Subsequently, volunteer managers must constantly measure the ultimate outcomes of volunteer programs, or, stated more formally, evaluate the volunteer program’s impacts. However, merely evaluating a volunteer program’s impacts is not in itself a guarantee of the program’s continued success and/or survival; however positive, the knowledge gained by evaluating a volunteer program’s impacts is practically meaningless unless it is strategically communicated to key leaders and decision makers connected to the sponsoring organization.

Accountability

Accountability within volunteer programs involves the strategic communication of the most important impacts of a volunteer program, identified through an evaluation process, to targeted program stakeholders, both internal and external to the organization. Internal stakeholders would include paid staff, organizational administrators, board members, volunteers, and the clientele served; external stakeholders include funders and donors, professional peers, government agencies and other legitimizers, and the larger community in which the organization operates.

Boone (1985) was the first author to describe the critical role of accountability in educational programs and organizations, and the previous definition is based largely on that of Boone, Safrit, and Jones (2002). Unfortunately, volunteer managers are sometimes hesitant to share program impacts even when they have identified them through an effective evaluation; they often consider such strong accountability as being boastful or too aggressive. However, accountability is the third and final concept critically linking the previous concepts of evaluation and impact to a volunteer program’s or organization’s continued survival. Volunteer managers must accept the professional responsibility in our contemporary impact-focused society to proactively plan for targeted accountability, identifying specific key stakeholders and deciding what specific program impacts each stakeholder type wants to know. This targeted approach to volunteer program accountability will be discussed in more detail later in this chapter.

Four Fundamental Questions in Any Volunteer Program Impact Evaluation

Evaluation is a relatively young concept within the educational world; Ralph Tyler (1949) is often credited with coining the term evaluation to refer to the alignment of measurement and testing with educational objectives. And there is no dearth in the literature of approaches and models for program evaluation. Some models are more conceptual and focus on the various processes involved in evaluation (e.g., Fetterman, 1996; Kirkpatrick, 1959; Rossi & Freeman, 1993; Stufflebeam, 1987), while others are more pragmatic in their focus (e.g., Combs & Falletta, 2000; Holden & Zimmerman, 2009; Patton, 2008). However, for volunteer managers with myriad professional responsibilities in addition to volunteer program evaluation, I suggest the following four fundamental questions that should guide any planned evaluation of a volunteer-based program.

Question 1: Why Do I Need to Evaluate the Volunteer Program?

Not every volunteer program needs to be evaluated. This may at first appear to be a heretical statement coming from the author of a chapter about volunteer program evaluation, and theoretically it is. Pragmatically, however, it is not. Many volunteer programs are short term by design, or are planned to be implemented only once. In contrast, some volunteer programs are inherent in the daily operations of a volunteer organization, or are so embedded within the organization’s mission that they are invisible to all but organizational staff and administrators. Within these contexts, a volunteer manager must decide whether the evaluation of such a program warrants the required expenditure of time and human and material resources. Furthermore, one cannot (notice that I did not say, may not) evaluate any volunteer program for which there are no measurable program objectives. This aspect of Question 1 brings us again to the previous discussion of volunteer program impacts: What is it that the volunteer program seeks to accomplish within its targeted clientele? What ultimate impact is the volunteers’ engagement designed to facilitate or accomplish?

Any and all volunteer program impact evaluations must be based on the measurable program objectives targeted to the program’s clientele (Safrit, 2010). Such measurable program objectives are much more detailed than the program’s mere goal, and define key aspects of the program’s design, operations, and ultimate outcomes. A measurable program objective must include each of the following five critical elements:

1. What is the specific group or who are the specific individuals that the volunteer program is targeted to serve (i.e., the program’s clientele)?

2. What specific program activities will be used to interact with the targeted clientele group (i.e., the intervention that involves volunteers)?

3. What specific change is the intervention designed to bring about within the targeted clientele group (i.e., program outcome or impact)?

4. What level of change or success does the program seek to achieve?

5. How will the intervention’s success be evaluated?

As an example, too often I encounter the following types of volunteer program objectives:

• “We will recruit at least 50 new teen volunteers to help with the new Prevent Youth Obesity Now Program.”

• “At least 100 individuals will participate in the volunteer-delivered Career Fundamentals Program.”

• “Organizational volunteers will contribute a minimum of 1,000 total volunteer hours mentoring adults who cannot read and/or write.”

Now consider the same objectives correctly written as measurable program objectives, with their five components identified (a brief structured sketch in code follows the examples):

• “As a result of the teen volunteer staffed Prevent Youth Obesity Now summer day camp, at least 50% of the participating 200 overweight youth will adopt and maintain at least one new proper nutrition practice, as reported by their parents in a six-month follow-up mailed questionnaire.” (Target audience: 200 overweight youth. Planned intervention: teen volunteer staffed summer day camp. Desired change among target audience: adoption of at least one new proper nutrition practice. Desired level of success: 50% of participating youth. How success will be evaluated: 6-month post-camp questionnaire mailed to participants’ parents.)

• “At least 50% of currently unemployed participants in the six-week Career Fundamentals Program taught by volunteers will be able to describe one new workplace skill they learned as a result of the program, as measured by a volunteer-delivered interview during the final Program session.” (Target audience: unemployed individuals. Planned intervention: volunteer-taught workshop session. Desired change among target audience: learning new workplace skills. Desired level of success: 50% of participants. How success will be evaluated: exit interview conducted by a volunteer.)

• “At least 30% of the adults participating in the six-week literacy volunteer mentoring program will improve their reading skills by ten percentile points as measured by a standardized reading test administered at the first and final sessions.” (Target audience: illiterate adults. Planned intervention: volunteer mentoring program. Desired change among target audience: improved reading skills. Desired level of success: 30% of participants. How success will be evaluated: standardized reading tests.)
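To make these five elements concrete, here is a minimal sketch, in Python (the chapter itself contains no code), of how a volunteer manager might store an objective as structured data. The class name, field names, and completeness check are illustrative assumptions, not part of Safrit’s framework.

```python
from dataclasses import dataclass, fields

@dataclass
class MeasurableObjective:
    """One volunteer program objective holding the five critical elements."""
    clientele: str          # 1. specific group the program is targeted to serve
    intervention: str       # 2. program activities involving volunteers
    desired_change: str     # 3. change sought within the clientele (outcome/impact)
    success_level: str      # 4. level of change or success the program seeks
    evaluation_method: str  # 5. how the intervention's success will be evaluated

    def is_complete(self) -> bool:
        # Usable for impact evaluation only if all five elements are present.
        return all(getattr(self, f.name).strip() for f in fields(self))

# The chapter's literacy example, restated as structured data.
literacy = MeasurableObjective(
    clientele="adults in the six-week literacy volunteer mentoring program",
    intervention="volunteer mentoring program",
    desired_change="reading skills improved by ten percentile points",
    success_level="at least 30% of participants",
    evaluation_method="standardized reading test at the first and final sessions",
)
assert literacy.is_complete()
```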

A final aspect of Question 1 involves the use of “logic” models in evaluating volunteer programs, so called because they seek to outline and follow the logical development and implementation of a program or intervention from its conception through to its targeted long-term impact. Logic models are not new to volunteer programs (Honer, 1982; Safrit & Merrill, 1998, 2005) and apply four standard components to program development and impact evaluation (Bennett & Rockwell, 1994; Frechtling, 2007; W.K. Kellogg Foundation, 2000):

1. Inputs. Actual and in-kind resources and contributions devoted to the project

2. Activities. All activities and events conducted or undertaken so as to achieve the program’s identified goal

3. Outputs. Immediate, short-term services, events, and products that document the implementation of the project

4. Outcomes. The desired long-term changes achieved as a result of the project

Unfortunately, space does not allow for an in-depth discussion of the use of logic models in evaluating volunteer program impacts. However, Exhibit 16.1 illustrates the application of logic modeling in a volunteer-delivered program designed to decrease overweight and/or obesity among teens. Note the strong correlation between the program’s measurable program objectives and the Outcomes component for the volunteer program.

EXHIBIT 16.1 Sample Logic Model for a Volunteer Program Focused on Decreasing Teen Obesity

Inputs:
• $350 in nutrition curricula purchased
• $750 for use of the day camp facility (in-kind)
• 10 members of the program advisory committee
• 12 adult volunteers working with the program
• Program coordinator devoted 3 workweeks (120 hours) to planning and implementing the program

Activities:
• Three 2-hour meetings of the program advisory committee conducted
• Three 3-hour volunteer training sessions conducted

Outputs:
• At least 30 teens who are clinically obese will participate in the 3-day, 21-hour program
• At least 10 adult volunteers will serve during the actual day camp
• Program advisory committee members will volunteer to teach program topics to participants during the day camp

Outcomes:
• At least 80% of teen participants will increase their knowledge of proper nutrition and/or the importance of exercise along with diet as evaluated using a pre-/post-test survey
• At least 70% of teen participants will demonstrate new skills in preparing healthy snacks and meals as evaluated by direct observation by program volunteers
• At least 50% of teen participants will aspire to eat more nutritious meals and to exercise daily as indicated by a post-test survey

Source: © 2009 R. Dale Safrit. All Rights Reserved.
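For readers who prefer data structures to tables, the four logic model components can be sketched as a simple container. This is an illustrative rendering of Exhibit 16.1’s categories; the class name and the abbreviated example entries are my own, not Bennett and Rockwell’s notation.

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """The four standard logic model components for one volunteer program."""
    inputs: list[str] = field(default_factory=list)      # resources devoted to the project
    activities: list[str] = field(default_factory=list)  # events conducted to reach the goal
    outputs: list[str] = field(default_factory=list)     # short-term, countable products
    outcomes: list[str] = field(default_factory=list)    # desired long-term changes

teen_obesity = LogicModel(
    inputs=["$350 in nutrition curricula", "$750 day camp facility (in-kind)",
            "12 adult volunteers"],
    activities=["three 2-hour advisory committee meetings",
                "three 3-hour volunteer training sessions"],
    outputs=["at least 30 clinically obese teens attend the 3-day, 21-hour camp"],
    outcomes=["at least 80% of teens increase nutrition knowledge (pre/post survey)"],
)
print(teen_obesity.outcomes)
```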

Question 2: How Will I Collect the Required Impact Evaluation Data?

Once targeted impacts have been identified for a volunteer program, thus answering Question 1 as to why the program is to be evaluated, a volunteer manager must next decide on actual methods to be used to collect the evaluation data. If measurable program objectives have been developed, then Question 2 is easily answered. However, oftentimes the evaluation component of a measurable program objective is the final one to be decided, simply because the other four components tend to naturally preempt it during the conceptual development of a volunteer program evaluation. Furthermore, data collection methods may largely be defined and/or constrained based on the type of program intervention and/or the numbers and type of target audience (i.e., data collection methods will naturally differ between one-on-one and mass-audience-delivered volunteer programs, adult and youth audiences, etc.).

Basically, two types of data (and thus data collection methods) exist: qualitative and quantitative. Thomas (2003) provides a very fundamental description of both:

The simplest way to distinguish between qualitative and quantitative may be to say that qualitative methods involve a researcher describing kinds of characteristics of people and events without comparing events in terms of measurements or amounts. Quantitative methods, on the other hand, focus attention on measurements and amounts (more and less, larger and smaller, often and seldom, similar and different) of the characteristics displayed by the people and events that the researcher studies. (p. 1)

And both types of data are important in documenting the impact of volunteer programs. According to Safrit (2010):

Within non-academic contexts (including volunteer programs), quantitative methods are most commonly used in program evaluations. Quantitative methods allow the evaluator to describe and compare phenomena and observations in numeric terms. Their predominance may largely be due to the increasing demand for “number-based evidence” as accountability within nonprofit programs and organizations. However, qualitative methods may also be used very effectively in volunteer program impact evaluations. Qualitative methods focus upon using words to describe evaluation participants’ reactions, beliefs, attitudes, and feelings and are often used to put a “human touch” on impersonal number scores and statistics. (p. 333)

The choice is not necessarily qualitative versus quantitative; rather, a volunteer manager needs to once again consider critical factors affecting the program’s impact evaluation, such as the purpose of the evaluation, possible time constraints, the human and material (including financial) resources available, and to whom the evaluation is targeted.

There is a wide array of qualitative methods available for a volunteer manager to utilize in evaluating impacts of a volunteer program (Bamberger, Rugh, & Mabry, 2006; Dean, 1994; Krueger & Casey, 2000; Miles & Huberman, 1994; Thomas, 2003; Wells, Safrit, Schmiesing, & Villard, 2000), including (but not limited to) case studies, ethnographies, content analysis, participant observation, and experienced narratives. Of these, however, Spaulding (2008) suggested that the case study approach, using participant interviews and focus groups to collect data, is by far the most common qualitative method used with volunteer programs. Again, space limitations do not allow for an in-depth discussion of these methods. (For a more in-depth discussion of using case studies with volunteer programs, see Safrit, 2010.) However, the author suggests that qualitative methods are most appropriate in evaluating volunteer programs that are targeted to a relatively small group of clientele, for whom a few focused practice or behavioral skills and/or changes are the desired program impact. Qualitative evaluation methods require considerably more time and human resources to conduct properly, and data should be collected by well-trained individuals who conduct individual interviews and/or focus groups. Qualitative methods are most effective when the desired targeted accountability is focused on personal/human interest and affective/emotional impacts of the volunteer program.

However, when volunteer programs are designed to reach large numbers of targeted clientele and seek to impact their knowledge and/or attitudes, quantitative methods are probably more appropriate for the volunteer program impact evaluation. Unfortunately, in today’s society demanding increased accountability, volunteer organizations are called on all too often to reach ever-increasing numbers of targeted clients with stagnant or decreasing resources, and then to dollarize the program’s impacts on clients. Quantitative methods are also easier to analyze and summarize, and are best when it is important or necessary to translate measured program impacts into dollar amounts that are required by funders and legitimizers.

Consequently, quantitative evaluation methods are overwhelmingly the most prevalent approach to collecting volunteer program impact data, and the most common quantitative method used is the survey design, using questionnaires to collect data. According to Safrit (2010):

Translated into volunteer program terms…conducting a survey to evaluate [volunteer] program impact involves: identifying the volunteer program of interest; identifying all program clientele who have participated in the program and selecting participants for the evaluation; developing a survey instrument (questionnaire) to collect data; collecting the data; and analyzing the data so as to reach conclusions about program impact. (pp. 336–337)

When using surveys to evaluate volunteer program impacts, the volunteer manager must make important decisions regarding participant selection, instrumentation, and data collection and analysis procedures (Dillman, Smyth, & Christian, 2008). Safrit (2010) provides an in-depth discussion of each consideration that space limitations prohibit in this chapter. However, the prevalence today of personal computers, data analysis software designed for non-statisticians, “survey-design-for-dummies” type texts, and very affordable do-it-yourself web-based questionnaire companies all make it much easier for a volunteer manager with only a fundamental background in quantitative evaluation methods to plan, design, and conduct a valid and reliable survey-based quantitative evaluation of a volunteer program, using a face-to-face, mailed, e-mailed, or web-based questionnaire to collect impact data from targeted clientele.
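As a minimal sketch of the final analysis step, assuming a hypothetical yes/no questionnaire item keyed to one measurable objective, the collected responses can be reduced to a single pass/fail judgment against the objective’s stated success level. All data and names below are invented for illustration.

```python
# Hypothetical post-program questionnaire responses keyed to one objective:
# "Did you adopt at least one new proper nutrition practice?" (True = yes)
responses = [True, True, False, True, False, True, True, False, True, True]

target_share = 0.50  # the objective's desired level of success: 50% of participants

share_met = sum(responses) / len(responses)
verdict = "met" if share_met >= target_share else "not met"
print(f"{share_met:.0%} reported the desired change "
      f"(target {target_share:.0%}) -> objective {verdict}")
```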

The author must point out that in some situations, neither qualitative nor quantitative methods alone are adequate to collect the type of data necessary to document impacts of large volunteer programs with multiple measurable program objectives targeted to a diverse program clientele. In such situations, the volunteer manager may best decide to use some qualitative approaches together with some quantitative approaches; that is, a mixed methods approach (Creswell, 1994). According to Safrit (2010):

The most common type of mixed method approach to impact evaluations of volunteer programs involves a “two-phase design” in which the evaluator first uses qualitative methods in a first phase to identify and describe key themes describing a volunteer program’s impacts upon clientele, and subsequently quantitative methods to be able to quantify and compare the intensity and pervasiveness of the impacts among the clientele. (p. 340)
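A minimal sketch of such a two-phase design, with all data invented for illustration: phase one reduces interview material to a handful of impact themes, and phase two counts how pervasively survey respondents endorse each theme.

```python
from collections import Counter

# Phase 1 (qualitative): themes a trained reviewer distilled from interviews.
themes = ["gained confidence", "improved reading", "felt supported"]

# Phase 2 (quantitative): each survey respondent marks the themes that apply.
survey_responses = [
    {"improved reading", "felt supported"},
    {"improved reading"},
    {"gained confidence", "improved reading"},
]

# Pervasiveness of each impact theme among respondents.
counts = Counter(theme for resp in survey_responses for theme in resp)
for theme in themes:
    share = counts[theme] / len(survey_responses)
    print(f"{theme:<18} {share:.0%} of respondents")
```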

Question 3: Who Wants (or Needs) to Know What About the Evaluation Findings?

In response to Question 3, most volunteer managers would quickly answer, “I want everyone to know everything about all of our volunteer programs.” However, the stark reality is that different stakeholders will want (and need) to know different aspects of any volunteer program evaluation. Some stakeholders are extremely busy and only have minimal time to review a volunteer program evaluation report. Others will have a very focused interest in the program, especially if they have contributed materials and/or resources and are therefore most interested in the bottom line. Volunteers themselves, on the other hand, may be less concerned about the financial aspects of the program in which they volunteer, and may be more concerned about exactly what difference they have made in the lives of the clientele they have served.

To help volunteer managers answer this question in an objective and realistic manner, Safrit (2010) developed a program accountability matrix (Exhibit 16.2). In the matrix, specific types of internal and external volunteer program stakeholders are listed in the far left column, and the standard types of evaluation data based on logic modeling (i.e., inputs, activities, outputs, and outcomes) are listed across the top of the matrix. According to the author:

To use the Matrix, the [volunteer manager] simply answers the following question for each type of evaluation information, for each type of stakeholder: “If time and resources are limited, does this stakeholder really want to know this type of evaluation information?” If the answer is “yes,” then the VRM simply places a mark [X] in the cell where the specific stakeholder and evaluation information intercept; if the answer is “no,” then the cell is left empty. The caveat for developing an effective Accountability Matrix is that the [volunteer manager] must be brutally honest and frank in responding to the question for each stakeholder group and each type of evaluation information; s/he must recognize and manage the previously described bias that everyone wants to know everything about a specific volunteer program. (p. 342)

Once the matrix is completed for the specified volunteer program, the volunteer manager simply looks at the column totals for each potential type of evaluation data. Those columns (i.e., data types) with the highest totals should therefore be the priorities for the volunteer manager to focus on in the impact evaluation, especially when time and resources are limited. This targeted approach to volunteer program accountability serves as a very useful tool not only in deciding to whom to communicate impacts at the end of a volunteer-based program, but also in deciding what needs to be evaluated about the volunteer program even before it is initiated.
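A short sketch of the matrix logic just described: record each stakeholder’s “wants to know” marks and total the columns to surface the priority data types. The rows below are a subset consistent with Exhibit 16.2; the structure and names are illustrative, not a prescribed implementation.

```python
DATA_TYPES = ["inputs", "activities", "outputs", "outcomes"]

# One row per stakeholder: the logic model data types they really want to know.
matrix = {
    "volunteer manager":  {"inputs", "activities", "outputs", "outcomes"},
    "program director":   {"inputs", "activities", "outputs", "outcomes"},
    "board of directors": {"outcomes"},
    "program funders":    {"inputs", "outcomes"},
    "program clientele":  {"outputs", "outcomes"},
}

# Column totals: the highest totals mark the evaluation's collection priorities.
totals = {dt: sum(dt in wants for wants in matrix.values()) for dt in DATA_TYPES}
for dt, n in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{dt:<10} {n}")
```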

MONETIZING IMPACTS Once targeted volunteer program impacts have been identified and evaluated, volunteer managers are often faced with a seemingly impossible challenge: converting measured volunteer program impacts into monetary values. Again, this challenge is a fiscal reality encountered ever more frequently today as a result of increased pressure (or demands) from funders, government agencies, and other legitimizers. However, it is made much easier by (1) having measurable program objectives identified for the volunteer program from its inception, (2) the use of logic modeling, and (3) having completed a realistic accountability matrix.

EXHIBIT 16.2 Sample Completed Program Accountability Matrix for a Volunteer Program Focused on Decreasing Teen Obesity

If time and resources are limited, does this stakeholder really want to know specifics about the volunteer program’s…

Type of Volunteer Program Stakeholder              Inputs  Activities  Outputs  Outcomes

Internal Stakeholders
  Volunteer Manager                                   X        X          X        X
  Program Director                                    X        X          X        X
  Organization’s Executive Director                                       X        X
  Organization’s Board of Directors                                                X
  Program Volunteers                                                      X        X
  Other Stakeholder? (Advisory Committee Members)              X                   X

External Stakeholders
  Program Clientele                                                       X        X
  Program Funders                                     X                            X
  Program Collaborators                                                   X
  Community Leaders                                                                X
  Government Leaders (County Commissioners)           X                            X
  Other Stakeholders? (County Health Department)                          X        X

TOTALS                                                4        3          7        11

Source: © 2009 R. Dale Safrit, All Rights Reserved.


And as Key (1994) noted, there may be existing sources of data on volunteer program benefits other than the impact evaluation’s findings:

There are numerous potential sources of data for analysis of program benefits: 1. Existing records and statistics kept by the agency, legislative committees, or agency watchdogs . . .; 2. Feedback from the program’s clients…obtained through a questionnaire or focus group; 3. Ratings by trained observers; 4. Experience of other governments or private or nonprofit organizations; and 5. Special data gathering. (p. 470)

Furthermore, the idea that volunteer program outputs and outcomes (impacts) can be converted into monetary values is not new; Karn proposed a system for doing so as early as 1982. Most recently, Anderson and Zimmerman (2003) identified five methods that could be used to place a dollar estimate on volunteers’ time (a brief arithmetic sketch follows the list):

1. Using an average wage of all wage earners in the geographical area served by the volunteer program

2. Using a wage earned by a professional who does paid work comparable to the service contributed by the volunteer

3. Using the standard average dollar value of an hour of a volunteer’s time that is calculated and published annually by the Independent Sector (2010)

4. Using the living wage that the United States federal government calculates as that needed for an individual to maintain a standard of living above the current poverty level

5. Using the local or state minimum wage that any employer must pay an employee as dictated by law
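All five methods reduce to the same multiplication, hours contributed times an hourly rate; only the rate differs. A hedged sketch follows, in which every rate except the $7.25 federal minimum wage noted in Exhibit 16.3’s footnote is a made-up placeholder.

```python
hours_contributed = 357  # e.g., total adult volunteer hours during the day camp

# Hourly rates under the five estimation methods (placeholder figures, except
# the $7.25 federal minimum wage noted in Exhibit 16.3's footnote).
rates = {
    "1. area average wage":        21.40,
    "2. comparable professional":  28.75,
    "3. Independent Sector value": 20.85,
    "4. federal living wage":      12.00,
    "5. minimum wage":              7.25,
}

for method, rate in rates.items():
    print(f"{method:<29} ${hours_contributed * rate:>9,.2f}")
```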

Exhibit 16.3 illustrates how the volunteer manager for a volunteer-based teen obesity program calculated estimated dollar amounts for the logic model components of the program.

COMPARING COSTS AND BENEFITS IN VOLUNTEER PROGRAMS Once program inputs, activities, outputs, and outcomes (impacts) have all been translated into dollar amounts, a volunteer manager may take the program’s impact evaluation to a final and highest level of accountability by borrowing one or more of three powerful statistics from the field of applied economics: cost savings analysis (CSA), benefit-cost analysis (BCA), and/or return on investment (ROI).

CSA is the estimated dollar value of any potential volunteer program costs that were not required to be spent as a direct result of volunteer involvement in the program. In the teen obesity program example (Exhibit 16.3), volunteers saved the program and sponsoring organization an estimated $8,602. BCA is the calculated estimated ratio comparing the net benefits of the volunteer program to the total costs of conducting the program (Key, 1994; Moore, 1978; Royse et al., 2010). A BCA of 1 (written as 1:1) indicates that the value of the volunteer program’s benefits equaled the value of the program’s total costs, whereas a BCA of 2 (written as 2:1) indicates that for each $1.00 in program costs, $2.00 were realized in program benefits.

EXHIBIT 16.3 Examples of Converting Metrics in a Volunteer Program Focused on Decreasing Teen Obesity into Dollar Amounts

Inputs (costs):
• $350 (nutrition curricula purchased)
• $750 (use of the day camp facility; in-kind)
• $3,600 (total costs of day camp supplies, meals, snacks, equipment, etc.)
Column total: $4,700

Activities (costs):
• $1,800 (volunteer manager’s salary devoted to program; 3 workweeks = 120 hours for planning and implementing the program @ $15.00/hr. salary rate)
• $414 (volunteer manager’s work benefits calculated as 23% of salary)
• $1,170 (program advisory committee members’ time; three 2-hour meetings conducted for 10 members @ $6.50/hr. minimum wage¹)
• $4,095 (cost of volunteers’ time for training; three 3-hour volunteer training sessions for 21 volunteers @ $6.50/hr. minimum wage¹)
• $1,404 (costs for the parents of the 36 participating obese teens to transport participants to day camp; 2 hrs./day @ 3 days @ 36 parents @ $6.50/hr. minimum wage¹)
Column total: $8,883

Outputs (benefits):
• $2,320.50 (21 adult volunteers contributed 357 total hours during the actual day camp @ $6.50/hr. minimum wage¹)
• $266.50 (eight program advisory committee members volunteered 41 total hours to teach program topics during the day camp @ $6.50/hr. minimum wage¹)
Column total: $2,587

Outcomes (benefits):
• 84% (n = 30) of teen participants increased their knowledge of proper nutrition and/or the importance of exercise along with diet as evaluated using a pre-/post-test survey
• 75% (n = 27) of teen participants demonstrated new skills in preparing healthy snacks and meals as evaluated by direct observation by program volunteers
• 54% (n = 19) of teen participants aspired to eat more nutritious meals and to exercise daily as indicated by a post-test survey
• If, as a result of the program, a mere 10% of obese teens who demonstrated new skills in preparing healthy snacks and meals (n = 3) had to make one fewer visit to a doctor each year for the next 5 years, at an average doctor’s visit cost of $120, then the program would have saved a minimum of $1,800 in medical costs.
• If, as a result of the program, a mere 10% of obese teens who aspired to eat more nutritious meals and exercise regularly (n = 2) did not develop Type II diabetes, at a cost to society of $2.5 million as estimated by the State Dept. of Health Services, then the program would have saved a minimum of $5.0 million.

Total Estimated Program Costs (Inputs + Activities): $13,583
Total Estimated Program Benefits (Outputs + Outcomes): $5,002,587
Estimated CSA: $8,602
Estimated BCA: 368:1
Estimated ROI: approximately 36,730%

¹Note: This $6.50 figure is used by the author for illustrative purposes only. As of July 24, 2009, the official U.S. federal hourly minimum wage was increased to $7.25.

Source: © 2009 R. Dale Safrit. All Rights Reserved.

For the teen obesity program (Exhibit 16.3), the total costs were estimated at $13,583 and total program benefits were estimated at $5,002,587, resulting in an astounding BCA of 368:1; that is, for every $1.00 spent on the obesity program, an estimated $368 was generated in program benefits.

The ultimate impact accountability statistic that a volunteer manager may calculate is ROI for a volunteer program. According to Key (1994), ROI “is the discount rate that would make the present value of the [volunteer] project equal to zero” (p. 461). ROI is the percentage resulting from subtracting a program’s total costs from its total benefits, dividing that figure by the total costs, and multiplying that figure by 100 (J. J. Phillips, 2003; P. P. Phillips, 2002; P. P. Phillips & J. J. Phillips, 2005). Thus, in the obesity program example, the ROI is $4,989,004 (i.e., $5,002,587 in total benefits minus $13,583 in total costs) divided by the total costs of $13,583, and finally multiplied by 100, resulting in approximately 36,730%. Thus, for every $1.00 invested in the volunteer-delivered program, a net monetary value of approximately $367 was generated that benefited the total community.
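Since all three statistics are simple arithmetic on the exhibit’s two totals, they are easy to script. The sketch below uses the chapter’s own figures; note that the stated ROI formula yields roughly 36,730% for these numbers.

```python
total_costs = 13_583            # estimated program costs (Exhibit 16.3)
total_benefits = 5_002_587      # estimated program benefits (Exhibit 16.3)
volunteer_cost_savings = 8_602  # CSA: costs avoided through volunteer involvement

bca = total_benefits / total_costs                        # benefit-cost ratio
roi = (total_benefits - total_costs) / total_costs * 100  # return on investment, %

print(f"CSA: ${volunteer_cost_savings:,}")
print(f"BCA: {bca:,.0f}:1")  # ~368:1
print(f"ROI: {roi:,.0f}%")   # ~36,730%
```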

Question 4: How Do I Communicate the Evaluation Findings?

This fourth and final question may appear to be the easiest to answer, but it still requires thoughtful consideration by a volunteer manager. Going back to the discussion of the program accountability matrix, specific volunteer program stakeholders may require very specific types of evaluation findings reports, and unfortunately, one size may not fit all! Some may desire a thorough and comprehensive final report describing the volunteer program in detail and all program impacts measured; others may simply wish to see an executive summary of key program impacts related directly to their program involvement. Hendricks (1994) concluded:

If a tree falls in the forest and no one hears it, does it make a sound? If an evaluation report falls on someone’s desk and no one reads it, does it make a splash? None whatsoever, yet we evaluators still rely too often on long tomes filled with jargon to “report” our evaluation results. (p. 549)

Safrit (2010) identified three important aspects of the accountability function, related directly to Question 4, for a volunteer manager to consider in deciding how to communicate the findings of a volunteer program impact evaluation to targeted stakeholders. First, the volunteer manager must identify the specific recipient of the communication. This has, of course, been addressed by Question 3. Second, the volunteer manager must identify the specific message to be communicated. Again, this has been decided by answering Question 1. Together, both of these aspects have been identified more specifically in the completed program accountability matrix.

The third aspect of accountability, however, is for the volunteer manager to identify the specific format for the evaluation impact report, and the specific medium to be used to communicate the report. The most common format and medium used to communicate the findings of a volunteer program impact evaluation for accountability purposes is a written final report. Typical components of such a final report include an introduction to the volunteer program, a description of the methods used to evaluate the program’s impacts, the evaluation findings, and a thorough discussion of the findings’ implications for the targeted clientele served, the sponsoring organization, and the larger community (Royse et al., 2010).

However, other written report formats may better serve some targeted stakeholders. Executive summaries are short (i.e., 2 to 4 pages) annotated compilations of information contained in the larger final report, highlighting only the most important aspects of the volunteer program and its evaluation findings of particular interest to the executive summary’s targeted audience. Even more concise are impact statements or fact sheets that present impact evaluation data and findings in tabular or visual formats, much like Exhibit 16.3 for the teen obesity program. And finally, in today’s multimedia, 24/7 world, the volunteer manager should not overlook opportunities to communicate the impact evaluation findings of a volunteer program in any combination of written and visual formats that may be posted to the organization’s web page or streamed onto the Internet. Whatever the reporting format, Patton’s (2008, p. 509) recommendations for making the evaluation’s accountability report as user-focused and user-friendly as possible should be considered carefully by a volunteer manager:

1. “Be intentional about reporting…know the purpose of the report and stay true to that purpose.”

2. “Stay user-focused: Focus the report on the priorities of primary intended users.”
   These first two recommendations have been addressed previously in this chapter, but they emphasize the importance of a volunteer manager:
   • Identifying specific target stakeholders for a volunteer program evaluation
   • Identifying which aspect of the logic model each stakeholder type will specifically want to know
   • Collecting appropriate data to support that aspect
   • Reporting the findings in a format desired by the stakeholder group

3. “Organize and present the findings so as to facilitate understanding and interpretation.”
   This recommendation points to the prior discussion of the need for a volunteer manager to customize the actual report into a format preferred by a specific stakeholder group. Again, one style (and one format) does not fit all stakeholders. Most stakeholders will prefer a written report, but today’s technological advances make multimedia options readily available as well.

4. “Avoid surprising primary stakeholders.”
   No one likes a surprise, but of course, a positive surprise is more readily accepted than a negative surprise. Begin the accountability report of any volunteer program with the most positive and important impact evaluation findings, and then address “areas for improvement” or “findings of some concern.” However, as an evaluator, a volunteer manager has an ethical responsibility to communicate all appropriate evaluation findings, and not to exclude any findings that may make the volunteer manager or program administrator uncomfortable.

5. “Prepare users to engage with and learn from ‘negative’ findings.”
   If necessary, present any negative findings one-on-one with key stakeholders, asking them for their reactions, insights, and/or opinions, before surprising them in a large-group formal session or meeting. Then incorporate this input into the final version of the impact evaluation report.

6. “Distinguish dissemination from use.”
   Sharing the findings of a volunteer program impact evaluation with key stakeholders is a critical component of the accountability responsibility of a volunteer manager. However, the goal should be to move stakeholders beyond a mere discussion of what went well and what went wrong, to a higher level of discussion, one that focuses the impact evaluation findings on strengthening the volunteer program in ways and areas that better serve the clientele the program is designed to target, and that better fulfill the organization’s mission and purpose.

References

Anderson, P. A., & Zimmerman, M. E. (2003). Dollar value of volunteer time: A review of five estimation methods. Journal of Volunteer Administration, 21(2), 39–44.

Austin, M. J., Cox, G., Gottlieb, N., Hawkins, J. D., Kruzich, J. M., & Rauch, R. (1982). Evaluating your agency’s programs. Newbury Park, CA: Sage.

Bamberger, M., Rugh, J., & Mabry, L. (2006). Real world evaluation: Working under budget, time, data, and political constraints. Thousand Oaks, CA: Sage.

Bennett, C., & Rockwell, K. (1994, December). Targeting outcomes of programs (TOP): An integrated approach to planning and evaluation. Retrieved from http://citnews.unl.edu/TOP/english/

Boone, E. J. (1985). Developing programs in adult education. Englewood Cliffs, NJ: Prentice-Hall.

Boone, E. J., Safrit, R. D., & Jones, J. M. (2002). Developing programs in adult education (2nd ed.). Prospect Heights, IL: Waveland Press.

Bradner, J. H. (1995). Recruitment, orientation, and training. In T. D. Connors (Ed.), The volunteer management handbook (pp. 61–81). New York, NY: John Wiley & Sons.

Brudney, J. L. (1995). Preparing the organization for volunteers. In T. D. Connors (Ed.), The volunteer management handbook (pp. 36–60). New York, NY: John Wiley & Sons.

Brudney, J. L. (1999, Autumn). The effective use of volunteers: Best practices for the public sector. Law and Contemporary Problems, 62(4), 219–253.

Combs, W. L., & Falletta, S. V. (2000). The targeted evaluation process. Alexandria, VA: American Society for Training & Development.

Council for Certification in Volunteer Administration. (2008). Body of knowledge in volunteer administration. Retrieved from www.cvacert.org/certification.htm

Creech, R. B. (1968). Let’s measure up! A set of criteria for evaluating a volunteer program. Volunteer Administration, 2(4), 1–18.

Creswell, J. W. (1994). Research design: Qualitative & quantitative approaches. Thousand Oaks, CA: Sage.

Daponte, B. O. (2008). Evaluation essentials: Methods for conducting sound research. San Francisco, CA: Jossey-Bass.

Dean, D. L. (1994). How to use focus groups. In J. S. Wholey, H. P. Hatry, & K. E. Newcomer (Eds.), Handbook of practical program evaluation (pp. 338–349). San Francisco, CA: Jossey-Bass.

Dillman, D. A., Smyth, J. D., & Christian, L. M. (2008). Internet, mail, and mixed-mode surveys: The tailored design method. Hoboken, NJ: John Wiley & Sons.

Fetterman, D. M. (1996). Empowerment evaluation. In D. M. Fetterman, S. J. Kaftarian, & A. Wandersman (Eds.), Empowerment evaluation: Knowledge and tools for self-assessment & accountability (pp. 3–46). Thousand Oaks, CA: Sage.

Fisher, J. C., & Cole, K. M. (1993). Leadership and management of volunteer programs. San Francisco, CA: Jossey-Bass.

Frechtling, J. A. (2007). Logic modeling methods in program evaluation. San Francisco, CA: John Wiley & Sons.

Graff, L. L. (1995). Policies for volunteer programs. In T. D. Connors (Ed.), The volunteer management handbook (pp. 125–155). New York, NY: John Wiley & Sons.

Hendricks, M. (1994). Making a splash: Reporting evaluation results effectively. In J. S. Wholey, H. P. Hatry, & K. E. Newcomer (Eds.), Handbook of practical program evaluation (pp. 549–575). San Francisco, CA: Jossey-Bass.

Holden, D. J., & Zimmerman, M. A. (2009). A practical guide to program evaluation: Theory and case examples. Los Angeles, CA: Sage.

Honer, A. S. (1982). Manage your measurements, don’t let them manage you! Volun- teer Administration, 14(4), 25–29.

Independent Sector. (2010). Value of volunteer time. Retrieved from www.independentsector.org/volunteer_time

Karn, G. N. (1982). Money talks: A guide to establishing the true dollar value of volunteer time. Journal of Volunteer Administration, 1(2), 1–17.

Key, J. E. (1994). Benefit-cost analysis in program evaluation. In J. S. Wholey, H. P. Hatry, & K. E. Newcomer (Eds.), Handbook of practical program evaluation (pp. 456–488). San Francisco, CA: Jossey-Bass.

Kirkpatrick, D. L. (1959). Techniques for evaluating training programs. Journal of the American Society for Training and Development, 13(11–12), 23–32.

Korngold, A., & Voudouris, E. (1994). Business volunteerism: Designing your program for impact. Cleveland, OH: Business Volunteerism Council.

Krueger, R., & Casey, M. A. (2000). Focus groups: A practical guide for applied research (3rd ed.). Thousand Oaks, CA: Sage.

Lulewicz, S. J. (1995). Training and development of volunteers. In T. D. Connors (Ed.), The volunteer management handbook (pp. 82–102). New York, NY: John Wiley & Sons.

Macduff, N. (1995). Volunteer and staff relations. In T. D. Connors (Ed.), The volunteer management handbook (pp. 206–221). New York, NY: John Wiley & Sons.

Merrill, M., & Safrit, R. D. (2000, October). Bridging program development and impact evaluation? Proceedings of the 2000 International Conference on Volunteer Administration (p. 63). Phoenix, AZ: Association for Volunteer Administration.

Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook (2nd ed.). Thousand Oaks, CA: Sage.

Moore, N. A. (1978). The application of cost-benefits analysis to volunteer programs. Volunteer Administration, 11(1), 13–22.

Morley, E., Vinson, E., & Hatry, H. P. (2001). Outcome measurement in nonprofit organizations: Current practices and recommendations. Washington, DC: Independent Sector.

Naylor, H. H. (1973). Volunteers today: Finding, training and working with them. Dryden, NY: Dryden Associates.

Naylor, H. H. (1976). Leadership for volunteering. Dryden, NY: Dryden Associates.

O’Connell, B. (1976). Effective leadership in voluntary organizations. Chicago, IL: Follett.

Patton, M. Q. (2008). Utilization-focused evaluation (4th ed.). Los Angeles, CA: Sage.

Phillips, J. J. (2003). Return on investment in training and performance improvement programs (2nd ed.). Amsterdam, Netherlands: Butterworth Heinemann.

Phillips, P. P. (2002). The bottom line on ROI: Basics, benefits, & barriers to measuring training & performance improvement. Atlanta, GA: CEP Press.

Phillips, P. P., & Phillips, J. J. (2005). Return on investment: ROI basics. Alexandria, VA: American Society for Training & Development.

Rossi, P. H., & Freeman, H. E. (1993). Evaluation: A systematic approach. Newbury Park, CA: Sage.

Royse, D., Thyer, B. A., & Padgett, D. K. (2010). Program evaluation: An introduction (5th ed.). Belmont, CA: Wadsworth.

Safrit, R. D. (2010). Evaluation and outcome measurement. In K. Seel (Ed.), Volunteer administration: Professional practice (pp. 313–361). Markham, ON: LexisNexis Canada.

Safrit, R. D., & Merrill, M. (1998). Assessing the impact of volunteer programs. Journal of Volunteer Administration, 16(4), 5–10.

Safrit, R. D., & Merrill, M. (2005, November). The seven habits of highly effective managers of volunteers. Proceedings of the 10th International Association of Volunteer Efforts (IAVE) Asia-Pacific Regional Volunteer Conference (p. 67). Hong Kong, China: IAVE.

Safrit, R. D., & Schmiesing, R. J. (2002, October). Measuring the impact of a stipended volunteer program: The Ohio 4-H B.R.I.D.G.E.S. experience. Proceedings of the 2002 International Conference on Volunteer Administration (p. 16). Denver, CO: Association for Volunteer Administration.

Safrit, R. D., & Schmiesing, R. J. (2005). Volunteer administrators’ perceptions of the importance of and their current levels of competence with selected volunteer management competencies. Journal of Volunteer Administration, 23(2), 4–10.

Safrit, R. D., Schmiesing, R. J., Gliem, J. A., & Gliem, R. R. (2005). Core competencies for volunteer administration: An empirical model bridging theory with professional best practice. Journal of Volunteer Administration, 23(3), 5–15.

Safrit, R. D., Schmiesing, R., King, J. E., Villard, J., & Wells, B. (2003). Assessing the impact of the three-year old Ohio Teen B.R.I.D.G.E.S. program. Journal of Volunteer Administration, 21(2), 12–16.

Schmiesing, R., & Safrit, R. D. (2007). 4-H Youth Development professionals’ perceptions of the importance of and their current level of competence with selected volunteer management competencies. Journal of Extension, 45(3). Retrieved from www.joe.org/joe/2007June/rb1p.shtml

Seel, K. (1995). Managing corporate and employee volunteer programs. In T. D. Connors (Ed.), The volunteer management handbook (pp. 259–289). New York, NY: John Wiley & Sons.

Spaulding, D. T. (2008). Program evaluation in practice: Core concepts and examples for discussion and analysis. San Francisco, CA: Jossey-Bass.

Stenzel, A. K., & Feeney, H. M. (1968). Volunteer training and development: A manual for community groups. New York, NY: Seabury Press.

Stepputat, A. (1995). Administration of volunteer programs. In T. D. Connors (Ed.), The volunteer management handbook (pp. 156–186). New York, NY: John Wiley & Sons.

Stufflebeam, D. L. (1987). The CIPP model for program evaluation. In G. F. Madaus, M. S. Scriven, & D. L. Stufflebeam (Eds.), Evaluation models: Views on educational and human services evaluation (pp. 117–141). Boston, MA: Kluwer-Nijhoff.

Taylor, M. E., & Sumariwalla, R. D. (1993). Evaluating nonprofit effectiveness: Overcoming the barriers. In D. R. Young, R. M. Hollister, & V. A. Hodgkinson (Eds.), Governing, leading, and managing nonprofit organizations (pp. 43–62). San Francisco, CA: Jossey-Bass.

Thomas, R. M. (2003). Blending qualitative & quantitative research methods in theses and dissertations. Thousand Oaks, CA: Corwin Press.

Tyler, R. W. (1949). Basic principles of curriculum and instruction. Chicago, IL: University of Chicago Press.

Wells, B., Safrit, R. D., Schmiesing, R., & Villard, J. (2000, October). The power is in the people! Effectively using focus groups to document impact of volunteer programs. Proceedings of the 2000 International Conference on Volunteer Administration (p. 59). Phoenix, AZ: Association for Volunteer Administration.

Wilson, M. (1979). The effective management of volunteer programs. Boulder, CO: Volunteer Management Associates.

Wilson, M. (1981). Survival skills for managers. Boulder, CO: Volunteer Management Associates.

W.K. Kellogg Foundation. (2000). Logic model development guide. Battle Creek, MI: Author.
