TK Logan and David Royse

A variety of programs have been developed to address social problems such as drug addiction, homelessness, child abuse, domestic violence, illiteracy, and poverty. The goals of these programs may include directly addressing the origin of the problem or moderating the effects of these problems on individuals, families, and communities. Sometimes programs are developed to prevent something from happening, such as drug use, sexual assault, or crime. These kinds of problems, and the programs developed to help people, are often what attracts many social workers to the profession; we want to be part of the mechanism through which society provides assistance to those most in need. Despite low wages, bureaucratic red tape, and routinely uncooperative clients, we tirelessly provide services that are invaluable but that may, at various times, be or become insufficient or inappropriate. But without conducting evaluation, we do not know whether our programs are helping or hurting, that is, whether they only postpone the hunt for real solutions or truly construct new futures for our clients. This chapter provides an overview of program evaluation in general and outlines the primary considerations in designing program evaluations.

Evaluation can be done informally or formally. As consumers, we are constantly informally evaluating products, services, and information. For example, we may choose not to return to a store or an agency if we did not find the experience pleasant. Similarly, we may mentally take note of unsolicited comments or anecdotes from clients and draw conclusions about a program. Anecdotal and informal approaches such as these generally are not regarded as carrying scientific credibility. One reason is that decision biases play a role in our "informal" evaluations: vivid memories and strongly negative or positive anecdotes will be overrepresented in our summary judgments. This is why objective data are necessary to truly understand what is or is not working.

By contrast, formal evaluations systematically examine data from and about programs and their outcomes so that better decisions can be made about the interventions designed to address the related social problem. Thus, program evaluation involves the use of social research methodologies to appraise and improve the ways in which human services, policies, and programs are conducted. Formal evaluation, by its very nature, is applied research.

Formal program evaluations attempt to answer the following general question: Does the program work? Program evaluation may also address questions such as the following: Do our clients get better? How does our success rate compare to those of other programs or agencies? Can the same level of success be obtained through less expensive means? What is the experience of the typical client? Should this program be terminated and its funds applied elsewhere?

Ideally, a thorough program evaluation would address more complex questions in three main areas: (1) Does the program produce the intended outcomes and avoid unintended negative outcomes? (2) For whom does the program work best and under what conditions? and (3) How well was a program model developed in one setting adapted to another setting?

Evaluation has taken an especially prominent role in practice today because of the focus on evidence-based practice in social programs. Social work, as a profession, has been asked to adopt evidence-based practice as an ethical obligation (Kessler, Gira, & Poertner, 2005). Evidence-based practice is defined in different ways, but most definitions include using program evaluation data to help determine best practices in whatever area of social programming is being considered. In other words, evidence-based practice includes using objective indicators of success in addition to practice wisdom or more subjective indicators of success.

Formal program evaluations can be found on just about every topic. For instance, Fraser, Nelson, and Rivard (1997) examined the effectiveness of family preservation services; Kirby, Korpi, Adivi, and Weissman (1997) evaluated an AIDS and pregnancy prevention middle school program. Morrow-Howell, Becker-Kemppainen, and Judy (1998) evaluated an intervention designed to reduce the risk of suicide in elderly adult clients of a crisis hotline. Richter, Snider, and Gorey (1997) used a quasi-experimental design to study the effects of a group work intervention on female survivors of childhood sexual abuse. Leukefeld and colleagues (1998) examined the effects of an HIV prevention intervention with injecting drug and crack users. Logan and colleagues (2004) examined the effects of a drug court intervention as well as the costs of drug court compared with the economic benefits of the program.

Basic Evaluation Considerations

Before beginning a program evaluation, several issues must be considered at the outset. These issues are decisions that are critical in determining the evaluation methodology and goals. Although you may not have complete answers to these questions when beginning to plan an evaluation, they help in developing the plan and must be answered before an evaluation can be carried out. We can sum up these considerations with the following questions: who, what, where, when, and why.

First, who will do the evaluation? This seems like a simple question at first glance. However, this particular consideration has major implications for the evaluation results. Program evaluators can be categorized as either internal or external. An internal evaluator is a program staff member or regular agency employee, whereas an external evaluator is a professional hired on contract for the specific purpose of conducting the evaluation. There are advantages and disadvantages to using either type of evaluator. For example, the internal evaluator probably will be very familiar with the staff and the program, which may save a lot of planning time. The disadvantage is that evaluations completed by an internal evaluator may be considered less valid by outside agencies, including the funding source. The external evaluator generally is thought to be less biased in terms of evaluation outcomes because he or she has no personal investment in the program. One disadvantage is that an external evaluator frequently is viewed as an "outsider" by the staff within an agency. This may increase the time necessary to conduct the evaluation or cause problems in the overall evaluation if agency staff are reluctant to cooperate.


Second, what resources are available to conduct the evaluation? Hiring an outside evaluator can be expensive, while having a staff person conduct the evaluation may be less expensive. So, in a sense, you may be trading credibility for lower cost. In fact, each methodological decision will involve a trade-off among credibility, level of information, and resources (including time and money). Also, the amount and level of information, as well as the research design, will be determined, to some extent, by what resources are available. A comprehensive and rigorous evaluation takes significant resources.

Third, where will the information come from? If an evaluation can be done using existing data, the cost will be lower than if data must be collected from numerous people, such as clients and/or staff, across multiple sites. So having some sense of where the data will come from is important.

Fourth, when is the evaluation information needed? In other words, what is the timeframe for the evaluation? The timeframe will affect the costs and the design of the research methods.

Fifth, why is the evaluation being conducted? Is the evaluation being conducted at the request of the funding source? Is it being conducted to improve services? Is it being conducted to document the cost-benefit trade-off of the program? If future program funding decisions will depend on the results of the evaluation, then a lot more importance will be attached to it than if a new manager simply wants to know whether clients were satisfied with services. The more that is riding on an evaluation, the more attention will be given to the methodology and the more threatened staff can be, especially if they think that the purpose of the evaluation is to downsize and trim excess employees. In other words, there are many reasons an evaluation may be considered, and these reasons have implications for the evaluation methodology and implementation.

Once the issues described above have been considered, more complex questions and trade-offs must be weighed in planning the evaluation. Specifically, six main issues guide and shape the design of any program evaluation effort and must be given thoughtful and deliberate consideration:

1. Defining the goal of the program evaluation

2. Understanding the level of information needed for the program evaluation

3. Determining the methods and analysis that need to be used for the program evaluation

4. Considering issues that might arise and strategies to keep the evaluation on course

5. Developing results into a useful format for the program stakeholders

6. Providing practical and useful feedback about the program's strengths and weaknesses as well as providing information about next steps

Defining the Goal of the Program Evaluation

It is essential that the evaluator has a firm understanding of the short- and long-term objectives of the evaluation. Imagine being hired for a position but not being given a job description or informed about how the job fits into the overall organization. Without knowing why an evaluation is called for or needed, the evaluator might attempt to answer a different set of questions from those of interest to the agency director or advisory board. The management might want to know why the majority of clients do not return after one or two visits, whereas the evaluator might think that his or her task is to determine whether clients who received group therapy sessions were better off than clients who received individual counseling.

In defining the goals of the program evaluation, several steps should be taken. First, the program goals should be examined. These can be learned through examining official program documents as well as through talking to key program stakeholders. In clarifying the overall purpose of the evaluation, it is critical to talk with different program "stakeholders." Scriven (1991) defines a program stakeholder as "one who has a substantial ego, credibility, power, futures, or other capital invested in the program. . . . This includes program staff and many who are not actively involved in the day-to-day operations" (p. 334). Stakeholders include both supporters and opponents of the program as well as program clients or consumers, or even potential consumers or clients. It is essential that the evaluator obtain a variety of different views about the program. By listening to and considering stakeholder perspectives, the evaluator can ascertain the most important aspects of the program to target for the evaluation by looking for overlapping concerns, questions, and comments from the various stakeholders. However, it is important that the stakeholders have some agreement on what program success means. Otherwise, it may be difficult to conduct a satisfactory evaluation.

It is also important to consult the extant literature to understand how similar programs have evaluated their outcomes, as well as to understand the theoretical basis of the program, in defining the program evaluation goals. Furthermore, it is critical that the evaluator work closely with whoever initiated the evaluation to set priorities for the evaluation. This process should identify the intended outcomes of the program and which of those outcomes, if not all of them, will be evaluated. Taking the evaluation a step further, it may be important to include the examination of unintended negative outcomes that may result from the program. Stakeholders and the literature will also help to determine those kinds of outcomes.

Once the overall purpose and priorities of the evaluation are established, it is a good idea to develop a written agreement, especially if the evaluator is an external one. Misunderstandings can and will occur months later if things are not written in black and white.

Understanding the Level of Information Needed for the Program Evaluation

The success of the program evaluation revolves around the evaluator's ability to develop practical, researchable questions. A good rule to follow is to focus the evaluation on one or two key questions. Too many questions can lengthen the process and overwhelm the evaluator with data that, instead of facilitating a decision, might produce inconsistent findings. Sometimes, funding sources require only that some vague, undefined type of evaluation be conducted. These funding sources might neither expect nor desire dissertation-quality research; they simply might expect "good faith" efforts at beginning the evaluation process. Other agencies may be quite demanding in the types and forms of data to be provided. Obviously, the choice of methodology, data collection procedures, and reporting formats will be strongly affected by the purpose, objectives, and questions examined in the study.

It is important to note the difference between general research and evaluation. In research, the investigator often focuses on questions based on theoretical considerations or hypotheses generated to build on research in a specific area of study. Although program evaluations may focus on an intervention derived from a theory, the evaluation questions should, first and foremost, be driven by the program's objectives. The evaluator is less concerned with building on prior literature or contributing to the development of practice theory than with determining whether a program worked in a specific community or location.

There are two main types of evaluation questions. There are questions that focus on client outcomes, such as, "What impact did the program have?" These kinds of questions are addressed by using outcome evaluation methods. Then there are questions that ask, "Did the program achieve its goals?" "Did the program adhere to the specified procedures or standards?" or "What was learned in operating this program?" These kinds of questions are addressed by using process evaluation methods. We will examine both of these evaluation approaches in the following sections.

Process Evaluation

Process evaluations offer a "snapshot" of the program at any given time. Process evaluations typically describe the day-to-day program efforts; program modifications and changes; outside events that influenced the program; people and institutions involved; culture, customs, and traditions that evolved; and the sociodemographic makeup of the clientele (Scarpitti, Inciardi, & Pottieger, 1993). Process evaluation is concerned with identifying program strengths and weaknesses. This level of program evaluation can be useful in several ways, including providing a context within which to interpret program outcomes and enabling other agencies or localities wishing to start similar programs to benefit without having to make the same mistakes.

As an example, Bentelspacher, DeSilva, Goh, and LaRowe (1996) conducted a process evaluation of the cultural compatibility of psychoeducational family group treatment with ethnic Asian clients. As another example, Logan, Williams, Leukefeld, and Minton (2000) conducted a detailed process evaluation of drug court programs before undertaking an outcome evaluation of the same programs. The Logan et al. study used multiple methods to conduct the process evaluation, including in-depth interviews with the program administrative personnel; interviews with each of five judges involved in the program; surveys and face-to-face interviews with 22 randomly selected current clients; and surveys of all program staff, 19 community treatment provider representatives, 6 randomly selected defense attorney representatives, 4 prosecuting attorney representatives, 1 representative from the probation and parole office, 1 representative from the local county jail, and 2 police department representatives. In all, 69 different individuals representing 10 different agency perspectives provided information about the drug court program. Also, all agency documents were examined and analyzed, observations of various aspects of the program process were conducted, and client intake data were analyzed as part of the process evaluation. The results were all integrated and compiled into one comprehensive report.

What makes a process evaluation so important is that researchers often have relied only on selected program outcome indicators, such as termination and graduation rates or number of rearrests, to determine effectiveness. However, to better understand how and why a program such as drug court is effective, an analysis of how the program was conceptualized, implemented, and revised is needed. Consider this example: say one outcome evaluation of a drug court program showed a graduation rate of 80% of those who began the program, while another outcome evaluation found that only 40% of those who began the program graduated. However, the graduates of the second program were more likely to be free from substance use and criminal behaviors at the 12-month follow-up than the graduates of the first program. A process evaluation could help to explain the specific differences in factors such as selection (how clients get into the programs), treatment plans, monitoring, program length, and other program features that may influence how many people graduate and stay free from drugs and criminal behavior at follow-up. In other words, a process evaluation, in contrast to an examination of program outcomes only, can provide a clearer and more comprehensive picture of how drug court affects those involved in the program. More specifically, a process evaluation can provide information about program aspects that need to be improved and those that work well (Scarpitti, Inciardi, & Pottieger, 1993). Finally, a process evaluation may help to facilitate replication of the drug court program in other areas. This often is referred to as technology transfer.

A different but related process evaluation goal might be to describe failures and departures from the way in which the intervention originally was designed. How were the staff trained and hired? Did the intervention depart from the treatment manual recommendations? Influences that shape and affect the intervention that clients receive (e.g., delayed funding or staff hires, changes in policies or procedures) need to be identified because they affect the fidelity of the treatment program. When program implementation deviates significantly from what was intended, this might be the logical explanation as to why a program is not working.

Outcome or Impact Evaluation

Outcome or impact evaluation focuses on the targeted objectives of the program, often looking at variables such as behavior change. For example, many drug treatment programs may measure outcomes, or "success," by the number of clients who abstain from drug use. Questions always arise, though. For instance, an evaluation might reveal that 90% of those who graduate from the program abstain from drug use 30 days after the program was completed. However, only 50% report abstaining from drug use 12 months after the program was completed. Would the key stakeholders all consider that a success or a failure of the program? This example brings up three critical issues in outcome evaluations.

One of the critical issues in outcome evaluations is understanding for whom the program works best and under what conditions. In other words, a more interesting and important question, rather than just asking whether a program works, would be to ask, "Who are the 50% of people who remained abstinent from drug use 12 months after completing the program, and how do they differ from the 50% who relapsed?" It is not unusual for some evaluation questions to need a combination of both process and impact evaluation methodologies. For example, if the results of a particular evaluation showed that the program was not effective (impact), then it might be useful to know why it was not effective (process). In such cases, it would be important to know how the program was implemented, what changes were made in the program during the implementation, what problems were experienced during the implementation, and what was done to overcome those problems.

Another important issue in outcome evaluation has to do with the timing of measuring the outcomes. Outcome effects are usually measured after treatment, or postintervention. These effects may be either short term or long term. Immediate outcomes, or those generally measured at the end of the treatment or intervention, might or might not provide the same results as one would get later in a 6- or 12-month follow-up, as highlighted in the example above.
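To make the timing issue concrete, here is a minimal Python sketch (all client records are invented for illustration) that computes the same abstinence outcome at three follow-up points. Note how the apparent success rate depends entirely on when it is measured:

# Hypothetical follow-up records: (client_id, abstinent at exit,
# abstinent at 6 months, abstinent at 12 months)
clients = [
    (1, True, True, True),
    (2, True, True, False),
    (3, True, False, False),
    (4, True, True, True),
    (5, False, False, False),
]

def rate(position):
    # Proportion of clients abstinent at the given follow-up point
    return sum(1 for c in clients if c[position]) / len(clients)

print(f"Abstinent at program exit: {rate(1):.0%}")  # 80%
print(f"Abstinent at 6 months: {rate(2):.0%}")      # 60%
print(f"Abstinent at 12 months: {rate(3):.0%}")     # 40%

The same program would look highly successful if evaluated only at exit and far less so at 12 months, which is why the follow-up window should be fixed before the evaluation begins.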

The third important issue in outcome evaluation has to do with what specific measures are used. Is abstinence, for example, the only measure of interest, or is reduction in use something that might be of interest? Refraining from criminal activity or holding a steady job may also be an important goal of a substance abuse program. If we measure only abstinence, we will never know about the other kinds of outcomes the program may affect.

These last two issues in outcome evaluations have to do with the evaluation methodology and analysis and are addressed in more detail below.

Determining the Methods and Analysis That Need to Be Used for the Program Evaluation

The next step in the evaluation process is to determine the evaluation design. There are several interrelated steps in this process, including determining the (a) sources of data, (b) research design, (c) measures, (d) analysis of change, and (e) cost-benefit assessment of the program.
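In its simplest form, step (e) reduces to comparing what the program costs with the dollar value of the outcomes it produces. The sketch below uses invented per-client figures solely to show the arithmetic; a real cost-benefit assessment would require carefully justified estimates on both sides:

# Hypothetical per-client figures (invented for illustration)
cost_per_client = 4500.0     # program expenses per client
benefit_per_client = 7200.0  # avoided jail, court, and related costs

ratio = benefit_per_client / cost_per_client
print(f"Benefit-cost ratio: {ratio:.2f}")  # 1.60: each $1 spent returns $1.60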

Sources of Data

Several main sources of data can be used for evaluations, including qualitative information and quantitative information.

Qualitative Data Sources

Qualitative data sources are often used in process evaluations and might include observations, analysis of existing program documents such as policy and procedure manuals, in-depth interview data, or focus group data. There are, however, trade-offs when using qualitative data sources. On the positive side, qualitative evaluation data provide an "in-depth" snapshot of various topics, such as how the program functions, what staff think are the positive or negative aspects of the program, or what clients really think of the overall program experience. Reporting clients' experiences in their own words is a characteristic of qualitative evaluations.

Interviews are good for collecting qualitative or sensitive data such as values and attitudes. This method requires an interview protocol or questionnaire. These usually are structured so that respondents are asked questions in a specific order, but they can be semistructured so that there are fewer fixed topics and the interviewer has the ability to change the order based on a "reading" of the client's responses. Surveys can request information from clients by mail, by telephone, or in person, and they may or may not be self-administered. So, besides considering what data are desired, evaluators must be concerned with pragmatic considerations regarding the best way in which to collect the desired data.

Focus groups also offer insight into certain aspects of the program or program functioning; participants add their input, and that input is interpreted and discussed by other group members. This discussion component may provide an opportunity to uncover information that might otherwise remain undiscovered, such as the meaning of certain things to different people. Focus groups typically are small, informal groups of persons asked a series of questions that start out very general and then become more specific. Focus groups are increasingly being used to provide evaluative information about human services. They work particularly well in identifying the questions that might be important to ask in a survey, in testing planned procedures or the phrasing of items for the specific target population, and in exploring possible reactions to an intervention or a service.


On the other hand, qualitative studies tend to use small samples, and care must be used in analyzing and interpreting the information. Furthermore, although both qualitative and quantitative data are subject to method bias and threats to validity, qualitative data may be more sensitive to bias depending on how participants are selected to be interviewed, the number of observations or focus groups, and even subtleties in the questions asked. With qualitative approaches, the evaluator often has less ability to account for alternative explanations because the data are more limited. Making strong conclusions about representativeness, validity, and reliability is more difficult with qualitative data compared to something like an average rating of satisfaction across respondents (a quantitative measure). Yet, an average rating does not tell us much about why participants are satisfied with the program or why they may be dissatisfied with other aspects of the program. Thus, it is often imperative to use a mixture of qualitative and quantitative information to evaluate a program.
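As a small illustration of that last point, the quantitative summary below (ratings invented, on a 1-5 scale) is easy to compute and compare across programs, but it cannot say why the low ratings occurred; interviews or focus groups would be needed for that:

from statistics import mean, stdev

# Hypothetical client satisfaction ratings, 1 (very dissatisfied)
# to 5 (very satisfied)
ratings = [5, 4, 2, 5, 3, 4, 1, 5, 4, 2]

print(f"Mean satisfaction: {mean(ratings):.2f}")    # 3.50
print(f"Standard deviation: {stdev(ratings):.2f}")  # about 1.43
# The mean summarizes how satisfied clients are on average; it says
# nothing about why the clients who answered 1 or 2 were dissatisfied.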

Quantitative Data Sources

Two main types of quantitative data sources can be used for program evaluations: secondary data and original data.

Secondary Data. One option for obtaining needed data is to use existing data. Collecting new data often is more expensive than using existing data, so examining the data on hand and already available always is a good first step. However, the evaluator might want to rearrange or reassemble the data, for example, dividing it by quarters or combining it into 12-month periods to help reveal patterns and trends over time (a brief sketch of this kind of rearrangement follows the list below). Existing data can come from a variety of places, including the following:

Client records maintained by the program: These may include a host of demographic and service-related data items about the population served.

Program expense and financial data: These can help the evaluator to determine whether one intervention is much more expensive than another.

Agency annual reports: These can be used to identify trends in service delivery and program costs. The evaluator can compare annual reports from year to year and can develop graphs to easily identify trends with clientele and programs.

Databases maintained by the state health department and other state agencies: Public data such as births, deaths, and divorces are available from each state. Furthermore, most state agencies produce annual reports that may reveal the number of clients served by program, geographic region, and, on occasion, selected sociodemographic variables (e.g., race or age).

Local and regional agencies: Planning boards for mental health services, child protection, school boards, and so forth may be able to furnish statistics on outpatient and inpatient services, special school populations, or child abuse cases.

The federal government: The federal government collects and maintains a large amount of data on many different issues and topics. State and national data provide benchmarks for comparing local demographic or social indicators to national-level demographic or social indicators. For instance, if you were working as a cancer educator whose objective is to reduce the incidence of breast cancer, you might want to consult cancercontrolplanet.cancer.gov. That Web site will furnish national-, state-, and county-level data on the number of new cancer cases and deaths. By comparison, it will be possible to determine whether the rate in one county is higher than the state or national average. Demographic information about communities can be found at www.census.gov.

Foundations: Certain well-established foundations provide a wealth of information about social problems. For example, the Annie E. Casey Foundation publishes an incredible Kids Count Data Book that provides an abundance of child welfare-related data at the state, national, and county level. By using their data, you could determine whether infant mortality rates were rising, teen births were increasing, or high school dropouts were decreasing. You can find the Web site at www.aecf.org.
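As promised above, here is a minimal sketch of rearranging existing records to reveal trends. It assumes client intake records have been exported to a CSV file; the file and column names are hypothetical, and the example uses the pandas library:

import pandas as pd

# Hypothetical export of client records with one intake date per row
records = pd.read_csv("client_intakes.csv", parse_dates=["intake_date"])
dated = records.set_index("intake_date")

# New clients per quarter: trends are easier to spot here than in the
# raw record-level data (use "Q" and "A" instead on pandas < 2.2)
print(dated.resample("QE").size())

# The same records combined into 12-month periods smooth out seasonality
print(dated.resample("YE").size())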

If existing data cannot be used or cannot answer all of the evaluation questions, then original data must be collected.

Original Data Sources. There are many types of evaluation designs from which to choose, and no single one will be ideal for every project. The specific approach chosen for the evaluation will depend on the purpose of the evaluation, the research questions to be explored, the hoped-for or intended results, the quality and volume of data available or needed, and staff, time, and financial resources.

The evaluation design is a critical decision for a number of reasons. Without the appropriate evaluation design, confidence in the results of the evaluation might be lacking. A strong evaluation design minimizes alternative explanations and assists the evaluator in gauging the true effects attributable to the intervention. In other words, the evaluation design directly affects the interpretation that can be made regarding whether an intervention should be viewed as the reason for change in clients' behavior. However, there are trade-offs with each design in the credibility of information, the causal interpretation of any observed changes, and resources. These trade-offs must be carefully considered and discussed with the program staff.

Quantitative designs include surveys, pretest-posttest studies, quasi-experiments with nonequivalent control groups, longitudinal designs, and randomized experimental designs. Quantitative approaches transform answers to specific questions into numerical data. Outcome and impact evaluations nearly always are based on quantitative evaluation designs. Also, sampling strategies must be considered as an integral part of the research design. Below is a brief overview of the major types of quantitative evaluation designs. For an expanded discussion of these topics, refer to Royse, Thyer, Padgett, and Logan (2005).
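As one concrete illustration, the pretest-posttest study named above is often analyzed with a paired t-test: the same clients are measured before and after the intervention, and the test asks whether the average change is larger than chance alone would explain. The scores below are invented, and this is only a sketch of the analysis, not a complete design:

from scipy import stats

# Hypothetical symptom scores for eight clients (lower = fewer symptoms)
pretest = [22, 31, 18, 27, 25, 30, 19, 24]
posttest = [18, 25, 17, 20, 24, 22, 15, 21]

t, p = stats.ttest_rel(pretest, posttest)
print(f"t = {t:.2f}, p = {p:.4f}")
# A small p-value suggests real average change, but without a comparison
# group this design cannot rule out alternative explanations such as
# maturation or regression to the mean, which is why the stronger
# designs listed above exist.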
