Preface | p. xiii |
Introduction to Evaluation | p. 1 |
Evaluation's Basic Purpose, Uses, and Conceptual Distinctions | p. 3 |
A Brief Definition of Evaluation | p. 4 |
Informal versus Formal Evaluation | p. 8 |
Distinguishing between Evaluation's Purposes and Evaluators' Roles and Activities | p. 9 |
Some Basic Types of Evaluation | p. 16 |
Evaluation's Importance--and Its Limitations | p. 26 |
Origins and Current Trends in Modern Program Evaluation | p. 30 |
The History and Influence of Evaluation in Society | p. 30 |
Recent Trends Influencing Program Evaluation | p. 44 |
Alternative Approaches to Program Evaluation | p. 53 |
Alternative Views of Evaluation | p. 57 |
Diverse Conceptions of Program Evaluation | p. 58 |
Origins of Alternative Views of Evaluation | p. 59 |
Themes among the Variations | p. 67 |
A Classification Schema for Evaluation Approaches | p. 68 |
Objectives-Oriented Evaluation Approaches | p. 71 |
Developers of the Objectives-Oriented Evaluation Approach and Their Contributions | p. 72 |
How the Objectives-Oriented Evaluation Approach Has Been Used | p. 80 |
Strengths and Limitations of the Objectives-Oriented Evaluation Approach | p. 82 |
Management-Oriented Evaluation Approaches | p. 88 |
Developers of the Management-Oriented Evaluation Approach and Their Contributions | p. 89 |
How the Management-Oriented Evaluation Approach Has Been Used | p. 94 |
Strengths and Limitations of the Management-Oriented Evaluation Approach | p. 95 |
Consumer-Oriented Evaluation Approaches | p. 100 |
Developers of the Consumer-Oriented Evaluation Approach and Their Contributions | p. 101 |
How the Consumer-Oriented Evaluation Approach Has Been Used | p. 104 |
Strengths and Limitations of the Consumer-Oriented Evaluation Approach | p. 108 |
Expertise-Oriented Evaluation Approaches | p. 112 |
Developers of the Expertise-Oriented Evaluation Approach and Their Contributions | p. 114 |
How the Expertise-Oriented Evaluation Approach Has Been Used | p. 121 |
Strengths and Limitations of the Expertise-Oriented Evaluation Approach | p. 123 |
Participant-Oriented Evaluation Approaches | p. 129 |
Evolution of Participant-Oriented Evaluation Approaches | p. 130 |
Developers of the Participant-Oriented Evaluation Approach and Their Contributions | p. 131 |
How Participant-Oriented Evaluation Approaches Have Been Used | p. 145 |
Strengths and Limitations of Participant-Oriented Evaluation Approaches | p. 146 |
Alternative Evaluation Approaches: A Summary and Comparative Analysis | p. 152 |
Cautions about the Alternative Evaluation Approaches | p. 153 |
Contributions of the Alternative Evaluation Approaches | p. 158 |
Comparative Analysis of Characteristics of Alternative Evaluation Approaches | p. 159 |
Eclectic Uses of the Alternative Evaluation Approaches | p. 163 |
Drawing Practical Implications from the Alternative Evaluation Approaches | p. 165 |
Practical Guidelines for Planning Evaluations | p. 169 |
Introduction of Case Study | p. 170 |
Clarifying the Evaluation Request and Responsibilities | p. 173 |
Understanding the Reasons for Initiating the Evaluation | p. 174 |
Conditions under which Evaluation Studies Are Inappropriate | p. 178 |
Determining When an Evaluation Is Appropriate: Evaluability Assessment | p. 182 |
Using an Internal or External Evaluator | p. 185 |
Hiring an Evaluator | p. 189 |
How Different Evaluation Approaches Clarify the Evaluation Request and Responsibilities | p. 192 |
Setting Boundaries and Analyzing the Evaluation Context | p. 199 |
Identifying Intended Audiences for an Evaluation | p. 200 |
Describing What Is to Be Evaluated: Setting the Boundaries | p. 203 |
Analyzing the Resources and Capabilities That Can Be Committed to the Evaluation | p. 212 |
Analyzing the Political Context for the Evaluation | p. 216 |
Variations Caused by the Evaluation Approach Used | p. 217 |
Determining Whether to Proceed with the Evaluation | p. 219 |
Identifying and Selecting the Evaluation Questions and Criteria | p. 232 |
Identifying Appropriate Sources of Questions and Criteria: The Divergent Phase | p. 234 |
Selecting the Questions, Criteria, and Issues to Be Addressed: The Convergent Phase | p. 246 |
Remaining Flexible during the Evaluation: Allowing New Questions, Criteria, and Standards to Emerge | p. 253 |
Planning How to Conduct the Evaluation | p. 260 |
Identifying Design and Data Collection Methods | p. 262 |
Specifying How the Evaluation Will Be Conducted: The Management Plan | p. 275 |
Establishing Evaluation Agreements and Contracts | p. 285 |
Practical Guidelines for Conducting and Using Evaluations | p. 301 |
Collecting Evaluation Information: Design, Sampling, and Cost Choices | p. 303 |
Using Mixed Methods | p. 304 |
Designs for Collecting Causal and Descriptive Information | p. 307 |
Sampling | p. 320 |
Cost Analysis | p. 324 |
Collecting Evaluation Information: Data Sources and Methods, Analysis, and Interpretation | p. 334 |
Common Sources and Methods for Collecting Information | p. 335 |
Planning and Organizing the Collection of Information | p. 356 |
Analysis of Data and Interpretation of Findings | p. 358 |
Reporting and Using Evaluation Information | p. 375 |
Purposes of Evaluation Reports | p. 376 |
Important Factors in Planning Evaluation Reports | p. 377 |
Key Components of a Written Report | p. 382 |
Suggestions for Presenting Information in Written Reports | p. 388 |
Alternative Methods for Reporting: The Adversary Approach | p. 394 |
Human and Humane Considerations in Reporting Evaluation Findings | p. 395 |
Suggestions for Effective Oral Reporting | p. 398 |
A Checklist for Good Evaluation Reports | p. 400 |
How Evaluation Information Is Used | p. 400 |
Dealing with Political, Ethical, and Interpersonal Aspects of Evaluation | p. 411 |
Establishing and Maintaining Good Communications among Evaluators and Stakeholders | p. 412 |
Understanding Potential Bias Resulting from the Evaluator's Personal Values and Interpersonal, Financial, and Organizational Relationships with Others | p. 415 |
Maintaining Ethical Standards: Considerations, Issues, and Responsibilities for Evaluators and Clients | p. 423 |
Political Pressures and Problems in Evaluation | p. 432 |
Evaluating Evaluations | p. 442 |
The Concept and Evolution of Metaevaluation | p. 443 |
The Joint Committee's Standards for Program Evaluation | p. 444 |
Summary of the Program Evaluation Standards | p. 445 |
AEA Guiding Principles for Evaluators | p. 449 |
The Role of Metaevaluator | p. 451 |
Some General Guidelines for Conducting Metaevaluations | p. 453 |
A Need for More Metaevaluation | p. 455 |
Emerging and Future Settings for Program Evaluation | p. 461 |
Conducting Multiple-Site Evaluation Studies | p. 463 |
Purposes and Characteristics of Multiple-Site Evaluations | p. 464 |
Multisite Evaluation (MSE) | p. 466 |
On-Site Evaluation at Multiple Sites | p. 471 |
Cluster Evaluation | p. 475 |
Other Approaches to Multiple-Site Evaluation | p. 481 |
Conducting Evaluation of Organizations' Renewal and Training in Corporate and Nonprofit Settings | p. 485 |
Evaluation in the Nonprofit Sector | p. 486 |
Evaluating Corporate Training Programs | p. 491 |
Personnel Evaluation | p. 495 |
Other Methods of Organizational Assessment | p. 497 |
The Future of Evaluation | p. 507 |
The Future of Evaluation | p. 508 |
Predictions concerning the Profession of Evaluation | p. 508 |
Predictions concerning the Practice of Evaluation | p. 510 |
A Vision for Evaluation | p. 513 |
Conclusion | p. 513 |
Suggested Readings | p. 514 |
Evaluation-Related Web Sites | p. 515 |
References | p. 519 |
Author Index | p. 543 |
Subject Index | p. 551 |