Program evaluation in government

The manual, based on the Centers for Disease Control and Prevention (CDC) Framework for Program Evaluation in Public Health, is intended to assist managers and staff of public, private, and community public health programs in planning, designing, implementing, and using comprehensive evaluations in a practical way.

Nationally recognized experts present a general introduction to program evaluation; note challenges in conducting useful evaluations, as well as methods for overcoming those challenges; and introduce the six steps of the CDC Framework for Program Evaluation using examples relevant to state partners of the National Asthma Control Program. The Program Manager's Guide to Evaluation, developed by the Administration for Children and Families, explains program evaluation: what it is, how to understand it, and how to do it.

It answers common questions about evaluation and explains how to use evaluation to improve programs and benefit staff and stakeholders. Performance Measurement and Evaluation: Definitions and Relationships, from the Government Accountability Office, describes the relationship between evaluation and performance measurement. The Program Evaluation Glossary reflects the meaning of evaluation terminology from both the social scientist's and the practitioner's points of view.

GAO previously reported that involving agency staff in planning and conducting evaluations helps to ensure that evaluations are relevant, credible, and used in decision making. GAO and OMB have noted the importance of developing an evaluation plan or agenda to ensure that an agency's scarce research and evaluation resources are targeted to its most important issues. While 28 percent of managers with evaluations rated consultation with stakeholders as highly effective in facilitating use, another 22 percent reported having no basis to judge.

GAO previously noted that little is known about the extent of agency consultation with the Congress. Utility standards are intended to ensure that the evaluation will meet the information needs of intended users. Involving stakeholders, using credible evaluation methods, asking pertinent questions, including stakeholder perspectives, and providing clear and timely evaluation reports all reflect attention to utility standards. The scope of the information collected should ensure that the data give stakeholders sufficient information to make decisions regarding the program.

Accuracy standards are intended to ensure that evaluation reports use valid methods and are transparent in describing those methods. Meeting accuracy standards might, for example, include using mixed methods, such as combining quantitative and qualitative data. Both identify the need to be pragmatic and to serve intended users, with the goal of determining the effectiveness of a program.

Evaluation can be classified into five types by intended use: formative, process, summative, outcome, and impact. Evaluation may supplement surveillance data by providing tailored information to answer specific questions about a program. Data from specific questions for an evaluation are more flexible than surveillance data and may allow program areas to be assessed in greater depth.

Evaluators can also use qualitative methods, such as interviews or focus groups. Both research and program evaluation make important contributions to the body of knowledge, but fundamental differences in the purpose of research and the purpose of evaluation mean that good program evaluation need not always follow an academic research model.

Research is generally thought of as requiring a controlled environment or control groups. In field settings directed at prevention and control of a public health problem, this is seldom realistic. Of the ten concepts contrasted in the table, the last three are especially worth noting. Unlike pure academic research models, program evaluation acknowledges and incorporates differences in values and perspectives from the start, may address many questions besides attribution, and tends to produce results for varied audiences.

Program staff may be pushed to do evaluation by external mandates from funders, authorizers, or others, or they may be pulled to do evaluation by an internal need to determine how the program is performing and what can be improved. While push or pull can motivate a program to conduct good evaluations, program evaluation efforts are more likely to be sustained when staff see the results as useful information that can help them do their jobs better. Data gathered during evaluation enable managers and staff to create the best possible programs, to learn from mistakes, to make modifications as needed, to monitor progress toward program goals, and to judge the success of the program in achieving its short-term, intermediate, and long-term outcomes.

Most public health programs aim to change behavior in one or more target groups and to create an environment that reinforces sustained adoption of these changes, with the intention that changes in environments and behaviors will prevent and control diseases and injuries. Through evaluation, you can track these changes and, with careful evaluation designs, assess the effectiveness and impact of a particular program, intervention, or strategy in producing these changes.

The Working Group prepared a set of conclusions and related recommendations to guide policymakers and practitioners. Program evaluation is one of ten essential public health services [8] and a critical organizational practice in public health.

The underlying logic of the Evaluation Framework is that good evaluation does not merely gather accurate evidence and draw valid conclusions, but produces results that are used to make a difference. You determine the market for evaluation results by focusing evaluations on the questions that are most salient, relevant, and important. You ensure the best evaluation focus by understanding where those questions fit into the full landscape of your program description, and especially by ensuring that you have identified and engaged stakeholders who care about these questions and want to take action on the results.

The steps in the CDC Framework are informed by a set of standards for evaluation. The 30 standards cluster into four groups. Utility: Who needs the evaluation results? Will the evaluation provide relevant information in a timely manner for them? Feasibility: Are the planned evaluation activities realistic given the time, resources, and expertise at hand? Propriety: Does the evaluation protect the rights of individuals and the welfare of those involved? Does it engage those most directly affected by the program and by changes in the program, such as participants or the surrounding community? Accuracy: Will the evaluation produce findings that are valid and reliable, given the needs of those who will use the results?

Sometimes the standards broaden your exploration of choices; often, they help reduce the options at each step to a manageable number. For example, in engaging stakeholders (Step 1): Feasibility: How much time and effort can be devoted to stakeholder engagement? Propriety: To be ethical, which stakeholders need to be consulted? Those served by the program, or the community in which it operates? Accuracy: How broadly do you need to engage stakeholders to paint an accurate picture of this program? Similarly, there are unlimited ways to gather credible evidence (Step 4).

Asking these same kinds of questions as you approach evidence gathering will help you identify the approaches that will be most useful, feasible, proper, and accurate for this evaluation at this time. Thus, the CDC Framework approach supports the fundamental insight that there is no such thing as the right program evaluation.
