* Timely and actionable; and

* Cost-effective to measure.<ref>Note that in practice, it is not possible to maximize all criteria simultaneously; it is often necessary to trade off performance on some criteria against improvements on other criteria.</ref>

<blockquote style="background-color: lightgrey; border: solid thin grey;">
'''<big>Hint: Developing Indicators—A Short-Hand Approach</big>'''

When developing performance indicators, it is useful to ask the following: What would you see or hear if the expected results you have described are being achieved? The answer to this question should provide strong guidance as to what the indicator of performance should be.

Once you have answered the question, you will want to define the indicator as follows:

# Suggest the measure of change, such as its number, proportion, percentage, rate, ratio, amount, or level.
# Specify who is changing. This may include the population target group, program participant, client, individual, organization, agency, or community. The more specifically you describe the group that should be changing or from whom you expect a reaction, the more precise the indicator.
# Specify what is changing or happening. This may include changes to awareness, attitude, knowledge, skill, aspirations, commitments, behaviours, or practices. It may also include simple reactions such as perception or satisfaction. Note that this may not always be a change per se; sometimes, one is seeking to maintain existing reactions and behaviours.</blockquote>A key principle for developing indicators is that they should always follow the logic of an initiative's expected results. Examples are as follows:
{| class="wikitable"
!Level
!Examples
|-
|Ultimate Outcomes
The highest level outcome that can be reasonably attributed to a policy, program, or initiative and that is the consequence of one or more intermediate outcomes having been achieved. Such outcomes represent a change of state in a target population.
|
* Level of health-related incidence (e.g., incidents per 100,000 of the population)
* Level of environmental degradation (e.g., level of soil erosion, air and water pollution counts)
* Level of economic activity or growth (e.g., percentage of change in the GDP, in direct foreign investment, in employment)
|-
|Intermediate Outcomes
An outcome that is expected to logically occur once one or more immediate outcomes have been achieved.
|
* Number of inspected enterprises found to be in compliance with regulation x
* Level of emergency preparedness (rated as 1: fully up to standard—no improvement needed; 2: partially up to standard—needs minor adjustments; 3: needs major improvements) for the service facilities in region y
* Percentage of inspected enterprises found to be in compliance with sections a, b, and c of Act x
* Percentage of inspected enterprises found to be in non-compliance with sections a, b, and c of Act x and that will move into (full) compliance within one year
|-
|Immediate Outcomes
An outcome that is directly attributable to the outputs of a policy, program, or initiative.
|
* Percentage of target enterprises that attended an information session
* Percentage of information (specific report) users stating that they found the information at least somewhat valuable
* Level (exceeded, met, did not meet) of support for agreed terms of the MOU from the point of view of signatory A
* Number of downloads of safety information from the organization's website
|-
|Outputs
Direct products or services stemming from the activities of an organization, policy, program, or initiative and usually within the control of the organization itself.
|
* Numbers of communications, events, inspections, citations, etc.
* Deliverable produced or achieved (yes/no) within expected time frame
|}
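Several of the indicators in the table above are simple rates computed from inspection or participation records. As an illustrative sketch only (the records and field names below are hypothetical and are not part of the guidance), a compliance-rate indicator could be calculated as follows:

```python
# Hypothetical inspection records; the "compliant" field is illustrative only.
inspections = [
    {"enterprise": "A", "compliant": True},
    {"enterprise": "B", "compliant": False},
    {"enterprise": "C", "compliant": True},
    {"enterprise": "D", "compliant": True},
]

def compliance_rate(records):
    """Percentage of inspected enterprises found to be in compliance."""
    if not records:
        return 0.0
    in_compliance = sum(1 for r in records if r["compliant"])
    return 100.0 * in_compliance / len(records)

print(f"Compliance rate: {compliance_rate(inspections):.1f}%")  # 75.0%
```

The same pattern (a count of qualifying cases over a defined denominator) underlies most percentage- and rate-style indicators; the precision of the indicator depends on how carefully the denominator (who) and the qualifying condition (what) are defined.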

=== Measurement and Reporting ===
After selecting a set of indicators, the next step is to establish an approach for ongoing performance measurement and reporting. It is common to summarize a measurement and reporting strategy in a tabular format. The table would include the following:

* A description of the element being measured (output or outcome);
* The indicator itself;
* A description of the data source for the indicator and the methods used to collect the data;
* A description of how a baseline measurement will be established (existing information or a new baseline based on the first measurement);
* Performance targets for the indicator;
* A description of how often the indicator will be measured; and
* A description of who is responsible for measurement and data collection.

Much of this data should be available in the regulatory proposal's cost-benefit analysis and RIAS. The table could also address all levels of the PMEP's logic model (e.g., output and immediate, intermediate, and ultimate outcomes).

To facilitate the update of the PMF, the following table should be used in the PMEP Template; however, it could be expanded to define other elements beyond what is required to update the PMF.
{| class="wikitable"
!Strategic Outcome / Expected Result / Output
!Performance Indicator
!Data Source
!Frequency of Data Collection
!Target
!Date to Achieve Target
|-
|
|
|
|
|
|
|-
| colspan="6" |
|}
The choice of a data source and collection method will depend on the type of performance indicators and the purpose of the information being gathered. The choice of a data source will also depend on whether the information is being collected on a regular basis (for ongoing monitoring purposes) or periodically (as part of an evaluation). Data can be collected from various sources, as follows:

* Administrative data—Information that is collected through day-to-day activities, such as permit and licensing applications, audits, inspections, and enforcement. Regulatory organizations should be mindful of the administrative burden that data collection may place on businesses and individuals.
* Primary data—Information that is collected through specialized data collection exercises, such as surveys, focus groups, expert panels, or specialized evaluation studies.
* Secondary data—Data that have been collected by other organizations, such as national statistics on health and economic status.

Baseline measurements establish critical reference points from which subsequent changes in indicators can be measured. If reliable historical data on the performance indicator exist, they should be used. Otherwise, it will be necessary to collect baseline data at the first opportunity.

Performance targets consist of projected indicator values for quarterly, semi-annual, or annual performance periods. The target for the regulatory proposal should relate to the analysis (e.g., cost-benefit analysis and risk assessment) that supported the decision to regulate in the first place. Targets can also be set for achieving certain levels of performance in the longer term. Target measurements can be used as interim information about how particular indicators are performing. Such information can also be useful for annual reporting and budgeting exercises. Suggested guidelines for setting targets include the following:

* Setting targets based on previous performance (i.e., the level at which performance is no longer deemed "a problem");
* Setting targets using the performance level achieved by the most successful performance to date;
* Setting targets using the average of past performance levels;
* Setting targets using performance levels achieved by other jurisdictions or by private firms with similar activities;
* Making sure that the targets chosen are feasible given the program's budget, staffing, and anticipated influence; and
* Identifying developments—internal and external—that may affect the program's ability to achieve desired outcomes.
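The first three guidelines above can be made concrete with a small worked example. The figures below are hypothetical compliance rates from past performance periods, used only to show how two of the suggested rules would yield different targets:

```python
# Hypothetical compliance rates (%) from four past performance periods.
past_rates = [68.0, 71.5, 74.0, 72.5]

# Guideline: set the target using the average of past performance levels.
average_target = sum(past_rates) / len(past_rates)

# Guideline: set the target using the most successful performance to date.
best_to_date_target = max(past_rates)

print(f"Average-based target: {average_target:.1f}%")   # 71.5%
print(f"Best-to-date target: {best_to_date_target:.1f}%")  # 74.0%
```

Whichever rule is used, the resulting figure should still be checked against the feasibility and external-developments guidelines before being adopted as the target.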
Regulatory organizations have a responsibility to report to Canadians on an annual basis through the Report on Plans and Priorities (RPP) and the Departmental Performance Report (DPR). This reporting responsibility provides an excellent opportunity to roll up the findings of ongoing performance measurement activities. Other reporting instruments, such as websites, annual reports, and newsletters, are also effective and timely means of communicating progress on a regulatory proposal to Canadians.

=== Evaluation Strategy ===
Regulatory organizations are required to evaluate their regulatory activities in accordance with the time frames and cycle established in the ''[http://www.tbs-sct.gc.ca/pol/doc-eng.aspx?id=15024 Policy on Evaluation]'', which defines evaluation as "the systematic collection and analysis of evidence on the outcomes of programs to make judgments about their relevance, performance and alternative ways to deliver them or to achieve the same results." The evaluation of regulatory activities should be referenced in the annual departmental evaluation plan.

The Evaluation Strategy is a high-level framework that identifies and documents key evaluation questions and the type of data required to address them. The purposes of the Evaluation Strategy are as follows:

* To allow program managers and the Head of Evaluation to ensure that the performance measurement strategy generates sufficient performance data to support the evaluation;
* To allow program managers and the Head of Evaluation to ensure that needed administrative data, in addition to what will be collected through the performance measurement strategy, are available at the time of the evaluation;
* To allow the Head of Evaluation to identify what additional data will need to be collected to support the evaluation; and
* To allow program managers to give the Head of Evaluation advance notice of evaluation commitments that can inform the departmental evaluation plan.

The Evaluation Strategy should include the time frame and responsibilities for developing the evaluation framework and for completing the evaluation. Provided the evaluation addresses the core issues outlined in Annex A of the ''[http://www.tbs-sct.gc.ca/pol/doc-eng.aspx?id=15681&section=text Directive on the Evaluation Function]'', regulatory organizations have the flexibility to determine the evaluation's approach and level of effort in accordance with the impact of the regulatory activities as identified in the Triage Statement.

It is expected that the Evaluation Strategy will comply with the requirements of the Treasury Board ''Policy on Evaluation'' and ''Directive on the Evaluation Function''. A valuable resource when developing the Evaluation Strategy is the ''Guide for the Development of Performance Measurement Strategies and for Ensuring that They Effectively Support Evaluation'' (forthcoming). Regulatory organizations are also encouraged to consult their Head of Evaluation when developing the Evaluation Strategy.

[[File:Pmep-pmre1-eng.gif|alt=This is an example that indicates how the regulatory activities are situated in the PAA. The sample shows multiple levels of program activities, sub-activities and sub-sub-activities and how they relate to one another.|thumb|Strategic Outcome]]

=== Linkage to the Program Activity Architecture ===
Briefly describe where the regulatory activities are situated in the PAA. If the regulatory activities do not yet figure in the PAA, indicate when they are expected to be integrated. An example of how regulatory activities are represented in a PAA follows.

=== Regulatory Affairs Sector Review ===
Before the PMEP is signed off by the regulatory organization's responsible Assistant Deputy Minister, a draft copy of the PMEP must be sent to the Secretariat's Regulatory Affairs Sector portfolio analyst for review to confirm that the PMEP meets CDSR requirements. The Head of Evaluation should have provided comments on the PMEP before this confirmation is sought from the Secretariat's Regulatory Affairs Sector.

=== Assistant Deputy Minister Sign-off ===
The Head of Evaluation must review the PMEP and agree that it effectively supports the conduct of an eventual evaluation. The PMEP is then signed by the Assistant Deputy Minister (or equivalent) responsible for the regulatory proposal.

The approval criteria for the Assistant Deputy Minister (or equivalent) are as follows:

* The scope and detail of the PMEP are commensurate with the impact of the regulatory proposal;
* The PMEP's content is accurate and reflects the design, implementation, and evaluation of the regulatory activities;
* There is commitment to monitor, evaluate, and report on the performance of the regulatory proposal, including implementation of the PMEP, through the use of progress information to support decision making and the conduct of periodic evaluation exercises according to the departmental evaluation plan;
* Accountabilities for delivery, collection, and timely reporting of performance information and for evaluation activities are clear;
* Resources are sufficient to implement the PMEP; and
* The Head of Evaluation has reviewed the PMEP and agrees that it effectively supports the conduct of an eventual evaluation.

All sign-offs on the PMEP must be obtained before final approval of the RIAS by the Secretariat's Regulatory Affairs Sector. The regulatory organization must send two signed copies of the final PMEP to Regulatory Affairs.

=== Departmental Contact ===
Identify the contact person(s) and contact information for enquiries.

== What needs to be in the Regulatory Impact Assessment Statement? ==
A summary (maximum two pages) of the PMEP is to be included in the Performance measurement and evaluation section of the RIAS, consisting of the following key elements:

* A summary of how the regulatory activities connect the inputs and activities to the outputs, target groups, and expected outcomes of the initiative (i.e., a summary of the logic model);
* A description of the indicators through which changes in the outputs and outcomes of the regulatory proposal will be measured;
* A description of how and when the information will be summarized, reported, and used to improve the performance of the regulatory activities;
* An outline of how (i.e., the methodology) and when the regulatory activities will be evaluated; and
* An indication of the PMEP's availability upon request.

== Who can provide advice and assistance and where can you get training? ==
Departmental performance measurement and evaluation specialists can provide guidance on the articulation of key outcomes, results logic, and intended policies; the identification and selection of indicators; and the development of strategies for ongoing monitoring and reporting as well as for evaluation. Specialists involved in the cost-benefit analysis of the regulatory proposal may also be of value to you. Information management and information technology (IM/IT) personnel can also contribute to the PMEP development process by providing expertise on data system design or redesign. They can identify what is already being collected, what would be easy to collect, what mechanisms are already in place to collect data, and what the system implications might be of choosing a certain indicator for regular monitoring.

== Notes ==
<references />