NJ Evaluation Guidelines – Behavioral Program Process & Impact, SWE – 5/22/23
4.0 Evaluability Assessment
Every first evaluation of a program is expected to include a specific evaluability assessment. The
purpose of this activity is to provide early assurance that the data collection and data access can fully
support the needed process and impact evaluations expected of all EE programs in the portfolio for
which savings are claimed. Early investigation is required so any necessary changes in data collection or
procedures can be implemented prior to the next evaluation. The IPEs are expected to verify that all
variables needed to support a Behavioral Program process and impact evaluation, whether drawn from
the program tracking data, billing records, worksheets, or any other source, are being collected,
populated, accessible, and accurate. The product of the evaluability assessment is a clear statement in
the report that the IPE investigated and reviewed the specific types and sources of data needed and
found the data present, accurately collected, available, and populated. The confirmation statement
should list the various types of data (not individual variables) that were verified and confirm that the
data can support Behavioral Program Process and Impact evaluations. If the evaluability assessment
finds the data or processes lacking, the report should provide clear, specific recommendations to
remedy the issue(s).
Note that this evaluability assessment must be repeated in any evaluation in which the data collection,
procedures, or other processes have changed in ways that may affect the data needed to support
Process or Impact evaluations for the program. If no such changes have occurred, the IPE may cite and
repeat the previous evaluability statement in the next evaluation. However, a statement of evaluability
must be included in each evaluation conducted on the program.
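The verification the guidelines describe can be illustrated with a minimal data-completeness check. This is a sketch only: the field names, the sample records, and the 95% population threshold are hypothetical choices for illustration, not requirements drawn from the guidelines.

```python
# Illustrative evaluability check: are required tracking-data fields present
# and sufficiently populated? Field names and threshold are hypothetical.

REQUIRED_FIELDS = ["account_id", "treatment_flag", "cohort_start_date", "monthly_kwh"]

def assess_evaluability(records, required_fields=REQUIRED_FIELDS, min_populated=0.95):
    """Check each required field for presence and population share.

    Returns (results, findings): results maps field -> (present, populated_share);
    findings lists remediation notes for any field that fails the check.
    """
    n = len(records)
    results, findings = {}, []
    for field in required_fields:
        present = all(field in r for r in records)
        populated = (
            sum(1 for r in records if r.get(field) not in (None, "")) / n if n else 0.0
        )
        results[field] = (present, populated)
        if not present or populated < min_populated:
            findings.append(
                f"Field '{field}': present={present}, populated={populated:.0%}; "
                "recommend remediation before the next evaluation."
            )
    return results, findings

# Hypothetical sample: one record is missing its treatment flag value.
sample = [
    {"account_id": 1, "treatment_flag": "T",
     "cohort_start_date": "2023-01-01", "monthly_kwh": 512},
    {"account_id": 2, "treatment_flag": None,
     "cohort_start_date": "2023-01-01", "monthly_kwh": 498},
]
results, findings = assess_evaluability(sample)
```

In practice the IPE would run a check of this kind against each data source (tracking data, billing records, worksheets) and carry any findings forward as the remediation recommendations the report requires.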
5.0 Analysis Methods, Findings, Context, and Forward-Looking
Recommendations Focus
Providing Context/Benchmarking: To support the evaluation recommendations, the reports should
provide clear supporting findings from the research, and from comparisons of these findings with past
research on the NJ programs as well as comparisons to other strong-performing similar programs in
other locations. Therefore, each process and impact evaluation is required to include a chapter within
the report summarizing key results from several other similar programs elsewhere. These other
programs should provide benchmarking information that the NJ programs can use to better put NJ
results in context and potentially identify strong or better practices for the program type. Results from
these programs should be referred to in multiple places in the report, noting where satisfaction,
savings, or other results are higher or lower than the ranges identified in other programs, or where they
have or have not improved compared to previous cohorts of the NJ program.
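The comparison described above can be sketched as a simple classification of an NJ result against the range observed in peer programs. The metric, values, and peer range below are purely illustrative:

```python
# Hypothetical benchmarking sketch: classify an NJ program result against
# the range observed in comparable programs elsewhere. Values are illustrative.

def benchmark(nj_value, peer_range):
    """Return where an NJ result falls relative to a (low, high) peer range."""
    low, high = peer_range
    if nj_value < low:
        return "below peer range"
    if nj_value > high:
        return "above peer range"
    return "within peer range"

# e.g. participant satisfaction on a 1-5 scale vs. an illustrative peer range
print(benchmark(4.2, (3.8, 4.5)))  # prints "within peer range"
```

A report table would typically apply this classification metric by metric (satisfaction, savings, participation) so the reader can see at a glance where NJ results sit relative to the benchmarked programs.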
Analytical Methods and Clarity of Results: For the range of analyses conducted in the report, at a
minimum the following methods and guidelines should be used where relevant: