Publication Date


Advisor(s) - Committee Chair

James Craig, Ronald Adams, John O'Connor

Degree Program

Department of Psychology

Degree Type

Master of Arts


In recent years there has been an upsurge in the demand for program accountability. Program evaluation is often the prescribed procedure used to determine a program's effectiveness. During an evaluation, data on the program are gathered by program evaluators; in general, however, these data are not used by program administrators. The purpose of the present investigation was to assess the impact of a procedure termed provisional analysis on the use of evaluation data by program officials. Sixty-five volunteers from graduate courses in education and fifty-two volunteers from undergraduate educational psychology classes were randomly assigned to two groups: one experienced the provisional analysis procedure; the other was exposed to placebo data. Both groups then took part in a simulation of an educational setting in which each participant was placed in the role of a newly appointed high school principal. The participants were given a letter from their superintendent directing them to dismiss four of seven teachers at their school because of a decline in enrollment. Participants were then provided with personal and evaluation data about each teacher. In general, the results reflected no differences between the two groups in the degree to which they used and valued personal and evaluation data. The immediate implication was that provisional analysis, as administered, did not increase the use of evaluation data or the value placed upon it. Limitations of the present study and recommendations for future research were discussed, and several suggestions for altering the provisional analysis procedure were advanced.


Psychology | Social and Behavioral Sciences
