February 2002 Issue
Supplemental Instruction (SI)
Evaluating the SI Program
By Barbara Stout, former SI supervisor, University of Pittsburgh. Email: firstname.lastname@example.org, and
Jeanne Wiatr, SI Supervisor, University of Memphis. Email: Jeannewiatr@yahoo.com
Ongoing evaluation is an important part of any SI program. SI programs across the nation and around the world need it to validate the success of SI. On the local level, supervisors may need to evaluate to keep funding flowing, and they should want to evaluate to make sure their program is in good shape and producing the results they promised professors at initial contact.
At the University of Pittsburgh we evaluated our program in several ways. First-exam grades and final grades gave statistical input, but we also surveyed the students and professors for their opinions of the SI program and leader. Additionally, our session observations and leader meetings, as well as leader debriefs at the end of the term, served as ongoing evaluation as the term progressed.
After the first exam we collected the grades from the instructor. Generally one of the SI supervisors met the instructor in the office or after class to collect the grades. This grade pick-up provided a good opportunity to chat with the instructor and see how everything was going. Some instructors preferred other delivery methods, including campus mail or having the leader hand-carry the grades to the Learning Skills Center in a sealed envelope. After receiving the data, we entered each numerical score in an Excel spreadsheet along with the number of times the student attended SI. In this way we could compare data for non-SI attendees with data for any student who had attended SI at least once. We added a third category, regular attendees, for students who attended three or more sessions. (We used five or more for the final grade, since by then students had had more opportunities to attend SI.)
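The comparison described above can be sketched in a few lines of code. This is only an illustration of the kind of grouping we did in the spreadsheet; the record layout, field names, and sample scores are hypothetical, while the category cutoffs (one or more visits, three or more for regulars) follow the article.

```python
# Hypothetical records pairing each student's first-exam score with the
# number of SI sessions attended; the data is made up for illustration.
records = [
    {"score": 62, "si_visits": 0},
    {"score": 71, "si_visits": 1},
    {"score": 78, "si_visits": 2},
    {"score": 84, "si_visits": 3},
    {"score": 90, "si_visits": 5},
]

def mean_score(rows):
    """Average exam score for a group of students (None if the group is empty)."""
    return sum(r["score"] for r in rows) / len(rows) if rows else None

non_attendees = [r for r in records if r["si_visits"] == 0]
any_attendees = [r for r in records if r["si_visits"] >= 1]  # attended at least once
regulars      = [r for r in records if r["si_visits"] >= 3]  # regular attendees (3+)

print("non-SI mean:  ", mean_score(non_attendees))
print("SI (1+) mean: ", mean_score(any_attendees))
print("regular mean: ", mean_score(regulars))
```

For final grades, the same grouping applies with the regular-attendee cutoff raised to five sessions.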
Once the data was in spreadsheet form, we could see how many students were attending a given SI session as well as how well SI attendees were doing. If we detected irregular attendance patterns, we could intervene to encourage solid attendance. For example, it is better for fewer students to attend many sessions than for many students to attend a single session. If the latter were the case, we might investigate why students weren't coming back after their first session; this would be something to explore further with the SI leader. If very few students were attending many sessions, we might want to investigate which students were represented in the group of regulars. If we found a majority of one level of student (needy or average), we could attempt to attract other students to the sessions to enrich the study experience. This and other valuable information becomes important in planning future sessions and in deciding which courses to continue supporting with SI (rather than tutoring) in the future. We repeated essentially the same process with the final grades, except that we did not have to obtain them from the professors; instead we took them from the university database (with the Provost's approval). Finally, we printed a report, devoid of names, filed a copy, and gave copies to the SI leader and the professor. (See the UMKC Supervisor manual for an example.)
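A simple check like the one below can flag the attendance pattern discussed above, where many students sample SI once and never return. The visit counts and the one-half threshold are assumptions chosen for illustration, not figures from the article.

```python
# Hypothetical per-student SI visit counts for one course section;
# the numbers are made up for illustration.
si_visits = [1, 1, 1, 1, 0, 2, 1, 3, 1, 5]

attended = [v for v in si_visits if v > 0]       # students who came at least once
one_timers = sum(1 for v in attended if v == 1)  # students who came exactly once

# Flag the pattern worth exploring with the SI leader: most attendees
# tried a session once but did not come back.
if attended and one_timers / len(attended) > 0.5:
    print("Most attendees came only once; investigate why students aren't returning.")
```

A supervisor could run this kind of tally each time new attendance data is entered, then raise any flagged pattern at the next leader meeting.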
In addition to grades and attendance, we collected soft data in the form of end-of-term surveys. Near the end of the term, the SI leader asked for a few minutes of class time to pass out a survey to both attendees and non-participants; students who had attended were asked a different set of questions. We eventually got quite sophisticated and had students record their responses on a bubble sheet so we did not have to count them manually. (For more information on the conversion, contact Jeanne.) Even with the bubble sheet, we provided a space for a written response, which generated everything from insightful comments to the ridiculous; we took the written responses with a grain of salt. We also surveyed the professors about their experience with SI, their thoughts on the leader, and whether they would use SI again. As mentioned, continuous observations and the leader debrief (done at the end of each term) provided additional, valuable insights into the SI sessions.
There are many additional things you can do with SI data, too numerous to detail in this article. Please use the UMKC resources, http://www.umkc.edu/cad/si, to discover more. A few articles from that web site that we would suggest are "Does SI Really Work and What Is It Anyway?" by Congos and Schoeps; "Research Concerning the Possible Influence of Student Motivation and Double Exposure to Content Material" by UMKC staff and Kenney; and "A Model for Evaluating Retention Programs" by Congos and Schoeps. Good luck with planning and refining your SI programs, and please don't hesitate to be in touch.