Teaching Forum - Paper or Plastic:

A Journal of the Scholarship of Teaching and Learning: Sunday, October 26, 2008 Edition

Paper or Plastic:
Evidence from a Sample of Clicker versus Paper Quizzes

By Johnson, Robson, & Van Scyoc

Key Words:  Classroom Response System, Clickers, Technology, Student Performance, Microeconomics

INTRODUCTION

                As educators, we often ask ourselves, "Are our students actually learning from our classroom lectures?"  Most educators feel that student engagement in class fosters better learning and understanding; clearly, not all students feel the same way.  Research shows that classroom methods that actively involve students result in substantially greater learning than pure lecturing, and that teachers usually overestimate the amount of learning that takes place in the classroom (Duncan 2005; Horowitz 1988).  It is well documented that students' attention "fades" during a typical lecture-style university class (Duncan 2005).  This is also true outside of academics.  In a study of IBM managers in a one-week training course, Horowitz (1988) found that the attention of the students diminished rapidly within 20 minutes.  Only about 47% of the students, on average, were paying attention during a standard lecture.  When students were actively engaged with questions, the rate of attention increased to 68%.

            The use of lectures, however, allows instructors to control how much material can be effectively covered in a specific amount of time.  Often, there is considerable pressure to cover a certain amount of material, in order to prepare students for the next level of instruction.  Lecturing is traditionally the way that most instructors ensure that all the "required" material is covered.  Quizzes and homework assignments often form part of this method of instruction delivery.  However, with traditional paper assignments, students must wait for feedback, as instructors cannot afford to waste valuable class time on evaluating questions and recording grades.  The delay in returning course work to students, however, means that students may forget questions or lose interest.  

                Classroom Response Systems (CRS) or "clickers" are increasingly being employed to address the incongruity between large classes and engaging students in the learning process.  (Classroom response systems (CRS) are also referred to as student response systems (SRS), classroom performance systems (CPS), and classroom communication systems (CCS).)  Several studies have looked at how technology, and in particular clicker systems, can make a difference in the classroom atmosphere (Beatty 2004; Bachelder, French, and Lichti 2006; Caruso and Kvavik 2005; Duncan 2005; Everett and Ranker 2002; Horowitz 1988; Ober 1997; Rice and Bunz 2005; Sharma, Khachan, Chan and O'Byrne 2005).  Everett and Ranker (2002) find that clicker use appears to increase active learning by creating more interactive lectures and quizzes, along with providing students with immediate feedback.  They find the major interactive benefit was "a feeling of connecting with classes of traditionally very passive students and the time it makes available for higher-order thinking."  Similarly, Cox and Junkin (2002) find that faculty using clicker systems report greater student engagement in the classroom, high satisfaction, and gains in student learning across the disciplines.

One cautionary point in using clickers, or any other technology for that matter, is to ensure that it does not significantly disadvantage certain groups of students.  Rice and Bunz (2005) found that gender, age, prior computer usage, experience, and computer-email-web fluency do not affect how students evaluated the use of clickers.  They also report that the more exposure students have to the system, and the greater their overall computer-mediated communication competency, the more positive their evaluations.  This implies that as students and instructors become more familiar with the technology, resistance should diminish.

For some students, the cost of the system can be a disadvantage of clickers.  Several different systems and packages are available, but the technology requires a university-level decision to standardize on one system, which in turn provides an opportunity to negotiate reasonable fees.  In this way, clicker systems are subject to economies of scale: the more instructors who use them, the lower the overall cost to the student.

The problem is convincing instructors and students that this technology is worth their while, which brings us to our current study.  We seek to examine student perceptions of engagement and how engagement and class participation influence student performance in Principles of Microeconomics.  In particular, we look at whether clickers can be an effective use of technology in the classroom by examining the effect of immediate feedback.

               

METHOD

To test the effectiveness of the clickers in improving student performance and engagement, each instructor had one section randomly assigned as the control and one section assigned as the treatment (four sections total, two per instructor).  All sections, regardless of instructor, had the same textbook, quizzes and exams.  (Exams were carefully collected in the earlier sections, so as to not influence student performance in later sections.)  Courses were organized to cover the same material on the same days, and instructors used the same grading scale.  In the control section, students completed traditional paper quizzes; in the treatment section, students completed their quizzes using clickers.  There were 14 quizzes total, and students were allowed to drop their two lowest.  The remaining 12 quizzes accounted for 25% of the student's total grade. 

            The clickers used in this study are very much like television remote controls.  Each student had a small handheld device with a panel of numbers and letters; the device transmits radio signals to a classroom receiver that in turn communicates with the classroom computer.  Each student's clicker is individually recognized by the computer, and all student responses are recorded by the system.  The clicker can be used in several different ways:  to survey students, to ask questions, and to randomly call on students.  Questions can be multiple-choice, true-false, matching, or numerical.  Students are asked a question, they enter their responses on their clickers, and the aggregate results are immediately displayed either as a table or a histogram.  This allows students to see not only whether they answered correctly, but also how their responses compare to those of their classmates.  This information can then be used to stimulate class discussion or some type of collaborative learning.  The clickers used in this study are those supported by E-Instruction.  Professor 1 had 51% of his students enrolled in the treatment sections; Professor 2 had 53% of her students enrolled in treatment sections.
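The aggregation step works like a simple tally.  As a purely illustrative sketch (the device identifiers and answer choices below are hypothetical and not drawn from the E-Instruction software), the class-wide distribution could be produced along these lines:

```python
from collections import Counter

# Hypothetical illustration: tally one multiple-choice question's clicker
# responses and report the class-wide distribution, much as the receiver
# software aggregates answers before displaying a table or histogram.
responses = {"clicker_017": "B", "clicker_042": "C", "clicker_108": "B",
             "clicker_213": "B", "clicker_305": "A"}  # device ID -> answer

counts = Counter(responses.values())
total = len(responses)
for choice in "ABCD":
    share = 100 * counts.get(choice, 0) / total
    print(f"{choice}: {counts.get(choice, 0):2d} responses ({share:5.1f}%)")
```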

            Students in the control sections had to wait at least one class period to have their graded quizzes returned.  Instructors would answer specific questions about the quiz, but did not go over the quiz in detail.  Answers to the quiz were made available to students through the university's on-line course learning system, Desire2Learn (a system much like Blackboard).  Students in the treatment sections were required to use clickers to complete their quizzes; after the third week of classes, students were not allowed to take the quizzes without clickers.  In contrast to the traditional quiz format, in the clicker sections students received immediate feedback on their performance on quiz questions.  The process was as follows.  For each question of a 10-question quiz, students were given between one and two minutes to answer.  Once the question timed out, the correct answer was immediately displayed, along with the distribution of student responses.  Depending on the distribution of answers, instructors would then review how to correctly solve each problem, and students would have the opportunity to ask questions.

            The instructors in this study had no prior experience in using clickers. Both instructors used quizzes as incentives for the students to be prepared for and attentive in class.  The clicker technology in this first phase of the study was not fully integrated into the class because we wanted to be able to directly evaluate the impact of this type of immediate feedback on student learning.[1]  As such, both the experimental and control classes predominantly used the traditional lecture mode.  However, while in the control sections, instructors would ask questions and wait for a single student to volunteer to answer, in the treatment sections, all students had the opportunity to answer the question by using their clicker.   

The reason for this design is related to the need to convince more instructors to consider the potential advantages of using clicker technology.  There are two main reasons for wanting to increase the number of classes using the clickers.  The first has to do with increasing student engagement at our university.  We have a goal to improve our scores on the National Survey of Student Engagement (NSSE).  As mentioned earlier, research shows that students who use clicker systems feel they are more engaged in their classes.  The ECAR study reports that 64% of students perceive that they learn more when technology is used in their courses (Caruso and Kvavik 2005).  The second reason has to do with affordability for the students:  the more classes a student takes that use a clicker, the lower the effective cost of the device to that student.

EVALUATION

            Using a survey, data were gathered on students enrolled in four sections of introductory microeconomics taught by two different instructors during the spring 2006 semester.  The four sections each had enrollments of about 45 students.  Students were asked to provide background and demographic information including their gender, race, age, university class status, study habits, attendance patterns, mathematics background, grade point average (GPA), and ACT score.  See Table 1 for a summary.  Our initial sample consists of 177 students who were enrolled in the course at the beginning of the semester.  Re-sampling at the end of the semester yielded 144 responses (81% of registered students).  Two sections, or 88 students, were initially enrolled with Instructor 1; students in the remaining two sections were enrolled with Instructor 2.  The students were primarily sophomores (44%) with a mean GPA of 2.92 and a mean composite ACT score of 23.  The sections were 38% female, and 89% of students classified their race as "white."  Overall, 81% of students were taking the class because it was required for their major.[2]


Table 1:  Summary of Background and Demographic Data

                                         Percent     Mean    Std. Dev.
Female                                    38.42
Male                                      61.58
Age                                                  20.60      4.29
Freshmen                                  24.86
Sophomores                                44.07
Juniors                                   23.16
Seniors                                    6.21
Other                                      1.69
White                                     88.70
Minority                                  11.30
Hours Work per Week                                  13.64     11.55
Hours in Extra Curricular Activities                  4.48      5.74
Weekly Hours Study for all Classes                   10.16      6.96
Course is Required for Major              81.14
Not Required for Major                    18.86
Self-Reported GPA                                     2.92      0.88
Self-Reported ACT Score                              23.07      3.18
First Choice of Sections                  85.23
Not First Choice of Sections              14.77
Enrolled in a "Clicker" Section           51.98

To minimize student non-response, we gave the survey on the first day of class.  The follow-up survey was given on a quiz day, two weeks before the end of the semester.  Students absent on the quiz day were asked to voluntarily fill out the follow-up survey the next class.  Overall, 33 students did not complete the follow-up survey.  Of those 33, five students had dropped the class prior to the end of the semester.  Several additional students declined to fill out the survey, as allowed under IRB guidelines.

Some other students had missing data for several different reasons.  On some survey questions, students occasionally chose an invalid option or left the question blank.  For these students, we replace the missing values with sample mean values in an effort to preserve the sample size.  In total, we are missing information on 19% of the students enrolled in the four sections.    
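As a minimal sketch of this kind of mean imputation (the variable names and values below are hypothetical and not our survey data), the replacement step could look like:

```python
import pandas as pd

# Hypothetical sketch of mean imputation: replace blank or invalid survey
# responses (recorded here as None) with the sample mean so the student's
# record can be kept in the analysis.
survey = pd.DataFrame({
    "gpa":        [3.1, None, 2.4, 3.8],
    "act":        [24, 27, None, 21],
    "hours_work": [10, 15, 12, None],
})

imputed = survey.fillna(survey.mean(numeric_only=True))
print(imputed)
```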

Another concern is that the division of students between the control and treatment sections was not random; students selectively enrolled in sections of microeconomics and students with fewer credits had fewer choices of sections, though students did not know of the experiment in advance of the first day of class.  In an effort to control for this non-random assignment, we collected information as to whether the student was enrolled in his or her first-choice of section.  Overall, 85.23% of students enrolled in their preferred section.  However, this variable has insignificant coefficients and t-statistics in the performance regressions and is thus not included in the final reported results.

In addition to background and demographic questions, the survey asked students a series of questions assessing their views on the merits of class attendance, participation, engagement, and reading the textbook.  Students were re-surveyed at the end of the semester on a number of similar questions.  Students in the clicker sections were also asked several questions specifically relating to the clickers.  Summary statistics for these questions are reported in Table 2.

Table 2.  Student Engagement

                                                    Control      Control      Clicker      Clicker
                                                    Beginning    End          Beginning    End
                                                    (n=85)*      (n=65)*      (n=92)*      (n=79)*

Will attendance help you earn a higher grade?
  Yes                                                83.35        96.92        83.70        92.41
  No                                                 17.65         3.08        16.30         7.60

Will participating help you earn a higher grade?
  Yes                                                87.06        66.15        81.52        63.29
  No                                                 12.94        33.85        18.48        36.71

Should professors require attendance?
  Yes                                                34.12        30.77        39.13        27.85
  No                                                 65.88        69.23        60.87        72.15

Should professors require course participation?
  Yes                                                21.18        18.46        39.13        22.78
  No                                                 78.82        81.54        60.87        77.22

Will reading the chapter before class help you earn a higher grade?
  Yes                                                94.12        83.08        90.22        77.22
  No                                                  5.88        16.92         9.79        22.78

How often did you [plan to miss] or [actually miss] your economics class this semester?
  Fewer than 3 times                                 64.71        69.23        56.52        63.35
  Between 3 and 5 times                              27.06        27.69        34.78        22.78
  Between 5 and 10 times                              8.24         3.08         4.34         7.59
  Between 10 and 15 times                               -            -          2.17         1.27
  I rarely attend, except for exams                     -            -          2.17            -

End of the Semester Questions Only                            Control End (n=65)*     Clicker End (n=79)*

I was engaged in this course.
  Yes                                                               36.92                    35.44
  Somewhat                                                          53.85                    54.43
  No                                                                 9.23                    10.13

This class was boring.
  Yes                                                                7.69                    11.39
  Somewhat                                                          44.62                    40.51
  No                                                                47.69                    48.10

Using clickers helped me pay attention in class.
  Yes                                                                  -                     46.84
  Somewhat                                                             -                     41.77
  No                                                                   -                     11.40

*Responses are reported as the percent of the total.

            Compared to the beginning of the semester, students are statistically significantly more likely at the end of the semester to think that attendance has a positive impact on their grade (p < 0.01).  However, students who have completed introductory microeconomics believe that participation did not help them earn a higher grade.  We also observe drop-offs over the course of the semester in the number of students who believe professors should require attendance or participation, or that reading the textbook before class is helpful.  These differences were marginally statistically significant (p < 0.20).  We hypothesize that these differences may have to do with the preconceived notions students may have about economics.  When these preconceptions are not validated through experience, the students may report that they did not find participation or reading the textbook helpful.  Another possible explanation may relate to students' transition from freshmen to sophomores: as students become both more jaded and more savvy in their studying and course preparation, they may find that certain behaviors are not as beneficial as they thought, or that the opportunity cost of pursuing such behaviors exceeds the payoff.[3]  Certainly, the results are suggestive for further research.
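A simple way to check this kind of shift in survey responses is a two-sample test of proportions.  The sketch below is purely illustrative: the "yes" counts are assumed values roughly consistent with the percentages in Table 2, not our raw data.

```python
from statsmodels.stats.proportion import proportions_ztest

# Illustrative (assumed) counts: students answering "yes" to "Will attendance
# help you earn a higher grade?" at the beginning and end of the semester in
# the control sections.
yes_counts = [70, 63]   # assumed "yes" responses in each survey wave
n_obs      = [85, 65]   # respondents at the beginning and end of the semester

z_stat, p_value = proportions_ztest(yes_counts, n_obs)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```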

In general, we find that students felt they were engaged or somewhat engaged in the course, though no difference was found between the treatment and control sections.  Only 7.69% of control students and 11.39% of treatment students reported that they found the course boring; again, the difference was not statistically significant.  Over 88% of students in the clicker sections answered "yes" or "somewhat" to the statement that using clickers helped them pay attention in class.  We find this result highly encouraging.

The professors observed a difference in students' responses to the quizzes.  In the clicker sections, the immediate feedback led to more and better discussion of the quiz questions than in the control sections.  Students in the control sections were encouraged to ask questions about the quiz, but very few took advantage of the opportunity, while students in the clicker sections asked questions and discussed the quiz results immediately.

However, in addition to the impact of clickers on student motivation, we are also interested in whether clickers can be associated with improved student performance in introductory microeconomics.  In Figure 1, we examine the percentage distribution of the number of questions answered correctly on the three course exams (out of 150 questions).  Students are grouped into two categories:  those in clicker sections (Clicker Series) and those in the control sections (Nonclicker Series).  While inconclusive, it seems that students in the clicker sections performed poorly relative to the nonclicker sections at the lower end of the question distribution, but performed better at the upper end of the distribution.  This may suggest that the clickers are beneficial to students who are already better students.  It might also imply that the clickers helped students who took advantage of the technology, but actually hurt those who did not take it seriously.  Further analysis of the data will be needed to determine why we see this type of distribution.

            A t-test of means suggests that students in the clicker sections scored 4.5 more exam questions correct than students in the nonclicker sections (p = 0.04) over the course of the semester.  ANOVA testing supports this result (F = 2.93, p = 0.08).  The difference varied significantly across professors.  In Professor 1's sections, there were no statistically significant differences between groups.  However, in Professor 2's sections, the clicker students answered, on average, nine more questions correctly than the nonclicker section (p = 0.01).[4]
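For readers who want to reproduce this kind of comparison on their own data, the following is a minimal sketch; the exam scores are simulated under assumed means and an assumed standard deviation chosen for illustration, not the scores from this study.

```python
import numpy as np
from scipy import stats

# Illustrative, simulated exam totals (out of 150) for clicker and
# non-clicker sections; the means, spread, and group sizes are assumptions.
rng = np.random.default_rng(0)
clicker    = rng.normal(loc=104.5, scale=15, size=75)
nonclicker = rng.normal(loc=100.0, scale=15, size=75)

# Two-sample t-test of mean differences, then a one-way ANOVA on the same
# two groups (with only two groups, F is the square of t).
t_stat, t_p = stats.ttest_ind(clicker, nonclicker)
f_stat, f_p = stats.f_oneway(clicker, nonclicker)
print(f"t = {t_stat:.2f} (p = {t_p:.3f}); F = {f_stat:.2f} (p = {f_p:.3f})")
```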

To more carefully explore the relationship between the use of clickers and student performance, we estimate an educational production function, as developed by Allison (1979) and Hanushek (1979).  This model suggests knowledge is produced out of a variety of student motivational and background variables as well as university- and instructor-specific variables.  Our dependent variable, "exam questions correct," is a numerical variable indicating the number of multiple-choice questions a student correctly answered on the three exams over the course of the semester (out of 150 total questions).  We suggest that the "exam questions correct" for each student, i, depends on the student's background (gender, race, university class), the student's level of engagement with the class (attendance, participation, studying), intelligence (GPA, ACT score), a control variable for the instructor, and the method by which the quiz was delivered: paper or plastic.

Exam Questions Correct_i = f(background_i, engagement_i, intelligence_i, instructor_i, quiz method_i)
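As a minimal sketch of how such a production function can be estimated by ordinary least squares (the data frame below is simulated with hypothetical values, and the variable names are ours, chosen to mirror Table 3):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated, purely illustrative student-level data; the study's actual
# data set is not reproduced here.
rng = np.random.default_rng(1)
n = 160
df = pd.DataFrame({
    "female":        rng.integers(0, 2, n),
    "gpa":           rng.uniform(2.0, 4.0, n),
    "act":           rng.integers(18, 31, n),
    "hours_work":    rng.integers(0, 30, n),
    "hours_study":   rng.integers(0, 20, n),
    "minority":      rng.integers(0, 2, n),
    "remedial_math": rng.integers(0, 2, n),
    "sophomore":     rng.integers(0, 2, n),
    "professor":     rng.integers(0, 2, n),
    "clicker":       rng.integers(0, 2, n),
})
# Outcome: total exam questions correct out of 150 (assumed relationship).
df["questions_correct"] = (44 + 12 * df["gpa"] + 0.9 * df["act"]
                           - 7 * df["female"] + 4 * df["clicker"]
                           + rng.normal(0, 12, n)).clip(0, 150)

# OLS estimate of the educational production function.
model = smf.ols(
    "questions_correct ~ female + gpa + act + hours_work + hours_study"
    " + minority + remedial_math + sophomore + professor + clicker",
    data=df,
).fit()
print(model.summary())
```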

            We report the results of the regression analysis in Table 3.  Although the regression equation explains only 31% of the variation in the number of questions a student got correct, it has significant explanatory power as indicated by the F-statistic.  In general, we find that students' gender, GPA, ACT score, and math background significantly affect the number of questions a student answered correctly over the semester in the Principles of Microeconomics course (at the 0.04 level or less).  Females answered roughly 7 fewer questions (or about 5%) correctly than males; unfortunately, this is a common result in the economics literature (Ballard and Johnson 2005; Robson 1993; Walstad and Robson 1997).  Each additional GPA point is associated with more than 12 additional questions answered correctly (about one letter grade, as would be expected).  Further, for each extra point scored on the ACT test (out of 36), students could expect to earn nearly one additional correct response on course exams.  Students who had to take remedial math scored, on average, 8.6 total points lower on exams than students who did not have to take remedial math.  This is also not unexpected (Johnson and Kuennen 2005).  There were no statistically significant differences based on hours worked or studied, whether a student was a minority or a sophomore, or who the student had for an instructor.

Table 3. Regression Analysis of Questions Correct

Variable          Coefficient   Std. Error   t-statistic   P-value   Lower 95% CI   Upper 95% CI
Female               -6.868        2.47        -2.78***      0.01       -11.749         -1.986
GPA                  12.405        2.29         5.41***      0.00         7.873         16.937
ACT                   0.897        0.43         2.09**       0.04         0.048          1.745
Hours Work            0.712        0.10         0.71         0.18        -0.127          0.271
Hours Study          -0.023        0.17        -0.14         0.89        -0.354          0.308
Minority             -0.052        3.63        -0.01         0.99        -7.219          7.115
Remedial Math        -8.656        3.26        -2.65***      0.01       -15.104         -2.209
Sophomore             3.165        2.35         1.34         0.18        -1.485          7.816
Professor             2.574        2.24         1.10         0.27        -2.037          7.186
Clicker               3.758        2.24         1.61*        0.11        -0.856          8.373
Constant             44.458       10.75         4.13         0.00        23.207         65.708

R-squared: 0.31
Number of Observations: 160
F-Statistic: 6.72***

*** Significant at the 0.01 level     ** Significant at the 0.05 level   * Significant at the 0.11 level 

Controlling for other factors, students in the clicker sections scored, on average, nearly 4 questions higher than comparable students in the non-clicker sections.  This result is marginally statistically significant at the 11% level; testing indicates that a larger sample of similar students would generate more robust significance for the clicker variable.    
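One way to gauge how large such a sample might need to be is a standard power calculation; the sketch below treats the estimated clicker effect of roughly 4 questions and an assumed standard deviation of about 15 questions as inputs, both of which are our assumptions for illustration rather than quantities reported above.

```python
from statsmodels.stats.power import TTestIndPower

# Assumed effect size: about a 4-question difference on the 150-question
# exam total, with an assumed standard deviation of roughly 15 questions
# (expressed as Cohen's d).
effect_size = 4 / 15

# Sample size per group for a two-sided test at alpha = 0.05 and 80% power.
n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                          alpha=0.05, power=0.80)
print(f"Roughly {n_per_group:.0f} students per group would be needed")
```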

                       

DISCUSSION

While these results are preliminary and demand further study, we find that the clickers are associated with a marginal improvement in student performance on exam questions.  While the attitudinal data are more mixed, we find that over 88% of students felt that the clickers helped them pay more attention in class.  This is an important result.  One of the challenges facing brick-and-mortar universities is to distinguish themselves from the distance-learning and self-paced learning programs that have mushroomed in recent years.  They can do this by capitalizing on the fact that they are able to bring students and faculty together face to face (Beatty 2004).  But many universities end up offering large classes, even though such classes (1) are plagued by reduced student-student and teacher-student interaction, (2) suffer attendance problems, (3) encourage little student preparation prior to class, (4) fail to account for diverse learning styles, and (5) are associated with lower student grades (Ober 1997).  Clickers may be one way to help remedy these problems.

            Further, this experiment indicates that the clickers do not negatively affect student performance even when the instructors are just learning how to use the technology.  As the research shows, the more exposure and experience both teachers and students have, the better the results from using the clickers.  By the end of the course, both instructors were more comfortable with the technology and felt the students were also learning how to best use the technology to help them prepare for their exams.  This study indicates that more research is needed before we can recommend that clickers be implemented in all of our principles-level courses, since the samples were relatively small and the characteristics of each class varied to some degree.  Anecdotally, one professor commented on how engaged the control section was compared to the clicker section, while the opposite was true in the other instructor's class.  Over the next few semesters we will continue to collect more data, and the instructors will be able to build on their experiences and design better ways of using the technology.  One area that the research does stress is the importance of the instructor helping students to understand the significance of engagement and participation in the learning process and the role of the clickers in facilitating that process.

REFERENCES

Allison, E. (1979). "Educational Production Function for an Introductory Economics Course." In Research on Teaching College Economics, Eds. R. Fels and J. Siegfried. New York, NY: Joint Council on Economic Education: 171-194.

Bachelder, Francoise, Robert French, and Steven Lichti. (2006). "Purdue's System-wide Deployment of a Classroom Response System." Presented at the ECAR (Educause Center for Applied Research) Midwest Regional Conference, Chicago, IL. http://www.educause.edu/LibraryDetailPage/666?ID=MWR0696

Ballard, C.L. and M.F. Johnson. (2005). "Gender, Expectations, and Grades in Microeconomics." Feminist Economics 11 (1): 95-122.

Beatty, I. (2004). "Transforming Student Learning with Classroom Communication Systems." ECAR (Educause Center for Applied Research) Research Bulletin, 3 (February 3): 2-13.

Caruso, Judith B. and Robert B. Kvavik. (2005). "Students and Information Technology, 2005: Convenience, Connection, Control, and Learning Roadmap." ECAR (Educause Center for Applied Research) Research Study, 6. http://www.educause.edu/LibraryDetailPage/666?ID=ECM0506

Cox, A.J. and W.F. Junkin, III. (2002). "Webforms to enhance student learning across the curriculum." http://www.icte.org/T01_Library/T01_229.PDF

Duncan, D. (2005). Clickers in the Classroom: How to Enhance Science Teaching Using Classroom Response Systems. San Francisco: Pearson-Addison Wesley-Benjamin Cummings.

Everett, M.D. and R.A. Ranker. (2002). "Classroom Response System: An evaluation at an easy-access regional university." Paper presented at the 8th Annual University of Kentucky Economics Teaching Workshop, Lexington, KY.

Hanushek, E. (1979). "Conceptual and Empirical Issues in the Estimation of Education Production Functions." Journal of Human Resources 14 (Summer): 351-388.

Horowitz, H.M. (1988). "Student response systems: Interactivity in a classroom environment." http://www.qwizdom.com/software/interactivity_in_classrooms.pdf

Johnson, M. and E. Kuennen. (2005). "On-Line Math Reviews and Performance in Introductory Microeconomics." Journal of Economics and Economic Education Research 6 (3): 1-22.

Ober, D. (1997). "A Student Response System in an Electronic Classroom: Technology Aids for Large Classroom Instruction." The Compleat Learner 2 (November): 4.

Rice, R.E. and U. Bunz. (2005). "Evaluating a Wireless Course Feedback System: The Role of Demographics, Expertise, Fluency, Competency, and Usage." Working Paper.

Robson, D. (1993). "Sex Differences and Measurement Bias in Economic Achievement." Dissertation. University of Nebraska-Lincoln.

Sharma, M.D., J. Khachan, B. Chan, and J. O'Byrne. (2005). "An Investigation of the Effectiveness of Electronic Classroom Communication Systems in Large Lecture Classes." Australasian Journal of Educational Technology 21 (2): 137-154.

Walstad, W.B. and D. Robson. (1997). "Differential Item Functioning and Male-Female Differences on Multiple-Choice Tests in Economics." Journal of Economic Education 28 (2): 155-171.

END NOTES



[1] Using clickers only for quizzing under-utilizes the technology's potential.  However, if we were to completely restructure the course, we would be less able to convince our colleagues that the clickers can be beneficial even with a minimal amount of change to the course.

[2] While we rely on self-reported data, we find that the numbers are highly consistent with university averages.  Statistics reported are also consistent with previous studies done in introductory microeconomics at this university using university-reported GPA and ACT data (Johnson and Kuennen 2005).

[3] A series of ANOVA tests were performed, attempting to ascertain whether there were any patterns in the students who reported that attendance, class participation, and reading the chapter in advance were less important at the end of the semester than at the beginning.  No pattern emerged, as F-tests showed no statistically significant relationships between these questions and GPA, ACT score, student gender, or student age.

[4] While it is good to know that the clickers do not seem to harm student performance, it is important to note that Professor 1 was very ill at the beginning of the semester and as such had a rocky start with the technology.  This may or may not have influenced the effectiveness of the clickers in his section.

Copyright UW System

*Marianne Johnson, Associate Professor, email johnsonm@uwosh.edu; Denise Robson, Associate Professor, email robson@uwosh.edu; Lee Van Scyoc, Associate Professor, email vanscyoc@uwosh.edu. All: Department of Economics, University of Wisconsin Oshkosh, Oshkosh, WI, 54901. We would like to thank our research assistant Beau Buchmann for his valuable assistance. Research was supported by a University of Wisconsin Oshkosh Scholarship of Teaching and Learning Grant, 2005 - 2006. Appropriate IRB approval was granted for this study.