
The Investigation of Difference between PPT and CBT Results of EFL Learners in Iran: Computer Familiarity and Test Performance in CBT



The purpose of this study is to examine the score comparability of institutional English reading tests in two testing modes, i.e., paper-based and computer-based tests taken by Iranian EFL learners in four language institutes and their branches in Iran. The researcher examined whether there is any difference between the computer-based test (henceforth CBT) and paper-based test (PPT) results of a reading comprehension test, and explored the relationship between students' prior computer experience and their test performance in CBT. Two equivalent tests were administered to one group of EFL learners on two different occasions, one in paper-based format and the other in computer-based format. Using a t-test, the means of the two modes were compared; the results showed an advantage for PPT over CBT, with a difference of .01, significant at p < .05. Using ANOVA, the findings revealed that computer experience had no significant influence on the students' performance in the computerized test.
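The mode comparison described above relies on a t-test over scores from the same learners under each format, i.e., a paired-samples design. As a minimal illustrative sketch (the scores below are hypothetical, not the study's data, and the function name is an assumption), the statistic can be computed with the standard library alone:

```python
import math
from statistics import mean, stdev

def paired_t(x, y):
    """Paired-samples t statistic for two equal-length score lists.

    Returns the t value and its degrees of freedom (n - 1).
    """
    diffs = [a - b for a, b in zip(x, y)]          # per-learner score differences
    n = len(diffs)
    se = stdev(diffs) / math.sqrt(n)               # standard error of the mean difference
    return mean(diffs) / se, n - 1

# Hypothetical reading scores for the same eight learners under each mode
ppt = [18, 20, 17, 22, 19, 21, 16, 23]
cbt = [17, 19, 17, 20, 18, 20, 15, 21]

t, df = paired_t(ppt, cbt)
print(f"t({df}) = {t:.2f}")
```

The resulting t value would then be compared against the critical value for df degrees of freedom at the chosen alpha (here, .05) to decide whether the mode difference is significant.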


International Letters of Social and Humanistic Sciences (Volume 11)
M. Hosseini et al., "The Investigation of Difference between PPT and CBT Results of EFL Learners in Iran: Computer Familiarity and Test Performance in CBT", International Letters of Social and Humanistic Sciences, Vol. 11, pp. 66-75, 2013
Online since:
Sep 2013

Received 03 September 2013; accepted 08 September 2013.
