EDUCATIONAL TECHNOLOGY (TURKISH, THESIS)
Master | TR-NQF-HE: Level 7 | QF-EHEA: Second Cycle | EQF-LLL: Level 7

Course Introduction and Application Information

Course Code | Course Name | Semester | Theoretical | Practical | Credit | ECTS
EDE5124 | Introduction to Item Response Theory (IRT) | Fall | 3 | 0 | 3 | 8
This catalog is for informational purposes. Course status is determined by the relevant department at the beginning of each semester.

Basic Information

Language of instruction: Turkish
Type of course: Departmental Elective
Course Level: Master's (Second Cycle)
Mode of Delivery: Face to face
Course Coordinator: Asst. Prof. NİHAL YURTSEVEN
Recommended Optional Program Components: -
Course Objectives: IRT involves modeling subjects' responses to individual items, in contrast to Classical Test Theory, which models test scores on complete test forms. IRT offers substantial advantages for many technical problems that arise in creating and using tests, including test design, test equating, assessment of item and test bias, and test scoring. IRT models have the further advantage that person estimates are invariant to the collection or sample of items in a particular test, and that item parameter estimates are invariant to the sample of subjects used in test calibration.
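As a concrete illustration of modeling responses to individual items, one widely used model of the kind treated in this course, the two-parameter logistic (2PL) model, gives the probability that a subject with ability \theta answers item j correctly as

    P_j(\theta) = \frac{1}{1 + \exp[-a_j(\theta - b_j)]}

where a_j is the item's discrimination and b_j its difficulty. The invariance properties noted above are model-based: under the model, the item parameters do not depend on the ability distribution of the calibration sample, and ability estimates do not depend on the particular items administered (up to the usual indeterminacy of scale).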

Learning Outcomes

Students who successfully complete this course will be familiar with various IRT models and will be able to interpret and apply these models appropriately.

Course Content

The course will begin with a presentation of popular models, their estimation, and their proper interpretation, and will then reinforce these lessons throughout the semester with examples and applications using educational test data.
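As a minimal sketch of the kind of educational test data such applications use (illustrative only; it assumes only numpy, and the sample sizes and parameter values are hypothetical), the following simulates dichotomous responses under the 2PL model shown above:

    import numpy as np

    rng = np.random.default_rng(seed=0)

    # Simulate test data under the 2PL model:
    # 500 examinees answering 10 dichotomously scored items.
    n_persons, n_items = 500, 10
    theta = rng.normal(0.0, 1.0, size=n_persons)   # latent abilities
    a = rng.uniform(0.8, 2.0, size=n_items)        # item discriminations (a_j)
    b = rng.normal(0.0, 1.0, size=n_items)         # item difficulties (b_j)

    # P(correct) = 1 / (1 + exp(-a_j * (theta - b_j))) for each person-item pair
    logits = a[np.newaxis, :] * (theta[:, np.newaxis] - b[np.newaxis, :])
    prob_correct = 1.0 / (1.0 + np.exp(-logits))

    # Draw 0/1 responses; a matrix like this is the raw input to IRT calibration.
    responses = (rng.random(size=(n_persons, n_items)) < prob_correct).astype(int)

    # Observed proportion correct per item falls as IRT difficulty b_j rises.
    print(np.round(responses.mean(axis=0), 2))
    print(np.round(b, 2))

Response matrices of this form are what dichotomous calibration programs such as BILOG-MG (week 7) take as input.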

Weekly Detailed Course Contents

Week Subject Related Preparation
1) Basic IRT Concepts, Models, and Assumptions. -
2) Model Specifications and Scale Characteristics -
3) Ability and Item Parameter Estimation -
4) Item, Test Information Functions -
5) Assessment of Model-data Fit -
6) Test construction, item banking using IRT -
7) IRT Software: BILOG-MG -
8) Midterm Exam -
9) Differential item functioning -
10) Polytomous response models 1 -
11) Polytomous response models 2 -
12) IRT Software: PARSCALE -
13) Computer-based test designs (CAT, MST) -
14) Other models: the nominal response model (NRM), multidimensional IRT (MIRT), the hierarchical rater model (HRM), and testlet response theory (TRT) -
15) Presentations -

Sources

Course Notes / Textbooks: Applied Psychological Measurement (Special Issue), Advances in item response theory and applications. Fall 1982. (Includes eight papers.)
Applied Psychological Measurement (Special Issue), Polytomous item response theory. Spring 1995. (Includes seven papers.)
References: Baker, F. B. (1992). Item response theory: Parameter estimation techniques. New York: Marcel Dekker.
Bock, R. D., & Aitkin, M. (1981). Marginal maximum likelihood estimation of item parameters: Application of an EM algorithm. Psychometrika, 46, 443-459.
Engelhard, G., Jr. (1994). Examining rater errors in the assessment of written composition with a many-faceted Rasch model. Journal of Educational Measurement, 31, 93-112.
Hambleton, R. K., & Swaminathan, H. (1985). Item response theory: Principles and applications. Boston: Kluwer Nijhoff Publishing.
Holland, P. W., & Hoskens, M. (2003). Classical test theory as a first-order item response theory: Applications to true-score prediction from a possibly nonparallel test. Psychometrika, 68, 123-149.
Holland, P. W., & Thayer, D. T. (1988). Differential item performance and the Mantel-Haenszel procedure. In H. Wainer & H. I. Braun (Eds.), Test validity (pp. 129-145). Hillsdale, NJ: Erlbaum.
Journal of Educational Measurement (Special Issue), Applications of latent trait models. Summer 1977. (Includes six papers.)
Junker, B. W., & Sijtsma, K. (2001). Nonparametric item response theory in action: An overview of the special issue. Applied Psychological Measurement, 25, 211-220.
Linn, R. L. (1990). Has item response theory increased the validity of achievement test scores? Applied Measurement in Education, 3, 115-141.
Livingston, S. A., & Lewis, C. (1995). Estimating the consistency and accuracy of classifications based on test scores. Journal of Educational Measurement, 32, 179-197.
Lord, F. M. (1980). Applications of item response theory to practical testing problems. Hillsdale, NJ: Lawrence Erlbaum Associates.
Lord, F. M., & Novick, M. R. (1968). Statistical theories of mental test scores. Reading, MA: Addison-Wesley.
Mislevy, R. J. (1986a). Bayes modal estimation in item response models. Psychometrika, 51, 177-195.
Mislevy, R. J. (1986b). Recent developments in the factor analysis of categorical variables. Journal of Educational Statistics, 11, 3-31.
Muraki, E., & Carlson, J. E. (1995). Full-information factor analysis for polytomous item responses. Applied Psychological Measurement, 19, 73-90.
Rasch, G. (1980). Probabilistic models for some intelligence and attainment tests. Chicago: University of Chicago Press.
Reckase, M. D. (1985). The difficulty of test items that measure more than one ability. Applied Psychological Measurement, 9, 401-412.
Thissen, D., & Steinberg, L. (1986). A taxonomy of item response models. Psychometrika, 51, 567-577.
Thissen, D., Steinberg, L., & Mooney, J. (1989). Trace lines for testlets: A use of multiple-categorical response models. Journal of Educational Measurement, 26, 247-260.
Thissen, D., Steinberg, L., & Wainer, H. (1988). Use of item response theory in the study of group differences in trace lines. In H. Wainer & H. I. Braun (Eds.), Test validity (pp. 147-169). Hillsdale, NJ: Erlbaum.
van der Linden, W. J., & Hambleton, R. K. (Eds.) (1997). Handbook of modern item response theory. New York: Springer.
Wright, B. D. (1977). Solving measurement problems with the Rasch model. Journal of Educational Measurement, 14, 97-116.

Evaluation System

Semester Requirements | Number of Activities | Level of Contribution
Homework Assignments | 5 | 15%
Presentation | 1 | 15%
Midterms | 1 | 30%
Final | 1 | 40%
Total | 100%
PERCENTAGE OF SEMESTER WORK | 60%
PERCENTAGE OF FINAL WORK | 40%
Total | 100%
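To illustrate how the weights above combine into a course grade (the component scores below are hypothetical, chosen only to show the arithmetic):

    # Weights from the evaluation table above; the scores are hypothetical examples.
    weights = {"homework": 0.15, "presentation": 0.15, "midterm": 0.30, "final": 0.40}
    scores = {"homework": 80, "presentation": 90, "midterm": 70, "final": 85}

    total = sum(weights[k] * scores[k] for k in weights)
    print(f"Weighted course grade: {total:.1f}")  # about 80.5 with these scores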

ECTS / Workload Table

Activities | Number of Activities | Duration (Hours) | Workload
Course Hours | 14 | 3 | 42
Application | 2 | 25 | 50
Study Hours Out of Class | 14 | 5 | 70
Presentations / Seminar | 1 | 5 | 5
Homework Assignments | 5 | 6 | 30
Midterms | 1 | 1 | 1
Final | 1 | 2 | 2
Total Workload | 200
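The total is internally consistent with the 8 ECTS listed in the course header: the table implies a conversion of 200 / 8 = 25 hours of workload per ECTS credit, so 200 hours / 25 hours per credit = 8 ECTS.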

Contribution of Learning Outcomes to Programme Outcomes

No Effect 1 Lowest 2 Low 3 Average 4 High 5 Highest
Program Outcomes Level of Contribution
1) Students will be able to demonstrate theoretical and practical knowledge in the areas of Educational/Instructional Technology.
2) Students will be able to conduct research in the area of Educational/Instructional Technology.
3) Students will be able to plan and evaluate the process of teaching information technologies.
4) Students will be able to select and implement appropriate strategies and techniques for teaching information technologies.
5) Students will be able to put their theoretical information into practice in the area of Educational/Instructional Technology.
6) Students will be able to design and develop educational materials, software and games.
7) Students will be able to implement information technologies effectively in and outside of educational environments.
8) Students will be able to measure and evaluate learners' performances in educational environments.
9) Students will be able to continuously improve their knowledge of information technologies.
10) Students will be able to act ethically in electronic and non-electronic educational environments, and to pass these values on to future generations.
11) Students will be able to plan, manage, and evaluate educational projects.
12) Students will be able to identify the technological needs of companies and deploy these technologies.