 Research
 Open access
Teacher-centered analysis with TIMSS and PIRLS data: weighting approaches, accuracy, and precision
Large-scale Assessments in Education, volume 12, Article number 29 (2024)
Abstract
This paper extends existing work on teacher weighting in student-centered surveys by examining practical aspects of deriving and using weights for teacher-centered analysis in the Trends in International Mathematics and Science Study (TIMSS) and the Progress in International Reading Literacy Study (PIRLS). The formal conditions to compute teacher-centered weights are detailed, including mathematical equations. We propose how to define the target populations and how to collect the data needed to derive teacher-centered weights, which are currently unavailable. We also tackle the issue of teacher nonresponse by proposing a respective adjustment factor, and we note the challenge of multiple selection probabilities when teachers teach in multiple schools. The core part of the paper focuses on studying the level of accuracy that can be expected when estimating teacher population characteristics. We use TIMSS 2019 data and simulate likely scenarios regarding the variance in weights. The results show that (i) the different weighting scenarios lead to relatively similar estimates; however, the differences between the scenarios are sufficient to justify the recommendation to use correctly derived teacher weights; (ii) differences between estimated standard errors based on complex sampling and corresponding estimates based on simple random sampling are sufficiently consistent to support use of a procedure to estimate standard errors that accounts for both sample weights and the complex sampling design; (iii) sample sizes and variance in weights significantly limit estimate precision, so that total population estimates with sufficient precision are available in the majority of countries, but subpopulation estimates are generally not sufficiently precise. To provide a critical evaluation of our results, we recommend implementation of the proposed method in one or more countries.
This recommended study will permit examination of logistical considerations in implementing the required changes in data acquisition and will provide data to replicate the analysis with teacher-centered weights.
Introduction
Many contemporary international large-scale assessments (ILSAs), for example the Trends in International Mathematics and Science Study (TIMSS, Martin et al., 2020), the Progress in International Reading Literacy Study (PIRLS, Martin et al., 2017), and the Programme for International Student Assessment (PISA, OECD, 2019a), investigate student populations. Others cover teachers, the most prominent being the Teaching and Learning International Survey (TALIS, OECD, 2019b). There is a third type of ILSA that attempts to cover both teacher and student populations within one study, requiring compromises regarding the optimization of the sampling designs. Examples of such studies are the International Civic and Citizenship Education Study (ICCS, Schulz et al., 2018) and the International Computer and Information Literacy Study (ICILS, Fraillon et al., 2020), which both target eighth-grade students and their teachers and aim for fully representative samples of both groups. While this solution sounds intriguing and cost-efficient, it comes with a severe disadvantage: there is no direct linkage between teachers and students, so, for example, teachers’ attitudes and teaching styles cannot be related directly to their students’ characteristics and outcomes.
TIMSS and PIRLS are among the best-known ILSAs in the world, with more than 50 participating countries and educational systems. Since 1995, TIMSS has investigated every four years the attainment in mathematics and science of students in the fourth and eighth grades. Since 2001, PIRLS has studied every five years the reading literacy of students in the fourth grade. A rich array of contextual information is gathered in both studies from the students themselves and from individuals involved in students’ learning: school principals, parents, and teachers of the sampled students. Even though TIMSS and PIRLS are designed to provide information on student learning, and analyzing teacher-level characteristics is not part of the studies’ analytical objectives, scholars are interested in using the information that is collected from teachers. However, analyzing teacher data from these studies is not straightforward. In this paper, we consider TIMSS 2019. This survey provides summary results for teachers on variables ranging from years of experience to job satisfaction for different educational systems, subjects taught, and grades taught. For example, the average years of experience of a student’s Grade 4 mathematics teacher in Albania is estimated to be 22 (Mullis et al., 2020, p. 390). This average does not necessarily estimate the mean years of experience of Grade 4 mathematics teachers in Albania. Instead, the reported average estimates a weighted mean of years of experience. For a given teacher, the weight is the sum, over the students taught in a given grade and subject, of the fraction of instruction provided. In the Albanian example, each student has only one mathematics instructor, so the weight is proportional to the number of students taught. The TIMSS 2019 User Guide (Fishbein et al., 2021) warns users of the TIMSS 2019 database of the difference between these two averages:
The teachers in the TIMSS 2019 International Database do not constitute representative samples of teachers in the participating countries. Rather, they are the teachers of nationally representative samples of students. Therefore, analyses with teacher data should be made with students as the units of analysis and reported in terms of students who are taught by teachers with a particular attribute. (Fishbein et al., 2021, p. 13)
This warning reflects two distinct issues. The sampling design does not ensure that sampled teachers are a representative sample of all teachers in an educational system, and the data collection does not permit a weighting adjustment to allow use of the sampled teachers to estimate mean characteristics of the population of teachers.
Although TIMSS emphasizes assessment of achievement of students, in line with Hooper et al. (2022), we argue that simple modifications of forms provided by participating schools permit development of teacher-centered sampling weights that allow use of the sample of teachers in TIMSS and PIRLS for estimation of means of characteristics of the teacher populations of participating educational systems.
In this paper, we will start by proposing a teacher population definition for the surveyed grades and subjects in TIMSS. Next, we will briefly review weighting in TIMSS and current inferences that implicitly use sample student weights to provide sample teacher weights. We refer to these weights hereafter as student-centered teacher weights (stchwgt). They are useful for research questions dealing with the relationship between teachers and students. By revisiting the results from Hooper et al. (2022), we will then introduce sample teacher-centered teacher weights (ttchwgt) that can be used if the interest is in teachers themselves rather than in their students.
Thereafter, we will apply the findings of Hooper et al. (2022) to determine how to obtain the information needed to derive teacher-centered weights and how to examine the accuracy of estimates based on ttchwgt. Because the current data from TIMSS and PIRLS do not permit application of the approaches proposed by Hooper et al. (2022), results of a simulation study will be presented examining, from a theoretical perspective, the expected precision of the proposed teacher-centered estimates. To inform this simulation, we considered existing data from TIMSS 2019. Complications such as weight adjustments for nonresponse and multiple chances of selection when teachers teach in multiple schools will be considered. The paper will close with conclusions concerning the feasibility in practice of teacher-centered estimates and with recommendations concerning implementation of such estimates.
Because TIMSS and PIRLS use the same sampling design (Joncas and Foy, 2012), the findings of this research are fully applicable to other iterations of TIMSS, and to PIRLS. The notation we use for our paper can be found in Table 14.
Defining international target populations of teachers for TIMSS and PIRLS
The introduction of revised teacher weights in TIMSS will facilitate analyses at the teacher level without the need to use students as the units of analysis and reporting. To draw direct conclusions about a teacher population with equally weighted teachers, it is important to agree on an unambiguous definition of this population. This section proposes such a definition, in line with the assumptions in the remainder of this paper. To the authors’ knowledge, there is no explicit definition of the population of teachers in either TIMSS or PIRLS. However, as specified in the TIMSS technical documentation (LaRoche et al., 2020), TIMSS invites all mathematics and science teachers of the selected classes to participate. The same applies to reading/language teachers of the participating PIRLS classes (Martin et al., 2017). To allow the current selection mechanism to align with the procedures proposed in this paper, we suggest including all mathematics and science teachers who instruct students in the target grade, i.e., fourth and/or eighth grade for TIMSS, and all reading/language teachers of fourth-graders for PIRLS. The proposed definition corresponds to the following TIMSS and PIRLS international target population definition of students:^{Footnote 1}
Fourth grade (TIMSS and PIRLS)
All students enrolled in the grade that represents four years of schooling counting from the first year of ISCED Level 1, providing the mean age at the time of testing is at least 9.5 years (LaRoche et al., 2020, sect. 3.4)
Eighth grade (TIMSS only)
All students enrolled in the grade that represents eight years of schooling counting from the first year of ISCED Level 1, providing the mean age at the time of testing is at least 13.5 years (LaRoche et al., 2020, sect. 3.4)
To these student target populations correspond four distinct teacher target populations in TIMSS: mathematics teachers of fourth-grade classes, science teachers of fourth-grade classes, mathematics teachers of eighth-grade classes, and science teachers of eighth-grade classes; and one teacher target population in PIRLS: reading/language teachers of fourth-grade classes, as follows:
Fourth grade (TIMSS and PIRLS; mathematics, science, and reading/language teachers)
All teachers teaching mathematics [science, reading/language] to students enrolled in the grade that represents four years of schooling counting from the first year of ISCED Level 1, providing the student mean age at the time of testing is at least 9.5 years (LaRoche et al., 2020, sect. 3.4)
Eighth grade (TIMSS only; mathematics and science teachers)
All teachers teaching mathematics [science] to students enrolled in the grade that represents eight years of schooling counting from the first year of ISCED Level 1, providing the student mean age at the time of testing is at least 13.5 years (LaRoche et al., 2020, sect. 3.4)
It is important to note that the teacher target populations are not mutually exclusive; e.g., a mathematics teacher of fourth-grade students can also be a science teacher of eighth-grade students, or a teacher might teach multiple subjects to the same class. Moreover, teachers can teach at different schools. All teachers are considered equally, regardless of the hours taught. We further suggest defining the subjects science and mathematics based on the content domains of the assessment. Thus, subjects related to mathematics must cover at least one of the following content domains: number, measurement, geometry, algebra, data, or probability (Lindquist et al., 2017). Subjects related to science must cover at least one of the following content domains: life science, biology, chemistry, physical science, physics, or earth science (Centurino and Jones, 2017). Even though we have tried to give as accurate a definition as possible, there may still be contested cases. For example, if several teachers teach the same subject to the same class, the general rule is that all teachers are part of the target population. We propose that a teacher associated with a class is not considered part of the target population only if one of the following conditions applies: the teacher is not at all involved in instructing the students, the teacher clearly has only a supporting role, the teacher is in training, or the teacher’s role in delivering instruction is otherwise very limited. Furthermore, in accordance with the proposed definition, teachers who do not teach the respective target grade and/or subject during the TIMSS testing period are not considered part of the target population.
Due to the multistage sampling procedure of TIMSS and PIRLS, the listing of teachers is interrelated with the sampling of schools and classes. In order not to jeopardize the core objectives of the studies and to keep procedures simple and cost-efficient, exclusion criteria for teachers must align with the exclusion criteria for schools and classes. Thus, teachers are excluded if they only instruct students in excluded schools or excluded classes. For instance, to a limited extent, TIMSS and PIRLS permit countries to exclude very small schools. At the class level, participating countries are allowed to exclude classes in which all students are either non-native speakers or have functional or intellectual disabilities.
Weighting in TIMSS
In this section, we will summarize the usual sampling procedures applied in TIMSS (Joncas and Foy, 2012), as this knowledge is built upon in the following sections.
In TIMSS, multistage sampling is used to obtain student samples for assessment of achievement in mathematics and science in the fourth and eighth grade (LaRoche et al., 2020). This procedure is not designed to facilitate sampling of teachers. To consider procedural changes to facilitate inferences on teachers, we examine the sampling procedure used in TIMSS for an educational system with N schools, H strata, C classes, and S students in the target grade. At the initial stage, within stratum h, schools are sampled with probability proportional to size (PPS), where ideally the size measure for a school i is defined as the number of students \(S_{hi}\) in the target grade. A school and two replacement schools are selected simultaneously from the \(N_h\) schools in the stratum. The original school is used if it participates. The first replacement school is used if the original school does not participate but the first replacement school does. The second replacement school is used if neither the original school nor the first replacement school participates but the second replacement school does. After adjustments for nonresponse, participating sampled school i from explicit stratum h has a sampling weight \(F_{hi1}=A_{h1}M_h/(n_hm_i)\). This weight involves the size measure \(m_i\) for sampled school i, the sum \(M_h\) of size measures for all schools in stratum h, and the school nonparticipation adjustment \(A_{h1}\) for stratum h. For stratum h, the adjustment \(A_{h1}\) depends on the number \(n_h\) of participating sampled schools and the number \(n_{hnr}\) of cases in which neither the originally sampled school nor its two replacement schools participated. The adjustment \(A_{h1}=(n_h+n_{hnr})/n_h\). If schools in stratum h are certain to participate, then \(A_{h1}\) is always 1 and the inverse of \(F_{hi1}\) is the exact probability that school i participates. 
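As an illustration only (not the official TIMSS weighting code), the school-level weight \(F_{hi1}=A_{h1}M_h/(n_hm_i)\) with the nonparticipation adjustment \(A_{h1}=(n_h+n_{hnr})/n_h\) can be sketched in Python; all stratum figures below are hypothetical.

```python
# Sketch of the school-level weight for a participating sampled school,
# following the formulas in the text. All numbers are hypothetical.

def school_weight(m_i, M_h, n_h, n_hnr):
    """F_hi1 = A_h1 * M_h / (n_h * m_i) for school i in stratum h."""
    A_h1 = (n_h + n_hnr) / n_h  # school nonparticipation adjustment
    return A_h1 * M_h / (n_h * m_i)

# Hypothetical stratum: total size measure 2000, 4 participating sampled
# schools, 1 sampled school where neither original nor replacements took part.
w = school_weight(m_i=100, M_h=2000, n_h=4, n_hnr=1)
# With full participation (n_hnr = 0), A_h1 = 1 and 1/F_hi1 is the
# exact participation probability of school i.
```

If schools never refuse, `school_weight(100, 2000, 4, 0)` reduces to the pure PPS weight \(M_h/(n_hm_i)=5\).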
The mechanisms used in TIMSS for adjusting for nonresponse are based on the assumption that observations are missing at random within the adjustment cells. However, since this assumption cannot be definitively proven, strict requirements on participation rates are enforced (Meinck, 2015a).
Within a school, classes are usually randomly drawn with equal probability of selection, and the class then has a weight inversely proportional to its probability of selection. As in the case of sampled schools, adjustment is made for nonparticipation. Let \(\delta _i\) be the number of participating classes in school i out of the number \(c_i\) of sampled classes. Let \(C_i\) be the total number of eligible classes in school i. Let \(A_{h2}\), the class nonparticipation adjustment for stratum h be \(n_h\) divided by the sum over participating schools i in the stratum of the class participation fractions \(\delta _i/c_i\). The class weight component for sampled class j of sampled school i is then \(F_{hij2}=A_{h2}C_i/c_i\). The overall weighting of class j of school i is \(G_{hij2}=F_{hi1}F_{hij2}\). The inverse of \(G_{hij2}\) estimates the joint probability that school i and class j are both sampled and participate.
In some cases, classes within schools are divided into strata, and classes are randomly selected within strata. This approach could be used, for example, if schools have classes with different language of instruction, and they aim for a specific sample size for both languages. Such stratification of classes within schools is used by some countries in recent TIMSS and PIRLS studies. Simple changes in arguments must then be made.
Within classes, let \(n_{ij}\) be the number of students in the class, let \(n_{ij1}\) be the number of selected students in the class, let \(n_{ij3}\) be the number of selected students in the class who participate, and let \(n_{ij2}\) be the number of students sampled who might have participated. (It is possible due to class changes that \(n_{ij2}\) and \(n_{ij1}\) differ.) Students who are selected and participate receive weight component \(F_{ij3}=(n_{ij}/n_{ij1})(n_{ij2}/n_{ij3})\). The final weight for a participating student is \(G_{hij3}=G_{hij2}F_{ij3}\). The inverse of \(G_{hij3}\) is the estimated joint probability that student k is a sampled and participating member of sampled and participating class j from sampled and participating school i. If nonparticipation does not exist for schools and classes and all students in a class are sampled, then the student weight \(G_{hij3}\) reduces to \(M_hC_i/(n_hm_ic_i)\). TIMSS also allows subsampling of students within classes. In this case, classes are sampled with PPS and students within classes are sampled with systematic simple random sampling (systematic SRS). However, this procedure was used exclusively for Singapore during the last cycles of the studies. For simplicity, we do not extend the paper to this special case; such an extension is, however, straightforward.

Let Y be a real student measurement variable with value \(Y_{ijk}\) for student k from class j of school i, and let \({\bar{Y}}\) be the mean of the S values of Y. The estimated mean \({\bar{Y}}_s\) is then the ratio estimate with numerator equal to the sum of \(G_{hij3}Y_{ijk}\) over observed students k, classes j, and schools i for which \(Y_{ijk}\) is available and denominator equal to the corresponding sum of \(G_{hij3}\) over observed students k, classes j, and schools i for which \(Y_{ijk}\) is available (Hájek, 1971).
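The remaining weight components and the Hájek ratio estimate can likewise be sketched; again this is an illustration with hypothetical numbers, not TIMSS production code, and variable names mirror the text.

```python
# Sketch of the class and student weight components and the Hájek ratio
# estimate described in the text. All inputs are hypothetical.

def class_weight(F_hi1, A_h2, C_i, c_i):
    """Overall class weight G_hij2 = F_hi1 * F_hij2, with F_hij2 = A_h2 * C_i / c_i."""
    return F_hi1 * A_h2 * C_i / c_i

def student_weight(G_hij2, n_ij, n_ij1, n_ij2, n_ij3):
    """Final student weight G_hij3 = G_hij2 * (n_ij / n_ij1) * (n_ij2 / n_ij3)."""
    return G_hij2 * (n_ij / n_ij1) * (n_ij2 / n_ij3)

def hajek_mean(weights, values):
    """Ratio estimate: sum(w * y) / sum(w) over students with observed Y."""
    num = sum(w * y for w, y in zip(weights, values))
    return num / sum(weights)

# Hypothetical school: no class nonresponse (A_h2 = 1), both of 2 eligible
# classes sampled, whole class selected (n_ij = n_ij1), 24 of 25 participate.
G2 = class_weight(F_hi1=6.25, A_h2=1.0, C_i=2, c_i=2)
G3 = student_weight(G2, n_ij=25, n_ij1=25, n_ij2=25, n_ij3=24)
```

With equal weights, `hajek_mean` reduces to the ordinary sample mean, which is why it is a natural estimator of \({\bar{Y}}\).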
Two types of teacher weights: student-centered and teacher-centered weights
Scholars familiar with the TIMSS data will be aware that teacher weights are already provided in the publicly available data files. In this research paper, however, we distinguish two types of teacher weights. The teacher weights that are already available are linked to the students of the responding teachers. These weights are labeled teacher weights (TCHWGT) in the TIMSS 2019 database. To emphasize their relation to the student population, we call these weights student-centered teacher weights (stchwgt). If stchwgt is used, students are the units of analysis. These weights are derived by dividing the final student estimation weight by the number of teachers related to an individual student. For example, suppose a student has a final weight of 10 and two science teachers. In this case, the student record is duplicated and merged with the data of both teachers, and stchwgt for each case in the resulting file has a value of 10/2=5. As pointed out in the introduction, this weight is useful to describe average features of target grade students. It allows statements such as: “50 percent of students in country X have science teachers with a postgraduate degree.”
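The duplicate-and-split derivation of stchwgt described above can be sketched as follows; the record layout and identifiers are hypothetical, chosen only to mirror the worked example.

```python
# Sketch of the stchwgt derivation: the final student weight is split
# evenly across the student's teachers for the subject, producing one
# (student, teacher) record per link. Data are hypothetical.

def stchwgt_rows(student_id, final_weight, teacher_ids):
    """Return one (student, teacher, stchwgt) row per linked teacher."""
    k = len(teacher_ids)  # number of teachers for this student and subject
    return [(student_id, t, final_weight / k) for t in teacher_ids]

# Student with final weight 10 and two science teachers, as in the text:
rows = stchwgt_rows("s1", 10.0, ["t1", "t2"])
```

Summing the split weights over a teacher's students recovers that teacher's student-centered weight, which is the quantity formalized in the next section.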
The second type of teacher weights, which is the subject of this research paper, provides an approach for teacher-centered analysis and will be named teacher-centered teacher weights (ttchwgt). With reference to the above example, ttchwgt could be used to estimate the number of science teachers in the targeted teacher population who completed a postgraduate degree. In the following section, we will present the issue in a more formal way.
To describe the current student-centered teacher weights in TIMSS, consider a teacher variable U with value \(U_{it}\) for teacher t in target school i for a specific subject (mathematics or science). We begin with the student-centered case. For each student k in class j of school i, let \(K_{ijk}\) be the number of teachers the student has for the subject under consideration. Let the student-centered population weight \(W_{it}\) of teacher t in school i be the sum of the fractions \(1/K_{ijk}\) for all students k in a class j who are taught by teacher t. The student-centered population mean \({\bar{U}}_W\) of the teacher variable U is the ratio with numerator equal to the sum of the products \(W_{it}U_{it}\) for teachers t in target schools i and denominator equal to the corresponding sum S of the weights \(W_{it}\). Recall that the target population has S students. The population mean \({\bar{U}}_W\) is also the population mean over all students k in classes j in schools i of the average of the \(U_{it}\) for the \(K_{ijk}\) teachers t who instruct the student. For sampled teacher t of sampled and participating school i, let the student-centered sampling weight \(W_{its}\) be the sum of \(G_{hij3}/K_{ijk}\) over sampled and participating students k from sampled and participating classes j of school i who have teacher t. Then the student-centered estimated mean \({\bar{U}}_{Ws}\) is the ratio with numerator equal to the sum of the products \(W_{its}U_{it}\) over sampled teachers t from sampled and participating schools i for whom \(U_{it}\) is observed and denominator equal to the sum of the \(W_{its}\) over sampled teachers t from sampled and participating schools i for whom \(U_{it}\) is observed. The estimates \({\bar{U}}_{Ws}\) are used in TIMSS.
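The student-centered sampling weight \(W_{its}\) and the estimate \({\bar{U}}_{Ws}\) can be sketched as below; the two-student dataset is hypothetical and serves only to show the accumulation of \(G_{hij3}/K_{ijk}\) per teacher.

```python
# Sketch: accumulate W_its = sum of G_hij3 / K_ijk over a teacher's
# sampled, participating students, then form the weighted ratio estimate.
# Data are hypothetical.

from collections import defaultdict

def teacher_weights(student_rows):
    """student_rows: iterable of (G_hij3, K_ijk, list of teacher ids)."""
    W = defaultdict(float)
    for g, k, teachers in student_rows:
        for t in teachers:
            W[t] += g / k
    return dict(W)

def weighted_mean(W, U):
    """Ratio estimate sum(W_its * U_it) / sum(W_its) over observed teachers."""
    num = sum(W[t] * U[t] for t in W if t in U)
    den = sum(W[t] for t in W if t in U)
    return num / den

# Two students: one with two science teachers (K = 2), one with a single teacher.
rows = [(10.0, 2, ["t1", "t2"]), (8.0, 1, ["t1"])]
W = teacher_weights(rows)      # t1 accumulates 5 + 8, t2 accumulates 5
U = {"t1": 20.0, "t2": 10.0}   # e.g., years of experience
m = weighted_mean(W, U)
```

Note that `m` weights t1 far more heavily than t2 because t1 reaches more students, which is exactly the student-centered (rather than teacher-centered) interpretation.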
In the case of teacher-centered weights, let \(D_i\) be the number of teachers in school i for a targeted subject, let \(D_+\) be the sum of the \(D_i\) over all target schools i, and let \(\Sigma (U)\) be the total of the \(U_{it}\) for the \(D_+\) teachers t in target schools i. The teacher-based mean \({\bar{U}}\) of the teacher variable U for teachers t in target schools i is simply the unweighted mean \(\Sigma (U)/D_+\) of the \(U_{it}\) over teachers t in schools i. With current data, \({\bar{U}}\) cannot be estimated. Nonetheless, it is possible to consider how \({\bar{U}}\) and \({\bar{U}}_W\) compare. To aid in comparison, let \(V_{it}=D_+W_{it}/S\) be the adjusted student-centered population weight, so that the average \({\bar{V}}\) of the \(V_{it}\) is 1. Then \({\bar{U}}\) is the average of the products \({\bar{V}}U_{it}\), while \({\bar{U}}_W\) is the average of the products \(V_{it}U_{it}\). If either the student-centered population weights \(W_{it}\) are constant, so that each \(W_{it}\) is the average number \(S/D_+\) of students per teacher, or the variables \(U_{it}\) are constant, so that each \(U_{it}\) is \({\bar{U}}\), then \({\bar{U}}_W\) and \({\bar{U}}\) are equal. The arguments here are most appropriate if no teacher teaches the same target subject in the same grade at more than one school. Otherwise, some modifications are required.
To establish an upper bound on the difference \({\bar{U}}_W-{\bar{U}}\) for the case in which neither the teacher variables \(U_{it}\) nor the student-centered population weights \(W_{it}\) are constant, let \(\sigma (U)\) be the population standard deviation of the teacher variables \(U_{it}\) for teachers t in target schools i, so that \(\sigma (U)\) is the square root of the mean of the squared deviations \((U_{it}-{\bar{U}})^2\), and let \(\sigma (W)\) be the corresponding population standard deviation of the student-centered weights \(W_{it}\) for teachers t in schools i. By assumption, both \(\sigma (U)\) and \(\sigma (W)\) are positive. Let the population correlation coefficient of the \(U_{it}\) and \(W_{it}\) be \(\rho (U,W)\). The difference between \({\bar{U}}_W\) and \({\bar{U}}\) is the average of the products \((V_{it}-1)U_{it}\). Because the average of the differences \(V_{it}-1\) is 0, the average of the products \((V_{it}-1){\bar{U}}\) is also 0. Thus the difference \({\bar{U}}_W-{\bar{U}}\) is the average of the products \((V_{it}-1)(U_{it}-{\bar{U}})\). This average is the population covariance \(\gamma (V,U)\) of the \(V_{it}\) and the \(U_{it}\). If \(\rho (V,U)\) denotes the population correlation \(\gamma (V,U)/[\sigma (V)\sigma (U)]\), then it follows that

\[{\bar{U}}_W-{\bar{U}}=\gamma (V,U)=\rho (V,U)\sigma (V)\sigma (U).\]

Thus a small absolute relative difference \(|{\bar{U}}_W-{\bar{U}}|/\sigma (U)\) results if either the standard deviation \(\sigma (V)\) of the adjusted weight variables \(V_{it}\) is small or the absolute value of the correlation coefficient \(\rho (V,U)\) of the \(V_{it}\) and \(U_{it}\) is small. If all classes in the target population have only one teacher for the subject of interest and all teachers teach the same number of students, then this standard deviation is 0.
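The identity \({\bar{U}}_W-{\bar{U}}=\rho (V,U)\sigma (V)\sigma (U)\) can be verified numerically on a toy population; the three-teacher example below is entirely hypothetical.

```python
# Numerical check of U_bar_W - U_bar = rho(V, U) * sigma(V) * sigma(U)
# on a hypothetical three-teacher population.

def mean(x):
    return sum(x) / len(x)

def sd(x):
    mu = mean(x)
    return (sum((v - mu) ** 2 for v in x) / len(x)) ** 0.5

def corr(x, y):
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (sd(x) * sd(y))

W = [30.0, 20.0, 10.0]           # student-centered teacher weights; sum = S
U = [25.0, 10.0, 4.0]            # teacher variable, e.g. years of experience
S = sum(W)                       # S students taught by D_plus = 3 teachers
V = [len(W) * w / S for w in W]  # adjusted weights V_it, mean 1

U_bar = mean(U)                  # unweighted teacher-based mean
U_bar_W = sum(w * u for w, u in zip(W, U)) / S  # student-centered mean
lhs = U_bar_W - U_bar
rhs = corr(V, U) * sd(V) * sd(U)
```

Here teachers with more students also have higher values of U, so \({\bar{U}}_W\) exceeds \({\bar{U}}\), exactly as the positive correlation term predicts.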
Teacher-centered inference: methods
A simple change in data collection permits direct study of teachers of students in the target population (Hooper et al., 2022). The key is to record, for each sampled teacher in a particular grade and subject in a participating school, the total number of classes taught by that teacher in the same school, subject, and grade. In this way, two approaches described herein have been proposed to estimate the distribution of teacher variables in the target population of teachers (Hooper et al., 2022). Horvitz-Thompson estimation (Horvitz and Thompson, 1952), abbreviated as HT, is a traditional method to obtain unbiased estimates of sums of population variables under sampling without replacement. The other approach, multiplicity-adjusted indirect sampling (MAIS), provides a simplified analysis that involves possible multiple counting of the same teacher. Both approaches lead to unbiased estimation of sums of teacher variables in the target population if nonparticipation adjustments are not required. HT has the advantage of fixed weights but requires simple random sampling of classes within schools. MAIS has the advantage of applicability to sampling of classes by methods not equivalent to simple random sampling. In addition, MAIS is much easier to describe, so it will be emphasized in applications. Theoretical results are derived for variances and their estimates for both the HT and MAIS approaches; however, due to its wider applicability, the MAIS approach will be used to obtain indications of the potential accuracy of estimated means of teacher variables for individual educational systems.
Because the information required for the analysis is not currently obtained in TIMSS, the analysis considers plausible scenarios for teacher weights rather than direct use of teacher weights. In addition to consideration of variances, this paper also treats problems of teacher nonresponse via approaches similar to those used in TIMSS for student nonresponse, class nonresponse, and school nonresponse.
In both approaches under consideration, the procedure for sampling classes is the standard one in TIMSS. The two approaches HT and MAIS diverge once classes are sampled. Let \(d_{it}\) of the \(C_i\) classes be taught for a given subject, mathematics or science, at least in part by teacher t, and let \(d_{its}\) of the \(c_i\) sampled classes be taught by that teacher. Let \(\delta _{it}\) be the number of sampled teachers who participate in school i. Let the teacher nonparticipation adjustment \(A_{ht}\) in stratum h be \(n_h\) divided by the sum over participating schools i in the stratum of the fractions \(\delta _{it}/d_{it}\).
As in the development of student-centered weights, let \(D_i\) be the number of teachers t in the school, and let \(D_+\) be the sum of the \(D_i\) over schools in the target population. The challenge is estimating \({\bar{U}}\) by use of the participating teachers t associated with the \(c_i\) classes sampled from each sampled school i.
To describe the HT approach to teacher-centered weights, consider computing the probability that a teacher t from school i is in a sampled class given that school i has been sampled. If \(c_i\) classes are sampled randomly, so that \(C_i-c_i\) classes are not sampled, then the probability \(T_{it}\) that teacher t is sampled is 1 if \(C_i-c_i<d_{it}\). Otherwise,

\[T_{it}=1-\prod _{a=0}^{c_i-1}\frac{C_i-d_{it}-a}{C_i-a}.\]

The formula for \(C_i-c_i<d_{it}\) applies because it is impossible in this case for teacher t not to be sampled. The alternative case holds since the product of \(C_i-d_{it}-a\) over nonnegative integers \(a<c_i\) is the number of ordered samples of classes of size \(c_i\) that do not include teacher t and the product of \(C_i-a\) over nonnegative integers \(a<c_i\) is the total number of ordered samples of classes of size \(c_i\). In the simplest case, \(c_i=1\) and \(T_{it}=d_{it}/C_i\). Then the sampling weight \(W_{itH}=F_{hi1}A_{ht}/T_{it}\) for participating sampled teacher t from school i. The teacher-centered sample mean \({\bar{U}}_H\) based on the HT approach is then the ratio estimate with numerator equal to the sum of the products \(W_{itH}U_{it}\) over participating sampled teachers t in participating and sampled schools i for which \(U_{it}\) is observed and denominator equal to the sum of the \(W_{itH}\) over participating sampled teachers t in participating and sampled schools i for which \(U_{it}\) is observed. As expected from Horvitz-Thompson estimation, for a school i with no nonparticipation of teachers and all \(U_{it}\) observed for sampled teachers, the sum of \(U_{it}/T_{it}\) over sampled teachers t estimates the sum \(U_{i+}\) of \(U_{it}\) over all targeted teachers t in the school. The sum of the products \(W_{itH}U_{it}\) over sampled and participating teachers t in sampled and participating schools i then estimates the sum of the \(U_{it}\) over all teachers t in schools i from the target population.
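The inclusion probability \(T_{it}=1-\prod _{a=0}^{c_i-1}(C_i-d_{it}-a)/(C_i-a)\), with \(T_{it}=1\) when \(C_i-c_i<d_{it}\), translates directly into a few lines of Python; the class counts below are hypothetical.

```python
# Sketch of the HT inclusion probability T_it for a teacher who teaches
# d_it of the C_i classes in a school when c_i classes are drawn at random.

def inclusion_prob(C_i, c_i, d_it):
    """T_it = 1 - prod_{a=0}^{c_i-1} (C_i - d_it - a) / (C_i - a)."""
    if C_i - c_i < d_it:
        return 1.0  # too few unsampled classes to miss this teacher
    p_not = 1.0
    for a in range(c_i):  # ratio of ordered samples avoiding teacher t
        p_not *= (C_i - d_it - a) / (C_i - a)
    return 1.0 - p_not

# Simplest case c_i = 1: T_it reduces to d_it / C_i.
p1 = inclusion_prob(C_i=4, c_i=1, d_it=2)
# Two of four classes sampled, teacher teaches two of them.
p2 = inclusion_prob(C_i=4, c_i=2, d_it=2)
```

The HT weight for a participating teacher is then \(F_{hi1}A_{ht}/T_{it}\), i.e. the school weight and nonresponse adjustment divided by this probability.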
In the MAIS approach, the sample weight \(W_{itM}=G_{hi2}A_{ht}d_{its}/d_{it}\) if teacher t is sampled and participates in sampled and participating school i, where \(G_{hi2}\) is the common class weight \(G_{hij2}\) of the sampled classes j in school i. The teacher-centered sample mean \({\bar{U}}_M\) based on the MAIS approach is then the ratio with numerator equal to the sum of the products \(W_{itM}U_{it}\) over participating sampled teachers t in participating and sampled schools i for which \(U_{it}\) is observed and denominator equal to the sum of the \(W_{itM}\) over participating sampled teachers t in participating and sampled schools i for which \(U_{it}\) is observed. If \(d_{it}>1\) for a sampled teacher t in school i, then the count \(d_{its}\) and the sample weight \(W_{itM}\) are not constant. Nonetheless, \(d_{it}c_i/C_i\) is the expected value of the number \(d_{its}\) of times teacher t teaches a sampled class. This expected value is also the product of the probability \(T_{it}\) that \(d_{its}>0\) and the expected value of \(d_{its}\) given that \(d_{its}>0\). It follows that \(d_{its}\) given that \(d_{its}\) is positive has expected value \(d_{it}c_i/(C_iT_{it})\), so that \(W_{itM}\) and \(W_{itH}\) have the same expected value given selection of teacher t. As a consequence, both the MAIS and HT approaches provide comparable estimates of the teacher-centered mean \({\bar{U}}\). Although the simpler form of the MAIS estimate is an attraction in a comparison with the HT estimate, a more important consideration is that MAIS can be employed when simple random sampling of classes is not present, as long as the expected value of \(d_{its}\) is \(d_{it}c_i/C_i\). The HT approach must be modified if simple random sampling of classes is not employed within schools.
In a number of cases, the HT and MAIS approaches coincide. If, for all schools i, either the number of sampled classes \(c_i\) is 1, \(c_i=C_i\), or the number \(d_{it}\) of classes each teacher t instructs is always 1, then \(W_{itH}=W_{itM}\) for all sampled and participating teachers t and \({\bar{U}}_M={\bar{U}}_H\).
Teacher-centered inferences in TIMSS: changes needed in data collection
Although the current sampling procedure and data collection in TIMSS do not permit simple inferences about the distribution of characteristics of teachers who participate in instruction of mathematics or science in the fourth or eighth grade, it is possible to add a new school-level form that permits such inferences without changing other aspects of the sample design and data collection described in Johansone (2020). For each grade examined (4 or 8), the required new form for a participating school i includes a list of the \(C_i\) classes eligible for sampling. The list specifies, for each eligible class, all teachers of mathematics or science who instruct at least some class students.
Figure 1 presents an example of such a listing form. It would replace the currently used class listing form presented in Fig. 2. We acknowledge that this list is more complex than the current class listing form and requires some additional work by the school coordinators. We therefore recommend a field trial to provide a thorough usability test. With the new listing form, it is straightforward to determine the number \(d_{it}\) of classes taught, at least in part, by a teacher t in school i. It is quite common in the fourth grade for a single teacher to provide all mathematics and science instruction for a class; in this case, values of \(d_{it}\) will typically be small. It is much less common in the eighth grade for a single teacher to provide all mathematics and science instruction for a class, so larger values of \(d_{it}\) may be encountered. Given the new form, no other procedures in TIMSS need be changed in order to replace student-centered weights by teacher-centered weights.
Adjustment for teachers in multiple schools
If a teacher works in the target grade and subject in more than one school in the target population, then the selection probability is affected. We propose to handle this situation as done in other studies such as ICCS (Zuehlke and Vandenplas, 2011), ICILS (Meinck and Cortes, 2015), and TALIS (OECD, 2014). That is, we propose to add to the teacher questionnaire the question: “At the moment, in how many other schools do you teach mathematics [/science] to target grade students?”. Based on the response, another weight adjustment factor would be included in the computation of the teacher weights, calculated as the inverse of the total number of schools in which a teacher teaches target grade students in the respective subject. For example, the total weight of a science teacher teaching this subject to target grade students in two schools will be halved. Note that this weight adjustment factor is called the “teacher multiplicity factor” or “teacher multiplicity adjustment” in the studies cited above, but it should not be confused with the multiplicity adjustment of the MAIS approach. Both address the issue of multiple selection probabilities of teachers; the difference, however, is that one handles multiple selection probabilities within the sampled school and the other across different schools (whether sampled or not). For a more formal description of the computation see, e.g., Meinck and Cortes (2015).
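The adjustment just described can be sketched as follows (the weight values are hypothetical; the factor itself follows the cited studies):

```python
def multi_school_factor(n_other_schools: int) -> float:
    """Teacher multiplicity factor: inverse of the total number of schools
    in which the teacher teaches the target grade/subject
    (the sampled school plus the reported others)."""
    return 1.0 / (1 + n_other_schools)

# A science teacher teaching target-grade students in two schools
# (one other school reported) has the weight halved:
base_weight = 40.0                                # hypothetical teacher weight
adjusted = base_weight * multi_school_factor(1)   # 20.0
```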
To gain insight into whether weight adjustments for teachers working at more than one school would be needed in practice, we analyzed the TALIS 2018 database^{Footnote 2}. TALIS is a teacher and school leader survey with 48 participating education systems in the 2018 cycle. The core target population is lower secondary school teachers (ISCED level 2), but countries can also survey primary and upper secondary schools (ISCED levels 1 and 3). For each education system, a sample of about 200 schools and 20 teachers per school was drawn (OECD, 2019b). Table 1 shows the number of TALIS 2018 participating education systems that have a specified weighted percentage of teachers who indicated working at more than one school. The weighted percentage of teachers reporting working at more than one school is below 5% for most of the education systems. But there are also education systems in all four groups for which the estimated percentage of such teachers exceeds 10%. Note that teachers are even less likely to teach the TIMSS and PIRLS target grades in multiple schools, as ISCED levels cover multiple grades while TIMSS and PIRLS cover just one grade. This finding implies that weight adjustments might be necessary for only a limited number of educational systems, and it supports our decision to ignore this issue in the study that follows.
Sample sizes
To explore the use of TIMSS and PIRLS data for the practical implementation of teacher-centered weights, the teacher sample sizes of both studies were investigated using the TIMSS 2019^{Footnote 3} and PIRLS 2016^{Footnote 4} databases. The TIMSS 2019 sample sizes for teachers and schools were calculated separately for each participating country or benchmarking system^{Footnote 5} and for each of the four defined populations (see Table 13 in the Appendix). Within each population, only unique teacher identifiers (IDs, variable IDTEACH in the TIMSS and PIRLS databases) and unique school IDs (variable IDSCHOOL in the TIMSS and PIRLS databases) were considered. One result of this approach is that a teacher of two sampled classes is counted only once in the calculation of the respective sample size. The same approach was taken for PIRLS, where only one teacher population is considered, that is, reading/language teachers of fourth-grade students.
In TIMSS, the sample sizes of teachers vary substantially among participating education systems (summary statistics for the four teacher populations can be found in Tables 2 and 3). For example, the samples of fourth-grade mathematics teachers in Pakistan, Northern Ireland, and Hong Kong SAR are rather small (below 160), whereas the United Arab Emirates’ sample size is 1073. Overall, the sample size exceeds, with few exceptions, 150 in all teacher populations, and the minimum sample size of schools over all populations is at least 98 (Malta). This seems a promising finding in regard to future teacher-centered analyses. On average, sample sizes vary between 266 (fourth-grade mathematics teachers) and 382 (eighth-grade science teachers). Differences in sample sizes can be explained by several factors, such as the school and class sample sizes, the number of teachers associated with a class, and the nonresponse rate.
Due to the sampling procedures in TIMSS, student sample sizes (which ultimately determine school and class sample sizes) significantly affect the size of the teacher samples, the two being generally positively correlated. For example, England with 3365 sampled students has the lowest student sample size in the eighth grade (Martin et al., 2020, Exhibit 9.6) and accordingly a below-average teacher sample size. The opposite is the case for the United Arab Emirates, whose 22,334 participating students constitute by far the largest student sample size in the eighth grade (Martin et al., 2020, Exhibit 9.6) and which, with 1036 mathematics and 1180 science teachers, has the largest teacher sample sizes.
A comparison of sample sizes of mathematics versus science teachers in the fourth grade shows that the two sample sizes do not differ much in most of the educational systems. This result is partly due to an overlap of science and mathematics teachers in the fourth grade. In 43 educational systems, more than 50% of the mathematics teachers also teach science; in 18 education systems, even more than 90% do. Exceptions are educational systems like Bahrain, Kuwait, and South Africa. These educational systems have as many mathematics as science teachers and no overlap between the groups. When comparing educational systems that participated in both surveys, TIMSS for the fourth grade and TIMSS for the eighth grade, most of them (27 out of 38) have a larger science teacher sample in the eighth grade than in the fourth grade.
The sample sizes of teachers were also analyzed at school level. Figure 3 displays the percentages of schools with a given number of participating teachers per school in TIMSS 2019; lines connect the values for a given education system. As can be seen from the figure, there is substantial variation between countries in the obtained number of teachers per school, affecting the total sample size of teachers. In the majority of sampled schools in all countries, only one or two teachers are obtained. This result can also be concluded from Fig. 4, which shows the international mean percentage of schools that have 1, 2, 3, or more than 3 teachers per school. The situation is slightly different for eighth-grade science teachers, where data from four or more teachers per school are collected in a significant number of countries, related to the fact that specialist teachers of the different sciences (physics, chemistry, earth science, biology, etc.) exist and respond to the questionnaires. Consequently, given the current TIMSS sampling design, the sample size for the four teacher populations of interest can vary between a minimum determined by the minimum school and class sample size (150 schools with one class in TIMSS), multiplied by the school, class, and teacher participation rates, and a relatively large number in countries with large school samples, multiple selected classes within schools, or structural conditions that require multiple teachers teaching a class. Very small countries with school censuses (e.g., Malta) may have even smaller samples.
The sample sizes of fourth-grade teachers in PIRLS show a similar pattern to those in TIMSS. Sample sizes of teachers vary between 122 (Macao SAR) and 1119 (Canada). On average, educational systems have a sample size of 271 teachers. In most of the participating schools, one or two teachers participated in the survey. More information about the sample sizes in PIRLS can be found in Figs. 4, 5 and Table 13.
Sample variances for estimates of teacher variables
The efforts described above to achieve teacher-centered teacher weights are only reasonable if the results have an acceptable level of precision. In the following, we investigate the likely levels of sampling variance when estimating teacher population characteristics. Large sampling variance could be due, among other factors, to relatively small samples or relatively large variance of weights. An acceptable level of sampling variance could be determined in various ways. One standard involves the accuracy of student-centered teacher summaries that TIMSS currently reports. Another standard is based on the regular TIMSS requirement for measurement of student achievement that national student samples should provide for a standard error no greater than .035 standard deviation units for the country’s mean achievement; sample estimates of any student-level percentage (e.g., a student background characteristic) should have a confidence interval of ±3.5% (LaRoche et al., 2020). Given the relatively small teacher samples, this precision cannot be reached, even if the design effect of estimates based on the teacher samples were close to 1 because clustering effects are expected to be negligible (cluster sizes are very small, and teacher variables have lower intraclass correlation coefficients than student variables; Meinck, 2015b). However, given the sample sizes presented in Table 13 (see Appendix), many but not all precision levels can be expected to correspond to an effective sample size of at least 150, a value that translates to a standard error of .08 standard deviation units. We claim that teacher population estimates reaching these respective minimum levels of precision can be deemed satisfactory. Moreover, it might be informative to compare the sampling variance of an estimator based on teacher-centered versus student-centered teacher weights (Dumais and Morin, 2019; Schulz, 2020). We use TIMSS 2019 data for the analysis.
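The precision benchmarks above follow from the usual relation between effective sample size and the standard error of a mean expressed in standard deviation units, \(SE=1/\sqrt{n_{\mathrm{eff}}}\); a quick check (our own illustration, assuming a design effect of 1):

```python
import math

def se_in_sd_units(n_eff: float) -> float:
    """Standard error of a mean, in standard deviation units,
    for an effective sample size n_eff."""
    return 1.0 / math.sqrt(n_eff)

print(round(se_in_sd_units(150), 3))   # 0.082, i.e. roughly .08 SD units
# The TIMSS student-level target of .035 SD units would instead require
# an effective sample size of about 1 / .035**2, i.e. roughly 816.
```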
However, because we are missing one important piece of information needed to compute the teacher-centered teacher weights, namely how many classes a participating teacher teaches, we consider some plausible scenarios to suggest possible results of teacher-centered weights. These scenarios clearly do not obviate the importance of a pilot study to examine teacher-centered weights, but they do provide some indication of how results for teacher-centered weights might differ from those for student-centered weights.
In this discussion, student-centered weights for teacher characteristics are computed according to current reporting practice in TIMSS 2019. For teacher-centered weights, results are obtained for approximations of the MAIS approach. We consider the following two scenarios.
Scenario 1: Class-centered weights. The teacher-centered MAIS weight \(W_{itM}\) for teacher t in school i of stratum h is certainly no greater than the sum \(W_{itC}\) of the class weights \(G_{hij2}\) for all the sampled classes j associated with teacher t. This sum is used for class-centered weights. The class-centered weight \(W_{itC}\) is \(W_{itM}\) if teacher t teaches all classes, so that \(d_{its}=d_{it}=C_i\), or if teacher t teaches only a single class, so that \(d_{its}=d_{it}\) if t is sampled.
Scenario 2: School-centered weights. Because the class factor \(F_{hij2}\) is always at least 1, the expected value \(W_{itH}\) of \(W_{itC}\) for a sampled teacher t is always at least as large as the school weight \(F_{hi1}\). In a few educational systems participating in TIMSS 2019, \(F_{hi1}=W_{itM}=W_{itH}\). This situation applies only to Malta and Pakistan for the fourth grade for mathematics and science, because all classes and teachers are sampled.
To assess the accuracy of the weighted means under study, jackknife repeated replication (JRR) for schools was employed as in TIMSS 2019, and a parallel analysis (SRS) was employed based on the classical formula for the variance of a weighted mean under simple random sampling (Cochran, 1977, Chapter 6). As in the JRR results, a finite sampling correction is not used. JRR has the advantage of consistency with current practice, but it should be emphasized that the resulting estimated standard errors need not be accurate. The use of JRR and the use of SRS are both based on assumptions of random sampling with replacement that clearly do not apply, given that populations of schools are finite, sampling of schools is without replacement, and sampling of schools within strata is systematic with a random start (Kish and Frankel, 1974). The issue of the appropriateness of JRR in TIMSS also applies to existing student-centered weights. Nonetheless, the estimates may provide some guidance concerning reasonable expectations.
As an added check, unweighted results assuming simple random sampling with replacement of teachers in an educational system were obtained and both JRR and SRS were applied.
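The two variance-estimation routes can be sketched as follows. This is a simplification: the delete-one-school jackknife below only illustrates the idea behind the paired-zone JRR used in TIMSS, and the SRS formula is the standard linearization of the ratio (weighted) mean; all function names are ours.

```python
import numpy as np

def weighted_mean(y, w):
    """Ratio estimate of the mean: sum(w*y) / sum(w)."""
    return np.sum(w * y) / np.sum(w)

def se_srs(y, w):
    """SE of the weighted mean under SRS with replacement,
    via the linearized residuals z_i = w_i * (y_i - ybar)."""
    y, w = np.asarray(y, float), np.asarray(w, float)
    n = len(y)
    z = w * (y - weighted_mean(y, w))
    return np.sqrt(n / (n - 1) * np.sum(z ** 2)) / np.sum(w)

def se_jackknife_by_school(y, w, school):
    """Delete-one-school jackknife SE (simplified stand-in for JRR)."""
    y, w, school = map(np.asarray, (y, w, school))
    full = weighted_mean(y, w)
    ids = np.unique(school)
    reps = np.array([weighted_mean(y[school != s], w[school != s]) for s in ids])
    G = len(ids)
    return np.sqrt((G - 1) / G * np.sum((reps - full) ** 2))
```

With equal weights, `se_srs` reduces to the familiar \(s/\sqrt{n}\).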
The full table of results is very large. For each of the two grades and two subjects, seven items were considered for this analysis (see Table 4; for further details on variables and scales see Martin et al. (2020)). We considered only items that provide interesting information on characteristics of the teacher population, such as gender, age, teaching experience, and job satisfaction. We did not consider variables that are related to a specific class and would hence not be suitable for teacher-centered analysis. Occasionally teacher responses were missing or inconsistent. The teacher’s responses for science or mathematics were defined as the average of the non-missing responses when more than one teacher questionnaire was available.
For simplicity, this paper primarily examines weighted means; however, other summary statistics could easily be examined with the same methodology. For example, cumulative distribution functions can certainly be examined.
For the fourth grade, TIMSS 2019 provides data for 64 educational systems, while for the eighth grade, data for 46 educational systems are available. Thus, in all, our analysis results in a table with 1540 rows. Table columns include the code and name of the educational system, the grade, the subject, the number of observations with item responses, the number of observations with omitted responses, the four estimated means, and the four estimated standard errors. The full table is hence too large for presentation in this paper; however, it is available in the supplementary materials as an R data frame and as an Excel spreadsheet.
A simple summary of results for the raw means and three weighted means is provided in Tables 5 and 6. Because variables vary considerably in their ranges, corresponding summaries of weighted standard deviations are provided in Tables 7 and 8. These summaries are averages across participating educational systems for each grade, subject, and item. Thus by themselves they only provide a rough notion of results. Nonetheless it is worth noting that different weighting approaches do yield relatively similar average results across countries.
In terms of effect sizes, in which the difference of means for an item, country, subject, and grade is divided by the square root of the average of the corresponding variances, the average absolute value of the effect size for student-weighted versus class-weighted means is 0.036, while the corresponding average for student-weighted versus school-weighted means is 0.069. These average effect sizes are relatively small. Averages within grades and subjects vary little. Figure 6 illustrates the similarity of student-centered (x-axis of each panel) and class-centered means (y-axis of each panel) in the case of science in the eighth grade (complementary figures for all other scenarios, that is, school-centered means and both subjects in both grades, can be found in the Appendix; see Figs. 8, 9, 10, 11, 12, 13, 14). To place all items on the same scale, the minimum value of the item score is subtracted from the mean and the result is divided by the range of the item score; thus all values are between 0 and 1. For reference, the diagonal line has intercept 0 and slope 1. Clearly all points are very close to the line.
Nonetheless, despite the reported averages, it should be noted that effect sizes can sometimes be large. The most extreme case for comparison of student-centered and class-centered weights occurs in Pakistan for mathematics in the fourth grade for item ATBM10. In this case, the student-centered weighted mean is 2.683 and the class-centered weighted mean is 2.170. The respective weighted standard deviations are 1.671 and 1.479, so the effect size is 0.325. For comparison of student-centered and school-centered weighted means, the most extreme case is in the United States for mathematics in the eighth grade for item BTBM23. The student-centered weighted mean is 3.528, and the school-centered weighted mean is 2.910. The respective weighted standard deviations are 1.132 and 1.157. The corresponding effect size is 0.540. In these two instances, the difference in weighted means can have a substantial effect on interpretations of results.
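The effect size used here can be reproduced directly from the figures quoted in the text (the function name is ours):

```python
import math

def effect_size(mean_a, mean_b, sd_a, sd_b):
    """Absolute difference of means divided by the square root
    of the average of the two variances."""
    return abs(mean_a - mean_b) / math.sqrt((sd_a ** 2 + sd_b ** 2) / 2)

# Pakistan, fourth-grade mathematics, item ATBM10
# (student-centered vs. class-centered weighted means):
print(round(effect_size(2.683, 2.170, 1.671, 1.479), 3))   # 0.325

# United States, eighth-grade mathematics, item BTBM23
# (student-centered vs. school-centered weighted means):
print(round(effect_size(3.528, 2.910, 1.132, 1.157), 3))   # 0.540
```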
Standard errors are usually a significant concern in large-scale assessments because these studies rely on complex samples. These samples are characterized by features such as stratification and clustering, which prevent the use of standard formulas (assuming SRS) to estimate standard errors (Lohr, 1999). Looking at standard error estimates using both the SRS and the JRR approach, we investigate whether this may also be a concern for teacher-centered analysis. A summary of design effects is provided in Tables 9 and 10.
These design effects are averages over countries of the squares of the ratios of standard errors from JRR and SRS. Average design effects are often close to 1, especially in the unweighted case, but average ratios are much higher in weighted cases for the scales ATBGTJS and BTBGTJS. Thus the design effects indicate a small but nonnegligible effect of the complex design on standard errors, with clustering and unequal weights likely being the driving forces (see Meinck and Vandenplas (2021) for more details). The most extreme design effects are quite large. In the case of school-centered means for item ATBG02 in Latvia in fourth-grade mathematics, the design effect is about 28.9; however, there is a fundamental difficulty in this case because only one of 200 sampled teachers of mathematics in the fourth grade reports being male. In this case, instability of estimates of standard errors (and design effects) is not surprising. For class-centered means, the largest design effect, 14.4, arises in Australia for mathematics in the eighth grade for item BTBGTJS, pointing to a substantial clustering effect regarding job satisfaction of eighth-grade mathematics teachers in this country (i.e., teachers within the same school tend to have similar job satisfaction levels), inflated by the high variance in weights. For student-centered means, the most extreme ratio, 11.8, arises in Dubai for item ATBGTJS for mathematics in the fourth grade. Given these results, further analysis will be based on JRR, and a clear recommendation for using standard error estimation methods that account for the complex designs is warranted.
As evident from Tables 11 and 12, standard errors are a major concern for any of the weighted means under study. We noted above that a standard error of .08 standard deviation units might be deemed acceptable; however, even the average ratio between standard errors and standard deviations^{Footnote 6} exceeds this value for most variables and weighting scenarios, meaning that more than half of the ratios for specific countries are higher still. Student-centered and class-centered estimates have similar ratios of standard errors to standard deviations, and results for school-centered weights are somewhat worse. The least satisfactory results are associated with the job satisfaction scales ATBGTJS and BTBGTJS.
To check more thoroughly on the issue of standard errors, it is helpful to examine cumulative distribution functions of JRR ratios of standard errors to weighted standard deviations (scaled JRR standard errors). Figure 7 provides an example for school-centered weighted means (complementary figures for other grades, subjects, and weighting scenarios can be found in the Appendix, see Figs. 15, 16, 17, 18, 19, 20, 21), with the ratio of standard error to standard deviation on the x-axis of each panel and the cumulative distribution function on the y-axis. Clearly, results are rather variable across educational systems. As evident from the vertical line at 0.08, it is certainly not uncommon for ratios to be less than 0.08; however, occasionally ratios are about 0.3, pointing to very imprecise estimates. A basic issue is the existence of enough responses, which depends not only on sample size but also on participation. For example, the value of 0.332 for England involves only 86 responses, due to low participation rates at both school and teacher level. On the other hand, the issue is a bit more complicated. For example, for item BTBGTJS in the United States, 426 responses are present but the ratio is 0.240. Some explanation is provided by the effective sample size measure equal to the ratio of the square of the sum of the weights to the sum of the squares of the weights (Kish, 1965, p. 259). In the case of the United States, the sample size for science teachers in the eighth grade is 468, but the effective sample size for school-centered weights is only 32.7, pointing to a very large design effect of almost 15. The effective sample size for the United States is so low because some sampled schools have very low probabilities of being sampled and hence very high weights. These very low probabilities reflect very small school sizes.
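The Kish effective sample size cited above, and the implied design effect, can be sketched as follows (the weight vectors are illustrative values of our own choosing):

```python
import numpy as np

def kish_neff(w):
    """Kish effective sample size: (sum of weights)^2 / sum of squared weights."""
    w = np.asarray(w, float)
    return np.sum(w) ** 2 / np.sum(w ** 2)

# Equal weights leave n_eff at n; a few extreme weights shrink it sharply.
print(kish_neff([1, 1, 1, 1]))        # 4.0
w = [1] * 99 + [100]                  # one unit with a very high weight
print(round(kish_neff(w), 1))         # n = 100, but n_eff is about 3.9
# The implied design effect from unequal weighting is deff = n / n_eff.
```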
This effect on the weights could not be compensated even by a method applied in TIMSS and PIRLS to minimize fluctuations in sampling weights, that is, setting uniform selection probabilities when sampling small schools. For example, for the eighth grade, one sampled school had only one sampled student and another had only two. This result reflects a decision not to exclude very small schools from the American sample and a decision in TIMSS not to apply methods to reduce unusually high weights, which may be reconsidered in future cycles of TIMSS. The exclusion of small schools is not unusual in other educational systems participating in TIMSS, and standardizing this approach may be an effective measure to avoid large variance in weights also for the student sample. At the moment, TIMSS allows the exclusion of small schools covering up to 2% of the student population. For example, in Gauteng and Western Cape, schools in the sample for the eighth grade must have at least 10 students. Another reasonable approach to consider is the application of exclusion rules for teacher analysis that are not applied for student analysis, given the much smaller number of teachers in an educational system.
Overall, according to the considered scenarios, teacher-centered analysis seems to be possible with fairly reasonable precision using the MAIS approach, although some limits exist for specific variables and educational systems. In any case, the results suggest that analysis of teachers in any educational system participating in TIMSS generally cannot effectively examine subgroups, given the number of teachers sampled.
Summary, conclusions and recommendations
TIMSS and PIRLS expend significant effort and cost to collect and analyze data for an elaborate explanatory model covering student achievement in the areas of mathematics, science, and reading, and the contexts of learning these subjects. The ability to analyze teacher-level characteristics from proper samples drawn from teacher populations is not included in their study designs, as choices had to be made to keep the costs and complexity of these studies manageable. Still, a rich array of data related to teacher characteristics is collected, and scholars wish to use these data to investigate characteristics of teachers. This paper builds on the work by Hooper et al. (2022), extending their introduction of two approaches to deriving weights for teacher-centered analysis using TIMSS and PIRLS data by looking into aspects of the practical implementation of these approaches.
We began by proposing a definition of teacher target populations, tied to the grades and subjects they teach, in line with the focus of the two large-scale assessments. This definition should help to correctly and comprehensively identify all in-scope teachers within schools sampled for TIMSS and PIRLS, a prerequisite for accurate estimation of population characteristics. We then formalized the computation of teacher-centered weights and their use in deriving teacher-centered population estimates, and discussed some related issues and limitations. We highlighted the utility of both student-centered and teacher-centered analysis, depending on the research question to be answered, and disentangled the differences between the two types of weights. Next, we suggested a procedure and a form for collecting the data about teachers that is needed to derive teacher-centered weights, yet currently unavailable. This step is key if teacher-centered weights are to be derived in future cycles of TIMSS and PIRLS. Alternative forms or procedures may work, and optimal solutions may depend on the particular situation in participating countries. We recommend, however, a standardized procedure that can be applied in all countries, a feature that is important in ILSA to support their dense timelines, high quality standards, and production modes. Collecting this additional information demands slightly more work by school coordinators and a small adjustment in operations, which may be well justified given the possible gain in knowledge.
We also tackled the issue of nonresponse by proposing a nonresponse adjustment factor in line with existing approaches in ILSA, and noted the challenge of multiple selection probabilities when teachers teach in multiple schools, referring to solutions applied in other ILSA.
The core part of the paper focuses on the level of accuracy that can be expected when estimating teacher population characteristics. We looked into sample sizes, as they are a fundamental factor related to precision. We then used TIMSS 2019 data to simulate likely scenarios regarding the variance in weights. Having identified the MAIS method as the most effective for TIMSS and PIRLS, since it can handle within-school stratification, we continued only with this method. The results show that the different weighting scenarios (including using no weights) lead to relatively similar estimates, at least on average, but with differences large enough for specific variables and countries to warrant the recommendation to use teacher-centered weights, rather than student-centered weights, for analysis of teacher populations. Second, the results provide evidence for using weights together with an algorithm to estimate standard errors that accounts for the complex sampling design, as standard error estimates would otherwise be systematically biased. We find further that sample sizes and variance in weights significantly limit estimate precision; the large variation in weights, in particular, induces large design effects. Hence, while characteristics of whole teacher populations can be estimated with sufficient precision in the majority of countries, we discourage estimating subpopulation features (such as, for example, job satisfaction of male teachers), and we strongly recommend that, to avoid unreasonable interpretations, analysts thoroughly check the sample sizes and variances in weights of the populations of interest. If such research questions are nevertheless deemed of high interest, national research coordinators should discuss options to adjust the sampling design for their countries.
Options that would not jeopardize the core objective of TIMSS and PIRLS (that is, studying students) include increasing the number of schools or classes (and thereby teachers) selected and extending the teacher survey to teachers not sampled by way of student sampling.
The results presented here are of limited reliability, as they are based on plausible scenarios rather than real data that permit computation of teacher-centered weights. Therefore, the next step is actual implementation in one or more countries, followed by replication of the analysis presented here with real data, which would allow a critical evaluation of our results.
Availability of data and materials
The datasets generated and/or analyzed during the current study are available in the following repositories: IEA, TIMSS: https://www.iea.nl/datatools/repository/timss; IEA, PIRLS: https://www.iea.nl/datatools/repository/pirls; OECD, TALIS: https://www.oecd.org/education/talis/talis2018data.htm.
Notes
International Standard Classification of Education (ISCED).
OECD, TALIS 2018 Database, https://www.oecd.org/education/talis/talis2018data.htm (accessed on July 21, 2022).
TIMSS 2019 International Database, https://www.iea.nl/datatools/repository/timss (accessed on January 21, 2022).
PIRLS 2016 International Database, https://www.iea.nl/datatools/repository/pirls (accessed on January 21, 2022).
With TIMSS 2003, a so-called Benchmarking Program was introduced, which also allows sub-entities of countries to participate in the survey (Martin and Mullis, 2004). In the following, we use the term educational system for a participating country or benchmarking system.
Contrasting standard errors against standard deviations allows direct comparisons of sampling precision between populations or variables with different scales. In other contexts, the coefficient of variation is often used instead, but it has some drawbacks, for example it does not work well for scales with a mean of zero.
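This footnote can be illustrated numerically: under simple random sampling the ratio of the standard error of a mean to the standard deviation is 1/√n, so it is comparable across variables with different scales, whereas the coefficient of variation (SD divided by the mean) breaks down when the mean approaches zero. A small sketch with made-up values (the function names are illustrative):

```python
import statistics

def se_to_sd_ratio(values):
    """Ratio of the SRS standard error of the mean to the standard deviation.
    Equals 1/sqrt(n), regardless of the scale or location of the variable."""
    n = len(values)
    sd = statistics.stdev(values)
    se = sd / n ** 0.5
    return se / sd

def coefficient_of_variation(values):
    """SD divided by the mean -- unstable when the mean is near zero."""
    return statistics.stdev(values) / statistics.mean(values)

raw = [3.0, 5.0, 7.0, 9.0]            # some scale with mean 6
centered = [x - 6.0 for x in raw]      # same scale, shifted to mean 0
print(se_to_sd_ratio(raw))             # 0.5, i.e. 1/sqrt(4)
print(se_to_sd_ratio(centered))        # 0.5 again: shift-invariant
print(coefficient_of_variation(raw))   # well-defined here
# coefficient_of_variation(centered) would divide by zero -- undefined
```

The SE/SD contrast thus stays meaningful for centered scales (such as standardized index scores), which is the drawback of the coefficient of variation mentioned above.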
Abbreviations
ICCS: International Civic and Citizenship Study
ICILS: International Computer and Information Literacy Study
IEA: International Association for the Evaluation of Educational Achievement
ILSA: International large-scale assessments
HT: Horvitz-Thompson
ISCED: International Standard Classification of Education
JRR: Jackknife repeated replication
MAIS: Multiplicity-adjusted indirect sampling
OECD: Organisation for Economic Co-operation and Development
PPS: Probability proportional to size
PIRLS: Progress in International Reading Literacy Study
SRS: Simple random sampling
stchwgt: Student-centered teacher weights
ttchwgt: Teacher-centered teacher weights
TALIS: Teaching and Learning International Survey
TIMSS: Trends in International Mathematics and Science Study
References
Centurino, V.A.S., & Jones, L.R. (2017). TIMSS 2019 science framework. In: Mullis, I.V.S., Martin, M.O. (eds.) TIMSS 2019 Assessment Frameworks, Chestnut Hill, pp. 27–56 Chap. 2. http://timssandpirls.bc.edu/timss2019/frameworks/
Cochran, W.G. (1977). Sampling techniques, 3rd ed. Wiley.
Dumais, J., & Morin, Y. (2019). Sample design. In: OECD (ed.) TALIS 2018 Technical Report, pp. 96–108. OECD Publishing, Paris. Chap. 5. http://www.oecd.org/education/talis/
Fishbein, B., Foy, P., & Yin, L. (2021). TIMSS 2019 User Guide for the International Database (2nd Ed.). TIMSS & PIRLS International Study Center, Lynch School of Education and Human Development, Boston College and International Association for the Evaluation of Educational Achievement (IEA), Chestnut Hill, MA. https://timss2019.org/internationaldatabase/downloads/TIMSS2019UserGuidefortheInternationalDatabase2ndEd.pdf
Fraillon, J., Ainley, J., Schulz, W., Friedman, T., & Duckworth, D. (eds.). (2020). IEA International Computer and Information Literacy Study 2018 Technical Report. International Association for the Evaluation of Educational Achievement (IEA), Amsterdam . https://www.iea.nl/studies/iea/iccs/2016
Hájek, J. (1971). Discussion of “An essay on the logical foundation of survey sampling, part one” by D. Basu. In: Godambe, V.P., Sprott, D.A. (eds.) Foundations of Statistical Inference, p. 236. Holt, Rinehart, and Winston
Hooper, M., Broer, M., Yarnell, L. M., & Holmes, J. (2022). Talking about teachers: Would sampling weight adjustments allow for teachercentric inferences in future timss assessments? Studies in Educational Evaluation, 73, 101148. https://doi.org/10.1016/j.stueduc.2022.101148
Horvitz, D. G., & Thompson, D. J. (1952). A generalization of sampling without replacement from a finite universe. Journal of the American Statistical Association, 47, 663–685. https://doi.org/10.1080/01621459.1952.10483446
Johansone, I. (2020). Survey operations procedures for TIMSS 2019. In: Martin, M.O., von Davier, M., Mullis, I.V.S. (eds.) Methods and Procedures: TIMSS 2019 Technical Report, Chestnut Hill, MA, pp. 6.1–6.28. Chap. 6. https://timssandpirls.bc.edu/timss2019/methods
Joncas, M., & Foy, P. (2012). Sample design in TIMSS and PIRLS. In: Martin, M.O., von Davier, M., Mullis, I.V.S. (eds.) Methods and procedures: TIMSS and PIRLS 2011, Chestnut Hill, MA, pp. 1–21 . Chap. 3. https://timssandpirls.bc.edu/methods/
Kish, L. (1965). Survey sampling. Wiley.
Kish, L., & Frankel, M.R. (1974). Inference from complex samples. Journal of the Royal Statistical Society. Series B (Methodological), 36, 1–37. https://doi.org/10.2307/2984767
LaRoche, S., Joncas, M., & Foy, P. (2020). Sample design in TIMSS 2019. In: Martin, M.O., von Davier, M., Mullis, I.V.S. (eds.) Methods and Procedures: TIMSS 2019 Technical Report, Chestnut Hill, MA . Chap. 3. https://timssandpirls.bc.edu/timss2019/methods
Lindquist, M., Philpot, R., Mullis, I.V.S., & Cotter, K.E. (2017). TIMSS 2019 mathematics framework. In: Mullis, I.V.S., Martin, M.O. (eds.) TIMSS 2019 Assessment Frameworks, Chestnut Hill, pp. 11–26 . Chap. 1. http://timssandpirls.bc.edu/timss2019/frameworks/
Lohr, S. L. (1999). Sampling: Design and analysis. Duxbury Press.
Martin, M.O., & Mullis, I.V.S. (2004). Overview of TIMSS 2003. In: Martin, M.O., Mullis, I.V.S., Foy, P., Chrostowski, S.J. (eds.) TIMSS 2003 Technical Report, pp. 2–21. TIMSS & PIRLS International Study Center, Lynch School of Education, Boston College, Chestnut Hill, MA . Chap. 1. https://timssandpirls.bc.edu/timss2003i/technicalD.html
Martin, M.O., Mullis, I.V.S., Hooper, M. (eds.) (2017). Methods and Procedures in PIRLS 2016. TIMSS & PIRLS International Study Center, Lynch School of Education and Human Development, Boston College and International Association for the Evaluation of Educational Achievement (IEA), Chestnut Hill, MA . https://timssandpirls.bc.edu/publications/pirls/2016methods.html
Martin, M.O., von Davier, M., Mullis, I.V.S. (eds.) (2020). Methods and Procedures: TIMSS 2019 Technical Report. TIMSS & PIRLS International Study Center, Lynch School of Education and Human Development, Boston College and International Association for the Evaluation of Educational Achievement (IEA), Chestnut Hill, MA . https://timssandpirls.bc.edu/timss2019/methods/
Meinck, S. (2015a). Computing sampling weights in large-scale assessments in education. Survey Insights: Methods from the Field, Weighting: Practical Issues and ‘How to’ Approach. https://doi.org/10.13094/SMIF201500004
Meinck, S. (2015b). Sampling design and implementation. In: Fraillon, J., Schulz, W., Friedman, T., Ainley, J., Gebhardt, E. (eds.) International Computer and Information Literacy Study 2013 Technical Report, Amsterdam, pp. 67–86 . Chap. 6. https://www.iea.nl/publications/technicalreports/icils2013technicalreport
Meinck, S., & Cortes, D. (2015). Sampling weights, nonresponse adjustments and participation rates. In: Fraillon, J., Schulz, W., Friedman, T., Ainley, J., Gebhardt, E. (eds.) International Computer and Information Literacy Study 2013 Technical Report, Chestnut Hill, MA, pp. 87–112 . Chap. 7. https://www.iea.nl/publications/technicalreports/icils2013technicalreport
Meinck, S., & Vandenplas, C. (2021). Sampling design in ILSA. In: Nilsen, T., Stancel-Piątak, A., Gustafsson, J.E. (eds.), pp. 1–25. Springer, Cham. https://doi.org/10.1007/9783030382988_251
Mullis, I.V.S., Martin, M.O., Foy, P., Kelly, D.L., Fishbein, B. (eds.) (2020). TIMSS 2019 International Results in Mathematics and Science. TIMSS & PIRLS International Study Center, Lynch School of Education and Human Development, Boston College and International Association for the Evaluation of Educational Achievement (IEA), Chestnut Hill, MA . https://timssandpirls.bc.edu/timss2019/
OECD. (2014). TALIS 2013 Technical Report. OECD Publishing . https://www.oecd.org/education/school/TALIStechnicalreport2013.pdf
OECD. (2019a). PISA 2018 Assessment and Analytical Framework. OECD Publishing . https://doi.org/10.1787/b25efab8en
OECD. (2019b). TALIS 2018 Technical Report. OECD Publishing . http://www.oecd.org/education/talis/TALIS_2018_Technical_Report.pdf
Schulz, W. (2020). The reporting of ICILS 2018 results. In: Fraillon, J., Ainley, J., Schulz, W., Friedman, T., Duckworth, D. (eds.) IEA International Computer and Information Literacy Study 2018: Technical Report, pp. 221–234. International Association for the Evaluation of Educational Achievement (IEA) 2020, Amsterdam. Chap. 13. https://www.iea.nl/publications/technicalreports/icils2018technicalreport
Schulz, W., Carstens, R., Losito, B., Fraillon, J. (eds.). (2018). International Civic and Citizenship Education Study 2016: Technical Report. International Association for the Evaluation of Educational Achievement (IEA), Amsterdam. https://www.iea.nl/studies/iea/iccs/2016
Zuehlke, O., & Vandenplas, C. (2011). Sampling weights and participation rates. In: Schulz, W., Ainley, J., Fraillon, J. (eds.) ICCS 2009 Technical Report, Amsterdam, pp. 69–88. Chap. 7. https://www.iea.nl/studies/iea/iccs/
Acknowledgements
We would like to extend our sincere thanks to Diego Cortes and Umut Atasever for their most helpful review of this paper.
Funding
The research is funded by the International Association for the Evaluation of Educational Achievement (IEA).
Author information
Authors and Affiliations
Contributions
Shelby Haberman conducted the key analyses for this paper and was a major contributor in writing the manuscript. Sabine Meinck steered the project, contributed in major ways to the conceptional design of the research and structure of the paper, and wrote parts of the manuscript. AnnKristin Koop conducted supplementary analyses for the manuscript. She was responsible for definitions and explanatory parts in the manuscript and wrote parts of the paper. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
The authors give consent for the publication which can include data, graphics, and tables.
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Haberman, S.J., Meinck, S. & Koop, A.-K. Teacher-centered analysis with TIMSS and PIRLS data: weighting approaches, accuracy, and precision. Large-scale Assess Educ 12, 29 (2024). https://doi.org/10.1186/s40536-024-00214-x
Received:
Accepted:
Published:
DOI: https://doi.org/10.1186/s40536-024-00214-x