Patterns of students’ computer use and relations to their computer and information literacy: results of a latent class analysis and implications for teaching and learning
Large-scale Assessments in Education volume 5, Article number: 16 (2017)
Previous studies have shown that there is a complex relationship between students’ computer and information literacy (CIL) and their use of information and communication technologies (ICT) for both recreational and school use.
This study seeks to dig deeper into these complex relations by identifying different patterns of students’ school-related and recreational computer use in the 21 countries participating in the International Computer and Information Literacy Study (ICILS 2013).
Latent class analysis (LCA) of the student questionnaire and performance data from the ICILS 2013 study revealed different patterns of ICT use; these patterns could be related to differences in students’ CIL scores. These analyses support the conclusions of previous studies, which found, in many cases, a ‘hill shape’ in the data, suggesting that both low and extensive use of computers may be correlated with lower scores on the CIL scale, while intermediate use is correlated with higher scores.
The study identifies interesting differences between countries, and, in addition to the hill shape, both a ‘plateau shape’ and a ‘hill-valley shape’ were apparent in the data, raising important questions about differences in contexts.
In the international report from the International Computer and Information Literacy Study (ICILS) 2013, the analyses showed a positive relationship between the use of computers and computer and information literacy (CIL) for some purposes of computer use and in some countries, but not in all (Fraillon et al. 2014, p. 234ff.). In this paper we use latent class analysis (LCA) to look further into students’ recreational and school-related computer use in order to identify patterns of students’ use of information and communication technologies (ICT). Furthermore, we investigate differences in the relations between these recreational and school-related computer use patterns and CIL. To do so, secondary analyses of the ICILS 2013 data are conducted using the student data set from the 21 education systems that participated in the study.
First, the theoretical framework as well as the current state of research will be presented. After developing the research question of this paper, the sample, the measurement instruments as well as the statistical techniques are described and the results presented.
Earlier studies and theoretical framework
The question of whether the use of computers is related to students’ achievement has been asked for many years now. A number of studies have shown a primarily negative relation between use of computers and student performance in subjects other than ICT. Biagi and Loi used PISA 2009 data to show that on four constructed scales, only gaming had a mostly positive relation to student performance across countries (Biagi & Loi 2013), while the use of computers for other activities, both in school and out of school, was negatively correlated with student performance in most countries. Based on 2012 data from PISA, the OECD showed that “overall, the relationship between computer use at school and performance is graphically illustrated by a hill shape, which suggests that limited use of computers at school may be better than no use at all, but levels of computer use above the current OECD average are associated with significantly poorer results” (OECD 2015, p. 146). But the OECD also shows that the outcome of the use of computers depends on the context and the types of use, concluding that “overall, the evidence from PISA, as well as from more rigorously designed evaluations, suggests that solely increasing access to computers for students, at home or at school, is unlikely to result in significant improvements in education outcomes. Furthermore, both PISA data and the research evidence concur on the finding that the positive effects of computer use are specific—limited to certain outcomes, and to certain uses of computers” (OECD 2015, p. 163).
Gerick et al. (2014) surveyed a number of studies and meta studies from both national and international contexts and concluded that while some studies show a positive relation between computer use and academic outcome, the overall conclusion is that the effect of computer use is dependent on the teaching methods and the contexts, and that there is still a lack of representative studies based on valid broad assessments of academic knowledge and skills (Gerick et al. 2014, p. 221). Comi et al. (2017) were able to show that specific teaching practices related to ICT were indeed better than others to improve students’ achievement. The successful practices were: “aimed at increasing students’ awareness of ICT use and at improving their navigation critical skills, developing students’ ability to distinguish between relevant and irrelevant material and to access, locate, extract, evaluate and organize digital information” (Comi et al. 2017, p. 36f.).
In a recent review Bulman and Fairlie investigated the impact of investment in computers in school, the use of computer-assisted instruction (CAI), and the use of computers at home. Their conclusion is lukewarm: “Theoretically, the net effects of ICT investments in schools, the use of CAI in schools, and the use of computers at home on educational outcomes are ambiguous […] Schools should not expect major improvements in grades, test scores, and other measures of academic outcomes from investments in ICT or adopting CAI in classrooms” (Bulman & Fairlie 2016, p. 275).
There has been limited research into the relations between the use of ICT in- and outside of school and students’ ICT competences or Computer and Information Literacy (CIL) (Alkan & Meinck 2016, p. 2). But in recent years a number of studies have looked more specifically into the topic. The results have been somewhat diverse. A positive relation between use of computers and students’ CIL was shown in an intervention study that compared classrooms where students used computers as part of a prescribed curriculum, with classrooms with no or little use of computers (Spektor-Levy & Granot-Gilat 2012). A study using data from ICILS 2013 also showed a positive relation between CIL and use of ICT for social communication (Alkan & Meinck 2016, p. 14). Some studies are not able to show a relation at all (Scherer et al. 2017, p. 496), and some even find negative relations between use of ICT and CIL (Hatlevik et al. 2015, p. 228).
In this study we use ICILS data to look into the question of the relation between ICT use and student achievement. ICILS 2013 measured computer and information literacy (CIL) among Grade 8 students. CIL is defined as “an individual’s ability to use computers to investigate, create, and communicate in order to participate effectively at home, at school, in the workplace, and in society” (Fraillon et al. 2013, p. 17), and CIL is conceptualized in two strands: “collecting and managing information” and “producing and exchanging information”. The final instrument measured these two strands as one dimension (Fraillon et al. 2014, p. 73). As part of the study, students were asked to fill out a questionnaire about kinds of computer use at home and in school. From a common sense point of view, use of computers in school should positively affect CIL. But, as stated earlier, even though there is a small but positive relationship between computer use in school and the measured CIL in the international average, it is not significant in most of the ICILS countries.
By digging deeper into differences in students’ reports on computer use, this study will identify qualitatively different patterns in use and relations between these patterns and CIL.
Accordingly, this article addresses the following research questions:
Is it possible to empirically identify patterns of students’ computer use for school-related and recreational purposes in the 21 education systems participating in ICILS 2013?
If so, are there differences in the distribution of the identified clusters across these 21 education systems?
Are there relations between the identified patterns and the students’ level of computer and information literacy in the 21 education systems participating in ICILS 2013?
Sample, measurement instruments, and statistical analyses
The data for the secondary analyses are derived from the International Computer and Information Literacy Study (ICILS 2013) conducted by the International Association for the Evaluation of Educational Achievement (IEA). For the first time, the computer and information literacy of Grade 8 students was examined in an international comparison using computer-based testing. Furthermore, information on teaching and learning with ICT was collected using questionnaires for students, teachers, school principals, and ICT coordinators as well as a national context questionnaire (Jung and Carstens 2015). Overall, 21 education systems participated in ICILS 2013. The student data set contains 59,430 students.
Cases with missing values in any of the relevant variables were omitted from the analyses. Overall, the data of 57,989 Grade 8 students from the ICILS 2013 study could be taken into account across the 21 participating education systems. It should be noted that five education systems did not meet the IEA’s sampling requirements, although all of them reached a student participation rate of 80% or above (Denmark 87.8%, Hong Kong 89.1%, the Netherlands 87.7%, Switzerland 89.7%, Buenos Aires 80.2%; cf. Bos et al. 2014, p. 331). In the interest of an international comparison covering all participating education systems, these five systems have nonetheless been included in the analyses. Their data are more bias-prone and should be interpreted with caution.
To conduct the secondary analyses in order to identify different patterns of students’ school-related and recreational computer use, we used five international scales, which were developed based on the items from four questions from the ICILS 2013 student questionnaire relating to use of computers at home and in school (Jung & Carstens 2015, p. 268ff.). All of these four questions dealt with the frequency (“How often do you use…”) of using ICT (computer or the Internet) in different contexts for different purposes: outside of school for different activities (Q18, Q19), for different out-of-school activities (Q20) and for different school-related purposes (Q21).
The five scales are presented in Table 1.
All scales are internationally standardized to a mean of 50 and a standard deviation of 10. For our secondary analyses, we z-standardized the scales to a mean of 0 and a standard deviation of 1.
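The two-step standardization described above can be sketched in a few lines. This is an illustrative sketch with invented scores, not the ICILS data pipeline:

```python
import numpy as np

def z_standardize(scale):
    """Rescale a score vector to mean 0 and standard deviation 1."""
    scale = np.asarray(scale, dtype=float)
    return (scale - scale.mean()) / scale.std()

# Invented scores on an ICILS-style scale (internationally
# standardized to mean 50, SD 10); the values are illustrative only.
raw = np.array([50.0, 60.0, 40.0, 55.0, 45.0])
z = z_standardize(raw)
```

A z-standardized scale makes the cluster profiles in the LCA directly readable as deviations from the international average in standard-deviation units.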
Furthermore, the students’ achievement data from the computer-based competence test used in ICILS 2013 to measure CIL were used, taking into account the five plausible values (Jung & Carstens 2015).
In order to answer the first research question, a latent class analysis (LCA) was conducted. LCA is a method of identifying latent clusters based on probabilistic test models (Lazarsfeld and Henry 1968; McCutcheon 1987) using manifest indicators (Geiser 2013). The aim of the LCA is to determine the probability that a case in a data set belongs to a certain cluster. Cases within a cluster should be the most similar, while the differences between distinct clusters should be as large as possible. In the context of this article, each student represents a case, and the LCA seeks to cluster these students according to probability using the five aforementioned z-standardized scales.
To identify the statistically optimal number of clusters, different statistical models are estimated separately and subsequently compared using the analysis software Mplus 7.0 (Muthén and Muthén 2012). To compare the different models, information criteria such as the Akaike Information Criterion (AIC; Akaike 1974) or the Bayesian Information Criterion (BIC; Schwarz 1978) can be used. Lower AIC and BIC values for a model indicate a better model fit (Rost 2004).
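The study ran these comparisons in Mplus, which is commercial software. As an open-source illustration of the same model-comparison logic (our analogy, not the authors’ pipeline): an LCA over continuous indicators corresponds to a latent profile model, i.e. a Gaussian mixture, so scikit-learn’s GaussianMixture can demonstrate on synthetic data how AIC/BIC select the number of classes:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
# Synthetic stand-in for five z-standardized use scales:
# two latent groups with clearly separated mean profiles.
X = np.vstack([
    rng.normal(-1.0, 0.5, size=(300, 5)),  # low-use group
    rng.normal(+1.0, 0.5, size=(300, 5)),  # high-use group
])

# Fit mixture models with 1-4 latent classes and compare
# information criteria; lower AIC/BIC means better fit after
# penalizing model complexity.
fits = {k: GaussianMixture(n_components=k, random_state=0).fit(X)
        for k in range(1, 5)}
bics = {k: m.bic(X) for k, m in fits.items()}
best_k = min(bics, key=bics.get)
```

With groups this well separated, the two-class model yields the lowest BIC. On real data, classification quality (e.g. average latent class probabilities) should be checked alongside the information criteria.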
Since the number of schools varies in the countries that participated in ICILS 2013, and to make sure that each country contributes the same proportion of data into the LCAs, the student weights in all schools across the 21 education systems were rescaled (Gonzalez 2012) to a sample size of 500 in each country. This was done by multiplying the original weights with the constant 500 and then dividing the product by the sum of the original student weights. The advantage of this procedure is the resultant equal weighting of the countries irrespective of the individual sample size within the country. To analyze the distribution of the clusters across the 21 education systems in the ICILS 2013 study, the results of the LCA from Mplus were matched with the international student dataset. Descriptive statistics were subsequently obtained using the IEA IDB Analyzer 3.1 (Rutkowski et al. 2010) and the total student weight provided by the international ICILS 2013 database (Jung and Carstens 2015).
To answer the second research question, the IDB Analyzer 3.1 was used to compare the means in the student achievement (plausible values) between the different clusters by country.
The following section presents the results of the secondary analysis of the ICILS 2013 data, structured by the research questions presented earlier. It first examines whether the participating Grade 8 students can be empirically clustered by their school-related and recreational computer use on the basis of the latent class analysis, thereby identifying different patterns. The distribution of these clusters or patterns across the countries that participated in ICILS 2013 is also presented. The detailed examination of the relation between the clusters and students’ CIL forms the focus of the results regarding the second research question.
Research question 1a. Identification of different patterns of students’ computer use
To answer the first research question, we first had to analyze whether it is possible to identify different clusters or patterns of students’ computer use across the 21 education systems that participated in ICILS 2013. The comparison of different LCAs conducted with Mplus to identify the optimal number of clusters revealed that the three-cluster model best describes the data.
In the model selected, the average latent class probabilities for the most likely latent class membership are 0.9 or above and are thus good for all three identified clusters (Rost 2006). Figure 1 shows the distribution of the three clusters or patterns across the five scales representing the students’ school-related and recreational computer use. As described in the methods section, the scales were z-standardized and therefore have a mean of 0 and a standard deviation of 1.
As Fig. 1 shows, over three quarters of the students (77.2%) have the highest probability of being categorized into cluster 2. The students in this cluster can be characterized as having an average frequency of school-related and recreational computer use. In comparison, students in cluster 1, comprising 11.5 percent of the Grade 8 students included in the analyses, can be characterized as having a low frequency use pattern. Particularly on the scales Use of ICT for communication (USECOM) and Use of ICT for study purposes (USESTD), their frequency of ICT use is far below the average. Finally, students in cluster 3 (11.3%) can be characterized as having a high frequency use pattern. These students stand out as having a particularly high frequency of computer use for exchanging information (USEINF). Except for Use of ICT for recreation (USEREC), all scales are about or more than one standard deviation above the average frequency.
The three groups are closest to each other in the frequency of USEREC and farthest away from each other in the frequency of USESTD and Use of ICT for exchanging information (USEINF).
Table 2 shows the means of all five scales by cluster.
Overall, the results show that it is possible to empirically identify different patterns of students’ computer use. The next step is to look at the distribution of the three patterns across the 21 education systems.
Research question 1b. Distribution of the patterns across countries
Table 3 shows the percentages of students that can be classified into the three identified patterns of computer use for school-related or recreational purposes for each of the 21 education systems that participated in ICILS 2013. For each participating education system, significance is tested by comparing the percentage in each of the three clusters with the international average for that cluster.
The highest percentage of students across all 21 education systems is in cluster 2, which consists of students with an average frequency use pattern. The percentage differs by country, from nearly nine out of ten students (86.6%) in Norway to less than two-thirds (63.1%) in the Republic of Korea.
Larger differences between the education systems can be found when looking into clusters 1 and 3. In 12 education systems, a higher percentage of students can be categorized in cluster 3, the high frequency use pattern, than in cluster 1, the low frequency use pattern. In comparison, in nine education systems, more students can be categorized in cluster 1 than in cluster 3. The highest percentages of students categorized in cluster 1 can be found in the Republic of Korea (31.7%), Germany (18.5%), Hong Kong (17.3%), and Buenos Aires (17.2%), with the smallest percentages in Denmark (2.2%), Norway (4.6%), and the Russian Federation (5.6%). The highest percentages of students categorized in cluster 3, the high frequency use pattern, are in the Russian Federation (28.2%), Turkey (16.6%), and Poland (13.9%), whereas the lowest percentages can be found in Germany (3.1%) and Switzerland (3.3%).
To sum up, in all 21 education systems the highest percentages of students have an average frequency use pattern, even though the percentages differ between education systems. Furthermore, in some education systems, more students have a low frequency use pattern, whereas in the other countries, more students have a high frequency use pattern.
Research question 2. Relation between the identified clusters and student CIL
After the identification of different patterns of students’ computer use, the question arises whether there is a relation between these patterns and the students’ computer and information literacy. To answer this question, comparisons of students’ average CIL per country were conducted. The results show differences between education systems in terms of the relations between the patterns of students’ computer use and their CIL. In the following, countries with similar results are presented together.
Figures 2 and 3 as well as Table 4 show that in 12 of the 21 education systems, the comparisons reveal significant mean differences in students’ CIL between students categorized in the low frequency use pattern and students categorized in the average and/or high frequency use patterns. The mean differences between the average and high use patterns are not significant in any of these countries. Hence, in these countries students’ CIL increases as their use of ICT rises to an average level (although we do not know whether this relation is causal). After that, CIL neither increases nor decreases. We call this a plateau shape.
Figure 4 and Table 5 show the results for the Czech Republic, Germany, the Netherlands, and Switzerland (Footnote 1). In these four countries, students categorized in the average use pattern achieve significantly higher CIL than students with both low and high frequency use patterns. Furthermore, the mean CIL of the low and high frequency use patterns does not differ significantly. Hence, in this group of countries, we see the same hill shape that has been identified in a number of earlier studies (OECD 2015).
Figure 5 and Table 6 show the results for the four education systems for which significant differences in mean CIL between all three clusters can be identified. For Turkey, the results show a continuously increasing slope: students with the high frequency use pattern have higher CIL than students with average frequency use, who in turn have higher CIL than students with the low frequency use pattern. In Slovenia, Denmark, and Hong Kong, students with the average frequency use pattern achieve significantly higher CIL than students with the low or high frequency use pattern (Footnote 2). Students with a high frequency use pattern achieve higher CIL scores than students with a low frequency use pattern. We call this a hill-valley shape.
The results for Buenos Aires show that students with average use patterns score significantly better on the CIL test than students with a low frequency use pattern. The other differences are not significant. Given that Buenos Aires did not meet the sampling requirements and is the only participant in the study with this pattern, we will not go further into this.
To sum up, students who use computers with low frequency generally score lower than students who use them more often. However, using computers at high frequencies does not seem to help students when it comes to their computer and information literacy. Here, there are interesting differences between the three groups of countries (see Figs. 2–5). In the first group, the high frequency users are at the same level as the average frequency users. We call this the plateau shape. In the second group of countries, students with high and low frequency use patterns are at the same CIL level. We follow the OECD (2015) in calling this the hill shape. In the third group of countries, the shape also looks like a hill, but it does not fall to the level of the students with low frequency use. We call this the hill-valley shape.
One country, Turkey, has what we will call a linear relationship between frequency of use and CIL scores. It looks like the same could be the case for Thailand, but the differences are not significant. These two countries have the lowest mean CIL scores, which could suggest a difference in how increased computer use affects lower- and higher-performing countries. We encourage others to look deeper into this.
Summary of the results
The aim of this article was to take a closer look at students’ use of computers and the relation of this use to their computer and information literacy (CIL). We conducted a latent class analysis using the student data set of the International Computer and Information Literacy Study (ICILS 2013) across all 21 participating education systems. We identified different use patterns with regard to students’ school-related and recreational computer use, and related these patterns to students’ CIL.
The results for the first research question show that it is empirically possible to identify three different patterns of students’ computer use. Globally, these three patterns can be described as low, average, and high frequency use patterns. The percentages of students categorized in these three patterns differ between the 21 education systems. Nevertheless, across all 21 participating countries the highest percentage of students has an average use pattern.
The results of the analyses regarding the second research question show that students with an average use pattern achieve the highest CIL scores in all countries except Turkey. Therefore our analyses at first seem to support the conclusions of previous studies of a hill shape, where both low and high use of computers is correlated with lower scores on the CIL scale. But in our study we find two other shapes, a plateau and a hill-valley shape, suggesting that in some countries students with high use patterns have either the same CIL as those with the average use pattern or a higher CIL than students with a low use pattern.
When interpreting the results, one has always to keep in mind that the ICILS 2013 data is cross-sectional and relations cannot be interpreted as causalities.
Discussion of the results
The results we present in this paper are to some extent counterintuitive. Common sense would suggest that students who use computers more, both in and out of school, would be more skilled at handling computers and using them for information-related tasks. However, it is not that simple. One reason could be that computers are given, to a higher extent, to students with lower academic achievement to support their learning. The results in this study are difficult to explain with that argument: first, the use patterns we have identified are consistent across school and out-of-school contexts, as well as across school-related and recreational use, which suggests that students who use computers a great deal do so both in and out of school. Second, computers are used by very high percentages of students (Fraillon et al. 2014, p. 129f.), and therefore it seems unlikely that students with lower abilities would use them consistently more than students with higher abilities. Another explanation for the differences in CIL scores, in line with the first suggestion, could be differences in socio-economic status or other covariates. Our study has not looked into this.
Given that there are three distinct groups of countries where the use patterns have different relations to students’ CIL, we hypothesize that the reasons for the differences should be found at the contextual level, either in the country culture, education system organization, methods of integrating ICT in schools, or in differences in teachers’ approaches to teaching. We therefore suggest further studies into the relations between the results we have presented in this paper and the contexts and teaching practices in the participating countries, e.g. by using the data from the teacher questionnaire in the ICILS study.
Footnote 1: The last two of these countries did not meet the sampling requirements, so the results from these countries should be considered with some caution (for further information see the section “Sample, measurement instruments, and statistical analyses”).
Footnote 2: The last two of these countries did not meet the sampling requirements, so the results from these countries should be considered with some caution (for further information see the section “Sample, measurement instruments, and statistical analyses”).
Akaike, H. (1974). A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19(6), 716–723
Alkan, M., & Meinck, S. (2016). The relationship between students’ use of ICT for social communication and their computer and information literacy. Large-Scale Assessments in Education, 4(15), 1–17. doi:10.1186/s40536-016-0029-z.
Biagi, F. & Loi, M. (2013). Measuring ICT Use and Learning Outcomes: evidence from recent econometric studies. European Journal of Education, 48(1), 28–42
Bos, W., Eickelmann, B., Gerick, J., Goldhammer, F., Schaumburg, H., Schwippert, K., Senkbeil, M., Schulz-Zander, R. & Wendt, H. (Eds.) (2014). ICILS 2013. Computer- und informationsbezogene Kompetenzen von Schülerinnen und Schülern in der 8. Jahrgangsstufe im internationalen Vergleich [ICILS 2013 - Computer and information literacy of Grade 8 students in an international comparison]. Münster: Waxmann.
Bulman, G., & Fairlie, R.W. (2016). Technology and education. In: Handbook of the economics of education (Vol. 5, pp. 239–280). Elsevier. Retrieved from http://linkinghub.elsevier.com/retrieve/pii/B9780444634597000051.
Comi, S. L., Argentin, G., Gui, M., Origo, F., & Pagani, L. (2017). Is it the way they use it? Teachers, ICT and student achievement. Economics of Education Review, 56, 24–39. https://doi.org/10.1016/j.econedurev.2016.11.007.
Fraillon, J., Ainley, J., Schulz, W., Friedman, T., & Gebhardt, E. (2014). Preparing for life in a digital age. The IEA international computer and information literacy study international report. Cham: Springer. http://link.springer.com/book/10.1007%2F978-3-319-14222-7.
Fraillon, J., Schulz, W., & Ainley, J. (2013). International computer and information literacy study assessment framework. Amsterdam: International association for the evaluation of educational achievement (IEA). Retrieved from http://www.iea.nl/fileadmin/user_upload/Publications/Electronic_versions/ICILS_2013_Framework.pdf.
Geiser, C. (2013). Data Analysis with Mplus. New York: Guilford Press.
Gerick, J., Eickelmann, B. & Vennemann, M. (2014). Zum Wirkungsbereich digitaler Medien in Schule und Unterricht. Internationale Entwicklungen, aktuelle Befunde und empirische Analysen zum Zusammenhang digitaler Medien mit Schülerleistungen im Kontext internationaler Schulleistungsstudien [On the impact of digital media in school and teaching. International developments, current findings, and empirical analyses of the relationship between digital media and student achievement in the context of international school achievement studies]. In: Holtappels, H.G., Willems, A.S., Pfeifer, M., Bos, W., & McElvany, N. (Eds.), Jahrbuch der Schulentwicklung. Band 18. Daten, Beispiele und Perspektiven (pp. 206–238). Weinheim: Beltz Juventa.
Gonzalez, E. (2012). Rescaling sampling weights and selecting mini-samples from large-scale assessment databases. IERI Monograph Series Issues and Methodologies in Large-Scale Assessments, 5, 115–134.
Hatlevik, O. E., Ottestad, G., & Throndsen, I. (2015). Predictors of digital competence in 7th grade: a multilevel analysis: Predictors of digital competence. Journal of Computer Assisted Learning, 31(3), 220–231. https://doi.org/10.1111/jcal.12065.
Jung, M., & Carstens, R. (Eds.) (2015). ICILS 2013 User Guide for the International Database. Amsterdam: IEA Secretariat. Retrieved from http://pub.iea.nl/fileadmin/user_upload/Publications/Electronic_versions/ICILS_2013_IDB_user_guide.pdf.
Lazarsfeld, P. F., & Henry, N. W. (1968). Latent structure analysis. Boston: Houghton Mifflin.
McCutcheon, A. C. (1987). Latent class analysis. Beverly Hills: Sage Publications.
Muthén, L. K., & Muthén, B. O. (2012). Mplus Version 7 [Computer software]. Los Angeles, CA: Muthén & Muthén.
OECD (2015). Students, computers and learning. Paris: OECD Publishing. Retrieved from http://www.oecd-ilibrary.org/education/students-computers-and-learning_9789264239555-en.
Rost, J. (2004) Lehrbuch Testtheorie – Testkonstruktion [Textbook on the Theory and Construction of Tests]. Bern: Hans Huber.
Rost, J. (2006). Latent-Class-Analyse [Latent Class Analysis]. In F. Petermann, & M. Eid (eds.), Handbuch der Psychologischen Diagnostik (pp. 275–287). Göttingen: Hogrefe.
Rutkowski, L., Gonzalez, E., Joncas, M., & von Davier, M. (2010). International large-scale assessment data. Educational Researcher, 39(2), 142–151.
Scherer, R., Rohatgi, A., & Hatlevik, O. E. (2017). Students’ profiles of ICT use: Identification, determinants, and relations to achievement in a computer and information literacy test. Computers in Human Behavior, 70, 486–499. https://doi.org/10.1016/j.chb.2017.01.034.
Schwarz, G. (1978). Estimating the Dimension of a Model. The Annals of Statistics, 6(2), 461–464
Spektor-Levy, O., & Granot-Gilat, Y. (2012). The Impact of Learning with Laptops in 1:1 Classes on the Development of Learning Skills and Information Literacy among Middle School Students. Interdisciplinary Journal of E-Learning and Learning Objects, 8(1), 83–96.
JB was lead author on the Introduction and the Earlier studies and theoretical framework sections. JG did the statistical analysis and was lead author on the Sample, Measurement Instruments, and Statistical Analyses section. The Results and Conclusions sections were written collaboratively by both authors. Both authors read and approved the final manuscript.
The authors declare that they have no competing interests.
Availability of data and materials
The dataset supporting the conclusions of this article is available in the IEA Data Repository: http://www.iea.nl/our-data.
Cite this article
Bundsgaard, J., Gerick, J. Patterns of students’ computer use and relations to their computer and information literacy: results of a latent class analysis and implications for teaching and learning. Large-scale Assess Educ 5, 16 (2017). https://doi.org/10.1186/s40536-017-0052-8
Keywords
- Computer use
- Latent class analysis (LCA)
- Computer and information literacy