Exploration of the linear and nonlinear relationships between learning strategies and mathematics achievement in South Korea using the nominal response model: PISA 2012
Large-scale Assessments in Education, volume 12, Article number: 11 (2024)
Abstract
Learning strategies have been recognized as important predictors of mathematical achievement. In recent studies, it has been found that Asian students use combined learning strategies, primarily including metacognitive strategies, rather than rote memorization. To the best of the authors’ knowledge, there is only one prior study including South Korea in investigations of the relationship between learning strategies and mathematics achievement in PISA 2012. In that study, students were classified into groups using specific learning strategies, and their mathematics achievements were compared. There are two research gaps: (1) previous studies insufficiently explored how students use learning strategies in the South Korean education system, and (2) there is little research applying the nominal response model (NRM) to explore the association between learning strategy use and mathematics achievement in PISA 2012. Thus, the present study explores to what extent the NRM fits the data compared to the generalized partial credit model (GPCM). We created a learning strategy score from the NRM for South Korean students in PISA 2012 (N = 3,310). Then, using correlation analysis and quadratic regression analysis, we identified linear and nonlinear relationships between learning strategy scores from the NRM and mathematics achievement. The findings indicated that (1) the NRM was a better fit for creating learning strategy scores than the GPCM, (2) the average correlation coefficient between the learning strategy score and mathematics achievement was 0.18 (p < .05), and (3) for the curvilinear relationship between the learning strategy score and mathematics achievement, the standardized quadratic coefficient was −0.090 (p < .001). Overall, the NRM represents an appropriate model for explaining the relationship between learning strategy and mathematical achievement. Additionally, high-performing South Korean students tend to primarily use metacognitive strategies with memorization.
The negative quadratic coefficient captured the limited effect of the primary use of metacognitive strategies with memorization. The implications for the South Korean education system are discussed.
Introduction
Learning strategies can be defined as behaviors and thoughts in which a learner engages or that are intended to influence the learner’s encoding process. As goal-oriented activities, learning strategies are used for acquiring, organizing, or transforming information, as well as for reflecting upon and guiding the learning process (Weinstein & Mayer, 1986). These strategies have also been recognized as important predictors of academic achievement (Hong et al., 2006). The Program for International Student Assessment (PISA), an international large-scale assessment developed by the Organization for Economic Cooperation and Development (OECD), measures 15-year-old students’ use of learning strategies (OECD, 2012). PISA uses three learning strategies—memorization, elaboration, and metacognitive strategies—and defines them as follows: memorization involves learning key terms and repeated learning of materials; elaboration includes making connections to related areas and thinking about alternative solutions; and metacognition involves planning, monitoring, and regulation (OECD, 2005; Zimmerman, 2001). In PISA 2012, the ordinal scale of the learning strategy items was converted into a nominal scale.
By analyzing learning strategy items in PISA, research has demonstrated that these mathematics learning strategies are associated with students’ mathematics achievement (Areepattamannil & Caleon, 2013; Kiliç et al., 2012; Lin & Tai, 2015; Wu et al., 2020). For instance, Areepattamannil and Caleon (2013) found that in East Asian education systems, including Shanghai-China, Hong Kong-China, Korea, and Singapore, memorization strategies were negatively associated with mathematics achievement, and the magnitude of the negative correlation differed among the countries. Some studies have also explored the use of learning strategies without using PISA data. Elaboration strategies and metacognitive strategies have been found to be positively correlated with learning achievement across 34 countries (Chiu et al., 2007), including Germany (Glogger et al., 2012; Murayama et al., 2013), Hong Kong (McInerney et al., 2012), and Sweden (Rosander & Bäckström, 2012). Memorization is generally considered less effective than other learning strategies (e.g., elaboration and metacognitive strategies; McInerney, 2012).
Among studies that have investigated learning strategies for mathematics in the East Asian educational system (Lin & Tai, 2015; Liu et al., 2019; Wu et al., 2020), only Wu et al. (2020) included South Korea in the East Asian data of PISA 2012. In fact, very limited research has examined the relationship between learning strategies and mathematics achievement in the South Korean education system (e.g., an exam-driven culture). Furthermore, considering that the magnitude of the correlation coefficients between learning strategy use and mathematics achievement differs across countries (Areepattamannil & Caleon, 2013; Wu et al., 2020), research focusing on a specific education system is necessary to better understand learning strategy use and mathematics achievement.
Thus, this study focuses on the South Korean education system for two reasons. First, owing to features of the Korean education system, Korean students are encouraged to use memorization strategies. In South Korea, the College Scholastic Ability Test (CSAT) is considered the sole determinant of which university a student can attend (Blazer, 2012). The CSAT has led students to rely on memorization strategies to learn test-taking skills and improve their ability to solve multiple-choice questions in a limited amount of time (Kim, 2004). Second, South Korea consistently shows strong mathematics performance in international large-scale assessments (Choi et al., 2019; Park, 2004). For instance, South Korea ranked 4th in PISA 2009, 5th in PISA 2012, 7th in PISA 2015, and 5th in PISA 2018.
In the present study, we use the nominal response model (NRM) to score Korean students’ learning strategies in PISA 2012 and examine the association between learning strategy use and mathematics achievement. The NRM is an item response theory (IRT) model for modeling the probability of responses to items with nominal categories (i.e., unordered responses; Zu & Kyllonen, 2020), such as the learning strategy items in PISA 2012. To explore the extent to which the NRM fits the data, we compare it to the generalized partial credit model (GPCM), which assumes an order of categories (i.e., an ordinal relationship between learning strategies) in IRT modeling. The advantage of applying IRT in scoring, compared to using sum scores, is that a parametric model can be used to estimate the uncertainty of the point estimates (i.e., standard errors), which can be considered in the subsequent analysis of the relationship between learning strategy and mathematics achievement using the plausible values approach.
This study aims to examine and explore the relationship between learning strategy use and mathematics achievement in the South Korean education system. We explore (1) the extent to which the NRM fits the data and (2) the linear and nonlinear relationships between learning strategy use and mathematics achievement in South Korea. To achieve the first goal, we compared the NRM to the GPCM and expected that the NRM would fit the data better than the GPCM because of the nominal nature of the learning strategies. The second goal was achieved by conducting a correlation analysis between the Korean students’ learning strategy scores from the NRM and mathematics scores, as well as a correlation analysis between the raw scores of single strategies and mathematics scores in PISA 2012. In addition, to examine the nonlinear relationships between these variables, we used quadratic regression analysis.
The article is organized as follows. We review the literature on learning strategies using self-regulated learning theory, the relationships between learning strategy use and mathematics achievement, and learning strategies used in the East Asian and South Korean contexts. We briefly introduce two IRT models, the NRM and the GPCM, and then generate the two research questions according to the research gaps. The methodology section describes the South Korean sample and the measures of learning strategies and mathematics achievement in PISA 2012. In the statistical analysis, a model comparison between the NRM and the GPCM is performed to answer the first research question. Then, correlation analyses and nonlinear regression analysis between learning strategy use and mathematics achievement are conducted to answer the second research question. Finally, we discuss our findings and elaborate on the reasons for them.
Literature review
Learning strategies in selfregulated learning theory
The self-regulated learning (SRL) process was introduced by Zimmerman (2001) to describe how students regulate their own learning processes, including learning strategies, motivation, and behavior. According to SRL, a self-oriented feedback loop occurs during learning (Carver & Scheier, 1981; Zimmerman, 1990). In this cyclical loop process, students monitor the effectiveness of their learning strategies and respond to this feedback in several ways, such as replacing one learning strategy with another to achieve more desirable results (Zimmerman, 2001).
In the SRL process, students are regarded as self-regulated learners to the degree that they are metacognitively, motivationally, and behaviorally active participants in their own learning processes (Zimmerman, 1986). These students self-generate thoughts, feelings, and actions in pursuit of their learning goals. SRL includes students’ metacognitive strategies for planning, monitoring, and modifying their cognition (Campione et al., 1984; Corno, 1986; Zimmerman & Pons, 1986, 1988) and the actual cognitive strategies that they use to learn, remember, and understand the material (Corno & Mandinach, 2009; Zimmerman & Pons, 1986, 1988). These different cognitive strategies, such as rehearsal, elaboration, and organizational strategies, have been found to foster active cognitive engagement in learning and to result in higher levels of achievement (Weinstein & Mayer, 1986).
Learning strategies and achievement
SRL theory has prompted many empirical studies to define different types of learning strategies and demonstrate their efficiency (Dent & Koenka, 2016; Pintrich & De Groot, 1990; Zimmerman & Pons, 1986). Although various classifications of learning strategies have been suggested (Kember et al., 2004; Lee & Shute, 2010; Marton & Säljö, 1976; Weinstein & Mayer, 1986; Zimmerman & Pons, 1986), many studies have followed the concept of Weinstein and Mayer’s (1986) framework to define cognitive and metacognitive strategies.
Cognitive strategies (e.g., memorization and elaboration) refer to mental procedures that are related to learning, storing, organizing, summarizing, and understanding information, for example by relating new information to prior knowledge (Weinstein & Mayer, 1986; Zimmerman & Pons, 1986). While learning mathematics, students may recall formulas, summarize a mathematical concept that they have absorbed, or connect a mathematics concept to their actual experiences (Wu et al., 2020). Metacognitive strategies (i.e., control strategies in PISA) refer to supervising, controlling, and regulating cognitive activities (Weinstein & Mayer, 1986; Zimmerman & Pons, 1986). During mathematics learning, metacognitively aware students may devise plans to solve the next mathematics tasks, review their own understanding of the concepts learned, ask for help, and assess their own learning strategies to improve performance (OECD, 2013).
PISA uses selfreported learning strategy items in the mathematics domain (OECD, 2005), following Weinstein and Mayer’s (1986) concept of learning strategies. We discuss memorization, elaboration, and metacognitive strategies in the following subsections.
Memorization strategies and mathematics achievement
Memorizing factual knowledge might be useful in the introductory stage of acquiring mathematics knowledge (Dinsmore & Alexander, 2016), but exclusively using memorization as a strategy does not generally lead to improvements in complex problem solving or advanced logical skills (Biggs, 1993; Liu et al., 2019; Marton & Säljö, 1976; McInerney et al., 2012). For example, Liu et al. (2019) suggested that Chinese students who use the memorization strategy in combination with other learning strategies (e.g., elaboration and metacognition) perform better in mathematics than those who use only the memorization strategy.
Educational studies have investigated the impact of memorization strategies on mathematics achievement (Areepattamannil & Caleon, 2013; Kiliç et al., 2012; Pintrich & De Groot, 1990). In general, the exclusive use of memorization is negatively correlated with mathematics achievement. Pintrich and De Groot (1990) found that the use of memorization without metacognitive strategies was not conducive to academic performance. Similarly, Kiliç et al. (2012) found that memorization had a negative effect on learning in Turkey and its neighboring countries, and Areepattamannil and Caleon (2013) concluded that memorization strategies were negatively associated with mathematics achievement in four East Asian education systems: Shanghai-China, Korea, Hong Kong-China, and Singapore. While studies have suggested that the mixed use of learning strategies, including memorization, may lead to better academic performance than the use of a single strategy (Dent & Koenka, 2016; Wu et al., 2020), educational researchers tend to hold negative views of using only memorization.
Elaboration strategies and mathematics achievement
Elaboration is defined as mental processes and behaviors that involve integrating information from different sources to create meaningful interpretations, relate new concepts to prior knowledge, and summarize material into one’s own words (Pintrich & De Groot, 1990; Trigwell & Prosser, 1991; Walker et al., 2006; Wolters, 2004). Elaboration can occur during self-study, discussions, note-taking, or answering questions (Pires et al., 2020).
Elaboration strategies that deepen understanding of knowledge and skills lead to high-quality learning outcomes, whereas students who use a surface approach (e.g., rehearsal or memorization; Ramsden, 1988) are more likely to achieve lower-quality outcomes (Marton & Säljö, 1976; Prosser & Millar, 1989). Previous studies have found that elaboration strategies have a positive effect on student learning, including in mathematics (Donker et al., 2014; Murayama et al., 2013). One meta-analysis (Donker et al., 2014) found that elaboration was the only substrategy that demonstrated a significantly positive relationship with mathematics achievement among a variety of substrategies. A longitudinal study (Murayama et al., 2013) suggested that growth in students’ mathematics achievement was positively predicted by deep learning strategies from Grades 5 through 10 and negatively predicted by surface learning strategies (Ramsden, 1988).
In contrast, the relationship between elaboration strategy and mathematics achievement did not demonstrate a consistent pattern of results across different educational systems in a study by Chiu et al. (2007), who found that these strategies were not linked to achievement in any domain or culture. Liu et al. (2009) indicated that elaboration strategy use by Chinese eighth-grade students showed either a positive or negative relationship with mathematics achievement, depending on unique Chinese demographic variables (e.g., only-child families and residential locations). Thus, the effect of elaboration strategies varied between countries.
Metacognitive strategies and mathematics achievement
According to SRL theory, self-regulated learners are able to monitor the efficiency of their learning strategies and change one learning strategy to another to achieve their goals. This is referred to as a metacognitive strategy (Zimmerman, 2001). Several researchers have shown that metacognition plays an important role in mathematics success (Borkowski & Thorpe, 1994; De Clercq et al., 2000; Schoenfeld, 2016). Artz and Armour-Thomas (2009) found that the main reason for students’ failures in mathematical problem solving was that they were not able to monitor their own mental procedures.
Many empirical studies have demonstrated the effectiveness of metacognitive strategies for improving students’ mathematics performance (Areepattamannil & Caleon, 2013; Desoete et al., 2001; Dignath & Büttner, 2008; Perels et al., 2009). Desoete et al. (2001) indicated that metacognitive knowledge and skills accounted for 37% of achievement in mathematical problem solving. Dignath and Büttner (2008) demonstrated a stronger relationship of metacognitive strategies with mathematics than with other subjects. Perels et al. (2009) investigated the effects of training metacognitive strategies (i.e., self-regulative strategies) on mathematical achievement for Grade 6 students in Germany. The students in the experimental group, whose teachers taught mathematics topics combined with metacognitive strategies, showed more improvement in their mathematics skills in a pre/post-test comparison than the control group, whose teachers taught only mathematical topics. Areepattamannil and Caleon (2013) found that metacognitive strategies were positively associated with mathematics achievement in four East Asian education systems: Shanghai-China, Korea, Hong Kong-China, and Singapore. In addition, Wu et al.’s (2020) findings showed that the combined use of metacognitive and elaboration strategies was the most effective for mathematics achievement in most East Asian countries, followed by the mixed use of metacognitive and memorization strategies.
East Asian students’ learning strategy use
In recent decades, Western educators have explored the reasons for East Asian students’ high mathematics performance. They believed that East Asian students relied on memorization, but these students performed better on international large-scale assessments (ILSAs) than Western students (Biggs, 1998; Leung, 2014). However, several studies have found that East Asian students do not depend on a single strategy, such as memorization, but instead use mixed learning strategies (Lin & Tai, 2015; Liu et al., 2019; Wu et al., 2020). According to Wu et al. (2020), most East Asian students use multiple learning strategies for learning mathematics, and students who use both metacognitive and elaboration strategies achieve the highest scores on the mathematics exam, followed by those who use metacognitive and memorization strategies. Several studies have also shown that memorization does not necessarily imply rote learning without understanding (Biggs, 1998; Kember, 2016; Leung, 2014). For instance, as an application of a memorization strategy, continuous practice with increasing variation could help learners understand new material (Hess & Azuma, 1991; Marton & Booth, 1997). Thus, the use of a memorization strategy does not always mean rote learning, and East Asian students do not rely entirely on a memorization strategy.
Learning strategies in the South Korean context
The South Korean education system introduced the CSAT in 1994 to encourage students to develop high-level thinking abilities rather than fragmented short-term memorization. However, the CSAT was criticized for triggering a different kind of memorization because it had multiple-choice formats and caused repetition of problem-solving exercises in test subjects, including mathematics (Kim, 2004). Students were intent on learning test-taking skills that would ensure their ability to solve these multiple-choice questions in a limited amount of time (Kim, 2004). As the CSAT has become an essential determinant of which university a student can attend, South Koreans have expressed concern about whether students rely on rote learning only to obtain high scores on the exam (Blazer, 2012; Li, 2011).
A previous study (Wu et al., 2020) explored the relationship between learning strategies and mathematics achievement for South Korean students as well as those in other education systems in East Asia (e.g., Hong Kong, Japan, Korea, Shanghai, Singapore, Taiwan, and Macau) using latent class analysis. According to Wu et al. (2020), the largest percentage (65%) of South Korean students primarily used metacognitive strategies with memorization strategies (Class 2). Only 14.2% of South Korean students primarily used metacognitive strategies with elaboration (Class 4). Class 4 was found to have the best performance among the classes, followed by Class 2. Although Wu et al. (2020) investigated South Korean students’ learning strategy use, the study lacked discussion about the Korean education system.
Nominal response model
The nominal response model (Bock, 1972) is designed for items with nominal categories (Thissen et al., 2010). Nominal categories imply that there is no assumption that Category 2 indicates higher ability than Category 1 (Zu & Kyllonen, 2020). In other words, the NRM does not assume that using a metacognitive strategy is better than using a memorization strategy in response to items in PISA 2012. It enables partial credit for different option selections and allows for differential item weights and varying category discriminations. The NRM is expressed as:

$$P\left({X}_{i}=k \mid \theta \right)=\frac{\mathrm{exp}\left({a}_{ik}\theta +{c}_{ik}\right)}{\sum_{h=1}^{{m}_{i}}\mathrm{exp}\left({a}_{ih}\theta +{c}_{ih}\right)} \qquad (1)$$
where \({a}_{ik}\) and \({c}_{ik}\) are the category slope and category intercept parameters, respectively, for the \({k}^{th}\) category of Item \(i\). In this equation, the expression on the right gives the probability that a person with trait level \(\theta\) selects response category \(k\) (\(k = 1, 2, \dots, {m}_{i}\)) on item \(i\).
Within an item, the order of the response categories with respect to latent ability is determined by the values of the \({a}_{ik}\)s. Within item \(i\), response \(k\) indicates higher \(\theta\) than response \(q\) if and only if \({a}_{ik} > {a}_{iq}\) (Thissen et al., 2010). The category intercept parameters (\({c}_{ik}\)) reflect the relative frequency of choosing that category, where a larger \({c}_{ik}\) (intercept parameter) represents a greater relative frequency for option \(k\) (Zu & Kyllonen, 2020).
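To make the model concrete, the category probabilities in Eq. 1 can be computed directly. The following Python sketch (the study itself used R's mirt package) evaluates NRM category probabilities for one hypothetical three-option item; the slope and intercept values are illustrative, not PISA estimates.

```python
import numpy as np

def nrm_probs(theta, a, c):
    """Nominal response model (Eq. 1): probability of each category for a
    person at trait level theta, given category slopes a and intercepts c."""
    z = np.asarray(a) * theta + np.asarray(c)
    ez = np.exp(z - z.max())  # subtract max for numerical stability
    return ez / ez.sum()

# Hypothetical item: slopes order the options with respect to theta.
a = np.array([0.0, 0.6, 1.2])   # memorization, elaboration, metacognitive
c = np.array([0.0, 0.3, -0.2])

p_low = nrm_probs(-1.0, a, c)
p_high = nrm_probs(2.0, a, c)
# Because a[2] > a[0], the metacognitive option becomes more likely
# relative to memorization as theta increases.
```

The probabilities always sum to one within an item, and only differences among the \({a}_{ik}\)s matter for how the odds between two options change with \(\theta\).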
Generalized partial credit model
The GPCM can be seen as a generalization of the dichotomous 2PL model for handling polytomous data and as a constrained version of the NRM. In the GPCM, responses need to be ordered from best to worst with respect to latent ability, which can be accomplished through prior knowledge, expert ratings, in- or out-of-sample response popularity, or other means (e.g., the \({a}_{ik}\) values from the NRM analysis; see Eq. 1; Zu & Kyllonen, 2020). In other words, in the GPCM, the options are coded [Memorization = 1, Elaboration = 2, Metacognitive = 3] based on prior knowledge (Biggs, 1987; OECD, 2014; Weinstein & Mayer, 1986; Zimmerman & Pons, 1986). This implies that students who have high learning strategy scores tend to use metacognitive strategies, and students who have low learning strategy scores tend to use memorization strategies. With the prior ordering of the response categories, the GPCM is the NRM with the constraint that the degree of discrimination between adjacent categories is the same for all adjacent categories in an item. Due to this constraint, the category slopes within an item can be represented by one item slope parameter. An expression of the GPCM is the NRM in Eq. 1 with the constraint:

$${a}_{ik}-{a}_{i(k-1)}={a}_{i} \quad \text{for } k = 2, \dots, {m}_{i} \qquad (2)$$
where \({a}_{i}\) is the slope parameter for item \(i\) (Zu & Kyllonen, 2020). The number of free parameters for item \(i\) under the GPCM is the number of response categories, \({m}_{i}\). In other words, within item \(i\), the discrimination differences between adjacent categories are all the same (\({a}_{i2}-{a}_{i1}={a}_{i3}-{a}_{i2}={a}_{i}\)).
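The constraint can be illustrated numerically: anchoring the first category slope at zero and forcing equal adjacent-category slope differences (Eq. 2) turns the NRM category slopes into multiples of a single item slope \(a_i\). A minimal Python sketch with hypothetical parameter values:

```python
import numpy as np

def category_probs(theta, a, c):
    """Category probabilities under the NRM parameterization (Eq. 1)."""
    z = a * theta + c
    ez = np.exp(z - z.max())
    return ez / ez.sum()

# GPCM as constrained NRM: a_ik - a_i(k-1) = a_i for every k, so with
# the first slope anchored at 0 the slopes are 0, a_i, 2*a_i, ...
# (a_i and the intercepts below are hypothetical).
a_i = 0.8
slopes = a_i * np.arange(3)          # [0.0, 0.8, 1.6]
intercepts = np.array([0.0, 0.5, -0.3])

p = category_probs(1.0, slopes, intercepts)
diffs = np.diff(slopes)              # all adjacent differences equal a_i
```

Every adjacent-category slope difference equals the single GPCM slope \(a_i\), which is exactly the restriction the NRM relaxes.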
Aims of the present study
We summarize two research gaps: (1) Few published studies have applied NRM to examine learning strategy use and mathematics achievement in PISA 2012; and (2) There is little research exploring the relationship between these variables in the South Korean education system. This study will address these gaps by first exploring to what extent NRM fits learning strategy data in PISA 2012 compared to GPCM and, second, by investigating how South Korean students’ learning strategies are correlated with mathematics achievement.
The present study seeks to answer the following two research questions:

1. To what extent does the NRM fit the learning strategy response data in the PISA 2012 South Korean sample compared to the GPCM?

2. To what extent is the learning strategy use of South Korean students correlated with mathematics achievement, linearly and nonlinearly?
Method
PISA 2012 sampling design
PISA is an OECD study of the achievement of 15-year-olds in mathematics, reading, and science. PISA 2012, the fifth PISA survey, covered reading, mathematics, science, problem solving, and financial literacy, with a primary focus on mathematics. In 2012, 65 countries and economies (all 34 OECD countries and 31 partner countries and economies) and approximately half a million students, representing 28 million 15-year-old students, participated in the PISA assessment. PISA 2012 adopted a two-stage complex survey design to select a representative sample of 15-year-old students in each educational system. In the first stage, approximately 150 schools were sampled, and then at least 35 students were selected in each sampled school (OECD, 2014). To acquire sufficiently high response rates, PISA required each school to have a minimum participation rate of 50%.
Sample
In the present study, the South Korean educational system was examined. PISA 2012 collected data from 5,033 participating 15-year-old South Korean students (female = 47%; Dong et al., 2012). In PISA 2012, the total South Korean sample of 5,201 students comprised 6.1% middle school students, 73.7% general high school students, and 20.2% vocational high school students.
Mathematics learning strategies
PISA 2012 adopted a rotation design for the student questionnaire (OECD, 2014). The questionnaire included a common part and two of three rotating parts: set1, set2, and set3. Each student randomly received one of three questionnaire booklets. Therefore, 33% of the data for each item for the learning strategies were missing by design. Listwise deletion, which involves deleting all persons with missing data, was employed before the analysis was conducted (Newman, 2014). Of 5,033 participating students, 3,310 students provided complete responses. Thus, after listwise deletion, the present study examined 3,310 South Korean students (female = 46%) in the analysis.
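Listwise deletion itself is straightforward; a toy Python (pandas) sketch with hypothetical responses shows the idea — students who received a booklet without the learning strategy set are missing all four items and are dropped entirely:

```python
import numpy as np
import pandas as pd

# Toy stand-in for the rotated design: the third student received a
# booklet without the learning strategy items, so all four are missing.
df = pd.DataFrame({
    "item1": [1, 2, np.nan, 3],
    "item2": [2, 2, np.nan, 1],
    "item3": [3, 1, np.nan, 2],
    "item4": [1, 3, np.nan, 3],
})

# Listwise deletion: keep only students with complete responses.
complete = df.dropna()
```

In the actual data this step reduced the sample from 5,033 to the 3,310 students with complete responses.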
In PISA 2012, three types of learning strategies—memorization, elaboration, and metacognition—were measured using nominal scales. Four items were used to determine students’ use of learning strategies in mathematics; for each item, students chose only one learning strategy from the three options (see Table 1).
Mathematics achievement
Each student was randomly assigned one of 13 booklets, meaning that each student took only a portion of the items from the entire item pool. PISA 2012 used the item response theory (IRT) framework to estimate a latent posterior distribution for each student. Because students did not answer all booklets, proficiency must be inferred from the observed item responses. As one of several alternative approaches for making this inference, PISA uses an imputation methodology called plausible values. Five plausible values were drawn from the posterior distribution, scaled to a mean of 500 and a standard deviation of 100, to represent students’ mathematics scores (OECD, 2014). In this study, we used all five plausible values when conducting the correlation analysis and the quadratic regression analysis to account for measurement error.
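The plausible values approach means the analysis is repeated once per plausible value and the resulting statistics are pooled. A simplified Python sketch with simulated data (all values hypothetical; the real analysis also applies survey weights) illustrates the workflow for a correlation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
strategy = rng.normal(size=n)  # stand-in for a learning strategy score

# Five plausible values per student: a common signal plus imputation noise,
# loosely mimicking the PISA 500/100 metric (coefficients hypothetical).
pvs = [500 + 30 * strategy + rng.normal(scale=90.0, size=n)
       for _ in range(5)]

# Run the analysis once per plausible value, then pool: the point
# estimate is the mean, and the between-imputation variance enters
# the pooled standard error (Rubin's combining rules).
estimates = [float(np.corrcoef(strategy, pv)[0, 1]) for pv in pvs]
pooled = float(np.mean(estimates))
between_var = float(np.var(estimates, ddof=1))
```

The same loop-and-pool pattern applies to the regression coefficients in the quadratic analysis.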
Statistical analysis
To answer the first research question, we compared the NRM to the GPCM in three domains: model fit and item fit indices, empirical reliability, and item characteristic curves. As the learning strategy items in PISA 2012 were measured on a nominal scale, the NRM was expected to fit better than models for ordinal data (e.g., the GPCM or the graded response model). The GPCM assumes that the degree of discrimination between adjacent categories is the same for all adjacent categories in an item, whereas the NRM relaxes this assumption. If the NRM fits better than the GPCM, we can conclude that the nominal relationship among the three strategies is maintained. We can then create learning strategy scores based on students’ latent ability, considering the posterior distribution of the estimates, and compute plausible values of the learning strategy scores under the NRM.
We also used Chalmers and Ng’s (2017) plausible-value variant of the Q1 statistic and the root mean square error of approximation (RMSEA; Maydeu-Olivares, 2015) to examine the data fit for each item under either the NRM or the GPCM. A nonsignificant result of hypothesis testing for the Q1 statistic would indicate that the model’s fit to the data is acceptable for a specific item. The lower the RMSEA, the better the model (NRM or GPCM) fits the data for a specific item. RMSEAs smaller than 0.0125 were considered excellent fits (Maydeu-Olivares, 2015). Additionally, using the Thissen et al. (2010) conceptualization of the model, we converted the discrimination and location parameters estimated under the NRM to category boundary discrimination parameters and intersections, respectively. Specifically, Preston et al. (2011) rewrote the NRM such that the probability of selecting category \(k\) over the adjacent category \(k-1\) is \(1/(1 + \mathrm{exp}[-({a}^{*}_{ik}\theta + {c}^{*}_{ik})])\), where \({a}^{*}_{ik} = {a}_{ik} - {a}_{i(k-1)}\) is the difference of the \(a\) parameters between adjacent categories \(k\) and \(k-1\) in Eq. 1, and \({c}^{*}_{ik} = {c}_{ik} - {c}_{i(k-1)}\) is an intercept. The boundary discrimination parameters in the NRM are the discrimination differences between two adjacent categories, which the GPCM constrains to be equal within items. We compared the boundary discrimination parameters in the NRM with the discrimination parameters in the GPCM using the Wald test (Preston et al., 2011). When an item’s boundary discrimination parameters in the NRM differ significantly from each other, the discriminations are inconsistent within the item, and we can conclude that the item fits the NRM better than the GPCM. This is helpful for examining whether each item should be explained with the ordinal relationship between memorization, elaboration, and metacognitive strategies, as constrained in the GPCM.
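The logic of this comparison can be sketched numerically: compute the boundary discriminations as differences of adjacent NRM slopes, then test whether two of them differ with a Wald statistic. The parameter values and the covariance matrix of the estimates below are hypothetical, chosen only to show the mechanics:

```python
import numpy as np

# Hypothetical NRM category slopes for one item (first slope anchored at 0).
a = np.array([0.0, 0.4, 1.5])
a_star = np.diff(a)  # boundary discriminations a*_i2, a*_i3

# Wald test that the two boundary discriminations are equal, given a
# (hypothetical) covariance matrix of their estimates.
cov = np.array([[0.04, 0.01],
                [0.01, 0.05]])
diff = a_star[1] - a_star[0]
var_diff = cov[0, 0] + cov[1, 1] - 2 * cov[0, 1]
wald_z = diff / np.sqrt(var_diff)
# A large |wald_z| suggests unequal boundary discriminations, i.e., the
# GPCM's equal-discrimination constraint is too strict for this item.
```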
To answer the second research question, we conducted two correlation analyses and a quadratic regression analysis between learning strategy use and mathematics achievement, examining three relationships (see Table 2). First, the correlation between the South Korean students’ learning strategy scores created by the NRM and mathematics scores was obtained with the plausible values approach to take measurement error into account (OECD, 2009). Second, we examined the correlation between the observed raw score of each learning strategy and mathematics achievement to provide a baseline value for each learning strategy. Third, we performed a quadratic regression analysis, adding a quadratic component of the learning strategy score to a linear model, to investigate the nonlinear relationship between the learning strategy score created by the NRM and mathematics achievement. Finally, we tested the hypothesis for the coefficient of the quadratic term to understand the curvilinear relationship between mathematics achievement and learning strategy use.
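The quadratic model simply adds a squared term of the strategy score to the linear regression. A Python sketch on simulated data (the coefficients are illustrative, chosen to mimic a negative quadratic pattern like the one tested here) shows the setup:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
score = rng.normal(size=n)  # stand-in for the NRM learning strategy score

# Simulated achievement with a negative quadratic component
# (hypothetical coefficients, not the study's estimates).
math_score = 520 + 20 * score - 8 * score**2 + rng.normal(scale=40.0, size=n)

# Quadratic regression: math ~ 1 + score + score^2 via least squares.
X = np.column_stack([np.ones(n), score, score**2])
beta, *_ = np.linalg.lstsq(X, math_score, rcond=None)
# beta[2] estimates the quadratic coefficient; a significantly negative
# value indicates a concave (inverted-U-shaped) relationship.
```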
Model comparison between NRM and GPCM
We fit the NRM and the GPCM with the mirt package in R (Chalmers et al., 2022), calling the mirt function with the argument itemtype = “nominal” or itemtype = “gpcm”, respectively. In this study, under the GPCM, the options were recoded (Memorization = 1, Elaboration = 2, Metacognitive = 3) based on prior literature (Biggs, 1987; OECD, 2014; Weinstein & Mayer, 1986; Zimmerman & Pons, 1986). We compared the model-data fit of the NRM to that of the GPCM with the Akaike information criterion (AIC; Bozdogan, 1987) and the Bayesian information criterion (BIC; Schwarz, 1978). In addition, empirical IRT reliability, computed from the sampling variances and empirical variances estimated with the expected a posteriori (EAP) method, was used to indicate how precisely the four items measured the latent trait. To understand the meaning of the scores, we compared the item characteristic curves (ICCs) of the NRM to those of the GPCM, which also indicated whether the equal discrimination constraint in the GPCM was adequate.
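The AIC/BIC comparison follows the standard definitions. The Python sketch below illustrates the logic with hypothetical log-likelihoods (chosen only so that the likelihood ratio statistic matches the reported χ²[4] = 215; they are not the study’s values), and with parameter counts reflecting that the NRM frees four more parameters than the GPCM for these four three-category items.

```python
import math

def aic(loglik, n_params):
    # AIC = 2k - 2 log L (Bozdogan, 1987)
    return 2 * n_params - 2 * loglik

def bic(loglik, n_params, n_obs):
    # BIC = k ln(N) - 2 log L (Schwarz, 1978)
    return n_params * math.log(n_obs) - 2 * loglik

# Hypothetical log-likelihoods (NOT the study's values); their difference
# reproduces the reported likelihood ratio statistic 2*(ll_nrm - ll_gpcm) = 215:
ll_gpcm, k_gpcm = -11107.5, 12   # GPCM: constrained slopes
ll_nrm, k_nrm = -11000.0, 16     # NRM: four additional free parameters
n = 3310                         # South Korean sample size in PISA 2012

print(aic(ll_nrm, k_nrm) < aic(ll_gpcm, k_gpcm))        # lower AIC favors the NRM
print(bic(ll_nrm, k_nrm, n) < bic(ll_gpcm, k_gpcm, n))  # lower BIC favors the NRM
```

Because both criteria penalize extra parameters, a lower value for the NRM cannot be attributed to its larger parameter count alone.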
Learning strategy score
We computed five plausible values of the ability estimates under the NRM via the fscores function in the mirt package, which randomly samples five scores from the posterior distribution of \(\theta\) in Eq. 1. When a model contains latent regression predictors, the plausible values approach accounts for latent regression predictor effects and measurement error simultaneously (Chalmers et al., 2022). The coefficients of predictor effects obtained with the plausible values approach are unbiased compared to other estimators, such as weighted likelihood estimates, which underestimate coefficients, and EAP, which overestimates them (OECD, 2009). The plausible values of the learning strategy score were thus used for the subsequent correlation and quadratic regression analyses.
We also computed the raw scores of the learning strategies to establish a baseline value for the association between single learning strategies and mathematics achievement. The raw score for each of the memorization, elaboration, and metacognitive strategies equals the frequency with which the corresponding strategy was chosen across the four items. For instance, if a student chose memorization once, elaboration twice, and metacognition once among the four items, the student’s raw scores would be 1 for memorization, 2 for elaboration, and 1 for metacognition. If a student chose elaboration twice and metacognition twice, the raw scores would be 0 for memorization, 2 for elaboration, and 2 for metacognition. Compared to the NRM scores, which represent multiple learning strategies mixed together, the raw scores indicate the frequency of using a single learning strategy.
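The raw-score construction amounts to counting choices across items. A minimal Python sketch (the strategy labels are ours for illustration, not PISA variable names) reproduces the two worked examples from the text:

```python
from collections import Counter

STRATEGIES = ("memorization", "elaboration", "metacognition")

def raw_scores(responses):
    """Raw score per strategy = how often it was chosen across the four items."""
    counts = Counter(responses)
    return {s: counts.get(s, 0) for s in STRATEGIES}

# First worked example from the text:
print(raw_scores(["memorization", "elaboration", "elaboration", "metacognition"]))
# {'memorization': 1, 'elaboration': 2, 'metacognition': 1}

# Second worked example:
print(raw_scores(["elaboration", "elaboration", "metacognition", "metacognition"]))
# {'memorization': 0, 'elaboration': 2, 'metacognition': 2}
```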
Linear relationship (1): correlation between learning strategy score and mathematics scores
The average of the plausible value statistics was used for the point estimates of the population statistics. Thus, to obtain a correlation coefficient between the learning strategy scores and mathematics achievement scores, the five correlation coefficients computed from the plausible values were averaged (OECD, 2009). Mathematically, secondary analyses with plausible values can be described as follows: if \({\rho }_{i}\) is the coefficient computed on the i-th plausible value, then the population coefficient \(\rho\) is estimated as

$$\rho =\frac{1}{M}\sum _{i=1}^{M}{\rho }_{i}$$

where \(M\) is the number of plausible values.
To compute the uncertainty in the averaged correlation coefficient, the measurement variance, usually denoted as the imputation variance, is equal to

$$B=\frac{1}{M-1}\sum _{i=1}^{M}{\left({\rho }_{i}-\rho \right)}^{2}$$

This corresponds to the variance of the five plausible value statistics of interest. Finally, the sampling variance and the imputation variance are combined as follows:

$$V=U+\left(1+\frac{1}{M}\right)B$$
where \(U\) is the sampling variance and \(V\) is the squared standard error of the correlation coefficient between learning strategies and mathematics achievement.
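These combination rules can be sketched directly in code. In the Python sketch below, the five correlations and their sampling variances are hypothetical placeholders, not the study’s estimates.

```python
import statistics

def combine_plausible_values(stats, sampling_vars):
    """Combine M plausible-value statistics (OECD, 2009).

    Returns the pooled point estimate and its total error variance
    V = U + (1 + 1/M) * B, where U is the average sampling variance
    and B the imputation (between-plausible-value) variance."""
    m = len(stats)
    point = statistics.fmean(stats)                      # averaged statistic
    u = statistics.fmean(sampling_vars)                  # sampling variance U
    b = sum((s - point) ** 2 for s in stats) / (m - 1)   # imputation variance B
    return point, u + (1 + 1 / m) * b

# Hypothetical correlations from five plausible values (illustration only):
rhos = [0.17, 0.18, 0.18, 0.19, 0.22]
u_i = [0.0003] * 5
rho, v = combine_plausible_values(rhos, u_i)
print(round(rho, 3), round(v ** 0.5, 4))  # pooled correlation and its standard error
```

The square root of \(V\) is the standard error used for significance testing of the pooled correlation.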
Linear relationship (2): correlation between learning strategy raw score and mathematics score
In addition to the plausible values approach, we conducted a correlation analysis between the learning strategy raw scores and mathematics scores to understand the correlation between the use of a single strategy across all items (i.e., across learning situations) and mathematics achievement. Unlike the NRM scores, which represent the Korean students’ use of multiple strategies in learning mathematics, the raw scores of the learning strategies represent the use of a single strategy (i.e., memorization, elaboration, or metacognition). The correlation based on raw scores is the baseline value for understanding to what extent each learning strategy score relates to mathematics achievement. For instance, if the correlation coefficient based on the NRM score is larger than that based on each single-strategy raw score, we can conclude that Korean students’ use of multiple learning strategies across learning situations is more effective than using a single strategy in all situations.
Nonlinear relationship: quadratic and cubic relationship between learning strategy score and mathematics score
We performed a quadratic regression analysis on the basis of the previously examined linear model. The quadratic regression model contains a linear term and a quadratic term to capture the linear and quadratic relationships between learning strategies and mathematics achievement (Cohen et al., 2002). More specifically, we first specified a model assuming a linear relation. In the second step, we added a quadratic component to examine whether a curvilinear relationship described the data better. A curvilinear (second-order) predictor, such as \({X}^{2}\), is added to the linear regression equation (\(Y= {B}_{1}X+{B}_{0}+\epsilon\)) as follows:

$$Y={B}_{2}{X}^{2}+{B}_{1}X+{B}_{0}+\epsilon$$

where \(X\) is the learning strategy score from the NRM as a predictor, \({X}^{2}\) is the squared value of the learning strategy score as a curvilinear (second-order) predictor, \(Y\) is the mathematics score as an outcome variable, and \(\epsilon\) is an error term with mean zero and variance equal to the residual variance. In addition, \({B}_{0}\) is the intercept of the equation, representing the mean mathematics score when the learning strategy score is zero; \({B}_{1}\) is the regression coefficient of \(X\) (the learning strategy score); and \({B}_{2}\) is the quadratic coefficient of \({X}^{2}\) (the squared value of the learning strategy score). In hypothesis testing, we examined the significance of the quadratic coefficient \({B}_{2}\). A significant quadratic term \({B}_{2}\) (p < .001) implies that mathematics achievement scores did not increase monotonically as learning strategy scores increased.
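As a sketch of this second step, the following Python code fits the quadratic model by least squares after centering the predictor. The data are simulated with a built-in inverted-U component for illustration; they are not the PISA sample, and the coefficients are not the study’s estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated scores with a built-in inverted-U component (illustration only):
n = 3310
x = rng.normal(size=n)                        # "learning strategy score"
y = 550 + 16 * x - 3 * x**2 + rng.normal(scale=30, size=n)

xc = x - x.mean()                             # center so B1 = trend, B2 = concavity
X = np.column_stack([np.ones(n), xc, xc**2])  # design matrix [1, X, X^2]
b0, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]

print(b2 < 0)  # a negative quadratic coefficient signals an inverted-U shape
```

In practice, this fit would be repeated for each plausible value of the outcome and the coefficients combined as described above.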
To explore further possible nonlinear relationships between learning strategy scores and mathematics scores, we compared the quadratic regression model (Eq. 6) with a cubic regression model that adds the predictor \({B}_{3}{X}^{3}\) to capture a cubic relationship between learning strategies and mathematics performance, using the BIC and a hypothesis test of the null hypothesis \({B}_{3}=0\). When the BIC of the quadratic model is smaller than that of the cubic model and the hypothesis \({B}_{3}=0\) cannot be rejected, we conclude that the relationship between learning strategy scores and mathematics scores is quadratic; otherwise, cubic.
In the nonlinear regression analysis, the plausible values of the regression coefficients were calculated using the same method as in linear relationship (1), and the five plausible values of the regression coefficients (e.g., LS1–MATH1, LS2–MATH2, …, LS5–MATH5) were averaged (OECD, 2009) to account for measurement error. The predictors in the regression analysis were centered at zero so that B_{1} can be interpreted as the predominant direction of the trend and B_{2} as the concavity (Dalal & Zickar, 2012). The within-country senate weight, SENWGT_STU, was used to weight students within the country.
Results
Frequencies of learning strategy use among South Korean students
The frequencies and percentages of learning strategy use by South Korean students are presented in Table 3. The primary learning strategy varied across items, and the metacognitive strategy was the most frequent learning strategy overall. For instance, in Item 1, more than 80% of South Korean students reported using elaboration or metacognition (43.3% and 40.0%, respectively), with fewer than 20% using memorization. The percentage of metacognitive strategy use was most noticeable in Items 2 and 3. In Item 2, South Korean students reported using metacognition, memorization, and elaboration (51.1%, 29.6%, and 19.3%, respectively). In Item 3, they chose metacognition, elaboration, and memorization at 62.5%, 22.9%, and 14.6%, respectively. In Item 4, more than half of the students reported using memorization (54.9%), followed by metacognition (30.9%) and elaboration (14.2%).
Comparison between NRM and GPCM
Model fit
We used the likelihood ratio test to compare the model-data fit of the NRM and the GPCM. The result (χ^{2}[4] = 215.008, p < .001) showed that the NRM differed significantly from the GPCM in model-data fit. A model with better fit has lower AIC and BIC values, and Table 4 shows that the NRM had lower AIC and BIC values than the GPCM. Thus, the NRM fit the data better. Because the AIC and BIC penalize the number of parameters, the superior fit of the NRM is not merely due to its larger number of parameters.
Item-level comparison between NRM and GPCM
Table 5 shows the Q1 statistics (Chalmers & Ng, 2017) for each item under the NRM and the GPCM. All items under the NRM had acceptable item-data fit (p values larger than 0.05), whereas Item 4 under the GPCM did not fit the data well (p < .05). The RMSEA showed that for Items 2 and 3, the NRM fit the data better than the GPCM. Consistent with the Q1 results, the RMSEA for Item 4 indicated worse model-data fit under the GPCM. However, Item 1 had better item fit under the GPCM than under the NRM; nevertheless, Item 1 still showed acceptable fit under the NRM according to the Q1 statistic.
Table 6 shows that the boundary discrimination parameters of Item 1 did not differ from each other; Item 1 had a consistent discrimination parameter across categories. This is consistent with the result in Table 5 that the GPCM fit Item 1 better than the NRM, implying that for Item 1, the ordinal relationship among Memorization, Elaboration, and Metacognitive strategies assumed by the GPCM described the responses better than the nominal relationship in the NRM. For Items 2 and 3, the boundary discrimination parameters differed significantly within items, so the nominal relationship between categories in the NRM described the responses better than the ordinal relationship in the GPCM. Although the boundary discrimination parameters for Item 4 did not differ, Table 5 showed that Item 4 misfit the GPCM; thus, we were cautious about stating that the GPCM fit Item 4 better than the NRM. In summary, we adopted the NRM over the GPCM based on the overall model-data fit, the item fit for each item, and the analysis of the boundary discrimination parameters. The following sections therefore focus on the item features and the interpretation of learning strategy scores under the NRM rather than under the GPCM.
Reliability
To estimate the reliability of the learning strategy items, we reported empirical reliability. The empirical reliability for the NRM was 0.365, and that for the GPCM was 0.214. Given that there were only four items, it is unsurprising that both models’ empirical reliabilities were low; in general, the more information we have (i.e., the more items in the test), the more reliably we can measure the underlying trait (Cheng et al., 2012). Nevertheless, the reliability of the NRM was higher than that of the GPCM.
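Empirical reliability of this kind can be sketched as the ratio of signal variance to total variance, where the total adds the mean error variance of the score estimates. The EAP scores and standard errors below are made-up illustrations, not estimates from the data.

```python
import statistics

def empirical_reliability(eap_scores, eap_ses):
    """Empirical IRT reliability: empirical variance of the EAP scores
    divided by that variance plus the mean error (sampling) variance."""
    signal = statistics.pvariance(eap_scores)
    error = statistics.fmean([se ** 2 for se in eap_ses])
    return signal / (signal + error)

# With only a few items, the per-person standard errors stay large,
# so the reliability stays low (hypothetical values):
rel = empirical_reliability([-1.2, -0.5, 0.0, 0.4, 1.1], [0.8] * 5)
print(round(rel, 3))
```

Adding items shrinks the error variance and pushes the ratio toward 1, which is why a four-item scale yields low values for both models.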
Item characteristic response curve
The NRM identified each learning strategy empirically, irrespective of the recoded category order. As \({c}_{ik}\) indicates the relative frequency of choosing an option (compare Tables 3 and 7), the P1 curve corresponds to memorization, the P2 curve to elaboration, and the P3 curve to metacognition (see Fig. 1). Under the GPCM, within Item i, the category slopes are all equal (\({a}_{i1}\) = \({a}_{i2}\) = \({a}_{i3}\) = \({a}_{i}\)). These constraints led the response curves of Items 2, 3, and 4 to differ in shape from the curves of the NRM.
Overall, the response curve patterns of the GPCM and the NRM differed in Items 2, 3, and 4, while Item 1 had a similar pattern in both models; this result is consistent with the conclusions of the item fit analysis in Table 5. In Item 2, under the GPCM, as the learning strategy score (\(\theta\)) increased, the probability of choosing memorization (P1/blue) decreased with a steeper slope than under the NRM, and the probability of choosing elaboration (P2/pink) increased for learning strategy scores lower than 0 and decreased for scores higher than 0. Under the NRM, the probability of choosing elaboration (P2/pink) decreased monotonically as the learning strategy score (\(\theta\)) increased. In Item 3, under the GPCM, as the learning strategy score (\(\theta\)) increased, the probability of choosing memorization (P1/blue) decreased with a steep slope, and the probability of choosing elaboration (P2/pink) increased for learning strategy scores lower than approximately −2 but decreased for scores higher than −2. Under the NRM, the probability of choosing memorization (P1/blue) increased for learning strategy scores lower than 0 but decreased for scores higher than 0, and the probability of choosing elaboration (P2/pink) decreased steeply as the learning strategy score (\(\theta\)) increased. The Item 4 curves of the GPCM showed very different shapes from those of the NRM: under the GPCM, as the learning strategy score (\(\theta\)) increased, the probability of choosing memorization (P1/blue) increased monotonically, the probability of choosing elaboration (P2/pink) hardly changed, and the probability of choosing metacognition (P3/green) decreased monotonically.
Under the NRM, as the learning strategy score (\(\theta\)) increased, the probability of choosing memorization (P1/blue) increased for learning strategy scores lower than 0 but decreased slightly for scores higher than 0, the probability of choosing elaboration (P2/pink) decreased with a steep slope, and the probability of choosing metacognition (P3/green) increased gradually, with a slope change at approximately \(\theta = 0\).
Comparing the NRM to the GPCM, the NRM showed better model-data fit; therefore, we used the NRM to create South Korean students’ learning strategy scores. To understand the implications of the learning strategy scores, we considered the item characteristic response curves and parameter estimates of the NRM (\({a}_{ik}\), \({c}_{ik}\)) (see Table 7; Fig. 1). In general, the \({a}_{3}\) values were larger than the \({a}_{1}\) and \({a}_{2}\) values in all four items, although \({a}_{1}\) and \({a}_{3}\) had similar values in Item 4. The NRM curves in Fig. 1 show that the probability of the memorization strategy (P1/blue) was slightly higher than that of the metacognitive strategy (P3/green) in Item 4 when \(\theta\) (the learning strategy score) was approximately 6.0. Thus, in general, the learning strategy score in the NRM might reflect the use of metacognitive strategies together with memorization strategies. A higher \(\theta\) (higher learning strategy score) implied more frequent use of metacognitive strategies with memorization strategies, depending on the context. For example, students who had a high learning strategy score tended to use memorization in Item 4 but metacognition in Items 2 and 3.
The relationship between learning strategy and mathematics achievement
To summarize briefly, the correlation analysis showed that the learning strategy score from the NRM correlated positively with mathematics achievement. From the raw score correlations, we found that using multiple learning strategies depending on the item was more effective for South Korean students than using a single strategy across items. Finally, we explored a curvilinear relationship by adding quadratic and cubic terms of the learning strategy score from the NRM and found a significant negative quadratic association with mathematics achievement. More detailed results are presented below.
Linear relationship (1): correlation between learning strategy score and mathematics score
All correlation coefficients between learning strategy scores from the NRM and mathematics scores were significantly larger than zero (p < .05); that is, the confidence intervals of the correlations did not include zero. The mean of the correlation coefficients was 0.18 (SE = 0.00075, range = 0.17–0.22). The results indicate that higher learning strategy scores tended to accompany higher mathematics scores, and vice versa. Thus, the South Korean students who primarily used the metacognitive strategy together with memorization, depending on the context, obtained high scores on mathematics exams.
Linear relationship (2): correlation between learning strategy raw score and mathematics score
The second correlation analysis, between the raw scores of the learning strategies and the mathematics score, was then conducted. The correlation based on raw scores served as the baseline for understanding to what extent the students’ learning strategy scores were related to mathematics achievement. The correlation coefficients between the raw score of each single learning strategy and the mathematics score were all significantly different from zero (p < .05). The mean correlation coefficient between the raw score of the metacognitive strategy and the mathematics score (across the five plausible values) was 0.12. In contrast, the mean correlation coefficient between the elaboration strategy and the mathematics score was negative (ρ = −0.04), and that between the memorization strategy and the mathematics score was −0.10. These results suggest that students who used metacognition exclusively tended to achieve higher mathematics scores than those who used elaboration or memorization exclusively; more specifically, the sole use of memorization or elaboration strategies was negatively associated with mathematics scores.
The mean correlation coefficient between the raw score of metacognitive strategy and mathematics score was 0.12, which was less than 0.18 (i.e., the mean correlation coefficient between the learning strategy score from the NRM and mathematics score). This indicates that students using multiple learning strategies depending on learning situations had higher mathematics achievement scores than those who used only metacognitive strategies across all situations, in line with previous research (Wu et al., 2020).
Nonlinear relationship: quadratic and cubic relationship between learning strategy score and mathematics score
The quadratic regression coefficients were significantly different from zero (p < .001). The average R-squared difference between the quadratic regression model and the linear regression model was 0.005812. The quadratic regression model showed a better fit than the linear model: the mean BIC for the quadratic regression was smaller than that of the linear model (BIC = 39,802 and 39,807, respectively). In the comparison between the two nonlinear regression models, the quadratic model fit the data better than the cubic model (BIC = 39,808). In addition, the average coefficient of the cubic term in the cubic regression model did not differ significantly from zero (average B_{3} = 0.873, average p value = 0.46 across the five plausible values of B_{3}). These findings confirmed our expectation that neither a linear nor a cubic relationship fully captures the relationship between students’ learning strategies and mathematics performance.
We presented a significance test for each of the individual regression coefficients with 95% confidence intervals on the mean of the five regression coefficients (see Table 8). We also summarized both linear and quadratic regression models in the scatter plot (see Fig. 2).
Table 8 shows that the mean of the standardized quadratic regression coefficients of the learning strategy scores (i.e., \({LS}^{2}\)) was significantly negative at −0.0667 (p < .001). The negative average quadratic coefficient implies that the initially positive association between the learning strategy score and mathematics achievement diminished slightly and eventually became negative as the learning strategy score increased.
The scatter plot in Fig. 2 shows both the positive linear relationship and the negative quadratic relationship between the learning strategy score from the NRM (x-axis) and the mathematics score. To simplify the scatter plot, we used the first plausible value of the learning strategy scores and the first plausible value of the mathematics scores as the x-variable and y-variable, respectively, rather than all five plausible values. The red line and blue curve in Fig. 2 are the fitted linear regression line and the fitted quadratic regression curve, respectively. The linear regression line (red) was fitted as \(\widehat{Y}= 16.601 LS1+554.241\), where \(\widehat{Y}\) is the predicted mathematics achievement score and LS1 is the first plausible value of the learning strategy scores in the NRM (i.e., the x-variable in Fig. 2). The coefficient of LS1 in the linear equation was 16.601 (CI = [13.21, 19.99]), which was significantly larger than zero, with t(3307) = 9.603 and p < .001; the standardized coefficient of LS1 was 0.164. The blue curve in Fig. 2 indicates the fitted quadratic regression equation with predictors of both \(LS1\) and \({LS1}^{2}\), which was \(\widehat{Y}= 16.224LS1+\left(-3.272\right){LS1}^{2}+557.374\), where the coefficient of \({LS1}^{2}\) was \(-3.272\) with a 95% confidence interval not including zero (CI = [−5.774, −0.771]); the hypothesis test for the coefficient of \({LS1}^{2}\) showed a significantly negative value, with t(3307) = −2.565 and p < .001. Likewise, as shown in Table 8, the negative quadratic relationship between the variables took the shape of an inverted U-curve rather than a straight line. The implications of the inverted U-curve are presented in the discussion section. Please note that the difference in BIC between the linear and quadratic models and the effect size of the quadratic term were both small, so we may not have strong evidence to reject a purely linear relationship between the learning strategy score and mathematics performance.
Our conclusion thus remains that the higher a student’s learning strategy score, the better his or her mathematics performance.
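As a quick check on the shape of the fitted curve, the turning point of a quadratic \(\widehat{Y}={B}_{0}+{B}_{1}X+{B}_{2}{X}^{2}\) lies at \(X^{*}=-{B}_{1}/(2{B}_{2})\). The Python sketch below applies this to the first-plausible-value estimates reported above; it is a descriptive illustration of where the fitted parabola peaks, not an inferential claim.

```python
def quad_vertex(b0, b1, b2):
    """Turning point (x*, y*) of y = b0 + b1*x + b2*x**2, assuming b2 != 0."""
    x_star = -b1 / (2 * b2)
    return x_star, b0 + b1 * x_star + b2 * x_star ** 2

# First-plausible-value coefficients from the fitted quadratic model:
x_star, y_star = quad_vertex(557.374, 16.224, -3.272)
print(round(x_star, 2), round(y_star, 1))  # fitted peak near a strategy score of 2.5
```

Because the quadratic term is small, predicted scores decline only gently to the right of this peak, consistent with the weak evidence against a purely linear trend.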
Discussion
This study aimed to explore the link between South Korean students’ learning strategy use and their achievement on mathematics exams using the NRM. We found that the Korean students who primarily used the metacognitive strategy with memorization, depending on the context, tended to achieve high scores on mathematics exams, although the effect was limited. Our investigation extended previous research in two ways. First, it created scores for learning strategy use with the NRM. Second, it addressed the existence of a curvilinear relationship between learning strategy scores and mathematics achievement, in addition to the linear relationship between the variables, focusing on one of the top-performing East Asian education systems (i.e., South Korea). A more detailed discussion of the findings is presented below.
The curvilinear relationship between learning strategy score and mathematics achievement
The strategy score had a positive linear relationship with mathematics achievement. Even so, Table 8 shows that a linear relationship may not accurately reflect the nature of the association, and Fig. 2 also indicates a curvilinear pattern. The negative quadratic coefficient indicates the presence of a curvilinear association between the learning strategy score and mathematics achievement. Increasing use of metacognitive and memorization strategies was correlated with higher achievement in mathematics up to an optimum value; beyond that point, the association declined slightly as the use of both strategies increased. This nonlinear pattern indicates that excessive use of metacognition and memorization may have diminishing returns for student achievement and that more use of metacognition and memorization does not necessarily lead to better performance. In other words, the combination of metacognition and memorization might not be the best strategy combination for every high-performing Korean student. Our finding is also in line with a previous study (Wu et al., 2020), which suggested that the 14.2% of students who primarily used metacognition with elaboration performed slightly better on mathematics exams than the 65% of students who primarily used metacognition with memorization. Nevertheless, the effect size of the curvilinear model is small; Cohen (1992) suggested that 0.02 reflects a small effect size.
In addition, two possible responses to Item 4, memorization and metacognition, would both lead to a high learning strategy score. This could be one reason for the negative nonlinear relationship between the learning strategy score and mathematics achievement. In other words, both the students who used metacognition for all items and those who used metacognition for Items 1, 2, and 3 but memorization for Item 4 received a high learning strategy score. The former could obtain lower mathematics scores than the latter because, according to the item contents (see Table 1), the memorization strategy in Item 4 is more effective than the metacognitive strategy in Item 4. A more detailed explanation of the high-frequency use of the memorization strategy in Item 4 is given in a later subsection.
Use of metacognitive strategies with other learning strategies
We found that high-performing students in South Korea reported heavy use of metacognitive strategies together with memorization strategies. This implies that the students did not use metacognitive or memorization strategies alone, which is in line with previous studies (Nathan, 2021; Quigley et al., 2018). Nathan (2021) suggested that metacognition can only be developed in within-subject or content-based lessons and alongside other learning strategies. Thus, metacognitive strategies rely on the use of other cognitive strategies (e.g., memorization and elaboration) and on content that learners can use to plan, monitor, and evaluate. For example, if students who are self-regulated learners (Zimmerman, 1986) were asked to solve a math question involving mathematical formulas, they would start with some knowledge of the task and strategies. They could utilize one of the formulas that they already knew (i.e., an elaboration strategy). In the process of recalling possible formulas, it is necessary to understand each formula and practice its use repeatedly in advance (i.e., memorization). Students could then evaluate their overall success and check whether they were correct; if their answers were wrong, they could try other strategies (Quigley et al., 2018). Therefore, the finding that high-achieving students in South Korea use mixed learning strategies makes sense.
Variation in the use of memorization learning strategies
More than half of the students reported using the memorization strategy in Item 4 (i.e., In order to remember the method for solving a mathematics problem, I go through examples again and again). Although memorization is generally regarded as a relatively inefficient strategy (e.g., rote learning), the memorization strategy in Item 4 is less closely aligned with the rote learning concept than the memorization strategies in Items 1 and 3 (Wu et al., 2020). In fact, “going through examples” represents a common practice method in mathematics learning, especially in the introductory stages (Dinsmore & Alexander, 2016).
In South Korea, the most common way to learn mathematics in class is by doing different examples repeatedly, regardless of the students’ level. The variation of the examples is associated with the student’s mathematics level or stage of the mathematics learning process. In the beginning stages of learning, most students do examples with minor variations (e.g., changed numbers or operation signs such as \(\pm\) and \(\times\)). It is common for low-achieving students to do less varied examples, or even the same examples from the textbook repeatedly, which can lead to rote learning. When relatively high-achieving students do examples with increasing variation, this can be considered a “route to understanding” (Marton & Booth, 1997; Hess & Azuma, 1991). High-performing students even create and solve examples of their own.
In South Korea, students are usually encouraged to make their own review notes for wrong answers, called Odabnote (i.e., incorrect answer notes or incorrect notes), particularly for mathematics exams (Moon, 2019). After exams, they take notes to review the wrong answers: they record what they did wrong and why it was wrong and even develop new questions based on the concepts they got wrong. Then, before exams, they review their notes by going through not only the same questions but also their own examples. Thus, frequent use of a memorization strategy such as that in Item 4 does not necessarily mean rote learning. Nevertheless, to examine whether the use of memorization strategies leads to rote learning, further research with different methods, such as cognitive labs or think-aloud protocols, is needed.
Use of learning strategies in the South Korean education system
The majority of PISA test-taking students in South Korea (79.8%) are from general secondary schools, which are academically oriented, sometimes called college preparatory schools, and where most Korean secondary school students are enrolled (Kim & Byun, 2014). Most students consider university entrance exams to be very important (Lee, 2010; Ripley, 2013), prompting them to study mathematics, one of the core subjects that determines their future college options (Hwang, 2001; Yoon et al., 2021). According to the OECD, South Korean high school students study mathematics for 10.4 h per week on average, about 3 h more than the OECD country average (7.6 h; Lee, 2014). In addition, 50.2% of South Korean students engage in private tutoring (at a hagwon or through informal private instruction by a university student) to study mathematics, a higher share than for other subjects (e.g., English, Korean, and science).
Considering how much time they spend studying mathematics and their reasons for doing so, we can understand why the fewest South Korean students reported using the elaboration strategy (19.3%) in Item 2, while more than half (51.1%) reported using the metacognitive strategy. The elaboration strategy in Item 2 (When I study mathematics, I think of new ways to get the answer) might not be an efficient way to learn mathematics, especially for 15-year-olds who learn mathematics in a highly stressful and competitive environment. If they already know how to solve a problem, they do not need to find another way; they are more likely to spend time determining what they do not understand (i.e., a metacognitive learning strategy) to obtain more correct answers on their mathematics exams. This may explain why more than half of the students chose the metacognitive strategy in Item 2 (When I study mathematics, I try to figure out which concepts I still have not understood properly). Likewise, relatively few students reported using the elaboration strategy in Items 3 and 4. These items asked whether students thought about and related their knowledge to other subjects (Item 3) or their lives (Item 4), both of which are unnecessary for finding an answer on a mathematics exam. This finding is related to why the raw score of the elaboration strategy was negatively correlated with mathematics achievement. Even if the use of elaboration strategies deepens learners’ understanding of knowledge and leads to high-quality learning outcomes (Marton & Säljö, 1976; Prosser & Millar, 1989), it does not necessarily mean that students will obtain high scores on mathematics exams.
In contrast, 62.5% of South Korean students reported using metacognitive strategies in Item 3 (When I study mathematics, I start by working out exactly what I need to learn). As the importance of metacognition is emphasized in education, education stakeholders in South Korea, including private cram schools, are very interested in metacognitive learning strategies (Ji, 2021). Not only has the school curriculum focused on how to teach these strategies, but more cram schools are also advertising themselves with the slogan “The secret to getting 100% on a mathematics exam: metacognitive learning strategies.” Although metacognition should not be misunderstood as a process of verifying true and false, right and wrong, or good and bad (Park, 2021), South Korean education stakeholders could misuse metacognitive strategies to verify which mathematical knowledge is helpful for performing well on exams. Thus, it is probable that South Korean PISA test-taking students read Item 3 as “When I study mathematics, I start by working out exactly what I need to learn for the mathematics exam.” If we accept the widely held assumption that metacognition is beneficial, this could, at least in part, be explained by its close relationship to self-regulation (Efklides, 2011; Norman, 2020; Zimmerman, 2008). Therefore, future studies should investigate how South Korean students use metacognitive strategies to illustrate how they can produce positive effects.
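The category-choice patterns discussed above are what the NRM (Bock, 1972) models directly: every response option of an item receives its own slope and intercept, and the probability of choosing an option is a softmax over them. As a minimal numerical sketch (in Python, although the study itself used the mirt package in R), with illustrative parameters rather than the estimated ones:

```python
import numpy as np

def nrm_category_probs(theta, a, c):
    """Nominal response model (Bock, 1972): P(category k | theta) is a
    softmax over a_k * theta + c_k, one slope/intercept per category."""
    z = np.outer(theta, a) + c            # shape: (n_students, n_categories)
    z -= z.max(axis=1, keepdims=True)     # guard against overflow
    ez = np.exp(z)
    return ez / ez.sum(axis=1, keepdims=True)

# Illustrative (not estimated) parameters for one three-option item,
# ordered as memorization, elaboration, metacognitive; the first
# category is fixed at 0 for identification, a common NRM constraint.
a = np.array([0.0, -0.4, 0.9])
c = np.array([0.0, 0.2, -0.5])
theta = np.array([-2.0, 0.0, 2.0])        # low-, mid-, high-trait students

probs = nrm_category_probs(theta, a, c)
# Each row sums to 1; the high-slope (metacognitive) option becomes the
# most likely choice as theta increases.
print(probs.round(3))
```

Because each category carries its own slope, the NRM can order the categories empirically, which is why it can outperform the GPCM when response options have no a priori ordering.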
Limitations
This study has some notable limitations. First, as we focused on the Korean context, the degree to which the findings generalize to other populations is uncertain. Thus, to generalize the relationship between learning strategy and mathematics achievement to other countries, other factors, such as cultural context, should be considered. Second, this study is based on self-reported learning strategy data, which may not mirror students’ actual learning strategy use. A follow-up study in which learning strategy is assessed using different methodologies (e.g., observational data, think-aloud, and retrospective think-aloud protocols) would add to the weight of these findings (Wu et al., 2020). Third, the conclusions of this study are based on only the four items included in PISA 2012, so generalization to the broader constructs of “metacognition” and “memorization” may be limited. Developing a longer learning strategy survey would be desirable in future investigations. Fourth, the present study focuses on learning strategies and mathematics achievement without accounting for other psychological variables (e.g., motivation and behavior) suggested by the SRL theoretical framework (Wu et al., 2020), or for other test-taking strategies. To fully understand South Korean students’ high achievement in mathematics, further studies need to consider psychological characteristics and other practical strategies that students might use for exams.
Conclusion
This research explored the relationship between learning strategy use and mathematics achievement in the South Korean education system using the NRM. The findings show that frequent use of metacognitive strategies combined with memorization is positively related to South Korean students’ mathematics achievement up to an optimum value. We extended earlier research by creating learning strategy scores via the NRM. Our results also provide insight into the multifaceted nature of the association between learning strategy use and mathematics achievement by confirming the existence of a curvilinear relationship. We also discussed the relationship between the variables in the context of the South Korean education system and highlighted the need for further studies on how students use each learning strategy within a specific education system. Overall, these results are useful for understanding how South Korean students use learning strategies for mathematics achievement.
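The curvilinear finding can be made concrete with a small simulation: when the standardized quadratic coefficient b2 is negative, fitted achievement peaks at the vertex of the parabola, -b1/(2*b2), which is the "optimum" strategy score. The following sketch uses simulated data (not the PISA sample) and assumes coefficients of roughly the reported magnitude:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: achievement rises with the (standardized) strategy
# score but flattens and turns, i.e., a negative quadratic coefficient.
n = 3310                                   # sample size matching the study
score = rng.normal(size=n)
achieve = 0.2 * score - 0.09 * score**2 + rng.normal(size=n)

# Ordinary least squares for achievement = b0 + b1*score + b2*score^2.
X = np.column_stack([np.ones(n), score, score**2])
b0, b1, b2 = np.linalg.lstsq(X, achieve, rcond=None)[0]

# With b2 < 0 the parabola opens downward, so the positive association
# holds only up to the vertex.
optimum = -b1 / (2 * b2)
print(f"b1 = {b1:.3f}, b2 = {b2:.3f}, optimum score = {optimum:.2f}")
```

In the actual PISA analysis this regression would be repeated over the plausible values of mathematics achievement and the results pooled, following the standard OECD procedure (OECD, 2009).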
Data, Materials and Code availability
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.
References
Areepattamannil, S., & Caleon, I. S. (2013). Relationships of cognitive and metacognitive learning strategies to mathematics achievement in four high-performing East Asian education systems. The Journal of Genetic Psychology, 174(6), 696–702. https://doi.org/10.1080/00221325.2013.799057.
Artz, A. F., & Armour-Thomas, E. (2009). Development of a cognitive-metacognitive framework for protocol analysis of mathematical problem solving in small groups. Cognition and Instruction. Advance online publication. https://doi.org/10.1207/s1532690xci0902_3.
Biggs, J. B. (1987). Student approaches to learning and studying. Learning process questionnaire manual. Australian Council for Educational Research. https://eric.ed.gov/?id=ED308199.
Biggs, J. (1993). What do inventories of students’ learning processes really measure? A theoretical review and clarification. British Journal of Educational Psychology, 63(1), 3–19. https://doi.org/10.1111/j.20448279.1993.tb01038.x.
Biggs, J. (1998). Learning from the Confucian heritage: So size doesn’t matter? International Journal of Educational Research, 29(8), 723–738. https://doi.org/10.1016/S08830355(98)000603.
Blazer, C. (2012). Is South Korea a case of high-stakes testing gone too far? Information Capsule No. 1107. Research Services, Miami-Dade County Public Schools. https://eric.ed.gov/?id=ED536521.
Borkowski, J. G., & Thorpe, P. K. (1994). Self-regulation and motivation: A life-span perspective on underachievement. In D. H. Schunk & B. J. Zimmerman (Eds.), Self-regulation of learning and performance: Issues and educational applications (pp. 45–73). Lawrence Erlbaum Associates, Inc.
Bozdogan, H. (1987). Model selection and Akaike’s Information Criterion (AIC): The general theory and its analytical extensions. Psychometrika, 52(3), 345–370. https://doi.org/10.1007/BF02294361.
Campione, J. C., Brown, A. L., Ferrara, R. A., & Bryant, N. R. (1984). The zone of proximal development: Implications for individual differences and learning. New Directions for Child and Adolescent Development, 1984(23), 77–91. https://doi.org/10.1002/cd.23219842308.
Carver, C. S., & Scheier, M. F. (1981). The self-attention-induced feedback loop and social facilitation. Journal of Experimental Social Psychology, 17(6), 545–568. https://doi.org/10.1016/00221031(81)900391.
Chalmers, R. P., & Ng, V. (2017). Plausible-value imputation statistics for detecting item misfit. Applied Psychological Measurement, 41(5), 372–387. https://doi.org/10.1177/0146621617692079.
Chalmers, P., Pritikin, J., Robitzsch, A., Zoltak, M., Kim, K., Falk, C. F., Meade, A., Schneider, L., King, D., Liu, C. W., & Oguzhan, O. (2022). mirt: Multidimensional item response theory (Version 1.31) [Computer software]. https://cran.r-project.org/web/packages/mirt/mirt.pdf.
Cheng, Y., Yuan, K. H., & Liu, C. (2012). Comparison of reliability measures under factor analysis and item response theory. Educational and Psychological Measurement, 72(1), 52–67. https://doi.org/10.1177/0013164411407315.
Chiu, M. M., Chow, B. W. Y., & McBride-Chang, C. (2007). Universals and specifics in learning strategies: Explaining adolescent mathematics, science, and reading achievement across 34 countries. Learning and Individual Differences, 17(4), 344–365. https://doi.org/10.1016/j.lindif.2007.03.007.
Choi, Y., Kim, S., & Hong, W. P. (2019). Is the role of cultural capital in student achievement in South Korea different? A systematic review. British Journal of Sociology of Education, 40(6), 776–794. https://doi.org/10.1080/01425692.2019.1592662.
Cohen, J. (1992). A power primer. Psychological Bulletin, 112(1), 155. https://doi.org/10.1037/00332909.112.1.155.
Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2002). Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.). Routledge. https://doi.org/10.4324/9780203774441.
R Core Team (2020). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/.
Corno, L. (1986). The metacognitive control components of self-regulated learning. Contemporary Educational Psychology, 11(4), 333–346. https://doi.org/10.1016/0361476X(86)900299.
Corno, L., & Mandinach, E. B. (2009). The role of cognitive engagement in classroom learning and motivation. Educational Psychologist. Advance online publication. https://doi.org/10.1080/00461528309529266.
Dalal, D. K., & Zickar, M. J. (2012). Some common myths about centering predictor variables in moderated multiple regression and polynomial regression. Organizational Research Methods, 15(3), 339–362. https://doi.org/10.1177/1094428111430540.
Bock, R. D. (1972). Estimating item parameters and latent ability when responses are scored in two or more nominal categories. Psychometrika, 37(1), 29–51. https://doi.org/10.1007/BF02291411.
De Clercq, A., Desoete, A., & Roeyers, H. (2000). EPA2000: A multilingual, programmable computer assessment of offline metacognition in children with mathematical-learning disabilities. Behavior Research Methods, Instruments, & Computers, 32(2), 304–311. https://doi.org/10.3758/BF03207799.
Dent, A. L., & Koenka, A. C. (2016). The relation between self-regulated learning and academic achievement across childhood and adolescence: A meta-analysis. Educational Psychology Review, 28(3), 425–474. https://doi.org/10.1007/s1064801593208.
Desoete, A., Roeyers, H., & Buysse, A. (2001). Metacognition and mathematical problem solving in Grade 3. Journal of Learning Disabilities, 34, 435–449. https://doi.org/10.1177/002221940103400505.
Dignath, C., & Büttner, G. (2008). Components of fostering self-regulated learning among students: A meta-analysis on intervention studies at primary and secondary school level. Metacognition and Learning, 3(3), 231–264. https://doi.org/10.1007/s114090089029x.
Dinsmore, D. L., & Alexander, P. A. (2016). A multidimensional investigation of deep-level and surface-level processing. The Journal of Experimental Education, 84(2), 213–244. https://doi.org/10.1080/00220973.2014.979126.
Dong, H., Ok, H., Lim, H., Jeong, H., Son, S., & Seong, B. (2012). OECD international academic achievement assessment study: PISA 2012 main test implementation report (RRE 201231), 178–179. (in Korean).
Donker, A. S., de Boer, H., Kostons, D., Dignath-van Ewijk, C. C., & van der Werf, M. P. C. (2014). Effectiveness of learning strategy instruction on academic performance: A meta-analysis. Educational Research Review, 11, 1–26. https://doi.org/10.1016/j.edurev.2013.11.002.
Efklides, A. (2011). Interactions of metacognition with motivation and affect in self-regulated learning: The MASRL model. Educational Psychologist, 46(1), 6–25. https://doi.org/10.1080/00461520.2011.538645.
Glogger, I., Schwonke, R., Holzäpfel, L., Nückles, M., & Renkl, A. (2012). Learning strategies assessed by journal writing: Prediction of learning outcomes by quantity, quality, and combinations of learning strategies. Journal of Educational Psychology, 104(2), 452. https://doi.org/10.1037/a0026683.
Hess, R. D., & Azuma, H. (1991). Cultural support for schooling: Contrasts between Japan and the United States. Educational Researcher, 20(9), 2–9. https://doi.org/10.3102/0013189X020009002.
Hong, E., Sas, M., & Sas, J. C. (2006). Testtaking strategies of high and low mathematics achievers. The Journal of Educational Research, 99(3), 144–155. https://doi.org/10.3200/JOER.99.3.144155.
Hwang, Y. (2001). Why do South Korean students study hard? Reflections on Paik’s study. International Journal of Educational Research, 35(6), 609–618. https://doi.org/10.1016/S08830355(02)000149.
Ji, S. (2021, November 11). What is the professionalism of teachers required by the 2022 revised curriculum and high school credit system? Senior teachers’ association emphasizes ‘metacognition and Q&R learning methods’. Education Plus. http://www.edpl.co.kr/news/articleView.html?idxno=3199.
Kember, D. (2016). Why do Chinese students outperform those from the West? Do approaches to learning contribute to the explanation? Cogent Education, 3(1), 1248187. https://doi.org/10.1080/2331186X.2016.1248187.
Kember, D., Biggs, J., & Leung, D. Y. P. (2004). Examining the multidimensionality of approaches to learning through the development of a revised version of the learning process questionnaire. British Journal of Educational Psychology, 74(2), 261–279. https://doi.org/10.1348/000709904773839879.
Kilic, S., Cene, E., & Demir, I. (2012). Comparison of learning strategies for mathematics achievement in Turkey with eight countries. Educational Sciences: Theory and Practice, 12(4), 2594–2598.
Kim, H. (2004). Analyzing the effects of the high school equalization policy and the college entrance system on private tutoring expenditure in Korea. KEDI Journal of Educational Policy, 1(1). https://www.proquest.com/docview/1013966600/abstract/8C5C215702634B8APQ/1.
Kim, K., & Byun, S. (2014). Determinants of Academic Achievement in Republic of Korea. In H. Park & K. Kim (Eds.), Korean Education in Changing Economic and Demographic Contexts (pp. 13–37). Springer. https://doi.org/10.1007/9789814451277_2.
Lee, Y. (2010). Views on education and achievement: Finland’s story of success and South Korea’s story of decline. KEDI Journal of Educational Policy, 7, 379–401.
Lee, S. (2014, July 14). Mathematics scores are not proportional to study time. Science Times. https://bit.ly/3spXR4K.
Lee, J., & Shute, V. J. (2010). Personal and social-contextual factors in K–12 academic performance: An integrative perspective on student learning. Educational Psychologist, 45(3), 185–202. https://doi.org/10.1080/00461520.2010.493471.
Leung, F. K. S. (2014). What can and should we learn from international studies of mathematics achievement? Mathematics Education Research Journal, 26(3), 579–605. https://doi.org/10.1007/s1339401301090.
Li, A. (2011, October 4). Korea cracks down on clandestine study groups. The Toronto Star. http://thestar.com/news/article/1064697.
Lin, S. W., & Tai, W. C. (2015). Latent class analysis of students’ mathematics learning strategies and the relationship between learning strategy and mathematical literacy. Universal Journal of Educational Research, 3(6), 390–395. https://doi.org/10.13189/ujer.2015.030606.
Liu, Q., Du, X., Zhao, S., Liu, J., & Cai, J. (2019). The role of memorization in students’ self-reported mathematics learning: A large-scale study of Chinese eighth-grade students. Asia Pacific Education Review, 20(3), 361–374. https://doi.org/10.1007/s12564019095762.
Marton, F., & Booth, S. A. (1997). Learning and awareness. Psychology Press. https://doi.org/10.4324/9780203053690.
Marton, F., & Säljö, R. (1976). On qualitative differences in learning: I—Outcome and process. British Journal of Educational Psychology, 46(1), 4–11. https://doi.org/10.1111/j.20448279.1976.tb02980.x.
Maydeu-Olivares, A. (2015). Evaluating fit in IRT models. In S. P. Reise & D. A. Revicki (Eds.), Handbook of item response theory modeling: Applications to typical performance assessment (pp. 111–127). Routledge. https://www.taylorfrancis.com/chapters/edit/10.4324/97813157360137/evaluatingfitirtmodelsalbertomaydeuolivares.
McInerney, D. M., Cheng, R. W., Mok, M. M. C., & Lam, A. K. H. (2012). Academic self-concept and learning strategies: Direction of effect on student academic achievement. Journal of Advanced Academics, 23(3), 249–269. https://doi.org/10.1177/1932202X12451020.
Moon, Y. (2019, April 19). How to write an incorrect answer note to be top in the mathematics class. Edugene Internet Education Newspaper. https://www.edujin.co.kr/news/articleView.html?idxno=30692.
Murayama, K., Pekrun, R., Lichtenfeld, S., & vom Hofe, R. (2013). Predicting long-term growth in students’ mathematics achievement: The unique contributions of motivation and cognitive strategies. Child Development, 84(4), 1475–1490. https://doi.org/10.1111/cdev.12036.
Nathan, B. (2021, January 30). 5 myths about metacognition that we need to banish. Tes Magazine. https://www.tes.com/magazine/archived/5mythsaboutmetacognitionweneedbanish.
Newman, D. A. (2014). Missing data: Five practical guidelines. Organizational Research Methods, 17(4), 372–411. https://doi.org/10.1177/1094428114548590.
Norman, E. (2020). Why metacognition is not always helpful. Frontiers in Psychology, 11. https://www.frontiersin.org/article/10.3389/fpsyg.2020.01537.
OECD. (2013). PISA 2012 assessment and analytical framework: Mathematics, reading, science, problem solving and financial literacy. OECD Publishing.
OECD (2014). PISA 2012 technical report. Paris, France: OECD Publishing.
OECD (2005). PISA 2003 technical report. Paris, France: OECD Publishing.
OECD (2009). PISA data analysis manual: SPSS, second edition. OECD. https://doi.org/10.1787/9789264056275en.
OECD (2012). PISA 2009 technical report. Paris, France: OECD Publishing. https://doi.org/10.1787/9789264167872en.
Park, K. (2004). Factors contributing to Korean students’ high achievement in mathematics. Korea, 547, 84. http://matrix.skku.ac.kr/ForICME11/ICME/Chap5(kPark).htm.
Park, H. (2021, July 23). The beginning of empathy is ‘metacognition’. ShinA Ilbo. http://www.shinailbo.co.kr/news/articleView.html?idxno=1439847.
Perels, F., Dignath, C., & Schmitz, B. (2009). Is it possible to improve mathematical achievement by means of self-regulation strategies? Evaluation of an intervention in regular math classes. European Journal of Psychology of Education, 24(1), 17–31. https://doi.org/10.1007/BF03173472.
Pintrich, P. R., & de Groot, E. V. (1990). Motivational and self-regulated learning components of classroom academic performance. Journal of Educational Psychology, 82(1), 33. https://doi.org/10.1037/00220663.82.1.33.
Pires, E. M. S. G., Daniel-Filho, D. A., de Nooijer, J., & Dolmans, D. H. J. M. (2020). Collaborative learning: Elements encouraging and hindering deep approach to learning and use of elaboration strategies. Medical Teacher, 42(11), 1261–1269. https://doi.org/10.1080/0142159X.2020.1801996.
Preston, K., Reise, S., Cai, L., & Hays, R. D. (2011). Using the nominal response model to evaluate response category discrimination in the PROMIS emotional distress item pools. Educational and Psychological Measurement, 71(3), 523–550. https://doi.org/10.1177/0013164410382250.
Prosser, M., & Millar, R. (1989). The how and what of learning physics. European Journal of Psychology of Education, 4(4), 513–528. https://doi.org/10.1007/BF03172714.
Quigley, A., Muijs, D., & Stringer, E. (2018). Metacognition and self-regulated learning: Guidance report. Education Endowment Foundation.
Ramsden, P. (1988). Context and strategy. In R. R. Schmeck (Ed.), Learning strategies and learning styles (pp. 159–184). Springer. https://doi.org/10.1007/9781489921185_7.
Ripley, A. (2013). The smartest kids in the world: And how they got that way. Simon and Schuster.
Rosander, P., & Bäckström, M. (2012). The unique contribution of learning approaches to academic performance, after controlling for IQ and personality: Are there gender differences? Learning and Individual Differences, 22(6), 820–826. https://doi.org/10.1016/j.lindif.2012.05.011.
Schoenfeld, A. H. (2016). Learning to think mathematically: Problem solving, metacognition, and sense making in mathematics (reprint). Journal of Education, 196(2), 1–38. https://doi.org/10.1177/002205741619600202.
Schwarz, G. (1978). Estimating the dimension of a model. The Annals of Statistics, 6(2), 461–464. https://doi.org/10.1214/aos/1176344136.
Shi, D., Maydeu-Olivares, A., & Rosseel, Y. (2020). Assessing fit in ordinal factor analysis models: SRMR vs. RMSEA. Structural Equation Modeling: A Multidisciplinary Journal, 27(1), 1–15.
Thissen, D., Cai, L., & Bock, R. D. (2010). The nominal categories item response model. In M. L. Nering & R. Ostini (Eds.), Handbook of polytomous item response theory models. Routledge.
Trigwell, K., & Prosser, M. (1991). Improving the quality of student learning: The influence of learning context and student approaches to learning on learning outcomes. Higher Education, 22(3), 251–266. https://doi.org/10.1007/BF00132290.
Walker, C. O., Greene, B. A., & Mansell, R. A. (2006). Identification with academics, intrinsic/extrinsic motivation, and self-efficacy as predictors of cognitive engagement. Learning and Individual Differences, 16(1), 1–12. https://doi.org/10.1016/j.lindif.2005.06.004.
Weinstein, C. E., & Mayer, R. E. (1986). The teaching of learning strategies. In M. Wittrock (Ed.), Handbook of research on teaching (pp. 315–327). Macmillan.
Wolters, C. A. (2004). Advancing achievement goal theory: Using goal structures and goal orientations to predict students’ motivation, cognition, and achievement. Journal of Educational Psychology, 96(2), 236–250. https://doi.org/10.1037/00220663.96.2.236.
Wu, Y. J., Carstensen, C. H., & Lee, J. (2020). A new perspective on memorization practices among East Asian students based on PISA 2012. Educational Psychology, 40(5), 643–662. https://doi.org/10.1080/01443410.2019.1648766.
Yoon, H., Bae, Y., Lim, W., & Kwon, O. N. (2021). A story of the national calculus curriculum: How culture, research, and policy compete and compromise in shaping the calculus curriculum in South Korea. ZDM – Mathematics Education, 53(3), 663–677. https://doi.org/10.1007/s1185802001219w.
Zimmerman, B. J. (1986). Becoming a self-regulated learner: Which are the key subprocesses? Contemporary Educational Psychology, 11(4), 307–313. https://doi.org/10.1016/0361476X(86)900275.
Zimmerman, B. J. (1990). A social cognitive view of self-regulated academic learning. Journal of Educational Psychology, 81(3), 329. https://doi.org/10.1037/00220663.81.3.329.
Zimmerman, B. J. (2001). Theories of self-regulated learning and academic achievement: An overview and analysis. In B. J. Zimmerman & D. H. Schunk (Eds.), Self-regulated learning and academic achievement: Theoretical perspectives (pp. 1–39). Lawrence Erlbaum Associates.
Zimmerman, B. J. (2008). Investigating self-regulation and motivation: Historical background, methodological developments, and future prospects. American Educational Research Journal, 45, 166–183. https://doi.org/10.3102/0002831207312909.
Zimmerman, B. J., & Pons, M. M. (1986). Development of a structured interview for assessing student use of self-regulated learning strategies. American Educational Research Journal, 23(4), 614–628. https://doi.org/10.3102/00028312023004614.
Zimmerman, B. J., & Pons, M. M. (1988). Construct validation of a strategy model of student self-regulated learning. Journal of Educational Psychology, 80(3), 284–290. https://doi.org/10.1037/00220663.80.3.284.
Zu, J., & Kyllonen, P. C. (2020). Nominal response model is useful for scoring multiple-choice situational judgment tests. Organizational Research Methods, 23(2), 342–366. https://doi.org/10.1177/10944281188.
Acknowledgements
We express our appreciation to our colleagues from the Frontier Research in Educational Measurement (FREMO) research group at the Center for Educational Measurement, University of Oslo (CEMO), for sharing their pearls of wisdom with us during the course of this research.
Funding
This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors for the submitted work.
Author information
Contributions
Jiyoun Kim conceptualized the topic, wrote the manuscript, conducted the statistical analyses, and revised the manuscript. Chia-Wen Chen conceptualized the topic, supervised and managed the project, and edited the manuscript. Yi-Jhen Wu contributed to the revision and editing of the manuscript.
Ethics declarations
Ethical approval and consent
This research discharges the duty imposed by the European Economic Area (EEA) General Data Protection Regulation (GDPR) by following the Norwegian Centre for Research Data (NSD) notification procedure. The Programme for International Student Assessment (PISA) data provided by the Organisation for Economic Co-operation and Development (OECD) contain only aggregated and depersonalized datasets with no possibility of back-tracing to any particular participant. Consequently, no identifiable personal data were collected or used at any stage of this research.
Consent for publication
We, Jiyoun Kim, Chia-Wen Chen, and Yi-Jhen Wu, give our consent for the publication of identifiable details, which can include photograph(s) and/or videos and/or case history and/or details within the text (“Material”), in Large-scale Assessments in Education. We confirm that we have seen and been given the opportunity to read both the Material and the Article to be published by Large-scale Assessments in Education. We understand that Large-scale Assessments in Education may be available both in print and on the internet, and will be available to a broader audience through marketing channels and other third parties; therefore, anyone can read material published in the journal. We understand that readers may include not only educational assessment professionals and scholarly researchers but also journalists and general members of the public.
Competing interests
The authors declared no potential conflicts of interest, financial or non-financial, with respect to the research, authorship, and/or publication of this article.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Kim, J., Chen, C.-W., & Wu, Y.-J. Exploration of the linear and nonlinear relationships between learning strategies and mathematics achievement in South Korea using the nominal response model: PISA 2012. Large-scale Assess Educ 12, 11 (2024). https://doi.org/10.1186/s40536024001988