There is a pressing need to improve the science achievement of American students, who do not perform well on assessments of scientific problem solving. For example, on the Program for International Student Assessment (PISA), with the latest results published in 2013 from tests taken in 2012, American 15-year-olds ranked 36th out of 40 developed nations in mathematics literacy and problem solving (Augustine, 2005; NCES, 2016). Improving science achievement depends on developing innovative instructional models that will ensure that teachers can provide effective classroom experiences. More specifically, teachers need information about how their students are learning so that they can adapt their instruction to address areas of confusion, and this information must be available in a timely manner.
With the development of increasingly powerful online learning environments, and their coupling to dynamic assessment methodologies, it is now becoming possible to rapidly acquire data linked to students' changing knowledge, skills, and understanding as they engage in real-world complex problem solving.
While it is becoming relatively easy to capture student performance data, a continuing question is how best to extract the most important features of the student data streams and refine them into models (predictive simplifications or abstractions) that can be used to more accurately position students on learning trajectories and to optimize the form of subsequent interventions. A range of tools is being employed in these analyses, including Bayesian networks, computer adaptive testing based on item response theory (IRT), regression models, and artificial neural networks, each of which possesses particular strengths and limitations. One emerging lesson, however, is that a single approach is unlikely to be adequate for modeling the multitude of influences on learning. The technical and conceptual challenges are to develop system architectures that can provide rigorous and reliable measures of student progress, yet can also be progressively scaled and refined in response to evolving student models and new interventional approaches.
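To ground one of these tools, the item response theory component can be illustrated with the one-parameter (Rasch) model, in which the probability of a correct response depends only on the gap between student ability and item difficulty. The sketch below is a minimal, hypothetical illustration of how a computer adaptive test might choose its next item; the function names and the toy item bank are assumptions for illustration, not part of IMMEX™:

```python
import math

def rasch_probability(theta: float, b: float) -> float:
    """Probability that a student with ability `theta` answers an item
    of difficulty `b` correctly, under the one-parameter (Rasch) model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def next_item(theta: float, item_bank: dict) -> str:
    """Adaptive selection sketch: pick the item whose difficulty is
    closest to the current ability estimate, where the Rasch model
    says the response is most informative (p near 0.5)."""
    return min(item_bank, key=lambda item: abs(item_bank[item] - theta))

# Hypothetical item bank mapping item names to difficulty parameters.
bank = {"easy": -1.5, "medium": 0.0, "hard": 1.8}
```

For a student whose current ability estimate is 0.2, `next_item(0.2, bank)` selects `"medium"`, the item with difficulty closest to that estimate. Production systems estimate ability iteratively from response patterns; this sketch shows only the selection step.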
We have addressed these challenges with our online problem-solving and analytic system, IMMEX™ (Interactive Multi-Media Exercises). IMMEX™ supports the development and implementation of problem-solving assessment tasks that require students to analyze descriptive scenarios, judge what information is relevant, plan a search strategy, gather information, and eventually reach a decision (or decisions) that demonstrates understanding (Palacio-Cayetano et al., 2002; Underdahl et al., 2002). The result is a system that implements a full range of formative assessment tools to gather evidence about problem solving, make sense of the data, and help teachers adjust instruction as needed.
Over the years, investigators worldwide have contributed to the knowledge base of factors that are essential for scaling up IMMEX™. They have played critical roles in quantitative and qualitative studies that link problem solving to traditional measures of student achievement and study the effects of prolonged problem solving on these traditional measures. These studies are multidisciplinary, encompassing curriculum development, classroom practice, machine learning, artificial intelligence, and neurophysiology. This kind of researcher collaboration enables the project to determine not only what is required to scale up an intervention, but also what it means to scale it out in a multidisciplinary sense. Below are some sample highlights:
At the achievement level, a two-year study conducted by Fayette County Public Schools compared KCCT science scores between students (n = 689) who had performed IMMEX™ in the classroom (between 5 and 15+ cases) and 549 students in different classrooms of the same teachers who received no IMMEX™ training; KCCT reading scores were included as a covariate. The KCCT academic index for science achievement, calculated by assigning Kentucky Department of Education numeric weights to performance levels, was significantly higher for the experimental group than for the control, non-IMMEX™ group (.95 ± .39 vs. .69 ± .39, F = 25.25, p < .001).
At the curriculum level, IMMEX™ is the basis for an undergraduate problem-solving chemistry curriculum containing 15 problem sets, each with 15+ separate cases (Case et al., 2007; Cox et al., 2007), which, with NSF funding, has been experimentally adopted by the American Chemical Society's Examinations Institute for testing students in alternative formats (Stevens et al., 2007).
In the high-risk research category, investigators have modeled changes in electroencephalography (EEG)-derived measures of cognitive workload, engagement, and distraction as individuals developed and refined their problem-solving skills in science. Subjects performing a series of problem-solving simulations showed decreases in the times needed to solve the problems; however, metrics of high cognitive workload and high engagement remained the same. When these indices were measured within the navigation, decision, and display events in the simulations, significant differences in workload and engagement were often observed.
At the research level, findings observed from middle school through the university are that students initially conduct extensive explorations of the problem space and then begin to refine their strategies as they gain experience. Consistent with models of skill acquisition, after a relatively small number of problem performances (generally 3-6), most students stabilize on preferred strategies that they continue to use for extended periods of time, although some of these strategies may be neither effective nor efficient. Interestingly, after the third or fourth problem performance, verbal protocol analyses also become less detailed. Using this modeling approach, we have also shown that higher-ability students often stabilize strategies more slowly than lower-ability students, perhaps suggesting a richer repertoire of candidate approaches. Although males and females solve the same number and proportion of attempted problems, artificial neural network and hidden Markov modeling techniques showed significant strategic differences across gender.
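The stabilization finding lends itself to a simple operational check. The actual IMMEX™ analyses use artificial neural network and hidden Markov models over rich performance data; the sketch below is only a simplified stand-in that reports the performance at which a student's sequence of strategy labels first repeats for a fixed run length (the run length and the labels are illustrative assumptions):

```python
def stabilization_point(strategies, window=3):
    """Return the 1-based performance index at which a student first
    uses the same strategy label for `window` consecutive performances,
    or None if no stable strategy emerges in the sequence."""
    for i in range(len(strategies) - window + 1):
        run = strategies[i:i + window]
        if len(set(run)) == 1:  # all labels in the run are identical
            return i + 1
    return None
```

For example, a student whose classified strategies across six performances are `["explore", "explore", "prolific", "efficient", "efficient", "efficient"]` would be flagged as stabilizing at performance 4, consistent with the 3-6 performance range reported above.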
At the classroom level, teacher effects have been shown to have the greatest influence on subsequent student problem-solving outcomes, outweighing school, gender, demographic, and prior-achievement effects. Detailed videotape analysis of instruction suggests that the way teachers represent the task to students has a large effect on problem-solving outcomes. The strategies that students stabilize on, and the rates at which they stabilize, depend on whether students work individually or in groups. For most students working in collaborative groups, the average gain in performance by IRT estimates is 10% (Cooper et al., 2007).
From a dissemination perspective, IMMEX™ is rapidly approaching a dataset of one million student problem-solving performances. It contains longitudinal data for many students spanning over a year, during which they solved dozens of problems across multiple science domains.
Deriving such a broad range of results has required both a long-term research commitment and a continual balancing of innovation with the need to satisfy practical classroom goals; review our Publications list for more.