For well-defined inquiry skills, we autoscore students’ inquiry using knowledge-engineering algorithms; for more complex inquiry skills, we use algorithms generated via educational data mining. The former were developed using production rules; the latter, using a combination of text replay tagging and machine learning. By reacting to students’ inquiry strategies in real time, we hypothesize that we will positively affect both students’ science process skills and their content learning. Specifically, we measure students’ inquiry skills in terms of improvement at: generating testable hypotheses, testing their articulated hypotheses (as opposed to testing other, random hypotheses), conducting controlled experiments, correctly interpreting data, and warranting their claims with appropriate data. We measure content knowledge gains using pre- and post-test items from standardized-type content assessments.
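To make the knowledge-engineering approach concrete, the sketch below shows one way a production rule for a well-defined skill, conducting controlled experiments, might be expressed. This is an illustrative sketch only, not the actual rules used in the system; the trial representation and variable names are assumptions.

```python
# A minimal sketch (not the system's actual production rules) of a
# knowledge-engineered check for one well-defined inquiry skill:
# designing a controlled experiment. The dict-based trial
# representation and the variable names are hypothetical.

def is_controlled_comparison(trial_a: dict, trial_b: dict, target_variable: str) -> bool:
    """Return True if the two trials vary ONLY the target variable,
    holding every other variable constant (control-of-variables strategy)."""
    changed = {v for v in trial_a if trial_a[v] != trial_b[v]}
    return changed == {target_variable}

# Example: two trials in a hypothetical ramp simulation.
trial_1 = {"surface": "smooth", "steepness": "high", "ball_mass": "heavy"}
trial_2 = {"surface": "rough",  "steepness": "high", "ball_mass": "heavy"}

print(is_controlled_comparison(trial_1, trial_2, "surface"))  # True: only surface varies
```

A rule of this form fires deterministically, which is why it suits well-defined skills; the more complex skills described below require learned detectors instead.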
In terms of assessment, our detectors, developed using educational data mining, correctly identify when a student is testing their stated hypothesis 86% of the time, and when a student is conducting controlled experiments 85% of the time. The detectors can also accurately predict students’ performance in the next inquiry cycle: 79% of the time for testing stated hypotheses, and 74% of the time for designing controlled experiments. Finally, our measures of these skills correlate significantly with other assessments, including multiple-choice assessments and other performance assessments.
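For readers unfamiliar with this pipeline, detectors of this kind are typically built by having human coders label short clips of student log data (text replay tagging), distilling features from each clip, and training a classifier on the labeled features. The sketch below illustrates that recipe with scikit-learn; the features, labels, and model choice are invented for illustration and are not the actual detectors reported above.

```python
# A hedged sketch of a text-replay-tagging + machine-learning pipeline.
# The features (counts distilled from a student's action log) and the
# labels (1 = clip tagged "testing stated hypothesis") are hypothetical;
# the actual features, labels, and classifier may differ.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Each row: features distilled from one tagged clip of student inquiry,
# e.g. [trials_run, variables_changed, data_table_views, hypothesis_edits].
X = np.array([
    [4, 1, 2, 1],
    [1, 3, 0, 0],
    [5, 1, 3, 2],
    [2, 4, 1, 0],
    [6, 1, 2, 1],
    [1, 5, 0, 0],
])
# Labels assigned by human coders during text replay tagging.
y = np.array([1, 0, 1, 0, 1, 0])

detector = LogisticRegression()
# Cross-validated accuracy estimates how often the trained detector
# agrees with the human tags on held-out clips.
scores = cross_val_score(detector, X, y, cv=3)
print("held-out accuracy:", scores.mean())
```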
We are currently testing the system’s efficacy at honing students’ inquiry skills in real time through a series of randomized controlled studies in our partner schools. Because these students represent a wide range of SES and ethnic backgrounds, our findings should generalize well.