Please use this identifier to cite or link to this item: http://hdl.handle.net/1946/35759
The goal of this thesis was to develop models and methods to evaluate learning. To do so, data from tutor-web.net, an online repository of lectures, examples, and multiple-choice questions for various subjects in mathematics, was examined. The data was generated by students enrolled in Applied Mathematical Analysis at the University of Iceland in 2018 and 2019. First, statistical models were developed to determine which factors influenced students' performance on the multiple-choice questions. Then, several hypothesis tests for detecting learning were proposed; the corresponding test statistics that rejected the null hypothesis with the greatest power were derived and evaluated with simulations. The results indicate that the questions on tutor-web are too easy, that students are given easy questions for too long, or potentially both. In either case, some of the inner workings of tutor-web need to be redesigned with these results in mind.
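The abstract mentions evaluating the power of learning-detection tests with simulations but gives no specifics. As a minimal illustrative sketch (not the thesis's actual method), one could simulate students whose success probability on multiple-choice questions rises over a question sequence, apply a simple one-sided two-proportion z-test comparing first-half and second-half success rates, and estimate power as the rejection rate over repeated simulations. All parameter names and values here (`n_students`, `p0`, `gain`, etc.) are hypothetical.

```python
import math
import random


def simulate_power(n_students=200, n_questions=20, p0=0.6, gain=0.15,
                   n_sims=200, seed=1):
    """Estimate the power of a one-sided two-proportion z-test that
    compares each student's success rate on the first half of a question
    sequence with the second half (a crude proxy for 'learning').

    gain=0 corresponds to the null hypothesis of no learning, so the
    returned value should then be close to the nominal alpha of 0.05.
    """
    rng = random.Random(seed)
    half = n_questions // 2
    rejections = 0
    for _ in range(n_sims):
        first = second = 0
        for _student in range(n_students):
            for q in range(n_questions):
                # Success probability rises linearly from p0 to p0 + gain.
                p = p0 + gain * q / (n_questions - 1)
                hit = rng.random() < p
                if q < half:
                    first += hit
                else:
                    second += hit
        n1 = n_students * half
        n2 = n_students * (n_questions - half)
        p1, p2 = first / n1, second / n2
        pooled = (first + second) / (n1 + n2)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        z = (p2 - p1) / se
        if z > 1.645:  # one-sided critical value at alpha = 0.05
            rejections += 1
    return rejections / n_sims
```

Under these (assumed) settings the test has high power when there is genuine improvement, while with `gain=0.0` the rejection rate stays near the nominal 5% level; comparing such curves across candidate statistics is one way to pick the most powerful test.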