Please use this identifier to cite or link to this item: http://hdl.handle.net/1946/36536
While research progresses in artificial general intelligence (AGI), as well as in traditional narrow AI such as machine learning algorithms, direct comparisons between AI learners remain few and far between. Evaluating and comparing the performance of different architectures is likely to prove valuable, as is estimating the generality of an architecture by measuring traits such as transfer learning ability and knowledge retention. In this context, transfer learning ability describes how knowledge gathered while learning one task speeds up the learning of a similar task, while knowledge retention describes how well the first task is still performed after a second task has also been learned. We examine how OpenNARS for Applications (ONA), a version of the AGI-aspiring Non-Axiomatic Reasoning System, measures up against traditional narrow-AI implementations on these metrics. Few, if any, tools allow such a direct comparison of AGI-aspiring and more traditional narrow-AI systems; an exception is the Simulator for Autonomy & Generality Evaluation (SAGE). We tested ONA in SAGE, using tasks identical to those previously tested on a double deep Q network (DDQ) and an actor-critic (AC) agent. ONA vastly outperformed the narrow-AI learners in proficiency and learning speed, but it failed when one task variable was randomized or when its actions were swapped. The results indicate that the DDQ is the most general of the compared agents, though it remains an open question whether further tuning of the interface between SAGE and ONA would lead to different results.
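The two traits defined above can be made concrete with simple ratio-style measures. The sketch below is purely illustrative and is not the measurement procedure used in SAGE or in the thesis; the function names, the episode-count formulation of transfer, and the score-ratio formulation of retention are all assumptions introduced here for clarity.

```python
# Hypothetical sketch: ratio-style metrics in the spirit of the
# abstract's definitions. These formulas are illustrative assumptions,
# not the evaluation protocol used in SAGE.

def transfer_ratio(episodes_from_scratch: int, episodes_after_transfer: int) -> float:
    """How much faster task B is learned after first learning task A.

    Values above 1.0 mean prior knowledge sped up learning;
    1.0 means no measurable transfer.
    """
    return episodes_from_scratch / episodes_after_transfer

def retention(score_before: float, score_after: float) -> float:
    """Fraction of task-A performance kept after task B is also learned."""
    return score_after / score_before

# Example: task B took 500 episodes to learn alone but only 100 after
# task A; the task-A score fell from 0.90 to 0.72 once B was learned.
print(transfer_ratio(500, 100))
print(retention(0.90, 0.72))
```

A real evaluation would average such quantities over many task pairs and random seeds; a single run, as sketched here, says little about an architecture's generality.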