Abstract: Although a caregiver tries to be consistent while teaching a baby, she cannot guarantee that she never makes errors. The same is true in a digital game in which the human player must teach a non-player character (NPC). In this work, we report how a teacher can successfully train Developmental Networks (DNs) even when she cannot guarantee an error-free sequence of motor-supervised teaching. We establish that, under certain conditions, a DN tolerates a significant number of errors in a teaching sequence as long as the errors do not overwhelm the correct motor supervisions in terms of the Z-normalized frequency. We also provide theoretical arguments for why task-nonspecific agents such as DNs add a new dimension to the play value of future digital games. The emergent representations in a DN not only abstract well, like a symbolic representation (e.g., a finite automaton), but also avoid the exponential complexity of the traditional symbolic representations currently prevailing in artificial intelligence (AI) and digital gaming. The experimental results show that the speed of convergence to correct actions depends on the error rate in training.
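The sketch below is a minimal illustration, not the authors' DN implementation: it assumes a simple frequency-count model in which each context keeps running counts of the motor (Z) supervisions it receives, and the action with the highest normalized frequency wins. The function name `teach`, the action labels, and the error-rate values are all illustrative. It shows the abstract's two points: erroneous supervisions are tolerated as long as they stay below the correct ones, and higher error rates slow convergence to the correct action.

```python
# Toy model (assumed, not the paper's code) of frequency-based motor
# supervision tolerating teaching errors.
import random
from collections import defaultdict

def teach(num_steps, error_rate, correct_action, wrong_action, seed=0):
    rng = random.Random(seed)
    counts = defaultdict(int)      # supervision counts per motor action
    stable_from = None             # first step after which the winner stays correct
    for t in range(1, num_steps + 1):
        # The teacher supervises the wrong action with probability error_rate.
        action = wrong_action if rng.random() < error_rate else correct_action
        counts[action] += 1
        total = sum(counts.values())
        # Pick the action with the highest normalized supervision frequency.
        winner = max(counts, key=lambda a: counts[a] / total)
        if winner == correct_action and stable_from is None:
            stable_from = t
        elif winner != correct_action:
            stable_from = None     # correct action lost the lead; reset
    return winner, stable_from

for p in (0.1, 0.3, 0.45):
    w, step = teach(1000, p, "grasp", "release")
    print(f"error rate {p:.2f}: final action = {w}, stable from step {step}")
```

Running this, the final action remains correct for all three error rates, but the step at which it becomes stably correct grows with the error rate, mirroring the reported dependence of convergence speed on the training error rate.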
Pages: 43-50