Abstract: While the brain develops through interactions with the external environment, it almost never experiences an exact physical event twice. Attended objects and events never appear exactly as they did before, and the background settings differ as well. Nevertheless, the brain can abstract and generalize to deal with such environmental variations. This paper presents a theory on the completeness of the logic capability of the Developmental Network (DN), a simplified brain-like model, under such environmental variations. Various abstractions and generalizations are emergent properties of a DN through its incremental lifetime learning experience. As this process takes place, the network appears to acquire an increasing amount of logic capability in the eyes of human observers. Since it seems impossible to enumerate all kinds of logic capability that a human can acquire over a lifetime, I propose a general, task-nonspecific formulation of logic capability in a DN. I prove that a DN incrementally generates and updates an internal, emergent Finite Automaton (FA), whose complexity grows under the teacher's scaffolding scheme. Such a highly complex FA appears capable of implementing any practical logic. This is a theoretical paper, but it discusses and cites supporting experimental results.
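The abstract's central claim is that an automaton-like structure can be built up incrementally, one taught transition at a time, rather than being programmed in advance. The following minimal sketch (it is not the paper's DN algorithm; the class name, states, and symbols are hypothetical) illustrates that idea: a teacher supervises individual (state, input) → next-state transitions, and a finite automaton emerges from the accumulated experience.

```python
class EmergentFA:
    """A transition table learned incrementally from taught experiences.

    This is an illustrative toy, not the DN model itself: a DN learns
    such transitions via neural weight updates, whereas here the table
    is stored explicitly.
    """

    def __init__(self, start_state):
        self.state = start_state
        self.transitions = {}  # (state, input_symbol) -> next_state

    def teach(self, state, symbol, next_state):
        # Scaffolding: the teacher supervises one transition at a time,
        # incrementally enlarging the emergent automaton.
        self.transitions[(state, symbol)] = next_state

    def step(self, symbol):
        # Follow a learned transition; remain in place if this
        # (state, symbol) pair has not been taught yet.
        self.state = self.transitions.get((self.state, symbol), self.state)
        return self.state


# A tiny lifetime of taught experiences (hypothetical states/symbols):
fa = EmergentFA("rest")
fa.teach("rest", "see_ball", "attend")
fa.teach("attend", "ball_moves", "track")

fa.step("see_ball")    # -> "attend"
fa.step("ball_moves")  # -> "track"
```

Each call to `teach` adds one transition, so the automaton's complexity grows monotonically with experience, mirroring the abstract's claim that the emergent FA becomes increasingly complex under scaffolded teaching.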
Pages: 35-42