Asim Roy
Arizona State University
Tempe, Arizona USA
asim.roy@asu.edu
Abstract
The main distinction between distributed and localist representation is that in distributed representation, neurons (cells) do not have any “meaning and interpretation,” whereas in localist representation, they do. However, there is a preponderance of evidence from single-cell recordings of animal and human brains that single-cell activations have “meaning and interpretation,” even at the lowest levels of processing in the brain. Another major difference between localist and distributed representation is that in distributed representation there is no concept of category neurons, that is, neurons that represent a category. However, category neurons have been found in many single-cell recordings of animals. Thus, neurophysiological evidence clearly supports the theory that localist representation is widely used in the brain. One can also present theoretical arguments for why distributed representation is neither efficient nor feasible. First, a structure that uses distributed representation is effectively a serial processor. For example, if such a structure (network) is trained to recognize 10 different objects, it can only recognize one object at a time; it cannot recognize all 10 objects simultaneously. Hence distributed representation is not suitable for high-speed parallel processing, particularly when such a structure is tasked with recognizing many different objects or entities and simultaneous recognition of many of them is required.
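To make the serial-versus-parallel argument concrete, the following is a minimal, self-contained sketch in Python (not taken from the talk; the scores, threshold, and two-object scene are illustrative assumptions). It contrasts a winner-take-all, softmax-style readout over a shared set of output scores, which reports only one category per pass, with independent localist category detectors, each of which thresholds its own evidence and can therefore signal several categories at once.

    import numpy as np

    rng = np.random.default_rng(0)
    n_categories = 10

    # Hypothetical evidence scores for a scene that actually contains
    # two objects, belonging to categories 2 and 7 (made-up values).
    scores = rng.normal(0.0, 0.1, n_categories)
    scores[2] += 3.0
    scores[7] += 3.0

    # Winner-take-all readout over a shared output: the softmax
    # competition yields a single winner per forward pass, so the two
    # objects can only be reported one at a time.
    softmax = np.exp(scores) / np.exp(scores).sum()
    print("single winner:", int(softmax.argmax()))

    # Localist-style readout: each category neuron thresholds its own
    # evidence independently, so both objects are signalled in one pass.
    detections = np.where(scores > 1.0)[0]
    print("simultaneous detections:", detections.tolist())

On this toy setup the winner-take-all readout reports only one of the two present categories, while the independent detectors report both, which is the contrast the abstract draws between recognition with distributed and with localist representations.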
Short Bio
Asim Roy is a Professor of Information Systems at Arizona State University. He received his M.S. in Operations Research from Case Western Reserve University in Cleveland, Ohio, and his Ph.D. in Operations Research from the University of Texas at Austin. He has been a Visiting Scholar at Stanford University and a Visiting Scientist at Oak Ridge National Laboratory in Tennessee.
Asim is currently on the Governing Board of INNS and is the founder and chair of two INNS Sections: Autonomous Machine Learning (AML) and Big Data Analytics (BDA). He has guest edited a special issue of Neural Networks on autonomous learning and is currently guest editing one on big data. He also serves on the editorial boards of Neural Networks and Neural Information Processing – Letters and Reviews. He has been the Letters Editor of IEEE Transactions on Neural Networks and has served on the organizing committees of many scientific conferences. He is the Technical Program Co-Chair of IJCNN 2015 in Ireland and the General Chair of the INNS Conference on Big Data 2015 in San Francisco. Asim is listed in Who’s Who in America.
His research interests are in theories of the brain, brain-like learning, artificial neural networks, automated machine learning, big data, and nonlinear multiple-objective optimization.