Brain-Mind Institute for Future Leaders of Brain-Mind Research


Why Is BMI for Me?

BMI is not only for advisees (e.g., graduate students and postdocs) but also for advisors (e.g., university faculty, industrial experts, and medical doctors), as well as anybody who is interested in the brain-mind.

BMI is a new kind of institute, different from existing institutes in the following aspects:

  • BMI is not limited by disciplinary boundaries: it can accommodate any advisee and advisor interested in the brain-mind — across the 6 disciplines and beyond (e.g., linguistics, social science, law, economics, political science, and philosophy).
  • BMI is not limited by geographic locations. You can reside in any country in the world while being registered in a program of BMI.
  • BMI offers a unique academic program and research environment for at least the 6 disciplines. It is meant for career development (e.g., advising, consulting, government review panels, foundation employment, industrial employment, venture capital opportunities) as we enter the new era of brain-mind, in which prior knowledge quickly becomes obsolete.

Do I have sufficient infrastructure at my home institution for learning the latest computational brain-mind knowledge in all 6 disciplines?

New knowledge about how the brain-mind works could not only fundamentally change your view of yourself and your social environment, but also greatly shape the future of science, technology, the economy, and society. For example, brain-mind experts with computational depth are already increasingly sought by a variety of job sectors. Job growth in the field of computational brain-mind is expected to continue as its impact is increasingly recognized by society.

The following offers some reasons, with an emphasis on computation, from the viewpoint of a researcher in each of the 6 disciplines — biology, computer science (CS), electrical engineering (EE), mathematics, neuroscience, and psychology.

I Am a Biologist

Evolution spans multiple generations, while development spans a single life. The two interact intimately: without evolution, development would lack the origin of each life; without development, evolution could not continue (no new life would be born) or express its functions in a living organism. Although much biological research has been done, how the genome regulates development is still poorly understood. The fact that every cell in the brain is autonomous during development is likely helpful for understanding how tissues, organs, and organisms emerge and work in the physical environment. However, how does the genome give rise to tissues, organs (including the brain), and organisms? How do autonomously interacting cells give rise to surprisingly organized organ behaviors? Experimental biology remains extremely important, but biology must move further ahead to explain deeper causality. Since many biological puzzles are quantitative in nature, relying on the interplay of many factors, solving them requires broad computational knowledge in EE, CS, and mathematics. The computational mechanisms by which the brain develops its mind should also be useful for understanding how other body organs develop their functions.

Why Learn Biology?

Through cell signaling (e.g., morphogens, cell differentiation, cell connections), it is the collective effect of many individual, autonomously interacting cells that enables the entire brain to work properly. Judging from the biological mechanisms that regulate how each cell migrates, grows, and connects, and from the sufficiency of such cell-level activities to give rise to the mind, there seems to be no “government” in the brain as far as individual cell power is concerned — no brain cell is more powerful than any other. The brain should work almost equally well if any single cell in it is deleted. This non-intuitive fact is due to a biological principle — the principle of genomic equivalence, dramatically demonstrated recently by animal cloning. This principle means that the information represented by the genes in the nucleus of every cell (other than cells that have lost their nuclei, such as blood cells) is sufficient for the cell to develop into a functional adult human body-and-brain consisting of around 100 trillion cells, provided that it lives in a normal human environment. Yes, every cell seems to carry all the regulating “laws” needed for the body and the mind to develop. You need to learn biology to be convinced why we say there seems to be no “government” in the brain and to know what kinds of cell-to-cell interactions exist.

I Am a Computer Scientist

The prevailing approaches in Computer Science (CS) and Artificial Intelligence (AI) fall into the domain of symbolic processing. Not many researchers have a sufficient background in connectionist (neural network) approaches, which already have over 30 years of phenomenal growth behind them. If CS researchers have an opportunity to learn brain-like signal processing, they will find that their ideas of symbolic reasoning (e.g., finite automata, Hidden Markov Models, Markov Decision Processes, and knowledge bases) are beautifully used by the brain, but in a deeper, emergent way. For example, Marvin Minsky (1991) correctly criticized the artificial neural networks of the time as "scruffy". This no longer seems to be the case (Weng 2010) — the brain appears to use emergent representations that are fundamentally different from symbolic models such as Finite Automata, Hidden Markov Models, and Markov Decision Processes. In addition, we should reconsider (symbolic) NP-hard and NP-complete problems in light of new brain models. As another example, the brain of a child learns new concepts and a new language that the parents had never heard of before the child's birth — a capability that will likely help solve a wide array of AI bottleneck problems. Computational understanding of the brain-mind could drastically change the "landscape" of both CS and AI.

Why Learn Computer Science?

Many researchers have thought of computers as mere tools that help them automate tasks (e.g., generating plots). This narrow view is no longer tenable. Computer-like symbolic manipulation and recombination have inspired many psychologists and AI researchers to question the sufficiency of traditional artificial neural networks (e.g., Minsky 1991). However, many neural network researchers do not understand, or even care about, such questions, simply dismissing them as "not my problem". The recent establishment (Weng 2010) that the base network of symbolic AI systems (i.e., the FA) is a special case of the brain-mind network DN indicates the necessity and urgency for all researchers and students in EE, psychology, neuroscience, biology, and mathematics to learn computer science, especially automata theory and computational complexity theory. To understand how brain biology works, one must understand at least how an automaton operates on symbols and how symbols are related to meanings in computers. No, traditional AI theories are not close to what the brain does, but they are useful for understanding what the brain network must at least be capable of.
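How an automaton operates on symbols is concrete enough to sketch. Below is a toy deterministic finite automaton in Python — a generic textbook illustration of symbolic processing, not the DN model of Weng 2010 — that accepts binary strings containing an even number of 1s:

```python
# A minimal deterministic finite automaton (DFA). States and input
# symbols are purely symbolic labels; the machine has no notion of
# their "meaning" beyond the transition table.
def make_dfa():
    start = "even"            # initial state
    accept = {"even"}         # accepting states
    # Transition table: (current state, input symbol) -> next state
    delta = {
        ("even", "0"): "even", ("even", "1"): "odd",
        ("odd", "0"): "odd",   ("odd", "1"): "even",
    }

    def accepts(string):
        state = start
        for symbol in string:
            state = delta[(state, symbol)]
        return state in accept

    return accepts

accepts = make_dfa()
```

The entire "knowledge" of the machine is the hand-designed transition table — exactly the kind of static, symbolic representation that emergent brain-like networks are claimed to go beyond.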

I Am an Electrical Engineer

Electrical Engineering (EE) and Computational Intelligence (CI) researchers typically have a background in connectionist (neural network) approaches. They constitute a major force in research on control systems, communication systems, and artificial neural networks. However, they typically do not have a sufficient background in automata theory and symbolic artificial intelligence from CS. How do brain networks deal with the abstraction and reasoning that traditional neural networks do not perform well, as Marvin Minsky (1991) correctly criticized? Furthermore, many EE researchers do not have sufficient knowledge of biology, psychology, and neuroscience. How can brain-like networks (Weng 2010) that use emergent representations and numeric computations deal with general-purpose symbolic problems, including abstraction and reasoning? Such new knowledge will likely solve many open problems in EE, such as general-purpose nonlinear control, signal detection and prediction, optimal nonlinear system approximation, brain-scale VLSI circuits, and VLSI for brain-like computation. However, EE and CI researchers first need to learn CS, psychology, neuroscience, and biology before they can solve those open problems. Computational understanding of the brain-mind is expected to drastically change the “landscape” of EE and Computational Intelligence.

Why Learn Electrical Engineering?

When a human thinks, he often uses a language to organize his thoughts. Because the language can be written as symbols (e.g., English), it is natural for him to mistake his symbolic way of representing a problem for what the brain actually does inside the skull. Electrical engineering uses mathematical tools to describe complex electrical and electronic systems that are often not symbolic in nature (e.g., radio and radar). The field has developed a series of methods and mathematical tools to model, analyze, approximate, and implement highly complex systems. The most successful type is the class of linear systems. For example, the Kalman filter is a linear dynamic system: it has vector inputs, vector outputs, and a linear internal model that changes through time (hence "dynamic"). Although the brain is neither a Kalman filter nor a nonlinear extension of one, the systems knowledge studied in electrical engineering is a necessary background for anybody who wants to understand biology (brain or body), neuroscience, artificial intelligence, and the new kind of mathematics that the brain suggests.
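To make the linear-dynamic-system idea concrete, here is a minimal one-dimensional Kalman filter sketch in Python. The function name and the noise values are our own illustrative choices, not taken from any cited work; the scalar case shows the predict-then-update structure without matrix algebra:

```python
# A minimal 1-D Kalman filter: at each time step, blend the model's
# prediction with a noisy measurement, weighted by their uncertainties.
def kalman_1d(measurements, q=1e-3, r=0.5, x0=0.0, p0=1.0):
    """Track a roughly constant scalar state from noisy measurements.

    q: process-noise variance, r: measurement-noise variance,
    x0, p0: initial state estimate and its variance.
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict: the state model is the identity (x stays put),
        # so only the uncertainty grows by the process noise.
        p = p + q
        # Update: the Kalman gain k in [0, 1] weights the measurement.
        k = p / (p + r)
        x = x + k * (z - x)        # move the estimate toward z
        p = (1.0 - k) * p          # uncertainty shrinks after update
        estimates.append(x)
    return estimates
```

Feeding it a constant signal shows the estimate converging from the initial guess toward the true value, with the gain shrinking as confidence grows.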

I Am a Mathematician

The brain is one of the most important and interesting subjects for mathematics. We regard the brain as a mathematical subject because mathematics is the discipline of quantity, structure, space, and change. Precision of concepts and rigor of analysis are two important features of mathematics. The brain calls for a new kind of mathematics. The emergent structures of the brain could lead to a new mathematical subject of "emergent structures" (e.g., emergent functionals) as opposed to pre-specified structures (e.g., pre-specified basis functions). The domain of a function emerges and adapts, and so does its co-domain (range). The spatial and temporal attention mechanisms in the brain tell us that the sources on which the domain of such a function depends change dynamically and very quickly. Brain-like optimization seems to provide a general-purpose approach to problems in nonlinear dynamic systems, stochastic processes, sparse coding, nonlinear optimization, statistical learning theory, and much more. How the brain figures out low-dimensional nonlinear manifolds incrementally, under uncertainty, while avoiding the rigidity of probability would likely extend the existing frameworks of probability and functional analysis in mathematics.

Why Learn Mathematics?

The brain is a physical object governed by physical quantities, structures, spaces, and changes — exactly what mathematics is about. For the same reason, theoretical physics uses much mathematics. Since the brain-mind is more complex than basic physical properties such as force, mass, speed, and time, the need for mathematics in brain-mind research is even more obvious. The basic mathematical subjects necessary for understanding the brain-mind include, but are not limited to, mathematical analysis, vector analysis, linear algebra (e.g., eigenvectors and eigenvalues), calculus-based multi-dimensional probability, real analysis (e.g., measure), functional analysis (e.g., random vectors, random matrices, limits and convergence in vector spaces, representation of functionals, nonlinear approximation, nonlinear optimization), and mathematical statistics (e.g., statistical efficiency). Some experimental biologists, neuroscientists, and psychologists have learned non-calculus-based probability and statistics, which are useful for analyzing their own experimental data but insufficient for understanding the brain-mind.
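As a small worked example of the eigenvector background mentioned above — a generic textbook method, nothing specific to brain models — power iteration computes the dominant eigenvalue and eigenvector of a symmetric matrix using nothing but repeated multiplication and normalization:

```python
# Power iteration: repeatedly apply the matrix to a vector and
# renormalize; the vector converges to the dominant eigenvector.
def power_iteration(A, steps=200):
    n = len(A)
    v = [1.0] * n
    for _ in range(steps):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # The Rayleigh quotient v . (A v) gives the eigenvalue.
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    lam = sum(v[i] * Av[i] for i in range(n))
    return lam, v
```

For the matrix [[2, 1], [1, 2]], whose eigenvalues are 3 and 1, the method returns the dominant eigenvalue 3 with eigenvector direction (1, 1) normalized.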

I Am a Neuroscientist

The genomic equivalence principle (Purves et al. 2004) seems to imply two basic principles of brain computation: emergence and in-place learning. Emergence means that all brain areas and their functions emerge from experience (Sur & Leamey 2001, Elman et al. 1997, Weng 2010), rather than being completely pre-defined by the genome. In particular, each neuron seems to represent a cluster in its sensorimotor input space (containing both sensors and effectors), instead of precisely a feature or object in the extra-body environment (e.g., not edge orientations in V1, not faces in IT). In-place means that brain learning is cell-centered — each cell, in place, is responsible not only for its computation but also for its learning. However, even granting these facts, currently few neuroscientists believe and understand that such low-level cell mechanisms are sufficient to give rise to the rich, complex brain regions, brain wiring, and brain-mind behaviors through experience in modern societies. Indeed, such low-level cell mechanisms do seem sufficient! The process of brain development and adaptation is highly quantitative in nature. Therefore, knowledge of EE, CS, and mathematics is necessary for neuroscience researchers to go beyond the current mode of phenomenology — a path every science must travel, from phenomena to causality. A deeper, computational causality is likely to guide and improve experimental designs in neuroscience research.

Why Learn Neuroscience?

Unfortunately, many psychologists are satisfied with an account of the brain's observed external behaviors, but have not spent sufficient effort studying the literature on the details inside the brain. Many EE researchers are content with artificial neural networks without studying whether their networks are consistent with the neuroscience literature. Many researchers in artificial intelligence motivate their work by what the brain can do superficially, not by how the brain does it inside the skull. It is true that much of the neuroscience literature is concerned with biological details that do not directly tell us how the brain works. However, such rich details provide important constraints for rethinking the traditional models in biology, psychology, CS, EE, and mathematics. For example, psychology has models for sensitization, habituation, classical conditioning, instrumental conditioning, extinction, blocking, homeostasis, cognitive learning, language understanding, and so on. Yet each of these is based on a different symbolic model, while the brain uses a single framework to do them all! It seems time for us to study how a single brain model does and integrates them all. Many well-known leaders in neural network research and neuroscience modeling still regard such a research goal as a fantasy at this stage of knowledge. An applicant who has received the BMI 6DC will think otherwise.

I Am a Psychologist

In psychology, a vast literature already exists on brain-mind behaviors. For example, Ida Stockman (Stockman 2010) reviewed rich evidence that movement and action are critical for perceptual and cognitive learning. Linda Smith and coworkers (Yu et al. 2009) have demonstrated that perception-action loops play important roles in children's visual learning. Connectionist modeling since the early 1980s (e.g., McClelland et al. 1986, Elman et al. 1997, Shultz 2003) has been a quest for a deeper causality — computational causality. However, the nature-vs.-nurture debate cannot be settled (i.e., how epigenetics occurs computationally) without a computational and developmental framework that accommodates much of the known psychological data. The DN of Weng 2010 seems to predict that relatively few genetic functions (e.g., Hebbian learning, inhibition, synapse maintenance) are sufficient to give rise to a wide variety of brain functions through rich social experience. For example, the motor areas represent states of spatiotemporal context that are necessary for brain abstraction and reasoning. This network model predicted how the complete transfers recently reported in human perceptual learning (see, e.g., Xiao et al. 2008, Zhang et al. 2010) can occur computationally. Many psychologists, including cognitive neuroscientists, talk about the brain as a symbolic network (e.g., with rigid functional modules) but do not see how representations emerge inside the brain. Many computational models in psychology use GOFAI (Good Old Fashioned AI, e.g., symbolic Bayesian models). Therefore, psychologists need to learn biology, neuroscience, computer science (e.g., automata theory, symbolic AI, and complexity theory), electrical engineering (e.g., signal processing and system theory), and mathematics (e.g., vectors, probability, statistics, and optimization theory). Indeed, an increasing number of psychology departments are changing the composition of their faculty in this direction.
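The Hebbian learning mentioned above can be sketched generically. This is the textbook rule ("cells that fire together wire together") with weight normalization standing in crudely for synapse maintenance; it is not the specific update rule of the DN in Weng 2010:

```python
# Generic Hebbian learning: a weight grows when pre- and post-synaptic
# activity co-occur.
def hebbian_step(weights, pre, post, lr=0.1):
    """One Hebbian update of a neuron's weight vector."""
    return [w + lr * post * x for w, x in zip(weights, pre)]

def normalize(weights):
    """Bound the weights (a crude stand-in for synapse maintenance)."""
    norm = sum(w * w for w in weights) ** 0.5
    return [w / norm for w in weights] if norm > 0 else weights

# Repeatedly presenting one input pattern makes the weight vector
# align with that pattern's direction — the neuron becomes a
# detector for the cluster of inputs it has experienced.
pre = [1.0, 0.0, 1.0]
w = [0.1, 0.1, 0.1]
for _ in range(100):
    post = sum(wi * xi for wi, xi in zip(w, pre))  # neuron response
    w = normalize(hebbian_step(w, pre, post))
```

After training, the weight on the never-active second input has decayed to nearly zero, while the other two weights match the direction of the presented pattern.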

Why Learn Psychology?

The field of developmental psychology has accumulated much evidence that the brain gradually develops its capabilities for perception, cognition, behavior, and motivation. Furthermore, psychology has a rich collection of models of animal learning, including sensitization, habituation, classical conditioning, instrumental conditioning, extinction, blocking, homeostasis, cognitive learning, and language acquisition. However, the brain learns autonomously — fully autonomously inside the skull — while displaying capabilities, some of which are described by these qualitative learning models. Many models in pattern recognition, AI, and neural networks use either supervised or unsupervised learning. In the former, class labels are provided; in the latter, no class labels are provided, so the system must form clusters in the sensory space. Neither learning mode is exactly what the brain uses. The brain does not need a human teacher to provide discrete class labels, and it does not form unsupervised clusters in the sensory space alone. Instead, the brain uses its body to be motor-supervised by the physical world and uses its own actions to autonomously self-supervise (practice). One may say that the machine learning community already has reinforcement learning. However, many reinforcement learning models are symbolic, using a rigid time-discounted value model and no emergent internal representations. Knowledge of psychology enables you to rethink how to overcome your own hurdles (e.g., problems in providing discrete class labels).
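The two classic learning modes contrasted above can be sketched on toy one-dimensional data — a didactic illustration, not a brain model. The supervised learner is handed a discrete label for every point; the unsupervised learner (a tiny 2-means clusterer) must form clusters on its own:

```python
# Supervised: a teacher provides a discrete class label per point,
# and the learner simply averages the points within each class.
def supervised_centroids(points, labels):
    sums, counts = {}, {}
    for x, y in zip(points, labels):
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

# Unsupervised: no labels; alternate assigning points to the nearest
# of two centers and re-estimating the centers (2-means clustering).
def unsupervised_2means(points, c0, c1, steps=10):
    for _ in range(steps):
        g0 = [x for x in points if abs(x - c0) <= abs(x - c1)]
        g1 = [x for x in points if abs(x - c0) > abs(x - c1)]
        c0 = sum(g0) / len(g0) if g0 else c0
        c1 = sum(g1) / len(g1) if g1 else c1
    return c0, c1
```

On well-separated data the two modes agree, but the unsupervised learner had to discover the grouping that the supervised learner was simply told — and, per the argument above, the brain uses neither mode exactly.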


J. L. Elman, E. A. Bates, M. H. Johnson, A. Karmiloff-Smith, D. Parisi, and K. Plunkett. Rethinking Innateness: A connectionist perspective on development. MIT Press, Cambridge, Massachusetts, 1997.

J. L. McClelland, D. E. Rumelhart, and The PDP Research Group. Parallel Distributed Processing, volume 2. MIT Press, Cambridge, Massachusetts, 1986.

M. Minsky. Logical versus analogical or symbolic versus connectionist or neat versus scruffy. AI Magazine, 12(2):34–51, 1991.

W. K. Purves, D. Sadava, G. H. Orians, and H. C. Heller. Life: The Science of Biology. Sinauer, Sunderland, MA, 7 edition, 2004.

T. R. Shultz. Computational Developmental Psychology. MIT Press, Cambridge, Massachusetts, 2003.

I. J. Stockman. A review of developmental and applied language research on African American children: from a deficit to difference perspective on dialect differences. Language, Speech, and Hearing Services in Schools, 41(1):23–38, 2010.

M. Sur and C. A. Leamey. Development and plasticity of cortical areas and networks. Nature Reviews Neuroscience, 2:251–262, 2001.

J. Weng. A 5-chunk developmental brain-mind network model for multiple events in complex backgrounds. In Proc. International Joint Conference on Neural Networks, pages 1–8, Barcelona, Spain, July 18-23 2010.

L. Q. Xiao, J. Y. Zhang, R. Wang, S. A. Klein, D. M. Levi, and C. Yu. Complete transfer of perceptual learning across retinal locations enabled by double training. Current Biology, 18:1922–1926, 2008.

C. Yu, L. B. Smith, H. Shen, A. F. Pereira, and T. Smith. Active information selection: Visual attention through hands. IEEE Trans. Autonomous Mental Development, 1(2):141–151, 2009.

J.-Y. Zhang, G.-L. Zhang, L.-Q. Xiao, S. A. Klein, D. M. Levi, and C. Yu. Rule-based learning explains visual perceptual learning and its specificity and transfer. Journal of Neuroscience, 30(37):12323–12328, 2010.