Much research on intelligent systems has concentrated on low-level mechanisms or limited subsystems. We need to understand how to assemble the components in an architecture for a complete agent with its own mind, driven by its own desires. A mind is a self-modifying control system, with a hierarchy of levels of control and a different hierarchy of levels of implementation. AI needs to explore alternative control architectures and their implications for human, animal and artificial minds. Only when we have a good theory of actual and possible architectures can we solve old problems about the concept of mind and the causal roles of desires, beliefs, intentions, etc. The global information-level `virtual machine' architecture is more relevant to this than detailed mechanisms: for example, differences between connectionist and symbolic implementations may be of minor importance. An architecture provides a framework for systematically generating concepts of possible states and processes. Lacking this, philosophers cannot provide good analyses of concepts, psychologists and biologists cannot specify what they are trying to explain or explain it, and psychotherapists and educationalists are left groping with ill-understood problems. The paper outlines some requirements for such architectures, showing the importance of an idea shared between engineers and philosophers: the concept of `semantic information'.
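The idea of a mind as a self-modifying control system with layered control can be caricatured in code. The following Python sketch is purely illustrative and is not drawn from the paper: all names (`ReactiveLayer`, `MetaLayer`, the three-failure threshold) are assumptions invented for this example. It shows a meta-level that monitors a reactive lower level and, on repeated failure, rewrites the lower level's policy — a crude instance of one control layer modifying another.

```python
# Toy two-layer control hierarchy. Entirely illustrative: the paper
# proposes no specific implementation, and these class names and the
# error threshold are invented for this sketch.

class ReactiveLayer:
    """Low-level control: maps a sensed value to an action via a policy."""
    def __init__(self, policy):
        self.policy = policy  # callable: sensed value -> action

    def act(self, sensed):
        return self.policy(sensed)


class MetaLayer:
    """Higher-level control: observes outcomes and may replace the
    lower layer's policy -- a crude form of self-modification."""
    def __init__(self, lower):
        self.lower = lower
        self.errors = 0

    def review(self, sensed, goal):
        action = self.lower.act(sensed)
        if action != goal:
            self.errors += 1
            if self.errors >= 3:
                # Self-modification: install a new policy in the lower layer.
                self.lower.policy = lambda s: goal
                self.errors = 0
        return action


agent = MetaLayer(ReactiveLayer(policy=lambda s: s))
history = [agent.review(sensed=0, goal=1) for _ in range(5)]
print(history)  # the meta-layer rewrites the policy after three failures
```

The point of the sketch is structural, not behavioural: control and implementation form different hierarchies (the meta-layer controls the reactive layer, yet both are implemented in the same underlying machinery), which is the distinction the abstract draws.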
Number of pages: 16
Journal: Philosophical Transactions A: Mathematical, Physical and Engineering Sciences
Publication status: Published - 15 Oct 1994