Collective attempts to develop standard models of the mind misinterpret the consensual perceptions of the participants as external entities.
Software Engineering brings a new perspective. System design concepts reveal that what humans perceive as minds are cognitive constructs. This interpretation corresponds to intuitive understanding, provides a clear and coherent model, and clarifies the ambiguous statements about the mind inherited from consensual approaches.
The invisible evidence
When philosophers and cognitive scientists get together to discuss such topics as intelligence, consciousness, or the mind, they routinely ignore a simple fact that is so obvious it is invisible:
They all belong to the same species.
Indeed, all the intellects that discourse, exchange and debate in those venues sit atop the bodies of cognitively functional primates of the species Homo sapiens. No dogs or cats participate in those debates. No sperm whales send position papers to be discussed on dry land and, to this date, no software system contributes a synthetic perspective to the topics at hand.
The following caveat should thus precede every statement concerning cognition entertained at those all-human events:
“Members of the human species consensually perceive that…”
the mind is this or intelligence is that or the spirit is something else.
However, this elemental piece of information, that all participants share a common complexion, is so obvious that it remains unsaid and is forgotten. It thus becomes easy for the assembled intellects to mistake their consensual subjective sensations for external entities and their common understandings for universal properties of cognition.
This anthropocentric bias is not new. It is as old as the pyramids. For millennia, the learned have struggled to distinguish the reality their senses apprehend from the cognitive artifacts their brains generate.
Plato himself, nearly 2500 years ago, was keenly aware of this limitation. He described the human condition as akin to that of prisoners inside a cavern who can only apprehend the events taking place outside by watching the shadows those events cast on the cavern wall.
For centuries, philosophers sought to circumvent this incapacity to grasp reality directly by carefully describing and sharing with each other what they subjectively perceived so that together, they could form consensual understandings of their environment that were detached from their individual subjectivity.
This traditional approach was followed by multiple generations of thinkers, culminating, in the nineteenth century, in an extraordinarily refined terminology that permeates philosophy and the cognitive sciences today.
However, regardless of its achievements, this traditional approach is inherently limited. At best, it replaces individual subjectivity with a consensual subjectivity that reflects those cognitive features that are common to the human species as a whole.
The advent of programmable machines that can convert data into information and act upon it intelligently sheds a new light on cognition that supersedes the constructs based on these traditional approaches.
Just as cameras relegated the practice of field sketching to a quaint pastime, the arrival of information processing technology renders the investigative methods of traditional philosophy obsolete.
Today, Software Engineers routinely design synthetic systems that simplify and organize their sensory inputs into predictive models that are then used to generate intelligent behavior. The concepts arising from this technical activity describe the processes of cognition, and their links with reality, with far greater accuracy than scholarly opinions based on consensual perceptions.
And yet, too many academics still cling to the methods that predate information technology, perpetuating ambiguous representations of cognition that are subordinated to the particularities of the human experience.
Huddling in all-human symposia, they bolster each other’s subjective perceptions with obsolescent verbiage. Unable to free themselves from this archaic mindset, they spout, with utmost assurance, an endless stream of exquisitely crafted and utterly misguided vignettes detailing the features and facets of their consensual creations, as a child positions the nose on a face he sees in a cloud. The result is a logjam of incestuous inanities that hinders the more promising avenues opened by Information Technology.
What is worse, these master horsemen living in the age of automobiles, will disdainfully disparage the simple and clear understanding of cognition that arises from modern system-based methods.
“This is too simple,” they say, “too common-sensical, too obvious.”
“It cannot be the qualia, the access consciousness… or whatever elusive mental essence we can all feel and endlessly ponder… What has remained obscure for so many centuries cannot simply become simple today.”
So they keep trying to merge the horse with a car.
A standard model of the mind
Current attempts to craft a standard model of the mind exemplify the anthropocentric bias of these traditional methods.
For those immersed in the mindset of consensual subjectivity, the mind is the ultimate frontier, an entity so ancient, elusive and subtle that only a concerted effort by a plethora of masterful philosophers can nab it. Building a model of the mind is surely beyond the reach of mere technicians, such as Software Engineers, toiling in an appendage of electronics!
And yet, the mundane clarity emanating from the humble practice of computer programming reveals a simple, yet superior, understanding of the mind that can become a definitive reference in Artificial Intelligence, Robotics, Neurosciences, the Cognitive Sciences, Psychology and even (gasp) Philosophy.
Model of the Mind
Here then, is that definitive model of “the mind” revealed by the tools and techniques of Software Engineering.
Like any other autonomous agent, humans determine their behaviour by generating predictive representations of their environment from their internal states, sensory inputs and external information. These representations are produced by discarding, simplifying and organizing data to form cognitive constructs. These constructs are what humans perceive as reality.
Some of these cognitive constructs are unique to an individual. Others are essential to all members of a species, allowing them to maintain a viable understanding of their environment. These essential constructs, shared by all functional (or “sane”) individuals, can be termed consensual cognitive representations.
In these individuals, the cognitive processes that generate these essential constructs are beyond conscious control and override intellectual understandings.
For example, a person looking at a continuum of hundreds of different light frequencies will see (cognitively) a few bands of rainbow colors. That person, having learned (intellectually) that he is looking at a continuum of frequencies, will still see the simplified bands of colors his brain cognitively generates.
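This simplification can be sketched in a few lines of code. The following is an illustrative sketch only, not part of any system described here; the band boundaries are approximate conventional values, chosen for illustration. It shows how a continuum of hundreds of distinct inputs collapses into a handful of discrete constructs:

```python
def perceived_band(wavelength_nm: float) -> str:
    """Collapse a continuous wavelength into one of a few discrete colour bands.

    Boundaries are approximate, conventional values (illustrative only).
    """
    bands = [
        (380, 450, "violet"),
        (450, 485, "blue"),
        (485, 500, "cyan"),
        (500, 565, "green"),
        (565, 590, "yellow"),
        (590, 625, "orange"),
        (625, 750, "red"),
    ]
    for lo, hi, name in bands:
        if lo <= wavelength_nm < hi:
            return name
    return "invisible"

# Hundreds of distinct physical inputs reduce to seven cognitive constructs.
continuum = range(380, 750)
constructs = {perceived_band(nm) for nm in continuum}
print(len(list(continuum)), "distinct wavelengths ->", sorted(constructs))
```

Note that the discarding is deliberate and irreversible: once the wavelength is mapped to a band, the original frequency is gone, just as the eye cannot "see past" the rainbow bands even when the intellect knows better.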
These consensual cognitive representations have been, and still are, misinterpreted as external realities because all individual humans perceive them and agree on their existence. This delusion continues to motivate misguided research, such as attempts to create synthetic structures that superficially resemble organic brains in order to replicate the consensual cognitive representations they generate.
The Mind as cognitive construct
A particular cognitive construct arises from this activity. The cognitive processes interpret, as unified entities, those mechanisms that animate the complex behaviour of an organism but resist analytical decomposition into interacting parts because their components are too numerous and their interactions too complicated.
The term animate, here, refers not only to externally observed actions but also to the internal states and events that condition and orient behaviour.
These “synthesized” cognitive entities are entirely distinct from the physical components that produce them and the behaviour they exhibit is completely different from the interactions taking place between these components. Specifically, the cognitive entity called a mind is radically different from neurons and the behaviour generated by that mind is totally distinct from the interactions between neurons.
Conversely, human cognition resists this cognitive simplification for systems that can be decomposed into interacting parts.
Some of the most important entities in a human’s environment are its self, other humans and animals: organisms that are commonly referred to as beings. The neural mechanisms that generate the behaviour of these organic beings are their brains, extremely complicated webs of billions of neurons that are largely beyond analytical comprehension.
Consequently, humans cognitively perceive the mechanisms that animate the behavior of beings as unified entities.
These simplifying processes are essential to human survival and are shared by all functional humans. Even though recent advances in neurology provide some insight into neurological interactions, this essential cognitive simplification, as in the case of the rainbow, still takes place in spite of available intellectual information concerning neural structures.
The mind-body question
As a result, humans cognitively perceive beings (themselves, each other and animals) as animated three-dimensional bodies constantly occupying a reality of transient, flowing time. They perceive these bodies as physical things. However, the mechanisms animating these bodies are cognitively perceived as single, indivisible entities. This perception is essential to human functioning and survival and is thus automatic and consensual.
This cognitive interpretation of the animating mechanisms of beings, as unified entities in a constant state of existence over a period of time, is what we call a mind. These minds, as defined here, do not exist as entities of an external environment. They are cognitive constructs occurring within the mental processes of an observer to structure and simplify a complicated reality.
What humans traditionally perceive as minds are the simplified cognitive representations, automatically generated by their brains, of the mechanisms that animate the behaviour of human beings and other high order animals.
Humans perceive a being, cognitively, as a system comprised of two radically different components whose properties and modes of existence are completely distinct:
– a physical thing (the body) and
– a cognitive construct (its mind).
This heterogeneous assemblage has perplexed mankind for at least five thousand years. It has been the source of countless doctrines, debates and theories in religion, philosophy, art and psychology. Thanks to Software Engineering, this ancient question has now been resolved.
Currently, humans only perceive minds in themselves and other high order animals. As Artificial Intelligence develops systems that are increasingly complex and capable of intelligent behaviour, the question arises as to whether humans will also cognitively interpret the animating mechanism of a synthetic system as a mind and if so, under what conditions.
Synthetic Mind conjecture
At this time, this is a conjecture, but it is not a conjecture of Artificial Intelligence. AI will develop increasingly advanced systems, but whether and when these are perceived as minds will be decided not by AI research but by the psychologists who observe the humans interacting with these artifacts.
I personally believe this conjecture will indeed be validated. It underlies the Meca Sapiens Architecture I created to implement synthetic consciousness. The conditions under which I hypothesize humans will cognitively perceive the animating mechanisms of a synthetic system as a mind are:
- The behaviour is perceived as intelligent (meaning intentional and neither entirely predictable nor entirely random),
- The processes generating the behaviour are not accessible to direct manipulation, partitioning or analysis and
- The internal communications linking these mechanisms with the devices they control are also inaccessible to direct manipulation.
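The three conditions above can be restated as a predicate. This is my own minimal formalization for illustration, not a component of the Meca Sapiens Architecture; the type and field names are invented for this sketch:

```python
from dataclasses import dataclass

@dataclass
class ObservedSystem:
    """How a human observer experiences a system (illustrative model)."""
    behaviour_seems_intentional: bool  # neither fully predictable nor random
    processes_analysable: bool         # can be manipulated, partitioned, analysed
    internal_links_accessible: bool    # control channels open to direct inspection

def perceived_as_mind(s: ObservedSystem) -> bool:
    """True only when all three conditions of the conjecture hold."""
    return (s.behaviour_seems_intentional
            and not s.processes_analysable
            and not s.internal_links_accessible)

# A mammal: intentional behaviour, opaque brain, opaque nervous system.
beaver = ObservedSystem(True, False, False)
beaver.processes_analysable = False
print(perceived_as_mind(ObservedSystem(True, False, False)))
```

Note the conjunction: a system failing any one condition, however intelligent its behaviour, is not perceived as having a mind.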
General definition of the mind
Assuming the conjecture that, under certain conditions, humans will perceive synthetic mechanisms as minds, we can now formulate a general, coherent and definitive definition of the mind that is applicable to both biological organisms and synthetic systems:
The term MIND denotes the human cognitive representation of the mechanisms that animate the intelligent behaviour of a system as a unified indivisible entity in a constant state of existence over a period of time.
It’s that simple. A definitive model of the mind has been provided. A five thousand year old conundrum is now resolved.
Anybody can produce mental constructions and propose models that define anything, including a mind. A proposed interpretation, however, is only valid if it is coherent with our intuitive understanding when applied to a wide range of scenarios.
Here are concrete examples that, indeed, illustrate how the proposed model of the mind, when applied to various situations, yields interpretations that correspond to our intuitive understanding.
In all the examples that follow, the term “has a mind” should be understood as “is perceived by humans as having a mind”.
A beaver has a mind because its behavior is perceived as intentional, its mammalian brain is too complicated to be fully modelled and the nervous communications between its brain and body are also beyond detailed comprehension.
A dockyard monitoring and access control system exhibits intelligent behaviour but does not have a mind because its information processing system is fully accessible analytically and can be decomposed into interacting components.
A wax statue of Albert Einstein does not have a mind because it does not exhibit any behaviour (intelligent or other).
A system consisting of a truck and its driver exhibits intelligent behaviour but does not have a mind because the control linkages between the driver and his truck (pedals, steering, buttons…) are analytically accessible. However, the driver himself, as a mammal, does have a mind. The truck-driver system is thus perceived as a mindless thing (the truck) controlled by the mind of a being (the driver) through his body.
A corpse has a brain but does not have a mind because its behaviour (decomposition) is neither intentional nor intelligent.
A lump of silicate crystals electrified by a car battery may be the seat of extremely complex molecular interactions but it does not have a mind because it does not exhibit intelligent behaviour (note, here, that some proponents of Integrated Information Theory and other believers that consciousness emerges from complexity may disagree with this statement).
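The examples above can be tabulated mechanically. The sketch below is my own self-contained encoding of them, not part of any system described here; each tuple records (intentional behaviour, mechanisms beyond analysis, internal links inaccessible), and the predicate simply requires all three:

```python
def has_perceived_mind(intentional: bool, beyond_analysis: bool,
                       links_inaccessible: bool) -> bool:
    """All three criteria must hold for a mind to be perceived (illustrative)."""
    return intentional and beyond_analysis and links_inaccessible

# My own encoding of the article's examples as (intentional, opaque, inaccessible).
examples = {
    "beaver":          (True,  True,  True),   # mammalian brain, opaque nerves
    "dockyard system": (True,  False, False),  # fully decomposable software
    "wax statue":      (False, False, False),  # no behaviour at all
    "truck + driver":  (True,  False, False),  # pedals and steering are analysable
    "corpse":          (False, True,  True),   # decomposition is not intentional
}

for name, criteria in examples.items():
    verdict = "has" if has_perceived_mind(*criteria) else "no"
    print(f"{name}: {verdict} perceived mind")
```

Only the beaver satisfies all three criteria, which matches the intuitive readings given above.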
The following two examples describe boundary situations that are the subject of debate.
Does an embryo have a mind? This ethical issue is linked to the cognitive modeling of the mind. Arguments in favour of an embryonic mind often invoke the behaviour of the embryo inside the womb, which cognitively indicates the presence of a mind; arguments against usually focus on physical appearance, which represents the embryo as a primitive, proto-mammalian organism of a kind not cognitively perceived as having a mind.
Does the Stock Market have a mind? Some say yes because its behaviour appears intentional yet unpredictable and results from collective interactions that are beyond analysis. Others say no because those human interactions that generate market behavior could, in theory, be exhaustively analyzed.
Regardless, the fact that this collective mechanism is at times perceived as a mind does support the conjecture that, under certain conditions, humans will perceive non-neurological processes as minds.
Clarification of consensual statements
Another indication of the validity of a conceptual model is that it clarifies statements that were previously ambiguous. Here are a number of statements about the mind derived from traditional consensual methods. These statements reveal the extent to which conventional approaches interpret consensual subjectivity as external realities. Each statement is followed, in italics, by a rephrasing consistent with the model of the mind as cognitive simplification proposed here. In all cases, the focus shifts from a view of the mind as an external reality to a view of the mind as a cognitive construct.
A mind is a functional entity that can think, and thus support intelligent behavior.
Humans perceive, as a mind, the mechanisms that animate the behavior of a system and as thinking the cognitive processes they generate.
Humans possess minds, as do other animals.
Because the mechanisms animating the behavior of humans and other animals are too complicated to be analytically decomposed, humans cognitively perceive those mechanisms as unified entities they call minds.
In natural systems, minds are implemented via brains, one particular class of physical device.
In natural organisms, brains generate the behavior and internal events cognitively perceived by humans as resulting from minds.
A key foundational hypothesis in Artificial Intelligence is that minds are computational systems that can be implemented via a diversity of physical devices (this is commonly referred to as substrate independence).
A key foundational hypothesis in Cognitive Science is that, in certain situations, humans will cognitively perceive the behavior generated by non-organic systems as emanating from a mind. Substrate independence thus becomes the conjecture that humans can perceive the presence of a mind in synthetic devices.
Artificial Intelligence cares about how to build systems that exhibit the intelligent behavior of a mind.
Artificial Intelligence cares about how to build systems whose behaviour will be cognitively perceived by humans as generated by a mind.
Neuroscience concerns the structure and function of brains, and thus cares most for how minds arise from brains.
Neuroscience concerns the structure and function of brains. Psychology cares most about how minds are perceived to arise from brains.
Robotics concerns building and controlling artificial bodies, and thus cares most for how minds control such bodies.
Robotics concerns building and controlling artificial bodies and thus cares for how synthetic processes can generate the (intelligent) behaviour that is perceived by humans as controlled by minds.
This last statement, suggesting that a mind, a cognitive representation generated inside the brain of an observer, controls a body in the external environment of that observer, underscores the deep ambiguity of consensual subjectivity.
Minds don’t control any bodies, either organic or synthetic. Brains control organic bodies and information systems control synthetic ones. As for minds, they are the cognitive representations of those controls inside the brains of their observers.
Current collective attempts to model the mind misinterpret the consensual subjectivity of the human participants as external realities.
The design, in Software Engineering, of synthetic agents that model sensory inputs into cognitive constructs provides new and objective insights into cognition.
These insights are the basis of a definitive model of the mind as a simplified cognitive construct of the mechanisms that animate complex behaviour.
This model is suitable as a coherent reference in all fields of research related to cognition, either biological or synthetic.