The following article builds upon the arguments and evidence offered in the previous post How You Know What You Know; however, the contents below stand on their own. A further review of the history of Cognitive Science can be found at How do human minds work?: The Cognitive Revolution and Paradigm Change in Cognitive Science.
----
1. Sensory Integration and Interdependence
The transition from sensations to perceptions is commonly referred to as sensory integration. The importance of this process is such that it led Rodney A. Brooks and the robotics team at MIT to postulate it as an ‘alternative essence of intelligence’ (Brooks et al. 1998) during their first attempt at building a humanoid robot, appropriately named Cog.
Sensations are modality-specific; perceptions are not, even though we can attempt to dissociate the different sense streams and partly succeed in doing so. As evidence, consider two phenomena: sensory illusions and synesthesia.
Sensory illusions can be uni-modal (involving one sense modality, like the images above and below), multi-modal (involving two or more sense modalities; see, e.g., Turatto, Mazza & Umiltà 2005), or involve a sense modality and some piece of standing knowledge. As remarked by Fodor (2003), early 20th-century Gestalt psychologists were more than justified in offering sensory illusions against the empiricists of their day. David Hume, and the tradition that ensued, granted an individual privileged access to his sensations. But, as the Gestalt psychologists would argue, perceiving involves construction, not just passive reception. Sensations decay; what persist are perceptions flowing through ideas.
(Just in case you thought the above illusion was due to the surroundings, see the image below.)
Hume’s agglomeration of impressions and ideas into the single bucket of perceptions (classifying both impressions and ideas as types of perceptions), together with his implacable loathing of skeptics, led him straight to a mistaken view of the mind. By compromising with the skeptic and with contemporary cognitive scientists, it is possible to recognize the ephemeral character of sensations and to identify perception with sensory integration, which necessarily involves active construction, namely the activation of learned mental representations. This move does not undermine the core tenet of empiricism (i.e., that there are no innate ideas); rather, it delineates a point where bottom-up and top-down processing converge in the constant and continuous process of real-time experience.
Synesthesia is less well known: a rare condition with onset in early development and no known treatment. Until recently, little research or funding had been directed toward the study of this condition, mainly because it only rarely impairs a person’s productivity and its incidence is low, around 1 in every 1,150 females and 1 in every 7,150 males (Rich, Bradshaw, & Mattingley 2005; however, Sagiv et al. 2006 have challenged the existence of a male-female asymmetry). These numbers remain under revision, and the condition’s incidence is widely debated, since synesthetes rarely see their condition as a problem, but rather as a gift, and hence do not seek professional counsel.
A synesthete has two or more modalities intertwined, usually uni-directionally, such that some features in one modality reliably cause some unrelated features in another modality (Cytowic 1993, Cytowic 1995, Rizzo & Eslinger 1989; but see Knoch et al. 2006, who argue that even in clearly uni-directional cases there is some bidirectional activation; also Paffen et al. 2015). The patterns of association are established early in development and are stable throughout the lifespan. Moreover, no two synesthesias are alike. On the one hand, not only are many modality combinations possible, such as colored hearing, tasting tactile textures, or morphophonetic proprioception, but also, though it is extremely rare, more than two modalities can become entangled. On the other hand, even synesthetes who belong to the same class, like colored hearing, have completely different patterns of feature association. For example, colored-alphabet synesthesia involves person-specific letter-to-color mappings in which each letter always appears in a specific color.
Karen's Colored Alphabet
Carol's Colored Alphabet
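To make the idiosyncrasy concrete, here is a toy sketch in Python; the particular letter-color pairings are invented for illustration, not taken from Karen’s or Carol’s actual alphabets. Two synesthetes of the same class share the phenomenon but not a single mapping:

```python
# Invented letter-to-color pairings (not Karen's or Carol's actual
# alphabets): two synesthetes of the same class, colored alphabet,
# share the phenomenon but not a single mapping.
karen = {"A": "crimson", "B": "ochre", "C": "teal"}
carol = {"A": "pale green", "B": "violet", "C": "slate gray"}

for letter in "ABC":
    print(letter, "->", karen[letter], "|", carol[letter])
```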
But colored-alphabet synesthesia is among the least invasive forms. In colored-hearing synesthesia, certain sounds can trigger beams of colorful light situated in a personal space extending about one meter in front of the synesthete’s face. The fact that colored hearing typically involves such a personal space is indicative of associations made very early in development, as infants cannot see much past that space. Indeed, the associations must have been made so early as to be incorporated into the individual’s base perceptual code, a fact that illustrates not only the distinction between a sensation and a perception, but also the effect that ideas have in delimiting perception; fittingly, no person with synesthesia has yet been found who remembers a time when they did not have their particular anomalous perceptions. As such, synesthesia ought to be deemed paradigmatic for any empiricist cognitive architecture, because it not only shows (in an exaggerated manner) that sensory integration, i.e., perception, implies active construction, but also hints at how individual differences are the rule, rather than the exception, in the formation of representational capacities, which would indicate that these capacities are not innate.
In fact, synesthesia might be paradigmatic of cognition in general, so much so that it has led researchers (Baron-Cohen 1996, Maurer 1993) to seriously explore the Neonatal Synesthesia Hypothesis, which states that “early in infancy, probably up to about 4 months of age, all babies experience sensory input in an undifferentiated way. Sounds trigger both auditory and visual and tactile experiences” (Baron-Cohen 1996). Since neonatal nervous systems are in the process of approximating environmental properties and specializing in domains of processing, experience for the infant might just be one constant synesthetic flow. On this view, synesthesia can be explained as a derailment of an early process of modularization that the brain undergoes as a function of neural competition in the processing of the input stream during development.
There is a second, competing explanation for synesthesia, what might be called the perceptual mapping hypothesis. According to this view, synesthesia occurs not so much as a function of modularization (although this process may still be relevant), but rather as a function of early induction of the associated pairs and their subsequent entrenchment in the individual’s base perceptual code (i.e., during some critical period; see Rich, Bradshaw, & Mattingley 2005). Since for most synesthetic associations there is no clear source of what the target ought to be other than the input itself, the individual can go a prolonged time without knowing that their perceptions are irregular, and by then the association might be so entrenched in the representational system that correction is either too late or too dangerous, because changing the base code would negatively affect all the other cognitive capacities built upon it. Which account is correct is ultimately a scientific question that must be settled experimentally; nonetheless, either explanation affords support to present-day empiricism based on connectionism and dynamical systems theory (Beer 2014, Rumelhart 1989, van Gelder 1999).
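For the curious, here is a minimal sketch of the kind of entrenchment the perceptual mapping hypothesis envisions, written as a toy Hebbian learner in Python; every number in it (the learning rate, the number of candidate features, the length of the "critical period") is invented for illustration:

```python
import numpy as np

# A toy Hebbian learner illustrating entrenchment: an inducer (say, the
# letter 'A') happens to co-occur with one concurrent feature (say, a
# shade of red) during a critical period. Every number below is invented.

rng = np.random.default_rng(0)
n_features = 5                            # candidate concurrent features
w = rng.uniform(0.0, 0.1, n_features)     # weak, random initial links
lr = 0.05                                 # plasticity during the period

for _ in range(200):                      # early, repeated co-occurrence
    concurrent = np.zeros(n_features)
    concurrent[0] = 1.0                   # the accidentally paired feature
    w += lr * concurrent                  # Hebbian strengthening
    w /= w.sum()                          # competitive normalization

print(w.round(3))   # nearly all association weight now sits on feature 0
```

Once the normalized weight concentrates on the accidentally paired feature, undoing the link would mean disturbing the very code on which later representations are built, which is the entrenchment the hypothesis appeals to.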
The neuropsychological and ontological question underlying both sensory illusions and synesthesia is where to draw the line between a sensation and a perception. In the journal Current Opinion in Neurobiology, Shimojo and Shams (2001) of the California Institute of Technology go as far as to argue that there are no distinct sensory modalities, since the supposed sensory systems modulate one another continuously as a function of the transience of the stimuli. They reach this radical conclusion by considering a wealth of recent findings in neuropsychology, including the plasticity of the brain and the role that experience plays in determining processing localization (i.e., emergent modularization). And they are very likely correct: sensory integration is the rule rather than the exception, even in adult ‘early’ cortical sensory processing. This claim is echoed by Ghazanfar & Schroeder (2006), who argue not only that there are no uni-modal processing regions in the neocortex at all, but also that the entirety of the neocortex is composed of associative, multi-sensory processing.
So what is the difference between a sensation and a perception? Succinctly, a sensation becomes a perception when it is mediated by an idea. When a mental representation intervenes in the flow of a sensation, when it delineates its processing, the process of construction and integration begins.
2. Aspects of the Nature of Emotions
Damasio (1994) claims that what sets the stage for heuristic, full-blown human reason are limbic system structures that code for basic emotions and that, through experience, help train the cortical structures built on top of them, which then code for complex emotions. His somatic marker hypothesis states that emotional experiences set up markers that later guide our decision-making processes. It is a well-known fact that when we try to solve a problem we do not consider all the alternatives, only the tiniest fraction. These markers of past bodily states, set up in our brain, allow our minds to discard the vast majority of possibilities before deliberation even begins, leaving a small set that we may manage to ponder (a toy sketch of such pruning appears below). Such training mechanisms are patently fruitful from an evolutionary standpoint, as illustrated by the Artificial Life simulation discussed after the sketch.
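Before turning to that simulation, here is a minimal Python sketch of marker-based pruning; the options, marker values, and threshold are all invented, and the `deliberate` function is a mere stand-in for whatever costly evaluation real deliberation involves:

```python
# A toy sketch of marker-based pruning. The options, marker values, and
# threshold are invented, and deliberate() is a stand-in for whatever
# costly evaluation real deliberation involves.

def deliberate(option):
    """Slow, expensive scoring of a single option (placeholder)."""
    return len(option)

options = {                     # hypothetical options with learned markers
    "invest savings in lottery tickets": -0.9,
    "skip lunch to keep working": -0.2,
    "walk to the meeting": 0.4,
    "take the new job offer": 0.7,
}

THRESHOLD = 0.0                 # options marked below this never surface
shortlist = [o for o, marker in options.items() if marker >= THRESHOLD]
best = max(shortlist, key=deliberate)
print(shortlist, "->", best)
```

The point of the sketch is only the shape of the mechanism: the expensive evaluator never even sees the negatively marked options.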
Nolfi & Parisi (1991) simulated the evolution of agents made up of artificial neural networks whose only task was to find food in a simulated world. Two distinct types of evolution were explored. In the first, the networks that were most successful at finding food in each generation were allowed to reproduce, which meant that new neural networks would begin with similar, though not identical, connection weights. What evolves, in this scenario, is the solution to the problem of navigation and food localization. Over several generations, the resulting agents have no problem finding food at birth, so to speak. This is the equivalent of evolution hand-coding the solution into the neural connections, that is, of evolution installing truly innate ideas. For complex organisms, however, this kind of pinpoint fixation is untenable. The second type of evolution involved agents made up of two distinct networks: the first handled navigation, as the agents in the first simulation did, while the second helped train the navigating network (it did not navigate at all). In this simulation, the first network was a tabula rasa in every generation, and what was allowed to evolve were the connection weights of the training network. Upon comparing the two end-state types of agents, Nolfi & Parisi found that the auto-teaching networks consistently outperformed the agents that had the solution to the problem hard-wired at birth.
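Here is a heavily simplified Python sketch of the auto-teaching arrangement, not a reconstruction of Nolfi & Parisi’s actual model: the network sizes, the delta-rule learning, the toy "food" task, and the hill-climbing stand-in for evolution are all invented simplifications.

```python
import numpy as np

# A toy version of the auto-teaching arrangement: an evolved "teacher"
# network generates the training signal for a "student" network that is
# a tabula rasa at birth. Sizes, learning rule, task, and the
# hill-climbing loop are all invented simplifications.

rng = np.random.default_rng(1)
N_IN, N_OUT = 4, 2                      # invented sensor/motor dimensions

def forward(w, x):
    return np.tanh(w @ x)

def lifetime_fitness(teacher_w, steps=200, lr=0.1):
    student_w = rng.normal(0, 0.1, (N_OUT, N_IN))   # blank slate at birth
    fitness = 0.0
    for _ in range(steps):
        x = rng.uniform(-1, 1, N_IN)                # simulated sensory state
        y = forward(student_w, x)                   # student's motor output
        teach = forward(teacher_w, x)               # teacher's training signal
        student_w += lr * np.outer(teach - y, x)    # delta-rule update
        # Toy stand-in for "food found": good behavior tracks the first
        # two inputs, which here encode the food's direction.
        fitness -= np.sum((y - x[:N_OUT]) ** 2)
    return fitness

# Only the teacher's weights are inherited (with mutation); the student
# is re-randomized every generation, just as in the second simulation.
teacher = rng.normal(0, 0.5, (N_OUT, N_IN))
for generation in range(30):
    mutant = teacher + rng.normal(0, 0.1, teacher.shape)
    if lifetime_fitness(mutant) > lifetime_fitness(teacher):
        teacher = mutant

print(lifetime_fitness(teacher))        # fitness improves across generations
```

Note the division of labor: nothing task-specific is inherited by the student; what evolution tunes is only the signal that shapes its lifetime learning.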
It strikes me as altogether probable, if not entirely undeniable, that tastes and emotions serve to guide the inductions of the tabula rasa toward specific ends, just as Nolfi & Parisi’s teaching nets served the blank nets in solving the problems of their existence. Tastes and emotions are fundamental: even at birth, they instruct as to what is food and what can kill you. However, taking Nolfi & Parisi’s simulations at face value would mean that emotions come preset in specific connection configurations, which are a means of mental representation. If, as has been claimed here, all mental representations are ideas, then such a solution would lead to an as-of-yet unseen kind of rationalism (an emotional rationalism, how bizarre!). But there are other ways in which nature might have implemented the mechanism. It might simply have been built into the brain through something other than the patterns of connections; for example, emotions could result from the global effects of neurotransmitters (see, e.g., Williams et al. 2006, Hariri & Holmes 2006) rather than from their specific transmission, as suggested by the fact that both selective serotonin reuptake inhibitors (SSRIs, like Prozac and Zoloft) and MDMA (street name: ecstasy; mechanism: makes neurons release vast quantities of the available serotonin) affect mood significantly. Whereas with SSRIs emotion is attenuated, with MDMA the user feels pure love, a sense of empathy unmatched by any drug on the market. This hypothesis, however, is an open empirical question on which I take no stand.
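To illustrate the alternative, here is a toy Python sketch of diffuse neuromodulation: a single global scalar (a crude stand-in for serotonin level; all values invented) alters a network’s behavior without touching any specific connection pattern:

```python
import numpy as np

# A toy sketch of diffuse neuromodulation: a single global scalar (a
# crude stand-in for serotonin level; all values invented) changes the
# network's response globally without altering any specific connection.

rng = np.random.default_rng(2)
w = rng.normal(0, 0.5, (3, 3))          # fixed connection pattern

def respond(x, neuromodulator):
    gain = 1.0 + neuromodulator         # diffuse effect on every unit
    return np.tanh(gain * (w @ x))

x = np.array([0.5, -0.2, 0.8])
print(respond(x, neuromodulator=0.0))   # baseline mood
print(respond(x, neuromodulator=1.5))   # same wiring, altered response
```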
For our purposes here, it might be enough to note that emotions have traditionally been included within the realm of sensations, as inner sensations. As of yet, I’ve seen no evidence that even remotely challenges this ancient view. For all we know, evolution might have simply implemented a non-representational domain of sensation that serves to guide learning. Such a domain need not be innately represented in the brain because it may be induced from the body itself. This idea lies behind Schachter & Singer’s (1962) classic Attribution of Arousal Theory of Emotion, which holds that emotions are the product of the conjunction of a bodily state and an interpretation of the present environment. In fact, Antonio Damasio and his team have been hard at work attempting to figure out where basic emotions come from. In an admittedly preliminary finding (Rainville et al. 2006), they managed to reliably identify basic emotion types (e.g., fear, anger, sadness, and happiness) with patterns of cardiorespiratory activity. Similarly, Moratti & Keil (2005), working independently out of the University of Konstanz in Germany, found that cortical activation patterns coding for fear depend on specific heart rate patterns (see also, e.g., Van Diest et al. 2009). Should these findings pan out, they would indicate that emotions are a sensory modality. As a sensory modality, emotions permeate experience, which would explain why emotion recognition is widely distributed in the brain (Adolphs, Tranel, & Damasio 2003): emotions become intertwined in the establishment of ideas.
In the end, if emotions are sensations, they are not innate ideas. Ideas are formed from these sensations as a function of their being perceived, a process that could, in principle, account for fine-grained emotional distinctions (Damasio 1994). Be that as it may, it is clear that emotional experience lies at the base of all cognition, even reasoning, since, as a sensory modality, it permeates, directly or indirectly, all other processing, everywhere and always.
3. Corollaries & Implications
Contrary to what it may seem upon first inspection, there is an underlying feature shared by both rationalist classical cognitive architectures (Fodor & Pylyshyn 1988, Newell 1980, Chomsky 1966, Chomsky 1968-2005) and traditional empiricist cognitive architectures like John Locke's and David Hume's, namely that both suppose there is a domain of memory that constitutes a thorough and detailed model or record of states of (the body in the) world. This feature is part of a modern tendency, illustrated somewhat indirectly in the previous section, of overcrowding the mind with what it can get, and does get, for free from the body in the world. In classical architectures, this feature most prominently takes the form of sensory memory, a complete and detailed imprint of the world, only part of whose information will travel to working memory for further processing. On the empiricist side, this feature takes on a more insipid form.
Think of Hume’s use of the word impression as opposed to, for example, sensation. Whereas the term sensation emphasizes both the senses and what is sensed, the term impression mostly accentuates what is imprinted, rendering perception a mainly passive receptor (a photocopier, if you will) upon which states in the world are imprinted. More importantly, the process of imprinting in Hume’s cognitive architecture does not stop with impressions, because ideas, given how he defined them, are nothing more than less lively copies of imprints of states (of the mind) in the world. Moreover, since these ideas record holistically (i.e., somewhat faded yet still complete), as opposed to Barsalou’s (1993, 1999) schematic perceptual symbols, the resulting view is of a mind overcrowded with images, sounds, tastes, smells, emotions, full of all the experiences that the body in the world ever imprints on the mind.
It is important to highlight the active character of perception by identifying perception with the real-time integration of fading sensations with lasting mental representations. Both sensory illusions and synesthesia are evidence of the active nature of perception, because both phenomena illustrate the impact that ideas have upon sensations and the fact that what we perceive is not just an imprint of the world. In this respect, what must be emphasized is the character of neural networks as universal approximators of environmental properties (see How You Know What You Know for a review), which allows them to get their representational constraints for free, from the information being processed. Moreover, as these approximations become entrenched in the processing mechanism, they partially delineate the processing of incoming stimuli.
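As a concrete illustration of approximation "for free", here is a minimal Python sketch; the layer sizes, learning rate, and the sine-wave stand-in for an environmental regularity are all invented for the example. A small network induces the structure implicit in its input stream from nothing but the samples themselves:

```python
import numpy as np

# A minimal sketch of a network inducing an environmental regularity
# from samples alone. The sine wave, layer sizes, and learning rate are
# invented; nothing about the target is pre-specified in the initial
# random weights.

rng = np.random.default_rng(3)
x = rng.uniform(-np.pi, np.pi, (256, 1))   # "sensory" samples
y = np.sin(x)                              # the environmental property

H, lr = 16, 0.05                           # hidden units, learning rate
w1, b1 = rng.normal(0, 1, (1, H)), np.zeros(H)
w2, b2 = rng.normal(0, 1, (H, 1)), np.zeros(1)

for _ in range(5000):
    h = np.tanh(x @ w1 + b1)               # hidden features self-organize
    pred = h @ w2 + b2
    err = pred - y
    # Plain backpropagation: the representational constraint comes from
    # the data being processed, not from anything innate to the network.
    g_w2 = h.T @ err / len(x)
    g_b2 = err.mean(0)
    g_h = (err @ w2.T) * (1 - h ** 2)
    g_w1 = x.T @ g_h / len(x)
    g_b1 = g_h.mean(0)
    w1 -= lr * g_w1; b1 -= lr * g_b1
    w2 -= lr * g_w2; b2 -= lr * g_b2

print(float(np.mean(err ** 2)))            # small error: regularity induced
```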
The resulting view is of a mind primarily full, not of sensory impressions, but of self-organizing approximations to the patterns implicit in such sensations, approximations that serve to anchor further representations through association. These self-organizing approximations aren't just the substrates of "higher-order" processes; higher-order reasoning carries their biases and their limitations, as well as their benefits, like speed and elasticity, as ongoing research on reasoning keeps finding. Human beings are not logical or rational animals. We can become more logical by learning logic, and more rational by learning argumentation and how to spot formal and informal fallacies when these are used (van Gelder 2005, 2002).
For centuries, the supposition that human thinking follows logical rules has permeated and biased explorations into our cognitive capacities. The view that we are endowed with innate ideas that underpin our thinking, that allow us to learn syntax and to think logically, has been the cornerstone of Rationalism in every epoch including our own. But this is a far-fetched fantasy. To paraphrase Bertrand Russell, logic doesn't teach you how to think, it teaches you how not to think.
Cognitive Science is gradually overcoming the rationalist bias that was set at the moment of the discipline's creation. The more evidence mounts, the more it becomes clear that mental processing follows the associative rules of the brain. With this realization, the computer metaphor (that mind is software to the brain's hardware) slowly but surely unravels.
Perhaps this is how dualism finally dies, not with a bang, but with a whimper.