Showing posts with label cognitive architecture.
18.11.18
Preliminary report of closed, on-site polling
Poll #1
Duration: ±3 months
Sample size: A surprising amount; to be expanded as results show promise.
Preliminary disclosure:
Users who scored an elevated Scale 7 [Psychasthenia] on the MMPI-2 AND had a standard blood test on hand overwhelmingly reported higher white blood cell counts than red blood cell counts. As to whether one or both of these counts fell in the abnormal range, the results were mixed: the sample proved too small to reach the statistical significance needed to back any correlation or, conversely, to back the null hypothesis for any specific combination.
A follow-up poll will open and stay open over a longer period, and its numbers will be combined with those already obtained. MMPI-2 test takers who meet the criteria stated above are strongly encouraged to participate: it is likely that enough data will be collected to offer strong support regarding a key skewed dynamic in the human psychoneuroimmunological system, which has recently been found to exhibit bidirectional communication between neurons and cells of the immune system.
Poll #2
Duration: ±10 months
Sample size: 500+ users
Results:
Users reviewed our on-site search engine, Cog, a custom Google search engine modified to filter out most of the noise on the Internet so that users land directly on useful results or on the primary sources corresponding to their query. Cog received over 75% favorable reviews, with fewer than 20% of users reporting either that they were unable to find the primary sources they sought or that they experienced some processing or coding bug while using the on-site CSE. Approval percentages this high are extremely rare in anonymous, online attitude polling. Needless to say, I am very happy to have provided you all with a useful tool, one that becomes increasingly important as the Google search algorithms get hacked to the point that reliable information is no longer readily accessible. Since voters largely approved of the tool's design, I will be expanding and tweaking it over the medium term.
Labels: artificial intelligence, Be Kind, branding, cartels, cognition, cognitive architecture, Commerce, defenses, Free, Free Market, marketing, MMPI-2, neuropsychology, neuroticism
Location: Cupertino, CA, USA
16.10.15
How do human minds work?: The Cognitive Revolution and Paradigm Change in Cognitive Science
During the first half of the 20th century, empiricism permeated most fields related to the study of human minds, particularly epistemology and the social sciences. The pendulum swung toward empiricism at the end of the 19th century in reaction to the introspective and speculative methods that had become the standard in disciplines like psychology, psychophysics and philosophy. Based on technical advances mostly achieved in Russia and the United States, behaviorism took form, threatening to absorb philosophy of language and linguistics (e.g., respectively, Quine 1960, and Skinner 1948, 1957). In reaction to that movement, Cognitive Science emerged as an alternative for those discontent with the reigning versions of empiricism, that is, as a rationalist alternative.
Chomsky (1959) pounced upon Skinner's Verbal Behavior, and he later reasserted his victory as a vindication of rationalism in the face of “a futile tendency in modern speculation”, stating that he did not "see any way in which his proposals can be substantially improved within the general framework of behaviorist or neobehaviorist, or, more generally, empiricist ideas that has dominated much of modern linguistics, psychology, and philosophy" (Chomsky 1967). Chomsky's assault, backed by the research program offered alongside it (Chomsky 1957), would be followed by twenty-five years of almost completely uncontested rationalist consensus. Thus, the Cognitive Revolution is best understood as a rationalist revolution.
Researchers in the newly delineated interdisciplinary field converged in arguing that the mind employs syntactic processes over amodal (i.e., context-independent) structured symbols, some of which must be innate. The computer metaphor guided the formulation of models, whereby mind is to nervous system what software is to hardware. Conceived as a new scientific epistemology, Cognitive Science built bridges across separate disciplines.
Though each field has its own terminology, which can strain effective communication, academics could converge on the view that thought, reasoning, decision-making, and problem-solving are logical, syntactic, serial processes over structured symbols. As such, it may be suggested that the rationalist framework greatly facilitated the gestation and institutional validation of Cognitive Science as an academic domain in its own right. Human cognition could be thought of as a Turing machine (Turing 1936), perhaps one with something like a von Neumann architecture (von Neumann 1945), obeying George Boole's (1854) Laws of Thought, and this computational foundation worked equally well for generative linguists, cognitive psychologists, neuroscientists, computer programmers focused on artificial intelligence, and analytic philosophers fixated on the propositional calculus of inference and human reason. Consequently, most textbooks on cognition contain a few diagrams like the one below.
Models that abide by the aforementioned rationalist premises are known as classicalist or as having a Classical Cognitive Architecture (Fodor and Pylyshyn 1988). It wasn’t until the mid-80s, with the resurgence of modeling via artificial neural networks, that the rationalist hegemony began to crack at the edges, as increasing emphasis was placed on learning algorithms based on association, induction, and statistical mechanisms that for the most part attempted to do away with innate representations altogether. This resurgence threw Cognitive Science into what Bechtel, Abrahamsen & Graham (1998) called an identity crisis, which they date from 1985 until the time of that publication. Almost two decades later, the identity crisis remains unresolved, as this new approach has been met with fierce resistance, displaying the unnerving, painstakingly slow characteristics of a Kuhnian paradigm shift (Kuhn 1962).
In Hume Variations (2003), Jerry Fodor, the most prominent and radical rationalist philosopher of Cognitive Science alive today, rescued the Cartesian in Hume, along with his naïve Faculty Psychology, at the cost of sacrificing his associationist view of learning. Fodor did this, of course, because that maneuver renders Hume a rationalist, and because Cartesian linguistics and reason are central to the inaugural program of Cognitive Science, a framework that Fodor helped construct from the very beginning. Chomsky's (1966) Cartesian Linguistics traces many of the developments of his own linguistic theory, including the key distinction between surface structure and deep structure, to the Port-Royal Grammar published by Arnauld and Lancelot in 1660. The Port-Royal Grammar and the Port-Royal Logic (Arnauld and Nicole 1662) were both heavily influenced by the work of René Descartes. However, the evidence is quickly mounting in a way that suggests that the maneuver needed is the opposite of Fodor's: to rescue the associationist theory of learning while discarding the Cartesian aspects and the folk Faculty Psychology present in Hume's philosophy of mind.
A brief comparison between the prototypical rationalist and empiricist stances is provided in the following table.
Of these positions, the rationalist / empiricist distinction in philosophy of mind rests squarely on the issue of representational nativism. The other facets (listed in mind, processes, and representations above) seem to follow from what would be needed, wanted or expected of a cognitive architecture if there were either some or no innate ideas.
That there are no innate ideas is the core tenet of empiricist philosophy of mind. Hume believed that the mind was made up of faculties, a modular association of distinct associative engines, but he left open the question of whether the faculties arise out of experience (or ‘custom’) or are innately specified (and to what extent). There are two main reasons to think the former is the case. First, uncommitted neural networks approximate functions, both of the body and of the world, paving the way for functional organization through processes of neural self-organization. Second, committed neural networks bootstrap one another toward the approximation of more complicated functions; as this occurs, the domain-general processes of neurons give way to domain-specific functional organizations. However, though the representations that constitute these domain-specific processes can become increasingly applicable to variable contexts, they never become wholly amodal, that is, context-independent, because domain-specific functions remain anchored in domain-general associative processes that are inherently context-dependent or modal. (See How You Know What You Know for a review of the scientific research that supports these two reasons.)
Having said this, it must be noted that neither rationalism nor empiricism actually constitutes a theory of anything at all; the core of each is a single hypothesis – either there are some innate ideas or there are none. There is a third possibility, however: that ideas do not exist, at least not in minds, making the rationalist/empiricist debate obsolete (cf. Brooks 1991). This third option notwithstanding, even though neither empiricism nor rationalism is actually a theory of mind, it is possible to build one in the spirit of the corresponding proposition. That is what Locke, Berkeley and Hume did; it is also what Noam Chomsky did, and what Lawrence Barsalou is doing now (his research program is stated in Barsalou 1999).
Be that as it may, the rationalist consensus that dominated Cognitive Science's first thirty years cannot be explained by mere technological or technical factors. While someone could argue that connectionism did not appear until the mid-80s because neural networks could not be artificially implemented, this claim would be historically unfounded. Bechtel, Abrahamsen & Graham (1998) pinpoint September 11, 1956 as the date of birth of Cognitive Science. Though one may be reluctant to accept such a specific date, it is clear that the inter-disciplinary field emerged around then, plus or minus a few years. However, already in 1943, McCulloch and Pitts proposed an abstract model of neurons and showed how any logical function could be represented in networks of these simple units of computation. By 1956, several research teams had tried their hand at implementing neural networks on digital computers (see, e.g., the project of Rochester, Holland, Haibt & Duda 1956 at IBM). By the early 60's, not only had the idea been explored, Rosenblatt (1962) had even tried building artificial neural networks as actual machines, using photovoltaic cells, instead of just simulating these on digital computers.
When Cognitive Science emerged, the technological tools existed so that research could have gone the rationalist’s or the empiricist’s way, or at least remained neutral on the matter; however, as the Cognitive Revolution is best understood as a rationalist revolution, nativism was hailed, construction began on a Universal Grammar (a project that failed miserably, by the way), decision-making processes were construed as syntactic manipulations on explicit symbol structures (Newell, Shaw, and Simon 1959, Anderson 1982), and neural networks were taken as simple instruments of pattern recognition that could serve to augment a classical cognitive architecture or, at most, to implement what would ultimately be a rationalist story. Fodor & Pylyshyn (1988) were surprisingly blunt on this last point by stating that the issue of connectionism constituting a model of cognition “is a matter that was substantially put to rest about thirty years ago” when the Cognitive Revolution took place. It took thirty years of work for frustration to set in with rationalist approaches; only then would connectionism reappear, augmented by the tools of dynamical systems theory, as a viable alternative to the rationalist or classicalist conception of cognition.
Paradigm Change in Artificial Intelligence
The term ‘connectionist’ was introduced by Donald Hebb (1949) and revived by Feldman (1981) to refer to a class of neural networks that compute through their connection weights. Thousands of connectionist nets, similar to some degree or other to the schematic below, have been created since the 1950s. The wide variety of artificial neural networks is due not only to the function each has been created (and raised) to carry out, which constrains the type of inputs and outputs to which the system has access, but also to their specific architecture—the number of neurons each layer contains, the kind of connections these exhibit, the number of layers, and the class of learning algorithm that calibrates the connection weights.
A clear and very simple example of a connectionist net (seen below) was developed by McClelland and Rumelhart (1981) for word recognition. The 3-layer network proceeded from the visual features of letters to the recognition of words through localist representations of letters in the hidden layer (for a richer discussion, see McClelland 1989). Given its function and its use of localist representations, both the mode of presentation of the input and the mode of generation of the output were constrained by the features of written language, which in turn delineated the network's design.
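To make the localist idea concrete, here is a minimal sketch in Python of a feedforward, features-to-letters-to-words recognizer. It is not the original interactive activation model (it omits the feedback and lateral inhibition that give that model its character), and the vocabulary, feature coding, and numbers are all invented for illustration.

import numpy as np

letters = ["A", "N", "T"]
words = ["ANT", "TAN", "NAT"]

# Rows are toy visual features, columns are letter units (localist coding).
F2L = np.array([[1, 0, 1],
                [0, 1, 0],
                [1, 1, 0],
                [0, 0, 1]], dtype=float)

def recognize(feature_vectors):
    """Map one feature vector per letter position to the best-scoring word."""
    letter_acts = [fv @ F2L for fv in feature_vectors]   # letter-unit activations
    scores = [sum(acts[letters.index(ch)] for ch, acts in zip(w, letter_acts))
              for w in words]                            # word-unit activations
    return words[int(np.argmax(scores))]

# Noisy feature codes for "T", "A", "N" in positions 1-3:
print(recognize([np.array([1.0, 0.0, 0.0, 1.2]),
                 np.array([0.9, 0.0, 1.0, 0.0]),
                 np.array([0.0, 1.1, 0.8, 0.0])]))       # prints TAN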
Borrowed from the Empirical Philosophy of Science Project at the Natural Computation Lab of the University of California, San Diego, the graph below shows the transition from the classicalist paradigm to the connectionist one by plotting the frequency of appearance (by year) of the lexical items ‘expert system’ and ‘neural network’ in peer-reviewed academic journals of Cognitive Science. It can be clearly seen that interest in neural networks supplanted the 1980's craze for expert systems.
For those unfamiliar with the term, an expert system is a decision-making program that is supposed to mimic the inferences of an expert in a given field. The shell of the program is an inference engine that works logically and syntactically, and this engine must be given a knowledge base, a finite set of "If X, then Y" rules the sum of which ought to allow it to perform its target function correctly most of the time. Typically, an expert system either asks you questions or asks you to input specific data, and using those inputs, the inference engine runs through its knowledge base to provide you an answer. Expert systems may be created for purposes of prediction, planning, monitoring, debugging, and perhaps most prominently diagnosis, among several other possible purposes. WebMD's symptom checker, which you may have used once or twice, is perhaps the most well-known example; you click on the symptoms you have, its inference engine passes your data through its knowledge base, and it provides you with a list of all the illnesses you may be suffering from. If you have used that symptom checker more than twice in your life, you probably know how inaccurate it tends to be, even to the point of being ludicrous at times. In stark contrast, many artificial neural networks have been created for detecting all sorts of cancers and can do so with 99% accuracy, that is, better than almost any doctor, like this one for breast cancer created by a girl during her junior year of high school. This is just one of countless domains where empiricist approaches vastly outperform their rationalist counterparts.
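As a minimal sketch of the idea, the following Python snippet implements the skeleton of such an inference engine: a forward-chaining loop over "If X, then Y" rules. The rules and facts are invented for illustration and stand in for what would be a large, expert-curated knowledge base.

def infer(facts, rules):
    """Forward-chaining inference: keep applying 'if conditions, then
    conclusion' rules until no new conclusions can be drawn."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

# Toy diagnostic knowledge base (invented for illustration).
rules = [
    ({"fever", "cough"}, "flu-like illness"),
    ({"flu-like illness", "shortness of breath"}, "recommend: see a doctor"),
]
print(infer({"fever", "cough", "shortness of breath"}, rules))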
As a funny digression, I once had to make an expert system for a graduate class and built a program that would ask you 16 socioeconomic and political questions, from which it would diagnose your preferred political philosophy (e.g., anarchism, liberalism, republicanism, communism, constitutional monarchism, fascism, and so on). My artificial intelligence professor took it with him to the School of Engineering to test it out on his students, and when I saw him again, he commented that he was impressed by how accurate it was. It was definitely more accurate than WebMD but, then again, medical diagnosis is a far more complicated knowledge domain with many more possible outputs, so that is an unfair comparison. On an unrelated but also funny note, my other artificial intelligence professor told the story of how he had lost faith in artificial neural networks in grad school when he created a system that would either approve or reject a bank loan application. He would input the demographic and personal income data as well as the loan information, and the network would respond with a simple Approve or Reject. But he created the network with a twist: he deliberately trained it on a racist data set in such a way that the network wouldn't give out any prime loans to anyone who wasn't white. He wanted to see if the network would ever learn the error of its ways or at least acknowledge its racism, but it never did, and he said that at that moment he lost all faith in connectionist networks. When he finished telling the story, I immediately raised my hand and said—"You do realize that that is exactly what happens with many bankers in real life, right? Your network didn't fail; it behaved like a human would."
Reframing Cognitive Science
The seeds of empiricism have been sprouting almost everywhere. The last thirty years have seen an ever-increasing portion of scientific research dedicated, even if reluctantly, to supporting some of the central tenets of an empiricist theory of mind or to articulating mechanisms that augment it.
In artificial intelligence, connectionist architectures emerged in the 80's as a clear and feasible alternative to symbolic approaches (a.k.a. good old-fashioned artificial intelligence or GOFAI; Haugeland 1985, Dreyfus 1992). The tools of dynamical systems theory, widely used in physics, bolstered connectionism by providing a robust account of a system's ontogenetic evolution through time (van Gelder 1999). Connectionism provided what behaviorism lacked: powerful learning mechanisms that could account not only for how intelligent agents derive knowledge from experience but also for how we surpass that limited information to conceive an unlimited number of possibilities. Furthermore, the tools of dynamical systems theory opened the possibility of seeing what goes on inside the ‘black box’, while also helping psychology get in sync with physics and neurology. In this sense, connectionism ought not to be confused with behaviorism, because neural network architectures permit an agent to surpass the limited stimulus-response patterns that it encounters (Lewis and Elman 2001, Elman 1998). It should be noted, however, that connectionist computation is not synonymous with empiricism; it is, in fact, entirely compatible with rationalist postulates, as exemplified by Optimality Theory (Prince & Smolensky 1997), an attempt to implement universal grammar via a connectionist architecture. Nevertheless, this compatibility is a token truism that goes both ways: artificial neural networks and Turing machines exhibit equivalent computational power inasmuch as either can implement any definable function, which is why most people simulate neural networks on common personal computers. (Currently, the best free, open-source software for creating your own neural network with relative ease is Emergent, a program hosted by the University of Colorado that runs on Windows, Macintosh OS's, and Linux-Ubuntu.) Looking beyond this universal computational compatibility, connectionism clearly opens the door to empiricism, and the vast majority of connectionist models do away with rationalist tenets and clearly partake of the long-standing empiricist tradition, even if many of their authors aren't willing to admit this publicly because of the entrenched stigma branded into that philosophical label.
In linguistics, a clear alternative to generativism surfaced during the 1980s in the form of Cognitive Linguistics (Langacker 1987, Lakoff 1987). Though cognitive linguistics is not wholeheartedly committed to an empiricist theory of mind, its rejection of the fundamental tenets of generativism is in itself a retreat from the rationalist consensus that stood almost uncontested. Specifically, its rejection of an autonomous, modular universal grammar and its grounding of linguistic abilities in domain-general learning and associative mechanisms represent a big leap towards empiricism. Moreover, as linguistics increasingly meshes with psychology and connectionism, slowly but surely an associationist flavor that had long been wiped out by Chomsky and his followers returns to the field. In consequence, much work in linguistics is being fruitfully redirected from devising categorical acquisition schemes toward testing statistical learning algorithms for the acquisition of syntax as well as for syntax's prehistoric origins (e.g., Hazlehurst and Hutchins 1998, Hutchins and Hazlehurst 1995) and also for how grammar changes throughout history (see, e.g., Hare and Elman 1995).
In psychology, many connectionist-friendly accounts have been offered. Perhaps the most ambitious is Barsalou’s (1999) perceptual symbol systems, an account that takes a firm empiricist stance in the face of rationalist psychology by dissolving the distinction between perception and conception. Moreover, the perceptual symbol systems approach has recently been applied, though not without difficulties, to the theory of discourse (Zwaan 2004) and to the theory of concepts (Prinz 2002). Still, this is not the only empiricist current in psychology: the domain of psycholinguistics has been propelled mostly by psychologists, like Elizabeth Bates and Brian MacWhinney, and has led to findings and models that are very compatible with the tenets of empiricism (see, e.g., Thelen and Bates 2003, Tomasello 2006, Goldberg 2004, MacWhinney 2013). Not to mention that many of the early proponents of the parallel distributed processing (or PDP) approach to Cognitive Science, like Rumelhart and McClelland, were psychologists by profession.
Empiricist cognitive architecture has gained a voice in every discipline in the cognitive sciences. The increasing acceptance of empiricism is leading not only to the testing of a rapidly growing number of so-inspired hypotheses but also to a vast reinterpretation of earlier findings in light of radically different postulates. What has been taking place is clearly a Kuhnian paradigm shift. Hence, an exorbitant amount of work remains to be done. For starters, oddly enough, several empiricist researchers are not convinced that their standing agendas are in fact empiricist, that is, that replacing ‘empiricist’ with ‘interactionist’ or with ‘emergentist’ does not blot out the ‘empiricist’.
Consider, for example, the book Rethinking Innateness: A Connectionist Perspective on Development (Elman et al. 1996). After a thorough and outstanding assault on rationalism and defense of empiricism, the group goes on to assert “We are not empiricists” (p. 357). Like many other fearful academics, they view the label ‘empiricist’ as a stigma, not unlike having to bear the Scarlet Letter. It is about time that this stigma be removed, and in that spirit I offer a few clarifications. First, regardless of what Chomsky and Fodor would like us to believe, behaviorism and empiricism are not synonymous, as most versions of connectionism clearly illustrate. Even the simplest neural learning algorithms, such as error backpropagation, offer what behaviorism could not: statistical means that can carry cognition from learning through finite data to understanding an infinite number of possibilities. Second, consider the following excerpt—
"We are neither behaviorists nor radical empiricists. We have tried to point out throughout this volume not only that the tabula rasa approach is doomed to failure, but that in reality, all connectionist models have prior constraints of one sort or another. What we reject is representational nativism." (Elman et al. 1996 1996, p. 365)
In Rethinking Innateness, the authors distinguish between three kinds of possible innate constraints: representational, architectural, and chronotopic (timing). A prime example of an architectural constraint is the characteristic 6-layer structure of the human neocortex; for chronotopic constraints, think of embryonic cell migrations. As stated above, the group posits a wealth of innate architectural and chronotopic constraints but rejects representational constraints. It is this wealth of mechanisms that can go into delineating what kind of tabula the mind is that leads them to suggest that interactionism renders empiricism false. But empiricists have never shunned innateness altogether. The empiricist-rationalist distinction rests squarely on the issue of innate mental representations.
Advancing a strong view of architectural and chronotopic constraints does not move one away from the notion of a tabula rasa. The interaction of the many constraints with the world shapes the tabula—no sane empiricist would ever deny this!—but that does not render the tabula un-rasa; it just delineates what kind of tabula it is (i.e., a nervous system, not a DVD or a 35mm film or an infinite magnetic tape). To put it simply, denying all innate architectural and chronotopic features would be tantamount to claiming that children resemble their parents only because their parents raise them. No one ever claimed that! The debate between rationalists and empiricists has always been about whether there are certain pieces of knowledge represented in the mind that are simply not learned. If you reject representational nativism yet do not reject the existence of something like ideas or mental representations, then you are committed to the tabula rasa, whether you like it or not. It may be unpopular, but it is nevertheless so, because rejecting representational nativism without discarding mental representation is affirming that there are no innate ideas. That the type of tabula determines what kind of information can be written on it, and that human brains are highly structured, does not entail the falsity of empiricism, unless representation is preprogrammed into the slate. Without unlearned representations, a highly structured and complex tabula is as concordant with empiricism as a simple and amorphous pattern-seeking agent.
Clearly, the type of slate proposed today is different from what was proposed during the Enlightenment. To Hume, the mind was primarily a passive photocopier of experience; in contrast, current neural networks are much more active in their assimilation of environmental information. Moreover, while Hume thought that human minds associate the compiled copies of experience according to three domain-general types of association, connectionist neural networks are universal approximators that modularize as functional approximations consolidate under the details of the surrounding environment and, in consequence, readily develop mechanisms that go beyond association through association itself (see How You Know What You Know for a review). Advancing a stronger, more complex view of the cognitive slate does not distance the account from empiricism so long as it rejects representational nativism, just as Elman et al. (1996) did.
It is telling that connectionists naturally gravitate toward empiricism in spite of the stigma surrounding the tradition and even their own explicit assertions and roundabout philosophical identifications. Ultimately, the hallmark dispute between connectionists and classicalists is the question of what kind of tabula the mind is, a question that does not directly concern the rationalist/empiricist distinction but follows from it by entailment. It is really just a practical matter that, whereas syntactic or logical engines require innate representations, complex neuronal slates like ours do not. Then again, it is also a practical matter that the only intelligent beings we know of are born with highly complex neural networks. Deep down, I am inclined to think that Fodor’s Informational Atomism is logically correct—if the mind works like a logical or syntactic engine, then all simple concepts must be innate. As Barsalou (1999) notes, there are no accounts on offer for how simple symbols can be acquired by a classical cognitive architecture or any logical or syntactic engine, and this may very well be because no such account is possible. This admission, however, should not lead us to accept Fodor’s theory of concepts, but rather convince us that the mind is not a Turing machine (like the image below) or a syntactic engine (cf. Pinker 2005).
As the evidence mounts, even Chomsky has had to abandon most of the original postulates of generative linguistics, including the important distinction between surface structure and deep structure and the view that syntax is a totally autonomous faculty that does not derive from or associate at all with the lexicon. The Minimalist Program (1995) reduced the philosophical rationalism of Chomsky's theory to such an extent that several academics who have based their own work on generative models, suddenly finding themselves in a theoretical void that threatens their research, have chosen either to ignore it entirely or to attempt to undermine the program. But this is just one example of how rationalist philosophy of mind is undergoing its slow death, weakening as the data pile up. As the first generation of cognitive scientists dies out and the third generation starts to assume positions of power, the stigma branded upon empiricism will weaken. The likely result is a renewal that will allow funding to flow to new experimental techniques and to innovative practical applications across the interrelated disciplines. Exciting times lie ahead.
-------
REFERENCES
- Anderson, J.R. (1982). “Acquisition of cognitive skill”. Psychological Review 89: 369-406.
- Arnauld, A. & Lancelot, C. (1660). General and Rational Grammar: The Port-Royal Grammar. J. Rieux and B.E. Rollin (trans.). The Hague: Mouton, 1975.
- Arnauld, A. & Nicole, P. (1662). Logic, or The Art of Thinking; being The Port-Royal Logic. Thomas Spencer Baynes (trans.). Edinburgh: Sutherland and Knox, 1850.
- Barsalou, L.W. (1999). “Perceptual symbol systems.” Behavioral and Brain Sciences, 22: 577-609.
- Bechtel, W., Abrahamsen, A. & Graham, G. (1998). "The Life of Cognitive Science". A Companion to Cognitive Science. W. Bechtel & G. Graham (eds.). Massachusetts: Blackwell Publishers Ltd.
- Boole, G. (1854). An Investigation of the Laws of Thought on Which are Founded the Mathematical Theories of Logic and Probabilities. London: Macmillan.
- Brooks, R.A. (1991). “Intelligence Without Representation.” Artificial Intelligence Journal 47: 139–160.
- Chomsky, N. (1957). Syntactic Structures. New York: Mouton de Gruyter.
- Chomsky, N. (1959). "A Review of B. F. Skinner's Verbal Behavior." Language, 35, No. 1: 26-58.
- Chomsky, N. (1966). Cartesian Linguistics: A Chapter in the History of Rationalist Thought. New York: Harper & Row.
- Chomsky, N. (1967). “Preface to the 1967 reprint of ‘A Review of Skinner's Verbal Behavior’.” Readings in the Psychology of Language. Leon A. Jakobovits & Murray S. Miron (eds.). Prentice-Hall, Inc. pp. 142-143.
- Chomsky, N. (1995). The Minimalist Program. Cambridge, MA: MIT Press.
- Dreyfus, H.L. (1992). What Computers Still Can’t Do: A Critique of Artificial Reason. Cambridge, MA: MIT Press.
- Elman, J. L. (1998). “Connectionism, artificial life, and dynamical systems: New approaches to old questions.” A Companion to Cognitive Science. W. Bechtel & G. Graham (eds.) Oxford: Basil Blackwood.
- Elman, J.L., Bates, E.A., Johnson, M.H., Karmiloff-Smith, A., Parisi, D., Plunkett, K. (1996). Rethinking Innateness: A Connectionist Perspective on Development. Cambridge, MA: MIT Press.
- Feldman, J.A. (1981). “A connectionist model of visual memory.” Parallel Models of Associative Memory. G.E. Hinton & J.A. Anderson (eds.). New Jersey: Erlbaum.
- Fodor, J.A. (2003). Hume Variations. New York: Oxford University Press.
- Fodor, J.A. & Pylyshyn, Z.W. (1988). “Connectionism and Cognitive Architecture: A Critical Analysis.” Cognition 28: 3-71.
- Goldberg, A.E. (2004). “But do we need Universal Grammar? Comment on Lidz et al. (2003).” Cognition 94: 77-84.
- Hare, M. & Elman, J.L. (1995). “Learning and morphological change.” Cognition 56: 61-98.
- Haugeland, J. (ed.) (1985). Artificial Intelligence: The Very Idea. Cambridge, MA: MIT Press.
- Hazlehurst, B. & Hutchins, E. (1998). “The emergence of propositions from the co-ordination of talk and action in a shared world.” Language and Cognitive Processes 13(2/3): 373-424.
- Hebb, D. (1949). The Organization of Behavior: A Neuropsychological theory. New York: Wiley.
- Hutchins, E. & Hazlehurst, B. (1995). “How to invent a lexicon: the development of shared symbols in interaction.” Artificial Societies: the computer simulation of social life. N. Gilbert & R. Conte (eds.). London: UCL Press. pp. 157-189.
- Kuhn, T. (1962). The Structure of Scientific Revolutions. Chicago: University of Chicago Press, 1970. (2nd revised edition)
- Lakoff, G. (1987). Women, Fire, and Dangerous Things: What Categories Reveal About the Mind. Chicago: The University of Chicago Press.
- Langacker, R.W. (1987). Foundations of Cognitive Grammar. Stanford, CA: Stanford University Press.
- Lewis, J.D., & Elman, J.L. (2001). “Learnability and the statistical structure of language: Poverty of stimulus arguments revisited.” Proceedings of the 26th Annual Boston University Conference on Language Development.
- MacWhinney, B. (2013). “The Logic of a Unified Model”. S. Gass and A. Mackey (eds.). Handbook of Second Language Acquisition. New York: Routledge. pp. 211-227.
- McClelland, J.L. & Rumelhart, D.E. (1981). “An interactive activation model of context effects in letter perception: Part 1. An account of basic findings.” Psychological Review 88: 375-407.
- McClelland, J.L. (1989). “Parallel distributed processing: Implications for cognition and development.” Morris, R. (ed.) Parallel distributed processing: Implications for psychology and neurobiology. New York: Oxford University Press.
- McCulloch, W.S. & Pitts, W. (1943). “A logical calculus of the ideas immanent in nervous activity.” Bulletin of Mathematical Biophysics 5: 115–137.
- Newell, A., Shaw, J.C. & Simon, H.A. (1959). “Report on a general problem-solving program”. Proceedings of the International Conference on Information Processing. pp. 256-264.
- Pinker, S. (2005). "So How Does The Mind Work?" Mind and Language 20, 1: 1-24.
- Prince, A. & Smolensky, P. (1997). “Optimality: From Neural Networks to Universal Grammar”. Science 275: 1604-1610.
- Prinz, J.J. (2002). Furnishing the Mind. Cambridge, MA: MIT Press.
- Quine, W.V.O. (1960). Word and Object. Cambridge, MA: MIT Press.
- Rochester, N., Holland, J.H., Haibt, L.H., & Duda, W.L. (1956). “Tests on a cell assembly theory of the action of the brain, using a large digital computer.” IRE Transactions on Information Theory 2: 80-93.
- Rosenblatt, F. (1962). Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms. Washington, D.C.: Spartan Books.
- Skinner, B.F. (1957). Verbal Behavior. Acton, MA: Copley, 1992.
- Thelen, E. & Bates, E. (2003). “Connectionism and dynamic systems: are they really different?” Developmental Science 6, 4: 378-391.
- Tomasello, M. (2006). “Acquiring linguistic constructions”. Handbook of Child Psychology. Kuhn, D. & Siegler, R. (eds.). New York: Wiley.
- Turing, A.M. (1936). "On Computable Numbers, with an Application to the Entscheidungsproblem". Proceedings of the London Mathematical Society, 2, 42: pp. 230–65, 1937.
- van Gelder, T.J. (1999). “Defending the dynamical hypothesis.” Dynamics, Synergetics, Autonomous Agents: Nonlinear Systems Approaches to Cognitive Psychology and Cognitive Science. W. Tschacher & J.P. Dauwalder (eds.) Singapore: World Scientific. pp. 13-28.
- von Neumann, J. (1945). "First Draft of a Report on the EDVAC". Originally confidential [property of the United States Army Ordnance Department].
- Zwaan, R.A. (2004). “The Immersed Experiencer: Toward an embodied theory of language comprehension.” The Psychology of Learning and Motivation 44: 35-62.
--------
1.9.15
On Perception, Emotion, & Decision-Making
The following article builds upon the arguments and evidence offered in the previous post How You Know What You Know; however, the contents below stand on their own. A further review of the history of Cognitive Science can be found at How do human minds work?: The Cognitive Revolution and Paradigm Change in Cognitive Science.
----
1. Sensory Integration and Interdependence
The transition from sensations to perceptions is commonly referred to as sensory integration. The importance of this process is such that it led Rodney A. Brooks and the robotics team at MIT to postulate it as an ‘alternative essence of intelligence’ (Brooks et al. 1998) during their first attempt at building a humanoid robot, appropriately named Cog.
Sensations are modality-specific; perceptions are not, even though we can attempt to dissociate the different sense streams and partially succeed in doing this. As evidence, consider two phenomena: sensory illusions and synesthesia.
Sensory illusions can be uni-modal (involving one sense modality, like the images above and below), multi-modal (involving two or more sense modalities; see, e.g., Turatto, Mazza & Umiltà 2005), or involve a sense modality and some piece of standing knowledge. As remarked by Fodor (2003), early 20th century Gestalt psychologists were more than justified in offering sensory illusions against their contemporary empiricist counterparts. David Hume, and the tradition that ensued, granted an individual privileged access to his sensations. But, as the Gestalt psychologists would argue, perceiving involves construction, not just passive reception. Sensations decay; what persists are perceptions flowing through ideas.
Hume’s agglomeration of impressions and ideas into the single bucket of perceptions (classifying both impressions and ideas as types of perceptions), together with his implacable loathing of skeptics, led him straight to an erroneous view of the mind. By compromising between the skeptic and contemporary cognitive scientists, it is possible to recognize the ephemeral character of sensations and identify perception with sensory integration, which necessarily involves active construction, that is, the activation of learned mental representations. This move does not undermine the core tenet of empiricism (i.e., there are no innate ideas); rather, it just delineates a point where bottom-up and top-down processing converge in the constant and continuous process of real-time experience.
Synesthesia is less well known: a very rare condition that has its onset in early development and for which there is no treatment. Until recently, very little research and funding had been directed towards the study of this condition, mainly because it only rarely impairs a person’s productiveness and its incidence is quite low, around 1 in every 1150 females and 1 in every 7150 males (Rich, Bradshaw, & Mattingley 2005; however, Sagiv et al. 2006 have challenged the existence of a male-female asymmetry). These numbers are still under revision, and the incidence of this condition is widely debated, since synesthetes rarely see their condition as a problem, but rather as a gift, and hence do not seek professional counsel.
A synesthete has two or more modalities intertwined, usually uni-directionally, such that some features in one modality reliably cause some unrelated features in another modality (Cytowic 1993, Cytowic 1995, Rizzo & Eslinger 1989, but see Knoch et al. 2006, who argue that even in clear uni-directional cases there is some bidirectional activation; also Paffen et al. 2015). The patterns of association are established early during development and are stable throughout the lifespan. Moreover, no two synesthesias are alike. On the one hand, not only are many modality combinations possible, such as colored hearing, tasting tactile textures, or morphophonetic proprioception, but also, though it is extremely rare, more than two modalities can become entangled. On the other hand, even synesthetes who belong to the same class, like colored hearing, have completely different patterns of feature association. For example, colored-alphabet synesthesia involves person-specific ‘color - written letter’ mappings where each letter always appears in a specific color.
(Images: Karen's colored alphabet and Carol's colored alphabet.)
But colored alphabet synesthesia is among the least invasive. In colored hearing synesthesia, certain sounds can trigger beams of colorful light situated in a personal space extending about 1 meter in front of the synesthete's face. The fact that colored hearing synesthesia typically involves such a personal space is indicative of associations made very early in development, as infants cannot see much past that space. Indeed, the associations must have been made so early as to be incorporated into the base perceptual code of the individual, a fact that illustrates not only the distinction between a sensation and a perception, but also the effect that ideas have in delimiting perception; it is reinforced by the finding that, as of yet, no person with synesthesia has been found who remembers a time when they did not have their particular anomalous perceptions. As such, synesthesia ought to be deemed paradigmatic for any empiricist cognitive architecture because it not only shows (in an exaggerated manner) that sensory integration—perception—implies active construction, but also hints at how individual differences are the rule, rather than the exception, in the conformation of representational capacities, which would indicate that these capacities are not innate.
In fact, synesthesia might be paradigmatic of cognition in general, so much so that it has led researchers (Baron-Cohen 1996, Maurer 1993) to seriously explore the Neonatal Synesthesia Hypothesis, which states that “early in infancy, probably up to about 4 months of age, all babies experience sensory input in an undifferentiated way. Sounds trigger both auditory and visual and tactile experiences” (Baron-Cohen 1996). Since neonatal nervous systems are in the process of approximating environmental properties and specializing in domains of processing, experience to the infant might just be one constant synesthetic flow. By adopting this view, synesthesia can be explained as a derailment of an early process of modularization that the brain undergoes as a function of neural competition in the processing of the input stream during development.
There is a second, competing explanation for synesthesia, what might be called the perceptual mapping hypothesis. According to this view, synesthesia occurs not so much as a function of modularization (although this process may still be relevant), but rather as a function of early induction of the associated pairs and subsequent entrenchment of these pairs into the base perceptual code of the individual (i.e., during some critical period; see Rich, Bradshaw, & Mattingley 2005). Since for most synesthetic associations, there is no clear source of what the target ought to be other than the input itself, the individual can go a prolonged time without knowing that their perceptions are irregular, and by then the association might be so entrenched in the representational system that it might either be too late for it to be corrected or it might be too dangerous because changing the base code would negatively affect all other cognitive capacities that are built upon it. Which account is correct is ultimately a scientific question that needs to be experimentally approached; nonetheless, either explanation affords support to present-day empiricism based on connectionism and dynamical systems theory (Beer 2014, Rumelhart 1989, van Gelder 1999).
The neuropsychological and ontological question underlying both sensory illusions and synesthesia is where to draw the line between a sensation and a perception. In the journal Current Opinion in Neurobiology, Shimojo and Shams (2001) of the California Institute of Technology go as far as to argue that there are no distinct sensory modalities, since the supposed sensory systems modulate one another continuously as a function of the transience of the stimuli. They reach this radical conclusion by considering a wealth of recent findings in neuropsychology that include the plasticity of the brain and the role that experience has on determining processing localization (i.e., emergent modularization). And they are very likely correct; sensory integration is the rule rather than the exception, even in adult ‘early’ cortical sensory processing. This claim is echoed by Ghazanfar & Schroeder (2006), who argue not only that there are no uni-modal processing regions in the neocortex at all but also that the entirety of the neocortex is composed of associative, multi-sensory processing.
So what is the difference between a sensation and a perception? Succinctly, a sensation becomes a perception when it is mediated by an idea. When a mental representation intervenes in the flow of a sensation, when it delineates its processing, the process of construction and integration begins.
2. Aspects of the Nature of Emotions
Damasio (1994) claims that what sets the stage for heuristic, full-blown human reason are limbic system structures that code for basic emotions and that help train, through experience, the cortical structures on top of them, which then code for complex emotions. His somatic marker hypothesis states that emotional experiences set up markers that later guide our decision-making processes. It is a well-known fact that when we try to solve a problem we do not consider all the alternatives, only the tiniest fraction. These markers of past bodily states allow our minds to discard the vast majority of possibilities before we ever consider them, leaving a small set that we may manage to ponder (see the sketch below). Such training mechanisms are patently fruitful from an evolutionary standpoint, as illustrated by the Artificial Life simulation described after the sketch.
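As a toy illustration of that pruning (my own sketch, not Damasio's model), the following Python snippet treats markers as cheap, learned valence estimates that filter options before any costly deliberation; the options and valences are invented.

def choose(options, marker, deliberate, threshold=0.0):
    """Somatic-marker-style pruning: cheap valence check first,
    costly deliberation only over the survivors."""
    shortlist = [o for o in options if marker(o) > threshold]
    return max(shortlist, key=deliberate) if shortlist else None

# Invented valences, as if learned from past bodily outcomes.
valence = {"touch stove": -0.9, "eat apple": 0.6, "poke bear": -0.8, "read": 0.3}
best = choose(valence, marker=valence.get, deliberate=len)
print(best)  # deliberation (here, trivially, string length) never sees the bad options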
Nolfi & Parisi (1991) simulated the evolution of agents made up of artificial neural networks whose only task was to find food in a simulated world. Two distinct types of evolution were explored. In the first, the networks that were most successful at finding food in each generation were allowed to reproduce, which meant that new neural networks would begin with similar, though not exact, connection weights. What evolves, in this scenario, is the solution to the problem of navigation and food localization. Over several generations, the resulting agents have no problem finding food at birth, so to speak. This is the equivalent of evolution hand-coding the solution into the neural connections, that is, of evolution installing truly innate ideas. For complex organisms, however, this kind of pinpoint fixation is untenable. The second type of evolution involved agents made up of two distinct networks. The first network handled navigation, as the agents in the first simulation did, and the second network was in charge of helping train the navigating network (that is, it did not navigate at all). In this simulation, the first network was a tabula rasa in every generation, and what was allowed to evolve were the connection weights of the training network. Upon comparing the two end-state types of agents, Nolfi & Parisi found that the auto-teaching networks consistently performed better at the task than the agents that had the solution to the problem hard-wired at birth.
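The following Python sketch shows the structure of the two regimes. It is my own schematic reconstruction, not the authors' code: the networks are reduced to linear maps, a fixed target behavior stands in for food-finding, and the numbers are arbitrary, so it illustrates the contrast between inheriting a solution and inheriting a teaching signal rather than reproducing the paper's result.

import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (64, 2))              # toy sensory inputs
y = X @ np.array([[1.5], [-0.7]])            # toy "find food" behavior to match

def fitness(w):
    """Stand-in for food found: negative error of the behavior network."""
    return -np.mean((X @ w - y) ** 2)

def evolve(score, pop=30, gens=40, keep=6, noise=0.1):
    """Bare-bones genetic algorithm: the top genomes reproduce with mutation."""
    P = rng.normal(size=(pop, 2, 1))
    for _ in range(gens):
        scores = np.array([score(g) for g in P])
        elite = P[np.argsort(scores)[-keep:]]
        P = elite[rng.integers(keep, size=pop)] + rng.normal(0, noise, (pop, 2, 1))
    return max(score(g) for g in P)

# Regime 1: the solution itself is inherited (evolution installs "innate ideas").
direct = evolve(fitness)

# Regime 2: only a teaching network is inherited; the behavior network is a
# tabula rasa each lifetime and is trained toward the teacher's outputs.
def lifetime_fitness(v, lessons=50, lr=0.05):
    w = np.zeros((2, 1))                               # blank slate at birth
    for _ in range(lessons):
        w += lr * X.T @ (X @ v - X @ w) / len(X)       # delta rule toward the teacher
    return fitness(w)                                  # judged on the real task

taught = evolve(lifetime_fitness)
print(f"direct: {direct:.4f}  auto-teaching: {taught:.4f}")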
It strikes me as altogether probable, if not entirely undeniable, that tastes and emotions serve to guide the inductions of the tabula rasa toward specific ends, just as Nolfi & Parisi’s teaching nets guided the blank nets toward solving the problems of their existence. Tastes and emotions are fundamental; even at birth, they instruct us as to what is food and what can kill us. However, taking Nolfi & Parisi’s simulations at face value would mean that emotions come preset in specific connection configurations, which are a means of mental representation. If, as has been claimed here, all mental representations are ideas, then such a solution would lead to an as-of-yet unseen kind of rationalism (an emotional rationalism - how bizarre!). But there are other ways in which nature might have implemented the mechanism. It might have implemented it in the brain through something other than the patterns of connections; for example, emotions could result from the global effects of neurotransmitters (see, e.g., Williams et al. 2006, Hariri & Holmes 2006), instead of their specific transmission, as suggested by the fact that both selective serotonin reuptake inhibitors (SSRIs, like Prozac and Zoloft) and MDMA (street name: ecstasy; mechanism: triggers the massive release of available serotonin) affect mood significantly. Whereas with SSRIs emotion is attenuated, with MDMA the user feels pure love, a sense of empathy unmatched by any drug on the market. This hypothesis, however, is an open empirical question on which I take no stand.
For our purposes here, it might be enough to note that emotions have traditionally been included within the realm of sensations as inner sensations. As of yet, I’ve seen no evidence that even remotely challenges this ancient view. For all we know, evolution might have simply implemented a non-representational domain of sensation that serves to guide learning. Such a domain need not be innately represented in the brain because it may be induced from the body itself. This claim lies behind Schachter & Singer’s (1962) classic Attribution of Arousal Theory of Emotion, which holds that emotions are the product of the conjunction of a bodily state and an interpretation of the present environment. In fact, Antonio Damasio and his team have been hard at work attempting to figure out where basic emotions come from. In an admittedly preliminary finding (Rainville et al. 2006), they managed to reliably identify basic emotion types (e.g., fear, anger, sadness, and happiness) with patterns of cardiorespiratory activity. Similarly, Moratti & Keil (2005), working independently out of the University of Konstanz in Germany, found that cortical activation patterns coding for fear depend on specific heart rate patterns (see also, e.g., Van Diest et al. 2009). Should these findings pan out, they would indicate that emotions are a sensory modality. As a sensory modality, emotions permeate experience, which would explain why emotion recognition is widely distributed across the brain (Adolphs, Tranel, & Damasio 2003): emotions become intertwined in the establishment of ideas.
In the end, if emotions are sensations, they are not innate ideas. Ideas are formed from these sensations as a function of their being perceived, a process that could, in principle, account for fine-grained emotional distinctions (Damasio 1994). Be that as it may, it is clear that emotional experience lies at the base of all cognition, even reasoning, since, as a sensory modality, its mode permeates, directly or indirectly, all other processing, everywhere and always.
3. Corollaries & Implications
Contrary to what it may seem upon first inspection, there is an underlying feature shared by both rationalist classical cognitive architectures (Fodor & Pylyshyn 1988, Newell 1980, Chomsky 1966, Chomsky 1968-2005) and traditional empiricist cognitive architectures like John Locke's and David Hume's, namely that both suppose there is a domain of memory that constitutes a thorough and detailed model or record of states of (the body in the) world. This feature is part of a modern tendency, illustrated somewhat indirectly in the previous section, of overcrowding the mind with what it can get—and does get—for free from the body in the world. In classical architectures, this feature most prominently takes the form of sensory memory, a complete and detailed imprint of the world, only part of whose information travels to working memory for further processing. On the empiricist side, this feature takes on a more insipid form.
Think of Hume’s use of the word impression as opposed to, for example, sensation. Whereas the term sensation emphasizes both the senses and what is sensed, the term impression mostly accentuates what is imprinted, rendering perception mainly a passive receptor (a photocopier, if you will) upon which states in the world are imprinted. Also, and more importantly, the process of imprinting in Hume’s cognitive architecture does not stop with impressions, because ideas, given how he defined them, are nothing more than less lively copies of those imprints of states of the world (in the mind). Moreover, since these ideas record holistically (i.e., somewhat faded yet still complete), as opposed to Barsalou’s (1993, 1999) schematic perceptual symbols, the resulting view is a mind overcrowded with images, sounds, tastes, smells, emotions—full of all of the experiences that the body in the world ever imprints on the mind.
It is important to highlight the active character of perception by identifying perception with the real-time integration of fading sensations with lasting mental representations. Both sensory illusions and synesthesia are evidence of the active nature of perception because both phenomena illustrate the impact that ideas have upon sensations and the fact that what we perceive is not just an imprint of the world. In this respect, what must be emphasized is the character of neural networks as universal approximators of environmental properties (see How You Know What You Know for a review), which allows them to get their representational constraints for free, from the information being processed. Moreover, as these approximations become entrenched in the processing mechanism, they partially delineate the processing of incoming stimuli.
The resulting view is of a mind primarily full not of sensory impressions but of self-organizing approximations to the patterns implicit in such sensations, approximations that serve to anchor further representations through association. These self-organizing approximations aren't just the substrates of "higher-order" processes; higher-order reasoning carries their biases and their limitations, as well as their benefits, like speed and elasticity, as ongoing research on reasoning keeps finding. Human beings are not logical or rational animals. We can become more logical by learning logic, and more rational by learning argumentation and how to spot formal and informal fallacies when they are used (van Gelder 2005, 2002).
For centuries, the supposition that human thinking follows logical rules has permeated and biased explorations into our cognitive capacities. The view that we are endowed with innate ideas that underpin our thinking, that allow us to learn syntax and to think logically, has been the cornerstone of Rationalism in every epoch including our own. But this is a far-fetched fantasy. To paraphrase Bertrand Russell, logic doesn't teach you how to think, it teaches you how not to think.
Cognitive Science is gradually overcoming the rationalist bias that was set at the moment of the discipline's creation. The more evidence mounts, the more it becomes clear that mental processing follows the associative rules of the brain. With this realization, the computer metaphor (that mind is software to the brain's hardware) slowly but surely unravels.
Perhaps this is how dualism finally dies, not with a bang, but with a whimper.
2.7.15
How You Know What You Know
In a now classic paper, Blakemore and Cooper (1970) showed that if a newborn cat is deprived of experiences with horizontal lines (i.e., is raised in an environment without horizontal stripes), it will fail to develop neurons in visual areas that are sensitive to horizontal edges. If the cat is exposed to horizontal lines while the visual areas are still optimally plastic (before the effects of learning and entrenchment have set in), some neurons will quickly become selective to the feature, firing reliably whenever horizontal lines are part of the incoming sensation. These neurons are often referred to as 'feature detectors' even though the actual detection of the feature is always a network effect, that is, not the result of an isolated neuron firing, leading some to use the term tuned filters instead (see, e.g., Clark 1997).
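To make the tuned-filter idea concrete, here is a minimal sketch in Python of a single Hebbian unit reared in a vertical-only environment; it ends up responsive to vertical structure and nearly silent for horizontal bars. The learning rule (Oja's) and the mean-centering step, a crude stand-in for the contrast coding the retina performs, are my assumptions for illustration; this is a toy, not a model of cat cortex.

import numpy as np

rng = np.random.default_rng(0)
SIZE = 4

def bar(index, orientation):
    img = np.zeros((SIZE, SIZE))
    if orientation == "vertical":
        img[:, index] = 1.0
    else:
        img[index, :] = 1.0
    return img.ravel()

# A rearing environment deprived of horizontal lines.
environment = np.stack([bar(c, "vertical") for c in range(SIZE)])
mean = environment.mean(axis=0)

w = rng.normal(0, 0.1, SIZE * SIZE)
for _ in range(5000):
    x = environment[rng.integers(SIZE)] - mean
    y = w @ x                            # the unit's response
    w += 0.02 * y * (x - y * w)          # Oja's rule: Hebb plus decay

def response(stimulus):
    return abs(w @ (stimulus - mean))

print("vertical bars:  ", [round(response(bar(i, "vertical")), 2) for i in range(SIZE)])
print("horizontal bars:", [round(response(bar(i, "horizontal")), 2) for i in range(SIZE)])

After training, the unit behaves as a tuned filter: the selectivity is carried by the whole weight pattern rather than by any single connection, which is the sense in which detection is a network effect.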
It is well known that our ability to categorize depends on our experiences with the objects of such categorization; moreover, research keeps finding that this phenomenon has more than a mere neurological 'substrate', in that it permeates the very fabric of the brain (Abel et al. 1995, Sitnikova et al. 2006, Doursat & Petitot 2005). The study of this fact, however, proves very difficult for ethical reasons. Ideally, neuroscientists would experiment on children in order to see how, for example, sensory distinctions or, better yet, abstract concepts are acquired and represented. But the best means of accessing such precise data are ethically inconceivable.
To name one of the best methods currently on the market: in an ongoing project at Stanford University School of Medicine that aims to study the formation and entrenchment of sensory distinctions, Niell and Smith (2005) have been able to study the development, in real time, of whole populations of neurons and their connections, from the retina straight to brain regions known to process visual information. The method consists of immobilizing the growing subject and performing two-photon imaging of neurons loaded with a fluorescent calcium indicator while the experimenters control the stimulus, in order to better understand the electrochemical activity. Now remember, the aim of their efforts is to study the development of neural connectivity and sensory capacity, which means subjecting the organism to this method for extended periods of time. Obviously, you can't do this with children, so they are doing it with zebrafish, but the procedure promises to dazzle and to reveal a lot about how sensory distinctions become entrenched in neural networks as a result of experience.
For the first time, it is possible to see how populations of neurons respond selectively to certain types of features, such as movement direction or size, and to see to what extent, if any, there are innate representational constraints, such as the triggering of unlearned appearance concepts (Fodor 1998). As you probably guessed, current evidence seems to back the claim that there are no innate representations, that they are instead learned from experience. The following are six reasons to believe that there are no pieces of knowledge or ideas that are unlearned.
1. Universal Approximators
Most complex neural networks are Universal Approximators: they can approximate any continuous function in their environment given enough time (Hornik, Stinchcombe & White 1989; see also, e.g., Zhang, Stanley & Smith 2004, Elman et al. 1996).[1] The universal approximator description applies to 3-layer neural networks, and a fortiori to networks with a higher degree of complexity.
The human cerebral cortex is composed of innumerable overlapping 6-layer networks, and each neuron can have up to 10,000 connections (see Damasio 1994 for a leisurely review). Moreover, there are many 3-layer networks in subcortical structures, as well as unlayered networks, which consist of nuclei of neurons and can provide added plasticity to an already elastic arrangement.
Universal approximators make excellent blank slates. The interesting thing to notice in this respect is that human brains approximate the functions they do, and not others, because of the characteristics of their bodies and the way those bodies afford interaction with the world. In this way, the interaction between body and world constitutes the environment, a set of "time-varying stochastic function[s] over a space of input units" (Rumelhart 1989), which the brain must approximate.
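As a hedged illustration of this point, the following sketch trains a minimal 3-layer network (input, one hidden layer of tanh units, linear output) by plain gradient descent to approximate a continuous function. The target, sin, and every parameter choice here are mine, standing in for any "time-varying stochastic function over a space of input units".

import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-np.pi, np.pi, (256, 1))   # samples from the 'environment'
Y = np.sin(X)                              # the function to approximate

H = 16                                     # hidden units
W1, b1 = rng.normal(0, 1.0, (1, H)), np.zeros(H)
W2, b2 = rng.normal(0, 0.5, (H, 1)), np.zeros(1)
lr = 0.05

for _ in range(10000):
    hidden = np.tanh(X @ W1 + b1)          # layer 2: nonlinear features
    out = hidden @ W2 + b2                 # layer 3: linear readout
    err = out - Y                          # gradient of mean squared error
    gW2 = hidden.T @ err / len(X)
    gb2 = err.mean(axis=0)
    ghid = (err @ W2.T) * (1 - hidden ** 2)
    gW1 = X.T @ ghid / len(X)
    gb1 = ghid.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

for x in (-2.0, -0.5, 0.5, 2.0):
    net = float(np.tanh(np.array([[x]]) @ W1 + b1) @ W2 + b2)
    print(f"x={x:+.1f}  sin={np.sin(x):+.3f}  net={net:+.3f}")

With only 16 hidden units the fit is already close; in the limit of enough units, networks of this shape can approximate any continuous function on the interval, which is the content of the universal approximation theorem.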
You can learn anything, relatively quickly too. But, on the flip side, you are also likely to become what you surround yourself with. If you are surrounded by a bunch of idiots, well, sooner or later...
2. Neural Representations Mirror the World
Neural representation is symbolic, but not as arbitrary as linguistic symbolization. Since neural networks are sensitive to the analog aspects of environmental functions through inductive and associative means, the internal code mirrors real-world structure in many ways, linking to what is represented through learning processes that involve neural competition and that lead to self-organization and self-organizing maps (Kohonen & Hari 1999, Beatty 2001).
The structure of mental representations arises out of the structure of what is represented (Damasio et al. 2004, Dehaene et al. 2005, Elman 2004) and of what is done with what is therein represented (Goswami & Ziegler 2006, Churchland & Churchland 2002, P.S. Churchland 2002). Content is everything, and that information isn't linked through logic.
3. Go Ahead and Kill a Few Brain Cells: Neurogenesis
We all grew up being told that we ought not to drink because it kills brain cells, and brain cells don't come back. Well, they do... every day. You know what actually kills them? Other brain cells, because you didn't learn anything today. Yes, that's right: since new neurons consume energy and resources, other brain cells will kill them off if existing neural networks don't have to accept them (Corty & Freeman 2013).
Contrary to long-held scientific dogma, there is widespread neurogenesis throughout the lifespan (Taupin 2006, Zhao et al. 2003, Gould et al. 1999); that argument in favor of ingrained pieces of knowledge went bust in 1998. However, the survival of new neurons depends on their becoming integrated into existing networks (Tashiro et al. 2006, So et al. 2006), which in turn depends to some degree on the richness and variety of the perceived environment. Put another way, the more varied your life is, the stronger your brain will be.
Nevertheless, some networks are more entrenched than others because some processing domains are very rigidly articulated (e.g., sensory modalities, like vision, where a network’s expansion could come at the unthinkable cost of losing reliability). Other processing domains admit more flexibility and open-endedness, like language processing or memorizing your new favorite songs.
So what does kill brain cells? Living a monotonous life, like that of a homeowner who day by day goes through some mindless routine. In an ironic twist of fate, the person who was telling you not to kill your brain cells was probably killing far more brain cells than you were, by refusing to live beyond his or her routine (see, e.g., "Environmental enrichment promotes neurogenesis and changes the extracellular concentrations of glutamate and GABA in the hippocampus of aged rats" by Segovia et al. 2006).
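The survival rule lends itself to a toy simulation. The sketch below is a caricature under assumed numbers: the daily birth count and the recruitment probabilities are made up for illustration, not measured biology. The point is only the shape of the dynamic: new units persist when ongoing learning recruits them, and richer environments recruit more.

import random

def surviving_neurons(enrichment, days=365, born_per_day=700):
    """enrichment in [0, 1]: variety of the perceived environment.
    born_per_day is a stand-in figure, not a measured rate."""
    survivors = 0
    for _ in range(days):
        for _ in range(born_per_day):
            # The chance of being integrated into an existing network
            # grows with environmental variety (assumed relationship).
            if random.random() < 0.1 + 0.6 * enrichment:
                survivors += 1
            # Otherwise the new neuron is pruned by its neighbors.
    return survivors

random.seed(0)
print("monotonous routine:", surviving_neurons(enrichment=0.1))
print("enriched life:     ", surviving_neurons(enrichment=0.9))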
4. The Nature of Ideas
Neural representations are function- (i.e., action-) specific. Knowledge gained through one action that would be useful for a different, supplanting action does not transfer 'free of charge', so to speak (see, e.g., Thelen and Smith 1994). However, neural networks bootstrap one another toward the approximation of ever more complex functions, forming emergent properties, whereby associations between committed functional webs (called modules in scientific circles) lead to new functional webs that subsume the previous ones.[2]
As previous action representations are co-opted rather than supplanted, the new functional web inherits the representations of its constituent functional webs insofar as these representations become associated. This does not occur because information is transferred, but because the neural networks learn to behave in concert, in tandem, for your greater good.
For 60 years, good old-fashioned cognitive scientists have wanted to convince the world that we are born with some ideas (a.k.a. Classical Cognitive Architecture), based on the ideas of Kant, Descartes and Plato. Even now, the television blares commercials about how your genetics cause this psychological disorder or that one, so take a pill for that chemical imbalance; the assumption is that the chemical imbalance causes the psychological issue. They are wrong, and a new generation of cognitive scientists is just waiting for them to die out so that the next paradigm, dynamical systems, can take over, this time based on evidence instead of theoretical assumptions and wishful thinking. Though it has been an uphill battle, the dynamical systems perspective on mind is certainly taking over.
The chemical imbalance doesn't cause the psychological disorder; it is the psychological disorder. Your mind isn't some byproduct of your nervous system. Your mind is your nervous system; hence, it processes information in the same way. Mind and body are one.
5. Artificial Neural Networks that Organize Themselves: An Example
Superimposed artificial self-organizing networks with recurrent connections (Kohonen 2006), together with newly developed genetic algorithms that permit a neuron to grow, shrink, rotate, and reproduce or absorb another neuron (Ohtani et al. 2000), are bringing about an artificial medium capable of transparently exploring many computational issues that cannot be studied as precisely with biological brains.
These models do away with innate representations altogether. You can't "program" information into them; you literally have to raise them by giving them an environment suited to what you actually want them to learn and do.
The following is an example developed at the Helsinki University of Technology. An artificial neural network called a Self-Organizing Map was trained by feeding it 39 types of quality-of-life measurements, like access to and quality of education and healthcare, and nutrition, among many others. All of the data used was provided by the World Bank. The following image is the map produced by the network.
For the benefit of our understanding, this very same map was then depicted as the world map below. My guess is that it won't take you long to figure out which colors represent a higher degree of poverty if you compare the image above to the one below.
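For readers who want to poke at the mechanism, here is a minimal self-organizing map in Python. It is a sketch under stated assumptions: random vectors stand in for the 39 World Bank indicators (which are not reproduced here), and the grid size, decay schedules, and iteration count are illustrative choices of mine, not details of the Helsinki project.

import numpy as np

rng = np.random.default_rng(42)
GRID, DIM = 10, 39                       # 10x10 map; 39 indicators
data = rng.random((200, DIM))            # placeholder 'country' profiles
W = rng.random((GRID, GRID, DIM))        # one weight vector per map node

# Grid coordinates, used by the neighborhood function below.
ii, jj = np.meshgrid(np.arange(GRID), np.arange(GRID), indexing="ij")

STEPS = 2000
for t in range(STEPS):
    x = data[rng.integers(len(data))]
    # Best-matching unit: the node whose weights lie closest to x.
    dist = np.linalg.norm(W - x, axis=2)
    bi, bj = np.unravel_index(np.argmin(dist), dist.shape)
    # Learning rate and neighborhood radius shrink over time.
    lr = 0.5 * (1 - t / STEPS)
    sigma = 3.0 * (1 - t / STEPS) + 0.5
    g = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
    # The winner and its neighbors all move toward x: self-organization.
    W += lr * g[:, :, None] * (x - W)

# After training, nearby nodes hold similar profiles; with real
# indicator data, countries with similar quality-of-life patterns
# would land in neighboring regions of the map.

Notice that nothing resembling a representation is programmed in: the map's layout emerges entirely from competition and neighborhood cooperation over the data it is 'raised' on.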
6. How You Know What You Know
The moral of the story seems to be that neural networks have more plasticity than plastic. For example, if the visual cortex is damaged at birth, the large-, medium- and small-scale characteristics of the functional organization of normal visual cortices appear in the auditory cortex of the damaged brain, as other functional webs specialize in the functions typically located in the visual cortex (Sharma, Angelucci & Sur 2000, Roe et al. 1990). The same effect can be replicated by surgically redirecting the optic nerves, which suggests that there is nothing special about the networks of the visual cortex, or of any piece of cortex at all. So please, I beg of you, stop believing the hype that everything is genetic.
Findings like these led to the Neuronal Empiricism Hypothesis (Beatty 2001), which states that the whole of the cerebral cortex is just one large yet segmented, unsupervised, knowledge-seeking, self-organizing neural network. But neural empiricism is characteristic not just of the 6-layer networks of the cerebral cortex; though these add vast computational power to the brain, neural organization as a function of experience is the rule rather than the exception even in 'lower' structures. Krishnan et al. (2005), for example, show that language experience influences sensitivity to pitch in populations of neurons in the brainstem. It's not only the 6-layer networks that do without innate representations; embodied neural networks get their representational constraints for free, from the body in the world.
Early on in our development, sensations establish faint yet lasting symbols in the mind, what Barsalou (1993) calls perceptual symbols. John Locke and David Hume called them simply ideas. Today, in scientific circles, they are commonly referred to as mental representations.
There is an important difference, however, between Barsalou's account and Hume's, namely that in the latter ideas are construed as less lively yet still complete copies of sensory impressions, while in the former the copies are only schematic. This difference merits highlighting, as it is easily overlooked, specifically because it concerns a not-often-observed difference between primarily inductive and primarily associative learning.
Given that Hume construed the emergence of ideas as he did, his tabula is a warehouse of countless sequences of images, smells, tastes, textures, and emotions; in short, of all the objects the mind has ever had sensations of. In contrast, because schematic records are by definition abstractions (associations between properties, if you will), Barsalou's cognitive architecture may be construed as furnishing the mind firstly with analog approximations of continuous and co-occurring properties that have been experienced, approximations that can later be used, through association, to fill in the blanks of particular schematic representations.
This is why murder trials in the United States can no longer turn on a single eyewitness testimony. It is also why, when a person is placed in front of a police lineup to identify a perpetrator, the police must by law say something to the effect of "remember, the person that you are trying to identify may not be in the lineup". If the police do not say that, any identification becomes inadmissible in a court of law. Why? Because your memories are reconstructions through and through, and by omitting the warning the police risk inducing the witness to create a false memory.
Differently stated, while it follows from Hume's theory that baby and toddler minds record the totality of experienced events (a view later followed by Sigmund Freud), only to abstract or induce (and later associate) recurrent properties from the set, contemporary findings indicate that, first and foremost, minds approximate the properties themselves, such as shapes and colors. Only later are these approximations employed in the formation of memories of concrete sequences of images, sounds, tastes, textures, smells, and combinations thereof.
Recall is surprisingly reconstructive. The resulting view is of a mind populated not by countless sensory impressions but by self-organizing approximations of sensed properties.
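As a hedged illustration of reconstructive recall, the sketch below uses a Hopfield-style associative network (my choice of model; the post commits to no particular one) to fill in the blanks of a degraded cue. Recall here is not the replay of a recording: the network settles into the stored pattern most consistent with the fragment it is given.

import numpy as np

# Two stored 'experiences', coded as +1/-1 patterns.
patterns = np.array([
    [1, -1, 1, -1, 1, -1, 1, -1],
    [1, 1, 1, 1, -1, -1, -1, -1],
])
n = patterns.shape[1]

# Hebbian storage: units that fire together wire together.
W = np.zeros((n, n))
for p in patterns:
    W += np.outer(p, p)
np.fill_diagonal(W, 0)

# A degraded cue: the first pattern with half its entries missing.
cue = np.array([1, -1, 1, -1, 0, 0, 0, 0])

state = cue.copy()
for _ in range(5):                      # settle into a stable state
    state = np.sign(W @ state)
    state[state == 0] = 1               # break ties consistently

print("cue:      ", cue)
print("recalled: ", state)              # reconstructs the stored pattern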
During the first months of life, uncommitted neural networks in the infant’s brain approximate in an associative manner the functions of color, form, movement, depth, texture, temperature, pitch, among many, many others. In so doing, the brain develops its own personal neural code, a code that is conjunctively contoured by the processing mechanism, by the individual’s experience, and by the characteristics of the input domains. As these approximations — these ideas — are established, they mediate the processing of incoming sensations. Perception emerges as the real-time process of mediation, as the integration of fading sensations with enduring mental representations.
You know what you know because you are one big ball of perception flowing through your own web of ideas. And you act how you act because you've been conditioned to smithereens.
Open your eyes and do things differently. Go live outside your routine. You'll be happier and healthier as a result.
[1] It is sometimes claimed that 3-layer, feedforward neural networks are not true universal approximators, as they can only approximate problem domains that have graded structure. While it is an open question whether more complex networks (e.g., 6-layer recurrent networks) are able to reliably approximate non-graded problem domains, it should be recognized that the immense majority of problem domains have graded structure, as practically all natural variables are graded. The domain of morphology is a prime example: generative linguistics traditionally applied a rule-governed (non-graded) approach to this domain, yet current evidence indicates that the morphological domain has gradient structure (Hay & Baayen 2005), and thus can be reliably approximated by 3-layer networks.
[2] Properties that result from a network's functioning as a whole, i.e., that do not result from the activation of a single neuron, are known as emergent properties. Neural codes are network-specific and emerge from the interaction of the implicated neurons responding to the body in the world.