Showing posts with label cognition. Show all posts

18.11.18

Preliminary report of closed, on-site polling




Poll #1

Duration: ±3 months

Sample size: a surprisingly large number of users; expanded as results showed promise.


Preliminary disclosure:

Users who scored elevated on Scale 7 [Psychasthenia] of the MMPI-2 AND had a standard blood test at hand overwhelmingly reported higher white blood cell counts than red blood cell counts. As to whether one or both of these numbers fell in the abnormal range, the results were mixed, with the sample size proving too small to achieve the statistical significance necessary to back any correlation or, conversely, to back the null hypothesis against any specific combination.
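For readers curious what the sort of significance test at issue might look like, here is a minimal sketch of Pearson's chi-square test of independence on a 2x2 table. The counts below are invented for illustration only; they are not the poll's actual tallies.

```python
# Hypothetical 2x2 contingency table (NOT the poll's real numbers):
# rows = elevated Scale 7 (yes / no),
# columns = abnormal blood count (yes / no).
observed = [[15, 10],
            [11, 14]]

rows = [sum(r) for r in observed]
cols = [sum(c) for c in zip(*observed)]
n = sum(rows)

# Pearson chi-square statistic: sum of (O - E)^2 / E over all cells,
# where E = row_total * col_total / n is the count expected under independence.
chi2 = sum((observed[i][j] - rows[i] * cols[j] / n) ** 2
           / (rows[i] * cols[j] / n)
           for i in range(2) for j in range(2))

print(round(chi2, 2))  # 1.28 -- below the 3.84 critical value (df=1, alpha=.05),
                       # so a sample like this backs neither the correlation nor the null.
```

With larger combined samples, the same statistic either clears the critical value or starts to lend real weight to the null.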

A follow-up poll will open and remain open over a longer period, and its numbers will be combined with those already obtained. MMPI-2 test takers who meet the criteria stated above are strongly encouraged to participate: enough data will likely be collected to offer strong evidence regarding a key skewed dynamic in the human psychoneuroimmunological system, which has recently been found to exhibit bidirectional communication between neurons and cells of the immune system.




Poll #2


Duration: ±10 months

Sample size: 500+ users


Results:

Users reviewed our on-site search engine, Cog, a custom Google search engine modified to filter out most of the noise on the Internet, with the purpose of landing users directly on useful results or on the primary sources corresponding to their query. Cog received over 75% favorable reviews, with fewer than 20% of users reporting either that they weren't able to find the primary sources they sought or that they experienced some sort of processing or coding bug while using the on-site CSE. Positive percentages this high are extremely rare in anonymous, online attitude polling. Needless to say, I am very happy that I was able to provide you all with a useful tool, one that is becoming increasingly important as the Google search algorithms get gamed to the point that reliable information is no longer readily accessible. Since voters largely approved the design of the tool, I will be expanding and tweaking it over the medium term.




28.11.15

Mortals of Habit: Dying by - and the Death of - the 40 Hour Workweek


Creatures of habit that we are, we seldom revamp things that have been around since before we were born. An example is the 8-hour workday. Since when has it been around? The eight-hour movement, or 40-hour movement, has its origins in the workers' struggles of the Industrial Revolution. Karl Marx saw its vital importance to the workers' health, saying in Das Kapital: "By extending the working day, therefore, capitalist production [...] not only produces a deterioration of human labour power by robbing it of its normal moral and physical conditions of development and activity, but also produces the premature exhaustion and death of this labour power itself." Studies have shown that the 8-hour day is not ultimately the most productive arrangement. As Evan Robinson wrote: "working over 21 hours continuously is equivalent to being legally drunk. Longer periods of continuous work drastically reduce cognitive function and increase the chance of catastrophic error."



When these findings come up in HR circles, people will agree on the data and its conclusions, but do HR departments plan on executing a less conventional plan that makes the day more productive? No. Sweden, in an attempt at reform and at finding work-life balance, just switched to a 6-hour workday (or see here, or better here for an explanation).

Conventionality has been the killer of many great ideas. The great unknown has been terrifying traditional types for eons. As I write this, I am standing up. I'd never considered being able to stand up and write. Not because I thought it wasn't possible; it just didn't cross my mind. The fact that I have access to that option makes me want to try it. If you are expecting me to say that it is better than sitting down, I am sorry to disappoint. It is not; it is simply different. But different is what you need sometimes, and sometimes you just need your good ol' chair.

The point I am trying to make is that there are so many ways to go through our day. There are countless alternatives for structuring our day, our time, and our life. So many people say they are looking for adventure or, when asked what they seek in a significant other, answer "someone adventurous." If that were really true, why wouldn't they take a different way home? That's adventurous, even if on a smaller scale than what they may have envisioned. Instead of going home after work, take a scenic route or improvise a road trip to the next state over. We don't do any of these. Well, I try not to. It upsets my family, and I don't want them to seriously consider placing a tracking chip on me, even though that would spare me from texting them my whereabouts while driving, thereby making the road a much safer place because I would not be swerving and trying to be grammatically correct at the same time. Now I am going off on a tangent...

I am just tired of discussing, and of having studies show us, how we can improve our day, yet seeing nothing revolutionary happen. The graph below shows the relationship between productivity and annual hours worked.



A paper by John Pencavel suggests that reducing working hours improves productivity.

Don't get me wrong. I think it is great that IKEA sells standing desks. That's something. But I am still holding out for the 4-hour workday. Or, perhaps, we may see a Basic Living Wage implemented in our lifetimes, with recent pieces such as this one by Scott Santens making the point forcefully based on new research by the IMF and OECD. Perhaps the time has come for some real change that redefines the position of labor in our societies in a way that strikes a better work-life balance. I can hope. Can't I?

16.10.15

How do human minds work?: The Cognitive Revolution and Paradigm Change in Cognitive Science


During the first half of the 20th century, empiricism permeated most fields related to the study of human minds, particularly epistemology and the social sciences. The pendulum swung toward empiricism at the end of the 19th century in reaction to the introspective and speculative methods that had become the standard in disciplines like psychology, psychophysics and philosophy. Based on technical advances mostly achieved in Russia and the United States, behaviorism took form, threatening to absorb philosophy of language and linguistics (e.g., respectively, Quine 1960, and Skinner 1948, 1957).  In reaction to that movement, Cognitive Science emerged as an alternative for those discontent with the reigning versions of empiricism, that is, as a rationalist alternative.

After Chomsky (1959) pounced upon Skinner's Verbal Behavior, he reasserted his victory as a vindication of rationalism in the face of "a futile tendency in modern speculation," stating that he did not "see any way in which his proposals can be substantially improved within the general framework of behaviorist or neobehaviorist, or, more generally, empiricist ideas that has dominated much of modern linguistics, psychology, and philosophy" (Chomsky 1967). Noam Chomsky's assault, backed by the research program offered alongside it (Chomsky 1957), would be followed by twenty-five years of an almost completely uncontested rationalist consensus. Thus, the Cognitive Revolution is best understood as a rationalist revolution.

Researchers in the newly delineated interdisciplinary field coincided in arguing that the mind employs syntactic processes over amodal (i.e., context-independent) structured symbols, some of which must be innate. The computer metaphor guided the formulation of models, whereby mind is to nervous system what software is to hardware. Conceived as a new scientific epistemology, Cognitive Science built bridges across separate disciplines.
Though each field has its own terminology, dissimilar from the others and potentially straining effective communication, academics could converge on the view that thought, reasoning, decision-making, and problem-solving are logical, syntactic, serial processes over structured symbols. As such, it may be suggested that the rationalist framework greatly facilitated the gestation and institutional validation of Cognitive Science as an academic domain in its own right. Human minds could be thought of as Turing machines (Turing 1936), perhaps similar to a von Neumann architecture (von Neumann 1945), that obey George Boole's (1854) Laws of Thought, and this computational foundation worked equally well for generative linguists, cognitive psychologists, neuroscientists, computer programmers focused on artificial intelligence, and analytic philosophers fixated on the propositional calculus of inference and human reason. Consequently, most textbooks on cognition contain a few diagrams like the one below.


Models that abide by the aforementioned rationalist premises are known as classicalist or as having a Classical Cognitive Architecture (Fodor and Pylyshyn 1988). It wasn’t until the mid-80s, with the resurgence of modeling via artificial neural networks, that the rationalist hegemony began to crack at the edges, as increasing emphasis was placed on learning algorithms based on association, induction, and statistical mechanisms that for the most part attempted to do away with innate representations altogether.  This resurgence threw Cognitive Science into what Bechtel, Abrahamsen & Graham (1998) called an identity crisis, which they date from 1985 until the time of that publication.  Almost two decades later, the identity crisis remains unresolved, as this new approach has been met with fierce resistance, displaying the unnerving, painstakingly slow characteristics of a Kuhnian paradigm shift (Kuhn 1962).



In Hume Variations (2003), Jerry Fodor, the most prominent and radical rationalist philosopher of Cognitive Science alive today, rescued the Cartesian in Hume along with his naïve Faculty Psychology at the cost of sacrificing his associationist view of learning. Fodor did this, of course, because that maneuver would render Hume a rationalist, and because Cartesian linguistics and reason are central to the inaugural program of Cognitive Science, a framework that Fodor helped construct from the very beginning. Chomsky's (1966) Cartesian Linguistics traces many of the developments of his own linguistic theory, including the key distinction between surface structure and deep structure, to the Port-Royal Grammar published by Arnauld and Lancelot in 1660. The Port-Royal Grammar and the Port-Royal Logic (Arnauld and Nicole 1662) were both heavily influenced by the work of René Descartes. However, the evidence is quickly mounting in a way that suggests that the maneuver needed is the opposite of Fodor's: to rescue the associationist theory of learning while discarding the Cartesian aspects and the folk Faculty Psychology present in Hume's philosophy of mind.

A brief comparison between the prototypical rationalist and empiricist stances is provided in the following table.



Of these positions, the rationalist / empiricist distinction in philosophy of mind rests squarely on the issue of representational nativism. The other facets (listed in mind, processes, and representations above) seem to follow from what would be needed, wanted or expected of a cognitive architecture if there were either some or no innate ideas.

That there are no innate ideas is the core tenet of empiricist philosophy of mind. Hume believed that the mind was made up of faculties, a modular association of distinct associative engines, but he left open the question of whether the faculties arise out of experience (or ‘custom’) or are innately specified (and to what extent). There are two main reasons that suggest the former option to be the case.  First, uncommitted neural networks approximate functions, both of the body and of the world, paving the way for functional organization through processes of neural auto-organization. Second, committed neural networks bootstrap one another towards the approximation of more complicated functions; as this occurs, the domain-general processes of neurons give way to domain-specific functional organizations. However, though the representations that constitute these domain-specific processes can become increasingly applicable to variable contexts, these do not become wholly amodal, that is, context-independent, because domain-specific functions are anchored in domain-general associative processes that are inherently context-dependent or modal. (See How You Know What You Know for a review of scientific research that supports the two aforementioned reasons.)

Having said this, it must be noted that neither rationalism nor empiricism actually constitutes a theory of anything at all; their core is only one hypothesis – either there are some innate ideas or there are none. There is, however, a third possibility: that ideas do not exist, at least not in minds, making the rationalist/empiricist debate obsolete (cf., Brooks 1991). This third option notwithstanding, even though neither empiricism nor rationalism is actually a theory of mind, it is possible to build one in the spirit of their corresponding proposition. That is what Locke, Berkeley and Hume did; it is also what Noam Chomsky did, and what Lawrence Barsalou is doing now (whose research program is stated in Barsalou 1999).

Be that as it may, the rationalist consensus that dominated Cognitive Science's first thirty years cannot be explained by mere technological or technical factors. While someone could argue that connectionism did not appear until the mid-80s because neural networks could not be artificially implemented, this claim would be historically unfounded. Bechtel, Abrahamsen & Graham (1998) pinpoint September 11, 1956 as the date of birth of Cognitive Science. Though one may be reluctant to accept such a specific date, it is clear that the inter-disciplinary field emerged around then, plus or minus a few years. However, already in 1943, McCulloch and Pitts proposed an abstract model of neurons and showed how any logical function could be represented in networks of these simple units of computation. By 1956, several research teams had tried their hand at implementing neural networks on digital computers (see, e.g., the project of Rochester, Holland, Haibt & Duda 1956 at IBM).  By the early 60's, not only had the idea been explored, Rosenblatt (1962) had even tried building artificial neural networks as actual machines, using photovoltaic cells, instead of just simulating these on digital computers.
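McCulloch and Pitts' result can be made concrete in a few lines of modern code (a sketch in Python, obviously not their original notation): a single threshold unit with fixed weights computes basic logical functions, and networks of such units compose into any Boolean function.

```python
# A McCulloch-Pitts unit fires (outputs 1) exactly when the weighted sum
# of its binary inputs reaches its threshold.
def mp_unit(inputs, weights, threshold):
    return int(sum(i * w for i, w in zip(inputs, weights)) >= threshold)

AND = lambda x, y: mp_unit([x, y], [1, 1], 2)
OR  = lambda x, y: mp_unit([x, y], [1, 1], 1)
NOT = lambda x:    mp_unit([x],    [-1],   0)

# Composition: XOR built out of the units above, i.e. a two-layer network.
XOR = lambda x, y: AND(OR(x, y), NOT(AND(x, y)))

print([XOR(x, y) for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```

The weights here are fixed by hand; what Rosenblatt and later connectionists added was precisely the learning algorithms that find such weights from data.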

When Cognitive Science emerged, the technological tools existed so that research could have gone the rationalist’s or the empiricist’s way, or at least remained neutral on the matter; however, as the Cognitive Revolution is best understood as a rationalist revolution, nativism was hailed, construction began on a Universal Grammar (a project that failed miserably, by the way), decision-making processes were construed as syntactic manipulations on explicit symbol structures (Newell, Shaw, and Simon 1959, Anderson 1982), and neural networks were taken as simple instruments of pattern recognition that could serve to augment a classical cognitive architecture or, at most, to implement what would ultimately be a rationalist story. Fodor & Pylyshyn (1988) were surprisingly blunt on this last point by stating that the issue of connectionism constituting a model of cognition “is a matter that was substantially put to rest about thirty years ago” when the Cognitive Revolution took place. It took thirty years of work for frustration to set in with rationalist approaches; only then would connectionism reappear, augmented by the tools of dynamical systems theory, as a viable alternative to the rationalist or classicalist conception of cognition.


Paradigm Change in Artificial Intelligence


The term ‘connectionist’ was introduced by Donald Hebb (1949) and revived by Feldman (1981) to refer to a class of neural networks that compute through their connection weights. Thousands of connectionist nets, similar to some degree or another to the schematic below, have been created since the 1950s. The wide variety of artificial neural networks is due not only to the function each has been created (and raised) to carry out, which constrains the type of inputs and outputs to which the system has access, but also to their specific architecture—the number of neurons each layer contains, the kind of connections these exhibit, the number of layers, and the class of learning algorithm that calibrates its connection weights.
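To illustrate what "computing through connection weights" and "a learning algorithm that calibrates them" amount to, here is a minimal sketch (a toy, not any specific published model): a one-layer network trained with the classic perceptron rule to learn logical OR.

```python
import random

# A minimal one-layer network: output = step(w . x + b). The perceptron
# learning rule calibrates the connection weights from examples.
random.seed(0)
w = [random.uniform(-1, 1) for _ in range(2)]   # random initial weights
b = 0.0

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # the OR function
for _ in range(20):                              # a few passes over the data
    for x, target in data:
        error = target - predict(x)              # nudge weights only on mistakes
        w = [wi + 0.1 * error * xi for wi, xi in zip(w, x)]
        b += 0.1 * error

print([predict(x) for x, _ in data])             # [0, 1, 1, 1]
```

Everything the trained net "knows" about OR is stored in `w` and `b`; change the architecture, the data, or the learning rule and you get the zoo of variants described above.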


A clear and very simple example of a connectionist net (seen below) was developed by McClelland and Rumelhart (1981) for word recognition. The 3-layer network proceeded from the visual features of letters to the recognition of words through localist representations of letters in the hidden layer (for a richer discussion, see McClelland 1989). Given its function and the use of localist representations, both the mode of presentation of the input and the mode of generation of the output was constrained by the features of written language, which in turn delineated the network’s design.
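In the same localist spirit, and vastly simplified relative to the actual interactive activation model (which also included inhibitory and top-down connections), a toy feature-to-letter-to-word forward pass might look like the sketch below; the feature coding and the tiny lexicon are invented for illustration.

```python
# Toy localist word recognizer: visual features activate letter units,
# letter units activate word units; the most active word unit wins.
# Feature sets and lexicon are made up for this example.
FEATURES = {"A": {"diag", "bar"}, "T": {"vert", "top"}, "N": {"vert", "diag"}}
LEXICON = ["AT", "AN", "TA"]

def letter_activation(observed, letter):
    f = FEATURES[letter]
    return len(observed & f) / len(f)        # fraction of the letter's features seen

def recognize(observed_per_position):
    scores = {word: sum(letter_activation(obs, ch)
                        for obs, ch in zip(observed_per_position, word))
              for word in LEXICON}
    return max(scores, key=scores.get)       # localist word unit with highest activation

# Position 1 shows the features of an A; position 2 shows those of a T.
print(recognize([{"diag", "bar"}, {"vert", "top"}]))  # AT
```

Because each letter and word is a dedicated unit, the representations are localist rather than distributed, exactly the design choice the 1981 model made.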


Borrowed from the Empirical Philosophy of Science Project at the Natural Computation Lab of the University of California, San Diego, the graph below illustrates the transition from the classicalist paradigm to the connectionist one by plotting the frequency of appearance (by year) of the terms ‘expert system’ and ‘neural network’ in peer-reviewed academic journals of Cognitive Science. It can be clearly seen that interest in neural networks supplanted the 1980s craze for expert systems.


For those unfamiliar with the matter, an expert system is a decision-making program that is supposed to mimic the inferences of an expert in a given field. Basically, the shell of the program is an inference engine that works logically and syntactically, and this engine must be given a knowledge base: a finite set of "If X, then Y" rules, the sum of which ought to allow it to perform its target function correctly most of the time. Typically, an expert system asks you questions or asks you to input specific data, and, using those inputs, the inference engine goes through its knowledge base to provide you an answer. Expert systems may be created for purposes of prediction, planning, monitoring, debugging, and perhaps most prominently diagnosis, among several other possible purposes. WebMD's symptom checker, which you may have used once or twice, is perhaps the most well-known example: you click on the symptoms you have, its inference engine passes your data through its knowledge base, and it provides you with a list of all the illnesses you may be suffering from. If you have used that symptom checker more than twice in your life, you probably know how inaccurate it tends to be, even to the point of being ludicrous at times. In stark contrast, many artificial neural networks have been created for detecting all sorts of cancers and can do so with 99% accuracy, that is, better than almost any doctor, like this one for breast cancer created by a girl during her junior year of high school. This is just one of countless domains where empiricist approaches vastly outperform their rationalist counterparts.
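The inference-engine-plus-knowledge-base structure can be sketched in a few lines: a forward-chaining loop over "If X, then Y" rules. The rules and facts below are invented toy examples, not any real medical knowledge base.

```python
# A toy expert system: the knowledge base is a list of
# (conditions, conclusion) rules, and the inference engine fires rules
# until no rule adds a new fact (forward chaining).
RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "see_doctor"),
]

def forward_chain(facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)        # rule fires: add its conclusion
                changed = True
    return facts

print(forward_chain({"fever", "cough", "short_of_breath"}))
```

Note that the engine is purely syntactic: it matches symbols against symbols, and everything it can ever conclude was hand-written into the rules, which is exactly the contrast with a trained network.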

As a funny digression, I once had to make an expert system for a graduate class and built a program that would ask you 16 socioeconomic and political questions, from which it would diagnose your preferred political philosophy (e.g., anarchism, liberalism, republicanism, communism, constitutional monarchism, fascism, and so on). My artificial intelligence professor took it with him to the School of Engineering to test it out on his students, and when I saw him again, he commented that he was impressed by how accurate it was. It was definitely more accurate than WebMD but, then again, medical diagnosis is a far more complicated knowledge domain with many more possible outputs, so that is an unfair comparison. On an unrelated but also funny note, my other artificial intelligence professor told the story of how he had lost faith in artificial neural networks in grad school when he created a system that would either approve or reject a bank loan application. He would input the demographic and personal income data as well as the loan information, and the network would respond with a simple Approve or Reject. But he created the network with a twist: he deliberately trained it with a racist data set in such a way that the network wouldn't give out any prime loans to anyone who wasn't white. He wanted to see if the network would ever learn the error of its ways or at least acknowledge its racism, but it never did, and he said that at that moment he lost all faith in connectionist networks. When he finished telling the story, I immediately raised my hand and said: "You do realize that that is exactly what happens with many bankers in real life, right? Your network didn't fail; it behaved like a human would."


Reframing Cognitive Science


The seeds of empiricism have been sprouting almost everywhere. The last thirty years have seen an ever-increasing portion of scientific research dedicated, even if reluctantly, to proving some of the central tenets of empiricist theory of mind or attempting to articulate mechanisms to augment it.

In artificial intelligence, connectionist architectures emerged in the 80's as a clear and feasible alternative to symbolic approaches (a.k.a. good old-fashioned artificial intelligence or GOFAI; Haugeland 1985, Dreyfus 1992). The tools of dynamical systems theory, widely used in the field of physics, bolstered connectionism to provide a robust account of a system's ontogenetic evolution through time (van Gelder 1999). Connectionism provided what behaviorism lacked: powerful learning mechanisms that could account not only for how intelligent agents derive knowledge from experience but also for how we can surpass that limited amount of information to conceive an unlimited number of possibilities; furthermore, the tools of dynamical systems theory opened the possibility of seeing what goes on inside the ‘black box’, while also helping psychology get in sync with physics and neurology. In this sense, connectionism ought not to be confused with behaviorism, because neural network architectures permit an agent to surpass the limited stimulus-response patterns that it encounters (Lewis and Elman 2001, Elman 1998).
It should be noted, however, that connectionist computation is not synonymous with empiricism; it is, in fact, entirely compatible with rationalist postulates, as exemplified by Optimality Theory (Prince & Smolensky 1997), an attempt to implement universal grammar via a connectionist architecture. Nevertheless, this compatibility is a token truism that goes both ways: artificial neural networks and Turing machines exhibit equivalent computational power inasmuch as either can implement any computable function, which is why most people simulate neural networks on common personal computers. (Currently, the best open-source, free software for creating your own neural network with relative ease is Emergent, a program hosted by the University of Colorado that runs on Windows, Macintosh OS's, and Linux-Ubuntu, and can be downloaded here.) Looking beyond this universal computational compatibility, connectionism clearly opens the door to empiricism, and the vast majority of connectionist models do away with rationalist tenets and clearly partake of the long-standing empiricist tradition, even if many of their authors aren't willing to admit this publicly because of the entrenched stigma branded into that philosophical label.

In linguistics, a clear alternative to generativism surfaced during the 1980s in the form of Cognitive Linguistics (Langacker 1987, Lakoff 1987). Though cognitive linguistics is not wholeheartedly committed to an empiricist theory of mind, its rejection of the fundamental tenets of generativism is in itself a retreat from the rationalist consensus that stood almost uncontested. Specifically, its rejection of an autonomous, modular universal grammar and its grounding of linguistic abilities in domain-general learning and associative mechanisms represent a big leap towards empiricism. Moreover, as linguistics increasingly meshes with psychology and connectionism, slowly but surely an associationist flavor that had long been wiped out by Chomsky and his followers returns to the field. In consequence, much work in linguistics is being fruitfully redirected from devising categorical acquisition schemes toward testing statistical learning algorithms for the acquisition of syntax as well as for syntax's prehistoric origins (e.g., Hazlehurst and Hutchins 1998, Hutchins and Hazlehurst 1995) and also for how grammar changes throughout history (see, e.g., Hare and Elman 1995).

In psychology, many connectionist-friendly accounts have been offered. Perhaps the most ambitious is Barsalou’s (1999) perceptual symbol systems, an account that takes a firm empiricist stance in the face of rationalist psychology by dissolving the distinction between perception and conception. Moreover, the perceptual symbol systems approach has recently been applied, though not without difficulties, to theory of discourse (Zwaan 2004) and to theory of concepts (Prinz 2002). Still, this is not the only empiricist current in psychology, as the domain of psycholinguistics has been propelled mostly by psychologists, like Elizabeth Bates and Brian MacWhinney, and has led to findings and models that are very compatible with the tenets of empiricism (see, e.g., Thelen and Bates 2003, Tomasello 2006, Goldberg 2004, MacWhinney 2013). Not to mention that many of the early proponents of the parallel distributed processing (or PDP) approach to Cognitive Science, like Rumelhart and McClelland, were psychologists by profession.

Empiricist cognitive architecture has gained a voice in every discipline in the cognitive sciences. The increasing acceptance of empiricism is leading not only to the testing of a rapidly-growing number of so-inspired hypotheses but also to a vast reinterpretation of earlier findings in light of radically different postulates. What has been taking place is clearly a Kuhnian paradigm shift. Hence, an exorbitant amount is still to be done. For starters, oddly enough several empiricist researchers are not convinced that their standing agendas are in fact empiricist, that is, that replacing ‘empiricist’ with ‘interactionist’ or with ‘emergentist’ does not black out the ‘empiricist’.

Consider, for example, the book Rethinking Innateness: A Connectionist Perspective on Development (Elman et al. 1996). After a thorough and outstanding assault on rationalism and defense of empiricism, the group goes on to assert “We are not empiricists” (p. 357). Like many other fearful academics, they view the label ‘empiricist’ as a stigma, not unlike having to bear the Scarlet Letter. It is about time that this stigma be removed, and in that spirit I offer a few clarifications. First, regardless of what Chomsky and Fodor would like us to believe, behaviorism and empiricism are not synonymous, as most versions of connectionism clearly illustrate. Even the simplest neural learning algorithms, such as error backpropagation, offer what behaviorism could not: statistical means that can carry cognition from learning through finite data to understanding an infinite number of possibilities. Second, consider the following excerpt—

"We are neither behaviorists nor radical empiricists. We have tried to point out throughout this volume not only that the tabula rasa approach is doomed to failure, but that in reality, all connectionist models have prior constraints of one sort or another. What we reject is representational nativism." (Elman et al. 1996 1996, p. 365)

In Rethinking Innateness, the authors distinguish between three kinds of possible innate constraints: representational, architectural, and chronotopic (timing). A prime example of an architectural constraint is the characteristic 6-layer structure of the human neocortex; for chronotopic constraints, think of embryonic cell migrations. As stated above, the group grants a wealth of innate architectural and chronotopic constraints but rejects representational constraints. It is the wealth of mechanisms that can go into delineating what kind of tabula the mind is that leads them to suggest that interactionism entails that empiricism is false. But empiricists have never shunned innateness altogether. The empiricist-rationalist distinction rests squarely on the issue of innate mental representations.

Advancing a strong view of architectural and chronotopic constraints does not move one away from the notion of a tabula rasa. The interaction of the many constraints with the world shapes the tabula—no sane empiricist would ever deny this!—but that does not render the tabula un-rasa; it just delineates what kind of tabula it is (i.e., a nervous system, not a DVD or a 35mm film or an infinite magnetic tape). To put it simply, denying all innate architectural and chronotopic features would be tantamount to claiming that children resemble their parents only because their parents raise them. No one ever claimed that! The debate between rationalists and empiricists has always been about whether there are certain pieces of knowledge, represented in the mind, that are simply not learned. If you reject representational nativism yet do not reject the existence of something like ideas or mental representations, then you are committed to the tabula rasa, whether you like it or not. It may be unpopular, but it is nevertheless so, because rejecting representational nativism without discarding mental representation is affirming that there are no innate ideas. That the type of tabula determines what kind of information can be written on it, and that human brains are highly structured, does not entail the falsity of empiricism, unless representation is preprogrammed into the slate. Without unlearned representations, a highly structured and complex tabula is as concordant with empiricism as a simple and amorphous pattern-seeking agent.

Clearly, the type of slate that is proposed today is different from what was proposed during the Enlightenment. To Hume, the mind was primarily a passive photocopier of experience; in contrast, current neural networks are much more active in their assimilation of environmental information. Moreover, while Hume thought that human minds associate the compiled copies of experience according to three domain-general types of association, connectionist neural networks are universal approximators that modularize as functional approximations consolidate, because of the details of the surrounding environment, and in consequence readily develop mechanisms that go beyond association through association itself (see How You Know What You Know for a review). Advancing a stronger, more complex view of the cognitive slate does not distance the account from empiricism so long as it rejects representational nativism, just as Elman et al. (1996) did.

It is telling that connectionists naturally gravitate toward empiricism in spite of the stigma surrounding the tradition and even their own explicit assertions and roundabout philosophical identifications. Ultimately, the hallmark dispute between connectionists and classicalists is the question of what kind of tabula the mind is, a question that does not directly concern the rationalist/empiricist distinction but results from it by entailment. It is really just a practical matter that, whereas syntactic or logical engines require innate representations, complex neuronal slates like ours do not. Then again, it is also a practical matter that the only intelligent beings we know of are born with highly complex neural networks. Deep down, I am inclined to think that Fodor’s Informational Atomism is logically correct—if the mind works like a logical or syntactic engine, then all simple concepts must be innate. As Barsalou (1999) notes, there are no accounts on offer of how simple symbols could be acquired by a classical cognitive architecture or any logical or syntactic engine, and this may very well be because no such account is possible. This admission, however, should not lead us to accept Fodor’s theory of concepts; rather, it should convince us that the mind is not a Turing machine (like the image below) or a syntactic engine (cf., Pinker 2005).



As the evidence mounted, even Chomsky had to abandon most of the original postulates of generative linguistics, including the important distinction between surface structure and deep structure, as well as the view that syntax is a fully autonomous faculty that neither derives from nor associates with the lexicon.  The Minimalist Program (1995) reduced the philosophical rationalism of Chomsky's theory to such an extent that several academics who have based their own work on generative models, suddenly finding themselves in a theoretical void that threatens to undermine their research, have chosen either to ignore the program entirely or to attempt to undermine it.  But this is just one example of how rationalist philosophy of mind is undergoing its slow death, weakening as the data pile up.  As the first generation of cognitive scientists dies out and the third generation starts to assume positions of power, the stigma branded upon empiricism will weaken.  The likely result is a renewal that will allow funding to flow to new experimental techniques and to innovative practical applications across the interrelated disciplines.  Exciting times lie ahead.

-------

REFERENCES

- Anderson, J.R. (1982). “Acquisition of cognitive skill”. Psychological Review 89: 369-406.
- Arnauld, A. & Lancelot, C. (1660). General and Rational Grammar: The Port-Royal Grammar. J. Rieux and B.E. Rollin (trans.). The Hague: Mouton, 1975.
- Arnauld, A. & Nicole, P. (1662). Logic, or The Art of Thinking; being The Port-Royal Logic. Thomas Spencer Baynes (trans.). Edinburgh: Sutherland and Knox, 1850.
- Barsalou, L.W. (1999). “Perceptual symbol systems.” Behavioral and Brain Sciences, 22: 577-609.
- Bechtel, W., Abrahamsen, A. & Graham, G. (1998). "The Life of Cognitive Science". A Companion to Cognitive Science. W. Bechtel & G. Graham (eds.). Massachusetts: Blackwell Publishers Ltd.
- Boole, G. (1854). An Investigation of the Laws of Thought on Which are Founded the Mathematical Theories of Logic and Probabilities. London: Macmillan.
- Brooks, R.A. (1991). “Intelligence Without Representation.” Artificial Intelligence Journal 47: 139–160.
- Chomsky, N. (1957). Syntactic Structures. New York: Mouton de Gruyter.
- Chomsky, N. (1959). "A Review of B. F. Skinner's Verbal Behavior." Language, 35, No. 1: 26-58.
- Chomsky, N. (1966). Cartesian Linguistics: A Chapter in the History of Rationalist Thought. New York: Harper & Row.
- Chomsky, N. (1967). “Preface to the 1967 reprint of ‘A Review of Skinner's Verbal Behavior’.” Readings in the Psychology of Language. Leon A. Jakobovits & Murray S. Miron (eds.). Prentice-Hall, Inc. pp. 142-143.
- Chomsky, N. (1995). The Minimalist Program. Cambridge, MA: MIT Press.
- Dreyfus, H.L. (1992). What Computers Still Can’t Do: A Critique of Artificial Reason. Cambridge, MA: MIT Press.
- Elman, J. L. (1998). “Connectionism, artificial life, and dynamical systems: New approaches to old questions.” A Companion to Cognitive Science. W. Bechtel & G. Graham (eds.) Oxford: Basil Blackwood.
- Elman, J.L., Bates, E.A., Johnson, M.H., Karmiloff-Smith, A., Parisi, D., Plunkett, K. (1996). Rethinking Innateness: A Connectionist Perspective on Development. Cambridge, MASS: MIT Press.
- Feldman, J.A. (1981). “A connectionist model of visual memory.” Parallel Models of Associative Memory. G.E. Hinton & J.A. Anderson (eds.). New Jersey: Erlbaum.
- Fodor, J.A. (2003). Hume Variations. New York: Oxford University Press.
- Fodor, J.A. & Pylyshyn, Z.W. (1988). “Connectionism and Cognitive Architecture: A Critical Analysis.” Cognition 28: 3-71.
- Goldberg, A.E. (2004). “But do we need Universal Grammar? Comment on Lidz et al. (2003).” Cognition 94: 77-84.
- Hare, M. & Elman, J.L. (1995). “Learning and morphological change.” Cognition 56: 61-98.
- Haugeland, J. (ed.) (1985). Artificial Intelligence: The Very Idea. Cambridge, MA: MIT Press.
- Hazlehurst, B. & Hutchins, E. (1998). “The emergence of propositions from the co-ordination of talk and action in a shared world.” Language and Cognitive Processes 13(2/3): 373-424.
- Hebb, D. (1949). The Organization of Behavior: A Neuropsychological theory. New York: Wiley.
- Hutchins, E. & Hazlehurst, B. (1995). “How to invent a lexicon: the development of shared symbols in interaction.” Artificial Societies: the computer simulation of social life. N. Gilbert & R. Conte (eds.). London: UCL Press. pp. 157-189.
- Kuhn, T. (1962). The Structure of Scientific Revolutions. Chicago: University of Chicago Press, 1970. (2nd revised edition)
- Lakoff, G. (1987). Women, Fire, and Dangerous Things: What Categories Reveal About the Mind. Chicago: The University of Chicago Press.
- Langacker, R.W. (1987). Foundations of Cognitive Grammar. Stanford, CA: Stanford University Press.
- Lewis, J.D., & Elman, J.L. (2001). “Learnability and the statistical structure of language: Poverty of stimulus arguments revisited.” Proceedings of the 26th Annual Boston University Conference on Language Development.
- MacWhinney, B. (2013). “The Logic of a Unified Model”. S. Gass and A. Mackey (eds.). Handbook of Second Language Acquisition. New York: Routledge. pp. 211-227.
- McClelland, J.L. & Rumelhart, D.E. (1981). “An interactive activation model of context effects in letter perception: Part 1. An account of basic findings.” Psychological Review 88: 375-407.
- McClelland, J.L. (1989). “Parallel distributed processing: Implications for cognition and development.” Morris, R. (ed.) Parallel distributed processing: Implications for psychology and neurobiology. New York: Oxford University Press.
- McCulloch, W.S. & Pitts, W. (1943). “A logical calculus of the ideas immanent in nervous activity.” Bulletin of Mathematical Biophysics 5: 115–137.
- Newell, A., Shaw, J.C. & Simon, H.A. (1959). “Report on a general problem-solving program”. Proceedings of the International Conference on Information Processing. pp. 256-264.
- Pinker, S. (2005). "So How Does The Mind Work?" Mind and Language 20, 1: 1-24.
- Prince, A. & Smolensky, P. (1997). “Optimality: From Neural Networks to Universal Grammar”. Science 275: 1604-1610.
- Prinz, J.J. (2002). Furnishing the Mind. Massachusetts: MIT Press.
- Quine, W.V.O. (1960). Word and Object. Massachusetts: MIT Press.
- Rochester, N., Holland, J.H., Haibt, L.H., & Duda, W.L. (1956). “Tests on a cell assembly theory of the action of the brain, using a large digital computer.” IRE Transactions on Information Theory 2: 80-93.
- Rosenblatt, F. (1962). Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms. Washington, D.C.: Spartan Books.
- Skinner, B.F. (1957). Verbal Behavior. Acton, MA: Copley, 1992.
- Thelen, E. & Bates, E. (2003). “Connectionism and dynamic systems: are they really different?” Developmental Science 6, 4: 378-391.
- Tomasello, M. (2006). “Acquiring linguistic constructions”. Handbook of Child Psychology. Kuhn, D. & Siegler, R. (eds.). New York: Wiley.
- Turing, A.M. (1936). "On Computable Numbers, with an Application to the Entscheidungsproblem". Proceedings of the London Mathematical Society, 2, 42: pp. 230–65, 1937.
- van Gelder, T.J. (1999). “Defending the dynamical hypothesis.” Dynamics, Synergetics, Autonomous Agents: Nonlinear Systems Approaches to Cognitive Psychology and Cognitive Science. W. Tschacher & J.P. Dauwalder (eds.) Singapore: World Scientific. pp. 13-28.
- von Neumann, J. (1945). "First Draft of a Report on the EDVAC". Originally confidential [property of the United States Army Ordnance Department].
- Zwaan, R.A. (2004). “The Immersed Experiencer: Toward an embodied theory of language comprehension.” The Psychology of Learning and Motivation 44: 35-62.


--------
If you enjoyed this article, you may also like:


23.9.15

How to Relax Completely in 10 Seconds


Constant anxiety is a major part of everyday life for millions of individuals.  The prognosis for anxiety disorders is among the worst within the diverse families of psychopathology. From a medical perspective, treatment typically consists of prescribing benzodiazepines (e.g., lorazepam, clonazepam, diazepam), which produce substance dependence and chemical tolerance.  These medications relieve the symptoms but leave the causes untreated.  From a pure psychotherapy perspective, the prognosis for anxiety is just as bad; Cognitive-Behavioral Therapy, the most widely employed technique today, targets specific ideas that trigger feelings of anxiety, but this is ineffective because of the nature of anxiety.  Unlike phobias, which are fears tied to specific triggers, anxiety results from persistent fear that has lost its triggers, spreading throughout the brain.  If you manipulate some ideas by frequent repetition, the anxiety resurfaces elsewhere, again because the causes are not being treated.

But not all is hopeless.  Relaxation techniques, used properly and frequently, both relieve anxiety and rewire the very neural networks that generate it.  I previously posted a technique for combating anxiety in the morning by listening and singing along to a specific adaptation of Beethoven's Ode to Joy.  In what follows, I provide instructions for a shorter and far more effective relaxation technique.



How to Relax in 10 Seconds


The following technique is not well known, but it works like a charm.  You will have to stand up and adopt what I call the "Receptive Position".  This position is a variant of the so-called Anatomical Position, as shown below.



So here are the instructions for how to relax in 10 seconds with the Receptive Position:
Step 1:  Stand up straight, shoulders back but relaxed.

Step 2:  Raise your chin a little (as in a "proud" emotional stance).

Step 3:  Drop your arms to your side and completely relax all tensions that might be hiding there.

Step 4:  Turn your palms to face forward and try again to relax your arms. (This is the hardest part of the exercise; if it causes you some pain, you may angle your palms slightly toward your body, so long as they still face mostly forward.)

Step 5:  Make sure your body is as free of tension as you can possibly get it to be.

Step 6:  Close your eyes.

Step 7:  Breathe deeply, counting in silence every exhalation until you reach 10. (If you are extra stressed, breathe and count each exhalation until 15.)

Step 8:  Upon counting 10 (or 15), immediately open your eyes. 


Do it!  After finishing, ask yourself: how do you feel at this precise moment?

If you are so anxious that your first attempt caused you some physical discomfort, please just do the exercise one more time.  This really does work for everyone.

Once you've learned how to do this easy procedure correctly, you can repeat it whenever you feel anxious or overly stressed, provided you can find a place that affords some privacy.

I hope that this exercise has provided you immediate relief.


BONUS:  You can check how anxious you are by rising onto the tips of your toes as you inhale, then lowering yourself as you exhale.  Be careful!  If you are anxious, you will feel as if you are falling as you rise onto your toes (a vertigo-like feeling).  In contrast, if you are not anxious, elevating yourself in this way will not cause you any discomfort.

13.9.15

Genius, by Mark Twain





Genius, like gold and precious stones,
is chiefly prized because of its rarity.

Geniuses are people who dash off weird, wild,
incomprehensible poems with astonishing facility,
and get booming drunk and sleep in the gutter.

Genius elevates its possessor to ineffable spheres
far above the vulgar world and fills his soul
with regal contempt for the gross and sordid things of earth.

It is probably on account of this
that people who have genius
do not pay their board, as a general thing.

Geniuses are very singular.
If you see a young man who has frowsy hair
and distraught look, and affects eccentricity in dress,
you may set him down for a genius.

If he sings about the degeneracy of a world
which courts vulgar opulence
and neglects brains,
he is undoubtedly a genius.

If he is too proud to accept assistance,
and spurns it with a lordly air
at the very same time
that he knows he can't make a living to save his life,
he is most certainly a genius.

If he hangs on and sticks to poetry,
notwithstanding sawing wood comes handier to him,
he is a true genius.

If he throws away every opportunity in life
and crushes the affection and the patience of his friends
and then protests in sickly rhymes of his hard lot,
and finally persists,
in spite of the sound advice of persons who have got sense
but not any genius,
persists in going up some infamous back alley
dying in rags and dirt,
he is beyond all question a genius.

But above all things,
to deftly throw the incoherent ravings of insanity into verse
and then rush off and get booming drunk,
is the surest of all the different signs
of genius.

12.9.15

Take the Enneagram Personality Test


An Enneagram of Personality is a typology of nine interconnected personality types.  An Enneagram personality test is similar to the Myers-Briggs personality test except that it views the types as connected to one another in specific ways.  Like the Myers-Briggs, it is often used in business recruiting to build teams whose members complement one another rather than overlap, and also to reduce conflict within the team.

The Enneagram of Personality looks like this:



There are different types of Enneagram tests.  The following link leads to one of the simplest and most fun versions of the test.  Enjoy!





Take the Enneagram Personality Test!


------------------
Other psychological personality tests you may enjoy:

Attachment Style Test (New article, with complete theory, dynamics, and free copies of the DSM V and ICD-10!)

The Defense Style Questionnaire

8.9.15

MMPI-2 Validity Scales: How to interpret your personality test


The Minnesota Multiphasic Personality Inventory (MMPI-2) is the most comprehensive personality test currently available. Using 567 true-or-false questions, it rates the test taker on 130 categories (validity scales included). Once the validity of the results is established, a profile is created employing the 10 Clinical Scales: hypochondriasis (Hs), depression (D), hysteria (Hy), psychopathic deviate (Pd), masculinity/femininity (Mf), paranoia (Pa), psychasthenia (Pt), schizophrenia (Sc), hypomania (Ma), and social introversion (Si).  Each of these is in turn composed of various sub-scales.

To take the MMPI-2 free of charge, click here.

Please note that the MMPI-2 produces T-Scores and Raw Scores.  You will be paying attention to the T-Scores, not the Raw Scores, unless otherwise specified.  T-Scores are not percentages, but they can be translated into percentages. Usually, anything above a T-Score of 75 denotes a very high ranking on that scale, that is, within the top 1% of the population. Likewise, anything above a T-Score of 65 falls outside the normal range (among the top 3 to 5% of the general population).  On the lower bound, any T-Score below 35 would not be considered normal.  This general guideline notwithstanding, keep in mind that these point ranges do not apply rigidly: some scales accept certain T-Scores as normal while other scales consider the very same scores abnormal.
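The general banding described above can be sketched as a small helper function. This is purely my own illustration of the rule of thumb (the function name and wording of the labels are invented here); as the text notes, individual scales shift these cutoffs, so a real interpretation cannot rely on a single table like this.

```python
def interpret_t_score(t_score):
    """Place an MMPI-2 T-Score in the rough bands described above.

    Illustrative only: individual scales shift these cutoffs.
    """
    if t_score > 75:
        return "very high (roughly the top 1% of the population)"
    if t_score > 65:
        return "above the normal range (top 3-5%)"
    if t_score < 35:
        return "below the normal range"
    return "within the normal range"

print(interpret_t_score(80))
print(interpret_t_score(50))
```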

Given this complexity, you may find the task of interpreting your own MMPI-2 results overwhelming. I have written this instruction manual with the aim of being as exact, as exhaustive, yet also as simple as possible, such that anyone can do it and fully understand what they are doing.



How to interpret your own MMPI-2 results?

  • Step 1: Verify that your results are valid, and identify what bias these contain, if any.
  • Step 2: Once determined valid, see how your profile compares to the rest of the population on the 10 Clinical Scales, and analyze your strengths and weaknesses on each scale by looking at its components.
  • Step 3: Pinpoint your dominant psychological defense mechanisms.
  • Step 4: Use the supplementary scales to better understand yourself and your current psychological tendencies.

This article explores in-depth how to carry out Step 1, arguably the most important step because the accuracy of all future steps depends directly on Step 1 being carried out correctly.

Step 1: Verifying Validity


Are your test results valid, and what do the validity scales say about you?

These are the Validity Scales in the order presented in the results:

? = Cannot Say
VRIN = Variable Response Inconsistency
TRIN = True Response Inconsistency
F = Infrequency
Fb = Backside F
Fp = Infrequency Psychopathology
L = Lie
K = Correction
S = Superlative Self-Presentation

Each of these is described below in detail.  Nonetheless, the most important validity scales are F, L, and K.

If L and K score higher than F, it is likely that the test taker attempted to appear healthier than is really the case. This is known as "Fake-Good". However, this pattern by itself does not make the profile invalid. It might be that the pattern describes a moralistic conformist whose strong defenses allowed them to adapt successfully to the world. Thus, the pattern must be supplemented with further information to determine whether "Fake-Good" actually took place. How to do this is explained below, along with all the scales.

[Image: Probable "Fake-Good" slope on the graph of the L, F, and K validity scales]
Probable "Fake-Good" slope. The evaluating entity will treat your results as overcompensations, at best, or as outright misrepresentation, at worst, thus relying on their own view.

On the opposite end of the spectrum, if F scores higher than L and K, it is possible that the subject tried to appear worse than what they are, which is known as "Fake-Bad".  Once again, more information is needed to establish "Fake-Bad" behavior.  It could be the case that this person described their current situation sincerely, and perhaps needs professional help.

[Image: Probable "Fake-Bad" slope on the L, F, and K validity scales]
Probable "Fake-Bad" slope. The interpreter is likely to believe that you are acting to gain some benefit and will treat your results as manipulative, relying on their own perception of you for what is deemed true.
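The two slope patterns just described can be condensed into a small heuristic. Everything below (the function name and the labels) is my own illustrative sketch, and, as the text stresses, neither pattern alone proves faking; further information is always needed.

```python
def lfk_slope(l, f, k):
    """Classify the L/F/K relationship per the rules of thumb above.

    Returns a tentative label only; a label is a starting point for
    interpretation, never proof that faking actually occurred.
    """
    if l > f and k > f:
        return "possible Fake-Good"   # L and K both above F
    if f > l and f > k:
        return "possible Fake-Bad"    # F above both L and K
    return "no characteristic slope"

print(lfk_slope(70, 50, 68))
```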


? = Cannot Say
This scale adds up how many questions were left unanswered. A high number of blank responses may signal confusion, resistance to taking the test, or simply that the respondent did not finish.  More than 10 omitted answers risks invalidating the results entirely.  If 6 or more questions were left unanswered, it is wise to look at which items these were, because a pattern in the topics addressed may reveal the respondent's level of comfort with an issue, or a psychopathology that they may be unwilling or unable to address.

Some problematic combinations (if the scales listed have a T-Score above 60):
  • ? + L = Person is trying to appear in a favorable light but uses a crude strategy to do so.
  • ? + L + F + K = Suggests highly-generalized, intense negativism.
  • ? + F = The profile is invalid, be it because of reading comprehension problems or mental confusion.
  • ? + K = Test taker is very defensive.
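These combination rules can be sketched as a lookup. The helper below is hypothetical (the function name and return strings are mine), and where the listed combinations overlap it simply applies the most specific rule first, which the original list leaves open.

```python
def cannot_say_flag(elevated):
    """Given the set of scale names with T-Scores above 60, apply the
    '?' (Cannot Say) combination rules listed above. Illustrative only.
    """
    if "?" not in elevated:
        return None
    if {"L", "F", "K"} <= elevated:
        return "highly generalized, intense negativism"
    if "F" in elevated:
        return "profile invalid (reading comprehension problems or mental confusion)"
    if "L" in elevated:
        return "crude attempt to appear in a favorable light"
    if "K" in elevated:
        return "very defensive test taker"
    return None

print(cannot_say_flag({"?", "K"}))
```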


    VRIN = Variable Response Inconsistency
    Measures the tendency to respond inconsistently. The MMPI-2 contains questions that repeat in different wording, and this scale scores the consistency of the answers. On the one hand, an elevated VRIN together with an elevated F indicates that the person answered questions at random; the profile is therefore invalid. On the other hand, a normal VRIN coupled with a high F suggests one of two scenarios: either the person has serious psychological issues that probably require professional attention, or they are simply "Faking-Bad", that is, trying to appear worse than they actually are.  A very low VRIN may be good, indicating outstanding memory and focus; but if those traits are not otherwise in evidence, such a score may instead suggest that the person is being very careful about lying or portraying themselves as someone they are not.  Given the length of the Minnesota Multiphasic Personality Inventory, some response inconsistency is bound to happen to anyone.

    TRIN = True Response Inconsistency
    Scores whether the respondent answered "true" (or "false") indiscriminately.  A T-Score above 65 is suspicious.  A TRIN T-Score of 80 or more indicates that the profile is invalid.  This scale needs to be considered along with other scales; alone, it means little unless above a T-Score of 80.

    F = Infrequency
    This very important metric quantifies how much a person's responses deviate from those of the general population; hence, how infrequent the answers are when compared to everyone else's. In a non-clinical setting (if you are taking the test at home under no supervision, you are in a non-clinical setting), a T-Score above 80 on this scale probably evidences a severe psychopathology. To make sure this is the case, check that the VRIN and TRIN scores are normal, and also compare the F T-Score with that of Fb for further confirmation. If F and Fb are not both elevated, it is almost certainly an instance of "Fake-Bad" behavior, that is, of trying to appear worse than one is.

    A 65 T-Score on F is not uncommon; furthermore, being involved in unusual religious, political, or social groups can raise F as high as 75. Nonetheless, a score of 80 or above, once proven valid, is a clear indication that the test taker is having unusual thoughts and experiences that most likely require professional attention. (In clinical, outpatient settings, a score of 75 is already considered abnormal; in inpatient settings - i.e., in a psychiatric institution - a score of 65 suffices as evidence of abnormality.) An F T-Score above 100 will elevate all clinical scales (a.k.a., the profile) and is indicative that the person is reacting to everything because he or she is unable to pinpoint a particular problem area, as would happen to a confused mind in the midst of a severe psychosis.
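The setting-dependent cutoffs and the F/Fb cross-check can be sketched as follows. The names, the dictionary interface, and the 10-point tolerance are all my own illustrative choices, not part of any official scoring procedure.

```python
# Approximate cutoffs at which F is considered abnormal in each
# setting, per the discussion above.
F_CUTOFFS = {"non-clinical": 80, "outpatient": 75, "inpatient": 65}

def f_is_abnormal(f_t, setting="non-clinical"):
    """True when the F T-Score reaches the abnormality cutoff for the setting."""
    return f_t >= F_CUTOFFS[setting]

def f_is_confirmed(f_t, fb_t, tolerance=10):
    """A high F is taken at face value only when Fb is comparably elevated;
    a large disparity suggests "Fake-Bad" or random responding.
    (The 10-point tolerance is an illustrative choice.)
    """
    return abs(f_t - fb_t) <= tolerance

print(f_is_abnormal(82), f_is_confirmed(82, 78))
```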

    On the flip side, a low F score denotes a person who is relatively free from stress or major psychological issues, who is dependable and sincere, and who may be considered a conformist (unless the K and/or L scales suggest a case of "Fake-Good").  Lastly, it should be noted that minorities tend to score higher on this scale, and that it is quite common for creative people to score within the 60-70 range without that entailing psychological issues that must be addressed.

    Some problematic combinations:
    • Moderately high L and K + really high F = Test may have been answered mostly at random; the profile is likely invalid.
    •  Similarly, high L + F + K = Responses recorded without considering the questions; profile is invalid.

    [Image: An invalid MMPI-2 profile, with L, F, and K all highly elevated]
    Invalid profile. The elevations of L, F, and K together go beyond anything realistic; interpretation of the results would be unnecessary and a waste of time.
    • High F + L = "Fake-Bad", that is, the person is attempting to appear worse off than what is true, making the profile likely invalid.
    • High F + K = Individual contradicts himself by responding in a self-enhancing and self-deprecating manner at the same time. Lack of insight, confusion, or difficulties understanding the nature of the test may be to blame.  The profile may be valid or invalid depending on which of the aforementioned reasons is true.
    • High F + Sc (Schizophrenia) = Subject may have a tendency towards withdrawal. Profile is valid.
    • High F + Ma (Hypomania) = May have mania or be undergoing a manic episode. Profile is valid.

    Fb = Backside F
    This scale is the same as F except that it compiles information from the last third of the questions on the MMPI-2.  It is mostly used: 1) to confirm the validity of F, by checking that the Fb T-Score roughly matches F; and 2) to detect test takers who answer at random, since in that case F and Fb will show significant disparity.

    Fp = Infrequency Psychopathology
    This scale was specifically constructed to identify people who are faking a severe psychopathology.  A T-Score above 100 on Fp almost certainly renders the profile invalid.  Though not necessary, when such a score is accompanied by a VRIN T-Score of 80 or more, the profile is invalid, no ifs or buts about it.  The Fp Raw Score (which is different from the T-Score but is listed alongside it in the results) ought to be 6 or less for an optimal psychological profile to be constructed with the 10 Clinical Scales.  This scale is composed of items that not even people with severe psychopathology would assent to.
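The Fp rules above can be sketched as a short decision function. This is an informal illustration (the function name and return strings are mine), not an official scoring routine.

```python
def fp_check(fp_t, fp_raw, vrin_t):
    """Apply the Fp validity rules described above.

    fp_t: Fp T-Score; fp_raw: Fp Raw Score; vrin_t: VRIN T-Score.
    """
    if fp_t > 100 and vrin_t >= 80:
        return "invalid"                      # no ifs or buts
    if fp_t > 100:
        return "almost certainly invalid"
    if fp_raw > 6:
        return "valid, but not optimal for building a clinical profile"
    return "valid"

print(fp_check(105, 8, 85))
```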

    L = Lie
    Lie measures whether an individual is trying to look good or is instead willing to own up to basic human vulnerabilities. A high score means that the subject is claiming socially correct behavior whose unreality is common sense to everyone else. T-Scores above 60 are rarely seen on this scale. A T-Score of 55 or more may suggest a presentation of moral righteousness. A high L may signify a naive nature, ill-prepared to deal with difficulties or problems as they surface in real time.

    [Image: Validity scales of an optimistic sufferer of hysteria (conversion, in defense-mechanism terminology) or of a person whose defense mechanisms have failed]
    Profile indicative either of a hysteric trying to look on the bright side or of an individual whose psychological defense mechanisms no longer work.

    An elevated L with a moderately high Hy (Hysteria) suggests a character that looks to the bright side, attempting not to think badly of themselves or of other people.  Similarly, simultaneously elevated readings on L, K, and Hy point to highly defensive people who may not even be aware of the anomalous degree of their own defensiveness. A high L can be expected to be accompanied by lower readings on the 10 Clinical Scales profile of the MMPI-2, and the results should therefore be interpreted with that bias in mind. If, however, the scores on the 10 Clinical Scales are not all consistently low or in the normal range, the person's preferred psychological defense mechanisms are not working well enough to keep a lid on their problems. In contrast, low L scores are associated with higher levels of education, non-righteousness, and a more relaxed mind.

    K = Correction
    This scale measures defensiveness in a much more subtle way than Lie.  Correctly interpreting K scores isn't easy, as the background of the subject and the conditions under which they take the MMPI-2 must be taken into account.  College students, for example, typically display T-Scores between 55 and 70, which signifies that they are competent in managing their lives; if their score is a little higher, it may be that they are on guard because they do not trust their professor, or because the reason they are taking the test was not fully or convincingly explained to them. A drop from that scoring range implies that the student is undergoing a stressful period. Outside of a well-educated population, high K scores indicate defensiveness. This is true, for example, of job applicants forced to take the MMPI-2; in that peculiar situation, applicants attempt to appear as decent as possible, for obvious reasons, resulting in validity scale charts that typically follow the pattern of the image below.

    [Image: Typical validity-scale slope of a job applicant]
    Typical slope of a job applicant trying to look better than is actually true. Though the profile is valid, K-corrections ought to be applied to see what is more likely the case. The employer should reject the applicant regardless of the K-corrected scores.

    In contrast, a low T-score of 45 or less hints that a psychopathology is probably present (and sometimes this is the only hint that the interpreting psychologist gets when all the profile scales fall within normal bounds).  Interestingly, a really low K of 35 or less correlates with a poor prognosis because it signals that the test taker does not have the tools or the psychological strength to respond well to traditional (no-drugs) therapies, most likely lacking sufficient Ego-Strength (Es). On the flip side, a really high score also suggests a poor therapy prognosis as the psychological defenses could be so strong that they prevent any internal change or therapeutic progress. Thus, this scale measures how intact the existing psychological defenses are.  A corollary of a high score on K, therefore, is a marked fear of emotional intensity along with an avoidance of intimacy.

    Some problematic combinations:
    • Elevated L + K + Hy (Hysteria) + R (Repression) = Too defensive to look at the bad in others or see the problems in himself.
    • A high K is associated with the psychological defense mechanisms of repression and rationalization.
    • When very high Ks co-occur with high scores on one or more of the clinical, profile scales, it is all-too-likely that these individuals will refuse to look at the problem, seeing themselves as having no problems at all.  
    • If both K and Es (Ego-Strength) record T-Scores of 45 or less, the person will tend not to feel good about themselves and will feel that they lack the skills necessary to tackle their problems.
    • When K is below 45 yet F scores below 60, the individual often believes that life has been rough on them because they didn't have the advantages that were available to others. This belief is probably true as this combination usually occurs with people from impoverished or otherwise disadvantaged backgrounds.
    • Moderately elevated K + F + Hy (Hysteria) + Sc (Schizophrenia) = Conventional people who are overly concerned with being liked and accepted into a group, who remain unrealistically optimistic even when the facts do not merit it, who have difficulty expressing and receiving anger, and who find themselves unable to make decisions that would be unpopular within their group.
    • High K + Ma (Hypomania) = An organized, efficient person living with consistent hypomania.
    • Moderately high K + high F = People with longstanding psychological issues that have learned to cope with them and adapt to the world successfully, resulting in validity charts patterns like the ones below.
    [Image: Validity profile of individuals with prolonged psychopathology who have learned coping skills]
    Validity profile of individuals with persistent psychopathologies who have nevertheless learned how to cope and live a normal life.

    S = Superlative Self-Presentation
    Highly correlated with K, this scale is defined by five characteristics: Belief in Human Goodness, Serenity, Contentment with Life, Patience and Denial of Irritability and Anger, and Denial of Moral Flaws.  A high score on S is positively correlated with Ego-Strength (Es).

    If the results appear normal, those of a fully functional human being, yet S has a T-Score below 65, consider that the subject may be "Faking Good"; at worst the profile is invalid, and at best it carries a significant bias that ought to be taken into account when interpreting the rest of the MMPI-2 results.


    Overview

    Congratulations!  If you have read and applied the many rules and concepts described above, you should have been able not only to verify the validity of your MMPI-2 results but also to identify what biases, if any, permeate the rest of your results, so that you can compensate for them accordingly in your interpretation of the scores that follow.

    I know the task at hand has not been easy... far from it.  But I have good news --- you are in luck!  Step 1: Verifying Validity is the most important of the steps; and it is also the hardest (and most technical) by far.  If you managed to complete this step successfully, the rest will be a breeze.


    ---------------
    Other psychological personality tests you may enjoy:


    Enneagram Personality Test

    Lüscher Color Test (Updated with expanded information!)

    Defense Style Questionnaire


    Related MMPI-2 information:


    And, as always, the Free MMPI-2 link here.


    1.9.15

    On Perception, Emotion, & Decision-Making


    The following article builds upon the arguments and evidence offered in the previous post How You Know What You Know; however, the contents below stand on their own.  A further review of the history of cognitive science can be found at How do human minds work?: The Cognitive Revolution and Paradigm Change in Cognitive Science.

    ----

    1. Sensory Integration and Interdependence


    The transition from sensations to perceptions is commonly referred to as sensory integration. The importance of this process is such that it led Rodney A. Brooks and the robotics team at MIT to postulate it as an ‘alternative essence of intelligence’ (Brooks et al. 1998) during their first attempt at building a humanoid robot, appropriately named Cog.

    Sensations are modality-specific; perceptions are not, even though we can attempt to dissociate the different sense streams and partially succeed in doing this. As evidence, consider two phenomena: sensory illusions and synesthesia.



    Sensory illusions can be uni-modal (involving one sense modality, like the images above and below), multi-modal (involving two or more sense modalities; see, e.g., Turatto, Mazza & Umiltà 2005), or involve a sense modality and some piece of standing knowledge. As remarked by Fodor (2003), the early 20th-century Gestalt psychologists were more than justified in offering sensory illusions against the empiricists of their day. David Hume, and the tradition that ensued, granted an individual privileged access to his sensations. But, as the Gestalt psychologists would argue, perceiving involves construction, not just passive reception. Sensations decay; what persist are perceptions flowing through ideas.

    (Just in case you thought the above illusion was due to the surroundings, see the image below.)


    Hume’s agglomeration of impressions and ideas into the bucket of perceptions (classifying both impressions and ideas as types of perceptions), and his implacable loathing of skeptics, led him straight to an erroneous view of the mind. By compromising with the skeptic, and with contemporary cognitive scientists, it is possible to recognize the ephemeral character of sensations and to identify perception with sensory integration, which necessarily involves active construction, namely the activation of learned mental representations. This move does not undermine the core tenet of empiricism (i.e., that there are no innate ideas); rather, it delineates a point where bottom-up and top-down processing converge in the constant and continuous process of real-time experience.

    (For Mobile users who cannot see the video embedded above, here is the short color-creating optical illusion.)


    Synesthesia is less well-known. Synesthesia is a very rare condition that has its onset in early development and for which there is no treatment. Until recently, very little research and funding had been directed toward the study of this condition, mainly because it only rarely impairs a person’s productivity and its incidence is quite low, around 1 in every 1150 females and 1 in every 7150 males (Rich, Bradshaw, & Mattingley 2005; however, Sagiv et al. 2006 have challenged the existence of a male-female asymmetry). These numbers are still under revision, as the incidence of the condition is widely debated: synesthetes rarely see their condition as a problem, but rather as a gift, and hence do not seek professional counsel.

    A synesthete has two or more modalities intertwined, usually uni-directionally, such that some features in one modality reliably cause some unrelated features in another modality (Cytowic 1993, Cytowic 1995, Rizzo & Eslinger 1989, but see Knoch et al. 2006, who argue that even in clear uni-directional cases there is some bidirectional activation; also Paffen et al. 2015). The patterns of association are established early during development and are stable throughout the lifespan. Moreover, no two synesthesias are alike. On the one hand, not only are many modality combinations possible, such as colored hearing, tasting tactile textures, or morphophonetic proprioception, but also, though it is extremely rare, more than two modalities can become entangled. On the other hand, even synesthetes who belong to the same class, like colored hearing, have completely different patterns of feature association. For example, colored-alphabet synesthesia involves person-specific ‘color - written letter’ mappings where each letter always appears in a specific color.

     Karen's Colored Alphabet

    Carol's Colored Alphabet
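    As a concrete (and entirely hypothetical) illustration of such person-specific mappings, the two alphabets above can be thought of as letter-to-color tables; the names echo the images, but the colors below are invented:

```python
# Hypothetical letter→color mappings for two colored-alphabet synesthetes.
# Within a person the mapping is stable and automatic; across people it differs.
karen = {"A": "red", "B": "teal", "C": "yellow"}
carol = {"A": "green", "B": "pink", "C": "blue"}

def perceive(mapping, word):
    # Each written letter reliably triggers the same color, every time.
    return [(ch, mapping.get(ch, "achromatic")) for ch in word]

print(perceive(karen, "CAB"))  # Karen's stable, person-specific associations
print(perceive(carol, "CAB"))  # same letters, entirely different colors
```

    The point of the sketch is purely structural: the association is deterministic for a given synesthete, yet arbitrary across synesthetes.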



    But colored-alphabet synesthesia is among the least invasive. In colored-hearing synesthesia, certain sounds can trigger beams of colorful light situated in a personal space extending about 1 meter in front of the synesthete's face. The fact that colored-hearing synesthesia typically involves such a personal space is indicative of associations made very early in development, as infants cannot see much past that space. Indeed, the associations must have been made so early as to be incorporated into the base perceptual code of the individual. This illustrates not only the distinction between a sensation and a perception, but also the effect that ideas have in delimiting perception; it is firmly evidenced by the fact that, as of yet, no person with synesthesia has been found who remembers a time when they did not have their particular anomalous perceptions. As such, synesthesia ought to be deemed paradigmatic for any empiricist cognitive architecture: it not only shows (in an exaggerated manner) that sensory integration, that is, perception, implies active construction, but also hints at how individual differences are the rule, rather than the exception, in the formation of representational capacities, which would indicate that these capacities are not innate.


    In fact, synesthesia might be paradigmatic of cognition in general, so much so that it has led researchers (Baron-Cohen 1996, Maurer 1993) to seriously explore the Neonatal Synesthesia Hypothesis, which states that “early in infancy, probably up to about 4 months of age, all babies experience sensory input in an undifferentiated way. Sounds trigger both auditory and visual and tactile experiences” (Baron-Cohen 1996). Since neonatal nervous systems are in the process of approximating environmental properties and specializing in domains of processing, experience to the infant might just be one constant synesthetic flow. By adopting this view, synesthesia can be explained as a derailment of an early process of modularization that the brain undergoes as a function of neural competition in the processing of the input stream during development.

    There is a second, competing explanation for synesthesia, what might be called the perceptual mapping hypothesis. According to this view, synesthesia occurs not so much as a function of modularization (although this process may still be relevant), but rather as a function of early induction of the associated pairs and subsequent entrenchment of these pairs into the base perceptual code of the individual (i.e., during some critical period; see Rich, Bradshaw, & Mattingley 2005). Since for most synesthetic associations, there is no clear source of what the target ought to be other than the input itself, the individual can go a prolonged time without knowing that their perceptions are irregular, and by then the association might be so entrenched in the representational system that it might either be too late for it to be corrected or it might be too dangerous because changing the base code would negatively affect all other cognitive capacities that are built upon it. Which account is correct is ultimately a scientific question that needs to be experimentally approached; nonetheless, either explanation affords support to present-day empiricism based on connectionism and dynamical systems theory (Beer 2014, Rumelhart 1989, van Gelder 1999).

    The neuropsychological and ontological question underlying both sensory illusions and synesthesia is where to draw the line between a sensation and a perception. In the journal Current Opinion in Neurobiology, Shimojo and Shams (2001) of the California Institute of Technology go as far as to argue that there are no distinct sensory modalities, since the supposed sensory systems modulate one another continuously as a function of the transience of the stimuli. They reach this radical conclusion by considering a wealth of recent findings in neuropsychology that include the plasticity of the brain and the role that experience has on determining processing localization (i.e., emergent modularization). And they are very likely correct; sensory integration is the rule rather than the exception, even in adult ‘early’ cortical sensory processing. This claim is echoed by Ghazanfar & Schroeder (2006), who argue not only that there are no uni-modal processing regions in the neocortex at all but also that the entirety of the neocortex is composed of associative, multi-sensory processing.

    So what is the difference between a sensation and a perception? Succinctly, a sensation becomes a perception when it is mediated by an idea. When a mental representation intervenes in the flow of a sensation, when it delineates its processing, the process of construction and integration begins.


    2. Aspects of the Nature of Emotions


    Damasio (1994) claims that what sets the stage for heuristic, full-blown human reason are limbic-system structures that code for basic emotions and that, through experience, help train the cortical structures built on top of them, which then code for complex emotions. His somatic marker hypothesis states that emotional experiences set up markers that later guide our decision-making processes.  It is a well-known fact that when we try to solve a problem we do not consider all the alternatives, only the tiniest fraction.  These markers of past bodily states allow our minds to discard the vast majority of possibilities before deliberation even begins; what is left is a small set that we can actually ponder. Such training mechanisms are patently fruitful from an evolutionary standpoint, as illustrated by the following Artificial Life simulation.

    Nolfi & Parisi (1991) simulated the evolution of agents made up of artificial neural networks whose only task was to find food in a simulated world. Two distinct types of evolution were explored. In the first, the networks that were most successful at finding food in each generation were allowed to reproduce, meaning that new neural networks began with similar, though not identical, connection weights. What evolves in this scenario is the solution to the problem of navigation and food localization. Over several generations, the resulting agents have no trouble finding food at birth, so to speak. This is the equivalent of evolution hand-coding the solution into the neural connections, that is, of evolution installing truly innate ideas. For complex organisms, however, this kind of pinpoint fixation is untenable. The second type of evolution involved agents made up of two distinct networks. The first network handled navigation, as the agents in the first simulation did, while the second was in charge of helping train the navigating network (that is, it did not navigate at all). In this simulation, the first network was a tabula rasa in every generation, and what was allowed to evolve were the connection weights of the training network. Upon comparing the two end-state types of agents, Nolfi & Parisi found that the auto-teaching networks consistently outperformed the agents that had the solution to the problem hard-wired at birth.
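    The two-network setup can be sketched loosely in Python. This is not Nolfi & Parisi's actual model: a one-dimensional world, one-neuron networks, a delta-rule student, and all parameter values are invented stand-ins. What the sketch preserves is the key design: the navigating network is a tabula rasa in every generation, and only the weights of the teaching network evolve.

```python
import random

GRID, STEPS, GENS, POP = 10, 20, 30, 20

def act(w, x):
    # one-neuron "network": signed step from the signed food direction
    return 1 if w[0] * x + w[1] > 0 else -1

def lifetime_fitness(teacher, lr=0.5):
    """A blank student net is trained online by the evolved teacher net."""
    student = [0.0, 0.0]                  # tabula rasa every generation
    pos, food, eaten = 0, random.randrange(GRID), 0
    for _ in range(STEPS):
        x = 1 if food > pos else -1       # sensed food direction
        move = act(student, x)
        target = act(teacher, x)          # teaching signal (evolved, not innate)
        # delta-rule update of the student toward the teacher's output
        err = target - (student[0] * x + student[1])
        student[0] += lr * err * x
        student[1] += lr * err
        pos = max(0, min(GRID - 1, pos + move))
        if pos == food:
            eaten += 1
            food = random.randrange(GRID)
    return eaten

random.seed(0)
pop = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(POP)]
for _ in range(GENS):
    scored = sorted(pop, key=lifetime_fitness, reverse=True)
    best = scored[: POP // 4]             # only the best teachers reproduce
    pop = [[w + random.gauss(0, 0.1) for w in random.choice(best)]
           for _ in range(POP)]
print("best fitness:", max(lifetime_fitness(t) for t in pop))
```

    Note that selection never sees the teacher's weights directly, only how well a blank student learns under its guidance, which is the sense in which the solution is taught rather than hard-wired.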

    It strikes me as altogether probable, if not entirely undeniable, that tastes and emotions serve to guide the inductions of the tabula rasa toward specific ends, just as Nolfi & Parisi’s teaching nets served the blank nets in solving the problems of their existence. Tastes and emotions are fundamental; even at birth, they instruct as to what is food and what can kill you. However, taking Nolfi & Parisi’s simulations at face value would mean that emotions come preset in specific connection configurations, which are a means of mental representation. If, as has been claimed here, all mental representations are ideas, then such a solution would lead to an as-of-yet unseen kind of rationalism (an emotional rationalism - how bizarre!). But there are other ways in which nature might have implemented the mechanism. It might have implemented it in the brain through something other than the patterns of connections; for example, emotions could result from the global effects of neurotransmitters (see, e.g., Williams et al. 2006, Hariri & Holmes 2006) rather than their specific transmission, as suggested by the fact that both selective serotonin reuptake inhibitors (SSRIs, like Prozac and Zoloft) and MDMA (street name: ecstasy; mechanism: causes neurons to release vast quantities of the available serotonin) affect mood significantly. Whereas with SSRIs emotion is attenuated, with MDMA the user feels pure love, a sense of empathy unmatched by any drug on the market. This hypothesis, however, remains an open empirical question on which I take no stand.

    For our purposes here, it might be enough to note that emotions have traditionally been included within the realm of sensations, as inner sensations. As of yet, I’ve seen no evidence that even remotely challenges this ancient view. For all we know, evolution might simply have implemented a non-representational domain of sensation that serves to guide learning. Such a domain need not be innately represented in the brain because it may be induced from the body itself. This claim lies behind Schachter & Singer’s (1962) classic Attribution of Arousal Theory of Emotion, which holds that emotions are the product of the conjunction of a bodily state and an interpretation of the present environment. In fact, Antonio Damasio and his team have been hard at work attempting to figure out where basic emotions come from. In an admittedly preliminary finding (Rainville et al. 2006), they managed to reliably identify basic emotion types (e.g., fear, anger, sadness, and happiness) with patterns of cardiorespiratory activity. Similarly, Moratti & Keil (2005), working independently out of the University of Konstanz in Germany, found that cortical activation patterns coding for fear depend on specific heart-rate patterns (see also, e.g., Van Diest et al. 2009). Should these findings pan out, they would indicate that emotions are a sensory modality. As a sensory modality, emotions permeate experience, which would explain why emotion recognition is widely distributed in the brain (Adolphs, Tranel, & Damasio 2003): emotions become intertwined in the establishment of ideas.
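    To make the idea of identifying emotions with bodily patterns concrete, here is a toy nearest-centroid classifier over cardiorespiratory features. Every number below is invented purely for illustration and bears no relation to Rainville et al.'s actual data:

```python
import math

# Hypothetical centroids: (heart rate in bpm, respiration rate in breaths/min).
centroids = {
    "fear":      (95.0, 22.0),
    "anger":     (90.0, 18.0),
    "sadness":   (70.0, 14.0),
    "happiness": (78.0, 16.0),
}

def classify(sample):
    # Assign the emotion whose cardiorespiratory centroid is nearest.
    return min(centroids, key=lambda e: math.dist(sample, centroids[e]))

print(classify((93.0, 21.0)))  # → fear
```

    On this picture, emotion types would be readable off bodily state much as colors are readable off wavelengths, which is exactly what calling emotion a sensory modality amounts to.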

    In the end, if emotions are sensations, they are not innate ideas. Ideas are formed from these sensations as a function of their being perceived, a process that could, in principle, account for fine-grained emotional distinctions (Damasio 1994). Be that as it may, it is clear that emotional experience lies at the base of all of cognition, even reasoning, since as a sensory modality its mode permeates, directly or indirectly, all other processing everywhere and always.


    3. Corollaries & Implications


    Contrary to what it may seem upon first inspection, there is an underlying feature shared by both rationalist classical cognitive architecture (Fodor & Pylyshyn 1988, Newell 1980, Chomsky 1966, Chomsky 1968-2005) and traditional empiricist cognitive architectures like John Locke's and David Hume's, namely that both suppose there is a domain of memory that constitutes a thorough and detailed model or record of states of (the body in the) world. This feature is part of a modern tendency, illustrated somewhat indirectly in the previous section, of overcrowding the mind with what it can get—and does get—for free from the body in the world. In classical architectures, this feature most prominently takes the form of sensory memory, constituting a complete and detailed imprint of the world, only part of the information of which will travel to working memory for further processing. On the empiricist side, this feature takes on a more insipid form.

    Think of Hume’s use of the word impression as opposed to, for example, sensation. Whereas the term sensation emphasizes both the senses and what is sensed, the term impression mostly accentuates what is imprinted, rendering perception mainly a passive receptor (a photocopier, if you will) upon which states in the world are imprinted. Also, and more importantly, the process of imprinting in Hume’s cognitive architecture does not stop with impressions because ideas, given how he defined these, are nothing more than less lively copies of imprints of states (of the mind) in the world. Moreover, since these ideas record holistically (i.e., somewhat faded yet still complete), as opposed to Barsalou’s (1993, 1999) schematic perceptual symbols, the resulting view is a mind overcrowded with images, sounds, tastes, smells, emotions—full of all of the experiences that the body in the world ever imprints on the mind.

    It is important to highlight the active character of perception by identifying perception with the real-time integration of fading sensations with lasting mental representations. Both sensory illusions and synesthesia are evidence of the active nature of perception because both phenomena illustrate the impact that ideas have upon sensations and the fact that what we perceive is not just an imprint of the world. In this respect, what must be emphasized is the character of neural networks as universal approximators of environmental properties (see How You Know What You Know for a review), which allows neural networks to get their representational constraints for free, from the information being processed. Moreover, as these approximations become entrenched in the processing mechanism, they partially delineate the processing of incoming stimuli.
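    The universal-approximation point can be made concrete with a minimal sketch: a tiny one-hidden-layer network, given nothing but input-output samples, induces an approximation of an environmental regularity (here y = sin(x)) with no built-in knowledge of it. The architecture, learning rate, and training schedule are arbitrary illustrative choices:

```python
import math, random

random.seed(1)
H = 8                                            # hidden units
w1 = [random.uniform(-1, 1) for _ in range(H)]   # input → hidden weights
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]   # hidden → output weights
b2 = 0.0

def forward(x):
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return sum(w2[j] * h[j] for j in range(H)) + b2, h

def mse(data):
    return sum((forward(x)[0] - y) ** 2 for x, y in data) / len(data)

# the "environmental regularity" the net must approximate from samples alone
data = [(x / 10, math.sin(x / 10)) for x in range(-30, 31)]
before = mse(data)

lr = 0.05
for _ in range(2000):                            # online gradient descent
    x, y = random.choice(data)
    out, h = forward(x)
    err = out - y
    for j in range(H):
        grad_h = err * w2[j] * (1 - h[j] ** 2)   # backprop through tanh
        w2[j] -= lr * err * h[j]
        w1[j] -= lr * grad_h * x
        b1[j] -= lr * grad_h
    b2 -= lr * err

print(f"mse before training: {before:.3f}, after: {mse(data):.3f}")
```

    The constraint on the final weights comes entirely from the data stream, which is the sense in which the representational structure is obtained for free rather than built in.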

    The resulting view is of a mind primarily full, not of sensory impressions, but of self-organizing approximations to the patterns implicit in such sensations, approximations that serve to anchor further representations through association.  These self-organizing approximations aren't just the substrates of "higher-order" processes: higher-order reasoning carries their biases and their limitations, as well as their benefits, like speed and elasticity, as ongoing research on reasoning keeps finding. Human beings are not logical or rational animals.  We can become more logical by learning logic, and more rational by learning argumentation and how to spot formal and informal fallacies when they are used (van Gelder 2005, 2002).

    For centuries, the supposition that human thinking follows logical rules has permeated and biased explorations into our cognitive capacities. The view that we are endowed with innate ideas that underpin our thinking, that allow us to learn syntax and to think logically, has been the cornerstone of Rationalism in every epoch including our own. But this is a far-fetched fantasy. To paraphrase Bertrand Russell, logic doesn't teach you how to think, it teaches you how not to think.

    Cognitive Science is gradually overcoming the rationalist bias that was set at the moment of the discipline's creation.  The more evidence mounts, the more it becomes clear that mental processing follows the associative rules of the brain.  With this realization, the computer metaphor (that mind is software to the brain's hardware) slowly but surely unravels.

    Perhaps this is how dualism finally dies, not with a bang, but with a whimper.


    Featured Original:

    How You Know What You Know
