Whose representation?
One of my pet peeves is when different groups of psychologists use one term to refer to something for which they have multiple, sometimes contradictory, definitions. When I started studying similarity, I wasted a lot of time trying to clarify what everyone meant by relations. See, for one group of people, relations occurred within a stimulus (a cat’s legs are under its body, its whiskers are on its face). For another group of people, relations occurred between two stimuli (you use a hammer to hit a nail). These are very different types of relations, and they affect similarity in very different ways. But, by using that one word, relation, the literature got muddied about how relations influenced similarity. Distinguishing the terms (we now speak of structural relations vs. thematic relations) clarifies how we think about the subject.
The word "representation" is used in a comparably muddied fashion. Depending on who you’re talking to, representation might refer to something symbolic, perceptual, discrete, or continuous; and these symbolic/perceptual/discrete/continuous things might be transformed or acted on via ordinary computations or differential equations.
To get to the bottom of this, I want to clarify the different ways in which "representation" is commonly used. Then, I want to figure out how to introduce some precision in talking about representations. This will make it much easier to discuss the problem of representation and to consider the alternatives.
Today’s installment: Discrete computational representations (based on Dietrich & Markman, 2003).
Representations are internal mediating states. Anything that changes, transforms, or acts on input to a system in a way that changes or transforms the output (i.e., the system's actions) is a representation.
The authors provide four conditions for this definition (a toy sketch follows the list).
1) There needs to be at least one system, which has internal states governing its behaviour.
2) There needs to be an environment, although this doesn’t have to be the external environment. It could just be an adjacent system.
3) Some types of relations have to exist between the system’s internal states and the environment.
4) Processes must act on the internal states to satisfy goals or solve problems. Dietrich and Markman believe that these processes are computational.
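To make this a bit more concrete, here is a minimal sketch of a system that meets all four conditions. This is just my own toy example (a thermostat), not anything from the paper, but it shows what internal states mediating between an environment and behaviour can look like in practice:

```python
# A toy illustration (my own, not from Dietrich & Markman) of the four conditions:
# a system with internal states (1) that stand in some relation to an environment
# (2, 3) and whose internal states are operated on by processes serving a goal (4).

class Thermostat:
    """Minimal 'system' whose internal state mediates between input and action."""

    def __init__(self, goal_temp):
        self.goal_temp = goal_temp      # the goal the processes serve (condition 4)
        self.believed_temp = None       # internal state (condition 1)

    def sense(self, actual_temp):
        # Condition 3: the internal state is lawfully related to an environmental state.
        self.believed_temp = actual_temp

    def act(self):
        # Condition 4: a process operates on the internal state to satisfy the goal.
        if self.believed_temp is None:
            return "wait"
        return "heat on" if self.believed_temp < self.goal_temp else "heat off"


# Condition 2: the 'environment' here is just a number standing in for room temperature.
room = 17.5
system = Thermostat(goal_temp=20.0)
system.sense(room)
print(system.act())   # -> "heat on"
```

Of course, whether a thermostat's single internal variable deserves to be called a representation at all is exactly the kind of question the next requirement is meant to address.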
On top of these conditions, the authors argue that semantic content needs to be explicit. In other words, the authors contend that psychological-level descriptions of internal states are real and that this level is more relevant than the physical-level description. Representations and processes are more important than chemicals and neurons.
How representations get their content:
1) The relations between internal states and the environment connect particular internal states with particular external states (i.e., correspondence).
2) Representations acquire some content by virtue of the types of interactions they have with other representations (i.e., functional role).
The authors suggest that 1 contributes primarily to the content of low-level DC (discrete computational) representations, like a vibrating eardrum responding to sound, while 2 contributes to higher-level DC representations, like "hope", "democracy", or other abstract concepts. It’s necessary for every DC representation to have at least some content from correspondence to external states.
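Here is one way to picture the two sources of content. Again, this is my own toy gloss rather than anything from Dietrich and Markman: correspondence as a direct mapping from external states to internal tokens, and functional role as a web of relations among the tokens themselves.

```python
# A toy sketch (my own gloss, not the authors') of the two sources of content.

# Correspondence: some representations are directly keyed to external states.
correspondence = {
    "sound_pressure_wave": "EARDRUM_VIBRATION",
    "red_light_650nm": "RED_SENSATION",
}

# Functional role: others get (part of) their content from how they relate
# to other representations.
functional_role = {
    "HOPE": {"requires": ["DESIRE", "UNCERTAIN_OUTCOME"], "excludes": ["CERTAINTY"]},
    "DEMOCRACY": {"requires": ["VOTING", "CITIZEN"], "excludes": ["DICTATOR"]},
}

def describe(token):
    """Report where a token's content comes from in this toy scheme."""
    grounded = [ext for ext, internal in correspondence.items() if internal == token]
    related = functional_role.get(token, {})
    return {"corresponds_to": grounded, "related_to": related}

print(describe("EARDRUM_VIBRATION"))   # content mostly from correspondence
print(describe("HOPE"))                # content mostly from relations to other tokens
```

The point of the toy is just that "EARDRUM_VIBRATION" gets its content almost entirely from what it is hooked up to in the world, while "HOPE" gets most of its content from its place in the web of other representations.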
Now, representations could be either discrete or continuous, but Dietrich and Markman argue that they must be discrete. These terms map perfectly onto the mathematical sense of continuity/discreteness. So, discrete representations are uniquely identifiable: I have a unique cat representation that is different from all of my other representations. And discrete representations have gaps between them: my cat representation doesn’t seamlessly transition into my tiger representation (although there may be overlap).
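A toy contrast might make the discrete/continuous distinction clearer. This is my own example, not the paper's: discrete representations behave like distinct, uniquely identifiable tokens whose contents can overlap, while continuous representations behave like points in a real-valued space, where there is always something part-way between any two of them.

```python
# A toy contrast (my own, not from the paper) between discrete and continuous
# representations in roughly the mathematical sense used above.

# Discrete: each representation is a uniquely identifiable token, and there is
# nothing "in between" two tokens, even though their contents can overlap.
CAT   = frozenset({"four_legs", "whiskers", "purrs", "domestic"})
TIGER = frozenset({"four_legs", "whiskers", "striped", "wild"})
assert CAT != TIGER                      # uniquely identifiable
print(CAT & TIGER)                       # overlap in content is still allowed

# Continuous: a representation is a point in a real-valued space; between any
# two points there are always intermediate points, so there are no gaps.
cat_point   = (0.9, 0.1)                 # e.g. (domesticity, size)
tiger_point = (0.1, 0.9)
midpoint = tuple((a + b) / 2 for a, b in zip(cat_point, tiger_point))
print(midpoint)                          # (0.5, 0.5): part-way between cat and tiger
```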
To sum up, this notion of representation is that representations are internal mediating states that are discrete and computational. Each representation is uniquely identifiable (discrete), and the processes that act on representations are ordinary computations. From now on, when I’m talking about this type of representation, I will refer to them as DC (discrete computational) representations.
Dietrich, E., & Markman, A. B. (2003). Discrete thoughts: Why cognition must use discrete representations. Mind & Language, 18, 95-119.