Philosophy of Mind
Jon Leefmann and Elisabeth Hildt, The Human Sciences after the Brain Decade, 2017.
While theoretical philosophy, and philosophy of mind in particular, offers widely acknowledged examples of interaction between empirical neuroscience and a priori philosophical reasoning, the possible interactions of neuroscience with other humanities disciplines have received far less attention. In the last chapter of Part I, Mattia Della Rocca therefore turns to history as another field of study that neuroscience has influenced in the recent decade. He shows how the discipline of neurohistory evolved alongside the better-established history of neuroscience, and how the two approaches differ. The former presents itself as a methodology for historiography grounded in current findings from cognitive and brain science, aiming to explain how historical and cultural changes arose from the nervous system’s interaction with an ever-changing physical and cultural environment. The latter, in contrast, is concerned with organizing neuroscientific achievements into a chronological collection of historical records, and with identifying “precursors” of the discipline in order to justify and celebrate present neuroscience research. Della Rocca contends that both approaches are one-sided and insufficient for a meaningful connection between history and neuroscience. By explaining historical transitions on the basis of whatever neuroscientific knowledge prevails at the time, neurohistory tends toward presentism, ahistoricism, and neglect of the sociocultural embeddedness of neuroscientific explanation. The history of neuroscience, conversely, cannot account for the influence of the cognitive underpinnings that drive human behavior throughout history. Using neuroplasticity as an example, Della Rocca demonstrates how a third possibility, a genuine combination of neuroscience and history, could avoid both kinds of limitation.
Philosophical conundrums defy empirical evidence.
I. Sarhan, The Human Sciences After the Brain Decade, 2017.
This chapter examines the relationship between the brain sciences and philosophy of mind in order to determine how philosophy can help neuroscience and how neuroscience can help philosophy. Since the 1980s and the rise of “neurophilosophy,” an increasing number of philosophers have turned to neuroscientific evidence to settle philosophical questions. I use examples from the problem of consciousness and the philosophy of perception to show that such attempts to resolve questions such as whether psychology can be reduced to neuroscience, or whether we perceive the external world directly, are futile. The failure stems from the capacity of philosophical questions to elude the evidence. What makes these philosophical concerns persistent is that they cannot be answered by empirical evidence: they are conceptual questions, and their resolution lies in conceptual analysis.
Consciousness and Mind as Philosophical and Psychological Problems
George Mandler, Perception and Cognition at the End of the Century, 1998
II WHAT IS PHILOSOPHY OF MIND ABOUT?
Two fundamental issues arise when attempting to define a philosophy of mind. First, some philosophers are skeptical that any comprehension of mind, whatever it is, is attainable. Second, there is no consensus on whether the term “mind” refers to the contents of awareness or whether something other or greater is intended.
Thomas Nagel is a prime example of a philosopher who denies the possibility of knowing the mind while never specifying what this “mind” could be. He describes it as a “universal property of the world,” like matter (1986, p. 19), that is beyond physical explanation and also beyond evolutionary explanation. Nagel assures us that “something else” must be happening, and that whatever it is, it will lead us to a “truer and more detached” view of the universe (p. 79). While I do not wish to claim any major advances for contemporary psychology, it is difficult to follow someone who refuses to examine current psychological knowledge while insisting that “the skills required to comprehend ourselves do not yet exist” (p. 10). According to Nagel, “the universe may be unfathomable to our thoughts” (p. 10). Humans are far from omniscient, yet no one can genuinely claim to know, or to prejudge, what knowledge is or is not achievable. There are certainly aspects of the world that are unthinkable today, as there were centuries ago, though many of the latter are unthinkable no longer and may not remain so in the future.
There appears to be no general agreement on the meaning of the omnipresent term mind, and dictionaries are not much help. Webster’s Dictionary, for example, admits “the complex of elements in an individual that feels, senses, thinks, wills, and especially reasons” AND “the conscious mental occurrences and capacities in an organism” AND “the structured conscious and unconscious adaptive mental activity of an organism.” Philosophers seldom reveal which of these minds they have in view. One can only imagine how perplexing these discussions must look to a French or German reader who has no exact counterpart for our “mind” and must rely on esprit, Sinn, Seele, Geist, or Psyche. Aside from this public display of discord, most philosophers are likely to agree on the use of “mind” as a quasi-theoretical object that is causally engaged in mental phenomena, including awareness. I will return to the tension between viewing “mind” as expressing the contents (and occasionally functions) of awareness and using “mind” as a catch-all term for the many mechanisms we assign to conscious and unconscious activities. First, some thoughts on the questions we have concerning awareness.
Overview of Cognitive Science
G. Strube, 2001, International Encyclopedia of the Social and Behavioral Sciences
2.2 Philosophical Foundations: Functionalism and Computational Psychology
In the philosophy of mind, mental states have been characterized as ‘propositional attitudes,’ consisting of a propositional content (e.g., P = the sun is shining) and an attitude that describes one’s personal relation to that proposition (e.g., I wish that P would become true). Fodor (1975) expanded on this idea into a ‘language of thought’ that treats propositional content as data and the attitudinal relation as an algorithmic one.
If we accept mental states as parts of a ‘language of thought,’ the question of how they relate to brain states arises: a well-known philosophical problem. Fodor and others, following Putnam (1960), describe the relationship between brain states and mental states as analogous to the relationship between a computer (i.e., the hardware) and a program running on that computer: the mind as the brain’s software. This approach is known as the computational theory of mind. It is compatible with the physical symbol system hypothesis (PSSH) and quickly became the leading framework in cognitive science. It does, however, address only (potentially) conscious thought, neglecting lower-level cognitive processes.
Ludwig Wittgenstein (1889–1951)
International Encyclopedia of the Social and Behavioral Sciences, E. von Savigny, 2001
Ludwig Wittgenstein (1889–1951) was a pioneer of twentieth-century philosophy of language and philosophy of mind. In his Tractatus Logico-Philosophicus he developed a theory of logical truth based on a naturalistic metaphysics, together with a referential, speaker-oriented theory of the meaning of declarative sentences. Because its essential assumptions proved inapplicable to ordinary language, Wittgenstein began anew in his Philosophical Investigations, rooting linguistic meaning in the role a sign plays in social interaction. Meaning is primarily public rather than intentional, since the relevant ways of employing signs must be socially established. Applied to first-person sensory language, this entailed the public accessibility and social determination of subjective psychological states, especially since Wittgenstein regarded linguistic expressive behavior as a subset of expressive behavior in general. While his theories were immensely influential in philosophy, they have had limited impact in other disciplines. The Tractatus inspired semantic theories of meaning, but the later ideas are still waiting to be converted into research programs.
Thinking Through the Body
Ricardo Sanz and Idoia Alarcón, 2008, Handbook of Cognitive Science
Control in a hierarchy
A real plant may be extremely simple or extremely sophisticated. Room thermostats, a favorite in philosophy of mind, are bang-bang controllers that govern a single magnitude in the plant: room temperature. To attain the necessary performance, a real temperature control in a chemical industrial reactor may require tens of sensors, actuators, and heterogeneous nested control loops.
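A bang-bang thermostat of the kind mentioned above can be sketched in a few lines. The class name, setpoint, and hysteresis band below are illustrative assumptions, not any real device’s interface:

```python
# Minimal sketch of a bang-bang (on/off) room thermostat.
class BangBangThermostat:
    """Switch the heater fully on or off around a setpoint, with a small
    hysteresis band to avoid rapid cycling near the threshold."""

    def __init__(self, setpoint, hysteresis=0.5):
        self.setpoint = setpoint
        self.hysteresis = hysteresis
        self.heater_on = False

    def update(self, measured_temp):
        # Turn on below the lower band edge, off above the upper edge;
        # inside the band, keep the previous state (hysteresis).
        if measured_temp < self.setpoint - self.hysteresis:
            self.heater_on = True
        elif measured_temp > self.setpoint + self.hysteresis:
            self.heater_on = False
        return self.heater_on
```

The single controlled magnitude (temperature) and the two-valued actuation are what make this the philosopher’s favorite minimal case; the industrial reactor described next needs many such loops, coupled together.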
A real industrial plant may regulate hundreds of magnitudes, and organizing all of these control loops is a major control-system design problem: to fulfill the overall goals of plant management, the various magnitudes must not only be controlled individually but also coordinated.
The usual approach is to structure the control loops in a hierarchy, with lower-level controller references computed by upper-layer controllers attempting to attain more abstract and global setpoints. For example, in a chemical plant’s reaction unit, multiple low-level controllers manage individual temperatures, pressures, flows, and so on in order to meet the unit’s higher-level control objectives, such as production and quality targets (Figure 20.11).
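The setpoint cascade just described can be made concrete with a toy sketch. The gains, the linear relation between production and temperature, and all function names are invented for illustration only:

```python
# Toy two-level control hierarchy: an upper-level controller turns an
# abstract production target into a temperature setpoint, which a
# low-level proportional controller then tracks.

def upper_level(production_target, measured_production, temp_setpoint, gain=0.1):
    """Adjust the reactor temperature setpoint to close the production gap."""
    error = production_target - measured_production
    return temp_setpoint + gain * error

def lower_level(temp_setpoint, measured_temp, gain=2.0):
    """Proportional controller: heater power from temperature error."""
    return gain * (temp_setpoint - measured_temp)

# One step of the cascade: the upper loop's output is the lower loop's reference.
setpoint = upper_level(production_target=100.0, measured_production=90.0,
                       temp_setpoint=350.0)
power = lower_level(setpoint, measured_temp=348.0)
```

The design point is that the upper layer never touches the actuator directly: it only moves the reference of the loop below, which is exactly the layering shown in Figure 20.11.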
Figure 20.11 An industrial plant’s hierarchical distributed control system (DCS), divided into several control levels. Control goals become more abstract at higher levels; temporal criticality and precision matter more at lower levels.
What fascinates researchers are the striking parallels between control phenomena in biological systems and in large-scale process plant controllers. While robot control systems frequently attempt to replicate biosystems by using what is known, or hypothesized, about their control systems, the bioinspired movement has yet to arrive in the case of process plants (except perhaps at expert process-control levels; Åström et al., 1986).
From the early analog controllers of the mid-twentieth century to today’s fully computerized plant controllers running billions of lines of code, industrial control technology has followed its own evolutionary path. Organizations of many forms have emerged in the structuring of core processes, the structuring of control structures, and, more recently, the co-structuring of process and control.
From the standpoint of the control system, we can see an evolution that is similar to the development of mental skills in biosystems:
The most basic control system is merely reactive, triggering some action when specific conditions are satisfied. Examples include a significant portion of the protection and safety devices in industrial systems. The overall behavior is similar to a variety of biosystem safety responses.
An additional degree of complexity is achieved when raw sensory input is minimally processed to extract information useful for triggering behavior. This is done in basic control and protection systems. In biological systems, the well-known study by Lettvin et al. (1959) of retinal processing in frog eyes is an example.
The next layer arises when a system can envision the operation of the controller below it and supply its precise parametric values (e.g., setpoints or controller parameters). Such a layer can in turn be integrated with upper-level controls, allowing for a control hierarchy. It is also well established in biosystems that some motor commands originating in the CNS are executed by low-level controllers (the homeostatic control systems of the body are core examples; Cannon, 1932).
Exploiting this conceptual openness of the control loop, it is feasible to stack control loop upon control loop, known as control loop nesting, so that upper-level behavior depends on the robust performance of lower-level behavior delivered by the embedded controllers. In this manner, a production-quality controller in a chemical reactor can rely on a plethora of lower-level controllers beneath it to keep flows, pressures, and temperatures at appropriate values. Continuing the earlier homeostatic example, major systemic activities, such as digestion, rely on lower-level processes to keep physiological magnitudes in check. Another intriguing example is how gait control depends on lower-level muscle control (Grillner, 1985).
Engineers take an intriguing step forward when they conclude that a controller can be divided into two parts: a universal engine and data specifying the particular control method to be applied. This opens up new options for engine reuse. Clear examples are the MPC controllers discussed in the section “Model-Predictive Control” and controllers based on expert-systems technology (Sanz et al., 1991).
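The engine/knowledge split can be illustrated with a minimal rule engine. The rule format, state keys, and action names below are hypothetical, chosen only to show that the plant-specific behavior lives in data while the engine is reused unchanged:

```python
# Sketch of the "universal engine + control knowledge" separation:
# one generic rule-evaluation engine, reused across plants, with all
# plant-specific behavior expressed as data.

def rule_engine(state, rules):
    """Generic engine: fire the action of the first rule whose condition holds."""
    for condition, action in rules:
        if condition(state):
            return action
    return "no-op"

# Reactor-specific knowledge: data handed to the engine, not code inside it.
reactor_rules = [
    (lambda s: s["temp"] > 400, "open-coolant-valve"),
    (lambda s: s["pressure"] > 8, "vent"),
]

# A different plant reuses the same engine with different knowledge.
boiler_rules = [
    (lambda s: s["level"] < 0.2, "open-feed-valve"),
]

action = rule_engine({"temp": 420, "pressure": 5}, reactor_rules)
```

Because the engine never changes, swapping `reactor_rules` for `boiler_rules` retargets the controller; this is the reuse opportunity the text points to, and the same separability is what later makes the knowledge itself inspectable by metacognitive layers.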
The next and most exciting step in the development of complex control systems is realizing that conceptualizing this separability (engine+knowledge) results in a new degree of controller receptivity to metacognitive processes (Meystel & Sanz, 2002). In the case of human control systems, this results in introspective skills as well as the well-known phenomena of memetics and culture (Blackmore, 1999).
What is most intriguing about the parallelism between technological industrial control systems and biological controllers is that they evolved almost entirely independently. To be sure, the emergence of technological controllers has had no impact on the evolution of control mechanisms in biosystems. However, the inverse is also largely true, with the probable exception of knowledge-based control, in which human expertise does play a role in the technical system.
This might be read as indicating that evolutionary pressure on control/cognition points in the direction of layered metacognitive controllers, i.e., consciousness (Sanz et al., 2002). To properly comprehend this occurrence, a more in-depth examination of the model-based nature of the control capacity is required.
Reasoning in Comparison
Reasoning, Daniel Krawczyk and Aaron Blaisdell, 2018.
Testing Animals for a Sense of Self
Theory of mind describes the ability to attribute mental states or intentions to oneself and others. The concept derives from philosophy of mind, a branch of study concerned with the ability to interpret the intentions of other creatures. Developmental psychology has widely adopted theory of mind to characterize young children’s ability to display empathy or understand others. Theory of mind has also been used to describe the differences in perceiving others observed in people with autism or schizophrenia. Such persons frequently struggle to infer the likely beliefs, intentions, or thoughts of others, making social interactions more challenging.
To have a theory of mind, one must recognize that there is a distinction between oneself and others. This appreciation may then support a person’s capacity to simulate, or to assume that the way he or she sees the world is comparable to the way another individual perceives it. Alternatively, information gleaned from social encounters might be used to infer that another person perceives things differently. A sense of self has been investigated in several animals using a technique known as the “mirror test,” first used by Gordon Gallup in 1970 to assess whether chimpanzees could distinguish themselves from other individuals. Gallup first administered the mirror test to two chimps (1970). A mirror was presented, and a range of behaviors, such as threat gestures and expressions, was observed. For the main test, Gallup placed a mark on the brow ridges of the chimps. When given a mirror, the chimps touched and examined the marks on their own bodies rather than investigating the strange chimp in the mirror (Fig. 4.15). Since then, numerous other animals, including elephants and magpies (a corvid), have passed the mirror test. In one of the most fascinating variations, captive dolphins were evaluated after a trainer drew tattoo-like patterns of lines and shapes on their backs. When a mirror was placed outside the glass of their tank, the dolphins passed the mirror test by swimming close to the mirror and twisting and turning at angles that would have allowed them to observe their freshly adorned bodies (Tschudin, Call, Dunbar, Harris, & van der Elst, 2001). In this demanding test, “sham” marks were made with a non-marking pen. The sham markings showed that the dolphins were not merely reacting to tactile sensations but appeared genuinely interested in seeing the new visual marks on their bodies. Having a sense of self is a key step toward understanding others and reasoning about their motives.
While the mirror test provides an intriguing signal regarding an organism’s capabilities, it may not be sufficient to determine whether an animal has a self-concept. The reality is that we cannot fully comprehend how the animal is reacting in this specific circumstance, or what factors may influence its behavior in the task.
Figure 4.15 Passing the mirror test is taken to indicate that an individual has a sense of self, a capacity that underpins more advanced social reasoning abilities.
Philosophical Aspects of Phenomenology
K. Mulligan, 2001, International Encyclopedia of the Social and Behavioral Sciences
5 Different Types of Coexistence
Scheler’s taxonomy of social coexistence draws on distinctions found in the works of Tönnies, Weber, and Simmel, as well as on his own philosophy of mind and moral psychology. With regard to distinct forms of collective intentionality, four categories of coexistence are distinguished: masses, communities, societies, and superordinate entities such as nation-states, state-nations, empires such as the Belgian Empire, and the Catholic Church. A mass is characterized by emotional contagion; its members do not behave as independent individuals. The community (families, tribes, clans, and peoples) exhibits expressions of collective intentionality such as sympathy, trust, piety, loyalty, and collective rather than individual responsibility. A community emerges.