| 15 November
PM Room 226
| Krisztina Kessel
| Institute of Philosophy, Eötvös University, Budapest
Extended Mind Theory and Counterarguments
In their 1998 paper entitled ‘The Extended Mind’, Andy Clark and David Chalmers presented a new concept of externalism they called ‘active externalism’, which has led to a two-decade-long discussion of their theory.
Philosophers of the 20th century have mostly argued that the brain is the embodied place of cognition; Clark and Chalmers, however, asserted that we have to accept
the ‘parity principle’, which states that “If,
as we confront some task, a part of the world functions as a process
which were it done in the head, we would have no hesitation in
recognizing as part of the cognitive process, then that part of the
world is (so we claim) part of the cognitive process”.
Defending the ‘parity principle’ against what they saw as ‘brain chauvinism’, Clark and Chalmers emphasized that we should let go of this long-held prejudice and allow the notion of ‘extended cognition’ to come more naturally to us.
They not only argued that cognitive function can extend beyond the confines of one’s own brain into the environment, but also set their view against the ‘passive externalism’ of Putnam and Burge. In their concept of active externalism, the relevant external features are active: they have a direct impact on the person then and there, influencing his or her behaviour, and they sit inside the cognitive loop, “not dangling at the other end of a long causal chain.” It is not merely that a causally active feature in the environment influences the brain; rather, the cognitive process is constituted by this active feature of the environment.
In my presentation I will briefly summarize the main arguments of Clark and Chalmers’ original essay; in addition, I will introduce some counterarguments by theorists who questioned, criticized or opposed their concept. Moreover, I will elaborate on the theoretical development of Clark and Chalmers’ concept with regard to the most recent contributions to EMT, and examine the question whether the acceptance of the extension of cognitive function necessitates the extension of beliefs and intentions as well.
| 22 November (Wednesday)
| Hajnal Andréka
| Rényi Institute of Mathematics, Budapest
| How different are classical and relativistic spacetimes?
This is part of ongoing joint research with Madarász, J. and Székely, G.
This research was inspired by László E. Szabó's paper [S].
We take classical (Newtonian, or pre-relativistic) spacetime to be the
geometry determined by the Galilean transformations. In more detail: Let
the universe of the structure CST be four-dimensional real space R4
together with the binary relation of simultaneity, ternary relation of
collinearity, and quaternary relation of orthogonality, where four
points are said to be orthogonal iff they are distinct and the first two
points and the other two points are pairwise simultaneous and they
determine orthogonal lines in the Euclidean sense. Let CST represent classical spacetime.
Relativistic spacetime is the geometry determined by the Poincaré
transformations. In more detail: The universe of the structure RST is
four-dimensional real space R4 and its relations are collinearity and
Minkowski-orthogonality (or, equivalently, the only binary relation of
light-like separability). Let RST represent special relativistic spacetime.
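As a minimal numeric sketch (illustrative only, not part of the talk), one can check which basic relations of the two structures are preserved by the corresponding transformations; the event coordinates below are arbitrary examples:

```python
# Illustrative check: Galilean boosts preserve simultaneity, while Lorentz
# boosts break it but preserve light-like separability.
# Events are (t, x, y, z) tuples; units are chosen so that c = 1.

def galilean_boost(event, v):
    """Galilean boost with velocity v along the x-axis: t' = t, x' = x - v*t."""
    t, x, y, z = event
    return (t, x - v * t, y, z)

def lorentz_boost(event, v):
    """Lorentz boost with velocity v along the x-axis (c = 1)."""
    t, x, y, z = event
    g = 1.0 / (1.0 - v * v) ** 0.5
    return (g * (t - v * x), g * (x - v * t), y, z)

def simultaneous(p, q):
    return abs(p[0] - q[0]) < 1e-9

def lightlike(p, q):
    dt, dx, dy, dz = (q[i] - p[i] for i in range(4))
    return abs(dt * dt - (dx * dx + dy * dy + dz * dz)) < 1e-9

p, q = (0.0, 0.0, 0.0, 0.0), (0.0, 1.0, 0.0, 0.0)  # simultaneous events
r, s = (0.0, 0.0, 0.0, 0.0), (1.0, 1.0, 0.0, 0.0)  # light-like separated events

assert simultaneous(galilean_boost(p, 0.5), galilean_boost(q, 0.5))
assert not simultaneous(lorentz_boost(p, 0.5), lorentz_boost(q, 0.5))
assert lightlike(lorentz_boost(r, 0.5), lorentz_boost(s, 0.5))
```

This only illustrates the contrast between the basic relations of CST and RST; it proves nothing about definitional equivalence.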
The question whether two structures are identical except for renaming of
basic notions is a central topic in definability theory of mathematical
logic. It is formulated as whether the two structures are
definitionally equivalent or not (see e.g., [Ho]).
Clearly, CST and RST are not definitionally equivalent in the
traditional Tarskian sense, since in CST one can define a nontrivial
equivalence relation (the simultaneity), while in RST one cannot define
any nontrivial equivalence relation on the universe. However, in
"modern" definability theory of mathematical logic one can define new
universes of entities, too (cf e.g., [H], [M] or [BH]). In this extended
modern sense, in RST one can define a new universe with nontrivial
equivalence relations on it (e.g., one can define a field isomorphic to
R4). In fact, both spacetimes can be faithfully interpreted into the
other. In the following, by definitional equivalence we always mean
definitional equivalence in the modern sense. Definitional equivalence
of two theories is a mathematical notion expressing "identity" of theories. Two theories are definitionally equivalent iff there is a one-to-one and onto correspondence between the defined concepts of the
two theories such that this correspondence respects the relation of
definability. The same notion is applicable to structures.
Theorem 1. CST and RST are not definitionally equivalent.
To prove Theorem 1, it is enough to prove that the automorphism groups
(i.e., groups of symmetries) of CST and RST are not isomorphic. The
automorphism group of CST is the general inhomogeneous Galilean group,
where "inhomogeneous" means that we include translations and "general"
means that we include dilations. Analogously, the automorphism group of
RST is the general inhomogeneous Lorentz group. The two automorphism
groups are not even definitionally equivalent. This follows from the
following theorem, which seems to be interesting in its own right. It says that the abstract automorphism groups of the two spacetimes contain exactly the same "content" as the geometries themselves; they "do not forget" anything.

Theorem 2.
(i) CST is definitionally equivalent to its automorphism group as well as to the inhomogeneous Galilean group.
(ii) RST is definitionally equivalent to its automorphism group as well as to the inhomogeneous Lorentz group.
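To give a flavour of how differently the two groups behave (an illustrative contrast only, not the proof of the theorems above), one can compare how boosts compose in each group; the velocity values are arbitrary examples with c = 1:

```python
def compose_galilean(v1, v2):
    """Resulting velocity when two Galilean boosts are composed."""
    return v1 + v2

def compose_lorentz(v1, v2):
    """Resulting velocity when two Lorentz boosts are composed (c = 1)."""
    return (v1 + v2) / (1.0 + v1 * v2)

# The one-parameter boost subgroups already multiply differently:
assert abs(compose_galilean(0.6, 0.6) - 1.2) < 1e-12  # no bound on speed
assert compose_lorentz(0.6, 0.6) < 1.0                # stays below c
```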
Similar investigations can be found, e.g., in [E], [EH] and [P].
[BH] Barrett, T. W., Halvorson, H., From geometry to conceptual relativity. PhilSci Archive, 2016.
[E] Ellers, E.W., The Minkowski group. Geometriae Dedicata 15 (1984), 363-375.
[EH] Ellers, E.W., Hahl, H., A homogeneous description of inhomogeneous Minkowski groups. Geometriae Dedicata 17 (1984), 79-85.
[H] Harnik, V., Model theory vs. categorical logic: two approaches to
pretopos completion (a.k.a. Teq). In: Models, logics, and
higher-dimensional categories: a tribute to the work of Mihály Makkai.
CRM Proceedings and Lecture Notes 53, American Mathematical Society,
[Ho] Hodges, W., Model theory. Cambridge University Press, 1993.
[M] Madarász, J., Logic and relativity (in the light of definability
theory). PhD Dissertation, ELTE Budapest, 2002. xviii+367pp.
[P] Pambuccian, V., Groups and plane geometry. Studia Logica 81 (2005), 387-398.
[S] Szabó, L. E., Does special relativity theory tell us anything new
about space and time?
| 29 November
PM Room 226
| István Danka and Péter Neuman
| Department of Philosophy and History of Science
Budapest University of Technology and Economics
Are we able to find out new things about Nature with the sole help of thought experiments?
In this paper we make an attempt to construe the epistemological status of scientific thought experiments in general. In other words, we will show that it is not impossible for a scientific thought experiment to provide knowledge which we cannot derive from the underlying theory using logical methods. Our
assessment can be viewed as a refutation of John D. Norton’s widely
quoted claim that thought experiments are “epistemologically
unremarkable”. Our treatment uses a special type of thought
experiment as counter-example to Norton’s claim. This is the Monte
Carlo simulation based method as used in elementary particle theory.
We will argue that this type of simulation provides knowledge about
Nature that could not have been derived solely from the underlying
exact theory. Monte Carlo simulation is a powerful simulation tool,
used extensively in different fields of science, e.g. meteorology,
economics, physics, mathematics, etc. For the sake of logical
completeness, we will first show that the kind of simulation based
methods can and should be categorised as thought experiments.
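As a generic illustration of the Monte Carlo idea (a toy example, unrelated to the particle-physics simulations discussed in the talk), one can estimate the area of a quarter circle, and hence pi, from random samples:

```python
import random

def monte_carlo_pi(n_samples, seed=0):
    """Estimate pi by drawing random points in the unit square and counting
    how many fall inside the quarter unit circle (which has area pi/4)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n_samples

# Many simulated random trials approximate a quantity that is not read off
# deductively from the definitions used in the program.
assert abs(monte_carlo_pi(100_000) - 3.14159) < 0.05
```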
Elementary particle physics heavily relies on the relativistic quantum theory of fields. It is known that this theory, while being extremely successful and accurate in predicting experimental results, suffers from serious ambiguities and mathematical inconsistencies.
Perfectly legitimate physical questions sometimes get completely
unphysical, false answers. By using the machinery of quantum field
theory, we may get infinite results for quantities that intuitively
and experimentally cannot be infinite. It is a great achievement of
20th century theoretical particle physics to get rid of these
infinities, although the solution leaves certain questions open, and
it is far from being mathematically (even physically) rigorous and
complete. One way of getting rid of the harmful infinities is to
define the theory on a discrete lattice (instead of the continuous
space-time) and by performing numerical calculations on this lattice
we attempt to infer the behaviour of the continuous world. Lattice
field theory (with or without computer simulation) is able to
provide results that are in certain cases absolutely remarkable because of their unprecedented accuracy, and sometimes to yield otherwise unattainable ontological conclusions. Kenneth Wilson’s 1974
pseudo-proof of quark confinement using lattice treatment is one
example of this, but we can also consider the recent result of
determining hadron masses using Monte Carlo methods. We will explain
that none of the above results can be reached just by relying on the
underlying continuous field theory. Moreover, the methods used here
are not simply numerical approximations within the paradigm of the underlying theory. We shall argue that deriving finite results from infinite ones cannot
be exclusively inferential, especially if we want to avoid the
classical problem of induction. However, it seems that computer
simulations viewed as thought experiments cannot grant empirical
knowledge without relying on empirical observations, provided we
clearly understand what we are in the process of studying. Although Norton’s claim is not tenable, its naive denial will not bring us
any closer to the solution.
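The lattice strategy sketched above (compute on a discrete lattice, then infer the behaviour of the continuum) can be illustrated with a toy example; the formula below is the standard dispersion relation of a free massless lattice mode, used here only as an assumed, simplified stand-in for the actual lattice computations:

```python
import math

def lattice_dispersion(k, a):
    """Energy of a free massless mode with momentum k on a lattice with
    spacing a: omega(k) = (2/a) * sin(k*a/2). The continuum relation is
    omega(k) = k, recovered in the limit a -> 0."""
    return (2.0 / a) * math.sin(k * a / 2.0)

k = 1.0
for a in (1.0, 0.1, 0.01):
    # The lattice value approaches the continuum value k as the spacing
    # shrinks; extrapolating this trend is the inference in question.
    print(a, lattice_dispersion(k, a))
```

The epistemological point is that the continuum answer is not computed directly; it is inferred from a sequence of finite, discretised surrogates.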
We will show, however, that the empirical versus a priori distinction is a false dilemma. The assessment is based on Kant’s view that there exist non-empirical predicates that provide new knowledge. The tenet of the synthetic a priori judgements has
its impacts on the understanding of thought experiments. We shall
reconstruct these impacts in both historical and theoretical
contexts. Following Hintikka’s approach to Kant, we shall interpret
Kant using the terminology of possible worlds. We will see that
Kant’s synthetic a priori judgements provide non-empirical fresh
knowledge, because the thought experiments establish different
possible worlds, in which the laws of physics are valid. If these
worlds are “close enough” to each other, the method will provide
valuable and fresh knowledge about our world, too. The treatment will
thus extend our knowledge about Nature.
It can be deemed problematic from a historical point of view, though, that Kant himself did not study thought experiments. On the other hand, one of his contemporaries, the physicist Hans Christian Ørsted, laid the base of a Kantian theory of thought experiments already in 1802. His ideas remained practically unnoticed and he had no followers, most probably
because he did not make any distinction between physical and
mathematical thought experiments, neither did he distinguish a priori
and empirical judgements. Defending Ørsted’s results, we shall show that his interpretation of Kant is not only completely proper, faithful to Kant’s tradition, but also at least partially tenable for modern thought experiments.
Ørsted’s original problem was the question of the infinitesimal. He studied the validity of knowledge gained through inferences from convergences within the framework of calculus. This problem is analogous to the one we encounter in the case of Monte Carlo simulation. This is not a surprise; however, we shall make it clear that the analogy is not perfect.