Preamble

Axiomatic approach to quantization
Quantum mechanics has been developed as a theory describing microscopic (atomic and subatomic) processes.
In textbooks it is sometimes presented as being derivable from classical mechanics by a substitution,
referred to as quantization, like
p → −iħ∂/∂q, ħ = h/2π,
in which h is Planck's constant
(compare). However, in such a "derivation" there is always a certain arbitrariness:
there are many different ways to quantize classical quantities.
A quantization scheme can be justified only by the fact that its result yields a useful
description of microscopic reality. For this reason it seems better
to introduce quantum mechanics in an axiomatic way, as is done in the following, letting experiment
decide about its applicability.
 Measurement in the microscopic domain
Quantum mechanics (as well as relativity theory) distinguishes
itself in a fundamental way from classical
theories by hinging on the notion of measurement.
Whereas in classical theories reference to `measurement'
is dispensable, this is no longer the case in quantum mechanics. Here
the so-called measurement problem
has been a long-standing source of discomfort.
In particular the suggestion
of the `standard formalism of quantum mechanics' that `in measurement processes
nature would behave differently from nonmeasurement processes (compare)'
has been a bone of contention.
Far from concluding from this that measurement should be
exorcized from the foundations of quantum mechanics, I will
abide in the following by the spirit of
Bohr's fundamental insight
(however, not with the specific way Bohr implemented that insight) that
the only way of obtaining knowledge about microscopic reality
is by performing measurements that are sensitive to the microscopic
information, and that are able to amplify this information to the macroscopic
dimensions we are able to observe. Unless we find ways to compensate for the influence of the particular
way we have performed our measurements, the knowledge obtained on the microscopic object
must be dependent on it. It seems to me that Bohr's conclusion is justified that
`Einstein's ideal of having an objective
description of microscopic physical reality' is simply unattainable, and that
the suggestion both in quantum mechanics textbooks as well as in the scientific literature
that `(standard) quantum mechanics can be seen as such an objective description'
is misleading.
In the following this view will be illustrated by

i) duly taking into account the `influence of measurement
on the experimentally obtained empirical data';

ii) learning from a thorough analysis of measurement procedures
`which elements of the mathematical formalism are essential, and
which should be dispensed with', thus demonstrating a necessity to
generalize the formalism;

iii) performing an analogous analysis of `interpretations
of the mathematical formalism' by trying to evade any wishful
thinking in estimating the `relation between the mathematical formalism
and physical reality', thus being able to
prevent paradoxical conclusions based on too ``classically realist'' interpretations,
without being obliged to resort
to the vagueness and ambiguity of instrumentalist ones.
 Physics and philosophy
Physicists can learn from philosophers, and vice versa. The following issues will be
particularly important:

Ontology versus epistemology
The distinction will have to be observed between `what is'
and `what we know'^{1},
dealt with by the philosophical disciplines of ontology and
epistemology, respectively.
Failure to take this distinction into account
has caused much confusion with respect to the meaning of quantum mechanics. In particular it has
led, in too uncritical a way, to the general acceptance of
a `realist interpretation of the mathematical formalism of quantum mechanics',
to be criticized in the following.
An alternative interpretation, referred to as
empiricist interpretation,
is proposed, which is better able to take this distinction into account.
 `Microscopic reality' versus the `phenomena'
Logical positivism/empiricism has had a large influence during the years
quantum mechanics was being developed.
It, in particular, has boosted an empiricist attitude,
to the effect that `only the phenomena' were deemed
worthy of scientific attention. The acceptance of a
realist
interpretation can be seen as a physicist's revolt
against this empiricism, to the effect that quantum mechanics is thought
to describe microscopic reality itself rather than `just the phenomena'.
However, it may very well be that by this revolt
valuable philosophical insights are put aside rather too drastically. Even though
logical positivism/empiricism is an obsolete philosophy by now (compare),
its influence has been
appreciable in selecting the standard formalism of quantum mechanics as a means
of "understanding" the experimental phenomena of the day. Although we no longer require
that quantum mechanics should describe `just the phenomena', it is not unreasonable
to surmise that quantum mechanics `just describes the phenomena in a de facto way, simply
because the theory has been developed to do so in the first place'.
From the history of science we can learn that it is often just like that: when a new domain of physics
is entered, the first thing to do is `to yield a description of the phenomena'; subsequently, questions about `causality'
often induce speculations about the `reality behind the phenomena'.
In order to describe that `reality behind the phenomena' we need to develop (sub)theories
(for instance, the classical theory of
rigid bodies describes a billiard ball only as far as it behaves as a rigid body;
we need an `atomic solid state subrigid body theory' to include in the description also atomic vibrations).
In contrast to `logical positivism'
an `empiricist interpretation of quantum mechanics' leaves open an analogous possibility
of subquantum theories describing a
`reality behind the phenomena described by quantum mechanics'.
The plausibility of these
ideas has been one reason (next to `indications in the same direction stemming
from a generalization of the mathematical formalism of quantum mechanics')
to develop an empiricist interpretation of quantum mechanics as an alternative to the
(generally accepted) realist one.

Empiricist versus rationalist influences
In the present account of quantum mechanics its relation to empiricism
will play an important role. As is well known, it was empiricism
that induced Heisenberg's conclusion that quantum mechanics had to describe
`just the phenomena' rather than `intrinsic properties of microscopic objects'
(thus starting `matrix mechanics'). For Heisenberg
measurement result a_{m}
did not refer to an `(allegedly unobservable) property of the microscopic object possessed
prior to the measurement', but to a `property of that object observed in the
final stage of the measurement' (e.g. a phenomenon like a flash on a
scintillation screen, or a track in a Wilson chamber).
However, as a consequence of Heisenberg's anti-empiricist thesis
that `theory decides what can be measured' (purporting to understand one of the key issues
of quantum mechanics, viz. complementarity)
it is questionable which part empiricism has really played in Heisenberg's (and others') contributions to the development of
quantum mechanics. It seems that rationalist influences like the availability of
a useful mathematical theory (viz. the theory of matrices) have been equally influential. Probably the development of
quantum mechanics was above all a pragmatic rather than a purely empiricist affair.
 Theory-ladenness of observation
One purpose of the present account is to demonstrate that empiricism does
play an important role in the interpretation of quantum mechanics, but
that this empiricism cannot be the logical positivist/empiricist one. It is here that
physicists can learn from philosophers, the latter having realized that it is impossible to completely base a theory on
observation, thus opening doors for anti-empiricist theses like Heisenberg's.
The insight that measurement in quantum mechanics needs to be described by quantum mechanics itself rather than
by classical mechanics (as advocated by the
Copenhagen interpretation), can be seen as an application within physics
of the philosophical discovery of the theory-ladenness of observation.
On the other hand, philosophers can learn from physicists as well, since quantum mechanics offers a splendid
opportunity to figure out in a concrete example how `theory-ladenness of observation' can be implemented into a
physical theory without succumbing to the danger of metaphysics involved in the concomitant circularity.
 Inadequacy of the standard formalism
`Simultaneous or joint non-ideal measurements of incompatible
standard observables' are
examples of measurements that are not described by the `standard formalism of quantum mechanics'.
Such joint measurements were studied during the early stages of
the development of quantum mechanics in a rather informal way
as `thought experiments', like e.g. the
double-slit experiment or
Heisenberg's γ-microscope.
At present such experiments
are being carried out as real experiments. They turn out to defy the `standard formalism', requiring a
generalized formalism
for their description, the latter formalism corroborating the early insights about `mutual disturbance in a joint measurement
of incompatible standard observables' based on the `thought experiments'.
In the formal analysis based on the `generalized formalism of quantum mechanics'
an important part is played by the Martens inequality,
being an adequate mathematical expression of the
`mutual disturbance in a joint measurement of incompatible standard observables'
predicted by the informal analyses of the `thought experiments'.
This result clarifies a long-standing mystification, to the effect that
it is not the Heisenberg-Kennard-Robertson inequality
(derived from the standard formalism) but the Martens inequality
that should be seen as an expression of the Copenhagen notion of
`complementarity as mutual disturbance in a joint measurement
of incompatible observables'.
Standard formalism of quantum mechanics

Quantum mechanical standard observable
(restricting ourselves to discrete nondegenerate spectra):
A `quantum mechanical standard observable' is mathematically represented^{37} by a
Hermitian operator A = ∑_{m} a_{m} E_{m}, with eigenvalues a_{m}, and
E_{m} = |a_{m}><a_{m}| the projection operator on its eigenvector
|a_{m}>, satisfying E_{m}^{2} = E_{m}. The set of projection operators
{E_{m}} is called the spectral resolution of A. The set {E_{m}} is also referred to as
an `orthogonal resolution of the identity operator', satisfying
∑_{m} E_{m} = I,  E_{m}E_{m′} = E_{m} δ_{mm′}.
The eigenvalues a_{m} are the possible measurement results
of the standard observable, the latter being referred to by its mathematical representation A,
or rather by the corresponding orthogonal resolution of the identity {E_{m}}
(in the generalized formalism
also nonorthogonal resolutions will be allowed).
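As a minimal numerical sketch (not part of the original text), the spectral resolution of a standard observable can be checked with NumPy; the three eigenvalues below are hypothetical:

```python
import numpy as np

# hypothetical eigenvalues a_m for a 3-level example
eigenvalues = np.array([1.0, 2.0, 3.0])
basis = np.eye(3)                      # eigenvectors |a_m> as columns

# projection operators E_m = |a_m><a_m|
projectors = [np.outer(v, v) for v in basis.T]

# A = sum_m a_m E_m
A = sum(a * E for a, E in zip(eigenvalues, projectors))

# orthogonal resolution of the identity
assert np.allclose(sum(projectors), np.eye(3))                    # sum_m E_m = I
assert np.allclose(projectors[0] @ projectors[1], 0.0)            # E_m E_m' = O, m != m'
assert np.allclose(projectors[0] @ projectors[0], projectors[0])  # E_m^2 = E_m
```

The same construction works in any orthonormal eigenbasis; the identity basis is chosen only for brevity.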

Quantum mechanical state, the superposition principle:
A `quantum mechanical state' is mathematically represented by a `state vector |ψ>
(pure state)' or `density operator (or statistical operator) ρ (mixture)';
state vectors and density operators
are normalized according to <ψ|ψ> = 1 and Tr ρ = 1, respectively,
Tr ρ (the trace of ρ) being defined according to
Tr ρ = ∑_{m} <a_{m}|ρ|a_{m}>.
The vector character of quantum mechanical pure states is in agreement with the
superposition principle, asserting the additivity of state vectors to the effect that
c_{1}|ψ_{1}> + c_{2}|ψ_{2}>,
if suitably normalized, is a possible state vector if |ψ_{1}>
and |ψ_{2}> are.
An important application of the `superposition principle' is the possibility of representing an arbitrary state vector
|ψ> as a superposition of eigenvectors of an observable A according to
|ψ> = ∑_{m} c_{m}|a_{m}>,  ∑_{m} |c_{m}|^{2} = 1.
State vectors are elements of a linear vector space (more particularly a
Hilbert space).
The `density operator corresponding to the state vector
|ψ>' is given by
|ψ><ψ| = ∑_{m} |c_{m}|^{2} |a_{m}><a_{m}| + ∑_{m≠m′} c_{m}c_{m′}^{*} |a_{m}><a_{m′}|,
the latter terms being referred to as `cross terms' or `interference terms'.
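The split of |ψ><ψ| into a diagonal part and interference terms can be made concrete in a small sketch (not from the text; the two amplitudes c_m are hypothetical):

```python
import numpy as np

# hypothetical amplitudes c_m of |psi> = sum_m c_m |a_m>, two dimensions
c = np.array([1.0, 1.0j]) / np.sqrt(2.0)
rho = np.outer(c, c.conj())            # |psi><psi|

diagonal = np.diag(np.abs(c) ** 2)     # sum_m |c_m|^2 |a_m><a_m|
cross_terms = rho - diagonal           # interference terms (m != m')
```

The off-diagonal entries c_m c_m'^* are exactly what a classical mixture of the |a_m> would lack.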
 Entangled states
The existence of `entangled states' is a consequence of the `superposition principle'
as applied to a system of two (or more) particles (or degrees of freedom)^{88}. Let
|ψ_{1a}> and |ψ_{1b}> be states of
particle 1, and let |ψ_{2a}> and |ψ_{2b}> be states of a
second particle. Then the superposition principle allows us to form the state |ψ_{12}> =
c_{a}|ψ_{1a}>|ψ_{2a}> + c_{b}|ψ_{1b}>|ψ_{2b}> (c_{a} and c_{b}
complex numbers) from the
product states |ψ_{1a}>|ψ_{2a}> and
|ψ_{1b}>|ψ_{2b}>. Such a superposition
is called an `entangled state'. A two-particle state
|ψ> is entangled if it cannot be represented as a product of a state of particle 1
and a state of particle 2. Hence, entangled states are correlated states;
note, however, that not all correlated states are entangled.
Thus, a state described by the density operator
ρ = p_{a}ρ_{1a} ⊗ ρ_{2a} + p_{b}ρ_{1b} ⊗ ρ_{2b},  ρ_{ix} = |ψ_{ix}><ψ_{ix}|, i = 1,2, x = a,b, p_{x} probabilities,
is correlated but not entangled,
as in this state the correlation is `classical correlation'. The notion of `entanglement' is restricted to `quantum correlation',
stemming from typically quantum mechanical properties of the `cross terms' (compare) in the density operator
|ψ_{12}><ψ_{12}| corresponding to a
pure state |ψ_{12}>.
Entangled states are often seen as manifestations of a certain
inseparability^{87} of the particles that are involved,
because in an entangled state it is impossible to attribute a welldefined state to each of the particles separately.
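The product/non-product criterion above can be tested numerically: writing |ψ_12> = ∑_ij C[i,j] |i>_1 |j>_2, the state is a product exactly when the coefficient matrix C has rank one. A small sketch (not from the text; both example states are hypothetical):

```python
import numpy as np

def schmidt_rank(C, tol=1e-12):
    """Number of nonzero Schmidt coefficients of the coefficient matrix C,
    where |psi_12> = sum_{ij} C[i, j] |i>_1 |j>_2; rank 1 means a product state."""
    return int(np.sum(np.linalg.svd(C, compute_uv=False) > tol))

# (|0>|0> + |1>|1>)/sqrt(2): entangled (rank 2)
bell = np.array([[1.0, 0.0], [0.0, 1.0]]) / np.sqrt(2.0)

# |0>_1 (0.6|0> + 0.8|1>)_2: a product state (rank 1)
product = np.outer([1.0, 0.0], [0.6, 0.8])
```

The singular values of C are the Schmidt coefficients, so the rank test is just a singular value decomposition.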

Probability distribution of measurement results of a measurement of
standard observable A performed in state |ψ> or ρ:
p_{m} = |<a_{m}|ψ>|^{2}
or p_{m} = Tr ρE_{m}.
This is often referred to as the Born rule.
The quantity <a_{m}|ψ> is referred to as a probability amplitude.

Expectation value of measurement results of a measurement of
standard observable A performed in state |ψ> or ρ:
<A> = ∑_{m} p_{m} a_{m} = <ψ|A|ψ> or <A> = Tr ρA.
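The Born rule and the expectation value can be sketched together in a two-dimensional example (not from the text; the state and the eigenvalues a_m are hypothetical):

```python
import numpy as np

psi = np.array([0.6, 0.8])             # hypothetical state in the eigenbasis of A
a = np.array([1.0, -1.0])              # hypothetical eigenvalues a_m
basis = np.eye(2)

# Born rule: p_m = |<a_m|psi>|^2
p = np.abs(basis.T @ psi) ** 2

# equivalent density-operator form: p_m = Tr rho E_m
rho = np.outer(psi, psi.conj())
p_rho = np.array([np.trace(rho @ np.outer(v, v)).real for v in basis.T])

# expectation value: <A> = sum_m p_m a_m = <psi|A|psi>
A = np.diag(a)
expectation = np.vdot(psi, A @ psi).real
```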

Time evolution:

Pure states: Schrödinger equation:
iħ d|ψ>/dt = H |ψ>,
H the Hamilton operator. As is usual, in the following I put ħ = 1.
If H is not explicitly time-dependent, then the solution of the Schrödinger equation can be written according to
|ψ(t)> = e^{−iHt} |ψ(0)>.
The linearity of the Schrödinger equation warrants general validity of the superposition principle.

Mixtures: Liouville-von Neumann equation:
idρ/dt = [H,ρ]_{−}, [H,ρ]_{−} =
Hρ−ρH,
having for time-independent H the solution
ρ(t) = e^{−iHt}ρ(0)
e^{iHt}.
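Both solutions can be checked in a minimal sketch (not from the text; the Hamiltonian, time, and initial state are hypothetical, with ħ = 1 as in the text):

```python
import numpy as np

# hypothetical time-independent Hamiltonian (Hermitian), units with hbar = 1
H = np.array([[1.0, 0.5], [0.5, -1.0]])
t = 0.7

# U = exp(-iHt) from the eigendecomposition of H
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

psi0 = np.array([1.0, 0.0], dtype=complex)
psi_t = U @ psi0                        # solution of the Schrödinger equation

rho0 = np.outer(psi0, psi0.conj())
rho_t = U @ rho0 @ U.conj().T           # solution of the Liouville-von Neumann equation
```

Unitarity of U guarantees that normalization (and purity) is preserved under both equations.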

Remarks on the standard formalism:
This is essentially all there is to the standard formalism of quantum
mechanics. Of course, for practical applications the formalism has to be implemented
in ways appropriate for that particular application. I restrict myself here to
a small number of such applications.

Schrödinger and Heisenberg pictures
The time dependence of quantum mechanical measurement results can be expressed in two
equivalent ways, known as the `Schrödinger picture' and the `Heisenberg picture', respectively.

Schrödinger picture:
<A>(t) = <ψ(t)|A|ψ(t)>
or <A>(t) = Tr ρ(t)A,
|ψ(t)> and ρ(t) solutions of the
Schrödinger equation
and the Liouville-von Neumann equation, respectively.

Heisenberg picture:
<A>(t) = <ψ|A(t)|ψ>,  |ψ> = |ψ(0)>, or <A>(t) = Tr ρA(t),
ρ = ρ(0),
observable A(t) being the solution
A(t) = e^{iHt}A(0) e^{−iHt}, A(0) = A,
of the equation
idA(t)/dt = −[H,A(t)]_{−},
in which [H,A(t)]_{−} = HA(t)−A(t)H.
The sign difference with the Liouville-von Neumann equation may serve as a warning that
the density operator is not an ordinary quantum mechanical observable, even though
it is Hermitian: its time development is different.
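The equivalence of the two pictures is easy to verify numerically; the following sketch (not from the text, with a hypothetical Hamiltonian, observable, and state) computes <A>(t) both ways:

```python
import numpy as np

H = np.array([[0.0, 1.0], [1.0, 0.0]])   # hypothetical Hamiltonian (hbar = 1)
A = np.array([[1.0, 0.0], [0.0, -1.0]])  # hypothetical observable
t = 1.3

w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T   # U = exp(-iHt)

psi0 = np.array([0.8, 0.6], dtype=complex)

# Schrödinger picture: evolve the state, keep A fixed
schrodinger = np.vdot(U @ psi0, A @ (U @ psi0)).real

# Heisenberg picture: evolve the observable, A(t) = exp(iHt) A exp(-iHt)
A_t = U.conj().T @ A @ U
heisenberg = np.vdot(psi0, A_t @ psi0).real
```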

Von Neumann's projection postulate
Often also von Neumann's projection (or reduction) postulate is taken as
part of the standard formalism of quantum mechanics, to the effect that during the
measurement the state changes in a way assumed not to be described by
a Schrödinger equation. Von Neumann's projection postulate
may be encountered in a strong or in a weak form:

Strong von Neumann projection:
It is assumed that during a `measurement of standard observable
A yielding measurement result a_{m}' the state vector |ψ>
undergoes a discontinuous transition
|ψ> = ∑_{m} c_{m}|a_{m}> → |a_{m}>.

Weak von Neumann projection:
In `weak von Neumann projection'
all possible measurement results a_{m} (rather than just a single one)
are taken into account in the transition from the initial to the final state. Thus
|ψ> = ∑_{m} c_{m}|a_{m}> → ρ = ∑_{m} |c_{m}|^{2} |a_{m}><a_{m}|,
the final state now being described by a density operator^{24}.
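Weak von Neumann projection amounts to discarding the cross terms of |ψ><ψ|; a minimal sketch (not from the text; the amplitudes c_m are hypothetical):

```python
import numpy as np

# hypothetical expansion coefficients c_m of the initial state
c = np.array([0.6, 0.8])
basis = np.eye(2)

# weak von Neumann projection: |psi> -> rho = sum_m |c_m|^2 |a_m><a_m|
rho_final = sum(abs(cm) ** 2 * np.outer(v, v) for cm, v in zip(c, basis.T))
```

The resulting ρ is diagonal in the eigenbasis of the measured observable, with the Born-rule probabilities on the diagonal.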

Objections to the projection postulate
Since it assumes `measurement processes' to differ in an essential way from `nonmeasurement processes',
`strong von Neumann projection' is often seen as a strange element within quantum mechanics.
Although I do not consider this a convincing argument because it neglects
the very special requirements to be met by the `interaction
between object and measuring instrument' in order that a process may function as a `measurement',
I agree with its conclusion: by invoking the `human observer as an active principle'
(compare) `strong von Neumann projection' has exerted a pretty harmful
influence on our understanding of the meaning of quantum mechanics^{55}.
I consider von Neumann's projection postulate (either strong or weak) to be
neither a necessary (i) nor a useful (ii) property of a quantum mechanical measurement:
i) It is not a necessary property, because it is a
consequence of a certain `interpretation of the quantum
mechanical state vector' (viz. a realist version either of an
individualparticle interpretation (in case of strong projection),
or of an ensemble interpretation (in case of weak projection)) that is dispensable.
ii) It is not useful since
by most practical experimental measurement procedures it is
not even satisfied in the weak sense. Although to a certain extent `weak projection'
can be justified by means of an explicit account of the interaction of object and measuring
instrument (compare), it turns out
that `weak von Neumann projection' is too restrictive to encompass even the most common measurement
procedures.
Thus, the Stern-Gerlach experiment (often presented as a paradigm
of measurements satisfying von Neumann projection) does not exactly satisfy the `weak projection
postulate', `deviation from projection' even being a crucial precondition of its functioning
as a `measurement of spin' (e.g. Publ. 37).
Another example is the ideal photon
counter which detects photons by absorbing them. Hence, ideally the
final state of the electromagnetic field is the vacuum state |0>
rather than the `eigenvector |n> of the photon number observable
corresponding to the number n of detected photons'. Since photons
that have survived the detection process are not registered at
all, it follows that also the functioning of a photon counter
depends crucially on not satisfying von Neumann's
projection postulate: it is operating better to the extent it is
violating the projection rule.
 Remarks on the projection postulate:
 Its experimental origin
It must be realized that `von Neumann's projection postulate' stems from the early days of quantum mechanics
in which experiments were mostly scattering experiments (compare), measuring
differential scattering cross sections
by determining the relative numbers of particles scattered into solid angles around
a scattering sample.
Von Neumann's projection postulate was inspired in particular by the
Compton-Simon experiment, in which
conservation of energy and momentum is tested in a collision of an electron and a photon.
Using the conservation laws (found in the experiment to hold),
von Neumann was able to infer photon momentum from a measurement of
electron momentum, thus concluding that, as a consequence of the measurement of electron momentum,
the photon wave function must have collapsed to `the eigenstate
of its momentum observable agreeing with the value found experimentally for the electron'.
 Nondisturbing character of the ComptonSimon experiment
It is important to note here that in von Neumann's application of the Compton-Simon experiment
the disturbing influence of the measurement interaction is evaded because the
measurement is performed on a particle (the electron) that is different from the object (the photon)
the wave function of which is assumed to collapse.
Here the similarity with the EPR problem
should be noticed, in which Einstein interpreted a `measurement
of an observable of particle 1' as a `measurement of a (strictly correlated) observable of particle 2', thus
assuming the measurement procedure not in any way to interact with the measured object.
Although application of `strong von Neumann projection' to this particular experiment is justified
(cf. Publ. 57), it is impossible to generalize this feature to
a property of an arbitrary quantum measurement.
By supposing his `projection postulate to be valid for any quantum mechanical measurement'
von Neumann committed an `unjustified generalization': most measurements are not of the Compton-Simon/EPR type
in which the influence of the interaction of the measuring instrument with the microscopic object can be ignored.
In general the measuring instrument exerts a disturbing influence, causing the measurement
to be of the second kind rather than satisfying von Neumann's prescription.
This explains the frequent `deviation from von Neumann projection' found
in actually performed measurements.
Probably von Neumann was trapped into his unjustified generalization by the
classical paradigm,
suggesting that if the outcome of a measurement is a_{m} then after the
measurement the object must have that same value with certainty (and consequently must be described by the corresponding
eigenfunction). The failure of von Neumann's projection postulate may be seen as a first indication of the inadequacy of
the realist interpretation of quantum mechanics,
to be criticized here.
 Von Neumann projection is a `preparation principle'
rather than a `measurement principle'
Von Neumann projection is a `preparation principle' rather than
a `measurement principle': it is a procedure describing
conditional preparation in
measurements of the first kind^{26}.
`First kind measurements', however,
are virtually nonexistent due to the influence exerted on the microscopic object by the measurement.
Only in very special cases like the Compton-Simon experiment and the EPR experiment,
in which care is taken that the preparation is not influenced by the measurement, may
von Neumann projection be applicable (compare).
Consistency problems of `quantum mechanical measurement theory' arising because of von Neumann
projection, could better be dealt with by `abandoning the
postulate as a measurement principle', rather than by
ignoring the existence of welltried measuring instruments
like photon counters (compare).

`Von Neumann projection' versus `faithful measurement'

`Von Neumann's projection postulate' should be distinguished from a principle of
`faithful measurement', to the effect that
a faithful measurement of an observable would reveal the `value the measured observable had
immediately preceding the measurement'^{2}
(rather than `preparing that value after the measurement',
as required by von Neumann projection).

Remarks on `Von Neumann projection versus faithful measurement':

Contrary to `von Neumann projection', `faithful measurement' is a `measurement principle',
inspired by the idea, implicit in the classical paradigm, that a measurement should reveal
`reality as it objectively was prior to measurement'.
The concept of `faithful measurement'
is in agreement with the possessed values principle
(implying that quantum mechanical measurement result a_{m} is found
because it can be attributed to the microscopic object as an
`objective property, possessed prior to and independent of measurement'). Hence, the
impossibility to implement the
possessed values principle into quantum mechanics makes `faithful measurement'
impossible within that theory.

However, this need not be the end of `faithful measurement'. If the possibility of
subquantum theories is acknowledged,
the `faithful measurement principle' may refer to `properties that are not described
by quantum mechanics' (e.g. hidden variables or subquantum properties).
In the following this
will be considered a possibility, thus attributing to
the `faithful measurement principle' an applicability as a measurement principle that is not reflected by
quantum mechanics.

`Von Neumann projection' and `faithful measurement' are sometimes combined into a view of
quantum measurement as a filter or sieve which selects the microscopic object
into a cell corresponding to measurement result a_{m} without changing that value.
This, actually, was Einstein's
position in his controversy with the
Copenhagen interpretation over the `completeness of quantum mechanics'.
It is interesting to notice that within the discussion of this controversy
there is an interplay between `Einsteinian application of faithful measurement in the initial stage of the experiment'
(which is in disagreement with the `Copenhagen idea that before the measurement an
observable does not have a welldefined value') and `Copenhagen application of von Neumann projection in the final stage'
(preparing the final state of a distant object in a nonlocal way not appreciated by Einstein). By Einstein
this interplay is used to propose a
tradeoff between `completeness' and `locality'.

Relative frequencies and probability distribution
Quantum mechanical probability p_{m} is connected with
experiment by comparing it with the relative frequency
N_{m}/N that value a_{m} is obtained when a measurement of observable A
is performed a large number (N) of times. Thus,
p_{m} = lim_{N→∞}N_{m}/N.
We have
p_{m} ≥ 0,
∑_{m} p_{m} = 1.
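The convergence of relative frequencies to the Born-rule probabilities can be illustrated by simulation; a sketch (not from the text; the distribution and sample size are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.36, 0.64])             # hypothetical Born-rule probabilities p_m
N = 100_000                            # number of repeated measurements

# simulate N independent measurements and form relative frequencies N_m / N
outcomes = rng.choice(len(p), size=N, p=p)
rel_freq = np.bincount(outcomes, minlength=len(p)) / N
```

For finite N the frequencies only approximate p_m, in line with the practical-stability criterion discussed below.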

Remarks on `relative frequencies and probability distribution':

In actual practice relative frequencies N_{m}/N are always determined for finite N.
For this reason the limit
N →∞ is not to be taken in the mathematical sense because this limit is not attainable in actual practice.
As a practical criterion it is sufficient to require that the
number N be large enough to allow for `sufficient stability of the relative
frequencies N_{m}/N if the number N is increased'
so as to allow a determination of the limit with sufficient accuracy.

It is far from selfevident that the limit exists in the above sense. Its existence requires that the
experiment be repeatable so as `to make individual preparations
identical in a certain sense' (thus, consecutive position
measurements of an electron freely traveling into outer space
will not yield such a limit). A physical condition, comparable to the
`ergodicity condition^{0}
of statistical thermodynamics', will probably be necessary. In general it is taken for granted that
the physical conditions for the existence of the limit are
satisfied, i.e., that presentday measurements on microscopic
systems are within the domain of application of quantum mechanics (compare).

(In)compatibility of observables

The difference between classical and
quantum mechanics is embodied by the possibility that, contrary
to classical quantities, two quantum mechanical standard
observables A and B may be incompatible, i.e.
the Hermitian operators do not commute,
their commutator [A, B]_{−} ≡
AB − BA satisfying
[A, B]_{−} ≠ O.
Position operator Q and momentum operator P constitute an example
of a pair of incompatible observables, satisfying [Q, P]_{−} = iI.
Observables satisfying a similar maximal amount of incompatibility
(in the sense that their eigenfunctions are maximally different)
are sometimes called canonically conjugate or `complementary' observables
(however, see footnote 48).
For compatible standard observables (for which [A,
B]_{−} = O) the definition
of a probability distribution can
be generalized to a `joint probability distribution' according to
p_{mn} = |<a_{mn}|ψ>|^{2}
or p_{mn} = Tr ρE_{m}F_{n},
where |a_{mn}> are the joint eigenvectors of
A and B (the observables having spectral resolutions
{E_{m} = |a_{m}><a_{m}|} and
{F_{n} = |b_{n}><b_{n}|}, respectively). Compatible observables are
`jointly or simultaneously measurable', the `joint measurement' yielding
p_{mn} = Tr ρE_{m}F_{n}
as `joint probability distribution' expressing the `correlation of the measurement results a_{m}
and b_{n} obtained in the joint measurement of A and B'.
The observable AB is a correlation observable.
Joint measurements of compatible observables are `mutually nondisturbing' in the sense that
the marginal distributions ∑_{n}p_{mn} and
∑_{m}p_{mn} yield the results
Tr ρE_{m} and
Tr ρF_{n}, respectively, obtained if
A or B is measured separately.
According to the principle of local commutativity `observables that are measured
in causally disjoint regions of spacetime' are mutually compatible, and, hence, are `mutually nondisturbing'.
Note that, as a consequence, only observables measured in the same region can be incompatible.
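For compatible observables the joint distribution and its mutually non-disturbing marginals can be sketched directly (not from the text; the joint state on a 2×2 product basis is hypothetical):

```python
import numpy as np

# hypothetical joint state on C^2 (x) C^2, coefficients indexed by (m, n)
psi = np.array([0.5, 0.5, 0.5, 0.5])
p_joint = (np.abs(psi) ** 2).reshape(2, 2)   # p_mn in the joint eigenbasis |a_mn>

# marginals reproduce the distributions of separate measurements of A and of B
marginal_A = p_joint.sum(axis=1)       # sum_n p_mn = Tr rho E_m
marginal_B = p_joint.sum(axis=0)       # sum_m p_mn = Tr rho F_n
```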
According to
Gleason's theorem^{0}
within the domain of quantum mechanics a `probability distribution' is a linear functional of the
density operator. This implies that within the domain of application
of the `standard formalism of quantum mechanics' a `simultaneous or
joint measurement of incompatible observables' is impossible.
Although nonlinear functionals exist that could serve as
joint probability distributions of incompatible standard
observables, these are generally thought to be unacceptable^{51}.
Note that within the generalized formalism of quantum mechanics
linear functionals of the density operator do exist, and have a physical meaning,
also if incompatible observables are involved^{86}.

The Heisenberg-Kennard-Robertson inequality
The Heisenberg-Kennard-Robertson inequality
(the so-called uncertainty relation) of standard
observables A and B is given by
ΔAΔB ≥ ½|<[A,B]_{−}>|,
in which ΔA and ΔB
are standard deviations of measurement results,
defined according to
(ΔA)^{2} =
<(A − <A>)^{2}> =
∑_{m} p_{m}a_{m}^{2} −
(∑_{m} p_{m}a_{m})^{2}
(and analogously for B).
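The inequality can be verified numerically; a sketch (not from the text) using the Pauli matrices as a hypothetical pair of incompatible observables in a hypothetical state:

```python
import numpy as np

# Pauli matrices as a hypothetical pair of incompatible observables
A = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)    # sigma_x
B = np.array([[0.0, -1.0j], [1.0j, 0.0]])                # sigma_y

psi = np.array([1.0, 0.0], dtype=complex)                # hypothetical state

def mean(op):
    return np.vdot(psi, op @ psi).real

# standard deviations from (Delta A)^2 = <A^2> - <A>^2
dA = np.sqrt(mean(A @ A) - mean(A) ** 2)
dB = np.sqrt(mean(B @ B) - mean(B) ** 2)

# right-hand side: (1/2)|<[A, B]_->|
bound = 0.5 * abs(np.vdot(psi, (A @ B - B @ A) @ psi))
```

For this state the inequality is saturated: ΔAΔB = 1 equals the bound.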

Entropic uncertainty relation
A useful alternative to the Heisenberg-Kennard-Robertson inequality
is the following entropic uncertainty relation
H_{{Em}}(ρ) + H_{{Fn}}(ρ) ≥ −ln(max_{m,n} |<a_{m}|b_{n}>|^{2}),
in which H_{{Em}}(ρ) is a `von Neumann entropy', defined by
H_{{Em}}(ρ) = −∑_{m} p_{m} ln p_{m},  p_{m} = Tr E_{m}ρ,
{E_{m}} the spectral resolution of standard
observable A (and analogously for B). An advantage
of the entropic uncertainty relation is that it is
`independent of the (eigen)values of the observables', and
therefore can be used in an empiricist interpretation
(compare).
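The entropic relation can likewise be checked numerically; a sketch (not from the text) with two hypothetical maximally incompatible bases (computational and Hadamard) and a hypothetical pure state:

```python
import numpy as np

E_basis = np.eye(2)                                            # eigenbasis of A
F_basis = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)   # eigenbasis of B

psi = np.array([1.0, 0.0])
rho = np.outer(psi, psi.conj())

def entropy(basis, rho):
    """-sum_m p_m ln p_m with p_m = <a_m|rho|a_m>."""
    p = np.array([np.vdot(v, rho @ v).real for v in basis.T])
    p = p[p > 1e-15]                   # drop zero probabilities (0 ln 0 = 0)
    return float(-(p * np.log(p)).sum())

lhs = entropy(E_basis, rho) + entropy(F_basis, rho)
overlap = max(abs(np.vdot(a, b)) ** 2 for a in E_basis.T for b in F_basis.T)
rhs = -np.log(overlap)
```

Note that the bound depends only on the overlaps of the eigenvectors, not on the eigenvalues of A and B.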

Constants of the motion

Assuming the Hamiltonian H to be time-independent, an observable A commuting with H
(thus, [A, H]_{−} = O) is
a `constant of the motion' or `conserved quantity', satisfying
A(t) = A.
A state vector, written in the `representation of the constant of the motion'
according to
|ψ(t)> = ∑_{m} c_{m}(t)|a_{m}>,  c_{m}(t) = <a_{m}|ψ(t)>,
satisfies
|c_{m}(t)|^{2} = |c_{m}(0)|^{2}.
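The statistical conservation law can be illustrated in a small sketch (not from the text; the commuting pair H, A and the initial state are hypothetical):

```python
import numpy as np

H = np.diag([1.0, 2.0, 2.0])           # hypothetical Hamiltonian
A = np.diag([5.0, 7.0, 7.0])           # [A, H]_- = O: a constant of the motion

assert np.allclose(H @ A - A @ H, 0.0)

t = 2.1
U = np.diag(np.exp(-1j * np.diag(H) * t))   # exp(-iHt) for diagonal H
psi0 = np.array([0.6, 0.0, 0.8], dtype=complex)
psi_t = U @ psi0

# |c_m(t)|^2 = |c_m(0)|^2 in the eigenbasis of the constant of the motion
probs0 = np.abs(psi0) ** 2
probs_t = np.abs(psi_t) ** 2
```

Only the phases of the coefficients change; the probabilities of the a_m are time-independent, in the statistical sense discussed below.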

Remarks on `constants of the motion':

Note that, unless the state |ψ(t)> is an eigenstate of
a constant of the motion A, conservation of that observable is implemented in the quantum mechanical
formalism in a statistical sense only: the formalism only predicts that a measurement of
A at a later time will yield the same probability |c_{m}|^{2};
it does not predict that the same value a_{m} will be found at times 0
and t.

It might be thought that `conservation of a constant of the motion
in a deterministic sense' could be proved by considering consecutive measurements of
observable A. Indeed, since A(t) = A, the observables
A(t) and A are compatible, and the theory of
joint measurement of compatible observables
might seem to be applicable, yielding the joint probabilities
p_{mn} =
<ψ(0)|E_{m}E_{n}|ψ(0)> =
|c_{m}(0)|^{2} δ_{mn},
δ_{mn} the Kronecker delta. Hence,
a constant of the motion seemingly is not conserved just in the statistical sense expressed
by |c_{m}(t)|^{2} = |c_{m}(0)|^{2}, but
in the deterministic sense in which consecutive measurements of the
constant of the motion, performed on an individual object, yield identical results a_{m}.
However, this reasoning ignores the failure of the `possessed values principle',
which tells us that a_{m} cannot be an objective (i.e. measurement-independent) property of the microscopic object.
Indeed, the first measurement may disturb the microscopic object
as a consequence of the interaction with its measuring instrument.
In general the commutator
[A, H_{int}]_{−} of A
and the `Hamiltonian H_{int} of the interaction between object and measuring instrument'
does not vanish (for instance, for the Stern-Gerlach measurement non-vanishing of this commutator is even
necessary for the experimental arrangement to do its job properly,
cf. Publ. 37).
In general, if the interaction
is properly taken into account, observable A(t) will not even be compatible with A,
thus rendering the theory of joint measurement of compatible standard observables inapplicable.

Nevertheless, the meaning of quantum probabilities as `relative frequencies
of measurement results obtained whenever a measurement is performed' seems to
indicate that during free evolution (i.e. when there is no interaction with a measuring instrument)
a constant of the motion might satisfy a deterministic conservation law, leaving the `individual
measurement result' independent of the time the measurement is carried out.
If not, it would be inexplicable how
the `probabilities p_{m} found if a measurement were carried out' at
time 0 or at time t can be the same
even though measurement results of individual particles might have changed between 0 and t.
This circumstance may invoke different reactions:
i) declare the problem a metaphysical one because it cannot be experimentally decided
(since an experimental test would
need carrying out measurements at both 0 and t, and, hence, would
be confronted with a `disturbance of the object by the first measurement');
ii) try to "solve" the problem by assuming von Neumann projection to take place during the first measurement,
its applicability, however, being seriously limited;
iii) consider the theoretical result p_{mn} =
|c_{m}(0)|^{2} δ_{mn}, although not experimentally verifiable,
as an incentive to try to understand what quantum
mechanics tells us about reality, taking seriously not only observational data but also general ideas
like conservation laws, which have always been fruitful principles in the development of physical science.

Reaction iii) is consistent
with the `empiricist interpretation of quantum mechanics' I prefer.
It acknowledges the possibility that quantum mechanics
does not describe `reality behind the phenomena',
explanation of conservation of individual values of a constant of the motion during free evolution
being relegated to some subquantum theory (compare).
By von Neumann's projection postulate, assumed in reaction ii),
equality of `individual measurement results a_{m} of two consecutive
measurements of a constant of the motion' is warranted without having to rely on subquantum theory.
This may explain the general acceptance of
von Neumann's postulate as a description of the influence of a measurement on the microscopic object.
It is questionable, however, whether this proves a deterministic behaviour of
constants of the motion during free evolution. If it did, this would imply that von Neumann's
projection postulate is a necessary attribute of quantum mechanical measurement, which, however,
it is not (as is exemplified, for instance, by its inapplicability to the Stern-Gerlach measurement).
Notwithstanding "proofs to the contrary", reliance on subquantum theory seems to
yield better prospects (compare).
Comparison with classical (statistical) mechanics
It is important to note the difference between `quantum mechanics' and `classical mechanics' as regards the relation of
state and observables. Whereas time evolution of the classical state is defined
in terms of the time evolution of the classical quantities (observables), within quantum mechanics
state and observables are defined
independently, and time evolution can be expressed in terms of either of these quantities separately.
This is reminiscent of `classical statistical mechanics',
in which time dependence of the statistical state can analogously be
expressed in two different ways.
As a consequence, a closer analogy between `quantum mechanics' and
`classical mechanics' is obtained if a comparison is made between `quantum mechanics' and `classical statistical
mechanics' rather than between `quantum mechanics' and `classical mechanics proper'
(compare).
This can be seen particularly clearly by comparing the `(quantum mechanical)
Liouville-von Neumann equation' with the
(classical)
Liouville equation^{0},
the quantum mechanical commutator (up to a constant) being the natural counterpart of the classical Poisson bracket.
This remark is not unimportant because it makes obsolete the idea of `strong von Neumann projection'. It is an indication
that an individual-particle interpretation of the quantum mechanical state vector may be problematic
(compare).
Generalized formalism of quantum mechanics

It is an important recent
insight that, if the description is restricted to the microscopic object,
many quantum mechanical experiments that have
actually been performed are outside the domain of applicability
of the standard formalism (compare). In
particular, the concept of a quantum mechanical observable must be generalized.

Generalized quantum mechanical observable (restricting ourselves to
discrete spectra):
Resolution of the identity operator
I, consisting of a set of positive (better: nonnegative)
operators M_{m}, satisfying
∑_{m} M_{m} = I,
M_{m} ≥ O.
Contrary to standard observables,
for a `generalized observable' the resolution of the identity operator need not be orthogonal.
The operators M_{m} generate a so-called positive operator-valued
measure (POVM). A `generalized observable' will be denoted^{69} by
{M_{m}}.

Probability distribution
of measurement results of generalized observable {M_{m}} performed in state |ψ> or ρ:
p_{m} = <ψ|M_{m}|ψ>
or p_{m} = Tr ρM_{m}.
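A minimal sketch of a generalized observable: an `unsharp' two-valued qubit POVM (the smearing parameter λ below is an arbitrary illustrative choice), checking non-negativity, the resolution of the identity, and the probability rule:

```python
# A two-element qubit POVM {M_m}: unsharp version of a σ_z measurement.
import numpy as np

sz = np.diag([1.0, -1.0])
lam = 0.7                                         # unsharpness parameter, 0 < λ < 1 (illustrative)
M = [(np.eye(2) + lam * sz) / 2, (np.eye(2) - lam * sz) / 2]

assert np.allclose(M[0] + M[1], np.eye(2))        # ∑_m M_m = I
assert all(np.linalg.eigvalsh(Mm).min() >= -1e-12 for Mm in M)  # M_m ≥ O
assert not np.allclose(M[0] @ M[0], M[0])         # not projections: a POVM, not a PVM

psi = np.array([0.6, 0.8], dtype=complex)         # arbitrary normalized state |ψ>
p = [np.vdot(psi, Mm @ psi).real for Mm in M]     # p_m = <ψ|M_m|ψ>
assert abs(sum(p) - 1.0) < 1e-12
```

For λ → 1 the elements become the orthogonal projections of the standard observable σ_z, illustrating how the generalized formalism encompasses the standard one.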

Remarks on the `generalized formalism of quantum mechanics':

The `generalized formalism of quantum mechanics'
encompasses the standard formalism. If all operators
M_{m} of a POVM are `mutually commuting projection
operators', then the POVM reduces to a projection-valued
measure (PVM), and the generalized observable reduces to a
standard one (or, in case of degeneracy, to a set of
standard ones, viz. standard observables
f(A) having
compatible spectral resolutions).

Note that for characterizing a `generalized observable'
it is neither necessary nor useful to introduce an operator
∑_{m} a_{m}M_{m}, analogous to
the operator A =
∑_{m} a_{m} E_{m} in case of a standard observable.
Actually, both for standard and generalized
observables probabilities are determined by m rather than by
a_{m}, the latter's magnitude being irrelevant. This irrelevance will
play an important role in choosing an interpretation of the mathematical
formalism of quantum mechanics.

On the basis of a `quantum mechanical description of the interaction of
microscopic object and measuring instrument' it can be
demonstrated that the generalized
formalism is a natural extension of the standard one. It has many applications
(including the Stern-Gerlach experiment, which, on closer scrutiny,
is a `paradigm of the standard formalism only in an approximate sense',
cf. Publ. 37).

The generalized formalism will be
particularly useful for describing measurements in which
information is jointly obtained on incompatible standard observables (deemed
impossible by the standard formalism, but nevertheless playing an
important role in foundational discussions of quantum mechanics).
Quantum mechanics and `reality'
 We should distinguish two different kinds of realism, to be
referred to as ontological realism and epistemological
realism, respectively. Unfortunately, often these levels of discourse are not sufficiently
distinguished, the `(epistemological) description' being considered to be a `faithful
representation of (ontological) reality'.

Ontological realism
Most physicists are ontological
realists in the sense that they believe that physics deals with
an outside world. In the `realism versus
idealism^{0}' dichotomy it is
`ontological realism' that is opposed to an
`idealism which positions reality within some inside world'.
Whereas an idealist may believe that the moon does not
exist either `if he does not look' (subjectivistic
idealism) or `if nobody looks' (objectivistic idealism),
an ontological realist believes that the existence of the
moon is independent of anybody's looking.
Plato's idealism^{0}
is an example of `objectivistic idealism' in which `true reality'
is thought to constitute an `abstract world of ideas'.
By antirealism^{0}
I will understand^{11} the idea that
`only the phenomena have a real ontological existence',
and that, hence, there is no `reality behind the phenomena'. For
methodological reasons most physicists are not antirealists (if
they were, they would probably still believe that billiard balls
are rigid spheres instead of being composed of atoms). Atoms, and
even quarks, are believed to be as real as is the moon, even
though there may exist disagreement about specific properties
these objects are thought to possess.

Epistemological realism
`Epistemological realism' refers to a certain way of attributing
physical meaning to the terms of a physical theory. Hence, `epistemological realism' is about
interpretation, that is, about `what one thinks the
theory is describing'. An interpretation of a physical theory in which the (theoretical) terms
are taken in an `epistemologically realist' sense, is referred to as a
realist interpretation.
At the epistemological level a `realist interpretation' may be involved in different dichotomies.
First, the `realist interpretation' may be opposed to the `instrumentalist one'.
Another dichotomy is that between realist and empiricist interpretations, playing a
prominent role in my account of quantum mechanics. This latter
dichotomy is expressing the possibility that the mathematical
formalism of quantum mechanics may be thought either to refer to
the microscopic objects themselves (realist interpretation), or to
macroscopic phenomena that are directly observed (empiricist interpretation).

Remarks on `Quantum mechanics and reality':

It is important to distinguish ontological and epistemological
levels of discourse. Physicists usually combine `ontological
realism' with the `realist interpretation of
epistemological realism'. That's why the distinction is
often not noticed. However, it is very well possible to combine
`ontological realism' with an empiricist interpretation (in
my view it is even preferable to do so). There is a difference
between quantum mechanics (the theory) and the reality it is
describing (electrons etc.). Electrons are not wave
packets flying around in space: electrons are physical objects,
wave packets are theoretical notions. Probably the only way to
have a wave packet flying in space is by throwing one's quantum
mechanics textbook. If a quantum mechanical observable
were a Hermitian operator, one could observe it by looking
into one's quantum mechanics textbook. Unfortunately, these trivial remarks are
not superfluous because whole generations of physicists have been
trained to think about electrons as `wave packets flying around in
space', and to look upon values of Hermitian operators as `properties of
microscopic objects'.

One should be aware
of the possibility that quantum mechanics is not the `theory
of everything', yielding a complete description of `all there
is', but that it may only be yielding a description of certain
aspects of microscopic reality, much in the same way as
classical mechanics just describes certain aspects of
macroscopic reality (an electron is not a wave packet, just
like the planet Mars is not the point particle nor the rigid body
figuring in textbooks of classical mechanics).

The domain of application of
quantum mechanics is microscopic reality. It is an open
question whether this domain can be extended either to the macroscopic
world or to the far submicroscopic world at the Planck length.
Of course, it is recommendable to explore the
applicability of quantum mechanics by applying it to experiments
on a wider set of objects than just the microscopic ones (for
instance, mesoscopic objects and submicroscopic elementary particles), but there is no warrant that this
will keep working all the way up to the macroscopic and/or down to the submicroscopic domains. The
idea that quantum mechanics should also be applicable to the
macroscopic and submicroscopic worlds is a consequence of a scientific
methodology in which theories are supposed to be either true
(i.e. universally applicable) or false. However, it is
more and more realized^{0}
that this is not how science works in actual
practice: in general a physical theory has a restricted
domain of application in which experimental data are described by
it in a more or less exact way (in general, less exact as the
boundaries of the domain are approached). It is not evident why
this should be different for quantum mechanics.
Quantum mechanics and `observation'

No human observer has ever seen an
electron or even an atom. Everything we know about such objects
stems from `indirect observation by means of measuring
instruments obtaining their information by interacting with the
microscopic objects, and amplifying this information to the
macroscopic level of the human observer'.

The human
observer is as dispensable in quantum mechanics as he (short for
`he or she') is in classical mechanics. Classical mechanics describes
macroscopic objects as these are seen by the human observer. For this reason
within the macroscopic domain in general there is no obvious reason to consider a possible difference
between `what is seen' and `what there is': `observation/measurement' is thought to be
nondisturbing^{81} here.
Within microphysics we should draw a distinction between `observation' and `measurement'.
Whereas `measurement' can no longer be considered nondisturbing, there is no difference
between classical and quantum mechanics with respect to `human observation'.
Within the domain of quantum physics the human observer sees only the
macroscopic parts of his measuring instruments, his influence
preferably being negligible during the measurement. In
presentday physical practice within the microscopic
domain `human observation' is largely restricted to the tables and
graphs that have been printed by the scientist's printer on the basis of data obtained from a measuring instrument by
the scientist's computer, the
measurement results having been sent to the computer
without any human interference (compare).

An influence like the
reduction (collapse) of the wave packet,
allegedly exerted by a human observer on a microscopic object by means of mere observation,
would be as miraculous as killing a fly by just looking at one's fly swatter.
In the past this problematic aspect of the so-called
"measurement problem" has given rise to ample discussion,
in which the problem is often rightly identified as a `pseudo problem'
(as it is not a problem of quantum mechanics itself but a spin-off of an unnecessarily
restrictive interpretation of that theory,
viz. an individual-particle interpretation).

In order not to be seduced into any kind of
psychophysicalism with respect to observation in quantum
mechanics it is recommendable to avoid any reference to the human
observer. The "measurement problem" should be distinguished from the
quantum mechanical problem of measurement,
in which the physical interaction is studied between the microscopic object and the measuring
instrument used to get to know that object's properties.
Contrary to the "measurement problem" this is a real problem, yielding a better understanding of
the meaning of quantum mechanics.

It is certainly true that the observer has played an important role
in early versions of the Copenhagen interpretation.
In particular Wigner has been a long-lived proponent of this idea.
It seems to me, however, that nowadays in most of the literature on the foundations of quantum mechanics,
even if professing allegiance to
the Copenhagen interpretation, the observer has been replaced by a measuring instrument,
abandoning the term `observation' in favour of `measurement'.
The two problems referred to above are not always distinguished. Although, on the one hand, a
`quantum mechanical account of measurement'
is rather "un-Copenhagen-like", on the other hand a vivid interest in the
`fundamental role of measurement' is so "Copenhagen-like"
that already Heisenberg and von Neumann considered `quantum mechanical accounts of measurement'.
Unfortunately, they ignored
the inconsistency arising from the distinction
between `strong' and `weak von Neumann projection'.
Quantum mechanics and `measurement'

It is important to note that in textbooks of quantum mechanics the notion of `measurement'
is dealt with in a very unsatisfactory way. In general the
measuring instrument is not dealt with at all.
What is a `measurement' is left largely unspecified, thus allowing, for instance,
speculations like those induced by Schrödinger's cat
to generate confusion.
By the `quantum mechanical problem of measurement' (to be distinguished from the so-called
"measurement problem") I will understand the problem
of obtaining knowledge about a microscopic object by probing it with a measuring instrument
that is sensitive to the microscopic information, and is able to amplify that information
to macroscopically observable dimensions, allowing the human observer to be treated
in the same way as in classical mechanics: he can be ignored.

Premeasurement
In a quantum mechanical measurement microscopic
information is transferred from a microscopic object to a
measuring instrument. As far as it is a microscopic process it belongs to the
domain of quantum mechanics; it is to be described by means of quantum mechanics.
This part of the measurement process is referred to as premeasurement.
It is not clear whether the amplification process to the macroscopic
level is also `completely within the domain of application of quantum mechanics'.
There is no a priori reason to believe it is (unless one thinks
that quantum mechanics is the theory of everything).
The premeasurement phase is the crucial part of the measurement.
For instance, in the Stern-Gerlach experiment the premeasurement phase
is the initial part in which an atom interacts with an inhomogeneous magnetic field,
thus establishing a correlation between its spin and its position.
Amplification to macroscopic dimensions is realized by putting detectors in the outgoing beams,
thus ascertaining which of the outgoing beams the atom is in.
Since `atomic position' can to a good approximation be treated in a semiclassical way,
it is not unreasonable to assume that the amplification may be viewed as a (semi)classical process.
In general this part of the measurement process is left undiscussed as being irrelevant to
the understanding of quantum mechanics. I shall follow this custom, being well aware that
an important field is left open here.

Quantum mechanical description of premeasurement
In the premeasurement phase of
the `measurement of standard observable A (having eigenvalues
a_{m} and eigenvectors |a_{m}>)' the
interaction between object and measuring instrument is described
by a Schrödinger equation. We restrict ourselves here to a
rather simplistic model in which, if the initial state of the
object is given by |a_{m}>, the measurement
interaction realizes the transition
|a_{m}>|θ_{0}> → |ψ_{m}>|θ_{m}>,
in which |θ_{0}> is the initial state of the measuring instrument
and |θ_{m}> are the so-called
`pointer states', corresponding to the possible post-measurement
positions of the pointer of the measuring instrument (indicated
by m in figures 1 and 2), and the states
|ψ_{m}> are normalized states of
the microscopic object, determined by the specific properties of the interaction between object
and measuring instrument.
If the initial state of the object is given by
|ψ> = ∑_{m}c_{m}|a_{m}>,
then, due to the validity of the superposition principle, the
final state |Ψ_{f}> of the
system `object + measuring instrument' is given by
|Ψ_{f}> =
∑_{m}c_{m}
|ψ_{m}>
|θ_{m}>.
The pointer states are generally considered to be mutually orthogonal:
<θ_{m}|θ_{m′}> = δ_{m,m′}.
This is not unreasonable if such states are characterized by the `pointer's position being in a well-defined interval of its scale,
observationally different from all other possible pointer positions'.
Although we can choose <ψ_{m}|ψ_{m}> = 1
(so as to have ∑_{m} |c_{m}|^{2} = 1), in general there is no reason to think that the states |ψ_{m}>
should be mutually orthogonal too.
In general they are not:
<ψ_{m}|ψ_{m′}> ≠ 0 for m ≠ m′.

From the state |Ψ_{f}> the quantum mechanical final state
of the microscopic object
is derived as the (reduced) density operator
ρ_{of} =
Tr_{a} |Ψ_{f}><Ψ_{f}| =
∑_{m} p_{m}
|ψ_{m}><ψ_{m}|,
p_{m} = |c_{m}|^{2},
where Tr_{a} denotes taking the
partial trace^{0}
over the degrees of freedom of the measuring instrument.
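The reduction to ρ_of can be traced through a toy two-level model; the coefficients c_m and the (non-orthogonal) object states |ψ_m> below are assumptions chosen only for illustration.

```python
# Partial trace over the pointer for a toy premeasurement final state
# Ψ_f = ∑_m c_m |ψ_m>|θ_m> with orthonormal pointer states |θ_m>.
import numpy as np

c = np.array([0.6, 0.8], dtype=complex)
psi_m = [np.array([1, 0], dtype=complex),
         np.array([1, 1], dtype=complex) / np.sqrt(2)]    # non-orthogonal object states
theta = [np.array([1, 0], dtype=complex),
         np.array([0, 1], dtype=complex)]                 # orthonormal pointer states

Psi_f = sum(c[m] * np.kron(psi_m[m], theta[m]) for m in range(2))
rho = np.outer(Psi_f, Psi_f.conj()).reshape(2, 2, 2, 2)   # indices: [i, j, k, l]
rho_of = np.trace(rho, axis1=1, axis2=3)                  # Tr_a over the pointer indices j, l

# ρ_of = ∑_m |c_m|² |ψ_m><ψ_m| since the pointer states are orthonormal
expected = sum(abs(c[m]) ** 2 * np.outer(psi_m[m], psi_m[m].conj()) for m in range(2))
assert np.allclose(rho_of, expected)
```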

The state vector |Ψ_{f}> = ∑_{m}c_{m}
|ψ_{m}>|θ_{m}>, being a
superposition of product terms |ψ_{m}>|θ_{m}>, is an example of an
entangled state. Due to the fact that its
origin is of a typically quantum mechanical nature (viz. the superposition principle), such states
are particularly interesting.

Measurements of the first and second kind

In the literature on the foundations of quantum
mechanics attention is often restricted to socalled
`measurements of the first kind' for which
|ψ_{m}> = |a_{m}>.
For a `measurement of the first kind' of standard observable A
we obtain as the `(reduced) density operator
of the final state of the microscopic object':
ρ_{of} =
Tr_{a} |Ψ_{f}><Ψ_{f}| =
∑_{m}|c_{m}|^{2}
|a_{m}><a_{m}|.
Note that this coincides with the von Neumann prescription of weak projection, which,
hence, follows from the `quantum mechanical theory of measurement' under the assumption
|ψ_{m}> = |a_{m}>. Note also, however, that
strong projection is not corroborated by it (compare).
For the `final state of the measuring instrument' we analogously find in case
of `measurements of the first kind':
ρ_{af} =
Tr_{o} |Ψ_{f}><Ψ_{f}| =
∑_{m}|c_{m}|^{2}
|θ_{m}><θ_{m}|.
This expression is usually interpreted as a description of a von Neumann ensemble
in which each element of the ensemble is supposed "to be in one of the states
|θ_{m}>".

Measurements for which |ψ_{m}> ≠ |a_{m}>
are called `measurements of the second kind'.
As seen here such measurements fail to satisfy `(weak) von Neumann projection'.
Determining the final state of the measuring instrument, an additional `deviation from
first kind behaviour' is found, viz.
ρ_{af} =
Tr_{o} |Ψ_{f}><Ψ_{f}| =
∑_{mm′}c_{m}c_{m′}^{*}
<ψ_{m′}|ψ_{m}>
|θ_{m}><θ_{m′}|,
implying that it is impossible to interpret the final state of the measuring instrument
as a `von Neumann ensemble of pointers in states
|θ_{m}>' (compare).
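Using the same kind of toy model, the cross terms in ρ_af for a measurement of the second kind can be exhibited explicitly (all numbers below are illustrative assumptions, not data from any experiment).

```python
# Cross terms in ρ_af = Tr_o |Ψ_f><Ψ_f| for a second-kind toy measurement,
# i.e. with non-orthogonal object states |ψ_m>.
import numpy as np

c = np.array([0.6, 0.8], dtype=complex)
psi_m = [np.array([1, 0], dtype=complex),
         np.array([1, 1], dtype=complex) / np.sqrt(2)]    # <ψ_0|ψ_1> = 1/√2 ≠ 0
theta = [np.array([1, 0], dtype=complex),
         np.array([0, 1], dtype=complex)]                 # orthonormal pointer states

Psi_f = sum(c[m] * np.kron(psi_m[m], theta[m]) for m in range(2))
rho4 = np.outer(Psi_f, Psi_f.conj()).reshape(2, 2, 2, 2)  # indices: [i, j, k, l]
rho_af = np.trace(rho4, axis1=0, axis2=2)                 # Tr_o over the object indices i, k

# cross term |θ_0><θ_1| carries weight c_0 c_1* <ψ_1|ψ_0> ≠ 0,
# so ρ_af is not diagonal in the pointer states: no von Neumann ensemble
expected_01 = c[0] * np.conj(c[1]) * np.vdot(psi_m[1], psi_m[0])
assert np.allclose(rho_af[0, 1], expected_01)
assert abs(rho_af[0, 1]) > 0.1
```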

Probably, the restriction to `measurements of the first kind',
observed in a large part of the literature on quantum mechanical measurement,
is caused by these two `failures of measurements of the second kind' to satisfy canonized rules.
Yet, since many measurement procedures, applied in actual practice, turn out
to be `measurements of the second kind',
we do not seem to be able in general to live up to this restriction.
It therefore seems important to scrutinize the reasons why these `failures to satisfy canonized rules'
are so often ignored in discussions of the foundations of quantum mechanics by dealing with quantum measurement
as if it has to be `of the first kind'.

At least three reasons for the abovementioned restriction to
`measurements of the first kind' can be distinguished:

i) The first reason mainly derives from the Copenhagen interpretation, more
particularly its embrace of
von Neumann's projection postulate.
Neglect of the projection postulate's failure
(in the sense that the postulate is satisfied by hardly any realistic measurement procedure,
including widely used ones) is sometimes defended by pointing at the `theoretical possibility of the existence
of an "ideal" measurement procedure satisfying the postulate' next to, and equivalent with,
the experimentally realized ones.
Although I do not know of any general proof of the nonexistence of such "ideal"
measurements^{25}, I do not think this to be a strong argument
because there are at least two reasons why `von Neumann projection' may be irrelevant to `quantum measurement'.

A first reason is the classical origin of the postulate,
which, analogously to what is possible in classical physics, warrants an object
`to possess a property after it has been observed'.
This holds good, for instance, for
Schrödinger's cat, which will remain either dead or alive after having been
observed to be so.
However, in general a measurement performed on a microscopic object will be more invasive
than `looking at a cat'. It should be noted that in this respect the
Compton-Simon experiment, inspiring von Neumann to formulate his `projection postulate',
is far from generic.
There is no reason to believe that all measurements will have similar properties.
For instance, it is hard to imagine how an "ideal" version of the
Stern-Gerlach measurement procedure
would be possible, this latter procedure being effective only at the expense of
not satisfying von Neumann's projection postulate.

As a second reason to doubt the general relevance of von Neumann projection it should be mentioned
that not the `final state of the object', but rather the `final state of the pointer of a measuring instrument'
seems to be relevant when it comes to determining a measurement result. Hence, whereas application
of the projection postulate to the measuring instrument might be justified, there is no reason to
assume a similar behaviour of a microscopic object. The requirement that von Neumann projection
be satisfied seems to be based on a confusion
of `preparation' and `measurement' that can be observed
within the Copenhagen interpretation.

ii) A more pressing reason to restrict oneself to `measurements of the first kind' might be derived
from the result obtained for the final state ρ_{af} of the measuring instrument.
Only for `measurements of the first kind' does this state
seem to have an unambiguous meaning in terms of the
pointer states |θ_{m}>, cross terms
arising in case of a `measurement of the second kind' (compare footnote 65).
Whereas for `measurements of the first kind'
explicit treatment of the measurement interaction seems to circumvent the problems posed by
Schrödinger's cat paradox (essentially caused by the `cross terms'),
it is evident that this solution is not available for `measurements of the second kind'.
Even so, this does not need to be the end of `measurements of the second kind'. Ignoring here
the so-called decoherence solution (assuming stochastic fluctuations
of the environment
to be active so as to wash out the `cross terms' in ρ_{af}),
the problems encountered in contemplating `measurements of the second kind' nowadays can be viewed as
signs that the notion of `quantum measurement' needs to be generalized still further in order
to encompass realistic measurement procedures within the microscopic domain. In fact, it has been realized that it is
necessary to introduce the notion of a generalized quantum mechanical observable,
making obsolete any reference to `von Neumann projection' since no `orthonormal set of state vectors
|a_{m}> or |θ_{m}>' has to be referred to any longer.
The problem encountered in interpreting the state
ρ_{af}
is just a precursor of the more general problems obtaining in
realist interpretations of the quantum mechanical formalism.
Indeed, the density operator ρ_{af}
of a `measurement of the second kind' would not pose any problem in an
instrumentalist interpretation.

iii) A third reason may derive from the relative simplicity of the assumption of
firstkindness, seemingly making it superfluous to determine what
is really going on in a quantum mechanical measurement, and
allowing one to take part in the discussion on the foundations of
quantum mechanics without having to invest too much energy.
However, in view of their
`red herring'-like character the route via `measurements of the first kind' is not available.
We shall have to look for a different solution to the "measurement problem", a main inspiration
in this endeavour being the observation that it is necessary to approach `quantum mechanical
measurement' from a far more general point of view than is allowed by von Neumann projection
(compare). This regards both the mathematical
formalism and its interpretation.

The Heisenberg cut
In the amplification process, in which the information is amplified
from the microscopic dimensions of the `microscopic object' to the
macroscopic dimensions of `directly observable phenomena', or even
of a human observer's mind, there may exist some point after which the
process can be considered to be macroscopic and describable
classically. This point is called the Heisenberg cut. This
cut is positioned somewhere between the microscopic object and
the observer's mind.
It is often assumed that its precise
position is arbitrary, and can be anywhere in this range (this
has even been "proven" by von Neumann by considering a chain of
consecutive measurements connecting microscopic object and observer; however, in this "proof" von Neumann
employed his projection postulate, thus relying on
a model of quantum measurement having a dubious applicability).
It seems to me that, although there is a certain arbitrariness with respect to the position of the cut,
this arbitrariness is strongly limited both on the side of the object
as well as on the side of the observer. On the object side it should not be
placed prior to completion of the premeasurement phase, that is, the
point where a correlation is established of the type described here.
It also does not make much
sense to involve the human observer in an explicit way.
Nor does it seem to make
sense to extend the applicability of quantum mechanics to the
phase of `registration of pointer positions of
measuring instruments within the macroscopic parts of the physical apparatus'
(unless quantum mechanics is believed to be the `theory of everything').
For all these reasons the `Heisenberg cut' has largely lost its importance as a characteristic
property of quantum measurement. I shall have to say something more on the relation between
microscopic object and observer here.
Conditional preparation

Every quantum mechanical measurement also has a preparative aspect in the sense that it not only prepares
the measuring instrument in some final state, but also the microscopic object.
In the final state
|Ψ_{f}> = ∑_{m}c_{m}|ψ_{m}>|θ_{m}>
of the (pre)measurement process
this preparative aspect is represented by the
state vectors |ψ_{m}>. Such a state vector
can be legitimately interpreted as describing the `state of the object, conditional on measurement result m
having been read off as the position of the measuring instrument's pointer'.
Note that the state vectors ψ_{m}>
are completely determined by the interaction between object and measuring
instrument. Only in very special cases may it occur that the vectors |ψ_{m}>
are equal to the eigenvectors of a standard observable. In that case von Neumann's
projection postulate may be satisfied. However, in actual experimental
practice this is seldom, if ever, fulfilled (compare). In general the
vectors |ψ_{m}> are not even orthogonal.

In a `conditional preparation by a measurement' the state vector undergoes a transition
|ψ> → |ψ_{m}>,
generalizing von Neumann's projection^{22}.
The necessity of taking |ψ_{m}>
as the post-measurement state conditional on measurement result m, follows from the fact that a measurement
of an arbitrary standard observable B (with eigenvectors |b_{n}>)
performed immediately after the first measurement, is yielding (conditional) probabilities
p(n|m) = |<ψ_{m}|b_{n}>|^{2}.
These conditional probabilities are derivable from the relation p(m,n) = p(n|m)p(m) with the joint probabilities p(m,n) of obtaining the pair of
measurement results (m,b_{n}) in a joint measurement of the POVM {M_{m}} and B
in the state |Ψ_{f}>.
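The relation p(m,n) = p(n|m)p(m) can be checked numerically. The following is a minimal sketch; the two-dimensional object space, the amplitudes c_m, and the (non-orthogonal) vectors |ψ_m> are hypothetical illustrative choices, not taken from the text.

```python
import math

# Hypothetical 2-dimensional illustration: final state
# |Psi_f> = sum_m c_m |psi_m>|theta_m>, with orthonormal pointer states
# |theta_m>; the conditionally prepared states |psi_m> need not be orthogonal.
c = [math.sqrt(0.3), math.sqrt(0.7)]            # amplitudes c_m
psi = [[1.0, 0.0],                              # |psi_0>
       [1 / math.sqrt(2), 1 / math.sqrt(2)]]    # |psi_1>, not orthogonal to |psi_0>
b = [[1.0, 0.0], [0.0, 1.0]]                    # eigenvectors |b_n> of observable B

def inner(u, v):
    # inner product (real vectors, for simplicity)
    return sum(ui * vi for ui, vi in zip(u, v))

p_m = [cm ** 2 for cm in c]                     # pointer probabilities p(m) = |c_m|^2
p_n_given_m = [[inner(psi[m], b[n]) ** 2 for n in range(2)]
               for m in range(2)]               # p(n|m) = |<psi_m|b_n>|^2

# joint probabilities p(m,n) = p(n|m) p(m)
p_joint = [[p_n_given_m[m][n] * p_m[m] for n in range(2)] for m in range(2)]

# the joint distribution is normalized
assert abs(sum(sum(row) for row in p_joint) - 1.0) < 1e-12
```

Computing |<b_n| <θ_m| Ψ_f>|² directly gives the same numbers, since the pointer states θ_m are orthonormal.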

`Measurement' as a `preparation procedure'
In an individual-particle interpretation the discontinuous change
|ψ> → |ψ_{m}> of
the state vector may cause a Schrödinger cat problem
that can be tentatively solved by switching to a realist ensemble interpretation
in which the transition from |ψ>
to |ψ_{m}> is interpreted as a selection of a subensemble
from the `total ensemble of microscopic objects prepared by the measurement, the selection being carried out on the basis of
an observation of measurement result (pointer position m)'.
Alternatively, and to be preferred from the point of view of an empiricist interpretation,
it is possible to interpret the transition to |ψ_{m}> as a
transition to a different preparation procedure, `observation of measurement result m'
and `selection on the basis of this observation' being parts of this preparation procedure.

figure 3
Whereas von Neumann's projection postulate in general is inapplicable as a measurement principle,
in the generalized form given above it is a highly useful and practically applied `preparation principle'
(sometimes referred to as a `preparative measurement' or a `filter'). That it is nevertheless so often presented as a feature of
a measurement is a consequence of a very restrictive view of `measurement', in which the states
|ψ_{m}> correspond to outgoing beams that do not spatially overlap (see figure 3).
In that case the measurement result m is determined by the beam the particle is found in
after leaving the apparatus. In such experiments the final position of the
microscopic object is taken as the pointer observable.
Due to this very special choice of the pointer observable the preparation
of the final state of the object is important if the procedure is to function as a measurement
of a property of the initial state.
In general, however, `pointer observables' are quite different:
they correspond to `properties of the measuring instrument' rather than to
`properties of the microscopic object'.
Unfortunately, in foundational discussions this very restricted procedure
(the Stern-Gerlach measurement of spin is approximately of this type) has become more or less paradigmatic of
quantum mechanical measurement. In particular, Heisenberg's interpretation of his `uncertainty relation' hinges on it,
this relation being taken to be valid in the final state of the microscopic object, and,
allegedly, expressing the measure of disturbance of the microscopic object by the measurement.
The Copenhagen confusion of `preparation' and `measurement' is largely a consequence
of this restricted view on quantum mechanical measurement.
It has had a considerable confusing effect in the discussion
on the foundations of quantum mechanics. This can only be resolved by means of a careful description
of quantum mechanical measurement processes.

The quantum Zeno effect
One example of the above-mentioned confusion is the application of von Neumann projection to the so-called
quantum Zeno effect, which allegedly has the result that by continuous observation the object
is frozen in its initial state
(``a watched pot never boils''). This can only happen if the interaction with
the `measuring instrument that performs the observation' is such that it has this freezing effect.
This means that the interaction of object and measuring instrument must be sufficiently energetic.
Thus, it will certainly not be possible to prevent a radioactive nucleus from decaying
by merely continuously looking at a 4π counter surrounding the nucleus.
Interpretations of quantum mechanics
 `Interpretation' as a `mapping'
Since `quantum mechanics' is a physical theory it is different from the `reality it purports to describe'.
Therefore we need an interpretation establishing a relation between the two.
I will use the notion of `interpretation' in the following sense:
An interpretation of a physical theory is a mapping from its mathematical
formalism into the physical world.
There are different possibilities for interpreting the quantum mechanical formalism.
In the following two of these will play an important role, viz. (cf. figure)
a: the realist interpretation in which the quantum mechanical formalism is
assumed to be mapped into the `microscopic reality of the atomic and subatomic world';
b: the empiricist interpretation in which
the quantum mechanical formalism is thought to be mapped into the
`macroscopic world of phenomena caused by
microscopic objects (like flashes on a fluorescent screen, or clicks of a Geiger counter)'.
Like the mathematical formalism itself, its
interpretation cannot be derived. An interpretation is neither
true nor false. It is either useful or less useful for
establishing a correspondence between the mathematics of the
theoretical formalism and the physics of the world we are dealing
with. For several reasons, to be discussed in the following, the
realist interpretation (which is the usual textbook
interpretation) turns out to be less useful than the empiricist
one.

Prejudice
In my view quantum mechanics is an ordinary
physical theory. I do not think that its domain of
application encompasses notions like the `human mind', `human
consciousness', `free will', or the `universe as a whole' (although,
of course, quantum mechanics certainly is important for cosmology
as far as the behaviour of microscopic objects is important to
it). Attempts to apply quantum mechanics to such notions are
largely ignored in the following. I also do not think that
quantum mechanics can be understood on the basis of any influence exerted
by a human observer on a microscopic (let alone macroscopic) object by just looking at it
(compare Schrödinger's cat).
Realist versus empiricist interpretation of quantum mechanics

Realist interpretations of quantum mechanics
figure 1
In a realist interpretation of quantum mechanics the
interpretational mapping
is assumed to be from the mathematical formalism into
microscopic reality^{35}. In this interpretation the
mathematical entities of the
theory (state vector |ψ>,
density operator ρ,
standard observable
A, and generalized observable {M_{m}}) are
assumed to represent `properties of the microscopic object'. Thus,
the quantum mechanical momentum operator is assumed to describe
the `physical momentum of an electron' much in the same way as the
classical quantity mv is assumed to describe the `physical
momentum of a billiard ball'. `Instruments of preparation, used to
prepare the microscopic object in an initial state', as well as
`measuring instruments', are not represented within the quantum
mechanical description, even if they are physically
present. This is symbolized in figure 1 by dashing the
preparing and measuring apparata.

Objectivistic-realist versus contextualistic-realist interpretations of quantum mechanics
We should distinguish `objectivistic
and contextualistic versions of a realist interpretation of
quantum mechanics'. In an `objectivistic-realist
interpretation' the quantum mechanical description is assumed to
refer to `objective
reality', that is, a `reality independent of any observer,
including his measuring instruments'. In a
`contextualistic-realist interpretation' quantum mechanical
concepts are assumed to have a meaning only within a certain
physical context (like the object's environment, or a measurement arrangement).

Objectivistic-realist versus contextualistic-realist interpretations in classical physics
Classical theories are usually interpreted in an objectivistic-realist
sense. In general, however, it is not possible to do so in a rigorous sense. For instance,
due to its atomic constitution, and
the possible vibrations of its atoms, a billiard ball is not a
rigid body (as assumed in the classical theory of rigid bodies). At most does a billiard ball satisfy
`classical rigid body theory' in an approximate sense,
and only within certain contexts (e.g. of
experiments allowing the ball to `behave as if it were a rigid
body'). Hence, as far as `classical rigid body theory' can be interpreted realistically,
it at most allows a contextualistic-realist interpretation: if the ball is hit
so hard that the atoms are set in violent vibrational motion, then
`classical rigid body theory' is not applicable any more.
Nevertheless, it would be pedantic to deny
a billiard ball its property of `rigidity' within experimental contexts in which atomic vibrations,
although existing, are unobservable.

Different implementations of a `realist interpretation of quantum mechanics'
In a `realist interpretation of quantum mechanics' the probabilities
p_{m}
are thought to refer to `properties of the microscopic object'
(represented by the measurement results a_{m}),
being possessed by the object either before the
measurement (Einstein's
objectivistic realism), during the measurement
(Bohr's contextualistic
realism) or after the measurement (Heisenberg's "empiricism").
It turns out that Einstein's proposal is
impossible, that
Bohr's one is too vague to prevent
inconsistencies and unjustified applications,
and that Heisenberg's one is confounding
preparative and determinative aspects of measurement.

Empiricist interpretation of quantum mechanics

figure 2
In the empiricist interpretation the interpretational mapping
of |ψ>, ρ, A, and {M_{m}} is
assumed to be from the mathematical formalism into the
`macroscopic reality of sources for preparing
microscopic objects and instruments for performing
measurements': `phenomena of preparation and of measurement' are assumed to correspond to
`knob settings of preparing instruments', and to `pointer positions
of measuring instruments', respectively^{91}.
Thus, wave functions and density
operators are assumed to be `symbolic representations (labels) of
preparation procedures' (for instance, referring to a cyclotron
with specified knob settings as a preparing instrument of a beam
of particles; or to less specified natural
processes like the preparation of an electromagnetic field by the sun).
A quantum mechanical observable is assumed to be `a label of
a measuring instrument' (for instance, a photosensitive device for detecting photons)
or measurement procedure.
Even though the
microscopic object is present, in the empiricist interpretation
it is assumed not to be represented by the quantum
mechanical formalism (just as atomic vibrations are not represented within the classical theory of rigid bodies).
This is symbolized in figure 2 by dashing
the object. In the `empiricist interpretation' the quantum
mechanical formalism is assumed to describe `just the phenomena',
phenomena being located within the macroscopic sources for
preparing microscopic objects and within the instruments for
measuring their properties.
In the `empiricist interpretation' the probabilities p_{m}
are thought to refer to `properties of the
measuring instrument (pointer positions)' rather than to `properties
of the microscopic object'.

The `empiricist interpretation of quantum mechanics' should be distinguished from
Heisenberg's empiricism, since with Heisenberg
a measurement result is thought to correspond to a `property of the microscopic
object', made observable in its post-measurement state.
Thus, with Heisenberg the meaning of a flash on a fluorescent screen is the `manifestation of a
microscopic particle at a certain position on the screen' rather than
a `reaction of that screen to the impact of a microscopic particle'.
Although in `such position measurements' Heisenberg's
interpretation may be acceptable because both alternatives may be correct, its scope is too restricted: in
general the relation between a microscopic object and the final pointer
position of the measuring instrument is much more remote. It even is
possible that the microscopic object is annihilated in the process of measurement,
and, hence, does not have a final value of any observable at all.
Heisenberg's "empiricism" is closely related to von Neumann's projection postulate, and
meets similar objections.
Like the latter it cannot be extended to more general measurement procedures.

The empiricist interpretation considers quantum mechanics
to have executed, although often inadvertently, Mach's methodological
imperative of `just describing the phenomena'. Note, however,
that the `phenomena' considered here are different from the
`sensations in human perception' dealt with by
Mach^{0}.
In a sense the `empiricist interpretation' is a realist one too, since it is
a mapping of the mathematical formalism into physical reality, be
it into the `macroscopic (directly observable) part of that
reality' rather than the microscopic one. `Pointer positions of measuring instruments' can be interpreted
as `properties of the pointer' analogously to the sense in which `rigidity' can be attributed to
a billiard ball. It is important to note here, that such a realist view of the
empiricist interpretation of quantum mechanics requires the measuring instrument to be explicitly
represented within the quantum mechanical description (compare).

Notwithstanding similarities, the
`empiricist interpretation of quantum mechanics' should be carefully
distinguished from the empiricist views as fostered by the
philosophical doctrine of logical positivism/empiricism,
the latter considering as metaphysical anything that is
unobservable. An empiricist interpretation
of quantum mechanics is perfectly consistent with a belief in the
"real" existence of microscopic objects like atoms and electrons
(`reality behind the phenomena'), even though these are not
directly observed. The relevant point is that, according to the
empiricist interpretation, quantum mechanics does not describe
these objects, but it merely describes the `relations between the
preparing and measuring procedures mediated by them'. Far from
embracing an anti-realist philosophy denying any `reality behind
the phenomena', the empiricist interpretation of quantum
mechanics accepts the possibility of such a reality analogously
to the way it accepts the reality of atoms within a billiard
ball.

In order to describe the `microscopic objects themselves', new
(subquantum) theories have to be developed, quite
analogously to the necessity of developing atomic solid state
theories as subtheories to `classical rigid body theory'.
It should be noted that by interpreting quantum mechanics as a
description of the `macroscopic reality of the phenomena'
the `empiricist interpretation of quantum mechanics' leaves
considerably more room for subquantum theories than is left by
a `realist interpretation of quantum mechanics'. Indeed, the
`empiricist interpretation of quantum mechanics' is consistent with
incompleteness in the wider sense.
This holds true
in an analogous way for `classical rigid body theory', which, if interpreted in
a realist sense (rather than an empiricist one), would yield a rather absurd model of a billiard
ball as a `closely packed configuration of rigid atoms', whereas the empiricist interpretation of that theory
allows subrigid body theories to yield more appropriate descriptions of the constituting atoms.

The `empiricist interpretation' should also
be distinguished from the Copenhagen interpretation,
which, although having an empiricist
reputation, actually contains many realist elements stemming from
an inclination to view quantum mechanics as a description of
`microscopic reality itself' (for instance, von Neumann's projection postulate).
Nevertheless, the `empiricist interpretation' is indebted to the
Copenhagen one by taking seriously the importance attributed to
the role of the measuring instrument in assessing the meaning of
the quantum mechanical formalism (compare), be it without neglecting the
`distinction between the realities of microscopic object and
measuring instrument' while being engaged with quantum mechanical
measurement.

The `empiricist interpretation' in a natural way takes into account the distinction between
`preparation' and `measurement', too often ignored within quantum mechanics.
On the basis of this distinction it is possible to account in a trivial way for the
equivalence of the Schrödinger and Heisenberg pictures.
Thus,
Schrödinger picture: Prepare ρ(t) by preparing
ρ and then waiting time t before applying measurement procedure
A;
Heisenberg picture: Prepare ρ, and simultaneously apply measurement
procedure A(t), where A(t) is defined as `wait after the preparation ρ
during a time t before applying measurement procedure A'.
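The equivalence amounts to the trace identity Tr[ρ(t)A] = Tr[ρA(t)], with ρ(t) = UρU† and A(t) = U†AU. A minimal sketch, in which the 2×2 unitary, density matrix and observable are hypothetical choices made purely for illustration:

```python
import cmath

def matmul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(X):
    # Hermitian conjugate
    return [[X[j][i].conjugate() for j in range(2)] for i in range(2)]

def trace(X):
    return X[0][0] + X[1][1]

theta = 0.7
U = [[cmath.exp(-1j * theta), 0], [0, cmath.exp(1j * theta)]]  # unitary evolution
rho = [[0.6, 0.2], [0.2, 0.4]]   # a density matrix (the preparation)
A = [[1, 1j], [-1j, -1]]         # a Hermitian observable (the measurement)

rho_t = matmul(matmul(U, rho), dagger(U))   # Schroedinger picture: evolve the state
A_t = matmul(matmul(dagger(U), A), U)       # Heisenberg picture: evolve the observable

lhs = trace(matmul(rho_t, A))
rhs = trace(matmul(rho, A_t))
assert abs(lhs - rhs) < 1e-12   # both pictures give the same expectation value
```

The equality is just the cyclic property of the trace: Tr[UρU†A] = Tr[ρU†AU], i.e. `waiting before measuring' and `measuring the waited-for observable' are empirically indistinguishable.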

Reasons to prefer a realist interpretation of quantum mechanics

Ontological reason:

Quantum reality
According to our observations there seems to be a fundamental
difference between macroscopic and microscopic reality, the
latter being experienced as "strange and paradoxical". Such
"strangeness" is incorporated into the quantum mechanical
formalism (entanglement,
incompatibility), which,
for this reason, might be thought to yield a description of a
`microscopic quantum reality'.
Such strangeness of microscopic reality has an
appeal of its own, lacking in the empiricist interpretation. It
is widely felt that such a strange quantum reality may open up
new vistas for understanding, and for future applications of
quantum mechanics (quantum computing, quantum information).
It also is attractive to adopt a `no nonsense attitude' in which all problems related to
`quantum mechanical measurement' are simply ignored,
being attributed to the strangeness of an `objective reality'.

Epistemological reasons:

The classical paradigm
Classical mechanics is generally
thought to be successfully interpretable in a realist sense
(however, compare). From this
point of view it is not unreasonable to try to analogously
interpret quantum mechanics in a realist way (just changing the
notions of state, physical quantity, as well as the equation governing
time evolution, but maintaining as much as possible the classical
way of thinking). Hence, the classical paradigm favours a `realist
interpretation of quantum mechanics'. Indeed, in their fundamental
discussion on the meaning of quantum mechanics both Bohr and Einstein
entertained a `realist interpretation of quantum mechanical observables'
(be it that Einstein's interpretation was objectivistic-realist,
whereas Bohr's can best be characterized as a contextualistic-realist one).

Platonism
In Plato's allegory of the cave it is
realized that our knowledge about reality may be compared with
the knowledge of cave dwellers who are able to look only into the
inward direction of the cave, and who obtain knowledge about what
is happening outside the cave only by means of the shadows cast
by the objects of the outside world on the cave's inside wall.
The upshot is that what we see is just an imperfect image of
reality. The difference between a realist and an empiricist
interpretation of quantum mechanics can be characterized in terms
of this allegory by asking whether the theory is describing
either the real (outside) quantum world, or just its shadows, the
"phenomena". Plato would consider unscientific any theory
aspiring to `just describe the shadows'. It would be in agreement
with Plato to assume that quantum mechanics could only derive its
scientific status from the deep insights it gives with respect to
the constitution of microscopic reality. Platonism
favours a `realist interpretation of physical theories'.

Methodological reason:

Inference to the best explanation
The hypothesis that `reality is like quantum mechanics says it
is' is often assumed to provide the best explanation of the
success of quantum mechanics. Here `reality' means the `reality
of the microscopic object itself' (rather than `just the
phenomena'). It is felt that an empiricist interpretation is overestimating the role of measurement and is
renouncing too much of the possibilities of quantum mechanics as an
explanatory device (for instance, atoms are felt to be stable objects
because they satisfy quantum mechanics; it would be absurd to assume that atoms are stable or unstable as a
consequence of measurements that are performed on them: ``To restrict quantum mechanics to be exclusively about
piddling laboratory operations is to betray the great
enterprise^{85}.''). Einstein too criticized the Copenhagen
interpretation for resigning to
contextualism, and for abandoning the
requirement that a physical theory describe `objective
reality' rather than a `reality that is interacting with a
measuring instrument'.

Reasons to prefer an empiricist interpretation of quantum mechanics

Ontological reason:

Experimental data are exclusively
`pointer positions of measuring instruments'
The main reason to prefer an `empiricist interpretation of quantum
mechanics' is that any experimental test of quantum mechanics
compares the probability distributions of quantum mechanics with experimental data
(relative frequencies) obtained by letting a microscopic object
interact with a measuring instrument and subsequently observing a
phenomenon corresponding to the `state the measuring
instrument is in' (so-called `pointer readings'). We expect
the pointer position to tell us something about the microscopic
object, but we should be aware that a `pointer reading of a
macroscopic measuring instrument' is not a direct observation of
a `property of the microscopic object'.
Distinguishing between `pointer readings' and `properties of the microscopic object'
solves one of the ontological riddles of quantum mechanics,
namely `that it would be impossible to attribute the measurement
result to the microscopic object' as a `property possessed already
before the measurement'. For instance, in a position
measurement it allegedly would be impossible to infer from a click of the
particle detector that this is a consequence of the particle
being there immediately before the measurement event (Jordan: ``By
measuring we force the electron to take up a particular position;
previously it was neither here nor there; it had not yet decided
about taking up any particular position.''). Within the `realist
interpretation of quantum mechanics' this is corroborated by the inapplicability of the
`possessed values principle': if it is a property of the microscopic object at all,
a quantum mechanical measurement result is an `emergent property'.
Nevertheless, the Copenhagen idea that `microscopic objects do not have
properties (like position) unless such a property is measured', is
rather counterintuitive. Not many experimenters will be prepared
to deny the causal relationship between a `click of a Geiger counter'
and the `objective
presence of a microscopic particle at the position of that counter'.
In the `empiricist interpretation'
the problem of possessed values of quantum mechanical observables
simply does not arise because it is completely natural that the
measurement result (the postmeasurement pointer position) comes
into being only during the measurement, and, hence, cannot be
attributed to the microscopic object as a `property possessed
prior to measurement'. According to the empiricist interpretation
such a property is not even described by quantum mechanics; it
requires a subquantum theory for its description. From
this perspective there is no reason to doubt that an electron had
a welldefined position prior to measurement, be it that
quantum mechanics does not tell us all about it: quantum mechanics only contains
certain (statistical) information about the reaction of a measuring instrument
that is brought into interaction with the microscopic object.

Epistemological reason:

Analogous cases
Consider e.g. `classical rigid body theory', which, as argued
here, does not allow an
`objectivistic-realist interpretation', although a
contextualistic-realist one might be feasible (though improbable). However, on closer
scrutiny it turns out that `classical rigid body theory' should
better be interpreted in an empiricist than in a
contextualistic-realist way. Indeed, due to its atomic
constitution there does not exist any physical context in which a billiard ball is really a rigid body,
even if no deviation from rigidity is observed. `Classical rigid
body theory' just describes the `phenomena within its
domain of application' even if there are atomic vibrations,
provided these are imperceptible on the macroscopic scale. Within the domain of
application of `classical rigid body theory' the atomic
constitution may become important if e.g. optical
measurements are performed.
Another analogy is provided by thermodynamics, which theory
describes `thermal phenomena within its domain of application'
(determined by the requirement that a condition of
molecular chaos^{0}
be satisfied, which here defines the context). Here, too,
the common usage of considering `temperature' as a property of an
object (rather than as a pointer reading on a thermometer scale)
is questionable (thus, if temperature were objectively defined by kinetic
energy, a way to get one's tea water boiling would be to drop one's
teapot from a sufficient height). Evidently, even a
contextualistic-realist interpretation of thermodynamics has its
problems, which can be solved by an empiricist one (doubtless, a thermometer
dropped together with the object will not show the increase of
temperature referred to above).
In the following it will be seen that similar problems exist for the
interpretation of quantum mechanics^{84}: also here a
contextualistic-realist interpretation, as involved in the
Copenhagen ideas of correspondence and
complementarity, is not
sufficient to solve all interpretational problems; an `empiricist
interpretation' is better suited for that purpose.

Methodological reasons:

Evading the paradoxes of a realist interpretation
The failure of the possessed values principle is
one of the many problems and paradoxes that plague the realist
interpretation (for instance,
particle-wave duality/complementarity, the
"measurement problem",
EPR nonlocality). Evading
these paradoxes is an important reason to prefer the `empiricist
interpretation', in which they do not arise.
`Realist interpretations' are inexhaustible sources of speculative ideas
about microscopic reality. It is questionable whether all these
ideas, although often very exciting, are equally fruitful. Some of
these are rather reminiscent of ideas about the world
aether, made obsolete by the (empiricist!) development of
relativity theory. Note that EPR nonlocality
is as nonempirical as the world aether was 100 years ago.
As another possibility to evade paradox it has been attempted
to simply assume the nonexistence of certain troublesome elements. For instance, in Mermin's
`correlation without correlata' it is assumed that quantum correlations
can be conceived without assuming the existence of `entities that
are correlated'. Apart from the fact that in quantum mechanical
measurements the correlata are evident (viz. the pointer positions of
measuring instruments; only if the distinction between EPR and EPR-Bell experiments
is ignored could this be overlooked),
in the `billiard ball analogy' Mermin's assumption would be analogous
to the idea that it would be impossible to explain the `correlation
observed to exist between the positions of different
points of the ball when it is moving as a rigid body', by the
`tight bindings of the positions of the different individual atoms
caused by the interatomic forces between neighbouring atoms'.
Sticking to an `empiricist interpretation of quantum
mechanics' is a way to evade being distracted from our
`experimentally relevant observations of pointer positions' toward
an imagined world of properties attributed to microscopic reality
on the basis of the `peculiarities of the mathematical formalism of quantum mechanics'.

Escaping from suggestions implied by the standard formalism
The applicability of the
generalized formalism to quantum mechanical
measurements is an important indication of the usefulness of an
empiricist interpretation. As a matter of fact, experimental data
are never crucially dependent on the (eigen)values a_{m} of observables
because the data refer to `pointer positions of a measuring
instrument', the labeling of which is man-made, and, for this
reason, rather arbitrary^{3}
(in the generalized formalism this is symbolized by denoting labels by
m rather than by a_{m}). Generalized observables
(represented by POVMs) are independent
of the precise way measurement results are labeled. For generalized observables the probability distributions are
only dependent on m, not on a_{m} (there is no empirically relevant operator
∑_{m} a_{m} M_{m} to provide eigenvalues).
For relations that do depend on the (eigen)values (like the
Heisenberg-Kennard-Robertson inequality
or the Bell inequality) there exist
alternatives which are independent of these values, and which yet have
essentially the same physical meaning (e.g.
entropic uncertainty relation and
BCHS inequality, respectively).
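That the probabilities of a generalized observable depend only on the labels m can be made concrete via p_m = Tr[ρM_m]. A minimal sketch for a hypothetical unsharp two-outcome qubit POVM (all matrices are illustrative choices, not from the text):

```python
def matmul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(X):
    return X[0][0] + X[1][1]

# an unsharp two-outcome POVM on a qubit: M_0 + M_1 = I, with positive
# elements that are not projections, so no eigenvalues a_m are singled out
M = [[[0.8, 0.0], [0.0, 0.3]],
     [[0.2, 0.0], [0.0, 0.7]]]
rho = [[0.5, 0.1], [0.1, 0.5]]               # the preparation

p = [trace(matmul(rho, M_m)) for M_m in M]   # p_m = Tr[rho M_m]
assert abs(sum(p) - 1.0) < 1e-12             # the POVM elements sum to I

# attaching values a_m to the outcomes (m -> a_m) only renames the entries
# of p; each probability p_m itself is unchanged
```

Any relabeling m → a_m leaves the list of probabilities intact, illustrating that no operator ∑_m a_m M_m is needed to state the empirical content of the measurement.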

Acknowledging the essential role of measurement
In a `realist interpretation of quantum mechanics' there is a tendency to ignore
the role played by `measurement' in assessing the meaning of the
quantum mechanical formalism. This role was stressed by the
founding fathers of quantum mechanics (compare the
Copenhagen interpretation), however without
having much response in textbooks of quantum mechanics, in which the theory
is generally presented in an objectivistic-realist way, and the measuring
instrument is completely left out of consideration.
One reason for this negligence may be the fact that initially
`measurement' was treated in a halfhearted way, the measuring instrument not
being treated as a quantum mechanical object dynamically interacting with the microscopic
object, but rather as just providing a `context for the microscopic object
to contract certain properties'. By restricting themselves to a
`contextualistic-realist interpretation
of the quantum mechanical formalism' the founding fathers only went part of the way
towards acknowledging the `crucial role played by measurement in quantum mechanics'.
Restriction to the standard formalism
(including von Neumann projection) contributed to the possibility of `avoiding
measurements that could experimentally demonstrate the crucial influence of the
interaction with the measuring instrument' (compare).
In an empiricist interpretation the role of measurement is given full credit. By exploiting the
generalized formalism of quantum mechanics it can
be seen that the Copenhagen
emphasis on `measurement' was not without a physical basis.
Presumably it is not very wise to ignore the role of measurement
(as well as preparation) in accounting for the strangeness of quantum
mechanics. By attributing `properties of the quantum mechanical
formalism' to `reality behind the phenomena' rather than to
`preparations and measurements' we take the risk to fool ourselves
by mistaking, as in Plato's allegory of the cave, the shadows for reality.
Contrary to Platonist contention, according to the `empiricist interpretation'
quantum mechanics is dealing with the shadows only.
Realist versus instrumentalist interpretation of quantum mechanics

In the literature on the philosophy of quantum mechanics the choice is
often between a realist and an instrumentalist interpretation rather than between
a realist and an empiricist one.

In an
instrumentalist interpretation^{0}
the mathematical formalism of quantum mechanics is considered as "just an instrument for
calculating measurement results". No physical meaning is
attributed to either state vector or observable. An `instrumentalist interpretation' is
in a pragmatic way directed towards the `mathematical representation of phenomena',
not asking too many questions that might be induced by the strangeness
of the new quantum phenomena (as compared to those of classical physics).

Remarks on the `instrumentalist interpretation'

In the past instrumentalism has been an acceptable way to circumvent the
problems and paradoxes raised by a `realist interpretation'.
Thus, Bohr entertained an `instrumentalist view with respect to
the quantum mechanical state vector or wave function'
(``There is no quantum world.'')^{8}.
In certain versions of the Copenhagen interpretation
this instrumentalism is even extended to the whole mathematical formalism.
Within the `context of discovery' this may have been a fruitful attitude because it prevented
asking `questions with respect to microscopic reality' that at that moment could not (yet) be answered
("Shut up and calculate").
On the other hand, within the `context of justification' we are interested in here,
`instrumentalism' may become a drawback.
Indeed, an `instrumentalist interpretation of quantum mechanics' suffers from too large a vagueness, thus causing
confusion. For instance, in the `instrumentalist interpretation' it is left undecided whether the
quantum mechanical term `measurement result'
is referring to a `property of the microscopic object' or to a `pointer position of a
measuring instrument' (compare).
This ambiguity has played an important role in the EPR discussion
by allowing EPR and EPR-Bell experiments to be confounded.

At least in Bohr's view, the
Copenhagen interpretation is
instrumentalist with respect to the state vector.
Unfortunately, in many textbooks of quantum mechanics the way the
state vector is dealt with can hardly be distinguished from a
realist one, even when adherence to the Copenhagen (orthodox)
interpretation is acknowledged. The paradoxes of quantum mechanics are
mainly caused by these realist tendencies; they might be avoided
by a more strict instrumentalism. Thus, a `strictly instrumentalist
version of the Copenhagen interpretation' would not be liable to
the paradoxes stemming from von Neumann's projection postulate, which
would not have the undesirable features it may have
in a `realist interpretation' (like, for instance, EPR nonlocality).
 Empiricist versus instrumentalist interpretation
The empiricist interpretation is sometimes confounded with
the `instrumentalist interpretation' because both interpretations shun assertions with respect
to `reality behind the phenomena'. However, there are two differences between these interpretations:
i) whereas the `instrumentalist interpretation' does not attribute a physical meaning to the state vector at all,
the `empiricist interpretation' looks upon it as referring to a preparation procedure;
ii) in the `instrumentalist interpretation' it is left unspecified whether
a `quantum mechanical measurement result a_{m}' is referring to a `property of the microscopic object
(suitably amplified)' or to
a `pointer of a measuring instrument'; the `empiricist interpretation', on the other hand, makes an unambiguous choice.
Both the empiricist and realist interpretations remedy the vagueness of the
`instrumentalist interpretation' by specifying in an unequivocal way a physical reference for each term of the
`mathematical formalism of quantum mechanics'; this is done in
different ways, however: the `empiricist interpretation'
strengthens the empiricist tendencies
taken up in a rather half-hearted way by instrumentalism, whereas the realist tendencies probably stem from a
classical paradigm.

By not attributing `quantum mechanical concepts' as properties to the microscopic object
but rather to the measuring instrument an
`empiricist interpretation of quantum mechanics' is gaining the
same advantage as an `instrumentalist interpretation' in dealing with the
problems and paradoxes of quantum mechanics. This is achieved
without introducing any of the confusing vagueness characterizing
the `instrumentalist interpretation'. Therefore in my opinion the `instrumentalist
interpretation of quantum mechanics' is obsolete by now.
It is possible to upgrade the `Copenhagen interpretation' in a consistent way from
its `instrumentalist/realist makeup' to an `empiricist interpretation'
liable to be called a neo-Copenhagen interpretation (cf.
Publ. 53).
`Objectivity versus subjectivity' in quantum mechanics

I restrict myself here to the situation where the choice has been made to accept quantum mechanics as the theory describing the
`pre-measurement phase of
measurement within the microscopic domain'. Hence, I ignore
a possible subjectivity inherent in theory choice, which is particularly important because of the
theory-ladenness of measurement/observation. The possibility of doing so
stems from a structuralist view of physical theories (cf. footnote 30),
in which each physical theory turns out to have a restricted `domain of application'.
What I cannot ignore is the difference with respect to the issue of `objectivity versus subjectivity'
stemming from different interpretations of the mathematical formalism of quantum mechanics. We shall
have to deal with two different dichotomies with respect to choices of interpretations, viz. the
`individual-particle interpretation' versus `ensemble
interpretation' and `realist interpretation' versus `empiricist interpretation'
dichotomies.

Epistemological versus ontological aspects of `objectivity versus subjectivity'

The issue of `objectivity versus subjectivity' is an
epistemological one in the first place, since it refers to
our knowledge: either our knowledge about some object may
be `objective knowledge', describing the object `as it exists
independently of the knower', or knowledge may be coloured by the knower's subjective beliefs
(possibly based on having more information than another knower).

Objectivistic versus subjectivistic ontologies
However, as far as our knowledge is about physical reality, the
dichotomy is liable to obtain an ontological significance as well.
Restricting ourselves to the `knowledge encompassed by quantum
mechanics', it is here that `interpretations' come in because they specify the object
the knowledge is referring to: either an `individual object' or an `ensemble' (compare the
dichotomy of
`individual-particle interpretation' versus `ensemble interpretation');
either the `microscopic object' or the `preparing and measuring instruments/procedures'
(compare the dichotomy of `realist interpretation' versus `empiricist interpretation').
The corresponding reality may either be thought to be
independent of our knowledge (thus corresponding to an `objectivistic ontology assuming
existence of an objective quantum reality');
or the reality may be thought to depend on our knowledge
(either in the interactional way implicit in von Neumann's projection postulate
according to which an observation by an observer is causing the state of the microscopic
object to change, or in a relational way, compare).
The latter views may even entertain a subjectivistic ontology.
Proponents of such a subjectivistic ontology are, e.g., London and Bauer, Wigner, but some traces can also be found with Bohr and Heisenberg.
An early adversary was Schrödinger, who preferred an `objectivistic ontology' not referring to a human observer^{90}.

Reluctance to accept a subjectivistic ontology has caused physicists like
Einstein to stress the epistemological nature of this subjectivity
(implemented by an ensemble interpretation of the quantum mechanical state vector).
We, indeed, do not have any reason to believe, as suggested by von Neumann's projection postulate,
that a human observer is able to influence physical reality by merely looking.
The idea of a `subjectivistic ontology' is made obsolete by realizing that the
human observer is as dispensable
within quantum mechanics as he is within classical mechanics,
and that `observation within the quantum domain', nowadays no longer being based on human perception,
should meet a requirement of intersubjectivity quite analogous to the one
imposed within the macroscopic domain. In particular,
Schrödinger's cat paradox, allegedly demonstrating the
influence of an `individual observation' on an `individual cat's state', should preferably be discussed in an
intersubjectivist vein, viz. as a `problem of quantum mechanical
measurement' rather than as a `means to kill an individual cat by
just observing it'. Unfortunately, `intersubjectivity' is often referred to as `objectivity',
thus blurring the distinction between the ontological and epistemological meanings
the concept of `objectivity' may have.

From `ontological objectivity' to `noncontextuality'
There is still another way in which the ontological notion of `objectivity'
may arouse ambiguity, viz. when not the `relation of the microscopic
object to the observer' is considered but rather the `relation of the microscopic
object to other objects in its environment', in particular the `measurement arrangement'.
It might be advisable within this context to refer to `ontological objectivity'
as noncontextuality^{82}, thus stressing that the important
point is `whether or not the microscopic object can be regarded as
an isolated object, independent of preparing and measuring
instruments that are present in its environment', more particularly,
whether or not `measurement results obtained during the measurement phase'
can be attributed to the microscopic object as `objective/noncontextual properties possessed
prior to and independent of the measurement'.^{67}
The crucial distinction between classical and quantum physics is the `fundamental ontological dependence
of the microscopic object on the
measurement context whenever a measurement on it is carried out', such a dependence on the measurement context
being assumed to be irrelevant in a measurement on a macroscopic object.

Objectivity, (inter)subjectivity, and contextuality in `realist interpretations of quantum mechanics'
The roles of objectivity, (inter)subjectivity, and contextuality in quantum measurement as encountered in
`realist interpretations of quantum mechanics' are symbolized in the following
table illustrating the difference between `observation (macroscopic interface)' and
`measurement (microscopic interface)' (compare).



In this picture the Heisenberg cut is split into two interfaces, made necessary
in order to explicitly take into account the role of `quantum measurement'
as opposed to `human observation'. As a result of the classical paradigm
this difference is underestimated in `realist interpretations of quantum mechanics'. In
these interpretations the measuring
instrument is completely ignored in the quantum mechanical description, and `measurement results'
(referred to as `quantum phenomena' or `measurement phenomena') are
treated as properties of the microscopic object, possessed either before (Einstein),
during (Bohr), or after (Heisenberg) the measurement.
As a result features that are easily unraveled in the `empiricist interpretation'
(compare) are hard to distinguish.
Apart from the identification
of `measurement results' and `properties of the microscopic object' the issue is obscured by the
possibility of maintaining either an `individualparticle interpretation'
or an `ensemble interpretation' of the quantum mechanical formalism.
In the present discussion I
have taken into account the `impossibility of noncontextuality of the microscopic interface' as argued
here.
 Remarks on `Objectivity,
(inter)subjectivity, and contextuality in realist interpretations'
 Macroscopic interface
In `realist interpretations of quantum mechanics' a quantum mechanical measurement result corresponds to a
`macroscopic measurement phenomenon (e.g. a track in a Wilson chamber)', its
interface with the human observer being in the macroscopic domain.
The `measurement phenomenon' is subjected to the same objectivity/intersubjectivity as
is any other macroscopic phenomenon. We therefore may assume for the `measurement phenomenon' the
same `ontological independence from the observer (noncontextuality)' as is usual in classical physics.
The macroscopic interface may be `subjective in an epistemological sense' because different observers may
have different additional knowledge (as, for instance, may be the case in an EPR experiment
where Alice may or may not send to Bob information about her measurement results).
This classical feature has not been taken into account in the above table because
after both Alice and Bob have set up their measurement arrangements, their measurement
results are as objective/intersubjective as in general macroscopic phenomena are.
 Microscopic interface
Within quantum measurement the crucial interface is rather that between the `microscopic object' and
`that part of the measuring instrument that is sensitive to
the microscopic information' (for instance, in the Stern-Gerlach measurement this interface is
situated where the interaction between atom and magnetic field takes place). Since this interaction is a microscopic process
it is liable to be described by quantum mechanics (pre-measurement).
Since the direct influence of the observer does not seem to reach that far into the interior of the measurement
process, the notion of `subjectivity' does not apply here. However, the quantum mechanical character of the
interaction between object and measuring instrument entails an `ontological contextuality'
that marks a fundamental difference with `classical measurement'. For a given measurement arrangement
(encompassing possible classical information transfer protocols)
this contextuality does not imply any subjectivity of the measurement results.
The idea of `ontological contextuality' is at the
basis of the contextualistic-realist interpretation of quantum mechanics.
 Is an objectivistic ontology possible?
Einstein combined his subjectivistic epistemology
with a noncontextual (objectivistic) ontology. In particular,
due to his objectivisticrealist interpretation of
quantum mechanical observables he assumed
the individual quantum mechanical measurement
result a_{m} to be an `objective property
possessed by the microscopic object prior to measurement'
(compare the `possessed values principle';
see also Einstein's view on `determinism',
and determinism versus causality). As a consequence of the
Kochen-Specker theorem^{0}
by now it is well known that Einstein's approach is impossible in general.
However, this does not imply that an objectivistic/noncontextual ontology would be impossible;
it just means that an `objectivistic-realist interpretation
of quantum mechanical observables' is not
possible: a quantum mechanical measurement result may refer to
microscopic reality, but probably not in the sense of describing that reality in a rigorous manner
(compare).
A denial of such a reference would amount to confounding
ontological and epistemological issues: a `contextualistic-realist interpretation of quantum mechanics' may be
consistent with a `noncontextual (objectivistic) ontology', be it that our knowledge of that reality
(represented by our physical theories) may be co-determined by the measurement arrangement and, hence, may be
contextual only (compare, also).

Objectivity, (inter)subjectivity, and contextuality in the `empiricist interpretation of quantum mechanics'
There are two reasons why the issue of `objectivity, (inter)subjectivity, and contextuality' is more perspicuous
in the `empiricist interpretation of quantum mechanics' than it is in `realist interpretations':
i) in the empiricist interpretation there is a clear distinction between `measurement results
(being properties of the measuring instrument)'
and `properties of the microscopic object'; `measurement phenomena'
are identified with `pointer positions of the measuring instrument';
ii) the `empiricist interpretation' is an ensemble interpretation,
thus making it possible to evade confusion like the
one alluded to in footnote 83.
 Remarks on `Objectivity,
(inter)subjectivity, and contextuality in the empiricist interpretation'
 Dispensability of the human observer
In contrast to the realist interpretations (where the measuring and preparing instruments are
not explicitly dealt with), in the empiricist interpretation it is evident that
the human observer is
screened off from the microscopic object at the
`macroscopic interfaces of the measuring and preparing instruments'.
As a result it is even more evident than in the realist interpretations that the human
observer/experimenter does not deal directly with the
microscopic object; he is just dealing with the knobs of his preparing apparatus and with the
pointer positions of his measuring instruments (or even with the data collected
from the measurements by a computerized data retrieval system), thus
relegating to the macroscopic domain any direct influence exerted by him on the measurement
(Bohr).
 Ontological and epistemological meanings of
quantum mechanical measurement results
In the empiricist interpretation measurement result
a_{m} has both an
ontological meaning (in the sense of referring to an `objective/noncontextual property
of the measuring instrument', viz. its pointer position) and an epistemological one
(in the sense of representing `knowledge on the microscopic object').
Of course, individual observations may be subjective
(for instance, due to misreading an `8' for a `3' on a pointer scale,
or because different individual observers have different information, like Alice and Bob
may have when performing an EPR-Bell experiment).
But the requirement of intersubjectivity can be met in principle
by employing sufficiently many trustworthy assistant observers: quantum mechanics does not need to be interpreted
as `describing the observations of an individual observer' any more than is classical mechanics.
In the empiricist interpretation `von Neumann projection in an EPR experiment'
is interpreted as an `application of conditional preparation' (compare), the conditional
preparation being realized by selecting
a subensemble of particles 2 conditional on Bob's knowledge about the measurement results Alice got from
the correlated particles 1.
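The conditional preparation just described can be sketched in a toy simulation; the perfectly anticorrelated source and the sample size are illustrative assumptions, not part of the text:

```python
import random

random.seed(0)

# Hypothetical toy ensemble: perfectly anticorrelated outcome pairs, as for
# spins measured along the same axis on a singlet-like source (assumption).
pairs = [(s, -s) for s in (random.choice([+1, -1]) for _ in range(10_000))]

# Conditional preparation: Bob keeps only those particles 2 for which
# Alice reported outcome +1 on the correlated particle 1.
subensemble_2 = [b for a, b in pairs if a == +1]

# Within the selected subensemble every particle 2 yields -1.
print(all(b == -1 for b in subensemble_2))     # True
print(len(subensemble_2) / len(pairs))         # close to 0.5 for this source
```

Selection of the subensemble is a purely classical operation on the recorded data, which is why, in this reading, no nonlocal influence on particle 2 is needed.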
 Information transfer from microscopic object to measuring
instrument as a physical process
The empiricist interpretation makes it possible to renounce the question of the
`subjectivity of knowledge' in favour of a `physical
investigation of the manner in which, starting from the initial state of the microscopic object,
the final (post-measurement) state of the pointer is established by the measurement'
(compare).
`Psychophysical consideration of the information transfer
from the microscopic object to the human observer'
can be replaced by a `physical investigation of the transfer of information
from the microscopic object to the measuring instrument'.
As a result, by using the empiricist interpretation
we seem to be better equipped to treat
quantum mechanics as the ordinary physical theory
conjectured here,
even though at present we do not possess a theory
rigorously describing the whole measurement process
ranging from the microscopic to the macroscopic domain.

In view of my prejudice
with respect to quantum mechanics as an ordinary physical theory,
describing measurement results not relying on influences
exerted by human observation or by the human mind, `subjectivisticontological views'
will not be discussed here any further. This holds for von Neumann
projection as far as it is thought to be caused by an `individual
observer merely looking at the object', as well as for
relational views based on a `relation between a microscopic object
and an individual human subject'. It should be remembered, however, that von Neumann projection
may play an objective role as a preparation procedure.
`Irreducibility versus reducibility' of quantum probability

The subject discussed under this heading refers to the question of
whether the `quantum mechanical probability p_{m}
of the Born rule' is a `fundamental property of an individual
microscopic object', or whether it is just a `relative frequency in an
ensemble of measurement results',
each measurement result being reducible to a `property the microscopic object
possessed initially (i.e. immediately before the measurement)'.
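A minimal sketch of the dichotomy (the two-valued observable and the value of p_{m} are assumed purely for illustration): an `irreducible' model generates each outcome only at measurement time, whereas a `reducible' model assigns each ensemble member a pre-existing value that the measurement merely reads off; the relative frequencies come out the same in both.

```python
import random

random.seed(1)
p1 = 0.75      # assumed Born probability of outcome a_1 (illustrative)
N = 100_000

# Irreducible indeterminism: the outcome comes into being at measurement time.
irreducible = [1 if random.random() < p1 else 0 for _ in range(N)]

# Reducible indeterminism: each object already possesses its value a_m,
# fixed at preparation; the measurement faithfully reads it off.
prepared = [1 if random.random() < p1 else 0 for _ in range(N)]
reducible = [v for v in prepared]              # faithful read-out

freq = lambda outcomes: sum(outcomes) / len(outcomes)
print(abs(freq(irreducible) - p1) < 0.01,      # relative frequencies agree
      abs(freq(reducible) - p1) < 0.01)        # with p1 in both models
```

That single-observable statistics cannot distinguish the two models is precisely why the dichotomy is an ontological rather than a statistical issue.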
The `irreducibility versus reducibility' dichotomy
is an ontological issue since it is referring to the `physical existence or nonexistence
of such an initial property, independently of the theory by which
that property is described' (in particular, it is not required that the property be
described by quantum mechanics; it might be describable by some subquantum theory).
A possible `characterization on the basis of a subquantum theory'
of the `difference between reducible and irreducible indeterminism'
is given here.

The Copenhagen preference of
`irreducibility' is codified in the quantum postulate.
Historically, important sources of the `assumption of irreducibility' are:
i) (experimental source): observation of `exponential decay processes',
implying that the decay rate depends only
on the `number of decaying particles' present at a particular moment, and, hence, is independent of
the history of the individual particle (`a decaying particle has no age');
ii) (theoretical source): the superposition principle.
It is felt that the state described by the state vector |ψ> =
∑_{m}c_{m}|a_{m}> is different from
the state described by the density operator
ρ = ∑_{m}|c_{m}|^{2}
|a_{m}><a_{m}| (even though the states cannot be distinguished by a measurement
of observable A), the latter state allegedly describing an `ensemble of particles each of which has a
well-defined value a_{m} of A (von Neumann ensemble)'.
Within the `Copenhagen interpretation' the distinction between the two states has been interpreted as
signifying a distinction between `irreducibility' (the superposition) and `reducibility' (the ensemble);
iii) (philosophical source): the logical positivist/empiricist abhorrence of metaphysics,
`reducibility of quantum probabilities to the existence of hidden variables' being considered unscientific.
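The distinction invoked in source ii) can be made explicit; the following is the standard calculation, written out for convenience:

```latex
% Superposition (pure state) versus von Neumann mixture:
|\psi\rangle = \sum_m c_m\,|a_m\rangle, \qquad
\rho = \sum_m |c_m|^2\,|a_m\rangle\langle a_m| .
% For a measurement of A both assign the same Born probabilities,
p_m = |\langle a_m|\psi\rangle|^2 = \langle a_m|\rho|a_m\rangle = |c_m|^2 ,
% but the pure state retains the off-diagonal (interference) terms
|\psi\rangle\langle\psi| - \rho =
\sum_{m\neq m'} c_m c_{m'}^{*}\,|a_m\rangle\langle a_{m'}| ,
% which can be revealed by measuring an observable not commuting with A.
```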

Remarks on `irreducibility versus reducibility'

`Irreducibility' is at the basis of the Copenhagen thesis of the
`completeness of quantum mechanics'. It is associated with
`indeterminism'. Einstein was a strong proponent
of `reducibility' (compare the discussion of the EPR experiment (1935),
which is actually meant to be a `disproof of irreducibility'). Notwithstanding the fact that the experimental falsification
of the
Bohr-Kramers-Slater (BKS) theory^{0}
(featuring `irreducibility of quantum probability') demonstrated that a certain determinism is present
in certain quantum mechanical measurement processes (viz. conservation of energy and momentum
in the Compton-Simon experiment), the `Copenhagen interpretation' has for a long time been the dominant one.

The issue of `reducibility versus irreducibility' is closely related to
the question of whether quantum mechanical measurement processes are either
deterministic or indeterministic in an ontological sense, i.e.
whether or not a measurement result is uniquely determined by a `property the microscopic object possessed
immediately preceding the measurement'.
Note that this question is not answered (in the negative) by the
Kochen-Specker theorem^{0},
which is a `theorem of (standard) quantum mechanics', and therefore might have only a restricted applicability.
Moreover, the answer may depend on which interpretation of the mathematical formalism
is adopted (compare).

Unfortunately, the `irreducibility versus reducibility' dichotomy,
being a purely ontological issue, is often equated with the
ontic versus epistemic^{7} dichotomy,
`irreducibility' being assumed to have an ontological meaning, but `reducibility'
being associated with (epistemological) `lack of knowledge'
(implemented by Einstein's allegation of incompleteness of quantum mechanics).
However, if probabilities p_{m} are taken as relative frequencies in an
ensemble (as is done by Einstein), then they can be seen as
`properties of the ensemble', and, hence, they are ontological (albeit not in the sense of `properties of an
individual object').
In order not to confound ontological and epistemological issues it is advisable to omit
the `ontic versus epistemic' dichotomy (compare).

The issue of `irreducibility versus reducibility' can also be met in the literature under the headings:

i) `Probabilistic versus statistical' interpretation of the Born rule
The terminology `probabilistic versus statistical' refers to
`irreducible quantum probabilities' as being probabilistic,
whereas `reducible probabilities' are referred to as being
statistical^{83}.
In evaluating the `probabilistic versus statistical' dichotomy it is important to
answer the question ``Probabilities of what?''
Unfortunately, the Born rule does not answer this question
because it does not specify what is the physical meaning of the term `quantum mechanical measurement result'. There are
three possibilities:
a) as in `classical mechanics', quantum mechanical measurement result a_{m} may be equated with
a `property the object possessed prior to and independent of the measurement' (and, ideally,
`posterior to that measurement' too);
this classical view has been endorsed by Einstein;
b) stressing the `empirical importance of measurement'
in the `Copenhagen interpretation' quantum mechanical measurement result a_{m} is equated, in agreement with
von Neumann's strong projection postulate,
with a `postmeasurement property of the microscopic object';
c) in the empiricist interpretation quantum mechanical measurement result
a_{m} is equated
with a `final pointer position of a measuring instrument' (note that a) and b) correspond to different versions of the
realist interpretation).
Since neither in b) nor in c) does the quantum mechanical measurement result a_{m} refer to the
initial state of the object, the problem of a `probabilistic interpretation of the Born rule'
does not arise in these interpretations:
the mathematical formalism of quantum mechanics is simply thought
not to say anything about the issue of (ir)reducibility.

Remarks on the `probabilistic versus statistical' dichotomy:

The terminology `probabilistic versus statistical' purports to reflect the difference between `quantum probability' and
`classical statistics', the latter satisfying the
Kolmogorovian axioms^{0},
whereas quantum probability does not satisfy these (as follows, for instance, from
violation of the Bell inequality by certain quantum probabilities).
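The violation referred to above can be checked numerically for the standard singlet-state correlation function; the CHSH combination and the angle settings below are the usual optimal choices, assumed here for illustration:

```python
from math import cos, pi, sqrt

# Quantum mechanical correlation of spin measurements along directions a, b
# for the singlet state: E(a, b) = -cos(a - b).
def E(a, b):
    return -cos(a - b)

# CHSH combination with the standard optimal settings.
a1, a2, b1, b2 = 0.0, pi / 2, pi / 4, 3 * pi / 4
S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))

# Any Kolmogorovian (local hidden-variables) probability model obeys |S| <= 2;
# quantum mechanics reaches 2*sqrt(2) (the Tsirelson bound).
print(round(S, 3))  # 2.828
```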

The crucial idea behind the `statistical interpretation of quantum mechanics'
is the notion of `objective property',
to the effect that the microscopic object is thought to `have as a property
the measurement result a_{m} already before the measurement'
(compare). In this interpretation the probabilities p_{m} are thought just to reflect
our `lack of knowledge' about the well-defined value a_{m} an individual object is thought to have in an `ensemble prepared by a quantum
mechanical preparation procedure'.
Note that this reference to `knowledge' is the origin of the custom to denote the `statistical interpretation' as an
epistemic interpretation.

In the `probabilistic interpretation' a reduction to an `initial value of the observable,
possessed by the object', is assumed to be impossible
because the microscopic object is thought prior to measurement not to have such a property
(e.g. Jordan).
In the probabilistic interpretation of the Born rule a
probability p_{m} is thought to be an `intrinsic
property of an individual microscopic object'.
Quantum statistics is thought to reflect an
`irreducible indeterminism of quantum measurement
processes', not reducible to any `lack of knowledge' about
specific circumstances that could cause a `well-defined individual
measurement result' to be obtained in a deterministic way.

If the terminology is used consistently, then the `probabilistic versus statistical' dichotomy
is equivalent to the `irreducibility versus reducibility' one. However, such consistency cannot always be observed.
Thus, in the empiricist interpretation
probabilities p_{m} refer to `relative frequencies in an ensemble of pointer
positions of a measuring instrument'; here the `statistical interpretation' is not
even applicable because `pointer positions' are different from `properties of
the microscopic object'. Therefore the `irreducibility versus reducibility' terminology seems preferable.
Nevertheless for historical reasons I occasionally refer to the `probabilistic versus statistical' dichotomy,
taking into account that it may be possible to generalize the `statistical interpretation' so as
to refer to `subquantum properties' as `properties possessed by the microscopic object prior to measurement'
rather than to `quantum mechanical measurement results' (compare).

ii) `Objective versus subjective probability' in quantum mechanics

The `objectivity versus subjectivity' dichotomy
is often used as referring to the
`irreducibility versus reducibility' dichotomy, `objective probability' being
equated with `irreducible probability', and `subjective probability' with `reducible probability'.

Remarks on `objective versus subjective probability':

In my opinion the `objectivity versus subjectivity'
dichotomy should better not be applied to quantum mechanical probabilities, because, as with the
ontic versus epistemic dichotomy, there is a danger
of confounding ontological and epistemological issues. Thus,
whereas an "ontic" interpretation allegedly is `objective', an "epistemic" interpretation is thought to
be `subjective' in the sense that `reducible probability' is referring to the `subjective
knowledge an observer may have about some property of a microscopic object, even though,
as a member of an ensemble, the object
may possess that property with certainty'. However, in that case that probability is an
`objective property of that
ensemble', the `ensemble' being a physical object as real as an `individual object'
(compare).
Whereas the `irreducibility versus reducibility' dichotomy has an ontological meaning only,
the `objectivity versus subjectivity' dichotomy is seen to have both an ontological and an epistemological one.
This is particularly evident in the empiricist interpretation
in which a `quantum mechanical measurement result' is thought to refer both `ontologically to a pointer position
of a measuring instrument' and `epistemologically to a property of a microscopic object'.

Referring to `irreducible probability' as `objective probability' has as an additional disadvantage
the confusion caused by the fact that according to the Copenhagen
interpretation `irreducibility of probability' is a consequence of
disturbance by measurement, and, therefore, cannot be
`objective' in the sense defined here.
`Determinism versus indeterminism' in quantum mechanics

Ambiguous use of the notion of `(in)determinism'
There are at least three ways in which the notion of `(in)determinism' may be
used within quantum mechanics:
i) the notions of `(in)determinism' and `(a)causality' may be used
in an indiscriminate way;
ii) the notion of `(in)determinism' may refer either to the
free evolution or to the measurement process, or to both of these;
iii) the notion of `(in)determinism' may be used either in
an epistemological sense (as a property of a
physical theory) or in an ontological one (as a property of physical reality).

i) (In)determinism versus (a)causality

`Determinism' is understood to signify that `the state of a system is uniquely determined by an initial one'
(as implemented by the uniqueness of the solutions of the system's evolution equations).
By `causality' I will understand `liability to causal explanation'.
In order to avoid confusion I will stick to these characterizations, even though in the quantum mechanical
literature `determinism' and `causality' are often identified.

The confusing identification of
`determinism' and `causality' is a consequence of the idea that measurement result
a_{m}, obtained in a measurement of observable A,
could be explained by the assumption that `the observable had
that value immediately before the measurement'.
Although such `explanation by determinism' (if
suitably generalized) may be a possibility
it is a `causal explanation' all the same. Hence, the issue is `causality' in the first place.

Remarks on `(in)determinism versus (a)causality':

The possibility or impossibility of
`explanation by determinism' marks the distinction between the
`statistical' and `probabilistic' interpretations of the Born rule,
the former contrary to the latter assuming such an explanation to be possible.

The issue of `explanation by determinism
of the quantum mechanical measurement result a_{m}' is at the basis of
the possessed values principle as well as
the notion of faithful measurement. It hinges on the existence of
an element of physical reality,
introduced by Einstein in the famous EPR paper
to act as the (deterministic) cause of measurement result a_{m}, thus
implementing Einstein's view that `the good Lord does not throw dice'.
Einstein's idea of an `element of physical reality as a property of the microscopic object,
possessed independent of measurement', was rejected by the
Copenhagen interpretation
on the basis of empiricist reservations
(Hume,
Mach)
with respect to the "metaphysical" notion of `causality'
as well as by the `Copenhagen quantum postulate'.

Inapplicability of
the possessed values principle (as demonstrated by the
Kochen-Specker theorem)
makes obsolete Einstein's proposal to consider the measurement result a_{m} itself as an
`element of physical reality'. This circumstance, based on the `mathematical formalism of quantum mechanics',
might seem to endorse `Copenhagen indeterminism' as against `Einstein's determinism'.
Nevertheless, from a methodological point of view
abandoning `causality' or `explanation by determinism' is not very attractive.
Few experimental physicists, if pressed to critically compare `the way they are used to discussing their experiments'
with `what they may have learned during their quantum mechanics courses',
will be ready to take seriously Jordan's assertion
to the effect that `a Geiger counter may have been triggered by a particle that immediately before the measurement
need not have been at the counter's position'.

`Explanation by determinism' seems to be particularly useful for explaining
in EPR-Bell experiments the possible occurrence of
strict correlations between values of certain standard spin observables measured in causally disjoint
regions of spacetime (e.g. in the singlet state of two spin-1/2 particles a joint measurement
of equal spin components exclusively yields opposite values, which could not be so easily explained
if beforehand that correlation were not present as an objective property).
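The strict anticorrelation invoked here is a theorem of the quantum formalism and can be checked numerically. The following sketch (using numpy; all names are illustrative, not part of the original text) computes the joint expectation value of two spin components in the singlet state, confirming that equal components are perfectly anticorrelated:

```python
import numpy as np

# Pauli matrices for the spin components of each spin-1/2 particle
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Singlet state |ψ⟩ = (|01⟩ - |10⟩)/√2 in the basis |00⟩,|01⟩,|10⟩,|11⟩
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def joint_expectation(a, b, state):
    # Expectation value ⟨state| a ⊗ b |state⟩ for the two-particle system
    return np.real(state.conj() @ (np.kron(a, b) @ state))

# Equal spin components are strictly anticorrelated in every direction:
print(joint_expectation(sz, sz, singlet))  # → -1.0
print(joint_expectation(sx, sx, singlet))  # → -1.0
```

An expectation value of -1 means the joint measurement yields opposite values with certainty, which is the correlation discussed above.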

If, as a consequence of the Kochen-Specker theorem, quantum mechanics is not able to implement
`explanation by determinism', it seems natural for this purpose to rely
on a different (subquantum) theory encompassing a more general
subquantum element of physical reality
as a `deterministic cause of quantum measurement result a_{m}'. In agreement with the
empiricist interpretation
of the quantum mechanical formalism (in which a_{m} does not refer to the microscopic object but to
the measuring instrument) the possibility of such
subquantum properties of microscopic objects may be acknowledged to be as natural as is
the assumption that a billiard ball consists of atoms
(rather than being a rigid body), properties of the
atoms being described by a `theory different from the classical theory of rigid bodies'.

The Copenhagen idea that quantum mechanics is
incompatible with `causality' (in the sense of `explanation by determinism')
has been rather generally accepted following the discussions on the
EPR paper. However, rejection of Einstein's `reliance on causality'
was not based on the mathematical formalism
(the Kochen-Specker theorem only came 30 years later), but it was a consequence of
a logical positivist/empiricist abhorrence of metaphysics,
shunning `hidden variables'
even if they were disguised as `quantum mechanical measurement results' (as was the case
with the EPR proposal).

It seems to me that, because the discussions were generally confined to the
realist interpretation of quantum mechanics,
the possibility of `causes having a subquantum nature' (so plausible in an
empiricist interpretation) was brushed aside too easily,
and the Copenhagen idea of `indeterminism' or `acausality' accepted too readily
(compare).

ii) (In)determinism during free evolution and/or measurement

(In)determinism may refer either to the situation in which the system is isolated
during free evolution^{16},
or to its behaviour when it is interacting with a measuring instrument.

In quantum mechanics free evolution of the state vector is described by the
solutions of the Schrödinger equation.
This suggests an even more deterministic
behaviour than is usual in classical mechanics (thus, for a `three-body system'
solutions of the quantum mechanical evolution
equation may be less chaotic than the solutions of the corresponding classical equations).

Nevertheless, a deterministic behaviour of
the quantum mechanical state vector during free evolution is sometimes
combined with an allegation of indeterministic behaviour of quantum mechanical observables,
to the effect that the values of these observables are thought to perform
`quantum jumps'.
For instance, a free quantum particle may be thought to move in a chaotic manner, jumping to and fro
within the region where the wave function is non-vanishing,
thus within the quantum domain allegedly causing
well-defined trajectories (like those of classical mechanics)
to be nonexistent during free evolution.

`Measurement indeterminism' in the first place refers to the behaviour of the state vector during measurement.
It has been stipulated by von Neumann as well as by Dirac to be described by von Neumann's
projection postulate. Hence, unless the initial state is an eigenvector of
the measured observable, the state vector allegedly behaves indeterministically during measurement.

`Deterministic behaviour of quantum mechanical observables during measurement' is at issue
in explanation by determinism
of quantum mechanical measurement result a_{m}. It played a key role in
Einstein's challenge of the Copenhagen idea
of `completeness of quantum mechanics',
culminating in the EPR paper.

Remarks on `(in)determinism during free evolution and/or measurement':

`(In)determinism during free evolution' was not at issue in the discussions on the
`completeness of quantum mechanics as
endorsed by the Copenhagen interpretation'.
Indeed, Heisenberg acknowledges the possibility of the existence of
deterministic trajectories during the
free evolution of an electron, be it that these are considered by him to be metaphysical
as a consequence of their unobservability (such unobservability being
attributed to the impossibility of
a `simultaneous determination of position and momentum' as a consequence of
`irreducible indeterminism of
such a measurement').

Indeterminism of an observable's value a_{m} during free evolution
is permitted by the standard mathematical formalism of quantum mechanics,
since in general only the probabilities p_{m}(t) =
|<a_{m}|ψ(t)>|^{2}
are completely determined by applying the Schrödinger equation to the initial state
|ψ(0)> = ∑_{m} c_{m}|a_{m}> (in which c_{m} = <a_{m}|ψ(0)>).
This even holds true if the observable is a
constant of the motion.

On the other hand, with respect to a constant of the motion
a case could be made for constancy of measurement result a_{m}
in the deterministic sense that its value would be independent of
the time of the measurement.
Although this cannot be verified experimentally, it would be hard to explain the time-independence
of the (experimentally testable) relative frequencies p_{m} = |c_{m}|^{2} for
arbitrary quantum mechanical states if no explanation were possible
on the basis of a deterministic behaviour of a_{m}
(or of the subquantum element of physical reality determining a_{m}).
If such an explanation were not available, it would be incomprehensible why the `mechanism causing the
alleged quantum jumps of the constants of the motion' would
in a conspiratory way have to prevent its activity from being observable
by keeping p_{m}(t) constant.
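The constancy of p_{m}(t) for a constant of the motion is easily illustrated numerically. In the sketch below (a hypothetical three-level system with arbitrary numbers, added for illustration) the observable A commutes with the Hamiltonian H, and the Born probabilities |<a_{m}|ψ(t)>|^{2} come out the same at every time t:

```python
import numpy as np

# Hypothetical three-level system; H and A are simultaneously diagonal,
# so [A, H] = 0 and A is a constant of the motion.
H = np.diag([1.0, 2.0, 5.0])           # Hamiltonian (energy eigenbasis)
A = np.diag([1.0, 3.0, 7.0])           # observable commuting with H
assert np.allclose(A @ H, H @ A)

psi0 = np.array([0.6, 0.8j, 0.0])      # arbitrary normalized initial state

def evolve(psi, t):
    # Schrödinger evolution |ψ(t)⟩ = exp(-iHt)|ψ(0)⟩ (ħ = 1, H diagonal)
    return np.exp(-1j * np.diagonal(H) * t) * psi

def born_probs(psi):
    # Born-rule probabilities p_m = |⟨a_m|ψ⟩|² in the eigenbasis of A
    return np.abs(psi) ** 2

p0 = born_probs(evolve(psi0, 0.0))
for t in (0.7, 3.1, 10.0):
    assert np.allclose(born_probs(evolve(psi0, t)), p0)
print(p0)  # → [0.36 0.64 0.  ] at every t
```

Each coefficient only picks up a phase e^{-iE_m t} under free evolution, so its modulus, and hence p_{m}, cannot change.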

Since on the basis of its unobservability it is equally metaphysical
to exclude `determinism during free evolution' as to assume it,
it seems wise to take an agnostic position with respect to this issue.
Since quantum mechanics does not seem to provide a definite answer,
a solution may have to be sought outside the domain of application of quantum mechanics.
The subject has been
discussed by Bohm, whose theory is widely held to yield a description
in terms of deterministic trajectories underpinning the statistical quantum mechanical description (see,
however, my criticism of this idea).
Probably, in order to find indications in which direction an answer should be sought, we have
to await experiments probing the domain of application of quantum mechanics in order to see where
quantum mechanics fails to describe our experiments.

An answer to the question of whether during measurement the value of an observable
can behave deterministically notwithstanding an indeterministic behaviour of the state vector
(as described by von Neumann projection
or its generalization)
hinges on the interpretations of both `state vector' and `observable'.
Thus, in the `Copenhagen interpretation' the indeterministic behaviour of the state vector is translated
into a similar behaviour of the measured observable (by assuming the observable to be transformed by
von Neumann projection from indeterminate into
`having a well-defined value'). Alternatively,
Einstein tries to save determinism of the value of an observable during measurement by `relaxing the link between state vector and observable',
interpreting the former as a `description of an ensemble'
rather than `referring to an individual particle', and assuming
each individual member of the ensemble
to have a well-defined value of the observable.
The dependence of the answer on interpretation (viz. the
individual particle versus ensemble dichotomy)
is still further complicated by taking into account the distinction between
realist and
empiricist interpretations.

iii) Ontological and epistemological senses of `(in)determinism during measurement'
In view of my agnostic position with respect to
`(in)determinism during free evolution' I shall restrict myself here to
`(in)determinism during measurement'.

In the ontological sense `determinism' is meant to be a (possible)
`property of physical reality', possessed independently of any theory (compare).
In the epistemological sense `determinism' is a (possible) `property of a physical theory'.
`Ontological indeterminism' and `epistemological indeterminism' should be distinguished.
We have:

`ontological indeterminism' implies `epistemological indeterminism'
(since `ontological indeterminism', if existing, should be implemented in our theories),
but:
`epistemological indeterminism' does not imply `ontological indeterminism'
(since `indeterminism of our theories' does not necessarily imply `ontological indeterminism').



Von Neumann projection epitomizes the `epistemological indeterminism'
of quantum mechanical measurement theory. The big question is whether this implies the
ontological indeterminism characterizing the Copenhagen interpretation.
This issue is closely related to the question of `explanation by determinism',
answered in different ways by Einstein
and by the `Copenhagen interpretation'. Whereas Einstein tried to reconcile
`epistemological indeterminism' (associated with the `statistical character of quantum mechanics')
with `ontological determinism'
(corresponding to reducible probability),
the Copenhagen interpretation (minus Bohr,
who in these matters used to avoid ontological assertions as much as possible)
interpreted that character (now referred to as probabilistic)
as reflecting `ontological indeterminism' (corresponding to
irreducible probability).

The question of `ontological indeterminism' is connected with the
superposition principle, to the effect that
superpositions |ψ> = ∑_{m} c_{m}|a_{m}>
were at the basis of Jordan's assertion
that an observable does not have a welldefined value prior to measurement, thus assuming a
measurement to be an `indeterministic process in which the observable obtains its value in an ontological sense'.
On the other hand, Einstein assumed that superpositions just
pointed to `lack of knowledge about the value observable A already had prior to measurement',
thus leaving open the possibility that measurement in a deterministic way reveals that value.
In this view von Neumann projection
could be considered as having an epistemological meaning, describing the `increase of knowledge
realized by the measurement' (compare the ensemble interpretation),
rather than the `ontological meaning
endorsed by the Copenhagen interpretation'.
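Von Neumann projection itself is a simple operation on the state vector, neutral between the two readings just described. The sketch below (illustrative names, not from the original text) projects a superposition onto the eigenvector of the obtained result and renormalizes; on Einstein's reading this merely updates our knowledge, on the Copenhagen reading it describes an ontic change of state:

```python
import numpy as np

# Superposition |ψ⟩ = ∑_m c_m |a_m⟩ in the eigenbasis of observable A
psi = np.array([0.6, 0.8, 0.0], dtype=complex)

def project(psi, m):
    # Von Neumann projection onto the eigenvector of result a_m,
    # followed by renormalization: P_m|ψ⟩ / ||P_m|ψ⟩||
    post = np.zeros_like(psi)
    post[m] = psi[m]
    return post / np.linalg.norm(post)

post = project(psi, 1)    # suppose result a_1 was obtained
print(np.abs(post) ** 2)  # → [0. 1. 0.]: repeating the measurement yields a_1 with certainty
```

The post-measurement probabilities are concentrated on the obtained result, which is what makes an immediately repeated measurement deterministic.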

Remarks on `ontological and epistemological senses of
(in)determinism during measurement':

`Ontological indeterminism' combined with `epistemological indeterminism' is at the basis of the
probabilistic interpretation of the Born rule.
A combination of `ontological determinism' and `epistemological indeterminism' is at the basis of the
statistical interpretation of the Born rule.
This distinction can also be expressed in terms of the
individual-particle versus ensemble dichotomy.

Neglect of the distinction between ontology and epistemology in most of the quantum mechanical literature plays
a confusing role in assessing the meaning of quantum mechanics.
For instance, if the Heisenberg-Kennard-Robertson inequality
is considered as an expression of an
uncertainty principle this fits
into the idea of `epistemological indeterminism';
on the other hand, as an indeterminacy principle the same
inequality seems to be an expression of `ontological indeterminism' (compare).
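Whatever its interpretation, the Heisenberg-Kennard-Robertson inequality Δ_ψ A · Δ_ψ B ≥ ½|<ψ|[A,B]|ψ>| is a theorem of the formalism. The following sketch (illustrative, for two spin-1/2 components) checks it on random states:

```python
import numpy as np

# Spin-1/2 components (Pauli matrices); [σ_x, σ_y] = 2i σ_z
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def stddev(op, psi):
    # Standard deviation Δ_ψ(op) = sqrt(⟨op²⟩ - ⟨op⟩²)
    mean = np.real(psi.conj() @ op @ psi)
    mean_sq = np.real(psi.conj() @ (op @ op) @ psi)
    return np.sqrt(max(mean_sq - mean**2, 0.0))

def robertson_bound(a, b, psi):
    # Right-hand side ½|⟨ψ|[a, b]|ψ⟩| of the Robertson inequality
    comm = a @ b - b @ a
    return 0.5 * abs(psi.conj() @ comm @ psi)

rng = np.random.default_rng(0)
for _ in range(5):
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi = v / np.linalg.norm(v)
    assert stddev(sx, psi) * stddev(sy, psi) >= robertson_bound(sx, sy, psi) - 1e-12
print("Robertson inequality holds for all sampled states")
```

The inequality itself is interpretation-neutral; whether the standard deviations quantify `uncertainty' or `indeterminacy' is the issue discussed in the text.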

We should be aware of Bohr's
cautiousness, inducing him to avoid as much as possible ontological assertions by
restricting himself to `what we can tell about a microscopic object' rather than
referring to `properties possessed by the object'.
For Bohr `measurement indeterminism' was epistemological in the first place.
He repeatedly warned^{49}
against Jordan's assertion that by
the measurement `measurement results' would be created in an ontological sense
as `properties of the microscopic object' (also).

It is less clear to what extent Heisenberg sided either with Jordan or with Bohr.
Heisenberg's empiricism induced him to take
nearly as cautious a position
with respect to ontology ("metaphysics") as Bohr was taking.
However, his acknowledgement of the possibility of trajectories as well as
his disturbance theory of measurement suggest that
Heisenberg, as a physicist, may have been thinking more
ontologically with respect to microscopic reality
than did the "philosopher" Bohr.

In the
`realist interpretation of the quantum mechanical formalism'
the distinction between ontological and epistemological (in)determinism is easily
overlooked since here the theoretical quantity a_{m}
is interpreted as a `property of the microscopic object'. Then `epistemological indeterminism'
may readily be held to `necessarily imply ontological indeterminism' (as was done by the majority of
the supporters of the Copenhagen interpretation).
As a result, the `empirical quantity
interpreted as quantifying indeterminacy' (viz. the standard deviation
used in the Heisenberg-Kennard-Robertson inequality) was thought not to refer to
the pre-measurement state of the microscopic object (since measurement results
a_{m} allegedly could not be attributed to that state);
instead it was taken to refer to the post-measurement state of the microscopic object, being interpreted as a value
of observable A created by the measurement (as implemented by
von Neumann projection).
Such ideas have been developed by Heisenberg and by Dirac, and have become the standard view of the majority
of physicists during the second half of the 20^{th} century. Warnings against such a view
by a "philosopher" like Bohr were not taken too seriously.

Bohr was one of the few to realize
that the distinction between ontological and epistemological (in)determinism is relevant.
Although Bohr blamed `quantum indeterminacy' on the interaction between the microscopic object
and a macroscopic measuring instrument (cf. the quantum postulate), he
did not venture to draw ontological conclusions with respect to microscopic reality
(contrary to Jordan), but remained on
the epistemological level. According to him `standard deviations' expressed the `latitude
with which a quantum mechanical observable is
defined within the experimental arrangement set up to measure that observable'.
According to Bohr it is impossible
to draw a sharp distinction between the microscopic object and the measurement arrangement,
thus preventing ontological statements with respect to the microscopic object's behaviour during a measurement.
Bohr's `latitude', being a matter of `unsharp definition',
represents an `epistemological indeterminateness'.

It should be noted that Bohr and Einstein agreed on the epistemological meaning of
`standard deviations' (be it in completely different ways).
What they disagreed on, was whether an (ontological)
`explanation by determinism' is possible.
With respect to this issue there is a certain asymmetry between the adversaries.
Whereas Einstein accepted the possibility of such explanations by assuming the existence of
elements of physical reality, Bohr (in agreement
with his cautious position with respect to ontology) did not claim the
`non-existence of such elements of physical reality' but just
pointed at the `ambiguity of Einstein's definition
of these quantities' (which ambiguity is attributed by Bohr to their dependence, ignored by Einstein,
on the measurement arrangement that is actually present).
Bohr's reference to the `measurement arrangement'
can be seen as the advent of contextualism within quantum mechanics,
be it that it was meant to be
taken in a strictly epistemological sense (in particular, a latitude,
even if represented by a standard deviation,
is not supposed to correspond to a sharp, although incompletely known, value of an
observable^{68},
but should be considered to be a `restriction on the definability of that observable'). The ontological
implementation of `unsharpness' should be attributed to physicists like Jordan and Heisenberg. It was adopted
by the majority of physicists.
It was seldom realized that the Copenhagen assumption of `ontological indeterminism' is as metaphysical
as Einstein's assumption of `ontological determinism'. In particular, the circumstance that `measurement is
deterministic in case of an eigenstate of the measured observable' might have been seen as an indication at the epistemological
level of `ontological determinism'. This, however, was ignored by the Copenhagen interpretation.

In the `empiricist interpretation' the distinction between ontological and
epistemological issues presents itself in a natural way, reference to the pointer of a measuring instrument evidently
being `ontological with respect to that instrument' but `epistemological with respect to the microscopic object'
(since it is representing `knowledge on that object'). By restricting the attention to the microscopic object
(as is done in a `realist interpretation') we lose the possibility of drawing such a clear distinction.
In particular, in the `empiricist interpretation' there is no reason to interpret the
Heisenberg-Kennard-Robertson inequality
as a consequence of `ontological measurement indeterminism' (even though, as a consequence of
a gross misinterpretation, the inequality has widely been
interpreted as such). Note that Einstein did not think the Heisenberg-Kennard-Robertson inequality
to be inconsistent with `measurement determinism' (although he did not realize that for its implementation
`subquantum elements of physical reality' should be taken into account,
compare).
Seen in this light we
may conclude that the Copenhagen fear of metaphysics, causing a rejection of hidden variables, has been
instrumental in introducing an equally metaphysical `ontological indeterminism'
(compare).

Apart from making it more readily apparent that the mathematical formalism of quantum mechanics does
not necessarily imply `ontological indeterminism', an empiricist interpretation
allows one to implement in a natural way notions of `epistemological indeterminacy'
(like Bohr's latitude) that are not
related to the dynamics of the microscopic object
but may be seen as `properties of the measuring instrument or the measurement procedure
applied to measure a quantum mechanical observable'. Indeed, applying the
billiard ball analogy,
it is not unreasonable to suppose that quantum mechanical measurements
(to be compared with macroscopic observations which, as a consequence of `limited resolution of the observation procedure',
are incapable of distinguishing between all distinct atomic configurations of the ball)
may not distinguish between all distinct `subquantum elements of physical reality' of a microscopic object.
Einstein's (epistemological) idea of `incompleteness of quantum mechanics' may be analogous to the idea of `incompleteness
of a description of a billiard ball by the classical theory of rigid bodies'.

Epistemological indeterminateness is further
discussed within the context of the individual-particle versus ensemble
dichotomy. It should be noted, however, that, as is seen from Bohr's answer to EPR,
the BohrEinstein controversy over the `(in)completeness of quantum mechanics'
was not about this latter dichotomy, but about the question of whether or not Einstein's `element of
physical reality' is `independent of the measurement arrangement'.
Even if Bohr may have meant this in an epistemological
sense, his idea `that there is a dependence on the measurement arrangement'
does not seem to be physically relevant unless it has also an ontological dimension.
The controversy between Einstein and Bohr can best be understood as a controversy over the relative merits of
objectivistic-realist (Einstein) and
contextualistic-realist (Bohr)
interpretations of quantum mechanical observables, restriction to realist interpretations by both contestants
making it more difficult to distinguish between ontological and epistemological issues.

