book review: the quantum enigma

The Quantum Enigma: Finding the Hidden Key
by Wolfgang Smith (1995)

Smith makes one of the more promising attempts to connect classical metaphysics to modern physics.  Interestingly, he thinks this reconciliation is easier in quantum than in Newtonian physics, the latter being too inhospitably Cartesian.  Quantum mechanics is famously weird.  First, there is the issue of noncommuting operators.  Thus, for instance, if my spin 1/2 particle is in a definite z-axis spin state, it cannot be in a definite x-axis spin state.  That a particle is in a superposition of eigenstates with respect to at least some observables is inescapable.  Second, there is the “measurement problem”:  if I do measure the x-axis spin, the state vector will “jump” to one eigenstate or the other, and state vector collapse is nondeterministic, nonlocal, and not obviously consistent with the Schroedinger equation.  Regardless of your philosophical commitments, at least one of the above features is probably horrifying to you.  Smith thinks it’s just our false modernist beliefs that make this seem troubling.  He claims that in quantum superposition we have rediscovered the Aristotelian principle of potency (a suggestion, he notes, also made by Heisenberg), and that in state vector collapse we are witnessing “vertical” (i.e. formal) causation.
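The noncommutation point is easy to exhibit numerically.  A minimal sketch (my own illustration, not from the book), using numpy and the standard Pauli matrices:  a definite z-axis spin state is an equal 50/50 superposition of the two x-axis eigenstates, so the x-axis spin cannot be definite.

```python
import numpy as np

# Pauli matrices for a spin-1/2 particle (the spin operators are hbar/2 times these)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

# Noncommutation: sigma_z and sigma_x do not commute,
# so they share no common eigenbasis
commutator = sz @ sx - sx @ sz
print(np.allclose(commutator, 0))  # False

# The definite z-up state (eigenstate of sigma_z)
z_up = np.array([1, 0], dtype=complex)

# Eigenstates of sigma_x
x_plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
x_minus = np.array([1, -1], dtype=complex) / np.sqrt(2)

# Born-rule probabilities for each x-spin outcome from the z-up state:
# an equal superposition, hence no definite x-axis spin
p_plus = abs(np.vdot(x_plus, z_up)) ** 2
p_minus = abs(np.vdot(x_minus, z_up)) ** 2
print(p_plus, p_minus)  # both 0.5 (up to rounding)
```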

The early chapters hinge on a distinction Smith draws between the “corporeal” world that we directly perceive and the “physical” world that we measure.  This is reminiscent of the Cartesian/Lockean distinction between subjective, qualitative features and objective, geometrical features of the world, except Smith emphatically locates them both outside our minds yet maintains that they point to real, ontologically distinct aspects of objects.  I admit that I found this discussion difficult to follow, but “primary”/“secondary” quality distinctions have always confused me, and Smith would probably say that I shouldn’t understand what early modern philosophers wrote about them.

Things became much clearer for me in the last chapter.  As I understand it, Smith’s case is as follows.  Quantum particles being indeterminate (in some observables) allows them to act as a material principle (in the Aristotelian as well as modernist sense) that receives determination from its formal principle when incorporated–literally–into a corporeal being.  The distinguishing feature of being a corporeal object is not being macroscopic, but having a substantial form.  Substances (in the Aristotelian sense) have forms and are not subject to quantum superposition–they are the Copenhagen interpretation’s classical measuring devices.  A particle being incorporated into a corporeal object (e.g. by affecting the state of a corporeal measuring device) collapses its wavefunction, at least with respect to those observables that affect the substance’s corporeal state.

This is certainly an elegant move–Aristotelians are looking for a way to find formal causes in modern physics, while physicists are looking for a way to understand a type of causality that at least appears very different from the usual deterministic Schroedinger evolution, and lo, the two needs can be made to answer each other nicely.  Whether it works in detail would be a wonderful topic for further thought; this would no doubt have to deal with the usual Scholastic difficulty of identifying what qualify as substances (“corporeal” objects), but that would be an excellent project for Scholastics looking to re-engage with the corporeal world.

Book review on absolute vs. relational theories of space and time

World Enough and Space-Time:  absolute versus relational theories of space and time
by John Earman (1989)


I loved this book.  For one thing, it’s the first philosophy book I’ve read that announces before the introduction that it will be using the Einstein summation convention.  The debate between Newton and Leibniz lives on among philosophers.  Do objects have spatial relations between them directly or by virtue of being embedded in space?  Is spacetime a substance or an abstraction?  Is there an absolute measure of motion, whether of velocity, acceleration, or rotation?  Professor Earman is refreshingly unafraid to extract ontological claims from physical theories and unimpressed by positivist reduction of the sciences.  In this, he is like the “new philosophers” who invented physics.  It was a pleasure, for instance, to read about Newton entertaining the proper Aristotelian question of whether space is to be regarded as a substance or an accident.  (He concludes it is neither, but a sort of necessary emanation of God.)

All of the famous arguments of the relationist vs. substantivalist debate are reviewed.  Leibniz and Huygens pointed to the Galilean invariance of the laws of nature to argue that all motion is relative.  Newton countered with his “bucket” argument:  one certainly can tell from centrifugal forces whether the water in a bucket is spinning, even though no relations between parts are changing.  Mach suggested that the distant fixed stars somehow pick out the inertial frame.  Leibniz thought that the reality of spacetime points would create an intolerable dilemma for God.  If everything in the universe were shifted a meter to the left, nothing would be different, so how could God decide which arrangement to make?  The strength of this argument will depend on how much weight one puts on Leibniz’ principle of sufficient reason.  Kant objected that the relationist thesis cannot account for the difference between a left and a right hand–all the internal relations are the same.

Earman clarifies the historical debate a great deal by translating it into the language of differential geometry.  For each spacetime ontology, certain things (distances, proper times, absolute acceleration, absolute movement) are and are not meaningful and unique, so different structures will be present on the spacetime manifold for use when one wants to write dynamical equations in covariant form.  For example, it’s amusing that the relationist’s position is worse in a genuinely 4D theory like relativity, where there is a spacetime metric (not just a spatial or time metric) that picks out a unique spacetime connection, and thus a unique definition of parallel transport and a unique measure of acceleration.  One actually has more freedom to build relational-friendly theories in a Newtonian-like spacetime.  (Newtonian physics requires the connection to be given.)
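The uniqueness claim here is just the standard Levi-Civita result (my gloss, not Earman’s notation):  a nondegenerate metric determines exactly one torsion-free, metric-compatible connection, whose components, in the Einstein summation convention the book announces, are

```latex
% Levi-Civita connection determined by the metric g
\Gamma^{\lambda}{}_{\mu\nu}
  = \tfrac{1}{2}\, g^{\lambda\sigma}
    \left( \partial_{\mu} g_{\nu\sigma}
         + \partial_{\nu} g_{\mu\sigma}
         - \partial_{\sigma} g_{\mu\nu} \right),
% which in turn fixes an absolute acceleration for a worldline x^{\mu}(\tau):
a^{\lambda}
  = \frac{d^{2} x^{\lambda}}{d\tau^{2}}
  + \Gamma^{\lambda}{}_{\mu\nu}\,
    \frac{dx^{\mu}}{d\tau}\,\frac{dx^{\nu}}{d\tau}.
```

In a Newtonian-like spacetime there is no single nondegenerate spacetime metric (only a degenerate spatial metric and a time function), so no formula like this fixes the connection, which is why it must be given separately.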

In Earman’s telling, the substantivalist’s case is consistently stronger than the relationist’s, until the last chapter, where he somewhat reverses himself.  Earman uses Einstein’s “hole argument” to recast Leibniz’ argument against spacetime points in a form that doesn’t depend on a very questionable model of divine decision making.  Einstein was concerned that his field equations would not make unique predictions for the evolution of the metric.  The issue is not that matter distribution is insufficient to predict the spacetime.  (Spacetime has its own degrees of freedom corresponding to gravitational waves.)  That’s not a problem for determinism.  The problem is that the field equations are generally covariant, and a diffeomorphism of a solution is also a solution.  One could have two solutions related by a diffeomorphism that is the identity before a given time but different afterwards, and the theory cannot distinguish between them and predict one rather than the other.  Earman’s argument is not that this is wrong because philosophy somehow knows determinism to be true, but that philosophy should not be able to say a priori that determinism is false.  Hearing this argument, one is tempted to respond “So what?”  These “two solutions” are just the same physical solution in two different coordinate systems.  But Earman asks us to think in terms of active rather than passive diffeomorphisms:  moving events around to different spacetime points rather than changing coordinates.  My mind rebels against this:  how, absent some absolute background which general relativity doesn’t have, can one distinguish what one has done from a coordinate transformation?  But I think that’s Earman’s point.  If spacetime points had their own distinct existence independent of events on them, then we could label them (“A”, “B”, …) and it would be meaningful to say that our diffeomorphism had actually changed the system.
That most of us just assume that solutions related by diffeomorphism (members of a “Leibniz equivalence class” as Earman calls them) are physically the same shows an underlying relationism, an anti-realism with respect to spacetime points, in our thinking, although we probably are still substantivalists about spacetime in some deeper sense.
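The construction can be stated compactly (my summary of the standard presentation, not a quotation from the book):

```latex
% Einstein field equations for the metric g
G_{\mu\nu}[g] = 8\pi\, T_{\mu\nu}
% General covariance: for any diffeomorphism \phi of the manifold M,
(M,\, g,\, T) \text{ a solution}
  \;\Longrightarrow\;
(M,\, \phi_{*}g,\, \phi_{*}T) \text{ a solution}.
% Hole construction: choose \phi to be the identity for t \le t_0
% but nontrivial inside a region (the ``hole'') after t_0.  Then g and
% \phi_{*}g agree on all data up to t_0 yet differ afterwards, and the
% theory cannot predict one rather than the other -- unless solutions
% in the same Leibniz equivalence class are counted as physically identical.
```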

Pope Francis clarifies: preservation of cultural and even religious identity is not an acceptable reason to limit Islamic colonization

From the Vatican:

Dear friends, I cannot fail to express my concern about manifestations of intolerance, discrimination and xenophobia that have appeared in various parts of Europe.  Often this reaction is motivated by mistrust and fear of the other, the foreigner, those who are different.  I am even more worried about the disturbing fact that our Catholic communities in Europe are not exempt from these defensive and negative reactions, supposedly justified by a vague moral obligation to preserve an established religious and cultural identity.  The Church has spread to all continents thanks to the “migration” of missionaries convinced of the universality of the saving message of Jesus Christ, meant for men and women of every culture.  Throughout the history of the Church, there have been temptations to exclusivity and cultural rigidity, but the Holy Spirit has always helped overcome them by ensuring constant openness to others, viewed as a positive opportunity for growth and enrichment.

We really should appreciate His Holiness for his forthrightness.  He really is demanding cultural genocide, the eradication of Western civilization.

San Diego bishop calls for purge of homophobia

which can, ultimately, only mean a purge of homophobes.

Neuhaus’ Law:  Where orthodoxy is optional, sooner or later orthodoxy will be proscribed.

No matter how tolerant someone says he is, ultimately everyone believes in the law of non-contradiction.


Who owns the dead white men?

The mainstream progressive position, as I infer it, is subtle.  It’s not contradictory, but the balance is delicate.  On the one hand, racists/sexists/etc are losers.  They’re dumb; they’re ugly; they have low-status jobs.  Progressives are not just right; they’re superior people in every way.  Therefore, great men of the past were surely proto-liberals. If they had lived today, Plato and Shakespeare would surely have voted for Hillary Clinton.  On the other hand, Western civilization is so irredeemably racist/sexist/etc that all its ideas, institutions, and cultural products are tainted.  This includes, of course, the works of our geniuses, and the fact that they are thus corrupted shows how wicked our civilization is to its very core.

So, insofar as the dead white men were genuinely accomplished, they belong to progressivism, rays of light that shone in spite of, and in no way because of, their white, Christian, European context.  Insofar as they held unprogressive beliefs or their creations reflected such, they belong to the West and add to the list of things that lower class whites should be ashamed of.  To sum up, we get all of the guilt, but none of the glory.

In the past, liberals wisely focused on the first point.  Denying racists/sexists/etc any proud heritage was the most important thing.  Now I’d say the balance is shifting.  An insecure high-status man wants to show that he’s the same as other high-status men.  A secure and ambitious high-status man wants to show that he’s different.  Progressives are secure in their status as monopolists of morality and feel no further need to associate with dead white men.  So I see more articles on how Feynman was a sexist, Watson and Crick were racists, Puccini was a fascist, etc.  I’m all for this.  If every great scientist, artist, and composer can be cast as a deplorable, it’s going to be very hard to keep up the pretense that we’re all losers.

The Mystery of Consciousness

The Mystery of Consciousness
by John Searle
(1997)

This is a collection of essays by philosopher-of-mind John Searle, each one critiquing the writings of some other thinker who has waded into this difficult field.  Searle is well-known as the creator of the Chinese Room thought experiment, the point of which is to highlight the difference between syntax and semantic content.  A computer can manipulate symbols without giving them meaning, so it’s not an adequate model of the mind.  In fact, when you think of it that way, it’s hard to understand how semantics, and thus real consciousness, could emerge from any kind of machine.  Searle admits we have as yet no model for how this could work even in principle.  And yet, clearly it does, since we are thinking machines of the biological sort, so Searle is confident that once we’re done banishing our Cartesian misunderstandings, what is left is just a biological problem:  we know that the brain causes consciousness, but how does it do it?

A couple of mischievous asides, mischievous because I don’t necessarily disagree with any of the above, but feel the need to poke fun anyway.  First, how is “the brain causes consciousness but we have no idea how such a thing could work even in principle” better than the infamous “interaction problem” of Cartesian dualism?  If only poor Descartes had thought to appeal to the “c” word!  Second, the idea of an emergent property–which is not a big part of Searle’s thesis but always haunts these discussions–seems to have several pieces, not all of which are equally well argued.  Neurons firing is the small-scale reality; a brain thinking is the large-scale reality; these are not two realities but one considered on different scales; both are real, but the small-scale reality is ontologically prior and thus enjoys the status of “cause”.  That last bit seems to have a lot of metaphysical baggage in it.  Sure, it seems more natural to say that neurons firing causes me to think than that me thinking causes neurons to fire, but in these days of metaphysical parsimony, are we really allowed to rank these two realities at all?

In most of the essays, I agreed with Searle, who overall has a pretty balanced perspective on these issues.  Unfortunately for him, for the two essays that I found most interesting, I found myself siding more with his opponent, and those are the ones I’m going to talk about.

In Chapter 4, Searle discusses Roger Penrose’s argument that consciousness cannot be algorithmic and thus cannot be modeled on a computer.  The argument is based on Godel’s theorem, which proves that any consistent formal system rich enough for arithmetic contains true statements that cannot be proven within it from its axioms.  The fact that these statements can be recognized as true by us proves that we are not limited to the set of “knowably-sound” theorem-proving procedures.  Searle grants that a computer program couldn’t model a mathematician’s thoughts using such procedures, but he suggests that the mathematician’s brain could still be modeled, just not as a disembodied infallible theorem-proving machine:  we could drop the requirement of definite soundness and let the model make the same sorts of mistakes mathematicians make.  It seems to me that this solution carries a large price:  mathematical statements that seem certain to us might actually be mistaken.  Grant that, and we are in danger of more general skepticism about our reasoning powers.  Godel and Penrose were motivated by a belief in Platonism; the desired conclusion is that our minds must somehow have access to the Platonic realm of mathematical truths.  But how?  Penrose realizes that if brains are subject to the laws of physics as we know them, then they must ultimately be algorithmic, so he proposes that some new physics, involving deviations from quantum mechanics, must be happening in our brains, with our neurons amplifying it to macroscopic scales.  Here I think Searle makes a very valid criticism:  it’s not clear how one could explain something like consciousness by rearranging the laws of physics in any imaginable way.

Searle’s belief that consciousness is an irreducible phenomenon of material sentient beings will sound similar to property dualism, but John Searle is not a property dualist.  His hostility toward property dualism comes out in his chapter 6 attack essay on David Chalmers’ The Conscious Mind.  The property dualist will find himself confronted with a couple of bizarre consequences of his theory, and Chalmers apparently decides to accept both of them.  First, if qualia can’t be reduced to material properties, then it’s not clear how they can causally affect the material world; consciousness (in the fully mental, rather than physical functional sense) cannot even affect human behavior.  Anybody who accepts the idea of zombies must grant this; a zombie will plead until he’s blue in the face that he has qualia.  Second, if brain states find their “echo” in this whole other realm of properties, it’s natural to wonder if this is a property of matter more generally.  Perhaps all material configurations possess psychic states, although usually more rudimentary than what our brains and those of animals produce.  (I should note that in the correspondence between the two, Chalmers claims that Searle misrepresents his position on panpsychism, which he’s open to but doesn’t endorse.)  Searle is appalled by these conclusions, but doesn’t really give any arguments against them.


becoming a white supremacist Nazi without noticing it

It’s a funny thing.  I became a “white supremacist”, and probably a “Nazi”, without even noticing it.  My own beliefs never changed.  I didn’t go to the label; it came to me.
