Some Comments on
the Religion of Science
Dan Massey
IC 93
The Urantia Book found me about nineteen years ago. Previous to that event, I had experienced an unsatisfying relationship with religions and their exponents. Although I was born and raised in Chattanooga, Tennessee, the belly button of the Bible Belt, where fundamentalist preachers grow as thick as kudzu, I developed an immunity to that sort of thing at an early age. I attended a conventional, big-city Baptist church, absorbed endless Sunday School tales of baby Moses in the bulrushes and baby Jesus in the manger, attended a private secondary school which required three years' study of Bible for graduation, suffered through morning and afternoon chapel services every school day for six years, endured week-long revival services two and three times a year, won awards for excellence in theology and comportment, and got and lost religion a couple of times.
Only the innocent and the ignorant can wholeheartedly embrace evil without being consumed. My early experience of religion convinced me that there was very little to distinguish the faithful from the unchurched. With the piercing and arrogant insight of youth, I perceived both camps to contain their share of thieves, liars, hypocrites, and the like. If the makeup of secular society was balanced towards thieves and whoremongers, while the churches were more populated by liars and hypocrites, that was just the nature of the turf on which they operated.
About the time I was thirteen, I lost whatever faith I had in Bible stories and embraced a new, powerful, and universally true religion. What was this wonderful revelation, which swept aside all doubts and produced a lifelong commitment to Truth? Certainly not Jehovah and Jesus. No, it was the religion of Science. It was the religion of rationally examined, factually based, experimentally tested, observable objective reality. How can I describe the attraction of a philosophical system that provided a way to predict the future, understand the past, and control the present—and that always worked when it claimed to work and could tell you how likely it was to work when things were chancy? The system was complete and explained everything.
Of course, I knew there were a few details that weren't fully modeled by the system, like human behavior and national politics, but it was obvious that this was only a short-term problem because, best of all, Science was a growing, self-correcting religion that could ultimately embrace all of reality to any desired degree of accuracy. In the years that have passed, I have learned that it is often very hard for people whose thinking has developed outside the world of scientific and logical thought to understand why this world view is so compelling to us in the face of competition from more emotionally satisfying systems. No doubt, there are many temperamental factors involved; however, today I want to try to help you understand the appeal of the scientific viewpoint in the abstract.
It is hard to say what science really is. I could quote dictionary definitions to you, but I choose to try to portray the discipline as it feels to me personally. The word "science" may designate a diverse body of knowledge, an ongoing process of knowledge acquisition, or a philosophical view of reality. The central values, though, are that knowledge is good, that objective reality is true, and that the fundamental relationships of things are beautiful. The core of the faith is a profound conviction that the universe is orderly, systematic, and amenable to human understanding. For these and other reasons, science has usually found itself closely allied with another, similar discipline, mathematics.
I do not know whether mathematics is properly a science or not. The issue has long been debated by philosophers of both subjects, but no thinking scientist or mathematician would be satisfied with the simple idea that "mathematics is the science of number." Unfortunately, far fewer people understand the universe viewpoint of the mathematician than understand that of the scientist. During the past hundred years, the frontiers of mathematical thought have been pushed back to disclose the logical foundations of enumeration, while propounding self-consistent universes of symbol having no known correlate in the physical world.
I think that, for me, mathematics is the science of mind on the mechanical level, or the science of symbol. As such, mathematics deals with matters that generally lie beyond the domain of direct observation. Mathematics is concerned more with describing how we know the things we know and why we think we know the things we think we know than with the actual knowing of anything in particular. But as soon as a way of knowing or thinking is described in mathematical formalism, it becomes a thing itself able to be known in its own right.
Let me give an example of this. Let's consider one of the most basic ideas of mathematical logic—implication. This concept belongs to the realm of symbolic logic, which is now considered to underlie all forms of mathematical thought. Suppose there are two statements, "a" and "b". It does not matter what these statements are, because the principle I am going to describe is a universal, mathematical truth. I could describe a real-world situation corresponding to these two statements, called a model, but that would confuse the discussion. For the moment, consider simply that a and b are statements, either of which can be true or false, and which allow no intermediate interpretations. These two statements are simple mathematical abstractions.
Now, I will construct from a and b a statement "a implies b," which means "if a is true then b is true." This new statement about a relationship between statements a and b is itself a mathematical object, which, like a and b, may be true or false. Already I have combined two things symbolically to create a third. I will call this third statement "c". At this point, you can begin to see how the world of mathematical discourse about true statements grows in complexity. There is much more possible, however. For example, I can now use the statements a, b, and c to enunciate a new principle which gives a formal, operational definition to the symbolic relationship of implication. I assert that the following statement about a, b, and c is a universal mathematical truth, "If a is true and c (defined as a implies b) is true, then b is true."
So, while a, b, and c may or may not be true, and while the truth of any of these three statements is most specifically dependent on the statement itself, I have constructed from them a statement that is universally true for any pair of statements a and b. And this principle, which is called "inference," is the basic logical method by which the truth of all other mathematical statements is established. This is a fundamental symbolic description of the mechanical level of mind function and becomes another mathematical reality which can be examined, discussed, and manipulated.
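For those who like to see such things written out, the standard notation of symbolic logic makes both ideas compact. Implication is defined by its truth table (c, that is, a implies b, is false only in the one case where a is true and b is false), and the principle of inference, known to logicians as modus ponens, is a tautology, true for every possible assignment of truth values to a and b:

\[
\begin{array}{cc|c}
a & b & a \to b \\
\hline
T & T & T \\
T & F & F \\
F & T & T \\
F & F & T
\end{array}
\qquad\qquad
\bigl(a \land (a \to b)\bigr) \to b
\]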
At this point you may quite reasonably complain that this is a rather sterile exercise in attaching an abstract symbolism to a self-evident fact of being. You may also complain that there must be more to human thought than is disclosed by this almost tautological statement. I cannot argue with these concerns, but I can point out that this concept, with a few additional trivialities, embraces essentially all the "thinking" that is done by machines (that is, by computers), and that a great many students of the human mind and human psychology are utterly convinced that this is "all there is."
Perhaps more important, I can identify a few of the mathematically true models which have been developed from the systematic and rigorous application of this elementary principle. From these elementary notions we can derive all arithmetic, all algebra, all geometry, all calculus, all group theory, all topology, all function theory, and, in fact, all mathematics. In each domain the subject matter under discussion in the elementary statements or propositions, a and b, changes, but the metamathematical principle of inference does not change in any way at all. Now, you may reasonably ask what this has to do with real, as opposed to symbolic, science.
Well, it has long been understood that the propositions to which mathematical argument is applied are not necessarily limited to the domain of symbol, although it is only in the domain of symbol that we can assure the absolute truth of the conclusions reached. Because there are analogs in the physical world of various absolutely true abstract mathematical models, this same principle underlies all applications of mathematics in commerce, engineering, and daily life. There is nothing which we do in the physical world, by design, that does not ultimately depend on this apparently trivial, self-evident foundation.
I cannot imagine that such a fundamental, pervasive abstract reality does not, in fact, disclose a part of the mind of the Absolute. If it were not so, the orderly patterns of mathematics would be valueless for understanding and interpreting the world of experience. And it is a central tenet of scientific thought that the universe is a rational, systematic place that it is worth trying to understand. What this actually means is that, if you understand the causes of things, and if you understand the relationships of cause and effect, then you may interpret the effects systematically (and usefully) in terms of their causes. In other words, the metamathematical relationship of inference mirrors the metaphysical relationship of cause and effect.
And here we approach the surface of the looking-glass that divides the abstract, symbolic world of the mathematician from the real, material world of the scientist. Mathematics has been called "The Queen of the Sciences." This phrase indicates the primary role of mathematics in scientific thought and, by analogy with a chess queen, the extensive power of symbolic thought, unconstrained by the inertia of facts. If mathematics is the Queen of the sciences, then Physics is surely the King, for physics is likewise concerned only with the most fundamental things, but is constrained by the limitations of observational discovery, the inertia that comes from being grounded in objective facts.
It is no coincidence that much mathematics has been developed by physicists in an effort to explain their observations and that many other physicists have found that the reasoning mechanisms needed to explain their observations were anticipated much earlier in the development of pure mathematics. To explain this relationship, I will need to draw on the history of the two disciplines. Unfortunately, the history of mathematics is not as well known as the history of physics. Most people will recognize the names of Archimedes, Galileo, Isaac Newton, Benjamin Franklin, Michael Faraday, Niels Bohr, and Albert Einstein.
But how many know of al-Khwarizmi, Carl Friedrich Gauss, Gottfried Wilhelm Leibniz, Leonhard Euler, Évariste Galois, and Henri Poincaré? I daresay that, if I were compelled to name mathematicians with names most people would recognize, I would have to say Pythagoras, Euclid, and, who else? Yet, within the list of names I have just mentioned are several examples of crossover between physics and mathematics. Although the most obvious example is Isaac Newton, whose need for a mathematics to describe physical motion led him to invent the differential calculus, it is less well known that Poincaré was the first to enunciate the philosophical principle of the invariance of physical laws for which Einstein found the mathematical expressions we know as the special and the general theories of relativity.
One of the most remarkable such stories, however, is that of Galois who, from the age of 16 until his death in a duel at age 20 (1832), wrote manuscripts which laid the foundations of algebraic group theory, a subject which became more widely appreciated in the early years of this century as it provided ideas essential to the development of the quantum theory. Let us spend some time, now, discussing the work of Isaac Newton. I think that his work is especially important because it affords us a way to grasp and understand the definitive rigor of a very fundamental science, like physics, and how closely it relates to the absolute verities of pure mathematics. I do not mean to demean other sciences by focusing to such a degree on physics and mathematics.
Physics is, to a great extent, concerned with physical things and their most basic physical properties. To the Newtonian physicist, a ball is a spherical lump of matter characterized by a certain size, a certain mass, and a certain position in space, observed at a certain time. It may also be in motion, which describes how its position in space and rotation about its center vary over time, and it may chance to collide with another ball, in which case the two will exchange energy and momentum. By comparison, a chemist cares about what the ball is made of, how the parts of it (molecules) are held together, and, should the ball strike another ball of different materials, how the molecules that come in contact will interact. The chemist may also grind the ball up into little pieces, dissolve it in a solvent, and subject the physicist's abstract sphere to additional indignities. Clearly, chemistry is more involved with real-world properties of and operations on things.
To carry this to an extreme, a biologist would not be interested in the ball at all unless it were consumed, excreted, or used as shelter by some living thing. Pure, Newtonian physics is called classical mechanics. It is concerned with the properties and relationships of abstractions of reality that are almost as pure as those of mathematics, except that the things about which classical mechanics makes statements are modeled on and stand for real, physical things, rather than ideas and the stuff of thought. The core ideas of Newton, which laid the foundations of classical mechanics, are captured in his three laws of motion and his law of gravitation.
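In modern notation these laws fit on a single line: a force changes a body's momentum (and for constant mass this is the familiar F = ma), forces occur in equal and opposite pairs, and any two masses attract with a force proportional to the product of the masses and inversely proportional to the square of the distance between them. The first law, that a body free of forces moves uniformly in a straight line, is simply the special case of zero force.

\[
\mathbf{F} = \frac{d\mathbf{p}}{dt} = m\mathbf{a}, \qquad
\mathbf{F}_{12} = -\mathbf{F}_{21}, \qquad
F = \frac{G\,m_1 m_2}{r^2}
\]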
Newton's formulation and popularization of this way of viewing the abstract mechanical universe was a synthesis of data which had been collected by himself and other observers, including, in particular, Galileo Galilei, Nicolaus Copernicus, and Johannes Kepler. There was no laboratory on earth in which Newton's laws could be truly tested; however, the fact that they correctly described the observed motion of the moon, the planets, the moons of the planets, and comets became sufficient evidence for their acceptance. In addition, they pointed towards additional phenomena (such as friction and viscosity) which would have to be examined to interpret the results of increasingly accurate laboratory experiments.
It is important to understand that Newton's laws did not really add new information to the body of physical knowledge. Rather, they presented a new, unified way of interpreting, understanding, and extending existing knowledge. They also provided large numbers of predictions which could be tested experimentally and, in time, applied in engineering. Exactly how good are Newton's laws? Well, if you don't move anything very fast (compared to the speed of light) and if you don't do your experiment close to anything very massive (compared to the sun) and if you only use objects of a fairly good size (a milligram or bigger), and if you stick to things made of ordinary matter, these laws are about as perfect as you will ever be able to measure.
Of course, in any real experiment there will be lots of secondary effects, like friction, viscosity, and sticky surfaces that will cause deviations from the laws; but if you account for these effects accurately, or arrange experiments in which they are minimized, Newton's laws will work for you with mathematical precision, if not absolute perfection. With Newton's synthesis of classical mechanics we come to the first time in human scientific thought when the idea of a totally deterministic universe can be entertained. Newton's laws seem to describe perfectly how concrete things bouncing around in a vacuum behave, if we can just manage to do the mathematics and solve the equations. The whole physical world is made of concrete things bouncing around (even the molecules of the air) so the behavior of the whole world is determined by Newton's laws. And Newton's laws are totally deterministic.
If you could enumerate all the things in the universe and measure their position and state of motion at a single time, you could predict everything that will ever happen throughout eternity. And even though man cannot actually do this, physical reality is such an enumeration, and therefore, the future of everything is fixed forever. Fortunately, the universe has proven to be more complex than a table full of billiard balls. Newton died in 1727, just as observational scientists were beginning to collect information about a whole range of phenomena which had nothing to do with classical mechanics or gravitation.
These electrical effects involved a wide range of disparate phenomena which we now know to be due to electrical charges and the passage of electrical currents. Static electricity, magnetism, lightning, capacitance, batteries, induction, and many other effects were observed and described. Scientists like Galvani and Franklin observed the natural world. Volta and Ampere built devices and conducted crude experiments. Coulomb quantified the forces between charges, and Faraday began to draw these observations together into a theory of electromagnetic induction. And Heinrich Hertz made the astounding discovery that palpable amounts of electrical energy could traverse free space and be recovered without any physical or inductive connection to an active circuit. None of these phenomena had been anticipated in classical mechanics, yet they seemed to be nearly universal phenomena, which interacted with the world of mechanics, and which demanded a full explication.
Although the principles of electromagnetic induction were fairly well understood in Faraday's time, the definitive synthesis of this new knowledge was achieved by James Clerk Maxwell, who propounded the four equations which bear his name. Maxwell's equations brought stability and unity to a field of scientific knowledge. No less accurate a model of reality than Newton's laws, these equations provided a comprehensive description of all the possible interactions, in space and time, of electrical and magnetic fields. When combined with a few other relationships, such as Coulomb's law, they provided a complete theory of the interaction of such fields with physical matter.
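In their modern differential form (SI units), the four equations read:

\[
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
\]

The first relates the electric field to electric charge; the second asserts that there is no corresponding magnetic charge, a point I will return to at the end; the last two describe how changing magnetic and electric fields generate one another, which is what makes self-propagating electromagnetic waves, Hertz's waves, possible.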
This theory, which is called classical electrodynamics, like the classical mechanics which preceded it, seemed to be a perfectly deterministic explanation of everything that had been observed and could be measured about these phenomena. And if details were missing in some areas, it was easy to believe that all other physical realities would be found to yield to this elegant analysis. The world did not have long to wait for an answer. A great deal of understanding of the behavior of heat had been developed during the 19th century, while electrodynamics was evolving towards Maxwell's great unification. This knowledge was summarized in two principles, called the Laws of Thermodynamics.
These laws were less solidly grounded in fundamental principles than Maxwell's equations, although they were known to be correct from experiment and had substantial theoretical underpinning from the emerging mathematical theories of statistical mechanics. The conventional interpretation of Maxwell's unified theory posited that both light and heat must be forms of electromagnetic radiation, akin to the waves which Hertz had detected in his crude apparatus. The principles governing the emission of such radiation by moving objects were assumed to be the same as the principles which had been established in the laboratory for Hertzian waves. A simple thermodynamic analysis of the distribution of mechanical heat vibrations throughout a solid body led to a prediction of the way a warm body would emit heat as radiation.
Fortunately, the predictions of a combined classical electrodynamics and thermodynamics proved completely wrong. In fact, it was hardly necessary to go to a laboratory because the conjoined theories predicted that a body at any temperature above absolute zero would emit a constantly increasing amount of radiation at shorter wavelengths. The true emission spectrum of a hot object could be readily observed, in some cases by the naked eye, and it is quite clear that the spectrum has a peak at some intermediate wavelength that is a function of temperature.
For example, if an object is heated until it glows a dull red, further heating will raise it to yellow heat. At a high enough temperature, the color of the emitted radiation trends towards the white and then the blue. Theoretical predictions, on the other hand, suggested that all warm bodies should emit large amounts of energy in the blue and ultraviolet (and beyond). A tentative answer to this problem was proposed by Max Planck.
He suggested that there were certain constraints on the ability of the vibrating molecules of a warm object to emit energy as electromagnetic waves. Specifically, he proposed that a particular property of the vibrating molecules, called action, could only change in definite, discrete units. That is, a molecule could emit one, two, three or more units of action, but could not emit, for example, one-and-a-half units. Planck showed that there was a certain value of this smallest unit or quantum of action which, when applied in existing theory, would predict an emission spectrum corresponding correctly to the experimental data.
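In modern terms, Planck's restriction says that an oscillator of frequency ν can hold or surrender energy only in whole multiples of a quantum hν, and this single assumption converts the divergent classical prediction into a spectrum that rises, peaks, and falls off again:

\[
E_n = n h \nu, \qquad
u(\nu, T) = \frac{8\pi h \nu^3}{c^3}\,\frac{1}{e^{h\nu/kT} - 1}
\]

At low frequencies, where hν is small compared to kT, this reduces to the classical result, u = 8πν²kT/c³; at high frequencies the exponential factor suppresses the catastrophic growth. The peak of the spectrum shifts toward shorter wavelengths as the temperature rises (Wien's displacement law, λ_max T ≈ 2.9 × 10⁻³ m·K), which is exactly the progression from red heat to yellow to white to blue described above.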
This physical value of the quantum of action is known as Planck's constant, and it turns out to play a very important role in all of modern physics. At the turn of the century, it seemed that the pace of theoretical invention and practical discovery accelerated greatly. Wilhelm Roentgen, experimenting with cathode rays, discovered X-rays. Henri Becquerel discovered radioactivity. Marie and Pierre Curie achieved the chemical isolation of radium. Albert Einstein, in one rapid series of papers, explained Brownian motion thermodynamically, explained photoelectricity in terms of Planck's quanta, showed that electromagnetic waves have mass equal to their energy divided by the square of the speed of light, and showed how to reformulate classical mechanics and classical electrodynamics to be correct and consistent for experiments conducted while in relative motion.
These papers linked the hypothesis of the molecular structure of matter to an observable physical phenomenon, provided a boost to Planck's quantum hypothesis, and laid the foundations of the ideas we have come to know as the special theory of relativity. More and more attention focused on the idea that matter was made up of chemical molecules, which were made up of discrete atoms, which were made up of electrically charged particles.
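Two of these results can themselves be written in a single line: the equivalence of mass and energy, and the photoelectric relation, in which the greatest kinetic energy an ejected electron can carry is fixed by the frequency of the light and by a work function W characteristic of the particular metal surface:

\[
m = \frac{E}{c^2} \quad (\text{equivalently } E = mc^2), \qquad
E_{\text{max}} = h\nu - W
\]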
The experimental observation that ionized gases emit light of specific, well-defined colors began to be explained when Niels Bohr showed that the emission spectrum of ionized matter could be interpreted in terms of Planck's quantum hypothesis. A profusion of results emerged relating to the discrete parts of matter. The charge and mass of the electron were established, along with those of the proton. The neutron and positron were discovered. The behavior of particles and waves in varying situations was explored, and theories were constructed to unify different discoveries and viewpoints. The foundations of observational physics were reformulated to take into account that, in observing very small things, the observer necessarily becomes a part of the system being observed and cannot obtain perfect classical knowledge of what is occurring.
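Bohr's condition can be stated quite simply in modern form: an atom radiates only when it jumps between two of its discrete energy levels, emitting a quantum whose frequency is fixed by the energy difference, and for hydrogen the levels themselves obey a strikingly simple formula:

\[
h\nu = E_m - E_n, \qquad E_n = -\frac{13.6\ \text{eV}}{n^2}
\]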
One composite theory emerged from this vast intellectual ferment. Known as the theory of relativistic quantum electrodynamics, or, more simply, QED, it achieves a full unification of relativity theory, quantum theory, and classical electrodynamics into a single, consistent formulation. Although there are certain specialized cases in which quantum theory contradicts the classical notion of causation which underlies the other sources of QED, these problems have not impaired the effectiveness of this theory as a mathematical description of reality. It has been said that there is no theory of science which has been more fully tested than QED or found to be more accurate.
To date, every increasingly precise test of this amazingly replete theory continues to affirm its correctness. It is possible that, as far as it goes, QED is an accurate description of the universe. If so, it is the first such description mankind has ever achieved. QED is by no means a complete theory of the physical world. It does not deal with the forces that bind atomic nuclei together or the forces that bind the individual parts of protons and neutrons. It also does not provide any explanation for gravity. It is solely concerned with the relativistic quantum physics of electromagnetic radiation and matter.
The amazing success of QED may, therefore, reflect the limited scope of its application; however, it should be remembered that this theory synthesized just about everything ever discovered in physics up to 1935. I realize that this has been a rather long discourse on some rather abstruse topics, which I have so far failed to explain in any great detail. There are a few simple ideas I would like you to take away from this discussion. First, I want you to understand that, although many areas of factual human knowledge, ranging from astronomy and archaeology to zoology, are speculative, of doubtful accuracy, or of little predictive value, there is in science, down deep, a foundation of factual knowledge and tested theory far too solidly established to yield readily to revisionist ideas. Anyone who would assay to revise the fundamental thinking of the world of science in an area like QED must contend with an almost overwhelming body of evidence of the correctness of the theory.
With this as background, let us examine how a change in the prevailing view might occur for such a solidly established theory. To initiate a revision process, there must be some anomaly detected in the predictions of the theory. We have seen how science works by assembling experimental evidence over a long period of time, developing partial and fragmentary theories which explain some of the data, until a grand unification emerges which ties up a large number of the loose ends in one, mathematically neat, even beautiful, theoretical package.
At any such time there are usually some loose ends that are too far removed from the core vision of the new theory to inhibit its acceptance as a working model of the universe. But science is not dogmatic (or tries not to be). Although a unified theory may be accepted as basically correct, the labor of testing its predictions continues (albeit on a reduced scale). In some cases (as with the emergence of relativity) an anomaly is discovered which leads to the emergence of a revised theory. In other cases (as with the emergence of quantum theory) the union of two theories that are well-established in their own domains of discourse leads to new predictions which disclose an area of fundamental weakness along the interface between the theories.
In either case, the discovery of the unexpected draws the attention of a generation of graduate students who zealously map the anomaly and propound corrections to existing theory. It is important to realize that, to be successful, a new theory must subsume and absorb all the evidence which supported the old theory. This has generally meant that theories grow by refinement and extension into new areas of application, after an initial beachhead of rationality has been established. For example, the Physics of Aristotle, which enjoyed the authority of tradition, was not grounded in the scientific method of open-minded collection of evidence and rational testing of the predictions of theories. Galileo discovered the problem and was unwise enough to publicize his views in opposition to the religious dogma of the day. The great man's indelicate manner in seeking publicity for his views may have had as much to do with his persecution by the church as did the allegedly heretical nature of his conclusions.
Physicists and astronomers, however, have found it hard to forgive the church for its treatment of Galileo. It remained to Newton to propound the first really rational and thoughtful theory of mechanics, which subsumed the observations of his predecessors and forever displaced the crude descriptive work of Aristotle, Eudoxus, and Ptolemy.
Since Newton established the first beachhead of rationality in the world of physics, each successive theory has been an extension of ideas from the old into new domains. Electromagnetics, thermodynamics, relativity, and quantum mechanics involved such extensions, respectively, into the worlds of electricity, heat, fast motion, and the very small. Even today, physicists work to extend their understanding into the worlds within the nucleus and within the proton. The fact that modern-day physics spends little time seeking the world within the electron merely indicates that, to date, no one has brought forward compelling evidence that there is such a world to be studied.
And this result surely stems from the limited energy and the limited precision of current instruments. Interesting work is also in progress in studying the very cold and the very few. The study of individual atoms, immobilized at very low temperatures, promises to provide new insights into quantum phenomena which we have hitherto dealt with on a statistical or aggregate basis. I would also like you to understand that there is a certain quality which the scientist seeks in his thinking, which I have characterized as mathematical beauty.
A good, sound theory must not be unnecessarily elaborate, while adequately explaining all the relevant experimental results. It must possess a certain clarity of purpose and symmetry which discloses a systematic underlying harmony in the universe that is being described. For example, physicists have long been troubled by the fact that Maxwell's equations are not perfectly symmetric because they describe an electrical charge, but no magnetic charge. Since no other theory has emerged to explain this asymmetry, physicists have long conducted experiments to search for an elusive "magnetic monopole," the discovery of which would allow Maxwell's equations to be rendered in a perfectly symmetric form. Of course, this sought-for symmetry is a mathematical and not a physical property.
There is no reason for believing that the physical world should display such symmetry other than an almost religious belief that symbolic symmetry is a good thing and the universe is well-made.
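To make the point concrete: if a magnetic charge density ρ_m and a magnetic current J_m existed (they are, to date, entirely hypothetical quantities), the two lopsided equations would acquire source terms of their own and, in one common convention, would read

\[
\nabla \cdot \mathbf{B} = \mu_0 \rho_m, \qquad
\nabla \times \mathbf{E} = -\mu_0 \mathbf{J}_m - \frac{\partial \mathbf{B}}{\partial t}
\]

so that the full set of four equations would treat electricity and magnetism on an exactly equal footing.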