 
### THE FALLACY OF SCIENTIFIC TRUTH

### Why Science Succeeds Despite Ultimate Ignorance

and

### How to Solve Theoretical & Practical Scientific Problems

### Notwithstanding Ultimate Science Ignorance

Donald R. Miklich, Ph.D.

Published at Smashwords by Donald R. Miklich

Copyright 2014 Donald R. Miklich

ISBN 9781311009135

This free ebook may be copied, distributed, reposted, reprinted and shared, provided it appears in its entirety without alteration and the reader is not charged to access it.

The author's e-mail address is drmiklich@gmail.com, where interested readers may send comments, criticisms and questions. If time and circumstances permit, I will try to respond when a response is appropriate.

### Table of Contents

The Purpose of This Essay

The Dogma of Scientific Truth

A Brief History of Science

Ancient Greek Science

Hypothetico-Deductive Reasoning

Affirmation of the Consequent

Data Inaccuracy

Ancient Greek Celestial Mechanics

The Birth of Modern Science

Modern Classical Physics

Truth Dogma's Damage: A Contemporary Example

What Scientific Knowledge Really Is

The Superiority of Scientific Knowledge

An Erroneous Criterion of Truth

Reason vs. Experience

Quascience

Big Bang Cosmogony

Redshift

Indirect Measurement

Quascience Myths

Coping with Quascience Mythology

Discerning Believable Scientific Knowledge

Global Warming

Conclusion

Appendix

### The Purpose of This Essay

I am a scientist. And I am proud to be, maybe a bit overly so. Proud because, although by only the tiniest amount, as a scientist I have contributed to scientific knowledge, and scientific knowledge overwhelmingly is humanity's greatest achievement. Furthermore, if humanity is to survive, either on this planet or perhaps eventually elsewhere in the universe, it will do so only because of scientific knowledge.

Nevertheless, scientific knowledge is not truth.

It's well to begin the elucidation and defense of this assertion by defining what truth means in the context of this essay. To assert any scientific knowledge to be true is to claim that beyond doubt we know the phenomena at issue occur for exactly and only the reason(s) specified by the scientific knowledge, and we similarly know they do so occur and will so occur at any time and at any place in the universe where the causal reason(s) obtain.

To say scientific knowledge does not reach this truth standard is not new. Science is based on empirical evidence. Since we can never experience everything, new and different evidence is always possible. If and when it arises, current scientific conclusions must be refined or replaced. Therefore, scientific knowledge necessarily is always tentative. I am unaware that anyone has ever seriously contested this view. Nevertheless, over approximately the past century it has been increasingly sidetracked and ignored. Today there are major areas of contemporary science where scientific knowledge is treated as true, but this presumption is neither acknowledged nor defended. This practice is deceptive. However, we can be fairly sure the deception isn't deliberate, for it appears the perpetrators themselves are first among the deceived.

Usually deceit is practiced because of failure. But this instance is different. Scientific knowledge is being taken as true not because of failure, but because the technologies based on it are so spectacularly successful the science seems necessarily to be true. If it were not, how could all the modern world's magnificent devices work? For many years I so believed. I felt science's conspicuous effectiveness proved its conclusions to be essentially true. Nor was I alone in this. A fairly common opinion starts by admitting scientific knowledge can never be definitive, but continues by dismissing this limitation as inconsequential. Every additional bit of scientific knowledge, these people claim, has moved us closer to truth. And now we are so close we may fairly consider basic sciences such as physics, at least, to be effectively at truth's threshold.

This contention rests on an analogy with the mathematical concept of an asymptote, the value which some mathematical functions approach closer and closer but never reach. For example, the fraction 1/N approaches closer and closer to zero as N increases. Since there is no limit to how large N can be, 1/N can never be exactly zero. But clearly, once N is fairly large, for all intents and purposes it is zero, so the fact that it can never be exactly zero is immaterial.
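The asymptote behavior is easy to see numerically. The following few lines of Python are my own illustration, not part of the original argument; they simply evaluate 1/N for increasing N:

```python
# The asymptote: 1/N gets arbitrarily close to zero as N grows,
# yet for every finite N it remains strictly greater than zero.
for n in (10, 1_000, 1_000_000):
    print(f"1/{n} = {1 / n}")

# No finite N ever makes the fraction exactly zero.
assert all(1 / n > 0 for n in (10, 1_000, 1_000_000))
```

By one millionth, the fraction is already indistinguishable from zero for any practical purpose, which is exactly the "close enough" intuition the analogy trades on.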

Many scientists, especially those in heavily mathematical disciplines such as physics, think scientific knowledge is analogous. According to this opinion, every scientific advance moves us closer to truth, and though we can never reach it, close enough is good enough, essentially equivalent to truth. For many years I also so believed. The effectiveness of scientific technologies made the proposition seem so self-evident I never bothered to examine it. However, in the recreational reading of retirement I came across the work of persons who did bother with such examination, and I discovered, as so frequently happens with analogy-based thinking, that I was wrong. As is shown in what follows, the unequivocal empirical fact is this: _Wrong science can and often has led to effective technologies_. Therefore, as indubitable as it is paradoxical, the only scientific truth is that scientific knowledge, no matter how many marvelous things may be done with it, can never be taken as true.

The principal goal of this essay is to explain this profound paradox, and to do so in ordinary language without considering any of the abstract and abstruse arguments a philosopher of science might use. Such complications are unnecessary, for once one knows where to look, it is obvious why science works brilliantly and reliably notwithstanding its ultimate uncertainty. Any competent high school student can understand what is said here. There is nothing complicated or difficult about it. Indeed, one and the same simple incontestable fact both shows why scientific knowledge is effective and also proves it can never be trusted to be true.

Theoreticians, however, do not want to acknowledge this because it calls into doubt one of their favorite methods. So instead they implicitly treat scientific laws as true. The second purpose of this essay is to call attention to this practice and to its insidious, potentially science-destroying effect. The dubious method involves the extrapolation of scientific knowledge beyond the possibility of empirical confirmation. This is illogical. No conclusion which is beyond empirical substantiation is, or can ever be, scientific, because the essence of science, its _sine qua non_, is empirical evidence. What makes this method even more suspicious, if not spurious, is that these extrapolations usually involve unverifiable assumptions, the uncertain nature of which also is seldom divulged in popular science writings. By inescapable logical necessity, therefore, these assumption-based extrapolations are and can only be "maybe so" speculations. There is nothing wrong with speculations. They sometimes turn out to be spectacularly fruitful, but this can only happen when the speculation has empirically testable consequences. When it does not, speculation becomes metaphysical belief, not science.

As is illustrated at several places later in this essay, when addressing the nonscientist public, theoretical scientists implicitly treat scientific knowledge as true. But they seldom acknowledge this, nor the inherent ambiguity and uncertainty involved. Thus nonscientists are led to look on speculations as scientific knowledge. This practice borders on deception. One can only guess why theoreticians do it. Perhaps they are sweeping the uncertainty of their speculations under the rug in order to garner popular support. It also has been suggested to me that theoreticians simply take it for granted that everyone, nonscientist as well as scientist, knows all scientific conclusions are tentative, and all scientific theory necessarily only learned guesses. Thus, theoreticians may feel there is no need to emphasize the obvious.

But it might be that theoreticians are deceiving themselves. Perhaps they are so enamored of their ideas they are closing their minds to those ideas' uncertainty. To the extent that they may be self-deceiving, to the extent that theoreticians' popular ballyhooing of uncertain theory reflects conviction that it is true, they are undermining science. Science inescapably is a work in progress, and it necessarily always will be. Scientists must never close their minds to the possibility that even our most securely established scientific laws may be wrong, for those laws might be improved if we remain continually aware of this possibility.

But the more immediate danger is the not unlikely possibility that theoreticians' practice of passing off uncertain theories about matters completely beyond human experience as scientifically sound may be leading some nonscientists to suppose all scientific knowledge is mere speculation. Thus the selling of grandiose but dubious theories may be partly responsible for the growing refusal of many nonscientists to accept and act upon genuine, reliable scientific knowledge. Such refusal has the potential of denying society the benefits which genuine science offers, or of leading society to ignore the harmful effects science identifies. Either is a terrible price to pay for metaphysics.

This essay seeks to explain the real nature of science. Science is limited. Within the domain of phenomena capable of producing empirical evidence, it is powerful and reliable. Outside this domain, it is neither. Though not in this order, this essay will: Illustrate how theoreticians treat scientific knowledge as truth; Explain why scientific knowledge is not truth; Explain what scientific knowledge really is; Show why it is enormously successful, reliable and useful even though it is not truth; Point out how the presumption of scientific truth is itself destructive of scientific knowledge; and finally, Note the implications of all of this for theoretical and practical science issues.

### The Dogma of Scientific Truth

The dogma of scientific truth, whether openly acknowledged, as occasionally is done, or disingenuously implied, as is the more usual practice, is more corrosive of, and probably a greater impediment to, the advancement of scientific knowledge than anything else. It is harmful to the scientific enterprise to consider or treat any scientific conclusion to be truth. This statement seems paradoxical. Therefore, you may be inclined to doubt it. So before proceeding, let's consider it.

In contradiction of my assertion, many people say scientific knowledge is damaged most by religion. Strong support for this contention comes from the United States where, for over a century, Creationist Christians have waged an unremitting war on the scientific fact of evolution. Their attacks have had appreciable success. Public opinion polls say a substantial minority of US citizens, not a great deal less than a majority, reject the idea of evolution because they consider it inconsistent with their religious beliefs. US Creationists have also managed to bowdlerize many texts used in pre-college schools, eliminating the scientific fact of evolution from some and minimizing it in others. Thus, in the US, at least, religion has indeed restricted and impeded the acceptance of scientific knowledge.

This damage is less than some alarmists allege. Creationists have not prevented the fact of evolution from being taught in institutions of higher learning which do not have the teaching of creation affirming religious doctrine as their basic purpose. Nor have these religionists inflicted damage on evolution science itself. Scientists in the US are as committed to the fact of evolution, and are as vigorous in pursuing issues concerning it as any in the world. Indeed, US biologists are conspicuously competent in the technology of genetic modification, a technology based upon a fundamental premise of evolution, _viz_., the fact that all known life forms have the same basic biochemistry, with no essential differences among them.

{Genetically modified organisms provide an amusing illustration of how duplicitous we humans are when our self interest is involved. Many US farmers who on Sundays devoutly proclaim Creationism for religious reasons, during the rest of the week use genetically modified seeds for economic ones.}

In general, while religionists' vigorous and persistent war against evolution has not destroyed evolutionary science, it has done real and substantial damage to it. It has restricted the number of persons who accept the fact of evolution, and it has impaired the biological educations of many students.

Creationists came close to inflicting even greater damage with their principle of Irreducible Complexity. This idea says some organs, _e.g._, the eye, could not have evolved by the natural selection of those few favorable ones of a long succession of small, purposeless, random genetic mutations. Rather, Irreducible Complexity says organs as complex as the eye have an irreducible set of interdependent parts with an all-or-nothing character. It claims unless all of these disparate parts simultaneously occur as a complete, integrated unit, the organ can not function. Therefore, according to this principle, organs as complex as the eye could not evolve by the natural selection of random genetic mutations. And although it does not necessarily follow (It is exceedingly unlikely but not logically impossible for an irreducibly complex eye to spontaneously evolve intact.), these Creationists further conclude organs like the eye must have been purposefully, intelligently designed, presumably by some deity.

With this idea Creationists almost succeeded in legally passing their dogma off as science, enabling, if not forcing its inclusion in school curricula. Though this threat was averted by judicial decision, Creationists still use Irreducible Complexity and similar ideas with some success to attack evolution, to defend their beliefs, and to sway the opinions of open-minded persons who have little scientific knowledge.

Intuitively, Irreducible Complexity seems obviously valid. It is not. At least not for eyes. Computer simulations have shown it to be possible for eyes to have evolved in precisely the way Irreducible Complexity says they could not. However, the validity or invalidity of the principle is irrelevant to the fact of evolution. Whether or not Irreducible Complexity is valid, it addresses not evolution _per se_, but rather the means by which evolution might occur. Even if Irreducible Complexity is true (perhaps for some organ other than the eye), this would only show evolution could only occur by a process other than the one biological scientists favor. It would no more prove evolution does not happen than the undeniable (and highly regrettable) fact that I have never kissed any Miss America proves no Miss America has ever been kissed.

In truth, there is a truth here. The truth is this: Evolution is scientific knowledge, a fact of science. But how evolution happens is not. The Creationists who use Irreducible Complexity to attack the scientific fact of evolution are shooting at the wrong target. They are attacking an idea of how evolution occurs, not the fact that it does. Why are they so misguided? In order to answer this, it's necessary first to describe their erroneous target.

With Irreducible Complexity (and other similar arguments) Creationists are attacking an idea called the Modern Synthesis. It claims evolution occurs in precisely the way Irreducible Complexity says is impossible. Its name indicates how the Synthesis was derived: by combining the natural selection idea with some scientific facts of genetics, then mixing them with a good deal of statistical theory. To prevent the kind of confusion which misleads Creationists to suppose they are attacking the fact of evolution when they are only attacking the Synthesis, I need to characterize it more candidly than usual, more bluntly than theoreticians do. The Modern Synthesis is only biological theorists' best guess about how evolution happens. It very well may be wrong. However, even if it is wrong, this says nothing about the fact of evolution _per se_.

The Synthesis is seldom so presented to the nonscientist public. Rather, usually by implication, but sometimes by explicit claim, it is presented as the whole and only scientific truth concerning evolution. Thus, when Creationists suppose they attack the fact of evolution by attacking the Synthesis, they are acting on misinformation scientists themselves either slovenly (and perhaps disingenuously) allow, or deliberately advocate. It is evolution theorists' own exaggerated, fallacious claim of truth which has armed their religious antagonists, lending them their best and most successful weapon. Thus, theoreticians themselves are largely responsible for the harm Creationists have done to the scientific fact of evolution. And this, I submit, illustrates one way in which the fallacious dogma of scientific truth harms scientific knowledge. When scientists claim speculations attempting to explain some particular scientific knowledge are truth, they give those who doubt the speculation (either reasonably or unreasonably) good reason to doubt the scientific knowledge itself.

You may object to my conclusion. It was not, you may say, the doctrine of scientific truth _per se_ which enabled Creationists' most successful attack against evolution. Rather, it was evolution theorists' dishonest claim that the Synthesis is scientific truth. True. But since scientific knowledge is not truth, then any claim, either direct or by implication, that anything is scientific truth is necessarily false. Therefore, if we scientists were more honest with ourselves and, especially, with the nonscientist public, if we forthrightly admitted there is no scientific truth, if hypotheses such as the Modern Synthesis were acknowledged as only the learned suppositions they can only be, then instances like Creationists' misguided Irreducible Complexity attack would be conspicuously irrelevant.

This, then, leads us to the crux of the issue, the fallacy of the dogma of scientific truth. The scientific fact of evolution provides a suitable and, I hope, convincing way of illustrating this.

Evolution happens. On earth we know of only one kind of biochemistry, and all known life forms, their vast and conspicuous differences notwithstanding, are simply different exemplars of it. Over time life forms change, new ones evolve and others die out. These are facts, scientific knowledge.

But all scientific facts are not equivalently secure. They are arrayed along a dimension of believability ranging from tenuous at one end to as securely supported as humanly possible at the other. The scientific fact of evolution lies at the latter end. It is one of science's most well-established, unequivocal facts. It is supported by a plethora of evidence from different scientific disciplines, and it is contradicted by none. Anyone who accepts the validity of evidence must accept evolution as a scientific fact.

But not everyone accepts the universal validity of evidence, and some Creationists reject all the evidence for evolution with a contention something like the following. They say the universe was purposefully created by an omnipotent Creator. And when it was, the Creator, for His/Her/Its own inscrutable reasons, built in false evidence of evolution. Perhaps this was done in order to provide a challenge to religionists' faith. True believers, the Creator may have reasoned, would demonstrate and prove their devotion by rejecting evidence inconsistent with their faith. But whatever may have been the Creator's purpose, these Creationists argue, the truth is that the evidence which seems to establish evolution as a scientific fact is divine fakery. Therefore they reject it.

When I mention this contention to persons who accept the scientific fact of evolution, the virtually universal response is to challenge me. "Surely, you don't believe all the evidence for evolution is a heavenly hoax!" these people say. Well, no. Of course I don't believe it. I consider this fake evidence argument to be a pathetic attempt to salvage Creationist dogma. But my beliefs are irrelevant. Truth no more is determined by my beliefs than it is by Creationists'. Truth is independent of and superior to all beliefs. And this statement is itself a truth, something which, unfortunately, many theoretical scientists can't get through their heads. They can't imagine how their particular beliefs could be anything but irrefutable truths. They are as dogmatic as the religionists they criticize. Well, there is at least one irrefutable truth here. Whether or not the evidence for evolution is false, the issue is necessarily and inescapably a matter of belief. For neither I, nor the Creationists, nor any one of the world's multi-billions of persons can either prove or disprove the Creationist _ersatz_ evidence contention.

And when I point this out to my questioners, almost all answer something like this: "It doesn't matter whether the Creationists' fake evidence argument can be refuted. Scientific knowledge is the most successful and useful thing humanity has, and it is successful because scientists use evidence as the key with which to unlock and discover truth. Scientists base scientific knowledge on evidence. They never dismiss or discard it."

But in fact scientists always dismiss and discard evidence, and they always have. Evidence is derived from empirical data, and despite widespread naive beliefs to the contrary, data are never self explanatory. They must be made into evidence by being interpreted, and these interpretations are inescapably subjective. Scientists interpret, misinterpret, select, discard, distort, conceal and sometimes, to their great shame, a few even fake data either to get evidence they believe makes sense of the phenomenon they are studying or to avoid evidence which doesn't. And though it is profoundly paradoxical, the result of all this belief driven evidence tampering is scientific knowledge, humanity's greatest achievement, its most powerful and most reliable tool.

If you doubt such subjective interpretation and selectivity, let me give you a conspicuous and, I trust, convincing example. Quantum events, things such as the decay of a radioactive atom, are the most basic of all physical phenomena. Physicists conducted extensive, exhaustive research attempting to find a cause of quantum events, but always failed. This research has found precise probabilities for the occurrence of particular quantum events, but never any discernible cause. Therefore, physicists nearly unanimously conclude, these events are not caused, but instead are inherently probabilistic. Quantum events just spontaneously occur with characteristic frequencies. But the man often considered to be history's greatest scientist, Einstein, contemptuously rejected this conclusion with the remark that God does not play dice with the universe. Einstein believed causality to be a self-evident truth. Therefore, he insisted, the failure of data to show any hint of quantum causality is not evidence of inherent probabilism, but rather evidence that the quantum data are defective. Such data, he always adamantly insisted, could not be evidence that quantum events are uncaused. Therefore, the great theorist rejected the inherent probabilism conclusion and the evidence (fallacious in his view) on which it is based.
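The inherent-probabilism view can be made vivid with a toy simulation (my own hypothetical sketch, not anything drawn from the quantum literature): give every atom the same fixed chance of decaying in each time step, with nothing whatever distinguishing the atoms that decay from those that survive, and a precisely regular half-life emerges anyway.

```python
import random

# Toy model of uncaused, probabilistic decay: each atom has the same
# fixed chance of decaying in every time step; no property of any atom
# "causes" its decay, yet the aggregate behavior is lawful.
random.seed(42)

P_DECAY = 0.05          # hypothetical per-step decay probability
atoms = 100_000         # initial population
steps = 0

while atoms > 50_000:   # run until roughly half the population has decayed
    atoms -= sum(1 for _ in range(atoms) if random.random() < P_DECAY)
    steps += 1

# Individually unpredictable, collectively regular: a stable half-life
# of about 14 steps emerges, since 0.95**14 is roughly 0.49.
print(f"half of the population decayed after {steps} steps")
```

No individual decay in this sketch has a cause, yet the half-life is as dependable as any deterministic law, which is just the situation that led physicists to accept probabilities as the whole story.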

So whose scientific truth is the true scientific truth? I don't know. More to the point, nobody does. Nor can anybody. Nobody can know truth for a simple, fundamental and inescapable reason. To wit: **For something to be truth it must be known with absolute certainty. But until we know everything, we can not be absolutely certain we know anything**. This isn't a philosophical proposition, not some pretty paradox based on obscure and airy abstract reasoning. It is a well-established empirical fact. Let's return to the issue of evolution, for it provides an irrefutable illustration.

Sometime after Darwin (and Wallace) proposed natural selection as the means of evolution, this mechanism was proven impossible by one of the greatest physicists of the time, indeed, one of the greatest physicists of all time, William Thomson, Lord Kelvin. Natural selection would require a vast amount of time. Thomson proved Earth could not possibly be as old as the natural selection process would require. Thermodynamics, a science to which Kelvin made significant contributions, requires the Earth to have cooled over time. Given its present temperature and the laws of thermodynamics, one can calculate how long the cooling has lasted. Even if Earth were initially composed totally of the most energetic combustible material known, Kelvin's unquestionable mathematical analysis showed, Earth could not be more than about one hundred million years old, far below the amount of time needed for natural selection to produce Earth's abundant different life forms.

As you probably are well aware, current estimates of Earth's age place it at approximately four and a half billion years. Thomson's upper limit was four-and-four-tenths billion years too low. But not because of any mistake he made. Rather, his well-reasoned conclusion was undone by what at the time neither he nor anyone else knew: radioactivity. The enormous energy release of radioactive isotopes decaying inside the Earth resupplies the heat Thomson's calculations had concluded would be completely exhausted in only a few million years. Although he was one of the most learned and intelligent physicists of his time, and although his thermodynamics and mathematics were flawless, Thomson's excellently reasoned conclusion was rendered nonsense by something he did not know, something which, when he made his estimate of Earth's maximum age, nobody knew.

This is merely one example of an event which has repeatedly bedeviled science. I'll provide other examples below. The general and inescapable point is this: **We do not know what we do not know, but what we do not know always has the potential of invalidating what we think we know**. Therefore, until we are omniscient, until we absolutely know absolutely everything, we can never be sure anything we know is true. What is even more damaging to our truth-seeking aspirations, we can not even be sure scientific knowledge is on track to truth. Since we do not know what we do not know, it is conceivable that what we think we know (as well as all that we can accomplish on the basis of what we think we know) has an entirely different explanation in ultimate truth (if, indeed, there is an ultimate truth). And this ultimate uncertainty is a logical and empirical fact which is not ameliorated no matter how brilliant our mathematical deductions, no matter how empirically well supported our presumed knowledge, and no matter how many marvelous things we can do on the basis of our presumed knowledge.

The dogma of scientific truth says scientific knowledge is, or at the very least is approaching, a description of reality perfectly isomorphic with the natural laws governing reality. But to establish this conceit it would be necessary to compare scientific knowledge with natural law and show they are the same. This is impossible, because nobody has _a priori_ knowledge of natural laws. If we did, we wouldn't have to do science to try to learn them. Indeed, nobody even has _a priori_ certainty that natural laws really do exist. Like the vast majority of scientists, I am convinced they do. But neither I nor my co-believing colleagues can prove it. The only certainty, _i.e._, supposed truth, anyone has, or can have, are dogmatic beliefs such as religionists' belief in creation or Einstein's belief in causality. All anyone, scientist or nonscientist, has or can have are beliefs because, without absolutely sure and certain _a priori_ awareness of all natural law, it is impossible to compare it and scientific knowledge in order to determine if the two are in fact isomorphic. They may be. But until we are omniscient nobody can ever know whether or not they are. And this conclusion, a very sad one for truth-seeking theoreticians like Einstein, is logically irrefutable.

Paradoxically, indeed, astoundingly paradoxically, its ultimate uncertainty notwithstanding, scientific knowledge is overwhelmingly the most useful thing humans have ever created. And this profound paradox implies another profoundly paradoxical scientific fact: Truth does not matter. Truth is unnecessary and inconsequential. Scientific knowledge may or may not be true, but it works. And that is all we need in order to control some events for human betterment and survival.

Well then, since its truth can not be established, what is scientific knowledge? And if it isn't truth, why is it so spectacularly successful and useful? The quickest and most convincing way to answer is by a synoptic history of science.

### A Brief History of Science

Abundant evidence shows humans have a need to explain things. Quite understandably our explanations are made in terms of what we know, or think we know. Humans are group dwelling animals, and one thing group dwellers well know is the importance of the intentions of one's companions. So it is understandable that humans' most common explanatory paradigm is one of intention attribution. This is most clearly shown in primitive cultures. When primitive people attempt to explain something they say the event occurred because some entity possessed of sufficient appropriate power willed it. The entity may be a person, a nonhuman animal, an inanimate object or a nonmaterial spiritual being. Such entities' powers are presumed to vary from quite specific and limited to omnipotence. Indeed, the variety of entities and of their powers are enormous. But all have a fundamental common feature, intentionality. Things happen because some entity with suitable power intentionally causes them. This common assumption might be called the Universal Intentionality premise.

Some people, especially many scientists, consider Universal Intentionality to be accepted only in primitive cultures or by persons who themselves are primitive and uneducated. The notion that natural events, _e.g._ , earthquakes, happen because some entity intends it is, these people claim, patently absurd to persons having even a modicum of modern education. But this claim is itself uneducated. The world's population includes billions of persons, many millions of whom are highly educated in various disciplines, including the sciences, who explicitly believe every event happens because, sometimes immediately but always ultimately, some god or gods intentionally caused it. Far from being a notion which is destroyed by education, the evidence strongly suggests the Universal Intentionality premise is humanity's default assumption, education not (necessarily) withstanding.

In the prehistoric period of human existence, Universal Intentionality appears to have been humanity's only notion of how things happen. But eventually a different explanation evolved. Brief histories such as the present one often say this idea was original to the ancient Greek thinker Thales of Miletus. But while Thales is an historical figure known to have advocated the new premise, historians view the crediting of its origination to him as legend, and we are wise to accede to their well-considered opinion. Moreover, for our purposes, it is immaterial who developed this new notion and how and why it was done. The notion itself, and its adoption by ancient Greek intellectuals, is what concerns us, for it is essentially our conception of natural law.

About two and a half millennia ago ancient Greek thinkers began to replace the notion of reality controlled by the whims of gods, spirits, witches, _etc._, with an idea holding things to be governed by totally impersonal and totally non-intentional, natural, cause-effect processes. There is an obvious inconsistency between this idea and Universal Intentionality. But there are ways to reconcile the ideas, and literally millions of persons subscribe to both. For example, some modern monotheists say natural laws are the tools intentionally created by an omnipotent deity to govern the ordinary day-to-day functioning of the world, sort of a divine autopilot. But the deity can and does manipulate the natural laws, and occasionally supersedes them in order to intentionally control events. Thus, while many people, scientists and nonscientists alike, totally reject the Universal Intentionality notion and consider all events to be under the control only of impersonal natural processes, many other people, scientists and nonscientists alike, subscribe to both notions, with or without trying to logically reconcile them.

Of supreme importance, the non-intentional processes the ancient Greeks considered responsible for events were explicitly deemed to be humanly comprehensible, discoverable by reason, and capable of being precisely communicated between persons. It is important to emphasize these characteristics for they are not necessarily implied by the idea of a reality governed by non-intentional natural processes. Ancient intellectuals on the Indian subcontinent also developed ideas holding reality to be governed by non-intentional processes inherent in the nature of things. However, these Indian thinkers held these processes to be beyond rational human understanding, incapable of being grasped or discovered by reason, and therefore incapable of being communicated to others. The processes are not considered to be irrational. Rather, their functioning is considered to transcend reason. Thus each person who seeks to gain an appreciation of these transcendent processes must engage in activities like asceticism, yoga or meditation in order to gain enlightenment, an immediate, insightful but not rational comprehension. And because such enlightenment transcends reason, it can not be analytically communicated to others. An enlightened person might help others in their own quest for an insightful understanding of the transrational natural forces governing reality, but each seeker can gain enlightenment only for one's self.

Science is a social phenomenon. Its conclusions are built from the observations, ideas and criticisms of all the many persons who can and do contribute to the scientific enterprise. Quite obviously, therefore, this Indian idea of a reality governed by processes transcending reason, processes which can not be explicitly described and communicated, could not be a foundation for science. Westerners often consider the Indian idea therefore to be inadequate and inferior to the Greeks'. Insofar as providing a basis for science is concerned, this charge is self-evidently correct.

But a further conclusion which many westerners make says the notion of reality's governing processes transcending reason is necessarily and obviously wrong. This charge is nonsense, pure culture-centric prejudice. A compelling case can be made for evolution having selected humans to have intelligence able to comprehend the environment wherein our evolution occurred. Only by an enormous leap of faith, however, can we assume this particular and conspicuously limited intelligence is necessarily capable of comprehending everything no matter how distant from and dissimilar to the earthly environment which has shaped it. There is no logical reason why ultimate reality must be humanly comprehensible. Indeed, there is no logical reason why reality itself must ultimately be logical, and many profound thinkers believe quantum theory, with ideas such as its uncaused spontaneous quantum events and its wave-particle paradox, proves ultimate reality is not logical. (Quantum theory says every quantum of energy is both an object, a particle, and a wave, an action of a body containing vast numbers of particles. Every physicist agrees, this is an absurd contradiction. However, every physicist also agrees, this absurdity is indubitably established by abundant quantum data.) Those westerners who assume reality is self-evidently completely logical, and who assume humans are self-evidently capable of completely understanding reality, are demonstrating nothing more than culture-centric dogma.

And this dogma holding reality to be completely rational and completely humanly comprehensible is an essential foundation for the idea that science discovers truth. It is possible for science to discover natural laws only to the extent this rational reality presumption may be true. For obviously, we can neither discover nor explicate natural laws if we can not comprehend them. So implicit in the notion that science can and does discover truth is an assumption holding human intelligence to be essentially unlimited. Some people say we scientists are unbearably arrogant. In view of this assumption, such charge may very well be very well founded.

But arrogance isn't the problem. Implausibility is. Is it reasonable to suppose human animals, mere talking cousins of chimps and gorillas, are possessed of unlimited intelligence? Such an assumption is conceivable if humans are, as many religionists believe, created in the image of an omniscient god. But those same believers, I understand, consider it blasphemous to claim human intelligence to be on a par with that of their all-knowing god. Arrogance and blasphemy aside, the belief that science can discover truth is _a priori_ implausible because it necessarily presupposes an implausible unlimited degree of human intelligence.

However, there are other more immediate reasons for rejecting the notion of science as a truth discovery tool, so let us continue the story in order to learn some of them.

**Ancient Greek Science**

On the basis of their premise holding events to be governed by rationally comprehensible non-intentional natural laws the ancient Greeks built a science. I say "a science" to emphasize the differences between it and modern science. For while ancient Greek philosophy, _i.e._, its science, is the principal intellectual forebear of modern science, the two are profoundly different. Today science and technology are so intertwined they are virtually synonymous. Ancient Greek science, however, had almost no technological aspects or implications. If one of its thinkers did develop a technical device, it was considered incidental and irrelevant to philosophy. Thus, when Archimedes invented his water-lifting screw, he considered it so insignificant he never wrote a description of it. The goal of ancient Greek science (as its usual title, philosophy, indicates) was truth. These philosophers didn't seek to do anything. They only tried to achieve a rational understanding of how impersonal non-intentional processes, _i.e._, natural laws, make things happen.

Explanations based on reason are vastly superior to any of those whole-cloth Universal Intentionality fairytales previous cultures had accepted. Nevertheless, reason provides far less than the infallible truth the Greeks and many others have claimed for it. And though they didn't realize what they had done, the ancient Greeks themselves proved this. One of their (and humanity's) greatest achievements is geometry. The method they used to develop it is the archetype of all proper, rigorous rational thought. On the basis of a set of assumptions or axioms which Greek thinkers took as self-evident, they rigorously deduced a geometry in which every conclusion is logically proven to be truth.

Or so everyone thought until a few hundred years ago when mathematicians realized the conclusions of geometry are no more true than their underlying axioms. This fact was long hidden because everyone considered geometry's axioms to be self-evident, _i.e._, necessarily true. Only when mathematicians, using one different axiom, developed a different but equally rigorously logically consistent geometry (non-Euclidean geometry), did philosophers fully realize the limits of reason. No matter how exquisite and perfect one's logic, it is now universally acknowledged, a reasoned conclusion is only as true as are the axioms, _i.e._, assumptions, it is based upon. And there can be no necessarily true axioms, for any attempt to prove an axiom's truth leads to an infinite regress.

This is the Achilles' heel of every rational system. None can be truer than its starting axioms, but there is absolutely no way ever to ascertain axioms' truth. One simply has to take them with what, in the last analysis, can only be blind faith. The axioms the ancient Greeks used to develop geometry are certainly plausible. They clearly hold within immediate human experience, but whether they are also true of the universe everywhere beyond or beneath human experience is inherently unprovable and therefore, unknowable. Unfortunately the axioms the Greeks used as the foundation for their science, though they seemed more than plausible to the Greeks, were not so fruitful. I'll illustrate below one way in which Greek science was led astray by infelicitous axioms. But first we need to consider another characteristic of the ancient Greeks' science, a characteristic modern scientists consider an incapacitating defect, a defect introduced by their unreasonable devotion to reason: anti-empiricism.

**Hypothetico-Deductive Reasoning:** The Greeks' adulation of reason was so great it led them to minimize or ignore data and evidence. A highly relevant example is found in the work of Aristotle. He developed an explanation of physical phenomena. From this Aristotelian physics one may deduce that the speed of an object's freefall should be proportionate to its weight. Though there do not seem to be any extant Aristotelian writings stating this deduction, it is universally agreed to follow from his physics, and Aristotle, the genius who first systematized formal logic, most certainly would have been aware of this. Yet there's no evidence he ever made any attempt to test this deduction by the simple and obvious expedient of dropping a light and a heavy object and comparing their speeds of fall.

To modern scientists this omission is incomprehensible. The gold standard of modern science, the means by which it claims to prove its conclusions to be natural laws, is the hypothetico-deductive method. Starting with an hypothesis, _e.g._ , Aristotle's physics theory, one deduces consequences which must follow, _e.g._ , faster freefall for heavier objects. One then looks for this event either by carefully observing natural events or, as with the dropping of different weight objects, by direct experimentation. If the predicted outcome is not observed, one then concludes the premise from which the hypothesized effect was deduced is not true. But if the hypothesized event is observed one concludes it is true, ergo a natural law has been proven. Or so says modern science.

Aristotle had everything needed to do a hypothetico-deductive test of his freefall explanation. Yet he didn't. Nor did he ever tell why he did not. This omission, the usual science history implies, reveals a colossal ignorant blind spot in the ancient philosopher. But I firmly believe it is we modern scientists, not Aristotle, who suffer from ignorant blindness. We unthinkingly suppose it self-evident that data reveal truth. But Aristotle, the man universally and rightfully regarded as one of humanity's greatest thinkers, certainly knew better. He is the man who first formalized logic, and one of the fundamental theorems of his logic is the Fallacy of Affirmation of the Consequent. And this theorem says any empirical test of his freefall proposition would prove nothing. Whether the results were or were not in accord with his deduction, there is no way to prove the results were not due to some unknown extraneous factor. And as a matter of logical fact this uncertainty is itself certain.

Whatever the results of such a test might be, it is impossible to logically prove they are due to, and due only to, the processes presumed by the hypothesis under test. Any empirical observation which might be done must, of necessity, be done in the real world. Obviously we can't temporarily exit time and space and enter some external reality where no possible irrelevant, contaminating factors exist, therein to do our test. We must of necessity do it in the real world where there well may be, and likely are, a plethora of such contaminants. But we do not know them. After all, we do science because we do not know what causes the phenomenon we are studying. In a word, we are ignorant. So how can we in our ignorance know no unknown irrelevant factors are responsible for whatever results we observe? Absent omniscience, there is no way to be sure there were none.

Therefore, there is no way to prove the positive results a hypothetico-deductive test might achieve are due exclusively to the factor an investigator supposes and not, in part or wholly, to some unknown other factor. Nor is there any way to prove the negative results one might suffer did not occur because some extraneous factor(s) camouflaged positive results. Therefore, empirical tests do not and can not logically prove the natural law status of the premise from which a tested hypothesis was derived. The ugly truth of the matter is this: It is logically impossible for empirical data to establish truth, and truth was what Aristotle and the other ancient Greek philosophers sought.

Eventually other less astute thinkers put Aristotle's prediction to hypothetico-deductive test. They dropped objects of different weights and observed their relative speeds of fall. And a perfect demonstration of the contamination of empirical data by unknown factors is found in the best known such case. The Italian Renaissance scientist Galileo, although he was neither the only nor the first one to consider the test, reported that if a heavy and a light object are dropped simultaneously, the lighter will hit first, _i.e._, it will fall faster, the opposite of the deduction from Aristotle's physics, and a result inconsistent with modern physics as well. Surprising as Galileo's report is, I can assure you, if you hold a pair of different weight heavy objects in your hands and simultaneously drop them, the lighter usually does hit the ground first. (With the aid of the science historian Thomas Settle I conducted a test of this and have shown it to be so. See the Appendix.) Because Galileo's report is inconsistent with modern physics' description of freefalling objects (a description of unquestionable accuracy) we can be sure some unknown contaminating factor(s) affected his results. Were one only concerned with which, the heavier or lighter, of hand dropped objects falls faster, from Galileo's data one could conclude the lighter usually does. But if one presumes the test to have divulged truth, _i.e._, if one claims to be able to deduce with certainty the governing natural law(s) from the data of such a hypothetico-deductive test, or from any other empirical data, one is being naive to the point of abject ignorance.

This problem exists with all empirical data. Without being in complete and absolute control of every single factor affecting an empirical observation or test, a degree of control which is exceedingly and, sometimes, impossibly difficult for known factors, and necessarily is always absolutely impossible for unknown ones, a scientist can never know whether, nor the extent to which, an empirical test's results are due to something other than the factor(s) he or she supposes.

**Affirmation of the Consequent:** To derive a definitive conclusion from empirical data which can never be known not to be contaminated by extraneous unknown factors is to commit the error Aristotle identified and called Affirmation of the Consequent. This logical fallacy is usually explained with abstract generalizations, but I find the abstracting distracting. So I'll illustrate it with a concrete example.

Malaria was long known to be frequent in swampy locales where the air is damp and fetid, and such air was presumed to cause the disease. In fact, the word malaria is simply an English approximation to _mala aria_ , Italian for "bad air". As I am sure you know, this premise is wrong. Malaria is caused by a mosquito borne pathogen, not bad air. Knowing this, with the arrogance of hindsight some people ridicule the bad air hypothesis, but it was not at all unreasonable. Bad air can cause sickness. Having spent years as a researcher in an institute studying asthma, I can assure you that pollen saturated air is bad indeed for allergic asthmatics, and the pathological effects of air pollution and tobacco smoke are well established.

Thus, it would not have been at all unreasonable for some scientist in the Eighteenth Century, let us say, to test the bad air hypothesis via the hypothetico-deductive method. Such a scientist could have deduced, completely logically, that if bad air causes malaria, then removal of bad air from a swamp by draining should reduce the incidence there of malaria. We now know this is exactly what would happen. However, it would have been illogical for our imagined scientist to conclude from this result that the bad air hypothesis had been proven true. Not because the conclusion is wrong. The conclusion would be illogical even if it had been right. It is illogical because unknown factors may have caused the observed effect. In this case we know that indeed would have been the case. While draining a swamp reduces its bad air, this is incidental to the hypothesis being tested. The decrease in malaria incidence occurs because draining also removes the habitat for the disease vector, the mosquitoes which carry the malaria causing parasite. In the Eighteenth Century this knowledge didn't exist, so our imagined scientist could not be criticized for not knowing it. But he could be criticized for not knowing he was committing the Fallacy of Affirmation of the Consequent if he thought the result of his swamp draining experiment allowed a definitive conclusion about the bad air hypothesis, for Aristotle had identified this logical error two millennia earlier.

There is something psychologically compelling about the positive results in hypothetico-deductive reasoning. After all, what better demonstration of knowledge can there be than calling one's shot? Unfortunately, however, our intuitions are fallacious and untrustworthy. As the malaria illustration shows, one can call one's shots accurately yet still be totally wrong. That's what the Fallacy of Affirmation of the Consequent is all about. Precisely accurate predictions can be and often have been made from completely wrong hypotheses, _i.e._ , explanations or theories.
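The invalidity of this inference pattern can even be checked mechanically. Below is a minimal Python sketch, my own illustration rather than anything from Aristotle's logic texts, which enumerates every truth assignment and finds the one case where "P implies Q" and "Q" are both true yet "P" is false:

```python
from itertools import product

def implies(p, q):
    # Material implication: "P implies Q" is false only when P is true and Q is false.
    return (not p) or q

# Affirming the consequent: from "P implies Q" and "Q", infer "P".
# The inference is valid only if no truth assignment makes both
# premises true while the conclusion is false.
counterexamples = [(p, q) for p, q in product([True, False], repeat=2)
                   if implies(p, q) and q and not p]
print(counterexamples)  # [(False, True)]: premises true, conclusion false

# Compare modus tollens: from "P implies Q" and "not Q", infer "not P".
# No counterexample exists, so that inference IS valid.
mt_counterexamples = [(p, q) for p, q in product([True, False], repeat=2)
                      if implies(p, q) and not q and p]
print(mt_counterexamples)  # []
```

The single counterexample row is the malaria story in miniature: the consequent (less malaria after draining) was observed to be true while the hypothesized antecedent (bad air causes malaria) was nonetheless false.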

What has just been said may reasonably be misconstrued, so let me prophylactically protect you. The Fallacy of Affirmation of the Consequent does NOT say an accurately called shot is necessarily wrong. It doesn't even say it is probably wrong. Indeed, it is probably right. But it _always_ may be wrong. Something one does not know may make it wrong. And because one does not know what one does not know, one can neither know if it is wrong, nor know how likely or unlikely it is to be wrong. Ergo, one can not and must not trust this method (or any other empirical methodology, for that matter) to determine truth. But as long as one keeps this limitation in mind, as long as one is perpetually aware that all empirical findings are potentially misinterpreted, one can use the hypothetico-deductive method to establish believable, though necessarily tentative scientific knowledge. Vastly more importantly, as long as the conclusion one draws from such research works, it doesn't matter if it isn't true. For example: If all one wants is to reduce the incidence of malaria, one can do so by draining swamps. This was one of the main things the US Army did to suppress malaria (and yellow fever, another mosquito borne disease) when the US built the Panama Canal. In that case they did know why draining swamps was effective, but it would have worked even if they hadn't known. One does not need to know why something works if it reliably does.

There simply is no escape from the consequences of our lack of omniscience. We do not know what we do not know, and what we do not know always has the potential of invalidating what we think we know. Though many modern scientists either don't know this, or ignore it, Aristotle certainly knew the Fallacy of Affirmation of the Consequent and therefore, I believe, never tested his deduction that an object's freefall speed is proportional to its weight. He knew such test could never establish truth, and truth was all he sought. Reason, he believed, when properly done, infallibly identifies truth. Empirical data, reason told him, does not.

**Data Inaccuracy:** Nor was Aristotle the only great ancient Greek intellectual to dismiss empirical data. For example, Plato once advised those interested in understanding planets' movements not to study the planets themselves. Rather, he maintained, one must study geometry instead. To modern ears Plato's dictum sounds like patent drivel. How can one learn anything about a phenomenon by ignoring it?! But there is a sound rational basis for Plato's recommendation. His thinking seems to have gone something like this: Multiple occurrences of the same event tend to vary unsystematically. Plato called his readers' attention to this by likening empirical observations to the flickering shadows on a cave wall. Therefore, if one makes an intensive study of the planets' actual movements, these shadowy irregularities will obscure the underlying orderliness of the phenomena with a lot of inconsistent, adventitious detail, leading to the error commonly called "failing to see the forest for the trees". To understand the planets' movements, Plato apparently intended to say, one should eschew the data from planetary observations, data which are always contaminated with accidental, irrelevant irregularities, and focus instead on developing an explanation of the geometry underlying and capable of elucidating the essence of planetary movements.

Let us now consider how an almost exclusive reliance on reason and logic impaired ancient Greek science. And in fact, the most useful illustration is its explanation of the planets' movements.

**Ancient Greek Celestial Mechanics**

Despite the Greeks' dislike of data and evidence, they had no choice but to begin their reasoning from them. After all, there would be nothing to explain without data. The movement of things on earth, Aristotle concluded from observation, is twofold. Things move by being made to move, by applying a force to them. But unrestrained things may spontaneously move on their own. They do so, he concluded, to get to their proper place in the universe, which he reasoned was different for different fundamental classes of things. For example, the proper place for heavy things, Aristotle said, is the center of the earth. And unrestrained heavy things move toward it.

In the heavens, however, things were different because heavenly objects were supposed to be perfect. And as a result of this perfection heavenly objects moved perpetually. Two other aspects of their perfection were that heavenly objects moved at constant speed and in perfect circles. An obvious next question arose: What is the center of these perfect circles? Reason was able to provide an answer, to show the center to be the earth itself. Many modern persons believe the ancient Greeks childishly, naively and unthinkingly assumed from superficial appearances that the earth is the center of heavenly objects' perfectly circular orbits. But in fact this modern belief is the naive assumption. The Greeks did not simply take an unmoving earth at the center of the universe for granted. They used reason and evidence to prove that it must be so.

First, if the sun were the center, as in fact one ancient Greek philosopher had suggested, then the earth must have two separate and independent movements. It must move around the sun once a year and also spin around itself once every day. One movement might be plausible, but to suppose there existed two entirely different ones seemed like special pleading. Obviously, anything and everything can be explained if one invents a separate process for every separate aspect of a phenomenon. But such an explanation is no explanation at all. All modern scientists agree. Modern scientists extol something we call theoretical parsimony, whereby we strive to make our scientific explanations contain as few separate hypothetical processes as we can. Similarly, the Greeks selected the most parsimonious explanation of earth's movements. They concluded it had none. Instead, they decided it sits stationary in the center of the universe.

But this geocentric conclusion wasn't simply a matter of theoretical parsimony. There were (and still are) solid facts to support it. Consider an object thrown straight up. If the earth were moving, the Greeks quite logically reasoned, in the time it took for the object to fall, the earth would move forward. Thus, the object would land behind the point from which it was launched. But no such backward landing is ever observed. Also, just as a rock spun in circles at the end of a string will fly away if one lets go of the string, if the earth were spinning around an axis it would throw off unattached things. And this effect also is never observed.

Yet another result of earth's hypothesized movement could not be detected, stellar parallax. Parallax is an apparent movement of something caused by a change in the location from which it is seen. Observe a finger at arm's length while you successively close one eye then the other. The finger will appear to move relative to the background. If the solar system were heliocentric, _i.e._, if the sun were at the center, then whenever an observer on earth views the heavens, he/she is seeing them from a viewpoint which is the diameter of the earth's orbit distant from the viewpoint of a half year earlier. Therefore, at different times of year nearby stars should appear to be located differently with respect to background stars. The Greeks could detect no such stellar parallax. Ergo, they again concluded, the earth does not move.
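The Greeks could not have detected stellar parallax even though it is real, because the angles involved are minuscule. A rough Python sketch, using modern values they did not possess (the illustrative numbers are mine), shows why:

```python
import math

AU = 1.496e11          # meters: earth-sun distance (modern value)
LIGHT_YEAR = 9.461e15  # meters

# Annual parallax: half the apparent angular shift of a nearby star
# seen from opposite ends of earth's orbit, converted to arcseconds.
def parallax_arcsec(distance_m):
    half_angle_rad = math.atan(AU / distance_m)
    return math.degrees(half_angle_rad) * 3600

# Even the nearest star, about 4.2 light-years away, shifts by well
# under one arcsecond. Naked-eye resolution is roughly a full
# arcminute (60 arcseconds), so no unaided observer could see it.
print(round(parallax_arcsec(4.2 * LIGHT_YEAR), 2))  # ~0.78 arcseconds
```

The absence of detectable parallax was thus genuine evidence, correctly gathered; what was unknown to the Greeks was simply how staggeringly far away the stars are.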

So, employing what was essentially modern science's supposed truth establishing hypothetico-deductive method, the ancient Greeks tested the moving earth idea and rejected it because these tests failed. Their hypothetico-deductive analyses proved the truth that the earth does not move. Or so they thought.

{It's necessary to pause here for emphasis. With respect to the issue of whether the earth moves, the ancient Greeks used exactly the kind of hypothetico-deductive reasoning which supposedly enables science to discover truth. There was absolutely nothing wrong with their analyses. If you make the same empirical observations the Greeks did with the instruments they used, you will find exactly the phenomena they reported. Modern science can fault neither their reasoning nor their data. Yet modern science considers the Greeks' stationary earth conclusion to be in error. It does so because, on the basis of things unknown to the ancients, modern science explains why their hypothetico-deductive reasoning failed. Better than any logical argument Aristotle or anyone has made or could make, this proves what he said about the Fallacy of Affirmation of the Consequent. Unknown factors can and do contaminate and invalidate the conclusions of science's supposedly infallible method.

{Unlike the claim of those who believe science discovers truth with the hypothetico-deductive method, these examples prove science's premier methodology is fallible. As the above shows, its results can be completely invalidated by what its users do not know. Modern physics identifies these Greek unknowns. Using ideas the ancient Greeks did not have, modern science explains how and why their hypothetico-deductive proofs of a stationary earth were erroneous. These modern explanations are plausible and believable. I certainly believe them. But these modern explanations themselves depend upon exactly the kind of hypothetico-deductive reasoning which led the Greeks astray. So how can we be certain these modern explanations are also not invalidated by factors we don't know? I don't think they are, but I can't prove they are not. Nor can anyone else. And without such proof we have no truth.

{Inscribe these examples of fallacious hypothetico-deductive conclusions indelibly in your memory, for they prove science's gold standard to be tarnished. Just as Aristotle knew, the empirical method which modern science says can discover and prove truth does not prove anything. Even when carefully and properly applied, it can lead to, and it has led to, seriously misleading conclusions. The Greeks' fallacious hypothetico-deductive tests are especially relevant to the present micro history, but they are only some of the many equally fallacious tests which, over the years, have complicated and misled scientists' search for knowledge. There can be no doubt. Those who say science discovers truth with this method are speaking utter nonsense. Their claim is demonstrably false.}
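To see the flavor of one of those modern explanations: earth's spin fails to fling objects away because the centripetal acceleration its rotation demands is a tiny fraction of gravity, a quantity the Greeks had no way to compute. A back-of-the-envelope sketch in Python (my own illustrative arithmetic, using modern values):

```python
import math

EARTH_RADIUS = 6.371e6   # meters
SIDEREAL_DAY = 86_164    # seconds for one full rotation
g = 9.81                 # m/s^2, gravitational acceleration at the surface

# Centripetal acceleration needed to keep an object moving with the
# spinning earth at the equator: a = omega^2 * R.
omega = 2 * math.pi / SIDEREAL_DAY
centripetal = omega ** 2 * EARTH_RADIUS

print(round(centripetal, 3))            # ~0.034 m/s^2
print(round(centripetal / g * 100, 2))  # ~0.35 percent of gravity
```

Gravity supplies that acceleration thousands of times over, so nothing is thrown off; and the thrown object lands where it was launched because it shares the earth's motion. Both replies rest on concepts of inertia and acceleration the ancients simply did not have.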

Returning to our story: On the basis of unambiguous results from different hypothetico-deductive analyses the Greeks concluded the earth is stationary. Therefore, celestial objects move around earth, and according to the Greeks' Aristotelian axioms, this movement is at constant speed and in perfect circles. The vast, overwhelming majority of heavenly objects, the uncountable myriads of stars clearly seemed to follow these supposed natural laws exactly. But about a half dozen celestial objects did not, the planets. This tiny handful of wanderers moved quite differently. They moved with changing speed. Indeed, sometimes some also changed the direction of their movements, going backwards occasionally.

The Greeks might have ignored these trivially few exceptions. But they apparently thought this would be an irrational cop-out. And it's reasonable they would. Reason says an explanation isn't really an explanation unless it can explain the whole of the phenomenon it addresses. So the Greeks took the planets' aberrant movements as exceptions with which to prove, _i.e._ , test, their celestial constant speed and unvarying direction natural laws. And following Plato's advice, they used elaborate geometry to provide a consistent planetary movement explanation.

The explanation they developed is known to us from a book by the Second Century CE (or AD, for Christians) Greek astronomer Ptolemy. It contains a geometric description of how the planets move. As such it obviously had to be based on at least a few observations of the planets. But even when forced to use data, the Greeks did so dismissively, for some of the data on which Ptolemy based this model were ancient even in his day. It was impossible for him to know the accuracy or inaccuracy of such data, but this clearly didn't concern him. Insofar as he was able, Ptolemy followed Plato's dictum, using as little empirical data as possible, relying instead on geometry to explain away the planetary movements so conspicuously inconsistent with the Greeks' notions of celestial mechanics' natural laws.

The geocentric geometry model which Ptolemy described is universally considered to demonstrate brilliant mathematics. It's relevant to note this because many modern theorists claim mathematics alone suffices to discover and prove natural laws, _i.e._, truth. Since these same persons universally reject the conclusions of Ptolemy's brilliant geometry, one might expect them to be more circumspect about their cabalistic faith in math. Be that as it may, Ptolemy's math involved three special geometric ideas or gimmicks. Epicycles were the most significant. These were little perfect circles on the edge of a planet's big perfectly circular orbit. A planet was described as following its big circle orbit till it came to the location of an epicycle, at which point it moved around the epicycle till it came back to its big circle, which it then resumed following. Thus, for a short time the planet would move in an opposite or retrograde direction even though it was always going forward at a constant speed along its perfectly circular circle-epicycle path.
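The circle-plus-epicycle construction really does produce retrograde motion from nothing but uniform circular movements. A small Python sketch, with parameters that are my own illustrative choices rather than Ptolemy's actual values, demonstrates this:

```python
import math

# Deferent-plus-epicycle model: the planet rides a small circle
# (epicycle) whose center moves uniformly around a large circle
# (deferent) centered on a stationary earth at the origin.
def planet_position(t, R=10.0, r=3.0, w_def=1.0, w_epi=5.0):
    cx = R * math.cos(w_def * t)      # epicycle center on the deferent
    cy = R * math.sin(w_def * t)
    x = cx + r * math.cos(w_epi * t)  # planet riding the epicycle
    y = cy + r * math.sin(w_epi * t)
    return x, y

# Track the planet's geocentric longitude (angle as seen from earth).
longitudes = []
for step in range(700):
    x, y = planet_position(step * 0.01)
    longitudes.append(math.atan2(y, x))

# A decrease in longitude between successive steps means the planet
# appears to move backwards: retrograde motion. (Jumps bigger than pi
# are just the angle wrapping around, so we ignore them.)
diffs = [b - a for a, b in zip(longitudes, longitudes[1:])]
retrograde = any(d < 0 for d in diffs if abs(d) < math.pi)
print(retrograde)  # True
```

Even though every component motion is forward and at constant speed, the combined path, seen from earth, periodically reverses direction, which is exactly the behavior the construction was invented to explain.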

With the Ptolemaic model the ancient Greeks had achieved a complete and completely rational explanation of celestial mechanics, an explanation consistent with their natural laws holding celestial movements to be inherent, _i.e._ , unforced, perfectly circular and of constant speed. With this model, not only were they able to explain how things in the heavens move, they also were able to predict planetary positions at any particular time with what they accepted as adequate accuracy. These predictions were not precise by modern standards, but such precision was obviously considered to be irrelevant, just as Plato would have counseled. The essence of the planets' movements was elegantly rationally described, and the essence of the phenomenon was the truth the ancient Greek philosophers wanted. Thus, they believed Ptolemy's geocentric model with its epicycles and two other geometric gimmicks was what would now be called a natural law. And it was accepted as such for about a millennium and a half.

**The Birth of Modern Science**

Not too long after the time of Ptolemy, science disappeared from the West. Fortunately, in Islamic cultures it was preserved and continued to develop along the rationalistic lines of the ancient Greeks. But in Europe there ensued a period in which rational scientific explanations of phenomena were completely replaced with superstitions and irrational dogma. The notion of events being governed by non-intentional impersonal natural processes was suppressed and the Universal Intentionality premise regained ascendancy, remaining unchallenged for centuries.

The first part of this historical period is often called the Dark Ages, but many historians now consider the name a misnomer. One reason is that, unlike science, technology in Europe neither disappeared nor stagnated. Rather, it continued developing in many ways. Military technologies provide the prime, but by no means the only, examples. As already noted, modern peoples conflate science and technology. Therefore, if these times were devoid of and hostile to science, and they certainly were, we unthinkingly suppose technology also must have been in a similar state. But abundant historical evidence shows this presumption to be erroneous.

It is possible to build a nonempirical science relying, as did the Greeks, almost exclusively on reason. But reason alone cannot build technology. The objective of every technology is exclusively empirical: to make something work as desired. Technology requires continuous intimate interactions with the real world, building a device, trying it, and then modifying it as the results of these trials show necessary in order to get the thing to do what one wants it to do.

I believe the empirical orientation which Dark and Middle Ages European technicians of necessity had to employ eventually spread culture-wide, creating a _zeitgeist_ in which a proposition like Aristotle's "speed of freefall is proportional to weight" deduction was not treated as a truth dictated by reason but was tested by dropping objects of different weights. At the end of medieval times, Greek science was reintroduced into Europe through Muslim Spain. And the application of this empiricist attitude to the rediscovered Greek science produced modern science.

The person credited with beginning this evolution is Copernicus, a polymath Polish Roman Catholic canon. During the time when he was a student of church law as well as of medicine, he also pursued the newly recovered astronomy enough to become convinced the Ptolemaic model was unrealistic: it predicted such wide movements of the moon as would necessarily have caused systematic changes in its apparent size, changes which were never observed. In this Copernicus manifested the empiricist attitude I attribute to his time. Plato, we may presume, would not have concerned himself with this empirical inconsistency.

The Catholic church's leaders, aware Copernicus had considerable knowledge of the ancient astronomy, invited him to participate in a council seeking to more accurately fix the time of Easter. He declined, saying astronomical knowledge was insufficient to achieve the desired goal. He apparently believed the inadequacy to be due to the unreality of Ptolemy's model. Two features, he appears to have concluded, were responsible for this unreality: geocentricism and one of the three geometry gimmicks Ptolemy had used to fit his model to his limited and old data. Copernicus set to work on an alternative model which placed the sun at the hub of the system and which discarded the gimmick he thought unrealistic. But Copernicus retained the idea of epicycles. Indeed, the model he eventually developed had more of them than did Ptolemy's. And Copernicus also retained Aristotle's conclusion that the planets moved spontaneously at unchanging speeds in perfectly circular orbits and epicycles. Although his life overflowed with other activities and responsibilities, and although his model was only slightly more accurate than Ptolemy's, he continued work on it, finally publishing it on his deathbed (literally).

At this point European natural philosophers had two alternative models of the planets' movements, and the task was to choose between them. A Danish nobleman, Tycho Brahe, decided the choice should be made on the basis of data, not the old data Ptolemy and even Copernicus had used, but freshly gathered planetary observations as abundant and precisely accurate as the most exquisite care could make them. To modern ears Brahe's plan sounds obvious. It also sounded reasonable to the Danish king, who granted him the funds for, and an island on which to set up, an astronomical observatory, by far the best one ever before the days of telescopes. The Greeks, however, would not have considered Brahe's data as useful astronomical evidence but rather as evidence of his gross intellectual and educational deficiencies. Even Copernicus, who died only three years before Brahe was born, rejected the idea that more and better data might be needed to refine astronomical understanding, and he once vehemently attacked a contemporary for so suggesting. Plato, we may reasonably conclude, would have thought Brahe conspicuously illiterate to suppose data could replace geometric reasoning in choosing the true model of the planets' movements.

{In turning to data Brahe followed the empiricist _zeitgeist_ which, I have suggested, developed in Europe as a result of its never-interrupted growth of technology. But paradoxically, the intellectual and scientific sterility and emptiness of the Dark and Middle Ages may have played a vital role in enabling Brahe to apply data to science. If there had been no intellectual break in the passage of Greek philosophy through these times, if the ancient presumptions had been handed down across the years from teacher to student, Renaissance thinkers would have been indoctrinated in the Greeks' reason-shackled, anti-empirical, data-ignoring beliefs. By disrupting such instruction, _i.e._ , brainwashing, these unenlightened times set free at least a few key subsequent thinkers, giants like Brahe, to think outside the box, and by such unrestricted thought to develop modern science.}

Brahe amassed the largest body of the most precisely accurate planetary position data theretofore recorded. To find the best geometrical description of these data he hired the German mathematician and astronomer Johannes Kepler. And the model which Kepler found to best describe Brahe's data did away with both Ptolemy's geometry gimmicks and Aristotle's idea that objects in the heavens are perfect and therefore move at perfectly constant speeds and follow perfectly circular paths. In Kepler's best mathematical description of Brahe's great data set, the center of the solar system is the sun not the earth, each planet's path around the sun is an ellipse not a perfect circle, there are no epicycles nor any other geometry gimmicks, and a planet's speed varies continuously and systematically depending on its proximity to the sun. Not only did Kepler's heliocentric model describe Brahe's data far, far better than either the Ptolemaic or Copernican models, shortly after Kepler died his model accurately predicted a planetary transit which neither of the other two models foresaw. Clearly, Kepler's model provided a quantum leap improvement in the accuracy of planetary movement descriptions.

At this point the usual science history itself leaps to the work of Newton. In so doing it leads the reader to reasonably infer something which not only is wrong, but which distracts the reader from learning what scientists actually do. The erroneous implication is that all natural philosophers, as scientists in those days were known, immediately acclaimed Kepler's model. Only an ignorant hidebound reactionary dogmatist, it would seem, could possibly have reservations about such a conspicuously and vastly more accurate model. Most of today's people, scientists and nonscientists alike, believe Kepler's contemporaries eagerly endorsed him. But not all did. Nor, as I will shortly illustrate, do modern scientists in similar circumstances endorse more accurate models.

Well, what did Kepler's contemporaries do? Some, indeed, did accept his model. But others did not. I've never seen any historical analysis investigating why. Probably many simply would not surrender their Aristotelian ideas that planetary movement is constant in speed and circular in direction, conditions conspicuously inconsistent with Kepler's model. But one highly significant thinker also never accepted the German astronomer's new ideas, and we can be sure it was not because he was a slave to Aristotelian ideas. That man was Galileo. Beyond any possible doubt, the man who overturned so much of ancient physics and astronomy and who spent his last years under house arrest for advocating the new heliocentric idea of Copernicus, the same heliocentric idea included in Kepler's model, was no dogmatic conservative. But he never accepted elliptical orbits.

Why would a man so far in the vanguard of the profoundly changing science of his day not endorse a conspicuously more accurate model? Since he apparently never explained his thinking we can only guess. But I think there is a highly plausible possibility. Galileo had developed what is essentially what we now call the Law of Inertia, an idea usually stated something like: "An object in motion will stay in the same motion unless acted upon by a force." In high school we learn this as a Newtonian axiom, but the great English thinker got the essence of the idea from Galileo. For Newton the inertial path is a straight line, but historians I've read suspect Galileo may have thought the inertial path was a circle. Such a path would be a likely explanation of the planets' orbits. But if planets move in Kepler's elliptical orbits, then at every single mathematical point in a planet's orbit both its speed and direction are changing, and some continuously acting force would be needed to continually nudge the planet out of Galileo's inertial circular orbit. Where was this force? Apparently Kepler had similar concerns, for he, a devout Lutheran, suggested invisible angels accompany each planet and supply moving force to it, but Galileo, we may presume, thought this explanation no more plausible than would any modern scientist.

So in general, I conclude that those who didn't accept Kepler's model did so because, its accuracy notwithstanding, it did not make sense. Whether one held the old Aristotelian "perfect circle-constant speed" dogmas or Galileo's new inertia idea, some force was needed to cause the continuous changes in speed and direction which Kepler's elliptical orbits specified. And except for Kepler's implausible invisible angels, there was no apparent force anywhere in the heavens. Again we have an example of the corrosive effects of the dogma of scientific truth. Unquestionably accurate though Kepler's model was, it made no sense whatsoever because it violated one or another principle which one or another natural philosopher accepted as truth.

{At this point I must interrupt the historical story to consider what it means for a proposition to make sense, for while we use the term frequently, and feel we understand it perfectly, we seldom stop to consider what "making sense" means operationally. In order for something to make sense it must agree with one's relevant beliefs regardless of those beliefs' validity. A secular parable can illustrate the point. Ancient peoples perfectly understood that unsupported heavy objects fall, but they did not know about magnetism. Suppose by some magic a modern person took a strong magnet back in time and suspended a piece of iron with it. The demonstration would make perfect sense to the modern but no sense whatsoever to any observing ancient who would be unable to understand why the piece of iron didn't fall. If, however, the time traveler tried to repeat the trick with something that appeared to the modern to be iron but which in fact was lead, the object would fall, and this would make no sense at all to the modern but would make perfect sense to the ancients who expected any heavy object to fall. In brief: **Things make sense if they agree with one's presumptions and prejudices whether or not those beliefs are valid**.}

Galileo's rejection of Kepler's model illustrates what scientists, whether modern, Renaissance or ancient, in fact, always do. We do not discover natural laws because we can not. Scientific conclusions, no matter how widely believed and empirically well supported, simply do not bear truth labels identifying which ones are genuine natural laws and which are only serviceable rules of thumb. Therefore, a scientist who feels he or she has discovered a natural law can only do one thing: Try to convince others that the natural law conclusion makes sense. But as just noted, something only makes sense when it agrees with one's prior convictions, the validity of these preconceptions being irrelevant. As a calculation tool, a means of doing things like accurately fixing the date of Easter, Kepler's model may have been useful. That's a fact. But it's a naked empirical fact, not the rational truth desired by Renaissance natural philosophers, their Greek forebears, and also modern theoreticians. Galileo and others wouldn't accept Kepler's more accurate model because, without a rational explanation for the planets' constantly changing speeds and directions, it didn't make a lick of sense.

**Modern Classical Physics**

A man who, by interesting coincidence, was born the year Galileo died, the English intellectual giant Isaac Newton, provided the rational explanation that finally made sense of Kepler's nakedly empirical planetary movement model and eventually convinced everyone of its truth. Newton did this by inventing the missing speed- and direction-changing force: gravity.

The average scientist, I suspect, will blanch at my choice of words. Certainly the average physicist will. Newton, they insist, did not invent anything. He discovered the Law of Gravity, probably the most fundamental, important and true natural law science claims to have found. This, of course, is the crux of the issue addressed by this essay: Can science's conclusions be proven to be natural laws, to be truth? If Newton's model is natural law, then obviously no one could invent it because natural law preexists everyone. But is it a natural law? The average scientist's use of the word discover reflects an unthinking and undefended prejudice that it is. However, an examination of the case shows the proposition is indeterminate at best. There is abundant data suggesting Newton's law of gravity may well not be a natural law. I'll consider this in the next section, but first we need to consider what Newton did and how he did it.

Newton said an invisible force emanates from every piece of matter in the universe. This force is proportional to the mass of the emitting matter, and it pulls all other matter toward the matter from which it emanates. He made no attempt to explain what this force's functional mechanism might be. But the intensity or effectiveness of it, he did say, varies inversely with the square of the distance between the emitting and the pulled matter. Then with three unproven assumptions, or axioms (two, including the Law of Inertia, he got from Galileo), he showed that Kepler's apparently inexplicable planetary orbits can be mathematically deduced from this supposed invisible force. Ergo, he and virtually everyone since concluded (and everyone since has been taught) that his hypothesized gravity force and three assumptions are truth.
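
Though nothing here substitutes for Newton's own geometric argument, the link between an inverse-square force and Kepler's orbits can be illustrated with a toy numerical sketch. All numbers below are arbitrary illustrative units, and the step-by-step integration is a modern convenience Newton never used: a body pushed only by such a force traces a closed, non-circular orbit whose speed rises as it nears the attractor, precisely the behavior Kepler read out of Brahe's data.

```python
import math

# A toy integration (not Newton's geometric proof): move a body under an
# inverse-square pull toward a "sun" at the origin and record its path.
# Units are arbitrary; GM = 1 and the starting values are illustrative only.
GM = 1.0
x, y = 1.0, 0.0       # start one unit from the sun
vx, vy = 0.0, 0.8     # below circular speed, so the orbit will not be a circle
dt = 0.0005

radii, speeds = [], []
for _ in range(40000):
    r = math.hypot(x, y)
    ax, ay = -GM * x / r**3, -GM * y / r**3   # inverse-square acceleration
    vx += ax * dt
    vy += ay * dt
    x += vx * dt
    y += vy * dt
    radii.append(r)
    speeds.append(math.hypot(vx, vy))

# The sun-distance oscillates between two fixed extremes (an ellipse's
# aphelion and perihelion), and the body moves fastest when it is closest.
```

Run it, and the recorded distances swing between two extremes while the recorded speeds peak at the closest approach: Kepler's varying-speed elliptical orbit, deduced from nothing but the hypothesized force.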

Newton's model was and remains one of the most outstanding intellectual achievements in history, perhaps the most outstanding of all. The English genius thought he had found the rules the God in which he believed uses to control the world, verily God's very own natural laws. And well he might. But where's the proof?

Though it is often not so identified, the alleged proof in fact is a backward in time hypothetico-deductive test. The test's empirically observable deduction, Kepler's orbits, already were known. Newton then invented (scientific truth defenders say he discovered) a hypothetical model from which these orbits could be deduced. Some might feel it would have been more compelling if he had not known Kepler's orbits in advance, thinking it would be stronger proof of the correctness of his explanation if he had predicted unknown and unsuspected orbits out of the blue. Psychologically, it would have, but that's due only to the way humans think. Logically, Newton's reversed order procedure is just as sure, just as certain as if the planets' orbits had been predicted _de novo_. But not more so. Newton's hypothetico-deductive proof of his law of gravity with its three axioms may be as much in error as the ancient Greeks' hypothetico-deductive proofs of a stationary earth. For as Aristotle explained (but we scientists are not taught) there is no possible way to prove Newton's model has anything more than a coincidental relationship with Kepler's orbits nor with any of the other myriad outcomes predictable from it.

As I noted earlier, this logical uncertainty is itself logically certain. But it's psychologically unthinkable. Newtonian physics is the epitome of science's alleged natural law. Arguably it is the best established, and certainly the most unquestioned and securely believed of all scientific knowledge. We can not conceive of how it could be anything but scientific truth. And that is the root of our problem. We can not conceive what we can not conceive, any more than we can see light which we cannot see ( _e.g._ , ultraviolet and infrared light). Effective though it unquestionably is, the truth of Newton's model is unquestionably logically uncertain. But we are psychologically incapable of and unwilling to accept this uncertainty. Therefore, illogic notwithstanding, we insist Newton's laws are truth.

{In fact, many, if not all, contemporary physicists and astronomers believe Newton's gravity force does not exist. These people subscribe to Einstein's General Relativity theory which says things only appear to be moving under a gravitational force. But in truth, this theory alleges, things inertially move on a linear path through curved space-time. As will be explained later, the validity of this theory is uncertain. However, since Einstein carefully constructed General Relativity so its predictions are precisely equivalent to Newton's when conditions are the same, the following discussion applies regardless of the validity of Einstein's theory.}

NOTE: Research teams at MIT and Caltech announced on February 11, 2016 the detection, on September 14, 2015, of gravitational waves. Therefore, the reservations expressed here about General Relativity must be dropped. I leave it to the reader to judge the relevance of this monumental discovery to the arguments made here.

**Truth Dogma's Damage: A Contemporary Example**

According to the conventional but, I fear, naive story told by many science philosophers and historians, scientists are data bound. Supposedly, scientists explain a phenomenon in a way consistent with all existent data. However, since someday new and different data might be found, every scientific explanation is held to be only tentative and provisional. And should new and inconsistent data be found, allegedly the old science explanation is abandoned and a new one sought.

This idealized view, it seems to me, could only be believed by someone who never did science. I so think because scientists usually do not consider scientific knowledge to be tentative. Rather, scientists treat existing theory as presumptively true. Thus, if and when incompatible new data are found, scientists do not abandon existing theory and invent another. Instead they first work like mad to find some way to reject the incompatible data. And though this not infrequently leads to acrimonious exchanges between those who found the data and those who won't accept them, this is the proper thing to do. This is because in many cases these attacks uncover some reason justifying the rejection. The new data don't stand up to this hostile reception. ( _Vide_ the recent cold fusion issue.) However, should the incompatible new data be invincible, something often requiring several experimental replications to establish, even then the standard procedure is not to reject the old theoretical explanation and seek another. The usual response is to try to find some way to interpret the data to make them into evidence supporting current theory, _i.e._ , the new data are shoehorned into the old explanation. This, of course, is exactly what the ancient Greeks did with their epicycles and other geometry gimmicks. They used them to force incompatible data into their theoretical preconceptions. And though the mistakes this led them to are well known, modern scientists continue to do the same thing. Newton's laws are an excellent example.

There is considerable data suggesting Newton's laws are not natural laws. These laws purport to precisely describe the movements of all celestial objects everywhere in the universe, but they may be only a local approximation at best. Astronomers in the Twentieth Century discovered a conspicuous error in the Newtonian model. The stars at the periphery of galaxies orbit much faster than Newton's laws say they should. And galaxies themselves orbit other galaxies with yet greater excess speed, and groups of galaxies orbit other groups even faster still.

One might therefore suppose there is a small imprecision in Newton's model, something too small to be detectible across the relatively minute distances in our solar system where the model was empirically established, but which shows up when it is extrapolated millions of times to the periphery of our Milky Way galaxy, then jillions of times further out into intergalactic space. If this is the case, the model should be tweaked a bit to bring it into agreement with the data. An Israeli scientist, Mordehai Milgrom, has invented just such an adjustment. When modified as he suggests, Newton's model accurately describes the orbiting of these high speed stars and galaxies. But although Milgrom is doing exactly what Kepler did, fitting a math description to empirical data, few scientists accept his model. They reject his more accurate description for the same reason I believe Galileo wouldn't accept Kepler's more accurate elliptical orbits: There is no known explanation for it. Milgrom's adjustment is nakedly empirical, and modern scientists, no less than Renaissance natural philosophers and ancient Greek intellectuals, won't accept empirical findings they can not make sense of, _i.e._ , reconcile with their preconceptions.
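
For readers who want the flavor of Milgrom's adjustment, here is a minimal sketch of its simplest form as I understand it. The galaxy mass below is an illustrative round number, a0 is Milgrom's empirical constant, and the sharp if/else switchover is my simplification of his smooth interpolation: below a tiny threshold acceleration a0, the effective acceleration is taken to be the geometric mean of the Newtonian value and a0, which makes predicted orbital speeds level off at a galaxy's periphery instead of falling away.

```python
import math

# Simplified sketch of Milgrom's adjustment: when the Newtonian acceleration
# a_N = GM/r^2 falls below the tiny constant a0, replace it with
# sqrt(a_N * a0). G and a0 are measured constants; M is an illustrative
# round number for a galaxy's visible mass.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M = 1e41               # visible mass of a galaxy, kg (illustrative)
a0 = 1.2e-10           # Milgrom's acceleration constant, m/s^2

def orbital_speed(r):
    """Circular-orbit speed at radius r under the adjusted acceleration."""
    a_newton = G * M / r**2
    a_eff = a_newton if a_newton > a0 else math.sqrt(a_newton * a0)
    return math.sqrt(a_eff * r)          # from v^2 / r = a

inner = orbital_speed(1e20)              # deep inside the galaxy (Newtonian regime)
outer = orbital_speed(1e22)              # far periphery (adjusted regime)
newton_outer = math.sqrt(G * M / 1e22)   # what unadjusted Newton predicts out there
```

With these numbers the unadjusted Newtonian speed at the periphery is several times lower than the adjusted one, which stays comparable to the inner speed: a flat rotation curve from a formula fit to the data, with no invisible matter invoked.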

So instead the majority of astronomers subscribe to a different belief. Ignoring the fact that Newton's laws don't really explain anything (Nobody knows what gravity is. We have no better an insight into why Newton's model works than we have of Milgrom's modification of it.), and also ignoring the fact that Newton's laws are based on data from only our relatively tiny solar system, and also apparently ignorant of the logical fallacy underlying the hypothetico-deductive method by which Newton's model is supposedly proved, these believers affirm Newton's laws aren't his at all. They are nature's laws, or God's. They are truth. Therefore they can neither be in error nor approximate, and something heretofore undetected must be causing the discrepancy between these infallible natural laws and the new inconsistent data. Stars traveling around the periphery of a galaxy as fast as these data suggest would fly away into space if there were not some additional force holding them back. Gravity, astronomers and physicists confidently insist, is the only possible holding force, and Newton's natural law says gravity's force comes from matter. (The contortion of space-time posited by General Relativity also is theorized to be caused by matter, so both theories need more matter in order to fit the data.) Ergo, these people maintain, there indubitably exists some undetected matter the gravitational force of which is holding back these fast stars.

But there isn't the tiniest shred of evidence for such matter. Nada! Zero! Zip! Even the most extreme, enthusiastically unrestrained speculative estimates of matter which might be hidden in galaxies fall far short of the amount which Newton's laws say must exist to hold these fast moving stars. Nevertheless, with an aplomb equivalent to a three-year-old's affirmation of Santa Claus, these astronomers assure everyone such matter unquestionably exists. However, it's dark. Which is to say: It has no other detectible characteristic except its gravitational force. And this invisible, and therefore impossible to study dark matter, these astronomer believers further assure us, is roughly nine times more abundant than all the ordinary matter in the universe.

I always marvel when I read or hear about dark matter, for its logical status is precisely equivalent to the geometry gimmicks the ancient Greeks used to explain away the discrepancy between their celestial mechanics preconceptions and the planets' data. In both cases, in order to make sense of data inconsistent with one's prior beliefs, theoreticians invent things (epicycles or dark matter) for which there is an absolute absence of any direct supporting evidence. The only evidence for these constructs are the phenomena they were invented to explain, or rather, to explain away. (The circularity of such reasoning could not have a smaller radius.) In view of this, one might suppose astronomers would make their claim cautiously and tentatively. On the contrary, however, dark matter believers rave and enthuse about their theoretical gimmick. Some have even called dark matter one of the greatest scientific discoveries of all time.

There isn't any nice way to say this, but it must be said. Nothing but blind, ignorant, anti-intellectual dogma justifies any assertion that dark matter must exist. Those astronomers who insist on the truth of this empirically unsupported, and empirically unsupportable, notion are actively damaging science. True, if Newton's laws are truth, then something like dark matter is possible, but only a possibility, and one beyond empirical scientific confirmation. However, Newton's laws can not be shown to be true. They may well be only serviceable rules of thumb which work only over relatively short astronomical distances, and even this may be for reasons entirely different from what anyone supposes. This possibility suggests Newton's laws might be improved if theoreticians would make a diligent effort thereto. However, to insist on the truth of dark matter is to stymie any improvement, because, since dark matter by definition is dark, there is no way it can be studied, so to affirm its existence is to say the science about this issue is at a dead end.

But this pessimistic conclusion is wholly dependent upon logically and empirically unsupportable dogma: the belief that Newton's laws are truth. If those who dogmatically defend the dark matter gimmick would abandon their semi-religious devotion to Newton's model and return to the practice of science, they very well might be able to invent a serviceable new one, a model which incorporates the discrepant data into a more consistent, comprehensive and, indeed, more comprehensible explanation, one as superior to Newton's as his is superior to Ptolemy's.

The data discrepancy of Newton's model is not a threat to truth which must be thwarted. It is an opportunity to improve scientific knowledge, an opportunity to invent a new model which, although it could no more be proven true than Newton's, will make more sense than it does. But as long as believers insist Newton has already discovered God's own truth, a faith no one can ever prove, they neither are going to look for a better model, nor are they going to be receptive to the efforts of scientists like Milgrom when they make creative steps toward resolving the failure of Newton's laws.

The dark matter dogma, like evolution theorists' dogmatic insistence on the truth of the Modern Synthesis, illustrates another way, the major way the fallacious dogma of scientific truth damages science. Those who presume to know truth do not seek to improve it. For any change to truth can only degrade it. But evidence and logic both show we can never know any scientific explanation is truth. We can never know any so-called natural law is anything more than a serviceable rule of thumb. And since this is so, there is always a possibility any scientific knowledge can be improved, and we must always remain open to this opportunity. Especially we must do so when we discover inconsistent data, clear evidence of a need for improvement.

**What Scientific Knowledge Really Is**

The goal of science is not to discover truth. Such a goal would be absurd because such discovery is impossible. What we think is true simply can never be shown to be true. Therefore, by necessary default, the goal of science can only be to try to make sense of whatever phenomena one is studying. The central role of sense making was illustrated above with Galileo's refusal to accept Kepler's elliptical orbits and modern astronomers' refusal to accept Milgrom's modification of Newton's laws.

An even more compelling illustration is scientists' long delayed acceptance of evolution, the evidence for which had been accumulating for decades, if not centuries, before the obvious conclusion was conceded. Neither Charles Darwin nor Alfred Wallace was the first to suggest the idea. Darwin's _On The Origin of Species by Means of Natural Selection_ was published in 1859, but Montesquieu suggested evolution in 1721, a century and a third earlier. And both Erasmus Darwin (Charles's grandfather) and Jean-Baptiste Lamarck also suggested evolution a half century earlier. But biologists could not imagine any mechanism which seemed sensible to them, so they declined to endorse the idea. Lamarck suggested a possibility: Repeated use modifies an organ to be more suitable for such use, which modification is passed on to offspring. But this didn't seem plausible to biologists, so they were not convinced. The mechanism of natural selection, however, made sense to them so they finally accepted evolution. In large part natural selection made sense because many persons already accepted the Calvinist doctrine that human life is a struggle which morally superior people are predestined to win, or a secular equivalent thereof. Thus, that evolution occurs because of the survival of the fittest was not only eminently reasonable, it was virtually self-evident.

{It is interesting to observe that adoption of the idea of evolution, so abhorrent to so many religious believers, was facilitated by a religious belief.}

The essential role of sense making in science is particularly illustrated by the fact that, in fact, the paleontological data suggest evolution occurs in discrete, quantum-like bursts, not in the gradual, continuous alteration process the natural selection idea holds to be the case. Indeed, Darwin went to great efforts to convince his readers that these data are inherently defective. In evaluating his theory, he thought, we should not rely on the data but instead on the intuitive sensibleness of the natural selection idea. In other words, one of the main things Darwin sought to accomplish (and did accomplish) with his _magnum opus_ was to show people how to shoehorn inconsistent data into an idea that makes sense. To echo what I said earlier, the notion that scientists humbly follow data to the truth is nonsense. Just as Darwin did, scientists always interpret and manipulate data to make evidence of it which supports an explanation they consider sensible.

The goal of science is, has ever been and can only be to fit data into an explanation which makes sense. But as noted above, something only makes sense if it conforms with the main body of one's prior convictions, whether or not those beliefs are valid. One can give up one or two beliefs in order to incorporate new evidence into the rest of one's body of convictions. And history shows scientists from time to time do so, usually to the improvement of science. But one cannot give up all of one's unproven assumptions. First, because we don't realize we have most of them. We unthinkingly accept them as self-evident. But even more importantly, without the great body of them one cannot think at all.

Reconciling new data with the bulk of one's prior convictions is often quite difficult. Just as the pieces of a jigsaw puzzle come with no instructions for their correct placement, data never come with instructions providing some sensible interpretation. Nonscientists often suppose scientists need only look at data to know what kind of evidence it is, but that is seldom the case. Usually, scientists have to invent (scientific truth believers say "discover") an interpretation to turn raw data into evidence fitting some sensible explanation. To do this, it is and always has been normative amongst scientists to interpret and reinterpret data to make a fit with our preconceptions, or to invent constructs like epicycles or dark matter in order to shoehorn discrepant data into our beliefs, or finally, if we can find no way to make sense of it, to ignore or discard damnably inconsistent data.

There is no scientific discipline in which this is not the case. I chose to illustrate this with the dark matter issue not because astronomers are the only or the worst offenders in this regard, but because the physical sciences are generally considered to provide unquestionable scientific truth. Not without reason. In these disciplines science is phenomenally more secure because the phenomena studied therein are phenomenally simpler. For example, in all the biological sciences a conclusion which holds for some individuals will not necessarily hold for all. However, to paraphrase Gertrude Stein, an electron is an electron is an electron. Insofar as science has ever been able to determine, each is identical to every other. And this vastly simplifies the task of studying them and learning about them. Accordingly, much of the data in the physical sciences can be measured with astonishing precision, and many of the models from these disciplines fit the data with an accuracy one can easily presume to indicate truth.

Therefore, if any scientific knowledge could be true, if any scientific conclusions could be natural laws, it would be those from the physical sciences. That is why the above example was chosen from one of these disciplines. It having been established that scientific knowledge in the physical sciences is not truth, it follows that in no discipline is scientific knowledge truth. Scientific knowledge, like all other kinds of knowing, is only a matter of belief.

Despite all the ballyhoo one gets from Nova TV shows, science is done by people, not semi-omniscient demigods. And scientist people think essentially the same way other people do. We make mistakes. We often bet on the wrong horse, invest in the wrong stock, and marry the wrong person. (Einstein so married twice.) What reason, therefore, is there to believe we never make the same kind of bad decisions when we're wearing our scientist hat? Scientists are not engaged in finding truth because we can no more recognize truth when we see it than nonscientists can. It doesn't come with a label. All we can do is try to make sense of an inherently ambiguous reality. And inevitably, since nothing makes sense unless it is congruent with one's beliefs, this process necessarily leads all scientific conclusions to be biased toward our subjective preconceptions, whose validity is untested and uncertain. When and if scientists think they can make sense of some part of reality they then try to convince others that their particular sense making notion is truth. It might be, but there is no way to ever be sure it is. In truth, therefore, what science knows is only what scientists believe. Nothing more. Science establishes no scientific truth.

All of this has been clearly and repeatedly shown by a group of scientists and scholars who designate their studies as the Sociology of Scientific Knowledge (often abbreviated as SSK). Using the same methods and procedures scientists use to investigate any other phenomenon, these sociologists have scientifically studied how scientists develop their conclusions. Scientific knowledge, they have found, is formed and determined in part (not infrequently in major part), not by objective data, but by scientists' subjective preconceptions. There is a profound irony here. Turning science upon itself, these scientists have shown the common assumption, _viz._ , science discovers objective truth, is not true.

This has been shown for many scientific disciplines, but as noted, most tellingly for the physical sciences. Indeed, even some mathematical conclusions, which supposedly are established with absolutely irrefutable logical rigor, have also been shown to be so affected. I was amazed and disappointed when I learned this. But I shouldn't have been. Some mathematicians, I knew, have ruefully decided there is no definitive standard of logical rigor. As they have noted, the criteria of rigorous proof vary subjectively from mathematician to mathematician. Ultimately, it's all a matter of belief.

**The Superiority of Scientific Knowledge**

In view of this ubiquitous subjectivity and the uncertainty it necessarily implies, some followers of SSK have decided all knowledge is equal and equally dubious. They say scientific knowledge is no more valid than any other belief. All knowledge is problematic, these people claim, only different ways different persons have of trying to make sense of reality.

When I first encountered this SSK contention I was appalled and angered. I still am. I considered, and still do consider it to be utter and complete balderdash. I acknowledge that scientific knowledge is not truth. Indeed, I insist it is not. It is only what scientists believe. Nevertheless, the superiority of scientific beliefs over all others is demonstrated anew every moment of every day. If any belief is as good as a scientifically established one, a witchdoctor's rattles and incantations would be as effective in curing bacterial infections as an appropriate antibiotic. I adamantly insist scientific knowledge is vastly superior to other kinds of knowledge, its dubious truth status notwithstanding, because with it we can do things we otherwise can not. The evidence for this is unequivocal and overwhelming.

Please notice: I am not offering a sophisticated, abstract philosophical reason for the superiority of scientific knowledge. The reason I offer is the same ordinary, pragmatic, utilitarian reason nonscientists use: Scientific knowledge is superior because it works. Even those who strongly reject certain scientific conclusions, nevertheless accept the general superiority of scientific knowledge for this reason. Consider Creationists. They absolutely reject the scientific fact of evolution. Nevertheless, when they are ill or injured, the great majority of them turn to scientific medicine (which is based, in large part, on the same fundamental principle as is evolution). Creationists are not alone in this. The overwhelming majority of people consider science to provide overwhelmingly superior knowledge, and they do so because with scientific knowledge things have been and can be accomplished.

But how could scientific knowledge enable these accomplishments unless it were true? After all, if a scientific explanation were not true then it would seem self-evidently to be the case that efforts to use the knowledge ( _i.e._ , mere belief) to do anything would necessarily fail. On the basis of thinking like this the general public concludes scientific knowledge must necessarily be true. And they are not alone. The behaviors, and often the claims, of the majority of scientists show they also so believe. But this belief is demonstrably false. Consider a few examples.

The pre-Columbian Aztecs believed the rising of the sun was controlled by a god they called Huitzilopochtli (which name, presumably, they could pronounce). They further believed this god has an appetite for human sacrifice and would not raise the sun every morning unless this appetite were satisfied. So the Aztec priests reverently sated their god by capturing men from other tribes, dragging them to the top of a pyramid and hacking out their beating hearts with a stone knife. And sure enough, it worked! As long as the Aztec priests did this the sun came up every morning. Ergo, using the same argument used to defend the scientific truth dogma, _i.e._ , "If something works it must be true", they concluded their sacrifices were both efficacious and proof of the existence and bloodthirsty appetite of Huitzilopochtli.

Since the sun has continued to rise even though the sacrifices have stopped, non-Aztecs may doubt this conclusion. Nevertheless, a devout Aztec priest might defend his belief with the kind of excuse scientists use all the time. In the face of negative evidence modern theorists often insist their theory is true, however the parameters of their model are such that this truth can not be demonstrated. For example, General Relativity theory requires the existence of gravity waves. If gravity waves do not exist, General Relativity is wrong. But none of several efforts to directly detect such waves has succeeded. Nevertheless, these failures do not disprove the theory, theoretical physicists and astronomers assure us, because gravity waves are too weak to be detectable by current technology. {NOTE: Research teams at MIT and CalTech on February 11, 2016 announced detection of gravity waves on September 14, 2015. Therefore, the reservations expressed here about General Relativity must be dropped. I leave it to the reader to judge the relevance of this monumental discovery to the arguments made here.} Using an analogous argument, a devout Aztec priest could defend his belief. Huitzilopochtli continues to raise the sun, this priest could claim, because our measurements are not sensitive enough to detect the exact sunrise/sacrifice ratio. Apparently the ratio is much greater than one. Therefore, our Aztec apologist could insist, the thousands of sacrifices performed by his ancient priest brothers built a sun rising surplus that will endure for some time. However, this priest might further suggest, the surplus is necessarily running down, and it might be wise to prophylactically resume cardioectomies on a few of our political opponents just to ensure that the planet doesn't go dark.

You may think I am being facetious with this example. I am. But only to the extent of trying to make it humorously obvious that things do not necessarily work for the reasons we believe. Because the ancient Aztec religion is so alien to modern beliefs, it is easy for us to conclude there can be no relationship between human sacrifices and the sun. Perhaps too easy, for though this seems self-evident to modern persons, it is impossible to prove it. And there's the rub. The special pleading of the above hypothetical Aztec priest is not essentially different from that of modern physicists and astronomers who resolutely advocate General Relativity despite their inability to directly measure gravity waves. {See NOTE above.} Ultimately, both parties' arguments are special pleading, and neither is any more provable or disprovable than the other's.

You may find an illustration from a literate culture more convincing. Here's a compelling one from medicine. The ancient Greeks believed the body's various fluids, or humors, had to be in balance for health. Sickness, they believed, occurs when these fluids are out of balance. Therefore, a sick person could be cured by bleeding, by cutting into one of the person's veins and removing some of his/her blood in order to restore a healthy fluid balance.

For over two thousand years, until into the Nineteenth Century, this practice was followed with confidence in its effectiveness. Modern medicine believes that, except for one rare condition, such bloodletting would be useless at best, but usually deleterious. For example, George Washington was adept at self-administering this therapy, and he applied it repeatedly in his terminal illness. His death may have been hastened or caused by his own bleeding. There must have been many other similar fatal outcomes from the use of this practice, yet belief in and use of it continued for millennia. The reason, it is reasonable to suppose, is the body's own self healing capabilities. People got a cold, were bled, and got better. Ergo, according to the common logical fallacy, _post hoc ergo propter hoc_ (after this, therefore because of this), bleeding was believed responsible for restoring health. And since bleeding worked (apparently), then the humor balance theory was proven (allegedly) to be true.

But examples from the supposedly certainly true physical sciences may be more convincing.

Ancient Greek science, as I've noted, was almost never technological. But it could have been. Consider Aristotle's theory that all things on earth have a proper place and, if unrestrained, will spontaneously, essentially volitionally move toward it. An ancient engineer might have reasoned that if fins were placed on a wheel and the wheel were placed in the path of something spontaneously moving to its proper place, something like flowing water, the movement could be captured just as a harness captures a horse's movement. Thus harnessed, the water could be made to turn the wheel and do useful work while moving toward its proper place. Quite likely no engineer ever engaged in such hypothetico-deductive reasoning. Though water wheels then existed, their invention seems to have been entirely empirical. But if any engineer ever had used Aristotle's physics to design a water wheel, it would have worked. And if this engineer thought the way modern people (including many scientists) do, he would have taken the effectiveness of his waterwheel as proof that, just as Aristotle said, things on earth do indeed spontaneously move to their proper places, something which every modern physicist thinks is nonsense.

If that suppositional historical example doesn't convince you, consider one from actual modern science. When I was a boy, whenever I rode in a car I would straighten my fingers and thumb into a wing-like configuration, roll down the window and put my hand out into the moving car's airstream. By angling my hand-wing up or down, the air moving against it would force my hand and arm up or down. I thought I had discovered how airplanes fly. Their wings, I supposed, are angled slightly upward so when the plane's engine moves the plane, the passing air strikes the bottom of the wing, forcing it upwards. When I got to high school, however, I was taught the accepted, and totally different, aeronautical theory. According to it, wings are designed so the path air must take going over the wing is longer than the path under it. Therefore, the air molecules in the airstream above the wing are forced relatively further apart than those under the wing, creating an area of lesser pressure on top of the wing, and the greater pressure under the wing pushes it up. My boyhood theory, I was taught, is common, naive and wrong.

A couple of years ago, however, I came across an article written by an aeronautical engineer. He wasn't some ivory tower theoretician, but rather a man who had participated in designing planes, planes that successfully flew. His experience had led him to question whether the accepted "area of lesser pressure" theory is the whole story. On the basis of his experience he was convinced a significant portion of a wing's lift does indeed come from the common, naive notion I believed in my boyhood (as apparently have many other uneducated persons). Well, who's right? I certainly don't know. However, airfoils certainly do work. Airplanes do fly. But which theory does that prove?

The incontestable fact is this: When we claim a particular explanation of a phenomenon is true because we can do something on the basis of it, we implicitly are claiming there is no possible other explanation which can lead to the same accomplishment. But this can never be known. We simply do not know what we do not know. Some practitioners of SSK claim there is no phenomenon which can be explained in one and only one way, and therefore we can never be sure any particular explanation of a phenomenon is the unique, complete and necessary ( _i.e._ , true) one. This claim seems probable to me, but theoreticians may quibble about it. However, there is no possible exception to the principle as here stated. We do not know what we do not know, so we can never know any particular explanation, no matter how effective, is the truth. As illustrated above, often people have attributed apparent accomplishments to explanations which are exceedingly implausible or which have subsequently been demonstrated to be false. Thus, it is demonstrably possible for things to be accomplished on the basis of explanations which may be partially, or even wholly wrong.

In point of fact, the "If an explanation works it must be true" belief is merely a variant of the Fallacy of Affirmation of the Consequent, and it is equally fallacious.
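For readers who like to see the logic laid out, the two argument forms can be contrasted explicitly. The symbols below are merely my shorthand (H for a hypothesis, P for a prediction deduced from it); the contrast, not the notation, is what matters:

```latex
% Valid deduction (modus ponens): if the hypothesis holds, the prediction follows.
(H \rightarrow P) \land H \;\;\therefore\;\; P

% Affirmation of the Consequent (invalid): observing the prediction does not
% establish the hypothesis, for some rival hypothesis H' might equally imply P.
(H \rightarrow P) \land P \;\;\therefore\;\; H \qquad \text{(fallacious)}
```

The "it works, therefore it is true" argument has exactly the second form: P is the successful accomplishment, and H is whatever explanation gets credited with it.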

**An Erroneous Criterion of Truth**

We are confronted with an enormous paradox. Scientific knowledge is conspicuously effective, yet it also is conspicuously based upon and contaminated with unproven and often demonstrably wrong subjective preconceptions. How, then, can this uncertain, prejudice biased scientific knowledge be so spectacularly successful? The answer is obvious, but I confess it took me some time and some soul searching to recognize it.

Scientists, according to the scientific truth dogma, know an almost magical truth-finding process called the scientific method. With it scientists supposedly discover scientific truth. Then, knowing these truths, modern wonders are engineered by deduction. This scientific method notion is a demonstrable delusion. There is no such thing. There is not even any definition of what scientific method might be. The nearest thing to such a definition is to equate the alleged scientific method with hypothetico-deductive testing, which, as Aristotle taught, but we modern scientists are only belatedly learning, is based on a fallacy. Both logic and evidence prove it incapable of establishing truth.

{When I was a graduate student there was bitter carping among some of my colleagues about why we were not being taught the great scientific method, the technology we had enrolled to learn, naively and sincerely believing it to be the very essence of what we, as prospective scientists, were going to spend our lives practicing. Now I know why we were not taught the scientific method. There is none to teach.}

The scientific method notion is a flagrant fairytale. This fact has been scientifically established. Investigations of how scientists have developed scientific knowledge show scientists do not follow any set procedure or method. Every scientist tries to make sense of reality in his or her own particular, prejudiced and, sometimes, peculiar way. Hunches, guesses, preconceptions, intuitions, mistakes and blind strokes of luck have all played major roles in the creation of scientific knowledge, but no standardized procedure is known or followed. This lack leads to an inescapable and rather amusing paradox: The practice of science is an art, not a science.

Marvelous things can be done with scientific knowledge not because it is truth discovered with a magical scientific method, but rather because scientific knowledge, though this is often unintentional, is created precisely to be able to accomplish those marvels. This occurs because the truth criterion Renaissance scientists adopted then bequeathed to modern science is erroneous. It's wrong. Unequivocally, totally wrong. It can not and does not identify truth. But it can and does identify what works.

Just as had the ancient Greek philosophers, Renaissance natural philosophers sought truth. However, they abandoned the ancients' pure reason truth criterion and substituted one they apparently unthinkingly picked up from the empirical _zeitgeist_ which Dark and Middle Ages technology had evolved: data fit. As noted above, ancient scientists, _e.g._, Ptolemy, used only the bare minimum of data. Nor did they concern themselves with their data's accuracy, apparently because they felt any data will always be fallible. And that is so. Data always are only more or less accurate. There is no such thing as perfectly accurate data, a problem most modern science addresses with statistics, a mathematics unknown to the ancients.

But, as with Brahe and Kepler, Renaissance natural philosophers were obsessed with data. They couldn't get enough. And they weren't willing to throw up their hands and accept inaccuracy as an unavoidable data defect. They sought to overcome this problem by giving the most compulsive care and attention to the gathering of their data. Truth, they believed, is found in the most precise description of the most abundant, most accurate and most relevant data. The closer the fit and the larger, more accurate and more relevant the body of data fitted, the more truthful the description.

{Excuse the interruption, but the introduction of the validity criterion which modern science believes establishes scientific truth raises a point of interest and importance. Many scientists think reality is fundamentally mathematical. Galileo expressed this when he suggested the universe was created with math. More recently physical scientists, in particular, have marveled at the remarkable suitability of mathematics for science. The implication of their wonderment clearly is that Galileo probably was on the right track.

{In fact, however, the suitability of mathematics for science is not remarkable. It is a necessary consequence of the data fit criterion. If an explanation, model or theory's alleged truth is revealed by the precision with which it fits data, then as humanity's only precise language, mathematics will necessarily be fruitful in science. Galileo's proposition may or may not be true, but the indisputable utility of math in a science which uses data fit as its validity criterion is not evidence thereof.

{The "reality is fundamentally mathematical" belief is certainly a tenable philosophical proposition, however, for there are other reasons to suggest it. Nevertheless, to take it as self-evident truth, as some theoreticians do, has science-damaging consequences. For example, advocates of string theory (which posits space-time to have ten dimensions, six of which are wound upon themselves so tightly they can not be measured) concede the virtual impossibility of ever getting any data relevant to their notion. Still, many of these enthusiasts insist mathematics alone will eventually scientifically prove their theory. Nonsense! Without data there neither is nor can there be science. No matter how brilliant, the most which evidence-devoid math can ever achieve for string theory, or for any other reality proposition, is to show it to be a mathematically consistent metaphysics.

{One cannot leave this topic, however, without noting that math is indeed marvelously useful and often indispensable in science, and the innumeracy or math ignorance of people in the US, even those with college educations, is a national dysfunction and a national disgrace.}

Newton, perhaps the greatest rationalist scientist ever, provides a revealing illustration of modern science's adoption of the empirical data fit truth criterion. After he had published his great gravity model, at a time when an ancient philosopher would have considered truth to have been found, Newton was still concerned with the fit of his model to data, in this case, data about the moon. He kept encouraging the first Astronomer Royal, John Flamsteed, to publish his data so Newton could compare it with his model. But the astronomer was also a disciple of the new criterion of truth, and he wanted to keep his data in order to improve its accuracy. So Newton commandeered it, and Flamsteed had to go to court to get it back, which he did. Clearly, both men were motivated by their acceptance of the new criterion: The most precise fit of a model to the most abundant, most accurate and most relevant data, they believed, determines truth.

Unfortunately, it doesn't. But most scientists don't realize this, so this criterion continues to be the criterion used by modern science, or at least the effective part of modern science, the part able to do things. The empirical, data fit criterion of truth is wrong. Dead wrong. The precise fit of a theoretical model to abundant, accurate, relevant empirical data does not and can not indicate the model's truth. As the several examples given above show, and as Aristotle could have explained, this claimed truth criterion is based on a logical fallacy. No matter how good one's data, and no matter how precisely one's model fits the data, it is impossible to know whether this fit is purely adventitious or whether the data might be even more accurately described by an entirely different unknown explanation. Until we are omniscient, to affirm any model as true, regardless of how well it fits the data, and regardless of how abundant and accurate the data may be, is to commit the logical fallacy of Affirmation of the Consequent.

People like Copernicus, Brahe, Kepler, Newton and Flamsteed made an enormous logical blunder. These great thinkers should have paid more attention to the ancient Greek philosopher. Aristotle's physics is wrong. But his logic is not. Nevertheless, while the data fit criterion was an enormous logical blunder, it was an enormously fortunate one. No matter how good it is, fit to empirical data can not and does not reveal a theory's truth. But while an explanation or model selected because it precisely fits abundant accurate relevant data is not necessarily true, it necessarily does fit the data. And since it does, its usefulness in manipulating the phenomena which produce such data is assured.

The plain truth is this: One doesn't need to know any natural law, any truth, to do anything. One only needs to know how to do it. And an accurate description of the phenomena associated with what one is trying to do will always facilitate the attempt, whether or not the description correctly explains the mechanism of the phenomenon, and whether or not the explanation holds outside the limited circumstances wherein it was developed. Two examples, one involving science and one not, can illustrate this.

Above I mentioned how military technology, in particular, continued evolving during the Dark and Middle Ages. Consider one military device, the trebuchet, a machine which in effect is a gigantic counterweight-actuated sling used to batter down enemy fortifications. There is no evidence this machine was known in earlier times nor in other cultures. It was invented in medieval Europe by medieval technicians who clearly, therefore, did not have any scientific knowledge, neither ancient Greek nor modern. From historical descriptions modern enthusiasts have built trebuchet replicas, and with them have shown the device capable of standing well outside a defender's arrow range and destroying his fortification. How could such an efficacious machine have been developed without any knowledge of natural law? Obviously it was done with the same method used by the builders and testers of the modern replicas, empirical trial and error. Neither scientific knowledge nor truth had anything to do with it.

At the opposite extreme, consider a marvelous modern achievement almost exclusively based on scientific knowledge, the transport of a variety of things, and sometimes people too, into space. A host of scientific knowledge from many disciplines is used to do this. The physics of space flight, however, is fully explained by Newton's laws. It is impossible to suppose these feats could be accomplished without them. But Newton's laws do not need to be true to so serve. His laws were empirically derived in this solar system to fit data from it. Therefore, whether his model is a truth from which one can logically deduce the necessary existence of dark matter, or is only an approximation which fails when extrapolated beyond our solar system, or is a completely erroneous explanation, a rule of thumb which only adventitiously fits the data from our solar system, it necessarily does fit our solar system. And that's all that's needed for it to be useful in moving things around therein. Newton's laws may be truth, or maybe not. But insofar as operations in our solar system are the issue, their truth is irrelevant.

This is the real achievement of scientific knowledge. It has nothing to do with truth, and everything to do with figuring out how to do what you want to do. For even if a scientist is not trying in any way to do anything, even if a pure scientist is only trying to make sense of some phenomenon, as long as the scientist uses an empirical criterion of success, whatever sense he or she succeeds in making of the phenomenon will necessarily be useful for any other person who tries to do something on its basis, whether or not said sense is isomorphic with truth. This fact is so important and so different from our common prejudice, our scientific truth dogma, our scientific method brainwashing, it should be reiterated in stark terms and bold type.

**Science produces the wonderful and wondrous results it does because it is single-mindedly, indeed simplemindedly empirical, because scientists have diligently worked to develop scientific knowledge which accurately and usefully, even though uncertainly, describes the phenomena which must be controlled in order to produce science's wondrous works. But whether this knowledge is true,** _i.e._ **, whether such description is natural law which works for exactly the reasons scientists suppose, or which works outside those locales where humans can employ and test it, can never be known.**

At the beginning of this section I admitted how it took me some time and considerable soul searching to figure out how science can be so spectacularly successful notwithstanding the impossibility of ever knowing any scientific knowledge to be true. I made this confession not as an act of humility, but to illustrate how massively we all have been led astray by the fallacy of scientific truth. Had I not been brainwashed by this specious doctrine, the answer should have been obvious. Consider: Scientific knowledge is built in major part on hypothetico-deductive reasoning, and the fallibility of this methodology is the major reason why we know scientific knowledge can not be known to be true. Specifically, as illustrated above with the bad air malaria hypothesis, correct predictions can be and often have been made from completely wrong hypotheses ( _i.e._ , theories, explanations or models). But when one is attempting to do anything, it is the accuracy of prediction, _and only the accuracy of prediction_ , which determines success. The truth of the knowledge, beliefs or assumptions used to obtain the prediction is totally and profoundly irrelevant. {An amusing and wholly accurate colloquial summarization would say: Scientists know what they're doing, but they don't necessarily know what they're talking about.} Thus when we break free from the fallacious and pernicious doctrine of scientific truth, we readily see that precisely the same factor which prevents us from knowing any empirical scientific knowledge to be true is the factor which makes empirical scientific technology effective.

Beyond doubt, science's alleged natural laws are contaminated with the prejudices and preconceptions of the scientists who developed them. But as long as science's explanations accurately fit the phenomena they describe, they will work to manipulate those phenomena. Therefore, uncertain though it inescapably is, scientific knowledge can be used to produce all the wonderful and wondrous results which lead people to believe in science. Insofar as science's effectiveness is concerned, it is completely immaterial whether scientific knowledge works for the reasons we suppose or for entirely different, unknown reasons. And it is equally entirely irrelevant whether it would also work, as the concept of natural law requires, in any time and place no matter how remote from human experience. In short, truth is irrelevant to the effectiveness of scientific knowledge. So if you believe in science because it works, and virtually everyone's belief in science ultimately rests upon this foundation, then you need not and should not assume scientific knowledge is true.

Science is able to accomplish things not because it discovers truth, but because it discovers how to accomplish things. That's the whole and the only scientific truth.

Reason _vs._ Experience

There is a divide amongst epistemologists. A similar one exists amongst scientists. Empiricists hold all knowledge to be based on experience. Some others, like the ancient Greek philosophers, claim reason alone suffices, not simply to establish knowledge, but even to discover truth. From the above you will know I am an empiricist. Furthermore, as a scientist, I consider the reason _vs._ experience ( _i.e._ , data) disagreement to be based on a specious distinction and therefore to be meaningless. Though I realize some philosophers and theoretical scientists may disagree, reason itself, I maintain, is experience derived. Consider.

Reasoning is a biological capability, not essentially different from other animate abilities, such as digesting food and moving. Scientific facts confirm this. Reason can be damaged or destroyed by disease and injury. The current and growing epidemic of senile dementias unequivocally proves this tragic scientific fact. And despite the arrogant anthropocentric claims of the ancient Greeks, abundant modern research shows reason exists in nonhuman animals. Indeed, it exists in surprisingly large amounts in some. Research further shows the extent of different species' reasoning ability correlates with brain size (specifically with the number of neurons in the brain), a scientific fact strongly suggesting that reason is a trait selected and molded by evolution. For this to be the case we need not subscribe to all the provisions of the Modern Synthesis. It is sufficient for natural selection to play any role in evolution, even only a secondary one, because the survival utility of reason is self-evident.

When humans apply reason to any problem, we are not, in fact, using a semi-divine faculty built into our souls, Plato's innate human ability to comprehend truth. We are merely using the biological tool which, over myriads of generations of humanity's human and pre-human progenitors, has been effective in combining sensory information to form inferences and deductions about one's environment, conclusions useful for one's survival.

Reason therefore is to be expected to be useful in science, in expanding and improving understanding. That, after all, is precisely what it evolved to do. There is no reason, however, to suppose the reasoning ability which evolved because it could solve the limited class of survival problems encountered on the surface of our obscure tiny planet necessarily applies at all times and places in the immense universe. It might, and it is obviously helpful to assume it does so long as reason's conclusions can be confirmed by empirical evidence. But to extrapolate reasoning beyond empirical confirmation is to enter an arena of inescapable uncertainty. If you can't confirm your reasoning with direct data, you can't trust it.

This, of course, follows by logical necessity from the fundamental fact that scientific knowledge is not truth. Because it is not truth, we can never be sure any reasoned extrapolation from scientific knowledge will work. Remember, no reasoned, logical conclusion can ever be more true than the axioms ( _i.e._ , assumptions) it is based upon. So when scientific knowledge is used as reason's foundation, its uncertainty inevitably is visited upon the reasoned conclusion. And this is true no matter how logically exact or brilliant the reasoning may be. The history of science provides numerous instances where such reasoning has failed, cases where well reasoned conclusions based on well accepted scientific knowledge have turned out to be wrong. Consider a couple of illustrative examples.

Compared to male humans, females have a lower incidence of cardiovascular diseases until menopause, after which their incidence becomes more like males'. At menopause, a woman's production of estrogen declines, making her hormone status more like a man's. Therefore, it was perfectly reasonable to conclude, estrogen is heart protective, and administration of sufficient estrogen to recover a woman's pre-menopause levels would maintain her younger cardiovascular health. This eminently reasonable therapy was eagerly practiced for many years before researchers gathered evidence to evaluate it. Unfortunately, when the data were in, they didn't agree with this unequivocally logically correct deduction. Post-menopausal women given estrogen did not have better cardiovascular health than women given a placebo. In fact, though the difference was quite small and statistically insignificant, if anything, the women given placebo had the better cardiovascular outcome.

Similarly, the highly acidic conditions in the stomach and duodenum, reason insisted, would destroy any pathogenic organism therein. This conclusion seemed inescapable since abundant evidence provided secure scientific knowledge that such acidic conditions will kill such organisms. Thus, it was faultlessly logically deduced, duodenal ulcers could not be due to infection. This solidly reasonable conclusion, however, was destroyed in what can well serve as a paradigm case of the way something we don't know invalidates what we think we know. In the 1980s the bacterium _Helicobacter pylori_ was found to be able to survive in the acidic stomach and, by the very survival mechanism it employs, to cause peptic ulcers, ulcers which can be cured with appropriate antibiotics.

When a reasoned conclusion is refuted, as in cases like these, it is common to ascribe the failure to faulty reasoning. But this is usually not so. It was conspicuously not so in these two illustrations. Their reasoning was not faulty. These failures occurred because in each case good, sound, logical reasoning was based on well established, eminently plausible scientific knowledge which turned out to be untrustworthy.

In science, this is usually the case. All reasoning must start somewhere, and if one's foundational assumptions happen, for unknown and possibly even unreasonable reasons, to be wrong, the most brilliant and exquisitely perfect reasoning based on them can not reach valid conclusions. And the fact that we can never know any scientific knowledge to be truth means one can never know _a priori_ that any reasoned extension of any scientific knowledge will be reliable, no matter how well established the knowledge may be, and no matter how logical the extrapolation. Therefore, one must always empirically substantiate one's science knowledge based reasoning.

Nevertheless, theoreticians insist reason is infallible. They succumb to the seduction of deduction. Einstein, for example, once said he agreed in a certain sense with the ancient Greek philosophers' belief that reason alone suffices to discover truth. If this were so, then how could Einstein be so sure, as he most definitely was, that ancient Greek science was mostly wrong? The truth is: Reason can not be trusted. Not because it is defective, but because it can not stand on its own feet. It must have an independent foundation. In science that foundation is empirical scientific knowledge. But because such knowledge is not truth, that foundation is always insecure.

Deductive reasoning in science is equivalent to get-rich-quick schemes in business, a seemingly easy road to success. And to be sure, sometimes it works. Such successes show the method has power. Therefore, empirical confirmation may seem a mere formality, something justifiably omitted when convenient, expedient or necessary. But such omission is the easy road to error.

This empirical-confirmation-is-unnecessary blunder is particularly likely for scientists working in those disciplines whose models are stated mathematically, for when these scientists are making deductions from their models they may easily suppose they are doing math instead of science. But the difference is crucial. In mathematics one's starting assumptions may be taken as given. They need only be clearly specified. It would be absurd to question them, for the mathematical goal is precisely to determine what conclusions logically follow from them regardless of their truth. But the starting assumptions in science are scientific knowledge which, as we have seen, can never be taken as given because it can never be known to be true.

Another seductive feature of math is the surety of math's rigorously proven conclusions. If math Theorem B can be shown to follow from an already proven Theorem A, then B's mathematical truth is assured. But such assurance only holds because mathematics is a precisely and fully defined closed system. Anything outside it is mathematically irrelevant and incapable therefore of invalidating the deductive truth of Theorem B. But nothing is outside the realm of science. Science necessarily concerns the whole of reality, known and unknown. And in this big bad real world no holds are barred. So a science conclusion, B, which logically follows from science conclusion A, may nevertheless not follow in fact because the empirical relationship is disrupted by unknown reality factors, X, Y or Z.

This most definitely is not to say that scientific knowledge should not be rationally extrapolated. On the contrary, it always should be. This essay has considered one of the most conspicuous and unequivocal examples of how a reason built model based upon solid empirical scientific knowledge can vastly improve science. Kepler's model of planetary movements is wonderfully accurate but nakedly empirical. It tells us little about anything except the particular phenomenon it was developed to describe. However, the model Newton built with reason on Kepler's empirical description is magnificent. It can be extrapolated accurately and usefully to solve an abundance of problems for which Kepler's planetary movement model offers little or no guidance. However, we know this only because we have empirically confirmed these extrapolations. {Thus, it is interesting to note: Even the use of reason in science is itself empirically based.} But when confirmation of a reasoned extrapolation from scientific knowledge can not be had, as with the dark matter deduction, then honesty requires such conclusions always to be forthrightly acknowledged as unconfirmed (and in the dark matter case, not confirmable) speculation and nothing more.

The take home message from all of this is that both reason and data are necessary in science. Experience ( _i.e._ , data) is our only contact with reality. But it is severely limited. Reason is necessary to place data in a larger, more meaningful, more useful context. But reason alone is untrustworthy. Therefore, its conclusions must always be reconnected to reality with new data. This will not, indeed, it can not establish truth. But empirical confirmation will at least show a reasoned conclusion to be reality relevant and not merely some theoretician's pipe dream. Traditionally, the experience _vs._ reason debate concerns which of the two enables the discovery of scientific truth. In truth, neither does. Neither experience nor reason, severally nor in conjunction, can determine truth. There always is the possibility that something we do not know invalidates our understanding. Still, uncertain scientific knowledge is conspicuously effective and useful, and it can only be created with both data and reason.

Quascience

Much of what is represented to the nonscientist public as science, indeed, often as the best and greatest science, does not qualify by the standards explained in the above section. Modern theoreticians routinely deductively extrapolate scientific knowledge beyond any possibility of direct empirical confirmation. Perhaps they are unaware of, and clearly they are unconcerned about, the inescapably insecure foundation scientific knowledge provides for such expeditions into the unknown. We have already considered one egregious example of this, the beyond-evidence dark matter notion. But regrettably, this is only one example of an epistemologically dubious practice which has become common. Because it is common, we need an identifying name, a moniker with which to distinguish such elaborate but untrustworthy guesswork from believable, empirically confirmed science, the kind of science which, while it can never establish truth, nevertheless is demonstrably effective.

Pseudo science comes immediately to mind, but this phrase is not wholly appropriate. It implies dishonesty and fakery. It also implies the necessary falsity of extrapolated theories. Neither implication is accurate. Scientists who do this questionable kind of science are misguided and misinformed, but I doubt very much that they are dishonest. Rather, I suspect, they have allowed their enthusiasm to lead them beyond what logic and evidence support. Nor are their conclusions necessarily false. Rather, their deductions are naked conjectures, science based, but beyond scientific confirmation or rejection. Therefore, honesty must consider these theories to be mere learned guesses.

Much of what these enthusiasts do superficially looks like science. They gather and evaluate data. However, these data are not direct, and they become relevant to the theorists' beyond-human-experience deductions only through interpretations, inferences, and sometimes (as with dark matter) through the most egregiously conspicuous circular reasoning. A more accurate name for these theoretical excursions into uncertainty is quasi science, meaning science-like but not science _per se_. This name we can shorten to the more convenient form, quascience.

Because all of us have been subjected repeatedly to brainwashing with the dogma of scientific truth, it is well to interrupt here to reemphasize a fundamental point. Whether science or quascience, truth is not the issue. Neither practice can identify truth. But science has a well earned and fully deserved believability, while quascience does not. People believe in science and accept its conclusions because scientific technology can accomplish things, whereas quascience can do nothing. It merely provides explanations which are pleasing to its believers. As such, the most severe criticism of the most extreme Sociology of Scientific Knowledge devotee unquestionably applies. Quascience theories are indeed nothing more than different persons' different imaginary ways of explaining reality. Despite their science-like appearance, no quascience theory is reliable, useful, nor any more believable than similar conceits not based on science.

Probably because they are well aware of this deficiency, quascience practicing theoreticians have subjected the public to endless promotions of their dubious notions. No used car salesman ever proffered his shoddy goods more eagerly. In every venue they can access quascientists tout their theories. Although I have no direct evidence to support this suspicion, I'd guess one motive for all this hype arises from quascientists' own uncertainty. One consequence of doubts about one's beliefs is eager proselytization. The more other people one can convince of the truth of something one wishes to believe, the more does one feel justified in believing it. So my suspicion is a reasonable speculation, and it's fair to use speculation to evaluate theorists who themselves are only speculators.

My suspicion about quascientists' own doubts does have some indirect supporting evidence. A few quascientists, apparently realizing the impossibility of ever obtaining relevant direct evidence, maintain their theories are nevertheless scientific truth because they satisfy validity criteria much stronger than evidence. Beauty, elegance and simplicity are three I have seen mentioned. Such claims are blatant baloney. One of the theories often defended in this way is General Relativity. Anyone who thinks it is simple simply doesn't know the meaning of simple. Beauty and elegance are much worse, for they are infamously and inescapably subjective. No one has _a priori_ knowledge that reality is beautiful, elegant or simple, nor even _a priori_ knowledge of what constitutes these attributes. Therefore, to claim them as scientific validity criteria is precisely equivalent to saying "Things are true if I say they are."

The most cynical explanation for quascientists' vigorous theory selling charges them with trying to hijack the cachet and imprimatur of science for financial reasons. It isn't that they hope to grow wealthy from their theories. They won't starve, but they won't get rich. Rather, it is that quascientists are riding exceedingly expensive hobbyhorses in search of goals having no conceivable human utility. Many millions of dollars are required to continue their ride. They get it by bamboozling the science admiring but scientifically uninformed and therefore gullible politicians and public into accepting quascience as science and then footing the bill.

{The expenditure of such vast sums on humanly inconsequential theoretical trivia might be criticized as an evil diversion of funds from socially important purposes. However, that is not my point. In the first place, I'm speaking as a scientist, and such criticism concerns morality, not science. Second, even if these funds were not dissipated on useless quascience, there is no guarantee they'd be used instead for morally important purposes. Thirdly, the few millions spent on quascience are insignificant compared to the many billions the public eagerly squanders on frivolities such as pop entertainment and spectator sports. Moreover, the real cost of quascience isn't monetary, it is the wasting of society's severely limited intellectual resources on issues which can never be more than the semi-religious beliefs of a self-anointed intellectual gentry. For example: As this is being written the world's automobile industries are experiencing the greatest number of recalled vehicles in their history. Automobile engineers don't seem to be clever enough to design ignition switches which don't unintentionally turn off nor airbags which reliably and/or safely deploy. Yet at this same time some of the most brilliant people on the planet have been squandering their intelligence, and also a not inconsiderable amount of electrical power, concocting specious evidence for the existence of such things as a Higgs boson which, even if by some unknowable coincidence it does exist, is of no interest to anyone save a few hundred high energy physicists.}

The first thing to note about quascience theories is their exclusive reliance on hypothetico-deductive methodology. The very essence of such theories is the extrapolation of scientific knowledge beyond human experience. Therefore, direct evidence is impossible to obtain and the hypothetico-deductive method is the only way to gain any kind of empirical foothold. In science this method may be useful despite its inherent ambiguity because hypothetico-deductive reasoning can lead to direct empirical evidence. Also, its conclusions can lead to doable outcomes, the kind of accomplishments which make scientific knowledge believable even though not certain. For example, as we have already considered, Newton's model may not be the true explanation of Kepler's planetary orbits, nevertheless it is believable scientific knowledge because abundantly many things can be done with it. But quascience theories pertain to beyond-human-experience phenomena about which and with which nothing can be done and for which no direct evidence can be had. Therefore, all the logically certain uncertainty of this research method is inherent and inescapable in quascience.

This beyond-human-experience nature of quascience theories prevents anyone from ever obtaining unquestionably relevant direct data concerning them. Quascience isn't the kind of research where an Alexander Fleming returns to his bacteriology lab after a vacation and notices how a penicillium mold had suppressed the growth of a pathogenic bacterium he had been studying, a result of conspicuous relevance and significance. Instead, such evidence as quascientists obtain only becomes meaningful as a result of theoretical definitions and judgments. The data must be suitably (from the view of the quascience theory at issue) defined and interpreted in order to become pertinent. Quascientists insist these rationalizations are reliable because they are made by knowledgeable experts. And indeed they are the opinions of experts. But opinions are only opinions, no matter whose they are. And we are wise to keep in mind how infamously insecure are inferences and deductions, even when made by experts. After all, it was such thinking which led Lt. Col. (brevet General) George A. Custer, a highly experienced and highly successful Civil and Indian Wars cavalry commander, to believe there were only a few Sioux camped along the Little Bighorn.

**Big Bang Cosmogony**

It is well to illustrate the practice and problems of quascience with an example. The Big Bang cosmogony ( _i.e._ , birth of the cosmos) theory is suitable. This notion, you will recall, claims the whole universe was once an infinitesimal speck, almost a mathematical point. In some totally incomprehensible way which, as far as science knows, is impossible, all the energy in the universe ( _i.e._ , absolutely everything that exists) allegedly was confined in this speck. How it might have gotten there the theory does not say. However, supposedly about fourteen billion years ago the speck began to expand. This expansion was not into surrounding space, for there was no surrounding space. Space itself (actually space-time) was what is said to have expanded. Many of us find it impossible to imagine this, but that's our deficiency and not a valid criticism of the theory. Supposedly this expansion is continuing. Indeed more recent versions of the theory say the expansion is accelerating and therefore will continue forever. (I've glossed over many particulars, but this is the theory in a nutshell.)

Quascientists present this theory as, and it has been widely accepted as, not simply scientific knowledge, but established scientific truth. Well, there is no such thing as scientific truth, but Big Bang isn't even scientific knowledge. Let me point out a few of its difficulties. As with all these beyond-human-experience grandiose theories, there are many. But the following few suffice to make my point, which point is not to claim Big Bang is wrong, though I confess I'm not a believer. There is no way to know if the theory is wrong. That's precisely the problem. My discussion seeks to illustrate the way quascientists kick this problem under the rug in their efforts to convince everyone, themselves included, of the scientific status, indeed, the scientific truth of their semi-religious belief.

As noted above, scientists frequently have the problem of shoehorning data in order to make it fit into an accepted explanation. And right off the bat Big Bang theorists had this problem because the claimed expansion seems to violate the theory's own foundation. Big Bang is built on Einstein's General Relativity theory which, in turn (supposedly), is a generalization of his Special Relativity. The latter theory, physical scientists agree, is a hypothetico-deductive proof that nothing moves faster than the speed of light, though this conclusion does not rest on theory alone. As Einstein himself once noted, that the speed of light is absolute is something which only evidence could have discovered, and extensive research has never found anything faster. (There are theoretical particles which supposedly always move faster, but there's no evidence they exist.)

Big Bang says the universe expansion was faster than light. I'm not referring only to the so-called initial inflation, where the universe supposedly effectively instantaneously grew from its initial speck to be about as big as a grapefruit. (Intuitively this doesn't seem phenomenally rapid. But because the initial speck is supposed to have been so infinitesimally tiny, such inflation would exceed light speed many times over.) Even after inflation the expansion supposedly was faster than light. Nevertheless, with the pretended certainty of a loving but lying grandparent assuring a grandchild that fat Santa can easily get down skinny chimneys, Big Bangers say this violation of a supposed absolute physical limit is irrelevant because what was supposedly expanding was space-time itself. But what difference does that make? If the expansion were faster than light then the distance between at least some of the things in the expanding space-time must have been increasing faster than the speed of light. If this distance were so increasing then the relative speed of those objects was faster than light, and that, as far as the evidence shows and as far as physics knows or believes, is impossible. There's a fundamental inconsistency here, but Big Bangers kick it under the rug with the equivalent of a pettifogging lawyer's hairsplitting word definition.
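For readers who want the arithmetic behind this separation-speed claim, it follows from the standard Hubble relation. (The figures below are commonly quoted textbook values, illustrative only; they do not come from this essay.) Recession velocity grows in proportion to distance, so beyond a certain distance it formally exceeds the speed of light:

```latex
% Hubble's law: recession velocity v is proportional to distance d
v = H_0\, d
% Taking the commonly quoted H_0 \approx 70~\mathrm{km\,s^{-1}\,Mpc^{-1}},
% v reaches the speed of light c at the so-called Hubble distance:
d_H = \frac{c}{H_0}
    \approx \frac{3\times 10^{5}\ \mathrm{km/s}}{70\ \mathrm{km\,s^{-1}\,Mpc^{-1}}}
    \approx 4.3\times 10^{3}\ \mathrm{Mpc}
    \approx 1.4\times 10^{10}\ \text{light-years}
```

Any two objects separated by more than this distance are, on the standard account, receding from each other faster than light, which is exactly the situation the paragraph above objects to; the orthodox reply is that neither object locally outruns a light beam, the very "expanding space-time" distinction the author dismisses as hairsplitting.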

General Relativity presents other problems to Big Bang. One is that the foundational theory is itself not established. As noted above, the gravity waves it specifies have never been directly measured. Most assuredly this may be because, as the theory's defenders insist, all attempts have been insufficiently sensitive to detect such weak waves. But there is a troubling precedent. In the Nineteenth Century physicists tried to measure how the speed of an object's movement would be added to the speed of light shined from the object. These efforts failed. However physicists refused to doubt the additive speed effect, so the failures were dismissed as due to a lack of instrument sensitivity. Finally an exquisitely sensitive measurement also failed to find the additive speed effect, and physicists reluctantly were forced to decide the speed of light is absolute. Currently researchers are trying to measure gravity waves with an instrument of extreme sensitivity. If these efforts also fail, physicists, it would seem, will once again have to abandon a theoretical dogma. If that happens then Big Bang won't have a theory to stand on. Doesn't honesty in advertising require theoreticians to point out the insecure, tentative nature of their theoretical foundation when they are selling their notion to the nonscientist public? {NOTE: Research teams at MIT and CalTech on February 11, 2016 announced detection of gravity waves on September 14, 2015. Therefore, the reservations expressed here about General Relativity must be dropped. I leave it to the reader to judge the relevance of this monumental discovery to the arguments made here.}

In this essay's first section I noted how, to their shame, some scientists sometimes fake data. General Relativity may illustrate this. Early in its history two empirical facts seemed to provide hypothetico-deductive confirmation of the theory. It seemed to explain a slight anomaly in the orbit of Mercury, the innermost planet. Also an expedition to observe a rare solar eclipse reported the path of light from a star hidden behind the sun was bent by the sun's gravity by twice as much as Newton's gravity theory posited, precisely what General Relativity predicted. Subsequently doubt has been raised about both claims. A much more detailed consideration shows uncertainties about the shape of the sun make the first finding ambiguous. This illustrates how exceedingly subtle and difficult scientific research can be, and in no way casts aspersions on the competency or integrity of those who made the initial favorable conclusion. The light bending finding, however, is now generally dismissed for precisely such reasons. Subsequent study of the expedition's data suggests the initial gravitational light bending conclusion was either inexcusably sloppy or deliberately overstated in support of Einstein's theory. (Lest anyone jump to an invidious but totally wrong conclusion, I must note that the great theoretician himself had nothing whatsoever to do with this dubious expedition.) Whether or not there was fakery, uncertainty about these two theoretical deductions must increase a reasonable person's doubts about General Relativity theory and, therefore, Big Bang.

Another General Relativity problem is time. Big Bangers say the universe is slightly less than fourteen billion years old. When they do, they are claiming every single bit of the universe has the same age. But again Special Relativity presents problems, for one of the things it supposedly proves is that an object's passage of time depends upon its speed of movement. This phenomenon is called time dilation, and there are several kinds of solid evidence of it, so we need not rely on Special Relativity theory to know time dilation exists. It has not only been directly measured, it has been experimentally demonstrated. (Which demonstration, incidentally, showed that unlike Einstein's initial theoretical claim, time dilation occurs to only one of two relatively moving objects.) Thus, objects in the universe which move at different speeds must age differently, and astronomers tell us that pretty much every astronomical object is moving relative to every other one. From whence, then, comes this idea of everything having the same age? From General Relativity. Einstein, without explanation, apology or blush (as one physicist critic I once read said) resurrected in General Relativity the very absolute time idea which his Special Relativity supposedly disproved. Big Bangers claim to have data showing galaxies about a dozen billion light-years distant are receding from us at an appreciable fraction of the speed of light. If so then the scientific fact of time dilation says their ages must be significantly different from ours. One of us may be almost fourteen billion years old, but the other must be only a fraction of that age. But the theory's quascience advocates also kick this problem under the rug.
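The time dilation invoked here is quantified by the standard Lorentz factor of Special Relativity. (This is a textbook formula; the half-light-speed figure below is an illustrative value, not one taken from the essay.)

```latex
% A clock moving at speed v runs slow by the Lorentz factor
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
% For an object receding at half the speed of light:
v = 0.5\,c
   \quad\Longrightarrow\quad
\gamma = \frac{1}{\sqrt{1 - 0.25}} \approx 1.15
% i.e., its clocks would run roughly 15 percent slow.
```

Applied naively to galaxies receding at appreciable fractions of the speed of light, this factor is far from negligible over billions of years, which is the quantitative basis of the age discrepancy the paragraph describes.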

Another problem with the supposed age of the cosmos is that some stars are almost as old. Thus, the oldest star yet found (and among the billions of billions of stars which haven't been carefully studied even older ones may well exist) is estimated to be only about a hundred million years younger than the cosmos itself, an astronomically tiny length of time. To believe in Big Bang one must therefore suppose that in an incredibly short period, astronomically speaking, the cosmos went from a fantastically, almost infinitely dense glob to the present highly differentiated state in which the overwhelmingly dominant characteristic is vast emptiness. But immediately after this miraculous change, the miracle stopped, leaving the distribution of matter in the cosmos essentially as it now is.

This problem is exacerbated because every star ever studied contains heavy atomic elements. (Although many astronomical measurements are highly inferential, even conjectural, the atomic elements in a star can be directly detected by spectral analysis of its light.) These heavy atoms are a problem for Big Bang theory. Here's why.

Because of its virtually infinite compressional heat, atoms could not exist in the theoretical initial speck. But in the early phases of the expansion the universe, though still astronomically hot, cooled enough for the formless energy to be fused into atoms. One of the claimed great successes of Big Bang is that the theory's predictions of the relative proportions of the lightest elements supposedly formed in this way agree with estimates of their universal abundance (roughly eighty percent hydrogen, twenty percent helium and a trace of lithium). However, the hypothetical expansion cooled too rapidly for any but these three lightest atoms to form. So how did the heavier atoms, elements present in all known stars, originate? Big Bang explains this with a special _ad hoc_ hypothesis. It says the three lightest atomic elements which were formed in the early expansion immediately segregated into stars vastly more massive than any ever seen by any astronomer. These super stars cooked up heavy atoms for only a tiny time (by astronomical standards) then blew up in super-supernovae. Then, allegedly, the debris of these stars immediately re-accumulated into heavy-element-containing stars like all those astronomers now see. All of this supposedly occurred in approximately the lifetime of the shortest-lived known stars. Unfortunately, no one has ever found any such hypothetical first generation super star. But the theory explains away this total absence by claiming such short life and sudden death befell every single one of the first generation of super stars. Logically, this makeshift excusatory scenario is precisely equivalent to Ptolemy's three geometry gimmicks: an elaborate, logically circular mythical excuse, itself totally devoid of supporting data, for inescapable data which contradict a favored theory. As Yogi Berra famously said, "It's like _déjà vu_ all over again."

The first most people heard of Big Bang theory was when the 1978 Nobel Prize in physics was awarded to Arno Penzias and Robert Wilson for discovery of the theoretical initial event's theoretical residual heat, the so-called cosmic background radiation (CBR). The theory-defined and theory-dependent nature of quascience evidence is illustrated by this award, for when they published their finding, Penzias and Wilson, although fully competent and knowledgeable scientists, had no idea what the very weak signal they could not remove from their giant antenna might mean. Quascientist Big Bang theorists had to supply a hypothetical explanation.

{Although I've never seen any Big Banger explicitly so claim, implicitly they use this Nobel award as certification of their theory. Many scientists have complained how, as in this case, the Nobel Prize is often misleadingly seen as a guarantee of scientific conclusions, conclusions which properly are always only tentative. Because of this, one eminent physicist said he thought of rejecting the prize when it was awarded to him. He didn't. It is not only Creationist farmers using genetically modified seed whose actions are not in accord with their convictions. But, as I noted above, scientists are only people, and liable to all the weaknesses and inconsistencies characteristic of us all. With respect to the reliability of the Nobel Prize as a certificate of validity, I note that one was awarded the developers of prefrontal lobotomy, a surgical technique which scrambles the forebrains of psychotics in order to calm them. This technique is no longer used because it is equivalent to slowing down an automobile engine, not by limiting its fuel, but by mixing abrasive diamond dust in its engine oil, an irreversible procedure the effectiveness of which is scarcely a significant scientific discovery.}

The strength of the signal Penzias and Wilson found was weaker than predicted by Big Bang theoreticians for the CBR. But the theory's believers apparently subscribe to the adage "close enough is good enough". I suppose these quascientists consider it nit-picking, but it certainly is relevant to note that, despite believers' eager claims, the signal found was not of the strength their theory had predicted. Nevertheless, Big Bangers insist it is their hypothetical CBR. Certainly it could be. Their original calculations could have been off the mark. However, maybe their initial calculations accurately expressed their theory's predictions but their post-hoc revision does not, and the signal Penzias and Wilson found isn't the so-called CBR. Either way, this is a conspicuous judgment call, and as any sports fan can assure you, judgment calls are frequently wrong.

Another problem with the CBR conclusion is shown by the letter C in this acronym. This letter says this signal is cosmic, that it supposedly exists everywhere in the universe. I have seen passionate praise of this, almost poetic claims that everywhere in the cosmos anyone might go this signal will be found. But there are no data whatsoever to support this claim because no one can go anywhere in the cosmos. The radiation, whatever it might be, has only been measured on earth. Therefore, it might be only a local phenomenon, some unknown thing restricted to our Milky Way galaxy. Indeed, as far as the data show, it could even be something from just our solar system. To claim it as a cosmic phenomenon is and can be nothing more than a guess, or an article of faith.

But there is another, much more serious objection to the CBR conclusion. At least one astronomer whom I once read felt the characteristics of the Penzias and Wilson signal were more like what would be expected of degenerate starlight than the theoretical CBR. To see what degenerate starlight is we must first note that not all of a galaxy's starlight passes out of it. Some impacts other objects in the galaxy. Just as sunlight warms the earth, whatever matter in a galaxy any starlight impinges upon will absorb the energy constituting this light, and this matter then will reradiate the energy it absorbs. Now by one of the best established laws in all of science, energy always works its way downward from hotter to cooler objects. But no matter how cool, unless an object's temperature were absolute zero (presumably an impossibility) it will radiate energy: The cooler the object, the cooler the energy it radiates. Thus, as you can see, we can think of some of the starlight in a galaxy as bouncing around from object to object, getting progressively cooler and cooler but never going away completely (a thermodynamic no-no). The final result of all this light cooling is the galaxy's degenerate starlight. If physicists know anything at all, they know there has to be some degenerate starlight. The astronomer I read thought the signal Penzias and Wilson found better fit this phenomenon than it fit the CBR claim.
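The "cooler the object, the cooler the energy it radiates" relation can be made concrete with Wien's displacement law, which gives the wavelength at which a body's thermal radiation peaks. A minimal sketch follows; the 2.725 K figure is the temperature commonly attributed to the Penzias and Wilson signal, and the solar figure is a round approximation.

```python
WIEN_B = 2.898e-3  # Wien's displacement constant, in metre-kelvins

def peak_wavelength_m(temperature_k):
    """Wien's law: wavelength (metres) at which a blackbody at T kelvin radiates most."""
    return WIEN_B / temperature_k

# The sun's surface (~5800 K) peaks in visible light...
sun_peak = peak_wavelength_m(5800.0)   # ~5.0e-7 m, i.e. ~500 nm
# ...while matter cooled to ~2.725 K peaks in the microwave band.
cold_peak = peak_wavelength_m(2.725)   # ~1.06e-3 m, i.e. about a millimetre
```

Whatever its origin, CBR or degenerate starlight, radiation this cold necessarily peaks at millimetre wavelengths, which is why the dispute turns on interpretation of the signal rather than on the measurement itself.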

I'm sure Big Bang quascientists have a well-considered reason for rejecting this alternative interpretation. I don't know it, and I wouldn't presume to evaluate it even if I did. This is not my area of science, and I am clearly incompetent to make such an evaluation. But my point is not that the theory believers' defense of their CBR interpretation is wrong. My point is that the Penzias and Wilson data become relevant to, as well as supportive of, Big Bang theory only because of such rationalization. This situation is not at all similar to, for example, the role Flamsteed's moon location data had in regard to Newton's model. The location of earth's only natural satellite was precisely what Newton was predicting, so there was no possible question about the relevance of Flamsteed's data. That is not the case with any of the data used to defend the Big Bang hypothesis, all of which require potentially fallible theoretical judgments and interpretations in order to become relevant and supportive.

**Redshift:** But by far the most serious problem with the Big Bang theory is that its defining characteristic, the expansion of the universe, is itself only a potentially fallible inference, one piled upon other potentially fallible inferences. Anyone who tells you the universe is expanding is telling you a personal conviction, not a scientific fact. The same applies to anyone who tells you it is not expanding. Either conclusion can only be an opinion because, while there is evidence for expansion, it is conspicuously ambiguous. Once again, a little science history will put the issue in focus.

Until well into the Twentieth Century our Milky Way galaxy was generally believed to be the whole of the universe, though in the Eighteenth Century the German born English astronomer F. W. Herschel had suggested that some of the fuzzy patches of light, or nebulae, which can be seen in the heavens were galaxies of stars separate and distant from our own Milky Way. In the 1920's the American astronomer Edwin Hubble presented data which convinced astronomers Herschel's speculation was correct. This conclusion relies wholly upon inferential measurements of the distance to some of the purported galaxies.

The brighter a star, astronomers logically assume, the nearer it is, all else being equal. Unfortunately, all else probably isn't equal. A star's brightness must depend on factors other than distance, so it isn't reasonable to set up a simple brightness _vs._ distance rule and apply it mindlessly to all stars. Accordingly, astronomers have carefully studied myriads of stars, and have found relationships between various detectible star characteristics and the assumed inherent brightness of the stars differentially categorized in these various ways. One such characteristic is a periodic fluctuation of the star's brightness. These stars are called Cepheid variables. Astronomers had determined a relationship between the period of such a star, which can be directly measured, and its presumed inherent brightness, which, of course, can only be inferred. Hubble found Cepheid variables in the Andromeda nebula. Because he could infer their inherent brightness from their period, and because they were exceedingly faint for stars of such presumed intrinsic intensity, he thought the whole nebula must be very far away, well outside the Milky Way. Andromeda, he concluded, must be a separate galaxy, a most plausible and reasonable inference (I certainly accept it), but inescapably only an inference. Hubble went on to expand his conclusion, identifying many nebulae as separate galaxies.
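Hubble's inference can be sketched with the standard distance-modulus relation: given a Cepheid's period-derived (hence inferred) absolute brightness M and its directly measured apparent brightness m, a distance follows from the inverse-square dimming of light. The magnitudes below are hypothetical round numbers chosen only to illustrate the calculation.

```python
def distance_parsecs(apparent_mag, absolute_mag):
    """Distance from the distance modulus: m - M = 5 * log10(d / 10 pc)."""
    return 10.0 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)

# Hypothetical Cepheid: period-inferred absolute magnitude -4.0,
# observed (very faint) apparent magnitude 20.6.
d_pc = distance_parsecs(20.6, -4.0)
print(f"{d_pc:,.0f} parsecs")  # ~832,000 pc, far outside the Milky Way
```

The directly measured inputs here are the period and the apparent brightness; everything else, including the period-to-brightness rule itself, is inference, which is precisely the point.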

To understand Hubble's next great discovery we must consider redshift. If you have ever listened at a railroad crossing while a swift train was passing, blowing its whistle or horn, you will have heard a phenomenon called the Doppler shift. As the train passes you, the sound changes pitch to a lower note. This occurs because sound is a wave in air, and the pitch of the sound you hear depends on how frequently sound wave peaks reach your ear. The faster the peaks arrive, the higher the pitch. When the train is approaching, it is literally pushing these peaks closer together in front of it. Thus, you hear a high pitch. But after the train passes, it is pulling the sound wave peaks apart, and you hear a lower pitch. Although light is not a wave in the same sense as sound, it does move in a wave-like manner, and the same Doppler effect occurs with it. We experience high frequency light as blue, and low frequency as red. Thus, if a light emitting thing, _e.g._ , a star, is approaching, the light from it will be shifted toward the color blue. And if it is going away, or receding, the color is shifted toward red. Unlike so much of astronomical data, these shifts can be directly measured.
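The directly measurable quantity in all this is the fractional wavelength shift, conventionally called z; converting it into a speed is where interpretation enters. A minimal sketch, using the rest wavelength of the hydrogen-alpha spectral line (the observed wavelength is a hypothetical example value):

```python
C_KM_S = 299_792.458  # speed of light in km/s

def redshift_z(observed_nm, emitted_nm):
    """Fractional wavelength shift: positive z = redshift, negative = blueshift."""
    return (observed_nm - emitted_nm) / emitted_nm

# Hydrogen-alpha is emitted at 656.3 nm; suppose it is observed at 662.9 nm.
z = redshift_z(662.9, 656.3)
v_km_s = C_KM_S * z  # Doppler *interpretation*; non-relativistic, valid for z << 1

print(f"z = {z:.4f}")            # ~0.0101
print(f"v = {v_km_s:.0f} km/s")  # ~3015 km/s, IF the shift is purely Doppler
```

Note that only the wavelengths are data; the velocity on the last line already assumes the shift is caused by motion and by nothing else.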

Color shifts are vital in astronomy because direct measurement of the movement of astronomical objects is only possible for exceedingly nearby ones, mainly those in our solar system. The movement of only a few stars can be directly detected. Beyond them, the movement of no astronomical object can be directly measured, and only inferential measurements are possible. And that's astronomers' principal use of Doppler color shifts, as inferential measures of the movement of celestial objects. It is also, quite possibly, Big Bang's Achilles' heel.

The problematic issue concerns the use of redshift as a pure measure of recessional movement. The validity of this usage is wholly dependent upon there being nothing else which can cause an astronomical object's color shift. One redshift contaminating factor is known. Gravity also redshifts light, and astronomers take this into account when interpreting redshift as a recessional measure. This is appropriate but insufficient because it is simply unknown whether any other factors also cause redshift. As pointed out and repeatedly illustrated above, what we don't know always has the potential of invalidating what we think we know. And with respect to the use of redshift as a measure of recessional movement, there is reason to suspect there well may be such an unknown contaminant. Let's continue the history to see why.

Hubble arrayed some of his newly identified galaxies according to brightness, an arrangement in which the dimmer ones were presumed to be further away. The dimmer the light from a galaxy, he found, the more was its light shifted to the red. This relationship is sometimes called Hubble's redshift law. This law is a relationship between two different inferential measurements. However, if the uncertainty of inference is ignored, if dimness is assumed to indicate only distance and redshift only recessional movement, Hubble's redshift law can be interpreted as showing the universe to be expanding. This interpretation requires one further assumption: it supposes an astronomer situated anywhere in the universe would also find the same Hubble law. Obviously it is impossible to know this, but it seems like a reasonable guess since it's unreasonable to suppose the earth is the center of the cosmos and everything is moving away from only it. Nevertheless, a guess is only a guess. However, if this plausible but not confirmable guess in fact is true, and if the two inferential measurements in Hubble's law in fact measure what astronomers believe they do (and only what astronomers believe they do) then the universe is reasonably concluded to be expanding.
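Hubble's redshift law is usually written v = H0 × d, and the sketch below shows how the two inferential measurements are chained. The value H0 = 70 km/s per megaparsec is a commonly quoted round figure, assumed here only for illustration.

```python
H0 = 70.0  # Hubble constant in km/s per megaparsec -- an assumed round value

def recession_velocity_km_s(distance_mpc):
    """Hubble's law v = H0 * d: a velocity inferred from an inferred distance."""
    return H0 * distance_mpc

# A galaxy whose brightness-based distance estimate is 100 Mpc:
v = recession_velocity_km_s(100.0)  # 7000 km/s
```

Notice the chain of inference: brightness to distance, redshift to velocity, and only then the law relating the two; doubt about either link propagates straight into the conclusion.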

This expanding universe interpretation was treated almost like divine revelation by those with faith in the hypothetico-deductive method, because about a decade before Hubble found his redshift law Einstein invented his General Relativity theory, a theory in which curved space-time replaces Newton's gravity force. The mathematics of this theory can be made to say (if a theoretical factor Einstein invented and called the cosmological constant is suitably adjusted) the universe is not stable, but rather is expanding. Thus theoreticians take Hubble's law as hypothetico-deductive proof of Einstein's theory. Indeed, it is. Unfortunately, most scientists, Big Bangers particularly, do not seem to understand how uncertain and unreliable is such proof.

As if the muses of science intended to demonstrate this weakness and to illustrate how the unknown can invalidate what we think we know, a subsequent discovery raised serious questions about redshift as a measure of recessional movement. Astronomers had for some time known of some exceedingly bright celestial objects which in telescopes look exactly like stars, stars whose exceptional brightness apparently shows they are quite near. However, this interpretation was called into question when it was discovered that light from these apparent nearby stars is massively redshifted, as much as that from some supposedly distant galaxies. Apparently, therefore, these objects weren't really stars, but only starlike, quasi stars or, as we now call them, quasars. The brightness of quasars seemed to say they had to be in our Milky Way galaxy, our next-door neighbors. But if they are in our galaxy, and their great redshifts are due to movement, why would they all be moving away from earth? By chance at least one or two would be moving toward earth, and this would cause their light to be blueshifted. So if quasars are in our own Milky Way galaxy, their redshifted light seems necessarily to be due to something other than recessional movement. Thus, a prudent person must conclude, the expanding universe interpretation of Hubble's law is at least ambiguous if not frankly doubtful.

I don't know how astronomers reacted when they encountered this problem, but using some of the same kind of inferential thinking they use to develop their distance and movement measurements, one can guess. They probably were quite upset to find a large part of what they thought they knew to be threatened by the possible invalidation of their recessional movement interpretation of Hubble's law. Not surprisingly, therefore, most astronomers subscribed to a consensus holding the redshift measure to be uncontaminated and completely valid. Quasars, despite their extraordinary brightness, are not in our galaxy, consensus accepters insist. They are as distant as Hubble's law says the redshift of their light suggests. But if this is so, then to be as bright as they are, quasars must be astoundingly energetic. Though this is difficult to imagine, consensus subscribers much prefer to so place their faith than to cast doubt on their expanding universe belief. But is not the refusal to acknowledge reasonable doubts the definition of a dogma?

A few astronomers, however, are willing to look at the quasar data with unflinching courage and to think the unthinkable. In the vanguard of this group is the astronomer Halton Arp. The paradoxical, very bright but massively redshifted character of quasars led him to suspect something of fundamental importance is not known about redshifts. He and a few like-minded and/or open-minded other astronomers have built a body of data which seriously questions the consensus dogma that redshift is a pure measure of recessional movement. A large part of these data consist in the identification of high redshift quasars which appear to be part of astronomical assemblages having much smaller redshifts. Even if the disparate objects in only one pair are in fact associated in space, then their inconsistent redshifts directly demonstrate the role of something other than recessional movement in causing redshift. All of these data, however, are routinely dismissed, denied and denigrated by the consensus acceptors. To save this evidence from the dustbin to which these believers would consign it, Arp wrote a book summarizing these data in layperson-accessible form, _Quasars, Redshifts, and Controversies_ (Interstellar Media, Berkeley, CA: 1987). And since more such data continued to be found, he wrote another, _Seeing Red_ (Apeiron, Montreal: 1998).

{Unlike most scientific reports, which carefully (and hypocritically) maintain science's image of complete objectivity, Arp candidly reports the underhanded things redshift consensus believing astronomers have done to suppress data which threatens their dogma and to prevent non-consensus astronomers from accumulating any more of it. The things he alleges are nothing like what the general public expects on the basis of the popular image of scientists as dispassionate devotees to the advancement of knowledge. These hostile acts are more like the machinations of villains in a melodrama. I can not verify Arp's specific allegations, but I can assure you that acts to suppress unwanted data are not unheard-of in science. An amusing illustration is provided by the following story.

{A colleague of mine submitted a paper to a peer reviewed journal reporting research in which he had found results inconsistent with the then-current opinion. Peer review means before a work is accepted for publication it is critically reviewed by one or more scientists with knowledge about the area of research. The ostensible purpose is to prevent deficient research from contaminating the scientific literature. Sometime later my colleague got a letter from the journal editor saying the scientist who reviewed his report insisted its results were impossible. The reviewer advised the editor that if the report were published, he would never again review for the editor. Now reviewing is rather like an anonymous act of charity. One is not paid for it, and one seldom gets much if any professional benefit for doing it. So editors often have a hard time getting reviewers. The editor's letter explained how this had determined his decision. While he himself felt my colleague's research was well done and well worth publication, the editor admitted, he could get another paper much more easily than he could get another reviewer. Therefore, he declined to publish it. Peer review, I'm afraid, does as much, maybe more, to enforce conformity to prevailing opinion as to guarantee the quality of scientific papers.}

While I appreciate Arp's bringing into the open the hostile way Big Bang believers attempt to suppress discordant data, I am not persuaded to his opinion about redshift. His data are as indirect and inferential as those of the expanding cosmos consensus defenders. Both sides are missing the most fundamental and inescapable point: there is no way to either prove or disprove the claim that galaxies' redshifts are pure measures of recessional movement because there is no way to directly detect the movement of anything more than a couple of light-years from earth.

Apparently because they appreciate this, some defenders of redshift as a pure measure of recessional movement have insisted their interpretation is necessarily valid because the only other thing which can cause redshift, gravity, can not explain Hubble's law. Indeed, no other _known_ redshift causing variable can. But to claim certainty for their dogma because of this is arrogant, errant claptrap, for such claim necessarily implies that astronomers know everything which could cause redshift. Quite obviously nobody has such omniscience. For all we know (or better, for all we don't know) the suspicions of astronomers like Arp are valid and the Hubble law is due to some redshift effect having absolutely nothing whatsoever to do with recessional movement.

Whether or not galaxies are moving apart from each other can not be known. While relevant data exist, the evidence derived from them is indirect, inferential and inherently ambiguous. So whether one thinks the cosmos is expanding or stationary, such belief can not be scientific knowledge. It can only be opinion.

**Indirect Measurement**

Whichever side of the redshift argument an astronomer may be on, none is likely to be pleased with my conclusion, for it seems to say astronomy is not a science. Let me address this issue, for this may help clarify why the kinds of data quascience theories must rely on lead only to quascience opinions, not to scientific knowledge.

Science has developed a large number of explanations of how the natural world seems to operate. These explanations, scientists like to believe and students are taught, are natural laws, the truth of how the world works. But as I have shown above, despite all the hype one hears about scientific method and hypothetico-deductive proofs, this truth claim is indefensible. There is no humanly accessible means of establishing scientific truth. Scientists, and before them natural philosophers, have suggested only two possible such means, experience and reason. A considerable amount of experience shows human explanations of experience are unreliable, confirming Aristotle's teaching about the logical error of Affirmation of the Consequent. Abundant data also show deductive reasoning, whether based on allegedly self-evident premises or scientific knowledge, also is unreliable.

These epistemological issues are of little or no interest to anyone but philosophers. Other people, scientists and nonscientists alike, believe in science not because of anything philosophers may think or say. People believe in science because scientifically based technology is able to do things. True, these accomplishments are usually taken as presumptive proof that science's alleged natural laws are indeed true. But again, as I have shown above, this conclusion is itself a logically fallacious Affirmation of the Consequent. Scientific knowledge enables the accomplishment of things, not because it is truth, but because its empirically based descriptions necessarily are useful in doing things whether or not the explanations associated with them match ultimate truth.

Thus the validity criterion of scientific knowledge, the thing which makes it believable, is that the knowledge makes a doable difference. This may seem to be a utilitarian criterion, but it is not. Utility is irrelevant. It is not the usefulness or lack thereof which makes scientific knowledge believable. It is simply the fact that something, anything (useful or useless, helpful or harmful) can be done with the knowledge. But the very same remoteness from human experience which makes it necessary for quascience to rely on indirect, inferential measurements, upon data which must be theoretically rationalized to become relevant and supportive of quascience theory, also makes it impossible to do anything with quascience opinions. Therefore, quascience conclusions do not and can not rise to the standard generally accepted as establishing the validity of scientific knowledge.

{I beg your indulgence for using the term "validity" above. A strong linguistic argument can be made that this term implies truth. But when I speak of scientific knowledge being validated, I do not mean its truth, as truth is defined in this essay, is established. I only mean the verified proposition is justifiably considered believable scientific knowledge, even though the ultimate truth of such knowledge is unknown and unascertainable. Of necessity one must present one's ideas in an extant language. But often, as in this case, the language did not evolve to precisely communicate nuanced ideas such as my distinguishing truth from believable but unprovable scientific knowledge.}

Consider, for example, the alleged cosmic background radiation discovered by Penzias and Wilson. Anyone with an old analogue TV can experience the signal they found. Just turn on the set and tune it to a channel with no broadcast signal. The visual noise you'll see on the screen is the signal they found, and certainly you can experience it. It can't be avoided. After all, that was the problem reported in the original Penzias and Wilson paper. But the signal can't be manipulated or controlled in any way. It can't be increased or decreased. There is absolutely nothing anyone can do with it. In our present state of knowledge all we can do is argue about what it is, CBR, degenerate starlight, or some unknown other thing nobody suspects. Whatever it might be can only be a matter of opinion. Awareness that the signal exists is knowledge of a sort, but it isn't scientific knowledge because nothing can be done with it.

Now consider the science of astronomy. As noted, ultimately everyone, scientist and nonscientist alike, believes in science because on the basis of scientific knowledge something is doable. A large part of astronomical knowledge clearly meets this criterion. Unlike the time of Copernicus, if there remains any ambiguity in affixing the date of Easter, it does not come from astronomical ignorance. More compellingly, space exploration shows astronomy unquestionably is able to do things. It is able to do these things primarily because the movements and distances of objects inside our solar system can be directly measured. Doubtlessly there are many times when indirect, inferential measurements have been employed in space exploration, but this is completely scientifically acceptable because the very successes of space exploration are empirical confirmations of these measurements.

But direct measurement and empirically confirmed indirect measurement of the movement of astronomical objects is possible for only the tiniest area of nearby space. For the most part, astronomers can only directly measure the movement of things in our minuscule solar system. It may seem absurd to describe an area extending millions upon millions of miles as minuscule, but consider this: the nearest star (the sun excluded) is believed to be light-years distant. A light-year is the distance light in a vacuum can travel in one year's time. By comparison, light takes only approximately six hours to travel the radius of our solar system, and virtually all space exploration has taken place within that radius (one research satellite, its mission completed, has passed at most a few light-hours beyond it). So all of astronomy's direct movement measurements and empirically confirmed indirect ones have taken place within an area extending out (by rough approximation) about one thousandth of a light-year. Yet Andromeda, our nearest separate independent galaxy, is estimated to be about two million light-years distant, and most of the galaxies which supposedly are expanding away from everything else are billions of light-years distant. (Andromeda, incidentally, apparently is not moving away. Its light is blueshifted.) Even if there were no reason to be suspicious of redshift as a pure recessional movement measure, to extrapolate inferential, indirect measurements trillions of times beyond their empirical confirmation is an act of faith, not science.
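The scale mismatch described here is simple arithmetic to verify, using the essay's own round figures: six light-hours for the solar system, two million light-years to Andromeda, and "billions of light-years" (taken below as ten billion) for the most distant galaxies.

```python
HOURS_PER_YEAR = 365.25 * 24  # ~8766 light-hours in a light-year

solar_system_ly = 6.0 / HOURS_PER_YEAR  # ~0.0007 light-years of verified territory
andromeda_ly = 2_000_000.0
far_galaxy_ly = 10_000_000_000.0        # "billions of light-years", taken as 1e10

print(f"to Andromeda:    {andromeda_ly / solar_system_ly:.1e}x")   # ~2.9e9
print(f"to far galaxies: {far_galaxy_ly / solar_system_ly:.1e}x")  # ~1.5e13
```

The extrapolation is billions of times for Andromeda alone, and trillions for the distant galaxies on which the expansion claim chiefly rests.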

Indirect measurements are scientifically meaningful when they can be empirically verified. Even when they can not be verified there is a place in science for them because often they are the only data obtainable, and indirect data is certainly better than no data. However, when indirect measures are used, their inherent ambiguity must always be clearly acknowledged, and the uncertainty of conclusions based on them must be candidly accepted. But astronomers usually do not do so, and it's easy to be misled by their omission. Some of the indirect measures astronomers routinely use are not simply clever, they are brilliant, a testimony to astronomers' intelligence and depth of knowledge. When a person first encounters such measures, one is apt to be so impressed by their cleverness as to overlook the caveats which must always accompany them. But this is a serious mistake. An indirect measurement is indirect, and no matter how brilliantly clever it may be, it is necessarily ambiguous until and unless empirically validated. Failing such validation, conclusions dependent upon indirect, inferential measurements must never be considered as scientific knowledge. Inescapably, they are only educated guesses, quascience opinions.

In conclusion: While astronomy has a solid core of scientific knowledge, it is surrounded by a fringe (a nebula?) of quascientific opinion. It isn't easy always to delineate where one ends and the other begins, but part of what astronomers try to do is expand the core scientific knowledge thereby bringing some of the ambiguous quascience opinions into the arena of believable scientific knowledge. While this two part characteristic is conspicuous in astronomy, it is not unique to it. It is true of all scientific disciplines, though those disciplines which rely more on inferential measurements and rational extrapolations, _e.g._ , economics, necessarily have a greater proportion of quascience than they do of science. Unfortunately, however, when presenting their conclusions to the nonscientist public scientists often do not discriminate believable scientific knowledge from conceivable but not empirically confirmed quascience opinion.

**Quascience Myths**

The kinds of problems illustrated in the above example (total reliance on indirect measurement, with the result that quascience theory can be supported only by involved, convoluted and often dubious rationalizations) show the deficiency of all quascience. There is no scientific way to determine if any quascience theory is believable. Scientific knowledge is believable because something can be done with it. No rationalizations are needed to convince anyone the theories underlying nuclear bombs are scientific knowledge. Whether these devices are good or bad, useful or harmful, moral or immoral, _etc_., these are matters of differing opinion. But the scientific status of the knowledge used to build atomic bombs is unequivocally established by the fact that they work. If Big Bang were believable scientific knowledge, the theory would similarly have some operational consequence. It has none. The same remoteness from human experience which forces Big Bangers to rely on indirect evidence and theoretical rationalizations prevents anything in human experience from being different whether or not the theory were true. The same is true of all quascience theories.

Quascientists are not unaware of the impotence of their theories. The fact is so conspicuously obvious they could not be unaware. But they have a well-rehearsed excuse, almost a mantra. They are, they claim, engaged in a glorious noble quest, the search for and discovery of knowledge for knowledge's sake. By knowledge, of course, they mean scientific truth. This claim is as preposterous as it is pretentious. Even if quascientists were practicing genuine science, the most they could discover would be scientific knowledge, tentative ideas the truth of which is unknowable, knowledge which is believable only because it is effective. Since quascience theories are conspicuously not effective, they obviously do not rise even to the level of scientific knowledge. Therefore, what could quascience possibly discover? Clearly it cannot be truth. Though they refuse to admit it, all quascientists find is dogma. This is not to say their conclusions are necessarily false. But they are necessarily ambiguous and uncertain. Therefore those who believe them can do so only by an act of faith. This is why I say quascience conclusions are semi-religious beliefs, dogmas.

Inconsequential impotence is characteristic of all quascience theories. All are merely modern myths. By myth I mean what cultural anthropologists do when they use the term, a meaning different from its colloquial sense. Commonly myth is taken to imply necessary falsity. But as noted above, the truth or falsity of quascience theories is indeterminable, so quascience is not mythical in the common sense. It is myth, however, in the sense that humans manifest a need to think we comprehend, whether we do or not, and myths fulfill this need. Just as the myths of primitive cultures provide their members with specious explanations they find satisfying, so do quascience theories provide modern persons with explanatory myths expressed in concepts and terms moderns find meaningful. To fulfill this psychological need for understanding, myths suffice. But quascience theories serve no other purpose, for nothing in human experience would be different whether or not quascience myths were true or believable.

All of the _au courant_ theories so highly touted in the science oriented news and popular literature are quascience myths, not science. And almost all scientific knowledge shades off into quascience at the extremes. Two excellent examples are the Modern Synthesis theory of evolution and the quark theory of the constituents of nuclear particles such as protons and neutrons. These two are worth particular note because the mythical scientific status of each, that status here named quascience, has been explicated in excellent works accessible to non-specialists, works which I highly recommend to you. In a monumental and masterful work ( _The Structure of Evolutionary Theory_. Belknap Press of Harvard University Press, Cambridge, MA: 2002) Stephen Jay Gould presented the problems of the Modern Synthesis. And in a work of comparable excellence ( _Constructing Quarks_. University of Chicago Press, Chicago: 1984) Andrew Pickering showed how the indirect evidence supposedly establishing the quark theory was obtained simply by theoretically redefining what was originally considered to be meaningless data noise, reinterpreting it to be confirmatory evidence, perhaps the most flagrant example of how quascientists create supporting evidence _ex nihilo_ by mere theoretical definition.

**Coping with Quascience Mythology**

As noted earlier, some followers of the Sociology of Scientific Knowledge hold all knowledge to be unsure and problematic, only different persons' different myths. This SSK contention is both true and false. It is false because, as shown above, scientific knowledge, while it may not be truth, nevertheless is demonstrably effective and therefore believable. However, the cynical SSK contention is unquestionably true for quascience, for such theories are no more believable than myths. It might seem, therefore, that quascience is deceptive and harmful and should be suppressed. But even if this were feasible, it would be shortsighted, because quascience, properly used, can have value. Indeed, even as myths, quascience theories have value so long as their mythical status is recognized.

And this is the foremost requirement. Quascience theories must be acknowledged for what they are, quascience. They are not science, and those malarkey merchants who so eagerly, continuously and fraudulently try to sell them as such must be eagerly and continuously confronted and refuted. The survival of humankind unquestionably depends upon scientific knowledge. It therefore is vital that the believability of scientific knowledge not be compromised by the disingenuous efforts of quascientists to appropriate the authority of science in order to build their castles in the air. Quascience theories are inherently ambiguous. To the extent that quascience is misconstrued as science, persons who approach these theories with honest skepticism, the kind of skepticism which should characterize all thought, will come to doubt both science and quascience. And this has the potential of harming and perhaps eventually even of destroying humanity. This statement may seem hyperbole, but I'll explain below one way in which it might occur.

One need not be a scientist to identify and expose quascience. It requires little study and no expertise whatsoever to see both the grandiose, beyond-human-experience nature of its claims and also its operational impotence. Indeed, if the mythical nature of quascience is ever to be appreciated, it probably is necessary for the debunking to be achieved mainly by the efforts of nonscientists. Certainly quascientists are not going to expose it themselves. And scientists who do genuine science are too occupied thereby to concern themselves with quascience. Also, as shown by Arp's many examples of how astronomer believers in the redshift consensus have limited the research facilities and inhibited the publications of astronomers who question this dogma, it is professionally risky for professional members of a particular scientific discipline to challenge one of its strongly held consensus quascience opinions.

Therefore, whenever some quascientist claims to have "smoking gun" evidence supporting one of these mythical notions, even though only experts can evaluate the technical specifics of such a claim, anyone can recognize the smoke as most probably resulting from the quascientist's "medical" marijuana. Everyone is able, justified, and indeed obligated to point this out. Do not be intimidated by the expertise of the quascientists. True, in general they are both highly intelligent and well educated. But since much of their knowledge is dogma, as their eager practice of quascience shows, the very nature of their expertise is sufficient reason for being suspicious of their opinions. One cannot tell which are based on scientific knowledge and which on quascience dogma. Therefore, don't hesitate to form, hold and advocate your own opinion about matters of quascience. Hopefully, if the public persistently refuses to be taken in by quascientists' boastful claims, these myth makers will get the message and themselves start treating their ideas as the interesting but unprovable speculations they can only be.

Identifying quascience for what it is does not in any way constrain anyone's opinion about any quascience myth. Indeed, exactly the opposite. Because of quascience theories' conspicuous impotence, and since none can be either proved or disproved, everyone is perfectly intellectually justified in accepting or rejecting any such theory on whatever basis one finds convincing. After all, quascience conclusions are only opinions, and everyone has a right to one's own opinion. There is a cost to such freedom. While one may be firmly convinced of the truth or falsity of any quascience claim, other people may with equal intellectual justification hold an exactly opposite opinion. True-believers of either persuasion will, of course, consider holders of the opposite opinion to be intellectual troglodytes and fools. But these charges are themselves only opinions.

An even greater freedom results from recognizing quascience for what it is. With full intellectual justification anyone should feel free to concoct one's own theoretical explanation of the beyond-human-experience phenomena addressed by quascience. If we were talking about genuine science, science attempting to do something or science able to directly measure the variables studied, I would be a fool to make this recommendation, and persons lacking educations in the relevant scientific discipline would be fools to follow it. But quascience is a kind of sophisticated make-believe game, something with objectives more closely related to popular entertainment than to science. So there is no legitimate or logical reason why everyone should not play. Two benefits, one probable but the other only a remote possibility, can derive from this. The probable benefit: Insofar as everyone does participate in the game, the mythical nature of the ideas presented will be obvious. This will help keep everyone acutely aware of the differences between science and quascience.

The second benefit is unlikely. But should it ever occur, it could be of great importance. Outsiders' ideas, wild though they may be, have a chance, a small one to be sure, but a chance nonetheless, of supplying something which is conspicuously lacking in quascience. This lack is illustrated by the above Big Bang example which showed how quascientists treat their theories as a truth they must defend. They kick contradictions under the rug; they reinterpret predictions to make data conform to their theories; they form and dogmatically adhere to theory-supporting consensus interpretations of conspicuously ambiguous data; they ostracize scientists who dare to doubt such consensus opinions. In short, quascientists' behavior shows them to dogmatically adhere to one particular theoretical explanation and to rigidly and rigorously exclude any alternative.

But the kind of inconsistencies they kick under the rug or rationalize away are precisely the things which lead genuine science to develop scientific knowledge. If one is doing the kind of science which seeks a doable goal, it is all too frustratingly obvious when one's efforts don't work. Such failure simply cannot be kicked under the rug. One is forced to try something different. If this eventually succeeds, the explanation one has used to achieve the doable thing may not be truth, but if it works, it is believable scientific knowledge. Similarly, if one is attempting to develop a scientific description and explanation using directly measured variables, though failure won't be as unambiguous as when one is attempting to do something, eventually it likely will be telling. If one is unable to accurately describe or predict some phenomenon, direct measurement usually will eventually make this apparent. What these frustrations do is force scientists' thinking out of its ruts, force the invention of some new theoretical explanation, some more data-consistent way of making sense of the theretofore incomprehensible results.

As noted earlier, when confronted with irrefutable data inconsistent with current thinking, scientists' traditional first response is not to revise or abandon a theory, but rather to work mightily to shoehorn the new data into it. Only after repeated frustrations of the kind just considered does the usual scientist abandon an old idea and invent another. But the beyond-human-experience nature of quascience with its concomitant exclusive reliance on theoretical rationalizations and indirect measurements means a quascientist never need suffer such frustration and therefore is unlikely ever to be driven to creative novelty. Because all their evidence is highly interpreted, made relevant and supportive only by definition of the theory at issue, quascientists have unlimited freedom to reinterpret inconsistent data as necessary in order to preserve their current theory. And that's just what they do.

Quascientists' practice isn't an iota different from what a scientist practicing genuine science does in similar circumstances. And that's a major reason why quascientists so readily fool themselves and others into believing they are doing genuine science. Science is, by tradition and habit, compulsively conservative. Scientists always are loath to give up an old explanation and almost never do so until persistent inconsistent data compels it. When one is working with directly measured variables, and especially when one is attempting to do something, reality will eventually make conspicuous an inadequate old theory, and the necessity of new thinking will be inescapable. But in quascience this need never occur. Quascientists can always re-rationalize their interpretations. The consequence of this is that quascience is starved for new ideas, but has no chance of obtaining them.

This is a deficit outsiders may be able to correct. As mentioned earlier, there is a nebula of quascience around all science, and fruitful new theories often come from this nebula. However, since the very nature of quascience practice inhibits quascientists' invention of potentially fruitful new ideas, perhaps such ideas might come from outsiders, who are not constrained by existing quascience dogma. To be sure, the chances that an outsider's notions will be fruitful are remote. But even if the only thing an outsider's new ideas accomplished were to make the quascience more intuitively plausible, that would greatly improve our modern myths. And since quascientists themselves are conspicuously committed to defending the myths they have rather than inventing new ones, if there is ever to be any new thinking, it would seem perforce to have to come from outsiders.
**Discerning Believable Scientific Knowledge**

Because its inherently ambiguous theories address issues which make no difference even if they should be true, quascience is a game everyone is free to play. After all, whether the universe is infinite or bounded; or if bounded, whether it is stable, expanding or contracting (or perhaps pulsating in and out to a tango rhythm), our part of the universe is as it is, and nothing can be done to change whatever might be going on with the universe at large. So the truth or falsity of a quascience theory such as Big Bang is a matter of no consequence whatsoever, and anyone may believe or not believe it for any reason whatsoever. Similarly, it makes no operational difference whether humans evolved slowly and piecemeal over many millions of years as the Modern Synthesis posits, or in sudden bursts as suggested by the evidence and the punctuated equilibrium model, or even were divinely created _ex nihilo_ six thousand years ago. We are as we are, and there is nothing we can now do about how we got this way.

But genuine science is different, and it is different precisely because scientific knowledge can make a difference notwithstanding the fact that it can never be known to be true. Things can be accomplished by following it. Thus, its relevance and utility are manifest, and the more scientific knowledge we can get, the better. For the most part everyone recognizes this, and since considerable specialized training is needed to develop scientific knowledge, the public is generally appropriately willing to leave the getting and judging of scientific knowledge in the hands of scientists and the various specialists whose work depends immediately and directly on science, _e.g._, engineers and physicians.

But sometimes scientific knowledge impinges on everyone's lives. Sometimes scientists say people should do something, or not do it, or do it in some particular way, and people may not wish to follow these science based directives. This seems to raise for them the basic question addressed in this essay: Is scientific knowledge true? For if it is not, then people would seem to be justified in ignoring these directives. An unequivocal answer can be given to this question. Truth doesn't matter. Any quascience theory may be true, but whether it is or not is immaterial because nothing can be done with a quascience theory. And any science knowledge may be false, but that's also immaterial because things can be accomplished with genuine science even if, unbeknownst to us, it isn't true.

To understand this answer one needs to understand the operational role of science laws. Usually they are considered to be tentative or provisional statements of truth. But as was shown above, the truth of science knowledge can never be established. So this usual interpretation is seriously misleading. It is better to consider science laws only as elaborate mnemonics, devices which serve to make complicated situations more sensible and easier for humans to remember and deal with. Operationally, that in fact is precisely the function they serve, and we would be less likely to get ourselves confused about issues of scientific truth were we to keep this fact in mind. A mnemonic is only a tool for dealing with human limitations. Its truth is irrelevant. Only its usefulness matters, and if a scientific law's predictions are accurate, it is useful. Thus if we think of science law as a mnemonic rather than as a provisional statement of truth, we can more readily keep in mind the fact that all science law very well may be false, but this is totally immaterial.

Therefore, the relevant question for those attempting to determine whether to follow some scientific recommendation concerns not the truth of the scientific recommendation, but its reliability. Will things work as science predicts? In general the answer is an overwhelming "Yes!". If one follows scientific directives, the predicted result usually will ensue. It is impossible to doubt this. Just look at all the science based technological marvels surrounding and sustaining our lives. Many millions of us would not be alive were it not for the reliability of predictions from scientific knowledge.

Still, you may wonder how scientific predictions can be reliable if, as noted above, occasionally rogue scientists concoct fraudulent data. Such puzzlement is appropriate. But apprehension about the integrity of scientific data, and the reliability of predictions derived therefrom, is unnecessary. Scientists have a salient, if not a defining characteristic: Deep-seated skepticism, an obstinate, "You've got to show me before I'll believe it" attitude. As noted earlier, science is a social endeavor. While some scientists work for organizations which keep their data private for proprietary or security reasons, in general scientists eagerly promulgate their findings as widely as possible. And if the data are in the least unusual, traditional skepticism will drive other scientists to attempt to replicate what has been reported in order to assure themselves of the data's accuracy and reliability. If the report is a fraud, these attempts will fail, and the scientific community will reject the unreplicable finding. This is not merely recommended good practice, something honored only in the breach. Any scientific claim of any significance always is eventually confirmed or rejected in this manner. Or if it cannot be replicated (for example, the original astronomical claim of the amount of gravitational light bending predicted by General Relativity cannot be replicated because it depended on a rare astronomical event), it is meticulously scrutinized (as was the flawed and now discredited light bending observation).

My principal graduate school mentor had an unflattering explanation for why scientists so consistently and aggressively challenge others' work. He used to say scientists are a bunch of intellectual prima donnas, each of whom supposes himself or herself to be the cleverest person around, something he or she perpetually attempts to demonstrate by showing up the claims of other scientists. The result of this intellectual king-of-the-hill game is that scientific findings are vigorously challenged, and surviving this skeptical gauntlet gives the scientific knowledge built on successfully replicated research a high degree of reliability.

In general, however, my professor was overly cynical. By and large scientists follow science's skeptical tradition for honorable reasons. For example, doubts about the just mentioned light bending observation were raised and reported not by persons hostile to General Relativity but by scientists disposed to accept the theory. But they are not disposed to accept dubious data. Few scientists are. So findings are routinely challenged, and whether an unreliable one is an honest mistake, a statistical fluke, or a downright fraud, the skeptical attitude of other scientists virtually always exposes it.

{To protect against a reasonable but erroneous inference, it should be noted that the same replication occurs in quascience. However, it does not remove quascience ambiguity because this uncertainty arises not from the data itself, but from its interpretation. Thus, the Penzias and Wilson finding of a weak background radiation is well replicated. It definitely exists. But no amount of replication can possibly show whether the signal is cosmic background radiation rather than something else entirely. Similarly, Hubble's redshift law is well replicated, but no amount of such replication can ever prove that astronomical redshift is a pure measure of recessional movement.}

In general, therefore, scientific recommendations are reliable and may be confidently followed. There are a few caveats. However, one need not be a scientist nor have any scientific knowledge to understand and follow them.

The most important caveat is to assure that the recommendation does indeed come from scientists and is based on scientific knowledge. Unfortunately, as I'm sure you well know, the world abounds with liars and cheats. And precisely because real scientific recommendations are reliable, these liars and cheats attempt to represent their frauds as scientifically supported or derived. Ofttimes it is difficult to discern these deceptions because these con artists exercise great skill and care in camouflaging them. Thus, there is no infallible way to protect oneself from these charlatans. The best course is to follow the rule my grandmother taught me: The more anyone has to gain from your following his/her recommendations, the more likely such a one is to be a liar and a cheat.

Another caveat is to be cautious of new scientific claims. As noted, unreliable scientific conclusions are routinely exposed. However, this takes time, at times quite a bit of time. Thus, sometimes an erroneous finding is popularly adopted before it is refuted. The media are usually at fault. Always clamoring for the public's attention, they publicize preliminary claims without the tiniest awareness of or concern for reliability. A perfect demonstration of this was a preliminary and scientifically quite questionable report claiming childhood vaccinations cause autism. To the great health threat of millions of the world's children, some people, eager to find a cause of the unquestioned tragedy of autism, compounded the tragedy by prematurely assuming this dubious report was fact. Therefore, they have refused to provide their children the protection of vaccination. Many efforts to replicate this finding have all failed. In fact, large, well done studies have proven the original report wrong, and it has been withdrawn. But these subsequent, superior scientific studies have been unable to overcome the conviction of some who too eagerly latched onto what has now been shown to be an incompetent if not fraudulent report. This is a totally unnecessary tragedy. One need not be a scientist to keep from being led astray in this way. Just don't trust any original report. Before forming an opinion regarding a new scientific claim, wait for it to be vetted and replicated by the scientific community. When it is, its reliability may be assumed, but until then keep a skeptical attitude.

Another caveat pertains not to the reliability of a scientific claim, but rather to its fruitfulness. Businesses have a wise practice which should be used when one is attempting to determine whether a bit of scientific advice is worthwhile: The cost/benefit analysis. Analyze what the costs would be of following a scientific recommendation if it turns out to be unreliable. And analyze what the costs would be of not realizing the benefits of following it if it turns out to be reliable. Even if a scientific suggestion is reliable, the benefits to be obtained from following it may not be sufficient to warrant pursuing it. On the other hand, the costs also well may be so minimal or the potential benefits so great, the scientific recommendation is worth taking a chance on even if it turns out to be unreliable.
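The reasoning behind such a cost/benefit analysis can be sketched as a simple expected-cost comparison. The following is only an illustrative sketch; the probability and cost figures are invented placeholders, not data from this essay, and only the structure of the decision rule matters.

```python
# A minimal sketch of the cost/benefit analysis described above.
# All numbers are hypothetical placeholders chosen for illustration.
p_reliable = 0.75             # assumed probability the recommendation is sound
cost_follow_if_wrong = 10.0   # cost incurred by acting on an unreliable recommendation
cost_ignore_if_right = 100.0  # benefit forgone by ignoring a reliable one

# Expected cost of each choice, weighted by the chance of being wrong.
expected_cost_follow = (1 - p_reliable) * cost_follow_if_wrong
expected_cost_ignore = p_reliable * cost_ignore_if_right

# Follow the recommendation when doing so carries the lower expected cost.
print(expected_cost_follow, expected_cost_ignore)
print(expected_cost_follow < expected_cost_ignore)
```

Notice that the decision turns on reliability and relative costs, never on truth, which is exactly the essay's point: a recommendation can be worth following even if, unbeknownst to us, the knowledge behind it is false.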

A final caveat: As noted above, in all disciplines there is a body of quascience surrounding genuine scientific knowledge. The net effect is that scientific knowledge is arrayed along a dimension of believability, with conclusions which have been abundantly confirmed by direct measurements at the most trustworthy end and those resting largely, or wholly, on deductive inferences and indirect data at the opposite pole. This may be called the science-quascience dimension, and the reliability of scientific predictions varies according to where along this dimension the scientific knowledge upon which a prediction rests is located. Predictions with foundations at the trustworthy end of this dimension are highly reliable. But those based on conclusions from the opposite, quascience end are not. Unfortunately, to someone who does not have expert knowledge about the relevant scientific discipline it is not always apparent where along this dimension the foundation for any particular scientific prediction lies. One solid clue, however, can often be obtained: The kind of measurement underlying the claim. Unreliable, quascience based predictions rest on inferences or deductions from indirect measurements. So to the extent that indirect measurements are known to underlie scientific predictions, one should be cautious, if not suspicious, of them. Conversely, if predictions are clearly based on directly measured, directly relevant variables, then they are usually reliable.

If these caveats are satisfied, if a scientific recommendation truly comes from science, if it is based on scientific knowledge which has been around long enough to be vetted by the scientific community, if a cost/benefit analysis shows it to be worth trying even if wrong, if it is based on direct measurements, and if it is not advocated mainly by persons having a personal reason for so doing, then nonscientists confidently can and should accept scientific recommendations.

**Global Warming**

It will be helpful to illustrate how these caveats may be applied in a specific case, and there is a highly relevant contemporary scientific claim which can serve. We are being told the planet is warming, that the gases which human energy technology has been venting into the atmosphere are trapping heat in it. Furthermore, many scientists are predicting abundant serious consequences of such warming. Therefore, changes in the world's current energy producing technologies are being recommended. This conclusion and recommendation have been vehemently opposed by some. So let us examine this issue in light of the caveats suggested in the previous section in order to show how nonscientists may prudently evaluate these global warming claims.

With respect to the first caveat there is no question. As is evident from the fact that the deniers often preface their objections with the disclaimer, "I am no scientist, but...", everyone acknowledges that the global warming conclusion comes from scientists and is based on scientific knowledge. And with respect to the second caveat there also is no ambiguity. The global warming conclusion has been well and widely known for decades, quite sufficient time for it to be amply vetted by the scientific community. In fact, climate scientists have been studying the effects of gases released from fossil fuel combustion for about a century. So everyone can be confident the data leading to the warming climate conclusion have been extensively and minutely scrutinized by the scientific community and are definitely reliable.

The cost/benefit analysis is even less ambiguous. Fossil burning energy technologies are obsolescent. Cleaner, safer and cheaper ones already are in widespread and rapidly growing use. And these newer technologies continue to enjoy substantial improvements which are lowering their costs even more, so much so that the financial viability of some carbon based energy suppliers is being undermined. Even if there were no global warming problem, our traditional energy technologies will inevitably disappear for purely financial reasons. So, the question isn't really whether fossil burning should be abandoned, but rather how quickly to do so. Therefore, even in the highly unlikely event of climate change science being found to be completely in error, the technological change this science recommends is unquestionably worthwhile.

Next we need to ask whether the scientific global warming conclusion is based on direct measurements. The temperature of the whole earth is a high level abstraction which cannot be directly measured but which must be estimated by sophisticated analyses based upon enormous amounts of data gathered from all over the planet. This presents many difficulties, problems which were recently illustrated when for a period the estimated global temperature did not increase despite abundant other evidence indicating global warming. This anomaly was traced to the simple fact that earth is a very large place (by human standards) most of which is inaccessible because it is covered with water. Therefore there were insufficient locations at which temperatures could be measured. When the measurement locations were sufficiently increased, the estimated global temperature accorded with other global warming data.

But these estimates are not the best direct measurements of the planet's temperature. A much more direct and completely unambiguous measurement is available to everyone, scientist and nonscientist alike: The worldwide melting of glaciers. Pictures of retreating glaciers have been widely circulated. Indeed, thousands of nonscientists have personally seen this evidence. Many have walked upon ground which till quite recently was covered with ice. This is part of the evidence which told scientists the anomalous global temperature estimates were in error. But one need not be a scientist to recognize the significance of this melting ice evidence. The disappearance of all these massive amounts of ice, in some cases from places where it has existed for longer than humans have existed, absolutely and unambiguously is a direct measure of global warming. So the caveat concerning direct measurement also is satisfied.

But while global warming is an indisputable fact, is it in fact due to the venting of combustion gases into the atmosphere, or is it, as some deniers insist, merely a normal fluctuation in global temperature? The ability of such gases to retain heat is a chemical fact; no one does or can question it. But have combustion gases been accumulating in the atmosphere? Apparently so, for the data do show an increase in their atmospheric concentration. Well then, is this increase responsible for the well established global warming? Again the answer is apparently so, for only when all these gases are taken into account can the observed global temperature increases be accurately described.

What we have here is a series of two hypothetico-deductive tests. First: Of necessity, the burning of fossil fuels releases gases, carbon dioxide mostly, into the atmosphere. But perhaps some unknown or unidentified thing is absorbing them. So the first hypothesis says the gases are accumulating in the atmosphere. And the data unambiguously confirm that there is a growing amount of such gases in the atmosphere. Thus the first hypothetico-deductive test is consistent with the human caused global warming conclusion. Next: By conspicuous fact the earth is warming. So the second hypothesis says the accumulating gases are causing this warming. And the accuracy with which climate scientists' mathematics describe the observed global warming from the increased atmospheric concentration of these gases is also consistent with the human caused global warming hypothesis.
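The two-step logic just described can be sketched in a few lines of code. This is only an illustrative sketch: the carbon dioxide values below are rough, decade-spaced placeholder figures of my own choosing, and the model-fit flag simply stands in for the climate scientists' mathematical description mentioned above.

```python
# Hypothetico-deductive pattern: if hypothesis H is true, prediction P follows.
# Observing P is *consistent with* H but does not prove it (that would be
# affirming the consequent); observing not-P would refute H.

def hd_test(prediction_observed: bool) -> str:
    """Report what an observed (or failed) prediction licenses us to say."""
    return "consistent with hypothesis" if prediction_observed else "hypothesis refuted"

# Step 1: H1 = combustion gases are accumulating in the atmosphere.
# Illustrative, roughly decade-spaced CO2 concentrations (parts per million).
co2_ppm_by_decade = [317, 326, 339, 354, 369, 390, 414]
accumulating = all(a < b for a, b in zip(co2_ppm_by_decade, co2_ppm_by_decade[1:]))
print("H1:", hd_test(accumulating))

# Step 2: H2 = the accumulating gases are causing the observed warming.
# This flag stands in for whether the gas-based models accurately describe
# the observed temperature record.
model_matches_observed_warming = True
print("H2:", hd_test(model_matches_observed_warming))
```

Note that even two passed tests yield only "consistent with hypothesis", never "proven", which is precisely the point the next paragraph makes.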

Nevertheless, these tests do not show global warming to be a scientific truth. As this essay has repeatedly emphasized, hypothetico-deductive tests do not and can not establish truth. However, as also noted, such tests are not necessarily wrong. Indeed, they are usually right, at least operationally. That is to say: While hypothetico-deductive tests are usually right, sometimes they are right for the wrong reasons. Consider Newton's great gravity model. It was established by hypothetico-deductive test, and both data (the excessive speed of stars at the galaxy's periphery) and theory (General Relativity's claim that there is no gravity force) suggest Newton's model isn't true. Nevertheless, Newton's laws are perhaps the most useful and reliable of all scientific knowledge. So unless someone is willing to doubt not only Newtonian physics but almost all scientific knowledge (vast amounts of which were established by hypothetico-deductive test or equivalent methods) a cautious and prudent person can and should consider the conclusion that human activity is causing global warming to be a highly reliable finding.

The only remaining caveat is grandma's. What is the self-interest of the persons recommending a curb on fossil-burning, and what is the self-interest of those who deny the existence of global warming and therefore the need for any change in our energy producing technologies?

The former are a sizeable group: almost every scientist with the professional competence to judge. What do they stand to gain by claiming the burning of fossil fuels is threatening human well-being? Little or nothing. These people are academics and professional meteorologists whose positions do not depend on the results of their research. In point of fact, they would have more to gain were they able to convincingly deny global warming. A scientist who could raise reasonable doubts about the global warming conclusion would have an almost certain chance of gaining funding for his/her research. This is because science perpetually seeks new knowledge; its major rewards are not given for me-too research. So every scientist knows it would be a virtual slam-dunk Nobel Prize for anyone who could disprove global warming. They are not lining up to try, however, because they are overwhelmingly convinced no such disproof is even remotely conceivable, let alone possible. Those who know the data can not see any exception to the global warming conclusion.

On the other hand, consider the self-interests of climate change deniers. Everyone who profits from a continuation of the present energy technologies has much to lose. These people range from the school drop-out coal miner who has no chance of comparably compensated employment to the multi-billionaire oil baron. One must feel compassion and concern for the former. But it is the latter who are funding the climate change deniers, as recent revelations of one prominent denier's funding sources show. The disposition of such billionaires to use the power of their wealth to preserve it, and the fact that the United States has, as the wags put it, "the finest government money can buy," explain why so many politicians are climate change deniers. Indeed, it is quite remarkable that any politician, dependent as they all are on campaign funding, is not.

With so much at stake it is not surprising that some climate change deniers have exceeded the bounds of honest discussion. One such instance is worth considering, for it played upon and exploited a widespread misunderstanding of how proper science is conducted.

Some unidentified data thief hacked into a computer containing the professional correspondence of climate scientists at East Anglia University, United Kingdom. The anonymous but hostile hacker(s) published these memos, and climate change deniers claim they prove all global warming data are faked, because some of the memos discuss how to present data so that it makes the most persuasive case for global warming. This, the deniers say, is a clear demonstration of dishonest data manipulation. Scientists who have reviewed the stolen documents dispute this. The memos, they say, show nothing more than scientists doing what scientists always do: interpreting their data in the way they believe makes the most sense of it. This is the misunderstood point. As I have explained above, most nonscientists do not know how science actually is conducted. There exists a naive notion that data are self-explanatory, that one need only look at data to know what they mean. This simply is not the case. Interpretation is always necessary to turn data into evidence which then supports some sense-making conclusion. And that is what the most knowledgeable judges conclude the stolen memos show the East Anglia scientists were doing: interpreting their data in the most reasonable, sensible way possible.

In summary, therefore, in the global warming case nonscientists are easily able to evaluate science recommendations. By all five of the caveats suggested above the scientific conclusion is validated: Global warming is occurring and it most probably is human caused. It is relevant to note that this conclusion is exactly the one arrived at by a distinguished physicist who examined this issue with the thoroughness which only someone abundantly knowledgeable about the physical sciences could. He had publicly expressed doubts about global warming and was given (by well-heeled deniers) funds with which to consider the data in detail. His eventual conclusion no doubt was a disappointment to those who funded his investigation, but it does suggest the reasonableness of the five criteria (caveats) suggested here for nonscientists to use to evaluate scientific recommendations.

However, it is clearly unreasonable to suppose this will always be the case. There clearly are cases where not only nonscientists, but even scientists who are not specialists in the particular area at issue can not reasonably evaluate scientific recommendations. In those cases, one must either accept or reject the recommendation on an essentially faith basis. I am a scientist, and I have seen both the good and the bad in science. This experience leads me to conclude one will usually be best served by trusting science. But since the truth of any scientific conclusion can never be established, there are no guarantees.
### Conclusion

We modern people have a belief so deeply embedded that we do not, indeed can not, consider it a mere belief. For us it is a self-evident and necessary truth, something which could not possibly be wrong. This is our belief in natural law. For some moderns, those who subscribe to no theistic beliefs, natural law is simply the way reality works. No matter how the universe may have been created, or whether perhaps it has always existed and always will, natural law is what makes things happen the way they do. Other moderns, those who do subscribe to theistic beliefs, usually also believe the god or gods in which they believe prescribed natural laws in order to govern reality. In either case, modern people have an unyielding faith in the existence of natural law.

Scientists, in particular, believe in natural law, and they have developed numerous alleged examples thereof: detailed explanations of how particular aspects of reality function. With them it has been possible to accomplish many wondrous things to improve and sustain human life. And these accomplishments further strengthen our convictions. Not only do they support our absolute certainty that natural laws exist, they also seem to justify our conviction that science can unequivocally determine natural laws and frequently does so.

It is most understandable, therefore, that anyone who questions the existence of natural law will likely be considered to be an ignorant troglodyte, someone totally devoid of both intelligence and learning. Nevertheless, our belief has been challenged. And the challenge has severely shaken many of us, for it can not be refuted. That our natural law belief is only a belief is proven by the fact that it is impossible to prove the truth of any natural law.

This essay is of the nature of a confession, for throughout my career as a scientist I never questioned the existence of natural laws nor the ability of science to discover them. When in retirement I learned of the logical error underlying my faith, it profoundly disturbed me. But while it led me to consider questioning the natural law assumption, it did not weaken my faith in science. The technical wonders which have been accomplished with science are simply too abundant and too well established to allow such doubt. But if the ability of science to prove natural law is questionable, and as the above essay shows it unquestionably is, then how could science be so dramatically and unequivocally successful?

The attempt to answer this question drove me to a multiyear quest, the results of which are reported in this essay. To reiterate what I believe I have discovered: The achievements of science can not and do not establish any natural law, because there is solid evidence that the correct, reliable predictions scientists derive from presumed natural laws, the predictions which lead to such achievements, can be obtained from theories which are demonstrably dubious and/or conspicuously wrong. But this very fact is the explanation for science's fabulous successes. For it is not the truth of any supposed natural law which enables things to be accomplished with it; it is the reliability of the predictions made from it. Therefore, the fact that reliable predictions can be made from supposed natural laws which are erroneous means that the truth or falsity of any of science's claimed natural laws is irrelevant. The very factor which undermines science's vaunted hypothetico-deductive method of establishing natural laws guarantees the successful applicability of the supposed natural laws even if they are not true.

The take home message from this is that there is no good reason to doubt well established scientific recommendations. Not because they are always accurate. Science is not always accurate. But it is always more reliable than any other basis for human action. There are caveats. One must be sure one is dealing with genuine and well established science. In addition to science there also is quascience, and it is unreliable. But as I hope the above shows, it's easy to identify quascience. However, it is much more difficult to identify instances where, for selfish reasons, dishonest persons attempt either to pass off nonsense as scientific knowledge or to discredit _bona fide_ scientific knowledge as unreliable. But Grandma's rule is helpful in identifying these frauds.

I reiterate what was said at the beginning of this essay. If humanity is to survive, either on this planet or perhaps eventually elsewhere in the universe, it will do so only because of scientific knowledge. It works as nothing else does. Therefore, we must stop doubting it.
### Appendix

Pity the poor history professor, for much of what is considered common historical knowledge is actually folklore. I well know this because through high school, college and graduate school I accepted a widely held story as fact which turned out, when I finally bothered to investigate, to be mere legend. I long believed the Italian Renaissance natural philosopher Galileo Galilei dropped different weight balls from the Leaning Tower of Pisa in order to test Aristotle's claim that the heavier would fall faster. However, as I discovered, the historical evidence does not support this notion. It's a legend which seems to have grown from two sources. First: Galileo did report what would occur were different weight objects dropped, but he didn't say how he knew this. Nor did he say who may have done the experiment, nor how it might have been done, whether with balls or other objects, nor from the leaning tower or elsewhere. Second: Galileo was at Pisa during the time when he would have done the experiment, if in fact he did do it.

I learned of my ignorance when reading an essay by the science historian Thomas B. Settle, "Galileo's use of experiment as a tool of investigation." (In _Galileo, Man of Science_ , E. McMullin, Ed., NY: Basic Books, 1967. Pp. 315-337.) Settle notes the ambiguity of the historical evidence and further notes a difference in historians' opinions of the meaning of Galileo's report. Though Galileo presented the claimed dropping test as fact, some think he was only reporting what he thought would be found were such an experiment to be conducted. Others, however, think Galileo reported what indeed had been found by such a test, though where, how and by whom it might have been done are completely unknowable.

Settle inclined to the latter opinion because Galileo had no reason to expect and no ability even to attempt to explain the results he reported. Galileo said if a heavy and light object are simultaneously dropped, the light one hits the ground first, exactly opposite from the prediction of Aristotelian physics and also inconsistent with the theory Galileo was attempting to develop at the time. Therefore, Galileo had no reason whatsoever to expect the results he reported, and he offered no explanation for this weird outcome. Settle admitted he also had no idea what might cause it. But why, he asked, would anyone report something totally unexpected and inexplicable as fact unless he knew it to be fact? I thought Settle's argument was powerful and compelling. Moreover, I also thought there is a good neurophysiological explanation for what Galileo reported.

A natural way to attempt the test Galileo mentioned would be to hold objects in front of one's self palm down, _i.e._ , under one's hands, so their fall would be unimpeded by the hands and could be initiated by simultaneously releasing one's grips. But the different weights of the objects would make simultaneous release impossible. Here's why.

A fundamental fact of neural functioning is that a neuron's sensitivity decreases a bit each time it fires, requiring a slightly stronger stimulus to fire it again. The neuron needs a period of rest for its firing threshold to regain its original sensitivity. Now consider a person holding two objects of different weight, one in each hand. The hand holding the heavier object must grip harder and lift harder. Thus, the neurons in this hand will be stimulated more than those in the other. This raises the thresholds of both its afferent neurons ( _i.e._ , those which give rise to the holding sensation) and its efferent ones ( _i.e._ , those controlling the holding muscles). Thus, when the gripping sensations from the two hands are equal, in fact the hand holding the heavier object will be gripping its object a bit more firmly. Similarly, when the holder supposes he/she is simultaneously releasing the objects, in fact the efferent neurons in the hand holding the heavy object will respond a bit more slowly. Both of these factors will cause the heavier object to be released a bit later than the lighter one. A fundamental principle of physics (the development of which is in fact due to Galileo's subsequent research) says: Freefall acceleration is independent of weight and the same for all objects in the same gravitational field. Accordingly, if both dropped objects have insignificant airfoil characteristics, as they would if they were balls, then modern physics tells us their accelerations will be equal. Thus the lighter, being released a tiny bit earlier, will hit the ground first, exactly the weird result Galileo reported.
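The timing argument above can be made concrete with a small numeric sketch. The thirty-millisecond release lag is an assumed, purely illustrative value (no such lag was measured); the drop height approximates the roughly five-foot drops of the later demonstration. The physics itself, a mass-independent fall time of t = sqrt(2h/g), is standard.

```python
import math

G = 9.81             # freefall acceleration, m/s^2 (same for both balls)
DROP_HEIGHT = 1.5    # roughly a five-foot drop, in meters
RELEASE_LAG = 0.030  # assumed extra delay releasing the heavier ball, seconds

def fall_time(height_m):
    """Time to fall height_m from rest: t = sqrt(2h/g), independent of mass."""
    return math.sqrt(2 * height_m / G)

t = fall_time(DROP_HEIGHT)    # both balls fall for the same duration
light_hits = t                # light ball released at time zero
heavy_hits = RELEASE_LAG + t  # heavy ball released RELEASE_LAG later

print(f"fall time for either ball: {t:.3f} s")
print(f"light ball lands at {light_hits:.3f} s, heavy ball at {heavy_hits:.3f} s")
print("lighter ball lands first:", light_hits < heavy_hits)
```

Because both balls accelerate identically, the entire release lag survives to the ground: the lighter ball lands exactly that many milliseconds earlier, whatever the drop height.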

I wrote Dr. Settle, suggesting this as a probable explanation for Galileo's report, and offered, if he thought it worthwhile and would take responsibility for publishing the results, to do a little demonstration of the differential release phenomenon. He agreed to report the results to historians, but said I'd have to agree to separately report the methodological and physiological aspects of the study. I agreed. This appendix is a much belated such report.

To cover the expenses of the demonstration I requested and received a small grant from the _Research Society of Sigma Xi_ , whose support I gratefully acknowledge. Then I designed the following demonstration.

A number of volunteers would be recruited to hold different weight objects in the manner described above and to release them as simultaneously as possible while a motion picture record was made of the effort. Galileo's report, Dr. Settle advised, provides no guidance about the kind of objects to use. I selected pairs of balls, the objects of legend: a heavy metal one and an identical-size lighter one of wood. Since the size of the objects might have an effect on the differential release phenomenon, two different size pairs were used. The diameter of the larger pair was four-and-an-eighth inches; the diameter of the smaller, three-and-a-half inches. The metal/wood weight ratio in each pair was approximately ten-to-one, with the wooden ball in the big pair weighing one pound and the wooden ball in the small pair weighing about ten ounces.

The demonstration was organized and conducted by undergraduate research assistants (RAs) participating in a research practicum under my supervision. They arranged for it to be conducted at Regis University, a Catholic (Jesuit) institution in north Denver, using Regis undergraduates as participants. The RAs were not Regis students. (Regis was chosen only because of geographic proximity and not, as one wag suggested, to have the religious convictions of our subjects the same as Galileo's.) The staff and students at Regis could not have been more generous and accommodating as is shown by fifty-one students volunteering whereas we had requested only twenty-four. I'm pleased to have this chance to publicly express to the folks at Regis the thanks privately given at the time of the study.

The volunteers were not told the historical issue motivating the demonstration, but only that it was a study to determine if it is possible to simultaneously release objects of differing weights. Each stood on a stool behind a solid screen about five feet tall and dropped the balls into a sand filled receptacle in front of the screen. To facilitate subsequent measurement of the balls' falls, parallel horizontal lines one-and-one-half inches apart were painted on the screen. The screen was adjusted to make these lines perfectly level. With the RA's help the participants used the top line to align the two balls. When both agreed the balls were level, the RA turned on an old eight millimeter movie camera and the participant tried to simultaneously drop the balls. The films were made at twenty-four frames/second. Each person made four drops, two with each set of balls, one with the heavier ball in his/her right hand and one with the light ball in it. There are twenty-four ways these four drops can be ordered. A randomized master list of these orders was made, and successive subjects dropped the balls in the order next up on the list. With fifty-one subjects we went through the list two-and-an-eighth times.
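The counterbalancing arithmetic above is easy to verify. The sketch below, an illustration rather than a reconstruction of the original master list, enumerates the four drop conditions (two ball-pair sizes crossed with which hand holds the heavy ball), confirms that they can be ordered in 4! = 24 ways, and shows that fifty-one subjects cycle through such a list two-and-an-eighth times.

```python
from itertools import permutations
import random

# The four drop conditions: ball pair size x which hand holds the heavy ball.
conditions = [(size, heavy_hand)
              for size in ("large", "small")
              for heavy_hand in ("right", "left")]

orders = list(permutations(conditions))  # every possible order of the four drops
print(len(orders))                       # 4! = 24 orderings

# A randomized master list, as in the demonstration; successive subjects take
# the next order on the list, wrapping around when it is exhausted.
random.seed(0)  # fixed seed only so the sketch is reproducible
master_list = random.sample(orders, len(orders))

subjects = 51
print(subjects / len(orders))  # 2.125: "two-and-an-eighth times" through the list

# The order assigned to, say, the fifty-first subject (index 50, wrapped):
print(master_list[50 % len(master_list)])
```

This is classic counterbalancing: cycling through all twenty-four orderings spreads any order effects (practice, fatigue) evenly across the four conditions.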

No statistical analysis was done because none was needed. The effect was obvious. The lighter ball often, and often conspicuously, hit first. Indeed, the RAs reported, some participants noted the earlier hit of the lighter ball and became concerned they might have done something wrong. When the RAs asked if they thought they had dropped the balls simultaneously all said they were sure they had. Indeed, some remarked, that's why the earlier hit of the lighter ball surprised and puzzled them. A frame-by-frame analysis in a film editor showed the lighter ball hit first approximately ninety percent of the time. There were no effects due either to which hand held which ball nor the size of the balls. But of course, such an analysis would not have been possible for Galileo, so the films were also run at real speed. Under these conditions many more trials were ambiguous. Still I judged the lighter ball to obviously hit first on a slight majority of trials, whereas the heavier ball never appeared to do so.

Dr. Settle, as he had promised (and vastly more promptly than I), reported the results of this demonstration in an appropriate historical publication. ("Galileo and Early Experimentation." In _Springs of Scientific Creativity_ , R. Aris, H. T. Davis and R. H. Stuewer, Eds. Minneapolis: U. of Minnesota Press, 1983. Pp. 3-20.)

To my mind this little demonstration compellingly shows Galileo's report not to be some speculation which Galileo had neither reason to make nor ability to explain. Rather it shows him to have reported exactly the results one would observe were one to test Aristotle's hypothesis in the manner we reasonably may expect a novice experimenter would. Obviously, this demonstration does not show who might have done the test. However, one of my colleagues at the time suggested Galileo, the man who at a later time used inclined planes to study freefall, was too astute to perform the test in the naive manner of this demonstration. If one agrees with my colleague then one may reasonably suppose Galileo reported the results of tests done by others.

Before the demonstration was conducted Dr. Settle mentioned our plan and its rationale to other Galilean scholars. One, Professor Charles Schmitt, opined that even if we should demonstrate earlier hitting of lighter objects it would not change his opinion that Galileo's report was a speculation, not a fact. (See C. B. Schmitt, _Studies in the Renaissance_ , 1969, v. XVI, pp. 80-138. See the footnotes on pp. 118-119.) Galileo, Schmitt pointed out, not only said the lighter object would hit the ground first, he also reported that when the objects hit, the heavier was going much faster and was about to overtake the lighter. This, Schmitt noted, is impossible, which it most certainly is. Unless classical physics is completely wrong, which nobody believes, two objects with negligible airfoil characteristics will freefall at the same rate. Thus the heavier could not have accelerated faster than the lighter. But Schmitt's astute observation only strengthens my conviction that Galileo accurately reported the results of an object dropping test he either did or knew about. Here's why.

After everything was ready for the demonstration, but before beginning it (and before learning of Professor Schmitt's argument), I took the sand filled receptacle and the four balls to my home for some preliminary tests. I wanted to assure myself the balls would remain in the receptacle when they hit, and not bounce out of it nor make a mess by scattering sand all about. I set the receptacle in the garage and, working from the lightest to heaviest, I dropped the balls into it one at a time. There was no problem. Next, from my standing height I dropped a pair of balls. What I saw gave me one of the greatest shocks of my life! Galileo, Newton and classical physics notwithstanding, when the balls hit, the heavier was going much faster. Clearly it was about to overtake the lighter.

Since what I was seeing is impossible I thought it must be an artifact, perhaps one caused by the hand-dropping method. So I balanced a pair of balls on a board and tipped it to simultaneously drop them. I saw the same impossible result: The heavier metal ball was falling much faster when it struck the sand, apparently just about to overtake the lighter ball. In complete bewilderment at what seemed like an incomprehensible, direct experimental refutation of one of the fundamentals of classical physics, I quickly grabbed a small ladder and repeated the test from about ten feet, the height of my garage ceiling. By doubling the length of fall I expected to allow the heavier ball to overtake the lighter. But again I saw exactly the same thing: The heavier ball was going much faster when it hit and once again appeared to be on the verge of overtaking the lighter.

My head reeling with my inability to comprehend what seemed to be a conspicuous contradiction of irrefutably established basic physics, I got a longer ladder out of its storage place and headed outside where I could set it up and then drop the balls from an even greater height in order to demonstrate the overtaking phenomenon so tantalizingly imminent in my previous tests. As I was doing this I realized I was being a fool. If the heavier ball was falling much faster and on the verge of overtaking the lighter when they hit after being dropped from about five feet, then when I repeated the test from twice this height the overtaking should have been observed. Indeed, it should have been conspicuous. But in both cases I saw exactly the same thing: _Viz._ , the heavier ball was going much faster when it hit and just about to overtake the lighter.

Clearly, what I was seeing had to be a perceptual illusion. And a little thought unraveled it. We have no perceptual apparatus for seeing speed. It must be inferred from aspects of what we can see. And when the two balls hit one couldn't miss seeing the heavier hit much, much harder, with much greater force. What our brains obviously do is make a preconscious perceptual inference of greater speed based upon the heavier object's conspicuously greater impact.

It is important to note: Like all perceptual processes, this inference is completely preconscious. For example, when one sees a triangle the brain knows it is a triangle because the eye counts the corners. This is not a theory or speculation. It's an experimental fact. Such eye movements have been measured with hi-tech equipment while persons were identifying different geometric figures. But we are not and can not be consciously aware of these eye movements nor of the brain processes which translate them into a perception. All of this is completely preconscious. And I can assure you I was not thinking "The heavy ball hit harder ergo it was traveling faster." Consciously I was thinking "What I'm seeing can't be happening! I must be going mad!" Preconsciously my visual perceptual brain inferred greater speed from the heavier ball's greater impact. Thus I literally "saw" the heavier ball about to overtake the lighter one.

Without question, neither Galileo nor any other person from his time had any kind of device with which to measure the actual speed of a dropped object. Clearly, Renaissance persons could only have determined such a thing by visual observation. Thus, if one of them ever performed an object dropping test like the above, he/she would have been victim of the same perceptual illusion as I. He/she would have "seen" the heavier object moving much faster and about to overtake the lighter when it hit the ground. And as my experience shows, the height of fall would have had no effect on this illusion. Therefore, I conclude and am absolutely convinced that since Galileo reported this totally unknown perceptual illusion, he must either have done the test himself, observed others doing it, or had been informed in detail of the results which some other experimenter had found.

If you concede my conclusion, then we have here another perfect example of the kind of data which can lead to the Fallacy of Affirmation of the Consequent. In this case it did not lead to such an error because Galileo did not try to invent a physics explanation of the incomprehensible dropping test results he reported. But had he done so, his explanation would certainly have been erroneous, for the two phenomena are not physics ones. One is neurophysiological and the other perceptual. And obviously no one in the Renaissance had any knowledge of neural functioning nor of preconscious perceptual processes. We therefore have another compelling example of something Aristotle knew so well and tried to teach us, but which disciples of the fallacious dogma of scientific truth refuse to believe: One's explanation of any phenomenon, no matter how compelling nor how well it fits the facts, can be completely wrong because of what one does not know. Ergo, experimental data can not and do not reveal truth.

### END
