# The semantics of belief - Numerical evaluation

#### A numerical evaluation of propositions

I propose a simple numerical scheme by which people can express their evaluation of the reliability of any proposition by giving it a value. In his book *The God Delusion*, Richard Dawkins also presents a list of evaluations of propositions, but his scheme does not work anything like as well as mine. First, it contains an arbitrary but fixed number of points on what could be a sort of scale (as mine is) but is in fact just a list of English phrases. The two extremes do represent, in a way, the same extremes as mine; but his scheme is much weakened by the fact that his list of values bears no relation to the analogous idea of likelihood as probability. (Both my scheme and that of Dawkins concern the hazier idea of likelihood as whatever it is about a proposition or theory that leads us to adopt whatever attitude we do adopt towards its reliability.)

#### Range and outline definition

Like mathematical probability, the numbers in my scheme range from 0 to 1 (inclusive of both limits).
The number is a **real number**.

The analogy to probability is limited. The range and nature of the number are the same, but the precise meaning is rather different. In probability theory the probability, which in broad terms is somewhat similar to the “likelihood” of a proposition being true, is (strictly speaking) defined in terms of the proportion of the total number of instances in which the event being evaluated occurs. For example (possibly the best-known example), if a coin is tossed, the probability of it landing with its head side uppermost is defined in terms of the number of times the head is uppermost when the coin is tossed a very large number of times and a tally of results kept. For a correctly made coin with a fair tossing system, the probability of a head should be exactly half, that is 0.500...
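The frequency definition just described can be illustrated by a short simulation; this is only a sketch (the function name is my own), tallying heads over many simulated tosses:

```python
import random

def estimated_probability(trials: int) -> float:
    """Estimate P(heads) for a fair coin as the proportion of heads
    in a large number of simulated tosses (the frequency definition)."""
    heads = sum(random.random() < 0.5 for _ in range(trials))
    return heads / trials

# The larger the tally, the closer the proportion tends to be to 0.500...
print(estimated_probability(100_000))
```

Running it repeatedly with ever larger trial counts shows the proportion settling towards 0.5, which is exactly what the frequency definition appeals to.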

I cannot (so far) offer any suggestion of how any such precise definition could be contrived for my numerical evaluation of propositions. I offer it merely as a shorthand way of expressing, with more precision than plain English words can manage without using a great many of them each time, how at any given time a person regards a proposition in terms of its reliability, or how likely it is to prove reliable when more information becomes available.

#### Meaning of the two extreme values

0 means certainly, completely false; and 1 means certainly, completely true, as is traditional. However because the number is real, not digital, and we are in the world of human value judgements and not that of digital logic, both these extremes and all the infinitely many numbers in between are treated in a way quite different from the meanings within a computer. The two extremes are absolutes, and are correctly applied only to two categories:

- mathematical propositions (within appropriate contexts), and
- the absolute evaluations which religious people give to their beliefs.

Each mathematical proposition that is true is stated within an explicit context
in which its truth is absolutely certain. Many non-mathematicians will have only hazy
ideas about this. For example, a person may have learnt at school that
“the three angles of a triangle add up to two right angles”,
yet may have heard or worked out that on the surface of the earth this is not true.
Suppose that you are at the north pole and you go about 6000 miles south
(a quarter of the earth’s circumference through the poles)
then about 6000 miles west (along the equator) then 6000 miles north.
You will end up at the north pole again along a line at an angle of 90 degrees
from the line along which you departed, having traversed three sides of a triangle
of which all *three* angles are right angles.
What happened to absolute truth? Well, the answer is that, correctly stated,
the original proposition is absolutely true, namely that
the sum of the angles of a triangle is two right angles within the context of **plane Euclidean geometry**,
which means the geometry of lines and shapes on an ideal (perfect) plane surface
within the system documented by the ancient Greek master Euclid,
based on his axioms. However, there are **other geometries** even within the context of a plane,
based on other (more recently devised) axiomatic systems;
and in addition to all that, the geometry of “lines”
(which in 3-dimensional space are actually great circles) and other shapes
on the surface of a sphere (to which the earth’s surface is a very rough approximation)
is a completely different geometry again, called (reasonably) spherical geometry.
Taken as a whole, spherical geometry is even stranger than you might guess
from this simple example. Still, all in all I hope this illustration
gives some notion of how precise one has to be when stating mathematical propositions.
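The three-right-angle triangle above can even be checked numerically. Girard’s theorem says that on a sphere the angle sum of a spherical triangle exceeds 180 degrees by the triangle’s area divided by the square of the sphere’s radius (the “spherical excess”). A small sketch, assuming an ideal sphere (the helper names are my own):

```python
import math

def angle_sum_degrees(area: float, radius: float) -> float:
    """Girard's theorem: angle sum = 180 degrees plus the spherical
    excess, which is the triangle's area divided by radius squared."""
    excess = area / radius**2          # spherical excess, in radians
    return math.degrees(math.pi + excess)

R = 3958.8                             # mean radius of the earth in miles (approx.)
octant_area = 4 * math.pi * R**2 / 8   # the pole-to-equator triangle is 1/8 of the sphere
print(angle_sum_degrees(octant_area, R))   # three right angles: 270 degrees
```

The pole-to-equator triangle covers exactly one octant of the sphere, so its excess is 90 degrees and its angle sum is 270 degrees, as the walk from the pole suggested.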

Back to the scheme for evaluating propositions. It makes sense to discuss assigning values to propositions in two contexts: your own evaluations, and those of specific other people or categories of people. At this point in the development of the method, we can say the following. Each religious tenet, or article of faith, is by definition given 1 by believers in that religion (unless they are suffering a “crisis of faith”). Such people might not be party to this discussion, or might refuse to take part, or might join in but refuse to accept the idea of evaluating propositions in this way. Many religious people are technophobes who dislike science, and especially mathematics. That is their prejudice.

However, in my terms, the fact that they give these absolute values to their beliefs is evident in their reactions to suggestions that their beliefs are false. Whereas a scientist would, given reasonable grounds for doubt of a theory, calmly evaluate it and (if using my scheme) assign a lower evaluation if the new countervailing evidence seems to have merit and to warrant re-evaluation, a religious person will generally reject all suggestions and countervailing argument out of hand, or try to change the subject or the rules of the argument, or even (especially in the case of religious authorities from time immemorial) try to suppress the new evidence, and possibly to silence by more drastic means any person raising the argument or evidence.

That is what these extreme values are about, and that is why, using this system, no true scientist will ever give a value of 0 or 1 to any proposition outside of pure mathematics (taken as including logic) unless talking about everyday judgements of everyday things. An example of the latter is if they are sitting at a table, and somebody places a plate of dinner in front of them and asks whether they are sure that the plate of dinner is there. It would be pedantic to insist on applying the robust empiricist rule that their level of confidence in the proposition that the plate of dinner was there must remain less than 1.0!

Similarly, false propositions in those contexts (only) have value 0.

#### Meaning of values between the two extremes

The really useful feature of the scheme is the values in between 0 and 1. Any proposition about which a person has no opinion, or any for which they regard the likelihood of being true as “50–50” (that is, “the odds are evens”), they can give the value 0.5. (Here the reliability factor works just like probability.) However, the parts of the scheme between 0 and 0.5 and between 0.5 and 1 are purely subjective. The value is merely a shorthand way to say how close to fully reliable or fully unreliable a person feels a proposition is, based on its content, their judgement of the competence of the source of the information, their own knowledge and experience of the subject matter, and any number of other factors. An example is when a contestant on a quiz show is asked, or phones a friend and asks them, how confident they are that a given choice is the right answer, and the reply is (say) “90 per cent”: that level of confidence equates to a reliability factor of 0.9. Indeed, the percentage and the decimal number are arithmetically the same quantity.
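The correspondence between a stated percentage and a reliability factor is just a rescaling; a minimal sketch (the function name is my own invention):

```python
def reliability_from_percent(confidence_percent: float) -> float:
    """Convert a stated confidence such as '90 per cent' into a
    reliability factor on the 0-to-1 scale: the same number rescaled."""
    if not 0 <= confidence_percent <= 100:
        raise ValueError("confidence must be between 0 and 100 per cent")
    return confidence_percent / 100

print(reliability_from_percent(90))  # the quiz contestant's 90% becomes 0.9
```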

Using this scheme, a scientist who “believes” in a theory
(such as the Darwinian theory of evolution, or the theory of plate tectonics)
gives the theory a reliability factor close to *but always still different from* 1
— perhaps 0.98.
Colloquially, in our terms “believes” simply means
“gives a reliability rating very close to 1”.
Still, I advise any scientist never to use the words “believe” or “belief”
while discussing the attitude of any true scientist to any proposition, in science or otherwise.

This highlights a vital distinction: the small numerical difference between the religious believer’s 1.0 and the scientist’s 0.98 represents a huge difference in outlook.

Many scientific theories are more tentative; confidence in some might be only 0.9, or 0.8, or even 0.7; at 0.6 a theory is only a little more likely to be right than wrong. Remember, when the value drops below 0.5 the proposition is more likely to be wrong than right.

The beauty of this scheme as a form of shorthand, which is all it is, is that the number conveys instantly an idea of how confident a given person feels. Yet whether the level is at 0.01 (very unlikely to be true) or at 0.99 (very likely to be true) the confidence is not 1.0; at 0.99 it is still just that little way off, reminding us that any evaluation can be revised.

You may still ask why we do not just say that we think a theory is correct,
give it a value of 1.0, and then say we will re-evaluate it
and give it a value of 0.8 or 0.7 (or even less than 0.5 if a bad mistake has been made!)
if some new evidence comes to light suggesting it may be wrong.
Well, the answer to that is that *it is against the rules of the scheme*
to do so. I have defined the extreme values as meaning
that my view of the proposition given one of them is immutable;
I consider it can never, ever, be changed.
We specifically use values different from 0 and 1
for any proposition for which we admit any possibility that it may have to be revised.
The moment a scientist — privately or publicly — takes that last step
to 0.0 or to 1.0, according to my rules for this scheme,
the detachment and skepticism of the scientific method have been abandoned
and the individual has stepped off the scientific straight and narrow
and begun to treat a scientific theory with a quasi-religious faith,
and that is intellectually fatal. *We do not do it.*
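The rule just stated could even be enforced mechanically. The sketch below is only an illustration of the rule, and the function name is my own invention: empirical propositions may never receive the absolute values, while mathematical (or logical) propositions may.

```python
def check_reliability(value: float, mathematical: bool = False) -> float:
    """Enforce the rules of the scheme: values lie in [0, 1], and the
    absolutes 0.0 and 1.0 are reserved for mathematical propositions."""
    if not 0.0 <= value <= 1.0:
        raise ValueError("reliability factors lie between 0 and 1")
    if not mathematical and value in (0.0, 1.0):
        raise ValueError("0 and 1 are reserved for mathematical propositions")
    return value

check_reliability(0.98)                    # fine: a well-supported theory
check_reliability(1.0, mathematical=True)  # fine: a proved theorem
# check_reliability(1.0)                   # would raise: the forbidden last step
```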

Incidentally, the similarity with mathematical probabilities gives this scheme another benefit, as follows. If a person is weighing up several alternative, mutually exclusive propositions — perhaps alternative theories to explain some phenomenon — then, just as in probability theory for alternative possible outcomes, the sum of the reliability factors of the alternatives must not exceed 1. Thus, if you have three possible theories, each equally likely to be right on the available evidence, then each of them cannot have a reliability factor greater than 0.33 (to two places of decimals), meaning that between them they total 0.99. And they can only have factors that high if you are extremely confident (0.99, or 99% sure, in fact) that one of the three is the correct one! This tells us that, with a value of 0.33 each, the likelihood of any one of these three (remember: mutually exclusive and equally likely) possibilities is worse than evens. That can be sobering for people who are used only to qualitative, almost literary, verbal descriptions of how they feel about various propositions, theories, and ideas about how some aspect of the world is, or why it is the way it is.
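The arithmetic of that last example can be sketched in a few lines (a toy illustration; the function name is my own):

```python
def max_factor_per_alternative(n: int, total_confidence: float = 1.0) -> float:
    """For n mutually exclusive, equally likely alternatives, the
    reliability factors sum to at most total_confidence, so each
    alternative can carry at most total_confidence / n."""
    return total_confidence / n

# Three equally likely rival theories: each at most 1/3, which is
# already worse than evens, even if we are certain one of them is right.
print(round(max_factor_per_alternative(3), 2))  # 0.33 to two decimal places
```

Even at total certainty that one of the three is correct, no single theory can rate better than about 0.33, just as the text argues.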