# Proposition evaluation

#### A numerical evaluation of propositions

There is a simple numerical scheme by which people can express their evaluation of the reliability of any proposition by giving it a value. It is not the only such scheme, but I am confident that it is the best. It is used by scientists sometimes, but here for comparisons I call what I am describing “my scheme”.

In this scheme, we still consider that any given proposition, as stated by whoever proposes it, is in fact either true or false. In other words, either right or wrong. What we are expressing with our numerical evaluation is simply our opinion about the likelihood that, with more information (sufficient to achieve certainty), the proposition will be found to be true.

To give an example of another such scheme: in his book The God Delusion, Richard Dawkins also presents a numerical list of evaluations of propositions, but his scheme does not work anything like as well as the one I describe here. First, it contains an arbitrary but fixed number of points on what could have been a sort of scale (as mine is) but is in fact just a list of English phrases. The two extremes do represent, in a way, the same extremes as mine; but his scheme is very much weakened by the fact that the list of values bears no relation to the analogous idea of likelihood as probability. (Both my scheme and that of Dawkins are about the hazier idea of likelihood: whatever it is about a proposition or theory that leads us to adopt whatever attitude we do adopt towards its reliability; but only mine maps that idea onto the 0-to-1 range of probability.)

#### Range and outline definition

Like mathematical probability, the numbers in the scheme described here range from 0 to 1 (inclusive of both limits).
The number is a **real number**.
The value is often expressed as a percentage; a value of 0.5 can be called 50%, or “half and half”,
and a value of 0.98 is almost always called 98% (“98 per cent”).

The analogy to probability is limited. The range and nature of the number are the same, but the precise meaning is rather different. In probability theory, the probability of an event, which in broad general terms is somewhat similar to the “likelihood” of a proposition being true, is (strictly speaking) defined as the proportion of the total number of instances in which the event being evaluated occurs. In possibly the best known example, if a coin is tossed, the probability of it landing with its head side uppermost is defined in terms of the number of times the head is uppermost when the coin is tossed a very large number of times and a tally of results kept. For a correctly made coin with a fair tossing system, the probability of a head should be exactly half, that is 0.500...
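The frequency definition just described is easy to illustrate with a few lines of simulation (a sketch only; the function name is mine, and the simulated coin is assumed fair):

```python
import random

def head_frequency(tosses, seed=0):
    """Estimate the probability of heads as the proportion of heads
    in a large tally of simulated fair coin tosses."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(tosses))
    return heads / tosses

# The proportion settles towards 0.500... as the tally grows.
for n in (100, 10_000, 1_000_000):
    print(n, head_frequency(n))
```

The longer the tally, the closer the proportion comes to one half, which is exactly what the frequency definition of probability says it should do.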

I cannot (so far) offer any suggestion of how any such precise definition can be contrived for this numerical evaluation of propositions here. The number that is quoted is merely a shorthand way of expressing, with more precision than mere English words can (without using a great many of them each time), how confident a person is (at any given time) that a particular proposition will be shown to be reliable; that is, that it will not be shown to be wrong by any future empirical evidence. Here we are talking about the empirical science process as described in How scientific empiricism works.

This use of a number to express a degree of confidence in something is not, by any means, unique to evaluation of the probability of truth (correctness) of scientific propositions. The best known example of a context within which numbers are used to express the level of confidence of various people in various propositions is gambling. The typical figure who assigns a numerical value to each proposition in a category which interests them is the bookmaker, widely known as a bookie, or turf accountant (mainly because of the association with horse racing). For their work, bookmakers assign probabilities to each possible outcome of certain types of future event (chiefly in sport, especially horse racing), and then offer gamblers the opportunity to place a bet with odds defined (supposedly, anyway) by those probabilities. For a discussion of this subject, see Mathematics of bookmaking.
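The arithmetic that links a bookmaker’s probabilities to the odds they quote can be sketched as follows (a hypothetical illustration only; the names and figures are invented, not taken from any real book):

```python
def decimal_odds(p):
    """Fair decimal odds implied by probability p: the total payout
    per unit staked if the outcome occurs."""
    return 1 / p

def implied_probability(odds):
    """Probability implied by quoted decimal odds."""
    return 1 / odds

# A bookmaker's quoted probabilities for the outcomes of one event
# usually sum to more than 1; the excess (the "overround") is the margin.
book = {"Horse A": 0.50, "Horse B": 0.30, "Horse C": 0.25}
overround = sum(book.values()) - 1
for horse, p in book.items():
    print(horse, round(decimal_odds(p), 2))
print("overround:", round(overround, 2))
```

The overround is why the odds actually offered are (supposedly, anyway) a little meaner than the bookmaker’s true probabilities would imply.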

The similarity here is that a person is assigning a numerical probability to the idea of a given proposition turning out to be true at some point in the future. The difference between science and sport is that in sport you only have to wait for the result of the sporting event to know who won, at which point the bookie pays money to punters who bet on the winner and pockets the bets of all other punters on that event. In science, there may be nobody betting on any proposition being true; and it may be years before a given proposition becomes decidable, if it ever does. It might even be centuries, or it may remain strictly undecidable, though with 98% confidence probably true, for generations. (Bookies could not make a business out of taking bets on such things, and any punter would be a moron to place bets on such things.)

#### Meaning of the two extreme values

0 (zero) means certainly, completely false; and 1 means certainly, completely true, as is traditional. However, because the number is real, not binary, and we are in the world of human value judgements rather than that of digital logic, these two extremes and all the infinitely many numbers in between are treated quite differently from the way such values are treated within a computer. The two extremes are absolutes, and are (for example) correctly applied to two important kinds of proposition outside empirical science:

- mathematical propositions (within appropriate contexts)
- the absolute evaluations which religious people give to their beliefs.

Each mathematical proposition that is true is stated within an explicit context
in which its truth is absolutely certain. Many non-mathematicians will have only hazy
ideas about this. For example, a person may have learnt at school that
“the three angles of a triangle add up to two right angles”,
yet may have heard or worked out that on the surface of the earth this is not true.
Suppose that you are at the north pole and you go about 6000 miles south
(a quarter of the earth’s circumference through the poles)
then about 6000 miles west (along the equator) then 6000 miles north.
You will end up at the north pole again along a line at an angle of 90 degrees
from the line along which you departed, having traversed three sides of a triangle
of which all *three* angles are right angles.
What happened to absolute truth? Well, the answer is that, correctly stated,
the original proposition is absolutely true: the angle-sum result
holds within the context of **plane Euclidean geometry**.
This means the geometry of lines and shapes on an ideal (perfect) plane surface,
within the system documented by the ancient Greek master Euclid,
based on his axioms. However, there are **other geometries** even within the context of a plane,
based on other (more recently devised) axiomatic systems;
and in addition to all that, the geometry of “lines”
(which, viewed in 3-dimensional space, are actually great circles) and other shapes
on the surface of a sphere (to which the earth’s surface is a very rough approximation)
is a completely different geometry again, called (reasonably) spherical geometry.
Taken as a whole, spherical geometry is even stranger than you might guess
from this simple example. Still, all in all I hope this illustration
gives some notion of how precise one has to be when stating mathematical propositions.
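The three-right-angle triangle above is an instance of a general result of spherical geometry, Girard’s theorem: on a sphere of radius R, the angle sum of a triangle exceeds two right angles by the triangle’s area divided by R². A minimal sketch (the function name is mine):

```python
import math

def spherical_angle_sum(area_fraction):
    """Angle sum, measured in right angles, of a spherical triangle
    covering the given fraction of the sphere's surface, by Girard's
    theorem: angle sum = pi + area / R^2 (in radians)."""
    excess = 4 * math.pi * area_fraction  # area / R^2
    return (math.pi + excess) / (math.pi / 2)  # convert to right angles

# The pole-equator-pole triangle covers one eighth of the sphere,
# giving three right angles; a vanishingly small triangle recovers
# the Euclidean two right angles.
print(spherical_angle_sum(1 / 8))
print(spherical_angle_sum(0.0))
```

Note that as the triangle shrinks, the spherical result collapses into the Euclidean one, which is why small triangles drawn on the ground appear to obey the schoolbook rule.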

Still, Euclidean geometry is a good example of a branch of pure mathematics within which every true statement such as the one about the angles of a triangle is absolutely true so that we can give it a confidence value of 100 per cent.

It makes sense to discuss assigning values to propositions in two contexts: your own evaluations, and those of specific other people or categories of people. I discuss the evaluation of religious belief in Numerical evaluation of beliefs.

## The scheme for evaluating propositions in empirical science

That is what these extreme values are about, and that is why, using this system, no true scientist will ever give a value of 0 or 1 to any proposition outside of pure mathematics (taken as including logic), unless talking about everyday judgements of everyday things. An example of the latter is a scientist sitting at a table when somebody places a plate of dinner in front of them and asks whether they are sure that the plate of dinner is there. It would be pedantic to insist on applying the robust empiricist rule that their level of confidence in the proposition that the plate of dinner was there must remain less than 1.0!

Similarly, false propositions in those contexts (only) have value 0.

#### Meaning of values between the two extremes

The really useful feature of the scheme is the values in between 0 and 1. Any proposition about which a person has no opinion, or any for which they regard the likelihood of being true as “50–50” (that is, “the odds are evens”), they can give the value 0.5. (Here the reliability factor works just like probability.) However, the parts of the scheme between 0 and 0.5 and between 0.5 and 1 are purely subjective. The number is merely a shorthand way to say how close to fully reliable or fully unreliable a person feels a proposition is, based on its content, the person’s judgement of the competence of its source, the person’s own knowledge and experience of the subject matter, and any number of other factors. An example is when a contestant on a quiz show is asked, or phones a friend and asks them, how confident they are that a given choice is the right answer, and the reply is (say) “90 per cent”: that level of confidence equates to a reliability factor of 0.9. Again, the percentage and the decimal number are arithmetically one and the same.

Using this scheme, a scientist who “believes” in a theory
(such as the Darwinian theory of evolution, or the theory of plate tectonics)
gives the theory a reliability factor close to *but always still different from* 1
— perhaps 0.98.
Colloquially, in our terms “believes” simply means
“gives a reliability rating very close to 1”.
Still, I advise any scientist never to use the words “believe” or “belief”
while discussing the attitude of any true scientist to any proposition, in science or otherwise.
They should leave these words to the religious fanatics.

This highlights a vital distinction: the small numerical difference between the 1.0 of the religious believer and the scientist’s 0.98 represents a huge difference in outlook.

Many scientific theories are more tentative; confidence in some theories might only be 0.9, or 0.8 or even 0.7; if it reaches 0.6 it is only a little more likely than its opposite. Remember, when the value drops below 0.5 it is less likely to be right than it is to be wrong.

The beauty of this scheme as a form of shorthand, which is all it is, is that the number conveys instantly an idea of how confident a given person feels. Yet whether the level is at 0.01 (very unlikely to be true) or at 0.99 (very likely to be true) the confidence is not 1.0; at 0.99 it is still just that little way off, reminding us that any evaluation can be revised.

You may still ask why we do not just say that we think a theory is correct,
give it a value of 1.0, and then say we will re-evaluate it
and give it a value of 0.8 or 0.7 (or even less than 0.5 if a bad mistake has been made!)
if some new evidence comes to light suggesting it may be wrong.
Well, the answer to that is that *it is against the rules of the scheme*
to do so. I have defined the extreme values as meaning
that anyone who gives a proposition one of those values regards their view as immutable;
they are supposedly certain that it can never, ever, be changed.
We specifically use values different from 0 and 1
for any proposition for which we admit any possibility that it may have to be revised.
The moment a scientist — privately or publicly — takes that last step
to 0.0 or to 1.0, according to the rules for this scheme,
the detachment and skepticism of the scientific method have been abandoned
and the individual has stepped off the scientific straight and narrow
and begun to treat a scientific theory with a quasi-religious faith,
and that is intellectually fatal. As robust empiricists, *we do not do it.*

Incidentally, the similarity with mathematical probabilities gives this scheme another benefit, as follows. If a person is weighing up several alternative, mutually exclusive propositions (perhaps alternative theories to explain some phenomenon) then, just as for alternative possible outcomes in probability theory, the reliability factors of the alternatives cannot sum to more than 1; and since in this scheme certainty is forbidden, they must sum to less than 1. Thus, if you have three possible theories, each equally likely to be right on the available evidence, then none of them can have a reliability factor greater than 0.33 (to two places of decimals), so that between them they total 0.99. And they can only have factors that high if you are extremely confident (0.99, or 99% sure, in fact) that one of the three is the correct one! This tells us that, with a value of 0.33 each, the likelihood of any one of these three (remember: mutually exclusive and equally likely) possibilities is worse than evens. That can be sobering for people who are used only to qualitative, almost literary, verbal descriptions of how they feel about various propositions, theories, and ideas about how some aspect of the world is, or why it is the way it is.
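That constraint can be sketched in a few lines (the function name and the figures are mine, offered only as an illustration of the rule, not as part of the scheme’s definition):

```python
def total_reliability(values):
    """Sum the reliability factors assigned to mutually exclusive
    alternatives.  Under this scheme each factor must lie strictly
    between 0 and 1, and the factors must sum to less than 1, since
    certainty that one of the alternatives is right is itself
    forbidden."""
    total = sum(values)
    if any(not (0 < v < 1) for v in values) or total >= 1:
        raise ValueError("inconsistent reliability factors")
    return total

# Three mutually exclusive, equally plausible rival theories:
print(total_reliability([0.33, 0.33, 0.33]))
```

An assignment such as 0.5 and 0.6 to two rival theories would be rejected outright, since it amounts to claiming more than complete certainty that one of the two is right.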

## What these values are not

Remember what I said in the second paragraph above: an evaluation of a proposition using numbers in the range from 0 to 1, for example 0.7, does not claim that the proposition is 0.7 (70 per cent) true! It says that we reckon that the probability of the proposition turning out to be true, if and when enough information is available to establish its truth beyond any doubt, is 0.7.

So, 0.7