“Best UFO Cases” by Isaac Koi

 

PART 20:  Quantitative criteria : Hynek – Strangeness and Probability

 

Some Relevant Definitions

 

Before considering Hynek’s Strangeness and Probability ratings, it may be helpful to briefly recap a few of Hynek’s relevant definitions.

 

In his book “The UFO Experience” (1972), Hynek divided sightings into two divisions: “(I) those reports in which the UFO is described as having been observed at some distance; (II) those involving close-range sightings” (see Footnote 20.17).

 

The distant sightings were divided by Hynek into:

 

  1. “Nocturnal lights” – “those seen at night”
  2. “Daylight discs” – “those seen in the daytime”
  3. “Radar-Visual” – “those reported through the medium of radar”

 

The close-range sightings were divided by Hynek into:

 

(1) Close Encounter of the First Kind – “the reported UFO is seen at close range but there is no interaction with the environment (other than trauma on the part of the observer)”

 

(2) Close Encounter of the Second Kind – “physical effects on both animate and inanimate material are noted”

 

(3) Close Encounter of the Third Kind – “the presence of ‘occupants’ in or about the UFO is reported”.

 

In addition to the above definitions, J Allen Hynek proposed the use of a Probability Rating and a Strangeness Rating.  Those ratings are probably the best-known quantitative criteria suggested for evaluating UFO cases. However, as with the other criteria outlined in PART 19: Quantitative criteria : Introduction, their actual application has been very limited.

 

This webpage examines Hynek’s proposed Probability and Strangeness Ratings, rather than the above definitions.  However, I note in passing that those definitions (while very widely adopted) are acknowledged by quite a few UFO researchers as giving rise to various difficulties.  For example, Jenny Randles has concisely identified three main difficulties: “Firstly, there is a clear overlap where it can often be very hard to determine which category a case fits into.  This is particularly so between the Daylight Disc, CEI and CEII cases.  Secondly, it is not very acceptable to distinguish between close encounter and non-close encounter on the basis of distance. An arbitrary boundary may well be set (e.g. 100 metres) where anything closer than this becomes labelled a close encounter, but it is well known that witness estimates of distance are, to say the least, inaccurate.  Thirdly, there seems to be not enough distinction between the higher strangeness types of reports – the very reports we ought to be the most interested in” (see Footnote 20.13).

Similarly, this is not the place to explore suggested refinements of (or additions to) Hynek’s classification. I simply note in passing that various suggestions have been made. Of the various classification systems which have sought to develop Hynek’s definitions, particularly noteworthy are those put forward by Jenny Randles in several publications (see Footnote 20.11 to Footnote 20.16 and the discussion in PART 22: Quantitative criteria : BUFORA’s case priority).

 

Many researchers have, for example, suggested adding a “Close Encounters of the Fourth Kind” category (usually, but not always, to deal with alleged alien abductions).  However, there is no universal acceptance of any of the proposed variations to Hynek’s classifications. Indeed, there is very considerable variation in the proposed additional classes of reports – even within books by the same authors. For example, in their book “UFOs: A British Viewpoint” (1979) Jenny Randles and Peter Warrington referred to CEIV as “encounters with psychic effects”, a category in which “all reports of a psychic (here defined as ‘apparently non-physical’) nature take place.  This often means abduction claims, where there are time-lapses and other ‘non-real’ elements” (see Footnote 20.14).  However, in a subsequent book entitled “Science and the UFOs” (1985) Jenny Randles and Peter Warrington gave the following definitions: “A CE3 case involves observation of an animate alien entity in association with a UFO.  A CE4 goes one step beyond and includes contact between that entity and the witness” (see Footnote 20.15).  The definition in their later book is closer to the commonest usage of CE4 that has emerged in subsequent decades.

 

 

 

 

 

Hynek’s Strangeness and Probability Ratings

 

Hynek discussed these proposed ratings in several books, particularly in his book “The UFO Experience” (1972). He also discussed those ratings in his essay in “UFO’s: A Scientific Debate” (1972) (edited by Carl Sagan and Thornton Page).  In that essay, he gave the following summary of “strangeness” and “probability” (or “credibility”) ratings:

 

“The degree of ‘strangeness’ is certainly one aspect of a filtered UFO report. The higher the ‘strangeness index’ the more the information aspects of the report defy explanation in ordinary physical terms.  Another significant dimension is the probability that the report refers to a real event; in short, did the strange thing really happen? And what is the probability that the witnesses described an actual event? This ‘credibility index’ represents a different evaluation, not of the report in this instance, but of the witnesses, and it involves different criteria.  These two dimensions can be used as coordinates to plot a point for each UFO report on a useful diagram.  The criteria I have used in estimating these coordinates are: For strangeness: How many individual items, or information bits, does the report contain which demand explanation, and how difficult is it to explain them, on the assumption that the event occurred? For credibility: If there are several witnesses, what is their collective objectivity? How well do they respond to tests of their ability to gauge angular sizes and angular rates of speed? How good is their eyesight? What are their medical histories? What technical training have they had? What is their general reputation in the community? What is their reputation for publicity-seeking, for veracity? What is their occupation and how much responsibility does it involve? No more than quarter-scale credibility is to be assigned to one-witness cases” (see Footnote 20.04).

 

Hynek described the two ratings in more detail in his book “The UFO Experience” (1972), as follows:

 

The Strangeness Rating:

 

“The Strangeness Rating is, to express it loosely, a measure of how ‘odd-ball’ a report is within its particular broad classification. More precisely, it can be taken as a measure of the number of information bits the report contains, each of which is difficult to explain in common-sense terms. A light seen in the night sky the trajectory of which cannot be ascribed to a balloon, aircraft, etc. would nonetheless have a low Strangeness Rating because there is only one strange thing about the report to explain: its motion.  A report of a weird craft that descended to within 100 feet of a car on a lonely road, caused the car’s engine to die, its radio to stop, and its lights to go out, left marks on the nearby ground, and appeared to be under intelligent control receives a high Strangeness Rating because it contains a number of separate very strange items, each of which outrages common sense” (see Footnote 20.02).

 

The Probability Rating:

 

“Assessment of the Probability Rating of a report becomes a highly subjective matter. We start with the assessed credibility of the individuals concerned in the report, and we estimate to what degree, given the circumstances at this particular time, the reporters could have erred.  Factors that must be considered here are internal consistency of the given report, consistency among several reports of the same incident, the manner in which the report was made, the conviction transmitted by the reporter to the interrogator, and finally, that subtle judgment of ‘how it all hangs together’” (see Footnote 20.03).

 

 

Hynek made the following comments about assigning relevant numbers for these two criteria:

 

“Ideally, a meaningful Probability Rating would require the judgment of more than one person.  Such luxury of input is rarely available.  … In my own work, I have found it relatively easy to assign the Strangeness number (I use 1 to 10) but difficult to assign a Probability Rating. Certainty (P=10) is, of course, not practically attainable; P=0 is likewise impossible under the circumstances since the original report would not have been admitted for consideration.  The number of persons involved in the report, especially if individual reports are made, is most helpful.  I do not assign a Probability Rating greater than 3 to any report coming from a single reporter, and then only when it is established that he has a very solid reputation” (see Footnote 20.05).
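Hynek’s guidance can be read as a small set of constraints on the two numbers.  The following is a minimal sketch, in Python, of those constraints as I read them; the function name and the idea of expressing the rules as code are mine, not Hynek’s, and the only rules applied are the ones quoted above.

# A minimal sketch (mine, not Hynek's own procedure) of the constraints he describes above:
# Strangeness on a 1-10 scale, a Probability that in practice can never be 0 or 10,
# and a ceiling of 3 on the Probability of any single-witness report.

def check_hynek_ratings(strangeness: int, probability: int, single_witness: bool) -> None:
    """Raise ValueError if a proposed Strangeness/Probability pair breaks the rules quoted above."""
    if not 1 <= strangeness <= 10:
        raise ValueError("Hynek assigned Strangeness on a scale of 1 to 10")
    if not 1 <= probability <= 9:
        raise ValueError("Hynek treated P=0 and P=10 as unattainable in practice")
    if single_witness and probability > 3:
        raise ValueError("Hynek assigned no more than P=3 to a single-reporter case")

check_hynek_ratings(strangeness=4, probability=3, single_witness=True)    # acceptable
# check_hynek_ratings(strangeness=4, probability=5, single_witness=True)  # would raise ValueError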

 

Various other UFO researchers have discussed Hynek’s proposed rating system.  Most discussions merely give a summary of that proposed system, with little meaningful comment or analysis.  However, several researchers have sought to develop, or comment upon, Hynek’s proposal.

 

Some comments have been fairly dismissive.  For example, after suggesting that McDonald’s interviews of UFO witnesses merely “confirm his well-known bias in favour of ETH”, Menzel commented that “… Hynek’s indexes of ‘credibility’ and ‘strangeness’ are equally subjective.  Study of them may throw some light on Dr Hynek but they are unlikely to contribute much to the UFO problem” (see Footnote 20.01).

It is not merely debunkers who have questioned the subjectivity of these ratings.  For example, Robert Moore (a British ufologist who has edited several UFO magazines, including one published by BUFORA) has commented that Hynek's application of the Strangeness/Probability system "was always subjective - in that it had no fixed data criteria, ratings of cases were based solely on judgement" and that later proposed refinements "attempted to address this" (see Footnote 20.24).

Before turning to those proposed refinements, I note that one of the simplest but (in my view) most useful suggestions in relation to Hynek’s rating system is to produce an overall score for a UFO report by multiplying the two numbers together.  Don Berliner has suggested that “J Allen Hynek was on the right track” in proposing the Strangeness/Probability rating system and that “some system of establishing the relative usefulness of a report is needed” because “for decades we have been far too unscientific about judging the merits of reports, and this has led to a great waste of effort”.  He also suggested the use of a “sighting coefficient”, suggesting that “by multiplying the two numbers, the report can be given a K/U (coefficient of usefulness) which will establish its potential for helping to solve the mystery, relative to other reports” (see Footnote 20.06).

 

Personally, I tend to think of Don Berliner’s “sighting coefficient” in terms of a sighting’s “total score” or “Berliner Score”.
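To make the arithmetic explicit, Berliner’s “coefficient of usefulness” is simply the product of the two ratings.  A one-line Python sketch (my own illustration, not Berliner’s code):

def berliner_coefficient(strangeness: int, credibility: int) -> int:
    """Berliner's K/U 'coefficient of usefulness': the Strangeness rating multiplied by the Credibility rating."""
    return strangeness * credibility

# Berliner scored the 1964 Socorro case as Strangeness 9, Credibility 3:
print(berliner_coefficient(9, 3))  # 27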

 

 

 

Don Berliner’s Strangeness Scale and Credibility Scale

 

Don Berliner’s suggestion of a “sighting coefficient” (obtained, as discussed above, by multiplying the Strangeness rating by the Probability rating) was coupled with the following “Strangeness Scale” and “Credibility Scale” (see Footnote 20.06).

 

Don Berliner’s “Strangeness Scale” :

0 – Identified as a known object or phenomena, or a report lacking a clear UFO content.

1 – Night light with no apparent object.

2 – Night object

3 – Daylight object seen at a distance

4 - Night Close Encounter of the First Kind

5 – Daylight CE-I

6 – Ambiguous CE-II

7 – Unambiguous CE-II

8 – CE-III

9 – CE-III with occupant reaction to the witness

10 – CE-III with meaningful communication

 

Don Berliner’s “Credibility Scale” :

0 – Witnesses lacking believability

1 – Single average witness

2 – Multiple average witnesses

3 – Single exceptional witness

4 – Multiple exceptional witnesses

5 – Radar/visual

6 – Still photos shot by amateurs

7 – Still photos shot by professionals

8 – Amateur movies or videotape

9 – Professional movies or videotape

10 – Live television

 

Don Berliner sought to illustrate the application of the Strangeness Scale and the Credibility Scale (and his “sighting coefficient”) to several of the best known UFO cases.  He gave scores to 11 well-known sightings, the highest of those scores being 27 for Zamora's sighting at Socorro, as follows:

1947 Kenneth Arnold. Sighting Coefficient “3*3=9”

1948 Thomas Mantell. Sighting Coefficient “3*4=12”

1950 Trent photos. Sighting Coefficient “3*6=18”

1952 Washington Nationals. Sighting Coefficient “1*5=5”

1952 Nash/Fortenberry sighting.  Sighting Coefficient “2*4=8”

1957 Levelland. Sighting Coefficient “2*7=14”

1964 Socorro. Sighting Coefficient “9*3=27”

1966 “Swamp gas”, Dexter. Sighting Coefficient “6*2=12”

1973 Coyne helicopter. Sighting Coefficient “4*4=16”

1979 New Zealand film. Sighting Coefficient “1*9=9”

1980 Cash/Landrum incident. Sighting Coefficient “7*2=14”
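For convenience, the scores Berliner published for those eleven cases can be tabulated and ranked programmatically.  The dictionary in the Python sketch below simply transcribes the Strangeness and Credibility values he assigned; the code itself is my own illustration.

# Strangeness and Credibility values as published by Berliner for eleven well-known cases.
berliner_ratings = {
    "1947 Kenneth Arnold": (3, 3),
    "1948 Thomas Mantell": (3, 4),
    "1950 Trent photos": (3, 6),
    "1952 Washington Nationals": (1, 5),
    "1952 Nash/Fortenberry": (2, 4),
    "1957 Levelland": (2, 7),
    "1964 Socorro": (9, 3),
    "1966 'Swamp gas', Dexter": (6, 2),
    "1973 Coyne helicopter": (4, 4),
    "1979 New Zealand film": (1, 9),
    "1980 Cash/Landrum": (7, 2),
}

# Rank the cases by their sighting coefficient (Strangeness x Credibility).
ranked = sorted(berliner_ratings.items(), key=lambda item: item[1][0] * item[1][1], reverse=True)
for name, (s, c) in ranked:
    print(f"{name}: {s} x {c} = {s * c}")
# The highest coefficient in the sample is 27 (Socorro), out of a possible 100.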

 

 

 

Berliner’s personal conclusion from the fact that the highest score he assigned to the cases in his sample was 27 (out of a potential score of 100) was that this “seems to make it clear that there is a severe lack of reports which could be used to convince scientists, legislators and the general public that we are dealing with something so unusual that it deserves immediate attention”.

 

It is notable that J Allen Hynek’s own (subjective?) values for various cases are (like the numbers assigned by Berliner) also fairly low.  Hynek included several Strangeness Rating/Probability charts in his book “The UFO Experience” (1972) – see Footnote 20.07. Those tables included:

 

(1) 15 Daylight Discs. In this list, the highest Berliner “Sighting Coefficient” based on the Strangeness-Probability ratings assigned by Hynek is 24.

 

(2) 13 Nocturnal Lights. In this list, the highest Berliner “Sighting Coefficient” based on the Strangeness-Probability ratings assigned by Hynek is 24.

 

(3) 10 Radar-Visual cases. In this list, the highest Berliner “Sighting Coefficient” based on the Strangeness-Probability ratings assigned by Hynek is 36 points (Strangeness-Probability of 4-9) – 17 July 1957 SW United States

 

(4) 14 Close Encounters of the First Kind. In this list, the highest Berliner “Sighting Coefficient” based on the Strangeness-Probability ratings assigned by Hynek is 36 points (Strangeness-Probability of 4-9) – 10 October 1966 Newton, Ill.

 

(5) 23 Close Encounters of the Second Kind. In this list, the highest Berliner “Sighting Coefficient” based on the Strangeness-Probability ratings assigned by Hynek is 40 points (Strangeness-Probability of 5-8) – 2 November 1957 Levelland, Texas

 

(6)  5 Close Encounters of the Third Kind. In this list, the highest Berliner “Sighting Coefficient” based on the Strangeness-Probability ratings assigned by Hynek is 40 points (Strangeness-Probability of 5-8) – 26 June 1958 Boianai, New Guinea

 

Studying Don Berliner’s suggested ratings for a moment reveals some of the problems with such prescriptive scales of strangeness and (in particular) credibility.  For example, should reports from “multiple exceptional witnesses” only get a single credibility point more than a report from a “single exceptional witness”, and only two more than “multiple average witnesses”?  More importantly, is it really satisfactory to automatically give a still photo twice the credibility weighting of a report from a “single exceptional witness”?  Or to automatically give a movie twice the credibility weighting of “multiple exceptional witnesses”?

 

Such strict automatic weightings in relation to photographic material appear particularly unsatisfactory in an era when photographs are easily manipulated using computer software (e.g. Photoshop) and movies are created almost as easily using three dimensional modeling and video editing software (e.g. 3Dmax).

 

While the basic idea of multiplying strangeness and probability ratings together seems to me very useful, I disagree with the detailed content of Berliner’s suggested prescriptive ratings. For example, I have looked into several hoaxed videos of “aliens” that were created using CGI.  Where do such videos rate on Berliner’s scale? Well, it seems that they score a 9 or 10 on the Strangeness Scale and 8 or 9 on the Credibility Scale, resulting in a total Berliner “sighting coefficient” of about 80 to 90 – massively more than the scores assigned by Berliner to any of the classic sightings he considered in his article.

 

 

 

 

Jim Speiser’s suggested refinements

 

Don Berliner is not the only researcher to seek to reduce the subjectivity involved in assigning Strangeness and Probability Ratings.  Another researcher, Jim Speiser, has published different criteria for assigning relevant values. The relevant article appeared, like Don Berliner’s article referred to above, in the MUFON Journal in 1987 (see Footnote 20.08).

 

Jim Speiser’s article began by referring to Don Berliner’s article and stating that he quite agreed with Don Berliner’s objective of concentrating efforts on those reports which may contain information of long-term value.  Speiser said that his organization, Paranet, had for the previous year been using a similar system for “weighing UFO reports as a method of determining usefulness”.   He indicated that Paranet’s system used a scale of one to five, as follows:

 

Strangeness Factor: S1-S5

S1 – Explainable or explained

S2 - Probably explainable with more data

S3 - Possibly explainable, but with elements of strangeness

S4 - Strange; does not conform to known principles

S5 - Highly strange; suggests intelligent guidance

 

Probability Factor: P1 – P5

The "Probability" factor of a case relates to the credibility, number and separation of witnesses and/or the soundness of evidence gathered.

P1 - Not Credible or Sound

P2 - Unreliable; smacks of hoax

P3 - Somewhat credible or indeterminate

P4 - Credible; Sound

P5 - Highly Credible, leaving almost no doubt

 

In a subsequent online article, Jim Speiser has given the following examples of his “Probability Factor” values (see Footnote 20.09):

P1 - Known Hoaxer or UFO "Flake"; Hoax Photo

P2 - Repeat Witness; Conflicting Testimony

P3 - Standard, first-time witness; slight radiation reading

P4 - Multiple witnesses; pilot; clear photo

P5 - National Figure; Multiple independent witnesses; videotape

 

In his MUFON Journal article in 1987, Jim Speiser suggested (correctly, in my view) that the most obvious difference between his system and Berliner’s is that his “is more subjective” since “it is not dependent on categorization based on specific elements of the case; rather it calls for a more general judgment of how useful the various elements are to the advancement of our knowledge”.

Of course, the cost of that greater subjectivity and flexibility is that the values assigned can vary considerably from individual to individual – limiting the usefulness of the system as a means of comparing the importance of various cases (particularly if the values are assigned by different researchers or groups).

In my view, it is doubtful that Speiser’s suggestions meaningfully limit the subjectivity inherent in Hynek’s original proposals.  They amount to saying that a value of 1 is very low, a value of 2 is low, 5 is very high, and so on.

 

Jim Speiser included some illustrations of the values that he would assign to several high-profile cases, duplicating the list used by Don Berliner in his article:

 

1947 Kenneth Arnold. S4/P3

1948 Thomas Mantell. S2/P5

1950 Trent photos. S5/P4

1952 Washington Nationals. S5/P5

1952 Nash/Fortenberry sighting.  S5/P5

1957 Levelland. S5/P5

1964 Socorro. S5/P3

1966 “Swamp gas”, Dexter. S3/P5

1973 Coyne helicopter. S5/P5

1979 New Zealand film. S3/P3

1980 Cash/Landrum incident. S5/P4

 

Since Jim Speiser was only using a scale of 1 to 5, these values are (relative to those assigned by J Allen Hynek and Don Berliner) relatively high.  Given the significant difference in the level of ratings, it is difficult to completely ignore Menzel’s suggestion that study of Hynek’s indexes of ‘credibility’ and ‘strangeness’ “may throw some light on Dr Hynek but they are unlikely to contribute much to the UFO problem” (see Footnote 20.01).
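Purely as an illustration (and not something proposed by either researcher), the two sets of ratings for the same cases can be rescaled to fractions of the maximum available on each scale, which makes the difference in “level” easier to see.  The Python sketch below uses three of the pairs of values listed above; the rescaling is my own.

# Ratings for the same cases on Berliner's 0-10 scales and Speiser's 1-5 scales (values as listed above).
berliner = {"1964 Socorro": (9, 3), "1957 Levelland": (2, 7), "1973 Coyne helicopter": (4, 4)}
speiser = {"1964 Socorro": (5, 3), "1957 Levelland": (5, 5), "1973 Coyne helicopter": (5, 5)}

for case in berliner:
    b_s, b_p = berliner[case]
    s_s, s_p = speiser[case]
    # Express each rating as a fraction of the maximum available on its own scale.
    print(f"{case}: Berliner S {b_s / 10:.1f}, P {b_p / 10:.1f}  |  Speiser S {s_s / 5:.1f}, P {s_p / 5:.1f}")
# Even after rescaling, Speiser's values sit much closer to the top of his scale.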

 

One question that I will return to in PART 28 is whether obtaining the judgment of more than one person would help smooth out relevant subjective biases and produce a more useful result.  I note that Hynek himself suggested that “Ideally, a meaningful Probability Rating would require the judgment of more than one person.  Such luxury of input is rarely available” (see Footnote 20.05).

 

 

 

 

Claude Poher’s suggested refinements

 

Claude Poher, the first director of GEPAN (the UFO investigative office under the French government's National Center for Space Sciences), has also suggested a different system which seeks to reduce the subjectivity involved in Hynek’s Strangeness and Probability Ratings.

On his website (www.premiumwanadoo.com/universons), Claude Poher suggested that the credibility criterion should be based on “the known parameters about the witnesses and their method of observation”, NOT taking into account “the anecdotal story of what the witnesses have seen” in order to “separate the credibility from the strangeness criterion of an observation”. He suggested that “credibility belongs to the witnesses, strangeness belongs to the observed facts”.

I am aware that ratings suggested by Poher are also given on pages 85-92 of his “Etude statistique des rapports d’observations du phénomène O.V.N.I. Etude menée en 1971, complétée en 1976”, available on-line on the GEIPAN site (see Footnote 20.18).  However, since that document is in French I have not been able to read it.  The comments below therefore relate to Poher’s suggestions as set out on his website.

In relation to the credibility criterion of an observation, he begins by assigning a number on a scale very similar to Speiser’s:

0 = absolutely not credible.
1 = very little credible.
2 = a little credible.
3 = credible.
4 = very credible.
5 = perfectly credible.

Poher commented that “This note depends only of the witnesses and of the observation method. In our computer file, we have one rubric for the observation method and three rubrics concerning the witnesses : their number, their age, their ‘competencies’. We can ascribe a different note for each rubric, and combine the four notes according to relative ‘weights’ for the rubrics. This means the relative importance of each rubric as compared to the others.”

He acknowledged that “All this is quite subjective” but suggested that “these are only comparison criteria”.

Poher noted that the relative weights of the four rubrics were as follows:

(1) Relative weight of the number of witnesses = 31 %

(2) Relative weight of the age of the main witness = 7%

(3) Relative weight of the "socio-professional code" of the main witness = 31%

(4) Relative weight of the method of observation = 31%

Total = 100 %

Thus, the witness age was “three times less important than the three other criteria”, with those other criteria being given equal weight.  Credibility was thus = (31 x Value 1 + 7 x Value 2 + 31 x Value 3 + 31 x Value 4) / 100
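Expressed as code, Poher’s combination is simply a weighted average of the four rubric values (each on his 0 to 5 scale).  The short Python sketch below is my own rendering of the formula quoted above, and the example values are hypothetical.

def poher_credibility(n_witnesses: int, age: int, socio_professional: int, method: int) -> float:
    """Combine Poher's four rubric values (each 0-5) using his stated weights of 31/7/31/31 per cent."""
    return (31 * n_witnesses + 7 * age + 31 * socio_professional + 31 * method) / 100

# Hypothetical example: two witnesses (2), main witness aged 21-59 (5),
# a technician (3), observing with the naked eye at 200-1000 m (3):
print(poher_credibility(2, 5, 3, 3))  # 2.83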

 

In terms of the values to be assigned for each rubric, Poher noted the following:


In relation to "number of witnesses":

0 if the number is unknown.
1 for one witness.
2 for two witnesses.
3 for 3 to 9 witnesses.
4 for 10 to 100 witnesses.
5 for more than 100 witnesses.

Poher himself accepted that these values are “extremely ‘severe’” and that they “penalize considerably most of the testimonials, where the number of simultaneous witnesses is rarely larger than five”.
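As an illustration of how such rubric tables translate raw case details into 0-5 values, here is my own rendering (in Python) of the “number of witnesses” rubric alone; Poher published it simply as a table of values.

def witnesses_value(n_witnesses=None):
    """Poher's 0-5 value for the 'number of witnesses' rubric, as listed above."""
    if n_witnesses is None:
        return 0   # number of witnesses unknown
    if n_witnesses == 1:
        return 1
    if n_witnesses == 2:
        return 2
    if n_witnesses <= 9:
        return 3
    if n_witnesses <= 100:
        return 4
    return 5       # more than 100 witnesses

print([witnesses_value(n) for n in (1, 2, 5, 40, 250)])  # [1, 2, 3, 4, 5]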


In relation to “age of main witness”:

0 if age is unknown.
1 under 13 years.
2 not used.
3 from 14 to 20 years.
4 larger than 60 years.
5 from 21 to 59 years.



In relation to "socio-professional code of main witness" :

0 if unknown.
1 for schoolboys, shepherds ...
2 for workers, farmers ...
3 for technicians, policemen, qualified army personnel ...
4 for engineers, officers ..
5 for pilots, researchers, astronomers ...


In relation to the method of observation :

0 without information, or naked eye observation with no indication of distance.
1 naked eye with more than 3 km distance.
2 naked eye with 1 to 3 km distance, or from an airplane with more than 1 km distance.
3 for a radar observation, or naked eye with 200 to 1000 m distance.
4 for a binocular observation, or binoculars + radar, or from an airplane at less than 1000 m distance, or naked eye with less than 150 m distance.
5 for an observation with a telescope, or with a photography, or with binoculars + photo, or naked eye with less than 50 m distance.


Strangeness criterion of an observation :

Poher’s system involved assigning a Strangeness criterion as follows:

0 = not at all strange, or insufficient information.
1 = slightly strange, object is a dot moving in straight line and constant angular speed.
2 = fairly strange, object of a small angular dimension but abnormal trajectory.
3 = strange, complex trajectory, landing or quasi landing without traces, sudden disappearance in flight.
4 = very strange, landing with traces.
5 = particularly strange, landing with observation of occupants.


While Poher’s system superficially looks very detailed and scientific, the actual basis for the relative values is far from clear.  For example, in relation to credibility, why should a UFO report from a pilot have more than twice the value of a report from a farmer – particularly in the light of the data considered in PART 16 : Qualitative criteria: Credible witnesses?

 

Does Poher’s system merely result in spurious accuracy and the codification of biases?  The answers are far from clear.

 

 

Suggested refinements by Jenny Randles

 

Jenny Randles wrote the book “UFO Study” (1981) as a “handbook for enthusiasts”.  In that book, she suggested that case reports written by UFO investigators after an investigation is concluded should include an evaluation of the strangeness and probability rating of the case  (see Footnote 20.11).

 

She suggested that the case report “could profitably include … your first-hand opinion on the strangeness and credibility of a story” since “you are the person who has had direct contact with the witnesses”, referring to J Allen Hynek as the first to propose the importance of this.

 

Jenny Randles commented that “in truth this means a subjective assessment of the witness and the events by yourself, but then you are in the best position to make such an assessment.  It is suggested that you bear in mind a scale from 0 to 9 for both strangeness and probability.  0 would represent a report which was totally without credibility (especially so far as the witnesses were concerned) or one where there were _no_ strange aspects. 9, on the other hand, would apply to cases which are completely credible or without unstrange attributes.  Both of these extremes should be regarded as unobtainable guidelines, and your two-figure evaluation should fall somewhere in between”.

 

In addition to stating an “S-P rating”, Jenny Randles suggested that UFO investigation case reports should have a title page containing “any codified information about the case that will transfer rapid data”. The relevant codes, devised by Jenny Randles and Bernard Delair “for a joint research catalogue” (which I do not recall seeing discussed subsequently), include, for example, “CE3” for a Close Encounter of the Third Kind, “L” for “Landing”, and “EM” for “Electromagnetic Interference”.  Of particular significance in terms of refinement of Hynek’s Strangeness-Probability Ratings is the suggestion that the title page should also include “the Investigation Level”.

 

The “Investigation Level” of a sighting was a proposal that had previously been made by Jenny Randles and Peter Warrington in their book “UFOs: A British Viewpoint” (1979) - see Footnote 20.12 (and had been discussed by Jenny Randles in an article in Flying Saucer Review in 1978 - see Footnote 20.16).  Jenny Randles and Peter Warrington commented that: “Almost any sighting of an aerial phenomenon will find a publisher who will print the report without reference to a logical explanation.  There is obviously a need for some kind of estimation of the reliability of a published report. This needs to be agreed by world UFO organizations.  Every report published should be codified in some way to indicate the amount of investigation which has gone into it”.  They noted the absence of such a system at that time and proposed the following “Investigation Levels”:

 

Level A: A report which has received on-site investigation by experienced investigators.

 

Level B: An interview with the witness or witnesses was conducted by investigators but there was no follow-through investigation into the case.

 

Level C: The witness has simply completed a standard UFO report form of some type. No interviews have been conducted.

 

Level D:  The report consists solely of some form of written communication from the witness.

 

Level E: The report is based on information received second hand (such as a newspaper account).  There has been no follow up investigation at all.
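The codified title-page information suggested by Randles (an “S-P rating”, classification codes such as “CE3”, “L” and “EM”, and an Investigation Level from A to E) lends itself to a very small data structure.  The Python sketch below is my own illustration of that suggestion, not a format published by Randles, Warrington or BUFORA; the example case is hypothetical.

from dataclasses import dataclass
from enum import Enum

class InvestigationLevel(Enum):
    """The A-E Investigation Levels proposed by Randles and Warrington, summarised above."""
    A = "On-site investigation by experienced investigators"
    B = "Witness interviewed, but no follow-through investigation"
    C = "Standard UFO report form completed only"
    D = "Written communication from the witness only"
    E = "Second-hand information only (e.g. a newspaper account)"

@dataclass
class CaseSummary:
    """A hypothetical 'title page' record of the kind Randles suggests: codes, an S-P rating and an Investigation Level."""
    title: str
    codes: list[str]          # e.g. ["CE3", "L", "EM"]
    strangeness: int          # 0-9 on the scale suggested by Randles
    probability: int          # 0-9 on the scale suggested by Randles
    investigation_level: InvestigationLevel

# Hypothetical example entry:
example = CaseSummary("Hypothetical landing case", ["CE2", "L", "EM"], 6, 4, InvestigationLevel.B)
print(example.title, example.codes, f"S-P {example.strangeness}-{example.probability}", example.investigation_level.name)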

 

The article by Jenny Randles in Flying Saucer Review in 1978 (see Footnote 20.16) indicated that she and Bernard Delair of CONTACT considered that such codified data might be of “great value if regularly published in UFO periodicals”. I am not aware of any publication subsequently adopting such a practice, including the journal in which that article was published (i.e. Flying Saucer Review).

Robert Moore has suggested that the proposals made by Jenny Randles and Peter Warrington were "far superior" to Hynek's proposals, "but never widely adopted, sadly" (see Footnote 20.24).

 

Suggestions made by David Saunders

 

In 1981, Vicente-Juan Ballester-Olmos published a book entitled “Los OVNIS y la Ciencia” (UFOs and Science) with physicist Miguel Guasp. Chapter V was called “Methodology and Organization” and it started with a section entitled “Standards in the Evaluation of UFO Reports”, where they reviewed the various systems to that date and proposed their own system, the one later adopted by MUFON (see PART 23:  Quantitative criteria : Ballester/MUFON index).  In that book (at page 122, 3rd paragraph, and page 123, 1st paragraph), they refer to an article entitled “How Colorado classes UFOs” (see Footnote 20.21).  It appears from the summary provided by Vicente-Juan Ballester-Olmos that the article described a matrix created by Dr. David Saunders to preliminarily classify UFO sightings.

 

The relevant matrix was published on page 124 of the book by Vicente-Juan Ballester-Olmos, with a caption stating “Matrix used by the Colorado University’s UFO Commission for the classification of cases, based on their potential value.”

 

Basically, the matrix is a table with columns and rows.

 

The columns have a label indicating that they are of “increasing strangeness” from left to right.  From left to right, those columns are:

1. Sighting

2. Recurrence

3. Tracking

4. Motions

5. Formations

6. Day-Night

7. Clouding

8. Landing

9. Rendezvous

10. Chasing

11. Pacing

12. Maneuvering

13. Curiosity

14. Responsivity

     

The rows have a label indicating that they are of “increasing objectivity” from the top downwards.  From the top downwards, those rows are:

1. Prediction

2. Communication

3. Single Witness

4. Exceptional Witness

5. Multiple Witness

6. Independent Witnesses

7. Theodolite or telescope

8. Polarizer or grating

9. Animal Reactions

10. Electromagnetic effects

11. Radar

12. Isolated Pictures

13. Still sequences

14. Movies

15. Advanced Instrumentation

16. Radioactivity or burn

17. Garbage

18. Fragments

19. Wreckage
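To make the structure of the matrix concrete, the two axes can be treated as ordered lists and a case positioned by the most “strange” column and the most “objective” row that apply to it.  That positional reading, and the Python sketch below, are my own interpretation; the article cited at Footnote 20.21 may describe a different way of using the matrix.

# The two axes of the Colorado matrix as published (order reproduced from the lists above).
STRANGENESS_COLUMNS = ["Sighting", "Recurrence", "Tracking", "Motions", "Formations",
                       "Day-Night", "Clouding", "Landing", "Rendezvous", "Chasing",
                       "Pacing", "Maneuvering", "Curiosity", "Responsivity"]
OBJECTIVITY_ROWS = ["Prediction", "Communication", "Single Witness", "Exceptional Witness",
                    "Multiple Witness", "Independent Witnesses", "Theodolite or telescope",
                    "Polarizer or grating", "Animal Reactions", "Electromagnetic effects",
                    "Radar", "Isolated Pictures", "Still sequences", "Movies",
                    "Advanced Instrumentation", "Radioactivity or burn", "Garbage",
                    "Fragments", "Wreckage"]

def matrix_position(case_columns, case_rows):
    """Return the highest-ranked (most strange, most objective) cell that a case reaches."""
    col = max(STRANGENESS_COLUMNS.index(c) for c in case_columns)
    row = max(OBJECTIVITY_ROWS.index(r) for r in case_rows)
    return STRANGENESS_COLUMNS[col], OBJECTIVITY_ROWS[row]

# Hypothetical case: a landing with pacing behaviour, multiple witnesses and radar confirmation.
print(matrix_position(["Landing", "Pacing"], ["Multiple Witness", "Radar"]))  # ('Pacing', 'Radar')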

       

There are obvious similarities between the ratings of “strangeness” and “objectivity” proposed by David Saunders (presumably in the period between 1966 and 1968) and Hynek’s proposal of Strangeness and Probability Ratings.

 

It is not clear to me whether these proposals were devised independently or jointly, or whether one influenced the thinking of the other.  This may be made clearer by the content of the article entitled “How Colorado classes UFOs” (see Footnote 20.21) mentioned above.

       

       

       

       

Actual applications of Hynek’s Strangeness and Probability Ratings

 

There has been a considerable amount of discussion of Hynek’s Strangeness and Probability Ratings. Robert Moore has referred to these ratings as "iconic and widespread" (see Footnote 20.24).

 

However, there has in fact been very limited application of them.

 

The reasons for this limited application are unclear.

       

The Hynek Strangeness and Probability Ratings do not appear to be used in the huge UFO database (UFOCAT) sold by the organisation Hynek founded, CUFOS.  UFOCAT entries do, however, include numbers in relation to Vallee’s SVP criteria discussed in PART 21: Quantitative criteria : Vallee’s SVP ratings.  I have contacted the researcher who has managed the UFOCAT project since about 1990 (Donald Johnson) and understand from him that the Hynek Strangeness and Probability Ratings were never “formally adopted” by UFOCAT. Before 1990, and after David Saunders and Fred Merritt stopped working on UFOCAT, it went through a period when it was “out of favour with Hynek, presumably because of Willy Smith's efforts to invent UNICAT as a replacement”. The UFOCAT record layout therefore “remained stagnant and no new fields were added” until Donald Johnson started work on UFOCAT around 1990.  He began working on re-creating UFOCAT “by first adding many pages of case coding that had been done by CUFOS staff in the early 1980s” and noticed that “no one had attempted to add the strangeness and probability ratings” and so “that probably influenced me to be as expedient as possible and not add the Hynek ratings when I expanded the number of fields”.

 

From an article published in the MUFON Journal in 1976, it appears that at least some of those who worked on UFOCAT had envisaged that Hynek's Strangeness and Probability Ratings would be added (see Footnote 20.22). That article indicates that at that time columns 133-136 of UFOCAT's records related to "Credibility (to be computed)" while columns 137-140 related to "Strangeness (to be computed)". Another MUFON publication a couple of years later contains some analysis of various fields within the UFOCAT records and notes that the columns above column 120 (i.e. including the columns designated for Strangeness and Credibility Ratings) "are devoted to detail coding, and are not in active use at this time" (see Footnote 20.23).
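For what it is worth, the 1976 record layout referred to above implies that the two ratings were intended to occupy fixed column positions within each UFOCAT record.  The sketch below (my own illustration, in Python) shows how such columns might be read from a fixed-width record; everything about the layout other than the column numbers quoted above is an assumption on my part.

def read_ufocat_ratings(record: str):
    """Extract the 'Credibility' (columns 133-136) and 'Strangeness' (columns 137-140) fields,
    as designated in the 1976 UFOCAT record layout ('to be computed' in both cases).
    Column numbers are 1-based, so the Python slices start one position earlier."""
    credibility = record[132:136].strip() or None
    strangeness = record[136:140].strip() or None
    return credibility, strangeness

# Hypothetical 140-character record with both fields left blank, as the text above suggests they were.
print(read_ufocat_ratings(" " * 140))  # (None, None)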

       

Donald Johnson’s view, having managed the largest existing UFO database for about two decades, is that “applying probability ratings is not that difficult, but I have never seen a written codification of the process to apply the strangeness ratings”.  He decided, “without sufficient guidance and because I could not go back and ask Hynek about it, as he had died in 1985”, not to include either of these ratings.

       

However, Larry Hatch’s *U* database (the second largest UFO database, after UFOCAT, of which I am aware) does include Strangeness and Probability Ratings, but that database can only be accessed on modern computers if considerable effort is made, since Larry Hatch developed his own database software for use under MS-DOS.  Few computer systems purchased after about 2002 will have an operating system compatible with the software developed by Larry Hatch.  (It is currently still possible to run a "Virtual PC" on a modern computer that simulates an older computer environment capable of running the *U* database, but this involves several steps - see Footnote 20.25. The necessary backwards compatibility is now reaching its limit, with that method not working on the very latest incarnation of the Windows operating system, i.e. Windows 7.)  Donald Johnson has commented that while Larry Hatch did make the effort to add Strangeness and Probability Ratings, Larry Hatch “has never really defined and operationalized how he would assign these codes” so Donald Johnson “hesitated to follow suit” (see Footnote 20.19).

       

Another database (Willy Smith's UNICAT) also included Strangeness Ratings.  According to UFO researcher Jan Aldrich, it included "strangeness values assigned by Hynek" (see Footnote 20.26).  Unfortunately, Willy Smith died in 2006 and he, according to Jan Aldrich, used "a computer program which is obsolete".  Jan Aldrich is in possession of paper copies of the content of UNICAT, but this consists of "500+" records, with each record having a separate page.  I am not aware of any plan to make those records available to other researchers and this would, presumably, be a time-consuming task.

       

(I do not know how many hundreds of hours were spent by Larry Hatch and Willy Smith creating and maintaining their respective databases, but I note in passing that the above couple of paragraphs should provide some sobering facts for the next generation of UFO researchers who are currently planning and creating new UFO databases).

       

As noted above, Jenny Randles appears, at least to some degree, to have adopted Hynek’s rating scheme.  I have not seen the NUFON database she mentioned.  It seems that BUFORA and/or its investigators may also have adopted Hynek’s rating scheme.  I have been told by one of BUFORA’s ex-Chairmen (Tony Eccles) that BUFORA “adopted systems developed by Hynek (and Vallee)” (see Footnote 20.20).  I am not sure what form that adoption took – neither Hynek’s Strangeness/Probability ratings nor Vallee’s SVP ratings appear to be included in the standard BUFORA case investigation report forms (see PART 22: Quantitative criteria : BUFORA’s case priority).

       

Jim Speiser has written an article (see Footnote 20.09) which refers to “each UFO Sighting Report in the CUFON database” having a rating at the bottom in the form S#/P#.  As at June 2010, the website of the Computer UFO Network (www.cufon.org) focuses on UFO documents rather than sightings. The few sightings it addresses do not appear to have any ratings at the bottom, whether in the format S#/P# or otherwise.  Some material apparently produced by “CUFON Computer UFO Network” in 1987, i.e. during the same year as Speiser’s article, available online (see Footnote 20.10) does contain a fairly small number of UFO reports which include such ratings.  It is not clear how long the system was persisted with, nor why it appears to have been abandoned.

       

I have been informed by Fran Ridge that he believes that the Berliner number system (and, by implication, presumably also Hynek’s Strangeness and Probability Ratings) was used by Willy Smith and also by MUFON in their evaluations of submitted cases.  However, I have not been able to confirm this.  In relation to the latter, I note that MUFON appears, since 1992, to have enforced the quantitative criteria considered in PART 23: Quantitative criteria : Ballester/MUFON index.

       

       

      FOOTNOTES

       

      [20.01] Donald Menzel, “UFO’s: A Scientific Debate” (1972) (edited by Carl Sagan and Thornton Page) at pages 136-137 (in Chapter 6) of the Barnes and Noble hardback edition (with the same page numbering in the Norton paperback edition).

       

      [20.02] J Allen Hynek, “The UFO Experience” (1972) at pages 24-25 (in Chapter 4) of the Henry Regnery hardback edition (with same page numbering in the Abelard-Schuman hardback edition) at page 28 of the various Ballantine paperback editions, at page 42 of the Corgi paperback edition.

       

      [20.03] J Allen Hynek, “The UFO Experience” (1972) at page 25 (in Chapter 4) of the Henry Regnery hardback edition (with same page numbering in the Abelard-Schuman hardback edition) at page 29 of the various Ballantine paperback editions, at page 43 of the Corgi paperback edition.

       

      [20.04] J Allen Hynek, “UFO’s: A Scientific Debate” (1972) (edited by Carl Sagan and Thornton Page) at pages 41-42 (in Chapter 4) of the Barnes and Noble hardback edition (with the same page numbering in the Norton paperback edition).

       

      [20.05] J Allen Hynek, “The UFO Experience” (1972) at pages 25-26 (in Chapter 4) of the Henry Regnery hardback edition (with same page numbering in the Abelard-Schuman hardback edition) at page 29 of the various Ballantine paperback editions, at page 43 of the Corgi paperback edition.

       

      [20.06] Don Berliner article entitled “Sighting Coefficient” in MUFON Journal, April 1987, Issue 228, pages 14 and 17.

       

      [20.07] J Allen Hynek, “The UFO Experience” (1972) pages 235-240 of the Henry Regnery hardback edition (with same page numbering in the Abelard-Schuman hardback edition), pages 265-270 of the various Ballantine paperback editions, at pages 289-294 of the Corgi paperback edition.

       

      [20.08] Jim Speiser article entitled “Paranet Classification” in MUFON UFO Journal, June 1987, Issue 230, pages 15-16

       

      [20.09] Jim Speiser article, “The Hynek Rating System”, undated.  Available online as at 1 June 2010 at:

      http://www.skepticfiles.org/ufo1/hynekufo.htm

       

      [20.10] CUFON article, “Report #: 220”, 24 January 1987.  Available online as at 1 June 2010 at:

      http://www.skepticfiles.org/mys5/ufo-4-27.htm

       

      [20.11] Jenny Randles, “UFO Study” (1981) at pages 122-124 (in Chapter 9) of the Hale hardback edition.

       

      [20.12] Jenny Randles and Peter Warrington, “UFOs: A British Viewpoint” (1979) at pages 167-168 (in Chapter 9) of the Hale hardback edition.

       

      [20.13] Jenny Randles and Peter Warrington, “UFOs: A British Viewpoint” (1979) at pages 54-55 (in Chapter 3) of the Hale hardback edition.

       

      [20.14] Jenny Randles and Peter Warrington, “UFOs: A British Viewpoint” (1979) at pages 56 (in Chapter 3) of the Hale hardback edition.

       

      [20.15] Jenny Randles and Peter Warrington, “Science and the UFOs” (1985) at page 136 (in Chapter 10) of the Hale hardback edition.

       

      [20.16] Jenny Randles article entitled “Publishing of UFO Data” in FSR Vol. 24 No.2, 1978 at pages 22-23.

       

      [20.17] J Allen Hynek, “The UFO Experience” (1972) at page 25 onwards (in Chapter 4) of the Henry Regnery hardback edition (with same page numbering in the Abelard-Schuman hardback edition) at page 29 onwards of the various Ballantine paperback editions, at page 43 onwards of the Corgi paperback edition.

       

      [20.18] Claude Poher, “Etude statistique des rapports d’observations du phénomène O.V.N.I. Etude menée en 1971, complétée en 1976”, pages 85-92. Available on-line on the GEIPAN site:

      http://www.cnes-geipan.fr/documents/stat_poher_71.pdf

       

      [20.19] Email from Donald Johnson to Isaac Koi, 21 June 2010.

       

      [20.20] Email from Tony Eccles to Isaac Koi, 20 June 2010.

       

      [20.21] NOT YET OBTAINED : Alfred J. Cote Jr, “How Colorado classes UFOs”, INDUSTRIAL RESEARCH, August 1968, 27-28.

       

      [20.22] Article entitled "UFOCAT - Tool for UFO Research" in MUFON Journal, Number 106, September 1976, pages 14-15. No author indicated, so presumably written by the editor (Dennis William Houck).  Concludes by stating that inquiries should be addressed to Dr David Saunders.

       

      [20.23] Fred Merritt, "UFOCAT and a friend with two new ideas", MUFON Symposium Proceedings 1980, pages 30-52.

       

      [20.24] Email from Robert Moore to Isaac Koi, 24 June 2010.

       

      [20.25]  After a flood of suggestions for different approaches (particularly in a discussion with members of the AboveTopSecret.com discussion forum), I am pleased to report that I now have the full version of Larry Hatch's database working flawlessly on a fairly modern computer using Windows Vista Business.

      I've spent a bit of time on numerous dead-ends (some arising from trying to use my main computer, which has Windows 7 and is already incompatible with the approach outlined below).

      However, the approach that worked was:

      (1) Installing Microsoft's Virtual PC 2007 from a page on Microsoft's website onto a laptop I own which still has Windows Vista Business on it.

      (2) Adding Windows 95 within that virtual PC, using the instructions on a page on the Youtube website.

      (3) Saving the old installed files from the floppy disk (which I've passed on from old machine to new machine several times, without having had a floppy drive for a few computer generations...) into an .ISO file using MiniDVDSoft's free ISO creation software.  Running Windows 95 within Virtual PC 2007, then capturing the .ISO image of the relevant files and copying them into a new directory ("UFO") on the virtual C: hard drive.

      (6) Running the u.exe file from that new UFO directory

       

      [20.26] Email from Jan Aldrich to Isaac Koi, 21 June 2010.
