“Best UFO Cases” by Isaac Koi

PART 19:  Quantitative criteria: Introduction

 

Previous sections of this article have examined some of the qualitative criteria that have been proposed for selecting the best UFO cases.

 

Although rarely referred to, some attempts have been made to go further and make _quantitative_ assessments of aspects of UFO reports to assist in selecting the best cases and/or the cases which should be given priority by investigators/researchers.

 

The best known such attempts are also the most straightforward (e.g. by J Allen Hynek and Jacques Vallee). 

 

Lesser-known attempts have been more ambitious, either seeking to refine the systems proposed by Hynek and Vallee or creating rather complicated new systems for quantifying the significance and/or reliability of UFO reports.

 

Jacques Vallee has commented (in his book Confrontations) that assigning credibility or weight to an observation is an integral part of any intelligence evaluation task, but “UFO researchers have rarely bothered to apply it in support of their own work” (see Footnote 19.01).

 

Several proposals for quantitative criteria are discussed in subsequent Parts of this article.

 


PART 2:    Challenges to produce lists of top cases

 

If UFO proponents wish to persuade scientists to examine the evidence for the alleged objective reality of UFOs, then it is not unreasonable to expect those UFO proponents to make serious efforts to identify the material which the scientists should focus upon.

 

Unparticularised suggestions to read the “UFO literature” or “witness reports” are simply poor advocacy, given the sheer mass of relevant material and the variability of its quality.

 

Scientists and skeptics are only human. They will keep reading only for as long as the material holds their interest.  If (as many UFO proponents claim) they wish to encourage serious study of UFO reports by scientists, why not refer them to the best material to get their attention?

 

One online debate about UFOs and aliens began with one individual asserting that it is “obviously true they are out there". When challenged to state the facts in support of his statement he responded in the following way: "try googling UFO reports and sightings etc....and any decent site that comes up on google or any other search engine for that matter will be my facts" (see Footnote 2.01).

 

Unsurprisingly, the skeptics involved in that discussion did not find this suggestion very helpful or persuasive.

 

It is not merely those new to ufology who make such statements to skeptics. When asked to provide evidence for UFOs, the astronomer and famous ufologist J Allen Hynek would respond sarcastically “Where do you want the truck to stop?” (see Footnote 2.02). During an online debate, skeptic Andy Roberts asked ufologist Jerry Clark (author of the leading encyclopedia on UFOs) what evidence there was of “non-mundane UFO origin”.  Jerry Clark responded: “Read the UFO literature, guy, if it's not too much trouble.  The answer’s there” (see Footnote 2.03).

 

Any scientist who bothers to respond to a vague suggestion to read the UFO literature by visiting his local bookstore in search of UFO books could be discouraged from pursuing the matter further.  Looking for UFO books in a bookstore, a scientist may become embarrassed by the fact that he is lurking in a section entitled “Esoteric” or “Occult”, in which the UFO books are mixed with books on spell-casting, ghosts and prophecies. If he randomly purchases a few UFO books, then he is unlikely to be impressed.  There is probably a consensus among most serious UFO researchers that many of the mass of books on UFOs are an embarrassment to ufology.  For example, J Allen Hynek has written that books about UFOs “regale the reader with one UFO story after another, each more spectacular than the other, but little space is devoted to documentation and evaluation.  What were the full circumstances surrounding the reported event? How reliable and how consistent were the reporters (all too often it is the lone reporter) of the event? And how were the UFO accounts selected?  Most often one finds random accounts, disjointed and told in journalese” (see Footnote 2.04).

 

Comments on the UFO literature and recommendations for reading are worth a separate article (and I am presently drafting such an article). For present purposes it suffices to say that that body of literature is considerable and there is only a limited consensus regarding recommended reading.

 

Sometimes skeptics are lucky enough to be referred to a specific well-researched book with references to further reading. Typical examples for such recommendations are Jerry Clark’s “UFO Encyclopedia” and Richard Hall’s “The UFO Evidence”.   However, if a well-intentioned skeptic did actually follow a recommendation to read, say, Jerry Clark’s “UFO Encyclopedia”, then he may not bother going beyond the entries beginning with “A”.  Those entries include (but are, of course, not limited to):

(a) Adamski, George

(b) Aetherius Society

(c) Allende Letters

(d) Angelucci, Orfeo Matthew

(e) Ashtar

 

Jerry Clark’s “UFO Encyclopedia”, as with virtually all other UFO books, was not written to present the best evidence for the objective reality of UFOs.  The cases and individuals discussed include many the author considers to have been significant in the history of ufology for various reasons, even if those cases have been explained and relevant individuals have been discredited.  Indeed, Jerry Clark has himself commented that his Encyclopedia “features many solved cases”  (see Footnote 2.05).

 

This is not a criticism of the content of such books.  Ufology has much to gain from a consideration of UFO reports arising from stimuli which were subsequently identified. There are many lessons to be learnt from such reports.  Indeed, such material is probably under-utilised by most UFO researchers.  However, the fact that most UFO books are not limited to the best evidence means that scientists referred to such books will be spending some, if not most, of their time on material which is not the most persuasive evidence of the objective reality of UFOs.

 

It is not only UFO sceptics that have complained about the tendency of ufologists to refer to large books about UFOs.  Ufologist Brad Sparks has commented as follows: “Typically the UFO proponent in desperation will cite some big 500-page or 1,000-page tome and say "All the UFO proof is in there! Go read it!" Whereas in fact the huge tomes are hopeless hodge-podges of bad cases, good cases, mediocre cases, erroneous cases, all intermixed according to some order (maybe alphabetical or chronological) that has nothing whatsoever to do with selecting best cases according to any scientific or quasi- scientific criteria ("criteria" is plural by the way, and "criterion" is singular). Indeed those few books were not really written for the purpose of presenting the best scientific case for the UFO to scientists. They were written for other worthy purposes, but let's not kid ourselves, though, they were not specially designed to state the case to scientists”  (see Footnote 2.06).

 

Similarly, Brad Sparks has also commented that, “Busy scientists …. [are] not going to read through huge books, multiple books, looking for something they don't even think is there, with not a clue as to what to look for” (see Footnote 2.07).

 


 

PART 21:  Quantitative criteria : Vallee’s SVP ratings

 

From the 1960s onwards, Jacques Vallee has written several discussions regarding the classification and codification of UFO reports.

 

During 1963, he published one of the first articles on classifying UFO reports into various types (see Footnote 21.12).  That proposal included suggestions for the codification of certain indications of interest (e.g. the number of witnesses), but no codes evaluating the relative credibility of the reports.  (Even that basic proposal was met with some concern that it might give a misleading impression of the similarity between cases within the various categories proposed by Vallee – see Footnote 21.16).

 

A couple of years later, in his first book - “Anatomy of a Phenomenon : UFOs in Space” (1965) – Vallee referred to the assignment of a “reliability index” as part of the first step of an analysis (see Footnote 21.14). That concept was explained in an article he wrote later that year (see Footnote 21.13).  In that article, Vallee commented that all writers on the subject of UFOs agree on one point: “many reports refer to misinterpreted conventional objects” and asked “But exactly how many reports are significant? How do you go about finding them?”.  He commented (in an observation which remains almost as valid several decades later, as can be seen from the other Parts of this article) that: “Yet very little information is found in the literature on exactly how to select your sample. It seems that every UFO student uses his own judgment to make the choice … Most UFO studies thus generate confusion instead of clarification”.   He complained that no “reliability scale” was given in statistics relating to UFOs.  He expressed regret that UFO groups and magazines did not have “a set of simple tests ready for use when a report comes in, to weigh its degree of significance”.  He set out a flowchart as a “guide for the identification of obvious mistakes which have no place in a catalogue of UFO sightings”. However, as noted in the conclusion of that article, the proposed procedure “leaves the final estimate of the report to the investigator’s judgment”.

 

In his subsequent book “Challenge to Science – The UFO Enigma” (1966), Jacques Vallee included a detailed appendix entitled “An Analysis of UFO Activity” setting out proposed classifications and codifications of UFO reports.  That appendix included a section entitled “Reliability (Weight) of the Sightings” (see Footnote 21.15). Vallee explained that the “weight” to be given to a sighting within his system “is not only a measure of the reliability of the witness, it seeks to determine to what degree each report is important in a study of the phenomenon”, setting out the following categories represented by different characters:

 

“*” = “sightings that must be accounted for in any global theory of the phenomenon, either because of the strong evidence obtained or because of the large number or scientific competence of the witnesses (assuming favourable observing conditions)”.

 

“+” =  “significant cases where we feel that sincerity of the witnesses cannot be questioned, and where the reported phenomenon is representative of the problem under study”.

 

“=” = “doubtful cases where the report can be interpreted, on the basis of the data presented, by a borderline conventional phenomenon”

 

“-” = “nothing to do with the UFO phenomenon, but have to be catalogued because of the effect they have had on the general rumour, at least on a local scale”.
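Purely as an illustration of how Vallee’s 1966 “weight” symbols might be represented in a database, the short Python dictionary below maps each symbol to a condensed form of the descriptions quoted above. The dictionary and its wording are my own sketch, not anything Vallee published.

```python
# Illustrative mapping of Vallee's 1966 "weight" symbols to condensed
# versions of his published descriptions (my own paraphrase).
SIGHTING_WEIGHTS = {
    "*": "must be accounted for in any global theory (strong evidence, "
         "or many/scientifically competent witnesses in good conditions)",
    "+": "significant case; sincerity of the witnesses not in question",
    "=": "doubtful case; interpretable as a borderline conventional phenomenon",
    "-": "nothing to do with the UFO phenomenon; catalogued only for its "
         "effect on the general rumour",
}

# Look up the weight assigned to a "significant" case.
print(SIGHTING_WEIGHTS["+"])
```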

 

 

Since then, Jacques Vallee has proposed a system of assigning three digits to indicate the credibility of a report.

 

Jacques Vallee’s “SVP” system involved assigning a value from zero to four for “S” (reliability of the Source), “V” (site Visit) and “P” (probability of natural explanations).

 

He has discussed this proposal in several of his books, including in “Confrontations” (1990) and “Revelations” (1991).

 

While this system is not referred to in UFO books as frequently as Hynek’s Strangeness and Probability Ratings (see PART 20), it appears to me that Vallee’s system may actually have been implemented to a greater extent by several UFO databases and groups than Hynek’s better known proposal.

 

Vallee has made strident comments about the failure of other researchers to implement such a system.  For example, he has made the following comments in the books referred to above:

 

  1. “No classification system is complete without a way of assigning credibility or ‘weight’ to an observation.  While such a procedure is an integral part of any intelligence evaluation task, UFO researchers have rarely bothered to apply it in support of their own work” (see Footnote 21.02).

 

  2. “In the absence of such a rating [of the credibility of UFO reports], the UFO databases and catalogues that exist today are little more than large buckets filled with random rumours” (see Footnote 21.03).

 

Vallee has referred to a notable exception to his criticisms, i.e. the “quality index” proposed by Spanish researchers Vicente Juan Ballester-Olmos and Guasp, “but it is so detailed that I have found it difficult to apply in practice” (see Footnote 21.02).  Vallee has stressed that a system needs to be simple enough to be applied quickly, and with enough mnemonic value to ensure it does not require constant reference to a thick codebook.  The relevant “quality index” proposed by Ballester-Olmos and Guasp is discussed in PART 23.

 

Vallee’s proposal of a “very simple system (‘the SVP rating’) to indicate the credibility of reports” relies on “only three questions” (see Footnote 21.01), i.e.:

 

1. “Do we know the source of the report?”

 

2. “Was a site visit made?”

 

3. “And what alternative explanations exist for the event?”

 

Each of the three digits assigned for S, V and P has a value from zero to four, as follows:

 

The first digit, “S” indicates the reliability of the source:

 

0 = unknown source, or an unreliable source

1 = a source of unknown reliability

2 = credible source, second hand

3 = credible source, first hand

4 = firsthand personal interview with the witness, by a source of proven reliability

 

 

The second digit, “V” indicates whether or not a site visit took place:

 

0 = no site visit, or the answer is unknown

1 = visit by a casual person unfamiliar with such phenomena

2 = site visit by a person familiar with the range of phenomena

3 = site visit by a reliable investigator with some experience

4 = site visit by a skilled analyst

 

 

The third digit, “P” indicates the probability of natural explanations:

 

0 = data is consistent with one or more natural causes

1 = natural explanation only requires slight alteration of the data

2 = natural explanation would demand gross alteration of one parameter

3 = natural explanation demands gross alteration of several parameters

4 = no natural explanation is possible, given the evidence

 

Thus, a rating of 222 or better (meaning that each of the three digits is 2 or higher) is supposed to indicate events reported through a reliable source, in which a site visit has been made, and where a natural explanation would require the gross alteration of at least one parameter.
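To make the mechanics of the rating concrete, here is a minimal Python sketch of an SVP rating as a three-digit structure, including the “222 or better” filter just described. The class, its field names and the helper methods are my own illustration; Vallee published the rating scheme, not any code.

```python
# Minimal sketch (not Vallee's code) of an SVP rating: three digits,
# each 0-4, with a helper for the "222 or better" filter.
from dataclasses import dataclass


@dataclass
class SVPRating:
    source: int       # S: reliability of the source (0-4)
    visit: int        # V: quality of any site visit (0-4)
    probability: int  # P: probability of natural explanations (0-4)

    def __post_init__(self) -> None:
        for value in (self.source, self.visit, self.probability):
            if not 0 <= value <= 4:
                raise ValueError("each SVP digit must be between 0 and 4")

    def as_code(self) -> str:
        """Render the rating as the usual three-digit string, e.g. '222'."""
        return f"{self.source}{self.visit}{self.probability}"

    def meets_222(self) -> bool:
        """True when every digit is 2 or higher ('222 or better')."""
        return min(self.source, self.visit, self.probability) >= 2


# Example: a credible second-hand report, a site visit by a person familiar
# with the range of phenomena, and a natural explanation that would demand
# gross alteration of one parameter.
rating = SVPRating(source=2, visit=2, probability=2)
print(rating.as_code(), rating.meets_222())  # 222 True
```

Note that the filter takes the minimum of the three digits, so a report with one weak digit (e.g. 404) fails even though its other digits are maximal.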

 

 

Actual applications of Vallee’s SVP criteria

 

In an online article dated April 2007 on his website (see Footnote 21.08), Jacques Vallee has updated the discussion of his proposed classifications and SVP criteria that appears in the books referred to above.

 

In that online article, Vallee states that:

 

“This system is now in use in all our catalogues. It has also been used consistently by several major external studies, notably by CUFOS in their UFOCAT catalogue, by the National Institute for Discovery Science (NIDS) in its private database, and by the French study of pilot sightings conducted by M. Dominique Weinstein in connection with the GEIPAN (Groupe d’Etudes et d’Information sur les Phénomènes Aériens Non-identifiés) in connection with the CNES in Paris”.

 

Vallee concludes that “it is becoming possible to compare statistical data in cross-indexing among several databases, a significant first step towards international cooperation in the study of this puzzling phenomenon”.

 

It is not clear from the article which catalogues are included within Vallee’s reference to “all our catalogues” which now use his classifications and SVP criteria.  The same article refers to four databases developed by Vallee, so presumably it includes those databases.  They do not appear to be available on, or via, Vallee’s website.  I have made enquiries with various researchers but they were unable to enlighten me as to the nature or content of those databases. I have not troubled Jacques Vallee himself as yet.

 

As for the several major external studies that have, according to Vallee, “used consistently” the SVP criteria:

 

 

(1) UFOCAT:

 

The biggest catalogue mentioned by Vallee appears to be CUFOS’s UFOCAT.

 

The “UFOCAT 2002 User’s Guide” states that Vallee’s SVP system “appears to have the advantage that it is simple enough to be applied quickly with enough mnemonic value that it does not require constant reference to a reference manual” (see Footnote 21.10).

 

However, the SVP rating does not appear in relation to most of the records within UFOCAT.  It seems to be present for only approximately 2,900 out of more than 65,000 sightings covered in the 2004 edition of the UFOCAT database.

 

I note that over 10 per cent of the rated sightings have the maximum SVP value, i.e. 444.  On the other hand, there are only two records rated 000 (both in relation to a hoax).  Given the number of records within UFOCAT labelled hoaxes (and there are quite a few, which should provide a very useful resource to anyone looking into hoaxed reports) that are not assigned any SVP rating, it may well be that there is a selection bias, in that the researchers that coded entries tended only to bother including SVP ratings for the more credible reports.

 

I have yet to locate any analysis of those SVP records. The researcher who has managed the UFOCAT project since about 1990 (Donald Johnson) is also unaware of any analysis of those records, commenting that there has not been any “in large part because the assignment of these SVP codes has been lopsided in favour of the superior cases”.  He commented that he principally used the codes to filter out cases when he was looking for quality cases and that “until there is a systematic assignment of a representative sample of cases they really can't be used for statistical analysis because the results would be meaningless” (see Footnote 21.17).

 

Nonetheless, the inclusion of more than 2,900 SVP ratings in UFOCAT makes this the second largest attempt to apply any quantitative criteria to assess the weight/credibility of UFO reports of which I am aware. (The largest such attempt is in Larry Hatch’s *U* database, which includes values in relation to Strangeness and Probability – as discussed in PART 20).

 

Due to the scale of that endeavour to add such ratings, I asked Donald Johnson how easy (or otherwise) the SVP criteria are to apply in practice when dealing with a large database. He responded that “The most difficult code to assign is the middle one, and the easiest is the last one. Most reports contain virtually no information about the quality of the investigation, yet I am reluctant to apply a zero code to an otherwise good case that lacks a followup investigation” (see Footnote 21.17).

 

Incidentally, although the SVP values are stated within the UFOCAT database as a single three-digit number (e.g. 444), I have found it straightforward to convert UFOCAT’s data into a Microsoft Excel spreadsheet, and within Excel it is possible to, for example, add the various digits of a number together (e.g. 4 + 4 + 4 = 12).  It would therefore be possible to create an additional column containing those totals and then rank the 2,900 rated sightings according to the total of their SVP values (see Footnote 21.11).
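The same digit-sum ranking can be sketched in a few lines of Python rather than Excel. The sample rating codes below are invented for illustration; they are not drawn from UFOCAT.

```python
# Sketch of ranking SVP codes by the sum of their digits (as described
# above for Excel). The sample codes are invented, not UFOCAT data.
def svp_total(code: str) -> int:
    """Sum the digits of an SVP code, e.g. '444' -> 12."""
    return sum(int(digit) for digit in code)


ratings = ["444", "232", "310", "422"]  # hypothetical three-digit SVP codes
ranked = sorted(ratings, key=svp_total, reverse=True)
print(ranked)  # highest combined SVP total first
```

One caveat with any such ranking: a total of 8 could equally come from 440 (no natural-explanation weighting at all) or 224, so the sum flattens the three dimensions into one and should be read accordingly.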

 

As discussed in PART 20:  Quantitative criteria : Hynek – Strangeness and Probability, it is notable that UFOCAT, despite being sold by an organisation founded by J Allen Hynek (i.e. CUFOS), does not include Hynek’s Strangeness and Probability Ratings but has instead (to some extent) adopted Vallee’s SVP ratings.

 

 

 

(2) NIDS Database

 

The private database of the National Institute for Discovery Science (NIDS) mentioned by Jacques Vallee appears to remain private.  This is unlikely to change, given that NIDS was disbanded back in about 2004. (The last substantive update to its website was in September 2004 – see Footnote 21.04).

 

However, an article was published on the NIDS website in April 2001 entitled “The NIDS UFO Database: Classification and Credibility Indices” (see Footnote 21.05) which included an analysis of the SVP values assigned by NIDS to 660 cases received over a 15-month period. That article stated that “a significantly higher level of our close encounter cases (71.7%) have high credibility according to the Vallee SVP index” than non-close encounter cases.

 

That article indicated that the numbers of cases in the database were “still very small” and referred to a hope “to add to the data in the coming months”, but no further such article appears to have been published on the NIDS website before NIDS was disbanded three years later.

 

(3) Dominique Weinstein’s study

 

I have not found a copy of the French study of pilot sightings conducted by M. Dominique Weinstein in connection with the GEIPAN (Groupe d’Etudes et d’Information sur les Phénomènes Aériens Non-identifiés) in connection with the CNES in Paris.

 

I have seen various publicly available catalogues of pilot sightings compiled by Dominique Weinstein (see Footnote 21.06 and Footnote 21.07).  However, those publicly available catalogues do not appear to include SVP ratings or contain any analysis of SVP ratings.

 

 

 

 

Joseph Randall Murphy’s “Ufology Society International”

 

Vallee’s SVP ratings also appear to have been applied (although somewhat modified) by Joseph Randall Murphy’s “Ufology Society International”, also known as “USI” (see Footnote 21.09). The “USI Confidence Rating” scheme uses (or used, since the relevant group’s website no longer appears to be available as of June 2010) the same three categories as the Vallee SVP system, plus one more for the type of memory from which a UFO report is gathered. The additional rating, the Mnemonic Rating (M), included the following:

 

0: Recalled via channeling, dream or other altered state.

1: Hypnosis assisted with minimal corroboration.

2: Hypnosis assisted with independent corroboration.

3: Conscious recall of an event more than 5 years old.

4: Clear conscious recall of a recent event.

 

When I asked J R Murphy about this additional factor by email in 2007, he kindly explained that “The USI Mnemonic Rating was developed for the purpose of providing a framework for addressing the memory state via which sighting report data is gathered. For the constructively skeptical and objective ufologist, this should be an important factor, but whether it will ever get established is another story”.  It appears that, thus far at least, Murphy’s suggested additional factor has not been applied by any other groups or databases, nor have I been able to locate any analysis of the application of this additional factor to any files held by Murphy’s “Ufology Society International”.
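For illustration, USI’s extension amounts to appending a fourth digit to Vallee’s three. The sketch below builds such a four-digit “SVPM” code; the labels follow the list quoted above, but the function name and validation are my own assumptions, not USI’s implementation.

```python
# Hypothetical sketch of USI's four-digit extension of the SVP code:
# the fourth "M" (Mnemonic) digit records the memory state from which
# the report was gathered. Labels paraphrase the USI list quoted above.
MNEMONIC_LABELS = {
    0: "channeling, dream or other altered state",
    1: "hypnosis assisted, minimal corroboration",
    2: "hypnosis assisted, independent corroboration",
    3: "conscious recall of an event more than 5 years old",
    4: "clear conscious recall of a recent event",
}


def svpm_code(s: int, v: int, p: int, m: int) -> str:
    """Build a four-digit SVPM code, validating each digit's 0-4 range."""
    for digit in (s, v, p, m):
        if not 0 <= digit <= 4:
            raise ValueError("each SVPM digit must be between 0 and 4")
    return f"{s}{v}{p}{m}"


# A first-hand report, familiar-person site visit, several grossly altered
# parameters needed for a natural explanation, clear recent recall.
print(svpm_code(3, 2, 3, 4))
```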

 

Robert Moore (a British ufologist who has edited several UFO magazines, including one published by BUFORA) has commented that Vallee's SVP rating system "is a far superior system to the Berliner scale" discussed in PART 20, commenting that Vallee's scale "very much represents the reality of Ufology as confronted by field investigators!" (see Footnote 21.18).

 

 

 

FOOTNOTES

 

[21.01] Jacques Vallee, “Revelations” (1991) at page 291 (in Appendix 2) of the Ballantine Books paperback edition.

 

[21.02] Jacques Vallee, “Confrontations” (1990) at page 218 (in the Appendix) of the Ballantine Books paperback edition.

 

[21.03] Jacques Vallee, “Revelations” (1991) at page 291 (in Appendix 2) of the Ballantine Books paperback edition.

 

[21.04] Archive of NIDS website, articles section:

http://web.archive.org/web/20070930043450/www.nidsci.org/whatsnew.php

 

[21.05] NIDS, “The NIDS UFO Database: Classification and Credibility Indices”, April 2001.

http://web.archive.org/web/20061210060109/www.nidsci.org/pdf/nids_ufo-database_0301.pdf

 

[21.06] Dominique Weinstein, “Military, Airline and Private Pilot UFO sightings from 1942 to 1996”, 1997.  Available online as at 1 June 2010 at:

http://www.project1947.com/acufoe.htm

 

[21.07] Dominique Weinstein, “Catalog of Military, Airliner, Private Pilots sightings from 1916 to 2000”, February 2001 edition.  Available online as at 1 June 2010 at:

http://www.narcap.org/reports/004/tr-4c.htm

 

[21.08] Jacques Vallee, “System of Classification and Reliability Indicators for the Analysis of the Behavior of Unidentified Aerial Phenomena”, April 2007.  Available online as at 1 June 2010 at:

http://www.jacquesvallee.net/bookdocs/classif.pdf

 

[21.09] Website of Joseph Randall Murphy’s “Ufology Society International”.  Available online as at 1 June 2007 at:

http://www.nucleus.com/~ufology/USI/Content/Classes-01.htm

 

[21.10] Donald Johnson, “The UFOCAT 2002 User’s Guide”, April 2002 at page 30.

 

[21.11] To add all the digits of a number within Excel, see (for example) the webpage at the link below which suggests using the following : “=SUM((LEN(A2)-LEN(SUBSTITUTE(A2,{1,2,3,4,5,6,7,8,9},"")))*{1,2,3,4,5,6,7,8,9})”:

http://www.mrexcel.com/board2/viewtopic.php?t=53620

 

[21.12] Jacques Vallee “How to classify and codify Saucer sightings”, FSR Volume 9 issue 5, September/October 1963, pages 9-12.

 

[21.13] Jacques Vallee “How to select significant UFO reports”, FSR Volume 11 issue 5, September/October 1965, pages 15-18.

 

[21.14] Jacques Vallee, “Anatomy of a Phenomenon” (1965) at pages 39-40 (in Chapter 3) of the Henry Regnery hardback edition (with the same page numbering in the Tandem paperback edition).

 

[21.15] Jacques Vallee and Janine Vallee, “Challenge to Science : The UFO Enigma” (1966) at pages 266-267 (in Appendix 4) of the Ballantine Books paperback edition, at page 222 of the Tandem paperback edition.

 

[21.16] William T Powers, “Some Preliminary Thoughts on Data Processing”, FSR Volume 12 issue 4, July/August 1966, pages 21-22.

 

[21.17] Email from Donald Johnson to Isaac Koi, 21 June 2010.

 

[21.18] Email from Robert Moore to Isaac Koi, 24 June 2010.


 

PART 20:  Quantitative criteria : Hynek – Strangeness and Probability

 

Some Relevant Definitions

 

Before considering Hynek’s Strangeness and Probability ratings, it may be helpful to briefly recap a few of Hynek’s relevant definitions.

 

In his book “The UFO Experience” (1972), Hynek divided sightings into two divisions: “(I) those reports in which the UFO is described as having been observed at some distance; (II) those involving close-range sightings” (see Footnote 20.17).

 

The distant sightings were divided by Hynek into:

 

  1. “Nocturnal lights” – “those seen at night”
  2. “Daylight discs” – “those seen in the daytime”
  3. “Radar-Visual” – “those reported through the medium of radar”

 

The close-range sightings were divided by Hynek into:

 

(1) Close Encounter of the First Kind – “the reported UFO is seen at close range but there is no interaction with the environment (other than trauma on the part of the observer)”

 

(2) Close Encounter of the Second Kind – “physical effects on both animate and inanimate material are noted”

 

(3) Close Encounter of the Third Kind – “the presence of ‘occupants’ in or about the UFO is reported”.

 

In addition to the above definitions, J Allen Hynek proposed the use of a Probability Rating and a Strangeness Rating.  Those ratings are probably the best known suggested quantitative criteria for evaluating UFO cases. However, as with the other criteria outlined in PART 19: Quantitative criteria : Introduction, their actual application has been very limited.

 

This webpage examines Hynek’s proposed Probability and Strangeness Ratings, rather than the above definitions.  However, I note in passing that those definitions (while very widely adopted) are acknowledged by quite a few UFO researchers as giving rise to various difficulties.  For example, Jenny Randles has concisely identified three main difficulties: “Firstly, there is a clear overlap where it can often be very hard to determine which category a case fits into.  This is particularly so between the Daylight Disc, CEI and CEII cases.  Secondly, it is not very acceptable to distinguish between close encounter and non-close encounter on the basis of distance. An arbitrary boundary may well be set (e.g. 100 metres) where anything closer than this becomes labelled a close encounter, but it is well known that witness estimates of distance are, to say the least, inaccurate.  Thirdly, there seems to be not enough distinction between the higher strangeness types of reports – the very reports we ought to be the most interested in” (see Footnote 20.13).

Similarly, this is not the place to explore suggested refinements of (or additions to) Hynek’s classification. I simply note in passing that various suggestions have been made. Of the various classification systems which have sought to develop Hynek’s definitions, particularly noteworthy are those put forward by Jenny Randles in several publications (see Footnote 20.11 to Footnote 20.16 and the discussion in PART 22: Quantitative criteria : BUFORA’s case priority).

 

Many researchers have, for example, suggested adding a “Close Encounters of the Fourth Kind” category (usually, but not always, to deal with alleged alien abductions).  However, there is no universal acceptance of any of the proposed variations to Hynek’s classifications. Indeed, there is very considerable variation in the proposed additional classes of reports – even within books by the same authors. For example, in their book “UFOs: A British Viewpoint” (1979) Jenny Randles and Peter Warrington referred to CEIV as being “encounters with psychic effects”, including “all reports of a psychic (here defined as ‘apparently non-physical’) nature.  This often means abduction claims, where there are time-lapses and other ‘non-real’ elements” (see Footnote 20.14).  However, in a subsequent book entitled “Science and the UFOs” (1985) Jenny Randles and Peter Warrington gave the following definitions: “A CE3 case involves observation of an animate alien entity in association with a UFO.  A CE4 goes one step beyond and includes contact between that entity and the witness” (see Footnote 20.15).  The definition in their later book is closer to the commonest usage of CE4 that has emerged in subsequent decades.

 

 

 

 

 

Hynek’s Strangeness and Probability Ratings

 

Hynek discussed these proposed ratings in several books, particularly in his book “The UFO Experience” (1972). He also discussed those ratings in his essay in “UFO’s: A Scientific Debate” (1972) (edited by Carl Sagan and Thornton Page).  In that essay, he gave the following summary of “strangeness” and “probability” (or “credibility”) ratings:

 

“The degree of ‘strangeness’ is certainly one aspect of a filtered UFO report. The higher the ‘strangeness index’ the more the information aspects of the report defy explanation in ordinary physical terms.  Another significant dimension is the probability that the report refers to a real event; in short, did the strange thing really happen? And what is the probability that the witnesses described an actual event? This ‘credibility index’ represents a different evaluation, not of the report in this instance, but of the witnesses, and it involves different criteria.  These two dimensions can be used as coordinates to plot a point for each UFO report on a useful diagram.  The criteria I have used in estimating these coordinates are: For strangeness: How many individual items, or information bits, does the report contain which demand explanation, and how difficult is it to explain them, on the assumption that the event occurred? For credibility: If there are several witnesses, what is their collective objectivity? How well do they respond to tests of their ability to gauge angular sizes and angular rates of speed? How good is their eyesight? What are their medical histories? What technical training have they had? What is their general reputation in the community? What is their reputation for publicity-seeking, for veracity? What is their occupation and how much responsibility does it involve? No more than quarter-scale credibility is to be assigned to one-witness cases” (see Footnote 20.04).

 

Hynek described the two ratings in more detail in his book “The UFO Experience” (1972), as follows:

 

The Strangeness Rating:

 

“The Strangeness Rating is, to express it loosely, a measure of how ‘odd-ball’ a report is within its particular broad classification. More precisely, it can be taken as a measure of the number of information bits the report contains, each of which is difficult to explain in common-sense terms. A light seen in the night sky the trajectory of which cannot be ascribed to a balloon, aircraft, etc would nonetheless have a low Strangeness Rating because there is only one strange thing about the report to explain: its motion.  A report of a weird craft that descended to within 100 feet of a car on a lonely road, caused the car’s engine to die, its radio to stop, and its lights to go out, left marks on the nearby ground, and appeared to be under intelligent control would receive a high Strangeness Rating because it contains a number of separate very strange items, each of which outrages common sense” (see Footnote 20.02).

 

The Probability Rating:

 

“Assessment of the Probability Rating of a report becomes a highly subjective matter. We start with the assessed credibility of the individuals concerned in the report, and we estimate to what degree, given the circumstances at this particular time, the reporters could have erred.  Factors that must be considered here are internal consistency of the given report, consistency among several reports of the same incident, the manner in which the report was made, the conviction transmitted by the reporter to the interrogator, and finally, that subtle judgment of ‘how it all hangs together’” (see Footnote 20.03).

 

 

Hynek made the following comments about assigning relevant numbers for these two criteria:

 

“Ideally, a meaningful Probability Rating would require the judgment of more than one person.  Such luxury of input is rarely available.  … In my own work, I have found it relatively easy to assign the Strangeness number (I use 1 to 10) but difficult to assign a Probability Rating. Certainty (P=10) is, of course, not practically attainable; P=0 is likewise impossible under the circumstances since the original report would not have been admitted for consideration.  The number of persons involved in the report, especially if individual reports are made, is most helpful.  I do not assign a Probability Rating greater than 3 to any report coming from a single reporter, and then only when it is established that he has a very solid reputation” (see Footnote 20.05).

 

Various other UFO researchers have discussed Hynek’s proposed rating system.  Most discussions merely give a summary of that proposed system, with little meaningful comment or analysis.  However, several researchers have sought to develop, or comment upon, Hynek’s proposal.

 

Some comments have been fairly dismissive.  For example,  after suggesting that McDonald’s interviews of UFO witnesses merely “confirm his well-known bias in favour of ETH”, Menzel has suggested that “… Hynek’s indexes of ‘credibility’ and ‘strangeness’ are equally subjective.  Study of them may throw some light on Dr Hynek but they are unlikely to contribute much to the UFO problem” (see Footnote 20.01).

It is not merely debunkers who have questioned the subjectivity of these ratings.  For example, Robert Moore (a British ufologist who has edited several UFO magazines, including one published by BUFORA) has commented that Hynek's application of the Strangeness/Probability system "was always subjective - in that it had no fixed data criteria, ratings of cases were based solely on judgement" and that later proposed refinements "attempted to address this" (see Footnote 20.24).

Before turning to those proposed refinements, I note that one of the simplest but (in my view) most useful suggestions in relation to Hynek’s rating system is to produce an overall score for a UFO report by multiplying the two numbers together.  Don Berliner has suggested that “J Allen Hynek was on the right track” in proposing the Strangeness/Probability rating system and that “some system of establishing the relative usefulness of a report is needed” because “for decades we have been far too unscientific about judging the merits of reports, and this has led to a great waste of effort”.  He also suggested the use of a “sighting coefficient”, suggesting that “by multiplying the two numbers, the report can be given a K/U (coefficient of usefulness) which will establish its potential for helping to solve the mystery, relative to other reports” (see Footnote 20.06).

 

Personally, I tend to think of Don Berliner’s “sighting coefficient” in terms of a sighting’s “total score” or “Berliner Score”.

 

 

 

Don Berliner’s Strangeness Scale and Credibility Scale

 

Don Berliner’s suggestion of a “sighting coefficient” (obtained, as discussed above, by multiplying the Strangeness rating by the Probability rating) was coupled with the following “Strangeness Scale” and “Credibility Scale” (see Footnote 20.06).

 

Don Berliner’s “Strangeness Scale” :

0 – Identified as a known object or phenomenon, or a report lacking a clear UFO content.

1 – Night light with no apparent object.

2 – Night object

3 – Daylight object seen at a distance

4 - Night Close Encounter of the First Kind

5 – Daylight CE-I

6 – Ambiguous CE-II

7 – Unambiguous CE-II

8 – CE-III

9 – CE-III with occupant reaction to the witness

10 – CE-III with meaningful communication

 

Don Berliner’s “Credibility Scale” :

0 – Witnesses lacking believability

1 – Single average witness

2 – Multiple average witnesses

3 – Single exceptional witness

4 – Multiple exceptional witnesses

5 – Radar/visual

6 – Still photos shot by amateurs

7 – Still photos shot by professionals

8 – Amateur movies or videotape

9 – Professional movies or videotape

10 – Live television

 

Don Berliner sought to illustrate the application of the Strangeness Scale and the Credibility Scale (and his “sighting coefficient”) to several of the best known UFO cases.  He gave scores to 11 well-known sightings, the highest score of those 11 sightings being a score of 27 for Zamora's sighting at Socorro, as follows:

1947 Kenneth Arnold. Sighting Coefficient “3*3=9”

1948 Thomas Mantell. Sighting Coefficient “3*4=12”

1950 Trent photos. Sighting Coefficient “3*6=18”

1952 Washington Nationals. Sighting Coefficient “1*5=5”

1952 Nash/Fortenberry sighting.  Sighting Coefficient “2*4=8”

1957 Levelland. Sighting Coefficient “2*7=14”

1964 Socorro. Sighting Coefficient “9*3=27”

1966 “Swamp gas”, Dexter. Sighting Coefficient “6*2=12”

1973 Coyne helicopter. Sighting Coefficient “4*4=16”

1979 New Zealand film. Sighting Coefficient “1*9=9”

1980 Cash/Landrum incident. Sighting Coefficient “7*2=14”
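Berliner’s arithmetic is simple enough to reproduce directly. The following sketch (the function and variable names are mine, not Berliner’s) recomputes the coefficients for the eleven cases listed above:

```python
# Don Berliner's "sighting coefficient" (K/U): the product of a case's
# Strangeness Scale value and Credibility Scale value, each on 0-10.
# The (strangeness, credibility) pairs below are Berliner's own
# illustrative scores for eleven well-known cases.

BERLINER_SCORES = {
    "1947 Kenneth Arnold": (3, 3),
    "1948 Thomas Mantell": (3, 4),
    "1950 Trent photos": (3, 6),
    "1952 Washington Nationals": (1, 5),
    "1952 Nash/Fortenberry": (2, 4),
    "1957 Levelland": (2, 7),
    "1964 Socorro": (9, 3),
    "1966 Dexter ('swamp gas')": (6, 2),
    "1973 Coyne helicopter": (4, 4),
    "1979 New Zealand film": (1, 9),
    "1980 Cash/Landrum": (7, 2),
}

def sighting_coefficient(strangeness: int, credibility: int) -> int:
    """Berliner's K/U ('coefficient of usefulness'): multiply the two ratings."""
    return strangeness * credibility

# Rank the cases by coefficient, highest first.
ranked = sorted(
    BERLINER_SCORES.items(),
    key=lambda item: sighting_coefficient(*item[1]),
    reverse=True,
)
```

On these numbers the top-ranked case is Socorro at 27, out of a theoretical maximum of 100.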

 

 

 

Berliner’s personal conclusion from the fact that the highest score he assigned to the cases in his sample was 27 (out of a potential score of 100) was that this “seems to make it clear that there is a severe lack of reports which could be used to convince scientists, legislators and the general public that we are dealing with something so unusual that it deserves immediate attention”.

 

It is notable that J Allen Hynek’s own (subjective?) values for various cases are (like the numbers assigned by Berliner) also fairly low.  Hynek included several Strangeness Rating/Probability charts in his book “The UFO Experience” (1972) – see Footnote 20.07. Those tables included:

 

(1) 15 Daylight Discs. In this list, the highest Berliner “Sighting Coefficient” based on the Strangeness-Probability ratings assigned by Hynek is 24.

 

(2) 13 Nocturnal Lights. In this list, the highest Berliner “Sighting Coefficient” based on the Strangeness-Probability ratings assigned by Hynek is 24.

 

(3) 10 Radar-Visual cases. In this list, the highest Berliner “Sighting Coefficient” based on the Strangeness-Probability ratings assigned by Hynek is 36 points (Strangeness-Probability of 4-9) – 17 July 1957 SW United States

 

(4) 14 Close Encounters of the First Kind. In this list, the highest Berliner “Sighting Coefficient” based on the Strangeness-Probability ratings assigned by Hynek is 36 points (Strangeness-Probability of 4-9) – 10 October 1966 Newton, Ill.

 

(5) 23 Close Encounters of the Second Kind. In this list, the highest Berliner “Sighting Coefficient” based on the Strangeness-Probability ratings assigned by Hynek is 40 points (Strangeness-Probability of 5-8) – 2 November 1957 Levelland, Texas

 

(6)  5 Close Encounters of the Third Kind. In this list, the highest Berliner “Sighting Coefficient” based on the Strangeness-Probability ratings assigned by Hynek is 40 points (Strangeness-Probability of 5-8) – 26 June 1958 Boianai, New Guinea

 

Studying Don Berliner’s suggested ratings for a moment betrays some of the problems with such prescriptive scales of strangeness and (in particular) credibility.  For example, should reports from “multiple exceptional witnesses” only get a single credibility point more than a report from a “single exceptional witness”, and only two more than “multiple average witnesses”?  More importantly, is it really satisfactory to automatically give a still photo twice the credibility weighting of a report from a “single exceptional witness”?  Or to automatically give a movie twice the credibility weighting of “multiple exceptional witnesses”?

 

Such strict automatic weightings in relation to photographic material appear particularly unsatisfactory in an era when photographs are easily manipulated using computer software (e.g. Photoshop) and movies are created almost as easily using three dimensional modeling and video editing software (e.g. 3Dmax).

 

While the basic idea of multiplying strangeness and probability ratings together seems to me very useful, I disagree with the detailed content of Berliner’s suggested prescriptive ratings. For example, I have looked into several hoaxed videos of “aliens” that were created using CGI.  Where do such videos rate on Berliner’s scale? Well, it seems that they score 9 or 10 on the Strangeness Scale and 8 or 9 on the Credibility Scale, resulting in a total Berliner “sighting coefficient” of about 80 to 90 – massively more than the scores assigned by Berliner to any of the classic sightings he considered in his article.

 

 

 

 

Jim Speiser’s suggested refinements

 

Don Berliner is not the only researcher to seek to reduce the subjectivity involved in assigning Strangeness and Probability Ratings.  Another researcher, Jim Speiser, has published different criteria for assigning relevant values. The relevant article appeared, like Don Berliner’s article referred to above, in the MUFON Journal in 1987 (see Footnote 20.08).

 

Jim Speiser’s article began by referring to Don Berliner’s article and stating that he quite agreed with Don Berliner’s objective of concentrating efforts on those reports which may contain information of long-term value.  Speiser said that his organization, Paranet, had for the previous year been using a similar system for “weighing UFO reports as a method of determining usefulness”.   He indicated that Paranet’s system used a scale of one to five, as follows:

 

Strangeness Factor: S1-S5

S1 – Explainable or explained

S2 - Probably explainable with more data

S3 - Possibly explainable, but with elements of strangeness

S4 - Strange; does not conform to known principles

S5 - Highly strange; suggests intelligent guidance

 

Probability Factor: P1 – P5

The "Probability" factor of a case relates to the credibility, number and separation of witnesses and/or the soundness of evidence gathered.

P1 - Not Credible or Sound

P2 - Unreliable; smacks of hoax

P3 - Somewhat credible or indeterminate

P4 - Credible; Sound

P5 - Highly Credible, leaving almost no doubt

 

In a subsequent online article, Jim Speiser has given the following examples of his “Probability Factor” values:

P1 - Known Hoaxer or UFO "Flake"; Hoax Photo

P2 - Repeat Witness; Conflicting Testimony

P3 - Standard, first-time witness; slight radiation reading

P4 - Multiple witnesses; pilot; clear photo

P5 - National Figure; Multiple independent witnesses; videotape

 

In his MUFON Journal article in 1987, Jim Speiser suggested (correctly in my view) that the most obvious difference between his system and Berliner’s is that his system “is more subjective” since “it is not dependent on categorization based on specific elements of the case; rather it calls for a more general judgment of how useful the various elements are to the advancement of our knowledge”.

Of course, the cost of the flexibility of Speiser’s more subjective ratings is that the values assigned can vary considerably from individual to individual – limiting the usefulness of the system as a means of comparing the importance of various cases (particularly if the values are assigned by different researchers/groups).

In my view, it is doubtful that Speiser’s suggestions meaningfully limit the subjectivity inherent in Hynek’s original proposals.  They amount to saying that a value of 1 is very low, a value of 2 is low, 5 is very high, and so on.

 

Jim Speiser included some illustrations of the values that he would assign to several high-profile cases, duplicating the list used by Don Berliner in his article:

 

1947 Kenneth Arnold. S4/P3

1948 Thomas Mantell. S2/P5

1950 Trent photos. S5/P4

1952 Washington Nationals. S5/P5

1952 Nash/Fortenberry sighting.  S5/P5

1957 Levelland. S5/P5

1964 Socorro. S5/P3

1966 “Swamp gas”, Dexter. S3/P5

1973 Coyne helicopter. S5/P5

1979 New Zealand film. S3/P3

1980 Cash/Landrum incident. S5/P4
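One way to gauge how high Speiser’s values sit relative to Berliner’s is to compare each rater’s product against the maximum possible product on that rater’s own scale (25 for Speiser’s 1-5 ranges, 100 for Berliner’s 0-10 ranges). This normalisation is my own illustration, not something either author proposed:

```python
# Compare a case's strangeness x probability product against the maximum
# product achievable on the rater's own scale. The normalisation is an
# illustration of my own, not part of either Speiser's or Berliner's system.

def fraction_of_max(strangeness: int, probability: int, scale_max: int) -> float:
    """Product of the two ratings as a fraction of the scale's maximum product."""
    return (strangeness * probability) / (scale_max ** 2)

# Socorro, 1964: Speiser assigned S5/P3 (on a 1-5 scale);
# Berliner assigned strangeness 9 and credibility 3 (on 0-10 scales).
socorro_speiser = fraction_of_max(5, 3, 5)    # 15/25
socorro_berliner = fraction_of_max(9, 3, 10)  # 27/100
```

Even normalised this way, Speiser’s Socorro score (0.60 of the maximum) is more than double Berliner’s (0.27).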

 

Since Jim Speiser was only using a scale of 1 to 5, these values are (relative to those assigned by J Allen Hynek and Don Berliner) relatively high.  Given the significant difference in the level of ratings, it is difficult to completely ignore Menzel’s suggestion that study of Hynek’s indexes of ‘credibility’ and ‘strangeness’ “may throw some light on Dr Hynek but they are unlikely to contribute much to the UFO problem” (see Footnote 20.01).

 

One question that I will return to in PART 28 is whether obtaining the judgment of more than one person would help smooth out relevant subjective biases and produce a more useful result.  I note that Hynek himself suggested that “Ideally, a meaningful Probability Rating would require the judgment of more than one person.  Such luxury of input is rarely available” (see Footnote 20.05).

 

 

 

 

Claude Poher’s suggested refinements

 

Claude Poher, the first director of GEPAN (the UFO investigative office under the French government's National Center for Space Sciences), has also suggested a different system which seeks to reduce the subjectivity involved in Hynek’s Strangeness and Probability Ratings.

On his website (www.premiumwanadoo.com/universons), Claude Poher suggested that the credibility criterion should be based on “the known parameters about the witnesses and their method of observation”, NOT taking into account “the anecdotal story of what the witnesses have seen” in order to “separate the credibility from the strangeness criterion of an observation”. He suggested that “credibility belongs to the witnesses, strangeness belongs to the observed facts”.

I am aware that ratings suggested by Poher are also given on pages 85-92 of his “Etude statistique des rapports d’observations du phénomène O.V.N.I. Etude menée en 1971, complétée en 1976”, available on-line on the GEIPAN site (see Footnote 20.18).  However, since that document is in French I have not been able to read it.  The comments below therefore relate to Poher’s suggestions as set out on his website.

In relation to the credibility criterion of an observation, he begins by assigning a value on a scale very similar to Speiser’s:

0 = absolutely not credible.
1 = very little credible.
2 = a little credible.
3 = credible.
4 = very credible.
5 = perfectly credible.

Poher commented that “This note depends only of the witnesses and of the observation method. In our computer file, we have one rubric for the observation method and three rubrics concerning the witnesses : their number, their age, their ‘competencies’. We can ascribe a different note for each rubric, and combine the four notes according to relative ‘weights’ for the rubrics. This means the relative importance of each rubric as compared to the others.”

He acknowledged that “All this is quite subjective” but suggested that “these are only comparison criteria”.

Poher noted that the relative weights of the four rubrics were as follows:

(1) Relative weight of the number of witnesses = 31 %

(2) Relative weight of the age of the main witness = 7%

(3) Relative weight of the "socio-professional code" of the main witness = 31%

(4) Relative weight of the method of observation = 31%

Total = 100 %

Thus, the witness age was “three times less important than the three other criteria”, with those other criteria being given equal weight.  Credibility was thus = (31 x Value 1  +  7 x Value 2  +  31 x Value 3  +  31 x Value 4)  /  100

 

In terms of the values to be assigned for each rubric, Poher noted the following:


In relation to "number of witnesses":

0 if the number is unknown.
1 for one witness.
2 for two witnesses.
3 for 3 to 9 witnesses.
4 for 10 to 100 witnesses.
5 for more than 100 witnesses.

Poher himself accepted that these values are “extremely ‘severe’” and that they “penalize considerably most of the testimonials, where the number of simultaneous witnesses is rarely larger than five”.


In relation to “age of main witness”:

0 if age is unknown.
1 under 13 years.
2 not used.
3 from 14 to 20 years.
4 larger than 60 years.
5 from 21 to 59 years.



In relation to "socio-professional code of main witness" :

0 if unknown.
1 for schoolboys, shepherds ...
2 for workers, farmers ...
3 for technicians, policemen, qualified army personnel ...
4 for engineers, officers ..
5 for pilots, researchers, astronomers ...


In relation to the method of observation :

0 without information, or naked eye observation with no indication of distance.
1 naked eye with more than 3 km distance.
2 naked eye with 1 to 3 km distance, or from an airplane with more than 1 km distance.
3 for a radar observation, or naked eye with 200 to 1000 m distance.
4 for a binocular observation, or binoculars + radar, or from an airplane at less than 1000 m distance, or naked eye with less than 150 m distance.
5 for an observation with a telescope, or with a photography, or with binoculars + photo, or naked eye with less than 50 m distance.
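Poher’s weighted formula and rubric tables can be combined into a short sketch. The weights and value ranges below are taken from his description above; the function names and the worked example are mine:

```python
# A sketch of Claude Poher's credibility criterion: four rubric values
# (each 0-5) combined with his stated weights of 31/7/31/31.
# Helper and variable names are mine, not Poher's.

def witness_count_value(n):
    """Poher's 0-5 value for the number of witnesses (None/unknown = 0)."""
    if n is None:
        return 0
    if n == 1:
        return 1
    if n == 2:
        return 2
    if 3 <= n <= 9:
        return 3
    if 10 <= n <= 100:
        return 4
    return 5

def poher_credibility(witnesses, age, socio, method):
    """Credibility = (31*V1 + 7*V2 + 31*V3 + 31*V4) / 100, each V in 0-5."""
    return (31 * witnesses + 7 * age + 31 * socio + 31 * method) / 100

# Hypothetical example: two witnesses (value 2), main witness aged 21-59
# (value 5), a pilot (value 5), naked eye at under 50 m (value 5).
example = poher_credibility(witness_count_value(2), 5, 5, 5)  # (62+35+155+155)/100
```

The maximum possible score is (31×5 + 7×5 + 31×5 + 31×5)/100 = 5.0, matching Poher’s “perfectly credible” end of the scale.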


Strangeness criterion of an observation :

Poher’s system involved assigning a Strangeness criterion as follows:

0 = not at all strange, or insufficient information.
1 = slightly strange, object is a dot moving in straight line and constant angular speed.
2 = fairly strange, object of a small angular dimension but abnormal trajectory.
3 = strange, complex trajectory, landing or quasi landing without traces, sudden disappearance in flight.
4 = very strange, landing with traces.
5 = particularly strange, landing with observation of occupants.


While Poher’s system superficially looks very detailed and scientific, the actual basis for the relative values is far from clear.  For example, in relation to credibility, why should a UFO report from a pilot have more than twice the value of a report from a farmer – particularly in the light of the consideration of the data in PART 16 : Qualitative criteria: Credible witnesses?

 

Does Poher’s system merely result in spurious accuracy and the codification of biases?  The answers are far from clear.

 

 

Suggested refinements by Jenny Randles

 

Jenny Randles wrote the book “UFO Study” (1981) as a “handbook for enthusiasts”.  In that book, she suggested that case reports written by UFO investigators after an investigation is concluded should include an evaluation of the strangeness and probability rating of the case  (see Footnote 20.11).

 

She suggested that the case report “could profitably include … your first-hand opinion on the strangeness and credibility of a story” since “you are the person who has had direct contact with the witnesses”, referring to J Allen Hynek as the first to propose the importance of this.

 

Jenny Randles commented that “in truth this means a subjective assessment of the witness and the events by yourself, but then you are in the best position to make such an assessment.  It is suggested that you bear in mind a scale from 0 to 9 for both strangeness and probability.  0 would represent a report which was totally without credibility (especially so far as the witnesses were concerned) or one where there were _no_ strange aspects. 9, on the other hand, would apply to cases which are completely credible or without unstrange attributes.  Both of these extremes should be regarded as unobtainable guidelines, and your two-figure evaluation should fall somewhere in between”.

 

In addition to stating an “S-P rating”, Jenny Randles suggested that UFO investigation case reports should have a title page containing “any codified information about the case that will transfer rapid data”. The relevant codes, devised by Jenny Randles and Bernard Delair “for a joint research catalogue” (which I do not recall seeing discussed subsequently), include, for example, “CE3” for a Close Encounter of the Third Kind, “L” for “Landing”, and “EM” for “Electromagnetic Interference”.  Of particular significance in terms of refinement of Hynek’s Strangeness-Probability Ratings is the suggestion that the title page should also include “the Investigation Level”.

 

The “Investigation Level” of a sighting was a proposal that had previously been made by Jenny Randles and Peter Warrington in their book “UFOs: A British Viewpoint” (1979) - see Footnote 20.12 (and had been discussed by Jenny Randles in an article in Flying Saucer Review in 1978 - see Footnote 20.16).  Jenny Randles and Peter Warrington commented that: “Almost any sighting of an aerial phenomenon will find a publisher who will print the report without reference to a logical explanation.  There is obviously a need for some kind of estimation of the reliability of a published report. This needs to be agreed by world UFO organizations.  Every report published should be codified in some way to indicate the amount of investigation which has gone into it”.  They noted the absence of such a system at that time and proposed the following “Investigation Levels”:

 

Level A: A report which has received on-site investigation by experienced investigators.

 

Level B: An interview with the witness or witnesses was conducted by investigators but there was no follow-through investigation into the case.

 

Level C: The witness has simply completed a standard UFO report form of some type. No interviews have been conducted.

 

Level D:  The report consists solely of some form of written communication from the witness.

 

Level E: The report is based on information received second hand (such as a newspaper account).  There has been no follow up investigation at all.

 

The article by Jenny Randles in Flying Saucer Review in 1978 (see Footnote 20.16) suggested that she and Bernard Delair of CONTACT considered this system might be of “great value if regularly published in UFO periodicals”. I am not aware of any publication subsequently adopting such a practice, including the journal in which that article was published (i.e. Flying Saucer Review).

Robert Moore has suggested that the proposals made by Jenny Randles and Peter Warrington were "far superior" to Hynek's proposals, "but never widely adopted, sadly" (see Footnote 20.24).

 

Suggestions made by David Saunders

 

In 1981, Vicente-Juan Ballester-Olmos published a book entitled “Los OVNIS y la Ciencia” (UFOs and Science) with physicist Miguel Guasp. Chapter V was called “Methodology and Organization” and it started with a section entitled “Standards in the Evaluation of UFO Reports” where they reviewed the various systems to that date and proposed their own system, the one later adopted by MUFON (see PART 23:  Quantitative criteria : Ballester/MUFON index).  In that book (at page 122, 3rd paragraph, and page 123, 1st paragraph), they refer to an article entitled “How Colorado classes UFOs” (see Footnote 20.21).  It appears from the summary provided by Vicente-Juan Ballester-Olmos that the article described a matrix created by Dr. David Saunders to preliminarily classify UFO sightings.

 

The relevant matrix was published on page 124 of the book by Vicente-Juan Ballester-Olmos, with a caption stating “Matrix used by the Colorado University’s UFO Commission for the classification of cases, based on their potential value.”

 

Basically, the matrix is a table with columns and rows.

 

The columns have a label indicating that they are of “increasing strangeness” from left to right.  From left to right, those columns are:

1. Sighting

2. Recurrence

3. Tracking

4. Motions

5. Formations

6. Day-Night

7. Clouding

8. Landing

9. Rendezvous

10. Chasing

11. Pacing

12. Maneuvering

13. Curiosity

14. Responsivity

     

The rows have a label indicating that they are of “increasing objectivity” from the top downwards.  From the top downwards, those rows are:

1. Prediction

2. Communication

3. Single Witness

4. Exceptional Witness

5. Multiple Witness

6. Independent Witnesses

7. Theodolite or telescope

8. Polarizer or grating

9. Animal Reactions

10. Electromagnetic effects

11. Radar

12. Isolated Pictures

13. Still sequences

14. Movies

15. Advanced Instrumentation

16. Radioactivity or burn

17. Garbage

18. Fragments

19. Wreckage

       

There are obvious similarities between the ratings of “strangeness” and “objectivity” proposed by Dave Saunders (presumably in the period between 1966 and 1968) and Hynek’s proposal of Strangeness and Probability Ratings.

       

It is not clear to me whether these proposals were devised independently or jointly, or whether one influenced the thinking of the other.  This may be made clearer by the content of the article entitled “How Colorado classes UFOs” (see Footnote 20.21) mentioned above.

       

       

       

       

Actual applications of Hynek’s Strangeness and Probability Ratings

       

There has been a considerable amount of discussion of Hynek’s Strangeness and Probability Ratings. Robert Moore has referred to these ratings as "iconic and widespread" (see Footnote 20.24).

       

However, there has in fact been very limited application of them.

       

The reasons for the limited application are unclear.

       

The Hynek Strangeness and Probability Ratings do not appear to be used in the huge UFO database (UFOCAT) sold by the organisation Hynek founded, CUFOS.  UFOCAT entries do, however, include numbers in relation to Vallee’s SVP criteria discussed in PART 21: Quantitative criteria : Vallee’s SVP ratings.  I have contacted the researcher that has managed the UFOCAT project since about 1990 (Donald Johnson) and understand from him that the Hynek Strangeness and Probability Ratings were never “formally adopted” by UFOCAT. Before 1990, and after David Saunders and Fred Merritt stopped working on UFOCAT, it went through a period when it was “out of favour with Hynek, presumably because of Willy Smith's efforts to invent UNICAT as a replacement”. The UFOCAT record layout therefore “remained stagnant and no new fields were added” until Donald Johnson started work on UFOCAT around 1990.  He began working on re-creating UFOCAT “by first adding many pages of case coding that had been done by CUFOS staff in the early 1980s” and noticed that “no one had attempted to add the strangeness and probability ratings” and so “that probably influenced me to be as expedient as possible and not add the Hynek ratings when I expanded the number of fields”.  From an article published in the MUFON Journal in 1976, it appears that at least some of those that worked on UFOCAT had envisaged that Hynek's Strangeness and Probability Ratings would be added (see Footnote 20.22). That article indicates that at that time columns 133-136 of UFOCAT's records related to "Credibility (to be computed)" while columns 137-140 related to "Strangeness (to be computed)". Another MUFON publication a couple of years later contains some analysis of various fields within the UFOCAT records and notes that the columns above column 120 (i.e. including the columns designated for Strangeness and Credibility Ratings) "are devoted to detail coding, and are not in active use at this time" (see Footnote 20.23).

       

Donald Johnson’s view, having managed the largest existing UFO database for about two decades, is that “applying probability ratings is not that difficult, but I have never seen a written codification of the process to apply the strangeness ratings”.  “Without sufficient guidance and because I could not go back and ask Hynek about it, as he had died in 1985”, he decided not to include either of these ratings.

       

However, Larry Hatch’s *U* database (the second largest UFO database, after UFOCAT, of which I am aware) does include Strangeness and Probability Ratings, but that database can only be accessed on modern computers with considerable effort, since Larry Hatch developed his own database software for use under MS-DOS.  Few computer systems purchased after about 2002 will have an operating system compatible with the software developed by Larry Hatch.  (It is currently still possible to run a "Virtual PC" on a modern computer that simulates an older computer environment capable of running the *U* database, but this involves several steps - see Footnote 20.25. The necessary backwards compatibility is now reaching its limit, with that method not working on the very latest incarnation of the Windows operating system, i.e. Windows 7.)  Donald Johnson has commented that while Larry Hatch did make the effort to add Strangeness and Probability Ratings, Larry Hatch “has never really defined and operationalized how he would assign these codes”, so Donald Johnson “hesitated to follow suit” (see Footnote 20.19).

       

Another database (Willy Smith's UNICAT) also included Strangeness Ratings.  According to UFO researcher Jan Aldrich, it included "strangeness values assigned by Hynek" (see Footnote 20.26).  Unfortunately, Willy Smith died in 2006 and he, according to Jan Aldrich, "used a computer program which is obsolete".  Jan Aldrich is in possession of paper copies of the content of UNICAT, but this consists of "500+" records, with each record having a separate page.  I am not aware of any plan to make those records available to other researchers and this would, presumably, be a time-consuming task.

       

(I do not know how many hundreds of hours were spent by Larry Hatch and Willy Smith creating and maintaining their respective databases, but I note in passing that the above couple of paragraphs should provide some sobering facts for the next generation of UFO researchers that are currently planning and creating new UFO databases).

       

As noted above, Jenny Randles appears, at least to some degree, to have adopted Hynek’s rating scheme.  I have not seen the NUFON database she mentioned.  It seems that BUFORA and/or its investigators may also have adopted Hynek’s rating scheme.  I have been told by one of BUFORA’s ex-Chairmen (Tony Eccles) that BUFORA “adopted systems developed by Hynek (and Vallee)” (see Footnote 20.20).  I am not sure what form that adoption took – neither Hynek’s Strangeness/Probability ratings nor Vallee’s SVP ratings appear to be included in BUFORA’s standard case investigation report forms (see PART 22: Quantitative criteria : BUFORA’s case priority).

       

      Jim Speiser has written an article (see Footnote 20.10) which refers to “each UFO Sighting Report in the CUFON database” having a rating at the bottom in the form S#/P#.  As at June 2010, the website of the Computer UFO Network (www.cufon.org) focuses on UFO documents rather than sightings. The few sightings it addresses do not appear to have any ratings at the bottom, whether in the format S#/P# or otherwise.   Some material apparently produced by “CUFON Computer UFO Network” in 1987, i.e. during the same year as Speiser’s article, available online (see Footnote 20.11) does contain a fairly small number of UFO reports which include such ratings.  It is not clear how long the system persisted, nor why it appears to have been abandoned.
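      The S#/P# tags described above lend themselves to mechanical extraction.  The following is a minimal sketch; the function name, and the assumption that each rating is a single digit, are mine rather than anything documented by CUFON:

```python
import re

def parse_rating(tag):
    """Parse a CUFON-style 'S#/P#' tag into its two components:
    S = Strangeness Rating, P = Probability Rating (assumed here to be
    single digits). Returns (strangeness, probability) as integers,
    or None if the tag does not match the expected format."""
    m = re.fullmatch(r"S(\d)/P(\d)", tag.strip())
    if m is None:
        return None
    return int(m.group(1)), int(m.group(2))

print(parse_rating("S3/P4"))    # a high-strangeness, fairly probable report
print(parse_rating("no tag"))   # malformed input is rejected
```

A scanner along these lines could, in principle, be run over archived CUFON report files to recover whatever ratings were assigned before the system was abandoned.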

       

      I have been informed by Fran Ridge that he believes the Berliner number system (and, by implication, presumably also Hynek’s Strangeness and Probability Ratings) was used by Willy Smith and also by MUFON in their evaluations of submitted cases.  However, I have not been able to confirm this.  In relation to the latter, I note that MUFON appears, since 1992, to have enforced the quantitative criteria considered in PART 23: Quantitative criteria : Ballester/MUFON index.

       

       

      FOOTNOTES

       

      [20.01] Donald Menzel, “UFO’s: A Scientific Debate” (1972) (edited by Carl Sagan and Thornton Page) at pages 136-137 (in Chapter 6) of the Barnes and Noble hardback edition (with the same page numbering in the Norton paperback edition).

       

      [20.02] J Allen Hynek, “The UFO Experience” (1972) at pages 24-25 (in Chapter 4) of the Henry Regnery hardback edition (with same page numbering in the Abelard-Schuman hardback edition) at page 28 of the various Ballantine paperback editions, at page 42 of the Corgi paperback edition.

       

      [20.03] J Allen Hynek, “The UFO Experience” (1972) at page 25 (in Chapter 4) of the Henry Regnery hardback edition (with same page numbering in the Abelard-Schuman hardback edition) at page 29 of the various Ballantine paperback editions, at page 43 of the Corgi paperback edition.

       

      [20.04] J Allen Hynek, “UFO’s: A Scientific Debate” (1972) (edited by Carl Sagan and Thornton Page) at pages 41-42 (in Chapter 4) of the Barnes and Noble hardback edition (with the same page numbering in the Norton paperback edition).

       

      [20.05] J Allen Hynek, “The UFO Experience” (1972) at pages 25-26 (in Chapter 4) of the Henry Regnery hardback edition (with same page numbering in the Abelard-Schuman hardback edition) at page 29 of the various Ballantine paperback editions, at page 43 of the Corgi paperback edition.

       

      [20.06] Don Berliner article entitled “Sighting Coefficient” in MUFON Journal, April 1987, Issue 228, pages 14 and 17.

       

      [20.07] J Allen Hynek, “The UFO Experience” (1972) pages 235-240 of the Henry Regnery hardback edition (with same page numbering in the Abelard-Schuman hardback edition), pages 265-270 of the various Ballantine paperback editions, at pages 289-294 of the Corgi paperback edition.

       

      [20.08] Jim Speiser article entitled “Paranet Classification” in MUFON UFO Journal, June 1987, Issue 230, pages 15-16

       

      [20.09] Jim Speiser article, “The Hynek Rating System”, undated.  Available online as at 1 June 2010 at:

      http://www.skepticfiles.org/ufo1/hynekufo.htm

       

      [20.10] CUFON article, “Report #: 220”, 24 January 1987.  Available online as at 1 June 2010 at:

      http://www.skepticfiles.org/mys5/ufo-4-27.htm

       

      [20.11] Jenny Randles, “UFO Study” (1981) at pages 122-124 (in Chapter 9) of the Hale hardback edition.

       

      [20.12] Jenny Randles and Peter Warrington, “UFOs: A British Viewpoint” (1979) at pages 167-168 (in Chapter 9) of the Hale hardback edition.

       

      [20.13] Jenny Randles and Peter Warrington, “UFOs: A British Viewpoint” (1979) at pages 54-55 (in Chapter 3) of the Hale hardback edition.

       

      [20.14] Jenny Randles and Peter Warrington, “UFOs: A British Viewpoint” (1979) at pages 56 (in Chapter 3) of the Hale hardback edition.

       

      [20.15] Jenny Randles and Peter Warrington, “Science and the UFOs” (1985) at page 136 (in Chapter 10) of the Hale hardback edition.

       

      [20.16] Jenny Randles article entitled “Publishing of UFO Data” in FSR Vol. 24 No.2, 1978 at pages 22-23.

       

      [20.17] J Allen Hynek, “The UFO Experience” (1972) at page 25 onwards (in Chapter 4) of the Henry Regnery hardback edition (with same page numbering in the Abelard-Schuman hardback edition) at page 29 onwards of the various Ballantine paperback editions, at page 43 onwards of the Corgi paperback edition.

       

      [20.18] Claude Poher, “Etude statistique des rapports d’observations du phénomène O.V.N.I. Etude menée en 1971, complétée en 1976”, pages 85-92. Available on-line on the GEIPAN site:

      http://www.cnes-geipan.fr/documents/stat_poher_71.pdf

       

      [20.19] Email from Donald Johnson to Isaac Koi, 21 June 2010.

       

      [20.20] Email from Tony Eccles to Isaac Koi, 20 June 2010.

       

      [20.21] NOT YET OBTAINED : Alfred J. Cote Jr, “How Colorado classes UFOs”, INDUSTRIAL RESEARCH, August 1968, 27-28.

       

      [20.22] Article entitled "UFOCAT - Tool for UFO Research" in MUFON Journal, Number 106, September 1976, pages 14-15. No author indicated, so presumably written by the editor (Dennis William Hauck).  Concludes by stating that inquiries should be addressed to Dr David Saunders.

       

      [20.23] Fred Merritt, "UFOCAT and a friend with two new ideas", MUFON Symposium Proceedings 1980, pages 30-52.

       

      [20.24] Email from Robert Moore to Isaac Koi, 24 June 2010.

       

      [20.25]  After a flood of suggestions for different approaches (particularly in a discussion with members of the AboveTopSecret.com discussion forum), I am pleased to report that I now have the full version of Larry Hatch's database working flawlessly on a fairly modern computer using Windows Vista Business.

      I've spent a bit of time on numerous dead-ends (some arising from trying to use my main computer, which has Windows 7 and is therefore already incompatible with the approach outlined below).

      However, the approach that worked was:

      (1) Installing Microsoft's Virtual PC 2007 from a page on Microsoft's website onto a laptop I own which still has Windows Vista Business on it.

      (2) Adding Windows 95 within that virtual PC, using the instructions on a page on the YouTube website.

      (3) Saving the old installed files from the floppy disk (which I've passed from old machine to new machine several times, without having had a floppy drive for a few computer generations...) into an .ISO file using MiniDVDSoft's free ISO creation software.  Running Windows 95 within Virtual PC 2007, then capturing the .ISO image of the relevant files and copying them into a new directory ("UFO") on the virtual C: hard drive.

      (4) Running the u.exe file from that new UFO directory.

       

      [20.26] Email from Jan Aldrich to Isaac Koi, 21 June 2010.


       

      PART 22:         Quantitative criteria : BUFORA’s case priority

       

      One of Britain’s most prolific and respected ufologists, Jenny Randles, wrote a book entitled “UFO Study” (1981) as a “handbook for enthusiasts”. A revised and updated version of that book has generously been made available online by Jenny Randles and Robert Moore (see Footnote 22.07) - see the separate entry on this website in relation to that book.

       

      That book included, in effect, two different proposed systems in relation to the ranking of cases. They had different purposes:

        (1) Firstly, Jenny Randles suggested that case reports written by UFO investigators (i.e. after an investigation is concluded) should include an evaluation of the strangeness and probability rating of the case.  That suggestion adopts (and adds to) J Allen Hynek’s Strangeness/Probability Ratings and is discussed in PART 20:  Quantitative criteria : Hynek – Strangeness and Probability.

        (2) Secondly, Jenny Randles included a “chart to determine case priority” (i.e. when a report is received by an investigator, prior to an investigation) – see Footnote 22.01. That chart is discussed below, along with similar proposals and their implementation.

           

          The proposals made by Jenny Randles make use of the following definitions (see Footnote 22.02):

           

          (a) Low definition - Simple phenomena with no definite shape and no interactive effects.

           

          (b) Medium definition - Phenomena as above, with clear shapes accorded them.

           

          (c) Instrumentally detected – Recorded evidence of a visually observed phenomenon (sub-divided into photographic, film and radar cases).

           

          (d) CE1 – Any phenomenon causing transient effects on the witness, the environment, or both (e.g. time loss, animal disturbance, radio interference etc).

           

          (e) CE2 – Any phenomenon causing effects, as in CE1, which are semi-permanent and observable by others, who did _not_ experience the phenomenon alleged to have caused them.

           

          (f) CE3 – Phenomena which have animate entities of some kind in association with them.

           

          (g) CE4 - Events which cause a witness to suffer temporary or permanent reality distortion (e.g. a psychic interaction) or which cause imbalance or change in a witness of long duration following the initial events (e.g. post-abduction symptoms).

           

          Jenny Randles commented that “one can, I think, use this as a guide to priority in an ascending scale (with the possible exception that CE1 and INST [instrumentally detected] cases are often of roughly equal priority)” (see Footnote 22.02).

           

           

          The “chart to determine case priority” mentioned above assigns a certain number of points to each of three factors:

          Factor A: Case type

          Factor B: Witness Groups

          Factor C: Witness Type

           

          In short, the higher the total number of points, the more likely a report is to merit a higher priority investigation.

           

          The number of points to be assigned to each of these factors is as follows:

           

          Factor A. Case Type

           

          1 point:  Low definition

          2 points: Medium definition

          3 points: not used

          4 points: CE1 / Instrumentally detected

          5 points: CE2

          6 points: CE3

          7 points: CE4

           

           

           

          Factor B. Witness Groups

           

          1 point: Single witness

          2 points: Multiple witnesses

          3 points: Independent witnesses

           

           

           

          Factor C. Witness Type

           

          1 point:  Experience in RAF etc

          2 points: Serving in army, air force etc. Pilot or policeman etc.
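          The three-factor chart above amounts to a lookup-and-sum calculation.  The sketch below transcribes the point tables; the shortened factor labels and the function name are mine, and the straightforward summation of Factors A, B and C is my reading of how the chart is intended to be used:

```python
# Point tables transcribed from Jenny Randles' "chart to determine
# case priority" (Factors A, B and C as set out above).

CASE_TYPE = {                 # Factor A: Case Type
    "low definition": 1,
    "medium definition": 2,
    "CE1": 4, "instrumentally detected": 4,
    "CE2": 5,
    "CE3": 6,
    "CE4": 7,
}

WITNESS_GROUP = {             # Factor B: Witness Groups
    "single witness": 1,
    "multiple witnesses": 2,
    "independent witnesses": 3,
}

WITNESS_TYPE = {              # Factor C: Witness Type
    "experience in RAF etc": 1,
    "serving military / pilot / policeman": 2,
}

def case_priority(case_type, witness_group, witness_type=None):
    """Total the three factors; a higher total suggests the report
    merits a higher-priority investigation."""
    score = CASE_TYPE[case_type] + WITNESS_GROUP[witness_group]
    if witness_type is not None:   # Factor C only applies to some witnesses
        score += WITNESS_TYPE[witness_type]
    return score

# e.g. a CE2 case with independent witnesses, one a serving policeman:
print(case_priority("CE2", "independent witnesses",
                    "serving military / pilot / policeman"))  # 10
```

On this reading, totals range from 2 (a low-definition, single-witness report) up to 12, giving an investigator a quick ranking of incoming reports.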

           

           

          The purpose and content of the “chart to determine case priority” devised by Jenny Randles are similar to the purpose and content of a system set out in BUFORA’s “UFO Investigation” manual (1976, edited by Roger Stanway, with Jenny Randles as Assistant Editor) – see Footnote 22.03.

           

          BUFORA’s manual set out BUFORA’s “Investigation Classification System”, the purpose of which was to assist in determining “the degree of urgency which is likely to be involved” when a UFO report is first made.   That system was stated to be “based on an original system devised by Mr Charles Lockwood, BUFORA’s Research Projects Officer” (although no reference is given in BUFORA’s manual for any relevant publication by Lockwood).

           

          BUFORA’s 1976 manual states that the system of classification to be used after evaluation of a case is likely to be different, that details will be published “as soon as this final system has been agreed”, and that international agreement on such a system is desirable.  I am not aware of BUFORA adopting or promoting any such system, although it may be that the proposals of Jenny Randles for that system are reflected in her proposals based on Hynek’s Strangeness/Credibility ratings (discussed in PART 20:  Quantitative criteria : Hynek – Strangeness and Probability).

           

          The BUFORA system also assigned points to three different factors:

           

          Factor 1 : Number of qualified or trained observers;

          Factor 2 : Class of observation;

          Factor 3 : Total number of witnesses.

           

           

          The number of points to be assigned to each of these factors is as follows:

           

           

          Factor 1 : Number of qualified or trained observers [“Category”]

           

          2 points – 1 or more official observers: pilot, professional astronomer, who was using his expertise when making the observation.

           

          1 point – 1 or more experienced observers, not necessarily professional but of good standing: police, trained UFO students.

           

          0 points – No experienced observers. Most reporters of UFO’s are in this category.

           

           

           

          Factor 2 : Class of observation [“Class”]

           

          6 points – Permanent record made – such as physical or physiological traces left, photograph taken, measurements made with instruments and recorded.

           

          5 points – Temporary physical effects reported.  Occupants or entities.  Vehicle interference. EM effects.  Time inconsistency.

           

          3 points – Object seen nearby with features not likely to be observed in a known manmade or natural phenomenon.  No effects noted locally.

           

          1 point – Distant object or point of light.  Shape not clearly distinguishable.

           

           

           

          Factor 3 : Total number of witnesses [“Group”]

          2 points – 2 or more independent witnesses at different locations.

          1 point – 2 or more witnesses at one location.

          0 points – 1 witness only.

           

          The BUFORA manual suggests that Lakenheath was an “A1a” case (10 points), whereas the Villas Boas incident was a “C1c” case (6 points).
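          The two worked examples above suggest how the letter/digit codes map onto the point tables (Category A/B/C = 2/1/0 points, Class 1/2/3/4 = 6/5/3/1 points, Group a/b/c = 2/1/0 points).  The sketch below reconstructs that mapping; it is my inference from the Lakenheath and Villas Boas examples rather than an explicit statement in BUFORA's manual:

```python
# Reconstructed scoring for BUFORA's "Investigation Classification
# System": a code such as "A1a" combines one symbol per factor.

CATEGORY = {"A": 2, "B": 1, "C": 0}          # Factor 1: qualified/trained observers
CLASS    = {"1": 6, "2": 5, "3": 3, "4": 1}  # Factor 2: class of observation
GROUP    = {"a": 2, "b": 1, "c": 0}          # Factor 3: total number of witnesses

def bufora_points(code):
    """Convert a three-character classification code (e.g. 'A1a')
    into its total points, summing the three factors."""
    category, cls, group = code[0], code[1], code[2]
    return CATEGORY[category] + CLASS[cls] + GROUP[group]

print(bufora_points("A1a"))  # 10 - matches the Lakenheath example
print(bufora_points("C1c"))  # 6  - matches the Villas Boas example
```

Both worked examples from the manual come out correctly under this reconstruction, which gives some confidence that the mapping is the intended one.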

           

           

          A system similar to that in BUFORA’s manual is set out in the material relating to BUFORA’s Postal Training Course for UFO investigators (see Footnote 22.04). The similarities are not surprising given that the Postal Training Course materials were, as I understand it, largely written by Jenny Randles. The potential scores set out in that material are as follows:

           

          Case happened within past week : 2 points

          Case older than a week but exact date known : 1 point

          Three or more witnesses : 3 points

          Two witnesses only : 2 points

          At least one witness is independent (i.e. not known to others and in a different location) : 1 point

          An entity is seen : 2 points

          Interaction between entity and witness : 1 point

          Time loss reported : 1 point

          Photograph/video taken : 2 points

          Ground traces or electrical interference : 2 points

           

          Total : 17 points
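          The checklist above amounts to summing the points for whichever criteria apply, up to the stated maximum of 17.  A minimal sketch follows; the shortened criterion labels and the function name are mine:

```python
# Point values transcribed from the BUFORA Postal Training Course
# priority checklist above (labels shortened for brevity).

CRITERIA = {
    "within past week": 2,
    "older but exact date known": 1,
    "three or more witnesses": 3,
    "two witnesses only": 2,
    "independent witness": 1,
    "entity seen": 2,
    "entity-witness interaction": 1,
    "time loss": 1,
    "photograph/video": 2,
    "ground traces or electrical interference": 2,
}

def priority_score(applicable):
    """Sum the points for the applicable criteria (an iterable of labels)."""
    return sum(CRITERIA[label] for label in applicable)

# The stated maximum of 17 is simply every criterion applying at once:
print(sum(CRITERIA.values()))  # 17

# e.g. a week-old, two-witness case with a photograph:
print(priority_score(["within past week", "two witnesses only",
                      "photograph/video"]))  # 6
```

(In practice some criteria are mutually exclusive - a case cannot have both "three or more witnesses" and "two witnesses only" - so real totals would fall short of 17.)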

           

           

          BUFORA is not the only organisation to offer a postal training course based on material written by Jenny Randles.  MAPIT also offers such a course. MAPIT’s course material also includes a case priority system for UFO reports (see Footnote 22.05), with the following point system:

           

          2 points: Case happened within the past week

          1 point: Case older than a week but exact date known

          3 points: Three or more witnesses

          2 points: Two witnesses only

          1 point: At least one witness is independent (i.e. not known to others, different location)

          2 points: A UFO entity has been seen

          1 point: Physical interaction between witness and phenomena

          1 point: Time loss reported

          1 point: Time Displacement reported

          2 points: Recorded : Photograph or video

          2 points: Electrical / mechanical interference

          3 points: Physical marks on the body caused by the phenomena

          2 points: MIB Encounter

          2 points: Animal or Human Mutilation reported

          1 point: Military presence

          1 point: Black helicopter reported

          1 point: Crop circle / Burns / Flattened Area reported

          0 points: One witness only

          1 point: Angel Hair substance

          1 point: Paralysis reported

          2 points: Abduction / Interaction

           

          Since MAPIT’s remit is not limited to investigation of UFOs, its course material also includes a case priority system for cases of a paranormal nature (see Footnote 22.06). That system involves assigning points as follows:

           

          2 points: Case happened within past week

          1 point: Case older than a week but exact date known

          3 points: Three or more witnesses

          2 points: Two witnesses only

          1 point: At least one witness is independent (not known to others, different location)

          2 points: An apparition is seen

          1 point: Physical interaction between witness and phenomena

          1 point: Time slip / Lapse or Displacement

          3 points: Recorded : Photograph or video

          2 points: Electrical / mechanical interference

          1 point: Objects being moved or misplaced

          1 point: Formings / disembodied voices or strange sounds

          2 points: Strange balls of light or Plasma Effects

          2 points: Appearance of Gentry ie small dwarf type beings, fairy type folk etc

          3 points: Physical Marks on the body caused by phenomena

          1 point: Bad smells or odours / cold spots or areas

          0 points: One witness only

          1 point: Suffered from paralysis

          1 point: Levitation or Excess Body Electricity

           

           

          Actual applications of the proposals by BUFORA / Jenny Randles

          I do not know to what extent Jenny Randles herself applied her proposals for assigning points to reports, when received, as a guide to the relative priority to be assigned to cases. I presume that her proposals were intended as a guide for newer investigators rather than to be strictly applied by someone with her level of experience.

          I have asked one of the former Chairmen of BUFORA whether the BUFORA case files and/or any database held by BUFORA include BUFORA’s “Investigation Classification System” ratings and/or any other rating system (e.g. Vallee's three digit SVP scores and/or Hynek's Strangeness-Probability ratings). I also asked him whether, if not, he knew if any such rating system was tried and/or rejected for any particular reason(s). He replied: "A very good question. Yes for all of the cases I submitted" [although I am not sure whether this answer was intended to refer to BUFORA’s “Investigation Classification System” ratings or one of the other rating systems mentioned in my question].  He continued: "The manual I have includes a copy of the BUFORA Case Report Database (by Mike Wootten 1992) and just by looking through one of the boxes of archived cases I found that a number of completed sighting questionnaires did not contain this information, others did. It might be possible that when a case report was submitted this information may have contained such a form and that this was taken by someone like Phillip Mantle or Heather Dixon and then the data was entered onto a database (I've never seen this) and then returned the form to the case file for storage".  I have emailed the current Chairman of BUFORA (Matt Lyons) and Heather Dixon to see if they can assist further.

          Robert Moore, the co-author of the updated online edition of "UFO Study" (see Footnote 22.07), has suggested that the case priority system developed by Jenny Randles and BUFORA:

          (1)  "was mostly for the benefit of beginning investigators" and "to get people thinking along the right lines" (see Footnote 22.08). He has commented "one of Jenny's main concern were that investigators were spending too much time on simple Lights in the sky (LITS) cases" and that there was a hope that focusing on "high strangeness" cases could provide more high quality information on "True UFOs".

          (2)  is "now effectively obsolete!".  In his opinion, those systems were "compiled at a time when investigators had lots of sightings to deal with and choices had to be made as to which reports to investigate. In ufology today we have clumps of very low quality reports (mostly Khoom Fay [i.e. Chinese lanterns]) and the very occassional significant event. While the quantity of UFO reports has increased since the early 2000's quality has not". 

          In relation to the latter point, I disagree with Robert Moore.  In my opinion, even if the choice of which current reports are worth investigating is now straightforward (i.e. investigating those not very likely to be Chinese lanterns), there is still a need to decide which historical cases are worth further investigation.

           

          FOOTNOTES

          [22.01] Jenny Randles, “UFO Study” (1981) at page 75 of the Hale hardback edition.

           

          [22.02] Jenny Randles, “UFO Study” (1981) at page 73 of the Hale hardback edition.

           

          [22.03] BUFORA’s “UFO Investigation” manual (1976, edited by Roger Stanway, with Jenny Randles as Assistant Editor), Appendix 9.

           

          [22.04] BUFORA’s Postal Training Course material, Lesson 1, as at approximately 2000.

           

          [22.05] MAPIT’s BITC course material, Module 4, as at approximately 2000.

           

          [22.06] MAPIT’s BITC course material, Module 1, as at approximately 2000.

           

          [22.07] Revised edition of “UFO Study” (1981), updated by Jenny Randles and Robert Moore. Available online at:

          http://www.scribd.com/doc/33162159/Ufo-Study-p1v2-162

          [22.08] Email from Robert Moore to Isaac Koi, 24 June 2010.

          NOT YET OBTAINED : “Guidelines on the Content and Organisation of Reports”, Hind, J & Keatman, M. UFOIN Guidebook, 1979. [Referred to by Jenny Randles in her book "UFO Study" at page 125, footnote 2]