Category:

“Best UFO Cases” by Isaac Koi

 

PART 23:  Quantitative criteria : Ballester/MUFON index

 

Introduction

 

The most detailed attempt made thus far to put forward a method for the quantitative assessment of UFO reports is probably the Ballester-Guasp quantification method. 

 

In 1981, Spanish ufologist Vicente-Juan Ballester-Olmos and physicist Miguel Guasp published a book entitled “Los OVNIS y la Ciencia” (UFOs and Science). That book included a review of the various systems that had been proposed for evaluating UFO reports. It also included their own proposal for such a system (see Footnote 23.01). An expanded treatment of that proposal was published in English in 1988 (see Footnote 23.02).

 

An article by Jerold Johnson containing a discussion of a revised version of that system was subsequently published in the 1995 edition of the MUFON training manual (see Footnote 23.03).  That article has appeared on various websites (see, for example, Footnote 23.06) and has also been translated into French (see Footnote 23.04).

 

 

 

MUFON training manual

 

Jerold Johnson’s article in the MUFON training manual (see Footnote 23.03) set out details of “the evaluation procedure applied to reports at headquarters level prior to their being entered into the computer file” and stated that since 1992 “reports processed into the MUFON files have been given a numeric evaluation” based on this system. 

 

That article sets out the relevant factors at some length and mentions in the concluding section that Field Investigators should run through their own reports to “check if all the questions required for the evaluation have been answered somewhere in the report”.  It states that “the MUFON report forms were designed before this formula was recognized as worthwhile and there are not specific blanks on the forms for all the required data, especially in the ‘how investigated, for how long’ category, so some Field Investigator ‘write-ins’ are necessary”.

 

The article proposes the calculation of an overall “score” for the report derived by multiplying the following three values (each of which ranges from zero to one) together, “representing the degree of certainty that the report indeed represents an anomalous event that happened as recorded”.

 

Factor 1 : “The volume and quality of the data recorded, based on the methods employed and the time spent investigating the case”.

Factor 2 : “The inherent abnormality or ‘strangeness’ of the event, making it unlikely to have a natural or conventional explanation”.

Factor 3 : “The credibility of the report, based on the reliability, maturity, and circumstances of the witnesses interviewed”.

 

The values to be assigned for each of these factors were set out in the article, as summarised below.

 

Factor 1 : “Information Quality Index”

This factor “indicates the ‘strength’ that a report has for analysis based on how it was acquired”.

 

This factor is similar to the [S]ource and [V]isit factors in Jacques Vallee’s SVP criteria (see PART 21: Quantitative criteria : Vallee’s SVP ratings) and the Investigation Levels proposed by Jenny Randles (see the relevant discussion towards the end of PART 20: Quantitative criteria : Hynek – Strangeness and Probability).

 

 

Source                                              Index

Direct Investigation
  At the site                     >= 2 hours          1.0
                                  < 2 hours           0.9
  Interview, person to person     >= 1 hour           0.9
                                  < 1 hour            0.8
  By telephone                    >= 1/2 hour         0.7
                                  < 1/2 hour          0.6

Indirect Investigation
  Questionnaire with follow-up    Extensive           0.7
                                  Brief               0.6
  Letter with follow-up           Extensive           0.6
                                  Brief               0.5

Other Investigation
  Questionnaire, no follow-up                         0.6
  Letter/narrative, no follow-up  >= 1 page           0.4
                                  < 1 page            0.3
  Newspaper                       >= 500 words        0.2
                                  < 500 words         0.1
  Radio/TV                                            0.1
  Witness relative                                    0.1
  Verbal/rumor/unknown                                0.0
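The table above amounts to a simple lookup from source type to index value. A minimal sketch in Python (the dictionary keys and helper name are my own shorthand, not MUFON's terminology):

```python
# Hypothetical lookup table for the Information Quality Index,
# transcribing the values from the MUFON manual summarised above.
INFORMATION_QUALITY = {
    ("site visit", ">= 2 hours"): 1.0,
    ("site visit", "< 2 hours"): 0.9,
    ("interview in person", ">= 1 hour"): 0.9,
    ("interview in person", "< 1 hour"): 0.8,
    ("interview by telephone", ">= 1/2 hour"): 0.7,
    ("interview by telephone", "< 1/2 hour"): 0.6,
    ("questionnaire with follow-up", "extensive"): 0.7,
    ("questionnaire with follow-up", "brief"): 0.6,
    ("letter with follow-up", "extensive"): 0.6,
    ("letter with follow-up", "brief"): 0.5,
    ("questionnaire, no follow-up", None): 0.6,
    ("letter/narrative, no follow-up", ">= 1 page"): 0.4,
    ("letter/narrative, no follow-up", "< 1 page"): 0.3,
    ("newspaper", ">= 500 words"): 0.2,
    ("newspaper", "< 500 words"): 0.1,
    ("radio/tv", None): 0.1,
    ("witness relative", None): 0.1,
    ("verbal/rumor/unknown", None): 0.0,
}

def information_quality(source, detail=None):
    """Return the Information Quality Index for a report source."""
    return INFORMATION_QUALITY[(source, detail)]
```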

 

 

While such values can probably be assigned fairly easily by the person who conducted the relevant investigation, a person faced with merely an account in a UFO book would often have difficulty in assigning any value other than 0.0 (i.e. the value where the source is “unknown”).  Few accounts of UFO reports in books and magazines indicate the nature and depth of an investigation.  Perhaps it is only right that the information quality of such accounts in UFO books is assigned a very low or zero value. 

 

 

 

Factor 2 : “Strangeness Index”

 

This factor indicates “the ‘abnormality’ level of a report compared to normal processes, familiar phenomena and known manufactured objects”.

 

It is similar to the Strangeness Rating suggested by J Allen Hynek (see PART 20: Quantitative criteria : Hynek – Strangeness and Probability).

 

The article suggests that “one simply counts up” the number of the following seven features that are “commonly found in sighting reports” and divides by seven:

Strangeness Feature 1 : “Anomalous appearance” (i.e. “shape or dimensions do not correlate with any identifiable flying craft”)

Strangeness Feature 2 : “Anomalous movements”  (i.e. the “dynamic characteristics of the observed phenomenon” make it “impossible to receive a logical explanation”)

Strangeness Feature 3 : “Physical-spatial incongruities”  (e.g. “disappearances”, “the merging of two objects into one”)

Strangeness Feature 4 : “Technological detection” (“observing and/or recording of the passage of the UFO through calibrated precision instruments”, including radar, telescopes, film or videotape)

Strangeness Feature 5 : “Close encounter” (“within 500 feet”)

Strangeness Feature 6 : “Presence of beings associated with the UFO” (“the association of presumed occupants”)

Strangeness Feature 7 : “Finding of traces or production of effects” (“lasting physical or chemical characteristics or residues left by a UFO after its disappearance, provided that there exists some testimony that the traces or effects were produced by the presence of the UFO”).
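The count-and-divide calculation described above can be sketched in a few lines (the feature names are paraphrased from the list above, and the function name is my own):

```python
# The seven strangeness features, paraphrased from the MUFON manual.
STRANGENESS_FEATURES = [
    "anomalous appearance",
    "anomalous movements",
    "physical-spatial incongruities",
    "technological detection",
    "close encounter (within 500 feet)",
    "beings associated with the UFO",
    "traces or effects",
]

def strangeness_index(features_present):
    """Count the recognised features present in a report and
    divide by seven, as the manual directs."""
    count = sum(1 for f in features_present if f in STRANGENESS_FEATURES)
    return count / 7.0
```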

 

 

 

Factor 3 : “Reliability Index”

 

This factor indicates “the witness ‘credibility’ ”.

 

(I note in passing that some of the factors that relate to the credibility of a UFO report – e.g. a video recording or physical evidence – are covered within this system by the factor on “strangeness”.)

 

The “Reliability Index” is considerably more complicated than the above factors. There are six categories within this parameter and each is assigned a “weight factor”. MUFON’s manual states that a researcher “selects the appropriate number from each category, multiplies it by its ‘weight factor’ and ultimately adds the six results together”.  

 

A similar method of giving different weights to different factors has been described by Claude Poher (see the relevant discussion near the end of PART 20: Quantitative criteria : Hynek – Strangeness and Probability).

 

Credibility Feature 1 : Number of witnesses (“a sighting is more believable if it has more witnesses”)

0.0 - none or unknown
0.3 - one
0.5 - two
0.7 - three to five; "several"
0.9 - six to ten
1.0 - more than ten

(multiply by weight factor 0.25)

 

 

Credibility Feature 2 : Profession or occupation of the witnesses (“indicates their level of job responsibility, from which can be inferred a measure of their dependability or social status”).

0.0 - not specified
0.3 - students (pre-college)
0.5 - laborers, farmers and housewives
0.6 - university students
0.7 - traders, businessmen, employees and artists
0.9 - technicians, police and pilots
1.0 - university graduates and military personnel

(multiply by weight factor 0.2)

 

 

Credibility Feature 3 : Relationship between witnesses (“provides indication of the theoretical tendency to generate a hoax together, based on the different types of ties between them”).

0.0 - unknown
0.3 - friends
0.6 - family relationship; also applies to cases with a single witness
0.8 - professional relationship
1.0 - no relationship

(multiply by weight factor 0.15)

 

 

Credibility Feature 4 : Geographic relation between witnesses (“when there are multiple observers, their relative location affects the certainty of the event”).

0.0 - unknown
0.5 - together; also applies when there is a single witness
1.0 - independent (separate)

(multiply by weight factor 0.15)

 

 

Credibility Feature 5 : Activity at the time of the sighting (“measures the opportunity for a hoax”)

0.0 - not specified
0.3 - recreational activity (walk, rest, outing, hunting, sport, at home, on vacation, etc.)
0.6 - traveling (moving, by any means)
0.8 - cultural or intellectual activity
1.0 - working (at work or on the way to or from)

(multiply by weight factor 0.15)

 

 

Credibility Feature 6 : Age of the witness (“indicates their degree of maturity and the validity of their testimony, based on their capability”).

0.0 - unknown
0.2 - under 10 years or over 75 years
0.4 - between 10 and 17 years
0.6 - between 18 and 34 years
0.8 - between 65 and 74 years
1.0 - between 35 and 64 years

(multiply by weight factor 0.1)
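The weighted sum described above can be sketched as follows (the feature keys are my own labels; the weight factors are those given in the manual, and they sum to 1.0):

```python
# Weight factors for the six credibility features, per the MUFON manual.
WEIGHTS = {
    "number_of_witnesses": 0.25,
    "occupation": 0.20,
    "relationship": 0.15,
    "geographic_relation": 0.15,
    "activity": 0.15,
    "age": 0.10,
}

def reliability_index(scores):
    """scores: dict mapping each feature to its 0.0-1.0 value, chosen
    from the tables above. Returns the weighted sum."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
```

For example, a single 40-year-old witness of unspecified occupation, out walking at the time, would score 0.3, 0.0, 0.6, 0.5, 0.3 and 1.0 on the six features, giving a Reliability Index of 0.385.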

 

 

It is notable that no references whatsoever are given in the MUFON manual in support of any assertion that the various features actually matter to the credibility of a witness or to support the values assigned. As one French ufologist (Claude Mauge) has commented, “Many values used by various authors are based more on good sense than on ‘scientific’ data” (see Footnote 23.09). 

 

While some of the values suggested above may be justified on the basis of (to use Claude Mauge’s phrase) “good sense”, I question whether (in particular) the values for “activity at the time of the sighting” as a measure of “the opportunity for a hoax” are really justified. For example, is it really justifiable that a report from someone engaged in an activity falling within the category “cultural or intellectual activity” should be given a score for this feature nearly three times that of someone who said he was walking at the time of the sighting?

 

Vicente-Juan Ballester-Olmos himself has commented on the paragraphs above. He stated that my comments are "well taken" and that "we move in a subjective area here but our index is based on criteria supported by years of field work and case analysis experience, not merely an arm-chair elaboration" (see Footnote 23.12). He also mentioned that the values followed a consideration of "previous work in this particular area of research" and referred to the bibliography for the original paper.  He also commented specifically on the activity at the time of sighting, stating "it is tested that the proportion of hoax cases is larger during a lazy, leisure activity than during a professional one, for instance, this is why we computed it that way".

 

 

 

The overall score (“Certainty Index”)

 

The “Certainty Index” (i.e. the overall score for a UFO report) is obtained by multiplying together the three factors above. 

 

According to the MUFON manual, this provides a “measure of the overall degree of ‘certainty’ of an anomalous event behind the report” and is “often expressed as a percentage”. 

 

The MUFON manual also suggests that “the Certainty Index might be used as a quick way to order the reports in a catalog from ‘least promising’ to ‘most promising’, while the other three parameters will indicate why each report received the value it has”.

 

As can be seen from the above summary, calculating the relevant total score involves several multiplications which would be difficult to perform mentally.  A calculator would generally be needed.
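For illustration, the whole calculation reduces to a few lines (a sketch; the function name is mine, and the example index values are hypothetical):

```python
def certainty_index(information_quality, strangeness, reliability):
    """Overall Certainty Index: the product of the three indices,
    often expressed as a percentage."""
    return information_quality * strangeness * reliability

# Hypothetical report: information quality 0.9, three of the seven
# strangeness features present, and a Reliability Index of 0.385.
score = certainty_index(0.9, 3 / 7, 0.385)
print(f"Certainty Index: {score:.1%}")
```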

 

In 2003, the MUFON Journal published an item by Terry Groff entitled “Online Javascript Certainty Index Calculator useful as an investigating tool” (see Footnote 23.07).  That item described a free online tool helpfully developed by Terry Groff (which is still available online - see Footnote 23.08), which allows the user to merely click on relevant attributes and then ask the tool to calculate the relevant scores. 

 

Terry Groff’s online tool simplifies the relevant calculations considerably. The relevant webpage also obviates the need to remember (or keep at hand) all the relevant guidelines and values.

 

 

 

Actual applications of the Ballester-Guasp method

As mentioned in the introductory sections above, the method above was adopted and promoted by MUFON in the 1995 edition of MUFON’s training manual for its field investigators. Since the MUFON report forms at the time of the publication of that manual did not contain “specific blanks” for all the required data, the manual stated that “write-ins” were necessary. The relevant article in the MUFON training manual states that since 1992 “reports processed into the MUFON files have been given a numeric evaluation”.

 

It is not clear whether the MUFON report forms were ever redesigned to contain the relevant “specific blanks”, nor how common it was (or is) for MUFON field investigators to “write-in” relevant information. 

 

However, one of the volunteers that has contributed to MUFON's database systems has helpfully informed me that "a great number of cases that have been investigated do have this information available" and that it is part of the "final report text field" within MUFON's records.  That volunteer commented that: "Unfortunately though it's difficult to do mass comparison across the entire DB to get a sense of the quality of all cases (or the cases that are missing this information) because the data hasn't been abstracted to its own numerical column" (see Footnote 23.10). 

 

The current incarnation of the online tool mentioned above that was developed by Terry Groff now appears to be integrated with MUFON's Case Management System ("CMS") database, since it includes fields for inputting the relevant Case Management System case number (see Footnote 23.11). I am currently seeking confirmation of this. 

 

Unfortunately, the only database of reports available on the MUFON website to members of the public appears merely to contain details of sightings submitted, without any evaluation (whether numerical or otherwise).  I am currently seeking clarification of the status and availability of the "final report text field" referred to above. 

 

Jacques Vallee has referred to the Ballester-Guasp proposals in the context of a criticism of the fact that other UFO researchers “have rarely bothered” to apply some way “of assigning credibility or ‘weight’ to an observation”.  Jacques Vallee said that there was a notable exception to his criticisms, i.e. the “quality index” proposed by Spanish researchers Vicente-Juan Ballester-Olmos and Guasp, “but it is so detailed that I have found it difficult to apply in practice” (see Footnote 23.05).  Vallee has suggested that it is “important to implement a system that is simple enough to be applied quickly and with enough mnemonic value that it does not require constant reference to a user’s manual or a set of tables”.

 

Jacques Vallee’s actual experience of the proposed system appears to be inconsistent with the assertion made in MUFON’s manual that “The system gives reproducible numbers when evaluated by different individuals, at different times, as long as they are following the standards as published. The method is relatively ‘quick and easy’ given a calculator and a few tables and definitions extracted from the publications and kept handy as notes”.  It is notable that Vallee's remarks were made prior to the development by Terry Groff of his online calculation tool and the apparent subsequent integration of that tool with MUFON's Case Management System.

 

One of the points made in the MUFON training manual in relation to this system is valid in relation to any of the quantitative criteria that have been proposed by various researchers : “Avoid the urge to argue with the selection, ordering, or number value assigned to the various factors. The standards must be stable and rigorously adhered to for there to be any usefulness in comparisons of reports using the numbers at different times, by different people, possibly in different countries, but all using the same standardized evaluation procedure”. 

 

Vicente-Juan Ballester-Olmos has mentioned that the Ballester-Guasp proposal was used by Willy Smith "in working with his UNICAT" (see Footnote 23.12).  Since Jan Aldrich (a ufologist who has paper copies of the UNICAT records) has stated that UNICAT includes "strangeness values assigned by Hynek", it is currently unclear to me whether UNICAT records include strangeness values under the system proposed by Ballester-Olmos and Guasp in addition to Hynek's strangeness values, or only values under one of these approaches.  Given the range of UFO classification schemes, I am aware of at least one other significant database (UFOCAT) that has adopted more than one classification scheme in order to provide the maximum amount of information.

 

Vicente-Juan Ballester-Olmos has also mentioned that German engineer Adolf Schneider "did computer work" about the Ballester-Guasp proposal "but soon left ufology and it was not complete" (see Footnote 23.12).  The only mention of Adolf Schneider's name in relation to a UFO database that I have found is a brief reference to him as a point of contact for a catalogue of 1,080 UFO cases with electro-magnetic and gravity effects.  MUFON-CES was also mentioned in this context.  I do not know if that database was in fact developed by Adolf Schneider and/or if it assigned the Ballester-Guasp values to the relevant cases.

 

Vicente-Juan Ballester-Olmos himself has applied the system in a book he co-authored with Juan A. Fernández entitled "Enciclopedia de los encuentros cercanos con OVNIS" (Plaza & Janés, 1987), where he "analyzed 230 UFO and 355 IFO landing reports in Spain and Portugal".

 

 

FOOTNOTES

 

[23.01] Vicente-Juan Ballester-Olmos and Miguel Guasp “Los OVNIS y la Ciencia” (UFOs and Science), Plaza & Janes, S.A., Barcelona, 1981, 1989 at pages 117-135.  [Obtained but not translated]

 

[23.02] Vicente-Juan Ballester-Olmos with Miguel Guasp, “Standards in the Evaluation of UFO Reports”, in The Spectrum of UFO Research, Mimi Hynek (editor), J. Allen Hynek Center for UFO Studies (Chicago, Illinois), 1988, pages 175-182.  [NOT YET OBTAINED]

 

[23.03] Jerold Johnson, “Ballester-Guasp Evaluation of Completed Reports”, in MUFON Field Investigator´s Manual, Walter H. Andrus, Jr. (editor), Mutual UFO Network, Inc. (Seguin, Texas), February 1995, pages 214-221.

 

[23.04]  French translation of the article referred to in Footnote 23.03 available online at:

http://rr0.org/Documents/Pratique/Ballester-Guasp.html

 

[23.05]  Jacques Vallee, “Confrontations” (1990) at pages 218-219 (in the Appendix) of the Ballantine Books paperback edition.

 

[23.06] Article published, for example, on Terry Groff’s website in 2005 at:

http://web.archive.org/web/20050205000326/http://terrygroff.com/ufotools/eval/eval_calc.html

 

[23.07] Terry Groff, “Online Javascript Certainty Index Calculator useful as an investigating tool”, MUFON Journal, February 2003, page 12.

 

[23.08] Terry Groff’s “Online Javascript Certainty Index Calculator” has appeared on various websites in the last few years, including at the bottom of the following webpage:

http://web.archive.org/web/20050205000326/http://terrygroff.com/ufotools/eval/eval_calc.html#calc

 

[23.09] Email from Claude Mauge to Isaac Koi, 29th May 2007.

 

[23.10] Email from Dustin Darcy to Isaac Koi, 25th June 2010

 

[23.11] Current version, as at June 2010, of Terry Groff's online tool, is at:

http://mufoncms.com/cgi-bin/bge/bge.pl 

 

[23.12] Email from Vicente-Juan Ballester-Olmos to Isaac Koi, 27th June 2010


 

PART 24:        Quantitative criteria : Olsen’s Reliability Index

 

There have been various proposals for quantitative criteria to assess the reliability of UFO reports.  Most are considerably less well-known than, say, the schemes proposed by Hynek and Vallee  (see PART 20: Quantitative criteria : Hynek – Strangeness and Probability and PART 21: Quantitative criteria : Vallee’s SVP ratings respectively).  Some of those less known schemes have been implemented due to support from large UFO groups, e.g. BUFORA and MUFON (see PART 22: Quantitative criteria : BUFORA’s case priority and PART 23: Quantitative criteria : Ballester/MUFON index respectively).  Some, such as the scheme outlined below, have largely remained unapplied.

 

Thomas Olsen wrote a book entitled “The Reference for Outstanding UFO Sighting Reports” (1966).  That book was discussed in the Condon Report in 1969 (see Footnote 24.11), which stated the following:  “There apparently exists no single complete collection of UFO reports. ... Proposals have been made from time to time for a computer-indexing of these reports by various categories but this has not been carried out. Two publications are available which partially supply this lack: one is The UFO Evidence (Hall, 1964) and the other is a collection of reports called The Reference for Outstanding UFO Reports (Olsen)”.

 

While the first of the two books mentioned in that paragraph of the Condon Report, i.e. Richard Hall’s book “The UFO Evidence” (1964), remains well-known, and the complete text of that book is now available on several websites, Olsen’s book is now relatively obscure. 

 

In that book, Thomas Olsen discussed at length the calculation of a “reliability index” for UFO reports (see Footnote 24.01).

 

Olsen acknowledged that his “reliability index” was an approximation, but suggested that it was nonetheless useful since it gave “in a single number, some general, conservative indication of probable reliability” (see Footnote 24.08).   He also claimed that another use of the reliability index is “in arranging the reports according to their relative reliability” (see Footnote 24.09).

 

Olsen’s “reliability index” is a value between zero and one which represents “the probability that the sighting report accurately describes a real event - that it is not the result of a hoax or hallucination”.  This value is obtained by multiplying the probabilities of three factors, summarised by Olsen as follows (see Footnote 24.04):

 

1. Witness factor: probability that the witnesses, reporting in concert, accurately described a valid experience;

 

2. Investigation factor: probability that the investigating agency correctly documented a reported experience which has no explanation in terms of known man-made or natural phenomena.

 

3. Transcription factor: probability that intermediate sources for the report have related it as originally obtained, without omission, distortion or addition of spurious details.

 

The relevant chapter of Olsen’s book discusses his proposals for assigning values to each of these three factors, as summarised below.

 

 

 

1. Olsen’s “witness factor”

 

At the heart of Olsen’s proposed method of calculating this factor appear to be the following points (see Footnote 24.05):

 

a. Olsen assumes that the more experience a witness has of “aerial phenomena”, the less likely that individual is to provide “false, inaccurate testimony”.  Thus, the probability of an astronomer or commercial pilot (individuals with “extensive” experience with “aerial phenomena”) providing “false, inaccurate testimony” is assumed to be 12.5%, while the probability of a baker or plumber (individuals with “essentially” no experience with “aerial phenomena”) providing “false, inaccurate testimony” is assumed to be 50%.  Olsen's discussion of this factor should now be considered in the light of the matters covered in PART 16: Qualitative criteria: Credible witnesses.

 

b. Olsen himself notes that calculating a more accurate (or “literal”) value of the probability that the witness has provided false or inaccurate evidence would require “information about the character, personality and moral integrity” of the witnesses, but such information is “often not stated in a sighting report”. Such further information should be taken into account when available.

 

c. Multiple witnesses have a very dramatic effect on the value assigned by Olsen to the “witness factor”.  The formula put forward by Olsen for assigning a value to the “witness factor” assumes that in cases with multiple witnesses, the probability of testimony either being inaccurate or false decreases _exponentially_ with the number of witnesses.  Thus, a sighting report from one pilot has a 12.5% (1 in 8) probability of being false or inaccurate, but if the report is from two pilots then the probability decreases to 1.56% (1 in 64).  Similarly, a report from one plumber has a 50% (1 in 2) probability of being false or inaccurate, but if the report is from four plumbers then the probability decreases to 6.25% (1 in 16).  Olsen's discussion of this factor should now be considered in the light of the matters covered in PART 17: Qualitative criteria: Multiple witnesses.

 

d. The dramatic effect of multiple witnesses on the witness factor results in considerably more weight being given to a report from several individuals of unidentified profession than to a report from a single astronomer, meteorologist or pilot.
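On my reading of points (a) and (c) above, the witness factor can be sketched as follows (the formula is inferred from Olsen's worked examples rather than quoted from his book, and the function name is mine):

```python
def witness_factor(p_false_single, n_witnesses):
    """Probability that the testimony is accurate, on the assumption
    that the chance of false or inaccurate testimony decreases
    exponentially with the number of witnesses."""
    return 1.0 - p_false_single ** n_witnesses

# One pilot: 1-in-8 chance of false testimony; two pilots: 1 in 64.
one_pilot_false = 1.0 - witness_factor(0.125, 1)   # 0.125
two_pilots_false = 1.0 - witness_factor(0.125, 2)  # 0.015625
```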

 

 

 2. Olsen’s “investigation factor”

 

This factor represents the probability that the following two conditions are met (see Footnote 24.06):

a. The reported details were documented correctly and completely.

b. They have no explanation in terms of known man-made or natural phenomena.

 

According to Olsen, the values assigned to this probability “reflect the importance given to a check for correlations with other phenomena … a high value of the investigation factor is assurance that a report can be used with confidence, without having to double-check for a conventional explanation”.

 

Olsen suggests the following numerical values for the investigation factor:

 

(a) 99.9% - Quality of Investigation: High - (1) On-site investigation by Federal investigator, supported by local Weather Bureau data, evaluated by USAF consultant, and documented in detail.  (2) Lengthy interview by professional astronomer, filing a detailed report privately.

 

(b) 75%    - Quality of Investigation: Intermediate - (1) Detailed report filed by one crew member of squadron of fighter-bombers, after discussion with accompanying witnesses.  No check for correlation with other phenomena.  (2) Professional astronomer filing report of personal sighting.

 

(c) 50%    - Quality of Investigation: None or unknown - (1) Detailed newspaper report.  (2) Personal sighting report filed by mechanic; no check for correlation with other phenomena.

 

 

 

 

3. Olsen’s “transcription factor”

 

This factor reflects the reduction in reliability resulting from secondary reports, which may be abbreviations of the originals or reflect other alterations (to dramatize the account, or simply due to typographical or translation errors) – see Footnote 24.07.  

 

Olsen suggests reducing the value to be assigned to the reliability of an account _exponentially_ with the number of transcriptions involved (the “n-th handedness” of the report).  Thus, the transcription factor is equal to 100% if the report is a primary report, 50% (1 in 2) if a second-hand report, 25% (1 in 4) if a third-hand report.
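The transcription factor, and the overall product of Olsen's three factors, can be sketched as follows (function names are mine; the example values are drawn from the factor descriptions above):

```python
def transcription_factor(nth_hand):
    """1.0 for a first-hand report, halving for each further
    transcription (second-hand 0.5, third-hand 0.25, ...)."""
    return 0.5 ** (nth_hand - 1)

def olsen_reliability_index(witness, investigation, nth_hand):
    """Olsen's overall reliability index: the product of the
    witness, investigation and transcription factors."""
    return witness * investigation * transcription_factor(nth_hand)

# e.g. a first-hand report from a single astronomer (witness factor
# 0.875) filing a report of a personal sighting (investigation 0.75):
index = olsen_reliability_index(0.875, 0.75, 1)  # 0.65625
```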

 

 

 

 

Actual applications of Olsen’s “reliability index”

 

Olsen applies (or at least purports to apply) his system to generate “reliability indexes” for a relatively large sample of UFO reports.  He includes a table which lists each of the UFO reports included in his book according to its Reliability Index (R.I.), i.e. “160 outstanding UFO sighting reports”, with each report being “so clear-cut, detailed, unambiguous and unconventional that rational misinterpretation of natural or man-made phenomena is obviously impossible”. 

 

The top 12 entries in Olsen’s table (see Footnote 24.10) are as follows:

 

1)   R.I. = 0.99999,  Summer 1952 - Haneda Airport

2)   R.I. = 0.99999,  14 September 1954 - Vendee, France

3)   R.I. = 0.99999,  22 October 1954 - Marysville, Ohio

4)   R.I. = 0.99999,  22 May 1962 - Paraiso Del Tuy, Venezuela

5)   R.I. = 0.99999,  12 October 1961 - Indianapolis, Indiana

6)   R.I. = 0.99975,  16 January 1951 - Artesia, New Mexico

7)   R.I. = 0.99975,  3 April 1964 - Monticello, Wisconsin

8)   R.I. = 0.99804,  3 August 1951 - Silver Lake, Michigan

9)   R.I. = 0.99804,  27 July 1952 - New Jersey, opposite NYC

10) R.I. = 0.99804,  25 July 1957 - Niagara Falls, Municipal Airport

11) R.I. = 0.99609, 11 April 1964 - Homer, New York

12) R.I. = 0.99218,  14 September 1952 - Hill near Sutton, West Virginia

 

Olsen’s “reliability index” does not appear to have been discussed on the Internet prior to this article. No one appears to have been inclined to provide a summary online of the factors which it took into account.

 

However, this “reliability index”:

a. is briefly discussed and applied to one case in an appendix to Illobrand Von Ludwigger’s book “Best UFO Cases : Europe” (1998) – see Footnote 24.02. That discussion adds little, if anything, to the original presentation of the proposed scheme by Olsen in his own book.  However, Von Ludwigger’s book does make several references to Von Ludwigger’s participation in MUFON-CES (see Footnote 24.12), i.e. the Central European Section of MUFON.  It is therefore possible that MUFON-CES has applied Olsen’s “Reliability Index” scheme to some extent.  I note, however, that MUFON’s centralised database appears to have implemented the scheme outlined in PART 23 rather than Olsen’s criteria.  Von Ludwigger’s book was published by NIDS (i.e. the National Institute for Discovery Science), but I am not aware of NIDS ever applying the “Reliability Index” scheme.

 

b. is discussed at slightly greater length in the book “Ufology” by James McCampbell (see Footnote 24.03).   That discussion includes a comment that Reliability Theory had “been successfully applied to UFO reports”, stating that “as with any complex system, the problem was first broken down into its finest elements”. It noted that “such factors as the number of witnesses, their training in aerial observation, and the circumstances of the sighting were isolated” and stated that “details of the original documentation were accounted for with emphasis upon interviews of the witnesses and the professional qualifications of the interviewers”. While McCampbell’s book notes that 160 sightings “were selected and analysed”, he was referring to Olsen’s own application of the “Reliability Index” rather than any later application.

 

 

 

 

 

 

FOOTNOTES

[24.01] Thomas Olsen, “The Reference for Outstanding UFO Sighting Reports” (1966) at pages 4_1 to 4_13 (in Chapter 4) of the UFOIRC spiral-bound edition.

 

[24.02] Illobrand Von Ludwigger, “Best UFO Cases : Europe”, 1998, NIDS, Appendix A: “Reliability Index according to Olsen”, page 159.

 

[24.03] James McCampbell, “Ufology” at pages 2-4 of the version online at:

http://www.nicap.org/ufology/ufology.htm

 

[24.04] Thomas Olsen, “The Reference for Outstanding UFO Sighting Reports” (1966) at page 4_1 (in Chapter 4) of the UFOIRC spiral-bound edition.

 

[24.05] Thomas Olsen, “The Reference for Outstanding UFO Sighting Reports” (1966) at pages 4_1 to 4_2 (in Chapter 4) of the UFOIRC spiral-bound edition.

 

[24.06] Thomas Olsen, “The Reference for Outstanding UFO Sighting Reports” (1966) at pages 4_2 to 4_5 (in Chapter 4) of the UFOIRC spiral-bound edition.

 

[24.07] Thomas Olsen, “The Reference for Outstanding UFO Sighting Reports” (1966) at page 4_5 (in Chapter 4) of the UFOIRC spiral-bound edition.

 

[24.08] Thomas Olsen, “The Reference for Outstanding UFO Sighting Reports” (1966) at page 4_6 (in Chapter 4) of the UFOIRC spiral-bound edition.

 

[24.09] Thomas Olsen, “The Reference for Outstanding UFO Sighting Reports” (1966) at page 4_7 (in Chapter 4) of the UFOIRC spiral-bound edition.

 

[24.10] Thomas Olsen, “The Reference for Outstanding UFO Sighting Reports” (1966) at page 4_11 (in Chapter 4) of the UFOIRC spiral-bound edition.

 

[24.11] Thomas Olsen’s book is discussed at: Condon Report (“Scientific Study of Unidentified Flying Objects”,  Edward U Condon (Director) and Daniel S Gillmor (Editor) (1969)) at page 32 (in Section 2 “Summary of the Study”, by Edward U Condon) of the uncorrected version submitted to the Air Force (with the same page numbering in the 3 volume paperbound edition distributed by the National Technical Information Service, US Department of Commerce) at page 23 of the Vision hardback edition (with the same page numbering in the Bantam paperback edition).  The first of these editions has the same page numbering as the edition available free online at the following links:

http://files.ncas.org/condon/text/contents.htm

http://www.project1947.com/shg/condon/contents.html

 

[24.12]  MUFON-CES has a website at the link below, available as at June 2010:

http://www.mufon-ces.org/text/english/about.htm

Category:

“Best UFO Cases” by Isaac Koi

 

PART 26:         Quantitative criteria : Moravec's rating system

 

Mark Moravec has suggested that assigning “numerical weights” to the factors which are considered important when comparing UFO reports produces “a system whereby UFO reports can be objectively compared”.  He has suggested that “a good system is one that is more than just a logical exercise; that does not violently clash with our own subjective (but carefully considered) comparisons of cases; and is simple and practical to use” (see Footnote 26.01).

 

When putting forward the “UFO Report Rating System” summarised below, Mark Moravec suggested that the individual factor weightings can be combined to give numerical totals and that “a point in the range of possible totals could be defined as the dividing line between UFO reports of ‘limited’ and ‘high’ merit” (see Footnote 26.02). 

 

Moravec’s system involves five factors, each with values from 0 to 5. 

 

The first four factors (i.e. “Documentation” [“D”], “Time Lapse Before Investigation” [“T”], “Witness Credibility” [“W”] and “Physical Evidence” [“P”]) relate to “supporting evidence”.

 

The fifth factor (“Strangeness” ["S"]) emphasizes “the value of proximity and substantial effects associated with a UFO experience”.

 

Moravec suggested obtaining a total rating by adding the first four values and multiplying the sum by the fifth (Strangeness): R = (D + T + W + P) × S.

 

He commented that cases with a rating of less than 20 points could be considered “limited merit reports”, while cases with a rating equal to or exceeding 20 points are “high merit reports”.

 

The values to be assigned for each factor were:

 

 

“Documentation” ("D")

 

0 = Anecdote or unconfirmed media account

1 = Witness statement

2 = Report form completed by witness

3 = Brief witness interview by qualified investigator

4 = Detailed witness interview by qualified investigator

5 = Detailed on-site investigation by qualified investigator

 

 

“Time Lapse Before Investigation” ("T")

 

0 = More than 5 years, Not known or Not applicable

1 = 1-5 years

2 = Within a year

3 = Within a month

4 = Within a week

5 = Within 24 hours

 

 

“Witness Credibility” (W)

 

0 = Single witness with low or unknown credibility

1 = Multiple witnesses with low or unknown credibility

2 = Single witness with high credibility

3 = Multiple witnesses with high credibility

4 = Multiple independent witnesses known to each other

5 = Multiple independent witnesses not known to each other

 

 

“Physical Evidence” ("P")

 

0 = No physical evidence detected

1 = Transient physical effect not instrumentally recorded (physiological, electromagnetic etc)

2 = Transient physical effect instrumentally recorded/analysed (photograph, radar, radiation reading, etc)

3 = Durable physical effect not instrumentally recorded (unanalysed ground trace, artefact, etc)

4 = Durable physical effect instrumentally recorded/analysed

5 = Multiple durable physical effects instrumentally recorded/analysed

 

 

“Strangeness” ("S")

 

0 = Identified or Probable identified

1 = Possible identified or Not enough information

2 = Distant light or object (NL [Nocturnal Light] or DD [Daylight Disc])

3 = Distant light or object with substantial effects on witness/physical environment

4 = Close encounter (CE1, CE2 or CE3)

5 = Close encounter with substantial effects on witness/physical environment
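As a minimal illustration, Moravec’s arithmetic can be sketched in Python. Only the formula R = (D + T + W + P) × S and the 20-point threshold come from Moravec’s proposal; the function name and the example factor values below are my own illustrative assumptions, not drawn from any real case evaluation.

```python
# Sketch of Moravec's UFO Report Rating System (hypothetical helper).
# Each factor value (0-5) would be chosen from the tables above.

def moravec_rating(d, t, w, p, s):
    """Return (rating, merit) where rating = (D + T + W + P) * S."""
    for value in (d, t, w, p, s):
        if not 0 <= value <= 5:
            raise ValueError("each factor must be in the range 0-5")
    rating = (d + t + w + p) * s
    # Moravec's suggested dividing line: 20 points or more = high merit.
    merit = "high merit" if rating >= 20 else "limited merit"
    return rating, merit

# Hypothetical case: detailed interview (D=4), investigated within a week
# (T=4), multiple credible witnesses (W=3), no physical evidence (P=0),
# close encounter (S=4).
print(moravec_rating(4, 4, 3, 0, 4))  # (44, 'high merit')
```

Note that because the Strangeness value multiplies the rest, any case scored S = 0 (an identified or probably identified stimulus) receives a total of zero regardless of its supporting evidence.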

 

 

 

 

Actual applications of Moravec’s proposals

 

Moravec has published the application of his rating system to “all the cases presented in PSIUFOCAT” (see Footnote 26.02).   However, the extract of the relevant book that I have obtained does not include those evaluations.

 

Apart from that application by Moravec himself, I am not currently aware of any database or publication which has applied his proposals.

 

 

FOOTNOTES 

 

[26.01] Mark Moravec’s article entitled “Evaluating UFO Reports” in the Journal of the Australian Centre for UFO Studies, Volume 2 number 1, February 1981 pages 13-15.

 

[26.02] Mark Moravec, “PSIUFO Phenomena : A Study of UFOs and the paranormal”, 1982, Appendix 1 : “Evaluation of Reports”

 

Category:

“Best UFO Cases” by Isaac Koi

 

PART 25:         Quantitative criteria : Figuet’s hardest cases

 

 

Michel Figuet prepared a paper for the “European Congress on AAP” in November 1988 entitled “Criteria for selecting the hardest cases and other recent works on French and Belgium sighting catalogues” (see Footnote 25.01).   That paper was presented to the “European Congress on AAP” by Jacques Scornaux, as Figuet did not speak English (see Footnote 25.06). 

 

Very similar papers by Figuet have been made available in French (see Footnote 25.02) and Italian (see Footnote 25.03). Incidentally, the Italian version of that paper includes some information and references that are omitted from the copy of the English version provided to me (although I understand that a more complete copy of the English version did exist – see Footnote 25.06).

 

Michel Figuet’s paper began by acknowledging that “at least some of the criticisms addressed to ufology” in previous years had been “well-founded” (see Footnote 25.04).   He suggested that “we can no longer afford the risk of working on valueless cases” because “it is of no use, and it is like offering a present to debunkers, who will eagerly and easily destroy them”. 

 

He suggested that other methods for selecting the hardest cases were “not severe enough”. Consequently, “an informal group of French and Belgian ufologists” found it necessary to “establish a new set of extremely strict criteria”. He acknowledged that this selection method would “surely” result in “a great many cases” being eliminated as “waste” or “noise”, and that this may include some cases being “unduly rejected”.  However, he considered that “it is better to wrongly exclude potentially hard cases than to wrongly include explainable cases. What really matters is that what will remain will be very solid”.

 

The suggested criteria seek to isolate cases “even if their number is very small, that would testify with a high degree of certainty to the existence of at least one original phenomenon (whatever it may be) having a physical component”.

 

The relevant criteria relate to the features of the phenomenon (B to D), the sighting conditions (E to J), the witnesses (K and L) and the investigation (M to Q).

 

The criteria were intended to be multiplied together, with the product being multiplied by 100 to obtain a total mark with a maximum of 100.

 

The highest mark for each criterion is 1, with lesser values being assigned in relation to some of the criteria if certain factors are present.

 

 

Factor A : Value = 1 if no explanation based on serious objective data can be proposed for the phenomenon.  Otherwise value = 0.

 

Factor B : Value = 1 if the phenomenon is not a point phenomenon (i.e. its apparent size did not remain smaller than that of Jupiter or Venus during the whole sighting). Value = 0.9 if the phenomenon remains point-like but follows a complex trajectory. Otherwise, value = 0.

 

Factor C : Value = 1 if the phenomenon’s angular coordinates change during the sighting, or where the movement consists in apparently approaching or moving away. Otherwise, value = 0.

 

Factor D : Value = 1 if the phenomenon does not have a steady movement (straight line or simple curve, even if broken with stops), or if it leaves the ground level or its close proximity, or where the phenomenon has a sharp outline or does not consist only in light blobs.  Value = 0.7 if the phenomenon consists in light blobs having a sharp outline or arranged in an orderly fashion. Otherwise, value = 0.

 

Factor E : Value = 1 if the phenomenon is not a night-time phenomenon or is lit, at least partially, during some part of the sighting. Value = 0.7 if the phenomenon is self-luminous.  Otherwise, value = 0.

 

Factor F : Value = 1 if the sighting duration is more than 30 seconds.  Value = 0.9 if there are physical effects (with a duration of at least 10 seconds). Otherwise, value = 0.

 

Factor G : Value = 1 where the sighting duration is less than 15 minutes or if the sighting duration is longer but the behaviour of the phenomenon is not constant or repetitive during the whole sighting. Otherwise, value = 0.

 

Factor H : Value = 1 where there is a landmark in the environment making it possible to know the angular coordinates of the phenomenon or its exact position on the ground. Otherwise, value = 0.

 

Factor I : Value = 1 where the witnesses are not all in a continually moving vehicle. Value = 0.9 where the witnesses are all in a ship. Value = 0.5 where the witnesses are all in a continually moving vehicle but the phenomenon is close and diurnal. Otherwise, value = 0.

 

Factor J : Value = 1 where there is no obstacle present during the whole sighting that is likely to distort the phenomenon’s image or to limit perception of it. Value = 0.9 where the phenomenon occurs during the daytime and the only obstacle is a window. Otherwise, value = 0.

 

Factor K : Value = 1 if there are two or more witnesses (at least one of them being more than 18 years old) who have no physical or mental disability impairing their perceptive powers or their capacity to testify. Value = 0.7 if there are not two or more witnesses, but there are physical effects. Otherwise, value = 0.

 

Factor L : Value = 1 where the witnesses constitute at least two independent groups (each group may consist in only one witness) who give reasonably similar descriptions.  Value = 0.7 if the witnesses are not independent (e.g. form a single group) but give similar descriptions. Otherwise, value = 0.

 

Factor M : Value = 1 where the first field investigation was performed less than one year after the sighting. Value = 0.9 if the first investigation was performed between one and three years after the sighting.  Otherwise, value = 0.

 

Factor N : Value = 1 where the investigation report includes all the following (otherwise value = 0):

1, The precise date and time (to within 30 minutes)

2. The precise place of the sighting

3. The weather conditions

4. Age, sex and occupation of the witnesses

5. The way the sighting began and ended

6. Some data enabling assessment of the witnesses’ reliability.

 

Factor O : Value = 1 where the witnesses were interviewed separately.  (Witnesses who are not acquainted with each other are considered to have been interviewed separately if the way they were interviewed is not known).  Value = 0.7 if the witnesses were not interviewed separately.

 

Factor P : Value = 1 where the investigation was performed at the place of the sighting, in the presence of the witnesses and under the same environmental conditions (light and, as far as possible, weather).  Value = 0.7 if the investigation was performed at the place of the sighting and in the presence of the witnesses, but under different light conditions. Otherwise, value = 0.

 

Factor Q : Value = 1 where the investigator’s name and address are known, as well as his/her possible membership of a private group.  Otherwise, value = 0.
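The multiplicative arithmetic of this method can be sketched as follows. Only the rule itself (multiply the factor values together, then multiply by 100) comes from Figuet’s paper; the `figuet_score` helper and the example factor assignments are hypothetical, since assigning real values requires the judgements described in the criteria above.

```python
# Sketch of Figuet's scoring method: total mark = 100 * product of the
# values assigned for the individual criteria (maximum 100).
import math

def figuet_score(factor_values):
    """Total mark = 100 * product of the individual factor values."""
    return 100 * math.prod(factor_values)

# A hypothetical case meeting every criterion fully scores 100:
perfect = {letter: 1.0 for letter in "ABCDEFGHIJKLMNOPQ"}
print(figuet_score(perfect.values()))  # 100.0

# A single zero factor (e.g. no independent witness groups and
# dissimilar descriptions, L = 0) eliminates the case entirely:
flawed = dict(perfect, L=0.0)
print(figuet_score(flawed.values()))  # 0.0
```

The multiplicative structure is what makes the criteria so severe: any single zero-valued criterion reduces the total mark to zero, whereas the intermediate values (0.9, 0.7, 0.5) merely discount it.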

 

 

The Italian version of the paper includes a section containing the following comments: “Ideally, the scoring of cases should be done by people who have access to the original investigation report or, better still, by the investigator himself. But as we know, alas, the world of ufology is far from ideal ... Often the investigators are no longer active, or the group in possession of the report has disbanded, or reports are not made available to rival researchers, or are lost, or perhaps never existed ... Thus, for many cases an accurate score is impossible.”

 

It may be that the criteria are too idealistic for the real world of ufology, where records of investigations are all too often incomplete or (even more commonly) not readily accessible to other researchers.

 

 

The relevant factors were generated following discussion between a small group of ufologists, including Claude Maugé, Thierry Pinvidic, Jacques Scornaux, Franck Boitte and Denis Breysse (see Footnote 25.05).

 

Several members of that group of ufologists (Claude Maugé, Jacques Scornaux and Denis Breysse) have been kind enough to supply me with some further information and reflections upon the above criteria.

 

Claude Maugé has commented to me that many of the values used by various authors in such schemes to assess the importance/credibility of UFO reports “are based more on good sense than on ‘scientific’ data” (see Footnote 25.07). He said:  “For instance, Figuet’s criteria state that the ‘best’ cases must have a duration of more than 30 seconds. Why? Because when a little team (in which I was) discussed about that matter, such a minimal duration seemed to us necessary so that the witness had perceived enough data. But the reading of psychological literature convinced me later that we had badly overestimated that duration: in most cases, 10 seconds are enough for the brain to record many, many facts”.

 

 

Denis Breysse mentioned that the minutes of the group’s discussions about the relevant criteria “fill more than one hundred pages…” (see Footnote 25.05). He said that:

 

“The criteria were discussed extensively on several occasions and the final version was fixed in February 1984.

Our basic ideas were :

-     to define criteria which are external to the source (without postulating any idea about its nature),

-     to eliminate all main sources of confusions/misperceptions,

-     to give some weight to each criterion (which would allow to select more or less hard cases, depending on the chosen threshold on the final ranking)”.

 

He also commented that “perhaps some discussion can also be fruitful more than 20 years later” and that if “some courageous ufologist” wants to select the ‘best’ of their files, “these criteria will probably be a good basis” (see Footnote 25.05).

 

Denis Breysse has also noted that one researcher has argued against such criteria, “since the best criterion is (according to him) the truthfulness of the witness, that the investigator can guarantee!”.  I have not read the relevant article (see Footnote 25.08), but would be interested to know how an investigator is supposed to be able to determine with any confidence that a witness is being truthful and, probably even more importantly, accurate (both in terms of recollection and the original perception).

 

 

Another of the relevant group of ufologists, Jacques Scornaux, also kindly provided some information to me in 2007 about the above proposal (see Footnote 25.06).  He made the following comments:

 

“Our criteria obviously lacked reliability indexes for witnesses and investigators ! But estimating the witness' reliability and the investigator's objectivity is inevitably somewhat... subjective ! And is moreover psychologically and "politically" difficult : a poor investigator (or a poor witness) can be a good friend, and there are already so many disputes and conflicts within ufology... So I confess I somewhat lost interest in these criteria. They are at least to be complemented.”

 

 

 

Actual applications of Figuet’s criteria

 

The Italian version of the paper by Michel Figuet putting forward his proposed criteria contains a section applying the scoring system to at least a few sightings (see Footnote 25.03).  It mentions that Jacques Scornaux applied the criteria to some significant sightings and published them in his magazine, and that “at least one case achieved the maximum score of 100. This is the classic CE2 at Vins-sur-Caramy (April 14, 1957) … [see Footnote 25.11]”. That section of the paper stated that “some other cases have received a score of 70”, mentioning the Valensole incident, the Lezay incident (May 1, 1975) [see Footnote 25.09], and the case of Villiers en Morvan (August 21, 1968) [see Footnote 25.10].

 

Jacques Scornaux has expressed some concerns about the actual score achieved by certain cases as a result of applying Figuet’s system (see Footnote 25.06).  Jacques Scornaux has stated that he must “confess the fact that, among the tested cases, the Vins-sur-Caramy CE2 was the only one to obtain a maximum score left me rather uncomfortable with the criteria”. He said that “Vins looks like a perfect case, except that it was investigated by the late Jimmy Guieu, who was not, to say the least, a model of rigorous and objective investigator. So can we be assured that every apparently unexplainable detail of this case has been exactly reported, without exaggeration?”.  He also commented that “Perhaps still worse, the Dr X case had the honourable score of 70. The problem is that it is now known this case is a hoax...”.

 

 

Given that Figuet himself kept a database of certain UFO reports in France (FRANCAT), I was interested in learning whether Figuet applied his criteria to that database and whether there had been any analysis of any such application.  Unfortunately, one member of the group of ufologists that developed these criteria, Denis Breysse, has told me that he is “not aware of any extensive application of these criteria to the FRANCAT cases” (see Footnote 25.05).   Another member of that group, Jacques Scornaux, provided some further information (see Footnote 25.06): “To my knowledge, there were not many applications of these criteria and, to answer Isaac's question, I know of no catalogue or database routinely applying this kind of scores. I can confirm that it was not even systematically applied by Figuet to FRANCAT, his French CE catalogue. As, after his death, SCEAU association made a detailed inventory of Figuet's archives, I am able to assure that we found no trace of such a score on FRANCAT index cards”.

 

Jacques Scornaux has also provided some further observations on the absence of any wider adoption of Figuet’s criteria: “Why were these selection criteria not more widely applied? Well, it would be useful to have the opinion of other members of the team who developed them (and comprised, as Denys rightly remembers, Denys himself, Franck Boitte, Michel Figuet, Claude Maugé, Thierry Pinvidic and me). One of the reasons is probably that the core members of the team (Denys, Claude, Thierry and me), who all lived in Paris and met at least once a month for ufological debates and brainstorming, disbanded soon after as Denys, Claude and Thierry successively had to leave Paris because of their job, and it was before the Internet era... Michel fell ill and died prematurely” (see Footnote 25.06).

 

 

I note that the additional material in the Italian version of the paper refers to a further catalogue which was being developed in France by Gilles Munsch and Eric Maillot (apparently continuing a job begun by François Diolez).  It is not clear to me whether that catalogue applied Figuet’s criteria. Jacques Scornaux has mentioned that he would have to contact Munsch and Maillot about what this project became after Figuet's death (see Footnote 25.06).

 

 

 

FOOTNOTES

 

[25.01] Michel Figuet, “Criteria for selecting the hardest cases and other recent works on French and Belgium sighting catalogues”, European Congress on AAP, Bruxelles, 11-19/11/1988.  

 

[25.02] Michel Figuet, "Catalogue Francat des rencontres rapprochées en France (Listing 800-1982) (1)", in "Lumières dans la nuit" 255/256, 1985.

 

[25.03]  Michel Figuet, “Ufo Forum”, n. 8 (October 1997). Available online at:

http://www.arpnet.it/ufo/f8figuet.htm

 

[25.04] The Italian version of Michel Figuet’s paper includes references in relation to this point which do not appear to have been included in the English version of that paper.  The material cited by Michel Figuet includes the following articles in English: (a) Claude Maugé, “Questioning the ‘Real’ Phenomenon”, Magonia No 13, 1983; (b) Michel Monnerie, “The Case for Skepticism”, in Hilary Evans and John Spencer (editors), “UFOs: 1947-1987”, Fortean Tomes, 1987; (c) Jacques Scornaux, “The Rising and Limits of a Doubt”, Magonia No 15, 1984.

 

[25.05] Email from Denis Breysse to Isaac Koi, 8th December 2006.

 

[25.06] Email from Jacques Scornaux to Isaac Koi, 1st January 2007

 

[25.07] Email from Claude Maugé to Isaac Koi, 29th May 2007.

 

[25.08] TO BE OBTAINED AND TRANSLATED - D. De Tarragon, LDLN 271-72 (P. 4).

 

[25.09] NOT OBTAINED - Blay, Bosch, Dupuy and Chasseigne, LDLN No 148, 1975.

 

[25.10] NOT OBTAINED - Joël Mesnard and René Fouéré, Phénomènes Spatiaux No 18, 1968; Fernand Lagarde and the LDLN group, Mystérieuses Soucoupes Volantes, Albatros, 1973.

 

 

Category:

“Best UFO Cases” by Isaac Koi

 

PART 27:         Quantitative criteria : Koi's ICES Ratings

 

Isaac Koi has adapted Hynek's Strangeness and Probability Ratings (discussed in PART 20: Quantitative criteria : Hynek – Strangeness and Probability) as set out below.

 

The "ICES Rating" is obtained by multiplying four factors (I, C, E and S) together:

 

1. I = IMPACT RATING

2. C = CREDIBILITY RATING

3. E = EXPERT RATING

4. S = STRANGENESS RATING

 

Each of these ratings has a potential score  of up to 14, so that they can be illustrated by playing cards.  The highest ratings are represented by the "picture cards", i.e.:

a Jack represents a rating of 11,

a Queen represents a rating of 12,

a King represents a rating of 13,

an Ace represents a rating of 14.
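Since the descriptions of the individual ratings are still to be inserted, the following Python sketch shows only the arithmetic and the playing-card display convention described above. The function names and the example values are my own assumptions, not part of Koi’s published scheme.

```python
# Sketch of the ICES Rating: four factor ratings, each up to 14,
# are multiplied together; ratings of 11-14 display as picture cards.

CARD_NAMES = {11: "Jack", 12: "Queen", 13: "King", 14: "Ace"}

def card_label(rating):
    """Display a single factor rating as its playing-card equivalent."""
    return CARD_NAMES.get(rating, str(rating))

def ices_rating(i, c, e, s):
    """Overall rating = I * C * E * S (maximum 14**4 = 38416)."""
    return i * c * e * s

print(card_label(13))             # King
print(ices_rating(14, 12, 7, 9))  # 10584
```

As with the Figuet and Moravec schemes, the multiplicative structure means a very low value on any one factor sharply suppresses the overall rating.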

 

 

 

1. I = IMPACT RATING

[Description of this rating to be inserted]

[Description of this rating to be inserted]

[Description of this rating to be inserted] 

 

 

2. C = CREDIBILITY RATING

[Description of this rating to be inserted] 

[Description of this rating to be inserted]

[Description of this rating to be inserted]

 

 

3. E = EXPERT RATING

[Description of this rating to be inserted] 

[Description of this rating to be inserted]

[Description of this rating to be inserted]

 

 

4. S = STRANGENESS RATING

[Description of this rating to be inserted] 

[Description of this rating to be inserted]

[Description of this rating to be inserted]

 

[Sample ratings to be inserted]

 

The first two factors (at least) of the ICES Ratings can also be applied to researchers and books:

(a) The impact rating : [insert discussion]

(b) The credibility rating : [insert discussion]

 

In relation to some researchers it is tempting also to state a (fairly high) strangeness rating, but this is probably impolitic...