Comparative and Analytical Remarks on Judicial Fact-finding*
After a rough survey of judicial fact-finding in Germany and in common law and their different standard(s) of proof, some specific questions are discussed: possible reasons for the reluctance of the German judiciary and scholars to admit the common law approach; the distinction between two stages in fact-finding: the information gathering and the final decision-making; the differentiation of standards as the result of the weighing of the disutilities of an erroneous decision (a relationship expressed by a formula developed for the first time by Antoine de Condorcet in 1785); decision making in evenly balanced cases (the German „non liquet“); proof of paternity and D. H. Kaye’s criticism of the prior probability of 50% in paternity cases; two German paternity cases with extreme factual constellations; the momentum of the prior probability; the interdependence between the prior probability and the standard of proof.
1. Standard(s) of Proof and Decision Making
1.1 Differentiation of Standards of Proof in Civilian and Common Law
1.1.1 The Two Standards
German law, and indeed the civilian system, at least in theory, accepts only one standard: in contrast to the common law the civilian system does not systematically distinguish between a standard of proof in civil cases (proof on the balance of probabilities, sometimes expressed as the „p>0.5 rule“) and a – much higher – criminal standard („proof beyond a reasonable doubt“). The one civilian standard is the high criminal standard of the common law. This gap between the two world wide systems of law is odd and needs explanation.
Clermont and Sherwin (2002), with further references, conclude their comprehensive comparative study (p. 274 seq.):
Civil-law and common-law standards of proof diverged in the late eighteenth century, probably because of the one system’s French Revolution and the other’s distinctive procedure. The French Revolution, in the course of simplifying the civilian law of proof, hid the standard of proof from view. …
… the different civil standards employed by civil-law and common-law courts conform to subtle differences in the procedural objectives of the two systems. The civil law seeks the legitimating benefits of the myth that their courts act only on true facts and not on mere probabilities. Common-law courts seek legitimacy elsewhere, perhaps in other myths, and thus are free to adopt the standard of proof that more fairly and efficiently captures the real truth of the case.
The gap is all the bigger since, under the common law, the general standard is not the high criminal standard but the civil standard. In criminal proceedings assertions in favour of the defendant (for example self-defense or insanity) may be proved „on the balance of probabilities“. The reason is obvious: the high criminal standard reflects the asymmetric weights of possible errors, or, in other words, the presumption of innocence according to the venerable insight: Tutius semper est errare in acquietando quam in puniendo, ex parte misericordiae quam ex parte justitiae.
The standard of proof is the criterion for the ultimate decision and varies with the symmetry or asymmetry of the disutilities of erroneous decisions. The error of convicting an innocent defendant is more harmful than the error of failing to convict a guilty one. Reducing the risk of wrongly convicting an innocent defendant automatically increases the risk of failing to convict the guilty, and the total error rate rises because relevant, but too „soft“, evidence must be disregarded when the high standard is applied. The relaxed civil standard, however, stands and falls with the symmetry of disutilities that is characteristic of civil cases.
A lawyer acquainted with the common law will hardly believe that the „beyond reasonable doubt“ standard yields reasonable results in civil cases, whereas a lawyer from the continent will be full of misgivings about reducing the burden of proof as sharply as the p>0.5 rule does.
To illustrate the reach of the criterion for the ultimate decision (the standard of proof) I recall two famous cases:
1.1.2 The O. J. Simpson Case
More than a decade ago a double murder stirred up the public in the States and all over the world: the accused was O. J. Simpson, the most famous football star of the United States. The victims were his ex-wife Nicole and her new friend Ron Goldman. The jury in the criminal court brought in the verdict „not guilty“. In the lawsuit for damages brought by the families of the victims, Simpson was ordered to pay 33.5 million dollars, namely 8.5 million in compensatory damages and 25 million in punitive damages, the reason for the extremely high punitive award being the second jury’s utmost conviction that Simpson was in fact guilty. As a matter of fact, in that subsequent civil trial some additional evidence of rather strong probative value was introduced.
But even if the evidence in both trials had been the same, the conflicting findings in the Simpson case are, in the view of a common lawyer, not erroneous in law because of the two different standards of proof. The jury in a criminal court has to acquit the accused unless it is satisfied beyond a reasonable doubt that the accused is in fact the criminal, whereas the jury in the civil trial decides upon the „more probable than not“ standard. Or as Roskill, J. put it:
The plaintiffs cannot succeed unless and until they have satisfied me, on a balance of probabilities, of the truth of that which they assert.
In France a court dealing with damage claims is always bound by the finding of the criminal court, whether the verdict is „guilty“ or an acquittal. This rule excludes conflicting outcomes. In Austria there is another rule: the verdict „guilty“ is binding, but not the acquittal. The reason for this distinction lies in the different evidentiary requirements for a conviction and for an acquittal: an acquittal may rest on insufficient evidence; it does not necessarily mean that the defendant’s innocence has been proved – notwithstanding the acquittal, he may very well have committed the crime. In Germany the legal position is the same as in the common law: the ruling of the criminal court has no binding force. But in practice the civil court will in due course follow the fact-findings of the criminal court.
1.1.3 The Contergan (Thalidomide) Case in Germany
The German system produces now and then scandalous results. A somewhat remote but striking example was the handling of the Contergan (Thalidomide) cases in Germany – in contrast to the straightforward procedure in England:
In England in 1968 (when severely disabled babies were born at a very unusual rate) the lawyers of the victims’ families immediately started a civil action in court, which ended half a year later with an agreement on damages for the plaintiffs. In Germany the lawyers of the victims’ families delayed the civil damage proceedings because they did not feel able to prove their case (and indeed causality) up to a „probability on the limit of certainty“. They preferred to wait for the outcome of the criminal proceedings. It took the prosecution six and a half years to start them, and the courts another three years to finish them, although the defendant could not seriously deny the following allegation of the prosecution:
The mothers are from all parts of the world. They are of various ages. They have – only – in common that they took thalidomide during the early stage of pregnancy. About eight or nine months after thalidomide was recalled, the surge of malformations stopped as suddenly as it had started at the time when the new drug went on sale. In areas where the drug was not sold there were no similar births of disabled children.
1.2 Reluctance in Germany to Lower the General Standard of Proof
The reluctance in Germany to accept (in principle) more than just one standard of proof is rather strong. Judicial fact-finding is basically seen as a matter of perception and knowledge, and not so much of decision making. It is seen as a matter of fact, not of evaluation. The truth must come out. Therefore the ultimate criterion was, and still is, certainty, and not a mere probability – „certainty“ being qualified as „moral“ certainty.
However in practice a couple of devices water down the strict rule.
A very general device is the idea of the „free conviction of the judicature“ („freie richterliche Überzeugung“), the French „intime conviction du juge“. „Free“ means: not limited by a body of formalised and strict legal rules concerning proof (the former system of „Legalbeweis“). It is a subjective approach, qualified only by the fact-finder’s – i.e. the court’s – obligation to state explicitly the reasons for the particular fact-finding decision („Begründungszwang“). The test of the free conviction of the judge somehow counterbalances the criterion of (moral) certainty („Vollbeweis“). Its merit is flexibility, which is at the same time its weakness, offering little orientation if the decision-maker is confronted with scientific evidence expressed in numbers and ratios. Still, the idea of the „free conviction of the judicature“ was, and still is, a great achievement of the civilian system, to be compared with the common law’s jury system in its merits as well as in its defects.
There are additional devices that lower the strict standard in German law. In some limited areas of negligence and the like, „prima facie evidence“ is held to prove an allegation fully as long as it is not shaken by evidence coming from the respondent. In other cases the courts or even the legislator shift the ultimate burden of proof to the respondent. And finally, in some special fields a „lightening of proof“ („Beweiserleichterung“) is conceded by statute, the test being „preponderant probability“ („überwiegende Wahrscheinlichkeit“) or something similar. But the paradigm of proof in court remains „full proof“, meaning moral certainty, that is, (highest) probability on the limit of certainty.
The high and single German standard reflects, I think, a deeply rooted dichotomy between the realm of facts and the realm of norms: fact-finding as such must not be mixed up with the legal evaluation of those facts. The court first has to find out the truth, and truth is considered to be normally within the reach of the court! If the court is not able to find out the truth, it has to decide the case according to the ultimate burden of proof. On the level of fact-finding one should not peep at the final outcome of the case. This is the classical paradigm of judicial proof in Germany. And this paradigm is challenged by the common law approach of satisfying the court „on a balance of probabilities, of the truth of that which they (the plaintiffs) assert“.
Jürgen Rödig (1942-1975) analyzed court proceedings and indeed judicial decision making; using mathematical logic as a stringent tool, he exposed this dichotomy as a doctrinaire device and showed that court proceedings as such have the structure of logical proof („Beweis als logische Form des Prozesses“). I adopted this approach and came to the conclusion that the common law’s differentiation of standards of proof is sound, and not the moral-certainty approach as the general standard.
1.3 Optimal Production of Evidence and Final Decision-making
To counter the misgivings of the German judiciary and scholars vis-à-vis the „relaxed“ civil standard, i.e. proof on the balance of probabilities (Überwiegensprinzip), I suggest distinguishing clearly between two levels or stages of the court’s fact-finding: on the one hand the gathering and careful scrutinizing of all the available evidence needed for drawing sufficiently safe conclusions, and on the other hand the final decision-making. These two stages should not be mixed up. The p>0.5 rule belongs to the second stage; it aims at exploiting the given evidence of the particular case fully, particularly if the evidence is either very poor or almost evenly balanced. But this does not relax the requirements concerning the gathering and scrutinizing of all the available evidence. In D. H. Kaye’s (2000) words:
The crucial issue is what to do about the ineluctable risk of error. I believe that the law’s best strategy is to formulate rules of evidence that promote the accuracy of factual determinations and to encourage triers of fact to decide in accordance with their best-informed and best-justified estimates of the probabilities that the totality of the material facts is such as to warrant recovery.
Again: gathering and scrutinizing the evidence in a given case is one thing, and deciding whether or not the plaintiff (the prosecution) has discharged the burden of proof is another. The first operation requires an optimum of information, not necessarily a maximum.
Note: the criterion for determining this optimum differs sharply from the criterion for the ultimate decision. The former depends on the importance of the case as such; there are cases of extremely high importance, both for the parties and for the community, where the disutilities of both kinds of possible errors are symmetrical. On the other hand, there are cases of asymmetrical error disutilities which are of little importance, and where everybody will be happy if costs in time and money are saved. Whether or not the court has suitably gathered and elaborated all the available evidence is a matter of increasing the amount and quality of the relevant evidence at marginal cost for the parties and the community. The standard of proof that governs the ultimate fact-finding, however, is a matter of obtaining a maximum sum total of utilities – or, if you prefer, a minimum sum total of disutilities – again for the parties and the community. Obviously both stages are crucial for the accuracy of the court’s ultimate fact-finding.
Common law proceedings as well as court proceedings in civil law countries clearly separate the two stages of fact-finding mentioned above: both are marked by a sharp break between the gathering and scrutinizing of the evidence and the final decision making. In a jury trial the turning-point is the judge’s summing up. All the more surprising is the reluctance of, for instance, the English judiciary to articulate this distinction clearly, talking instead vaguely about „standards of proof varying with the seriousness of the allegation“, a „test“ belonging rather to the first than to the second stage. The jurisprudence in the United States, however, states unambiguously: the choice of the standard of proof „reflects a very fundamental assessment of the comparative social costs of erroneous factual determinations“, and this test belongs clearly to the ultimate decision making.
1.4 A Formula to Determine the Standard of Proof by Weighing the Disutilities of Possibly Erroneous Decisions
There is a simple formula for determining the criterion of decision by weighing the disutilities of errors, which I shall name the „Condorcet formula“.
e = r / (1 + r)

r = D1 / D2
„e“ being our (one) mark on the scale from 0 to 1, the standard of proof;
„r“ being the ratio of the two disutilities D1 and D2; D1 being the disutility emerging from erroneously finding A, and D2 the disutility emerging from erroneously finding non-A (= B).
If both disutilities are equal, r becomes 1 and e becomes:
e = 1 / (1 + 1) = 0.5.
Thus the Condorcet formula corroborates the p>0.5 rule in cases with symmetrical error disutilities.
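The formula can be sketched in a few lines of code; the numeric disutility values in the example are illustrative assumptions, not figures taken from the text:

```python
# A minimal sketch of the Condorcet formula e = r / (1 + r), with r = D1 / D2.
def standard_of_proof(d1: float, d2: float) -> float:
    """Decision mark e on the 0..1 scale, given the disutility d1 of
    erroneously finding A and the disutility d2 of erroneously finding non-A."""
    r = d1 / d2
    return r / (1 + r)

# Symmetrical disutilities (the typical civil case): the p>0.5 rule.
print(standard_of_proof(1.0, 1.0))   # 0.5
# Wrongful conviction weighted ten times as heavily (a criminal case):
print(standard_of_proof(10.0, 1.0))  # ~0.909
```

As the asymmetry r grows, the mark e approaches 1, mirroring the high criminal standard discussed above.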
Professor Kaye (1999) develops the p>0.5 rule elegantly and visualizes the interdependence between the standard and the disutilities (= losses, costs, errors) by graphic representations. M. Redmayne (1999, p. 169) gives a more complex formula serving, as far as I can see, the same purpose and constructed on the same insights. He prefers the more complex structure because he does not agree with the assumption of equal utility of accurate fact-findings, an assumption that looks sound to me, however.
In criminal proceedings the law protects the defendant (and indeed all of us) against errors of justice by stipulating a particularly high standard of proof for a verdict. The community accepts deliberately the acquittal of accused persons who are in fact guilty: The evidence is good enough for making the guilt probable but not good enough to make it sure beyond a reasonable doubt. Society is ready to pay this price in order to avoid errors in justice. But if we go too far in protecting the defendant we force ourselves to acquit too many criminals. The smaller evil becomes the greater. Such a practice undermines morale, demotivates prosecution and encourages crime. Law-makers thus sail like Odysseus between the rocks of Scylla and Charybdis.
1.5 Standard of Proof, Burden of Proof and the So-called „Non Liquet“
According to German doctrine the ultimate burden of proof as a fact-finding device comes in if the standard of proof has failed and things have ended in a deadlock (the so-called „non liquet“). In theory the proof of the main issue („Hauptbeweis“) and the proof of its contrary (the so-called „Gegenteilsbeweis“) must be given in just the same way, the criterion being for both of them moral certainty, i.e. „probability at the limit of certainty“. This means that there are two marks on our scale: one near „one“ and the other near „zero“. Thus the non-liquet area is supposed to be rather large, and all the cases in this grey area are to be decided according to the ultimate burden of proof.
If we imagine, for instance, one single, clear-cut turning point on a virtual scale from „zero“ to „one“ as our criterion for the ultimate decision (no matter where it is fixed: 0.5? 0.6? … 0.99?), in theory an undecidable fact-finding situation will only occur if the overall probability of the alleged fact coincides precisely with our mark on the scale. But such a coincidence is a rare borderline situation, as for instance the case of identical twin brothers as concurrent candidates in a paternity suit, where paternity is supposed to be cleared by genetics alone (below 3.2). Of course, if the decision mark is not clear-cut, or if we fix two marks enclosing a grey zone, we will immediately get a larger group of non-liquet issues.
The great majority of lawyers in Germany holds that non-liquet issues arise axiomatically. Yet some, myself included, believe that the law-maker himself may stipulate whether or not a grey area is admitted.
The strategy of admitting a grey area may be reasonable under special circumstances. But it is sound only if we are prepared to solve the grey area cases according to a special legal ruling. Such a ruling may be delaying the decision for a certain period hoping that additional evidence will then be available; or splitting up the alleged sum (for damages) amongst the parties. But to decide the grey area cases according to the ultimate burden of proof simply shifts the mark on the scale in favour of the respondent (and against the plaintiff correspondingly). The recourse to the burden of proof is not a genuine but a feigned special ruling.
2. Proof of Paternity
2.1 Scientific Evidence and Standard of Proof
The fascinating thing in paternity cases and other identification issues is the clash of scientific blood/tissue-test or DNA evidence (and indeed the expert’s mathematical calculation of the probability of paternity) with the standard of proof. I call it a clash because the experts compute their findings in numbers that have to be integrated into the whole of the evidence of the particular case. The experts’ figures often reach very high degrees of probability (0.99 and much more). Thus the court has to sharpen its decision mark on the scale and lay its cards concerning the exact standard of proof on the table.
One might think that scientific achievements of this kind corroborate the suitability of the high civilian standard of proof, at least for cases where this kind of evidence is available. But when, many years ago – the English Family Law Reform Act 1969 with its sect. 26 being brand new at the time – I compared paternity cases in England and in Germany in great detail, it became clear to me that the p>0.5 rule is superior to the standard of moral certainty and its equivalent, „probability on the limit of certainty“. It is superior because it increases the fact-finder’s chance of hitting the truth in the given case, and thus of getting better results in the long run as well. And this is so even in fields as sensitive as the rebutting of the presumption of legitimacy! As a matter of fact, the German Bundesgerichtshof produced at the time a couple of bizarre judgements. In my view this was due to a misunderstanding of „probability“ as a concept strictly opposed to „truth“ or „moral certainty“.
Nowadays science provides courts with even more powerful DNA evidence, the so-called genetic fingerprint. Science clears paternity more or less fully. But as we shall see below from two German cases, the basic issue remains the same as with blood/tissue evidence: the court has to cope with probabilistic knowledge, though often of extremely high probative value. And that means: however high the probability may be, the evidence is rebutted if equally strong or stronger evidence emerges. Thus paternity cases remain a suitable field for testing our fact-finding devices.
2.2 The Findings of the Expert and the Standard of Proof
Common ground is, I think, that there should be a sound division of labor between the court and the expert: It is the expert’s duty to provide the court with the information coming from the blood/tissue or DNA testing of the given set of mother, child and defendant and nothing else. She has to communicate the probative weight of this information in the light of her scientific knowledge, above all her statistical knowledge. It is up to the court to put together the whole of the evidence concerning the particular case and to work out whether or not the plaintiff has discharged the ultimate burden of proof. Common ground, too, is that the expert is bound to state her findings as clearly and verifiably as possible: She has to communicate her message in a way that makes its essence understandable to a reasonably intelligent layperson as jurors or judges are with regard to genetics.
All this is trivial. But things become complicated and problematic as soon as the expert uses numbers as a measurement of the probative value of the particular item or complex of items. What does a figure in the shape of a ratio, for instance a percentage or a number representing a likelihood ratio, really mean with regard to judicial proof? If the receiver of the message has in her or his mind a given standard of proof and associates this standard with a percentage, let’s say the p>0.5 rule, the message is very significant indeed. But without such a reference system, without appropriate background knowledge, a percentage or any other number is not a meaningful code for the fact-finder in his or her decision-making.
Summing up: The expert comes out with her figure representing the probability of the respondent/defendant being the father of the child according to the blood/tissue and DNA testing. It is up to the court
(1) to integrate this piece of information into the whole of the evidence of the particular case and
(2) to decide the case by applying the relevant standard of proof to the overall probability representing the total of the evidence of the given case.
2.3 The Expert’s Reasoning and Professor Kaye’s Criticism of the PoP (50) Approach
Suppose the blood characteristics of the child, his mother and the alleged man on the one hand, and the frequencies of those blood characteristics within the relevant set of the population on the other hand, are known. The expert then proceeds as follows – the reader will forgive my utterly gross simplifications:
First of all she will ask: can this man possibly be the father of this child, or do the tests exclude him from being the father? If there is a characteristic in the blood (or in the genetic material) of the child that must have come from the father, and if precisely this characteristic is lacking in the alleged man’s blood or genes, he is excluded. He definitely cannot have begotten this child (except for – very rare – cases of genetic anomalies).
If the alleged man is not excluded by considering just one characteristic which is very common within the relevant set of the population, such a finding does not mean much. If he is not excluded in view of quite a number of characteristics more or less common within the relevant set of the population, it already means something. And if he is not excluded considering a very rare characteristic, we have a strong indicator that this man may indeed be the father.
This reasoning leads over to the second level:
Here the expert asks simply: Is the alleged man the father or not? What is the probability of him being the father? If there is a very rare characteristic that must come from the father and if it is to be found in the blood of the alleged man, there is strong evidence for him being the father. Conformity in more than one rare characteristic produces a high or even extremely high probability.
The Bayesian formula is a device for the expert to compute numerical probability values. The result presents the overall (posterior) probability of the defendant being the father, based on this specific set of genetic information. Experts may also present their findings by conveying just the paternity index (likelihood ratio). But as we have seen already, this is not the end of the first stage. The whole of the relevant evidence in the particular case – and not just the blood/tissue and DNA findings – must be evaluated by the court in order to reach the ultimate decision.
Up to the beginning of the seventies of the last century, German courts allowed blood-test evidence only to exclude paternity. The idea was that only an exclusion might be fully reliable, a positive indicator of paternity remaining forever on the level of a mere probability. But things (and abilities) have changed, and the (positive) indicator (probability) of paternity in the individual case may now be calculated by the expert. Most experts use the Essen-Möller formula, a version of the Bayesian formula starting with a prior probability of paternity of 50% (Kaye’s „PoP (50)“).
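The arithmetic of this convention can be sketched as follows; the paternity index in the example is an arbitrary illustrative value, not one from the cases discussed here:

```python
# Hedged sketch: posterior probability of paternity (PoP) from a paternity
# index (likelihood ratio) and a prior probability, defaulting to the
# 50% prior of the Essen-Möller convention (Kaye's "PoP (50)").
def probability_of_paternity(paternity_index: float, prior: float = 0.5) -> float:
    prior_odds = prior / (1 - prior)              # a 50% prior gives odds of 1:1
    posterior_odds = paternity_index * prior_odds
    return posterior_odds / (1 + posterior_odds)

# With a 50% prior the posterior reduces to PI / (PI + 1):
print(probability_of_paternity(1000.0))  # ~0.999
```

Note that the result is driven by the choice of prior: a much smaller prior turns the same paternity index into a far lower PoP, which is precisely the point of contention in Professor Kaye’s criticism.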
Professor Kaye, in his comprehensive, penetrating and well-known study, criticises this 50-50 approach. For him it is not a neutral and fair starting point for the expertise. According to him the fifty-fifty rule is supported by the following – unconvincing – arguments:
Not knowing anything else about the case, the expert may reason that 0.5 is a fair or neutral starting point, since it gives half the probability to the mother and half to the alleged father. * In other words, the expert can argue that the use of a prior probability of 0.5 expresses the ignorance of the laboratory, and as such, is both reasonable and innocuous. – This argument is unconvincing…
He concedes only that „it simply expresses the belief that the mother is as likely to tell the truth as to lie when she identifies a particular man as the father.“ And a little bit further on he argues:
As previously explained, the computation of the PI (paternity index) supposes that if the defendant is not the father, the father is drawn at random from a large population. Arguably, a truly neutral choice of the prior odds similarly should assume that the defendant was selected at random. Although the relevant population for this selection is difficult to define, if there are more than two men biologically capable of fathering the child, then the prior probability for the defendant is less than 0.5.* According to this view, the laboratory’s ignorance should be represented by distributing the prior probability uniformly over all men in the relevant racial and geographic population. If, for instance, there were 100,000 Caucasian men in the city where the Caucasian mother of the Caucasian child resided, the prior odds might be taken to be 1 to 99,999, and the posterior odds for Don’s (defendant’s) paternity would be 19.4 to 99,999, which corresponds to a PoP (probability of paternity) of 0.00019, or 0.019%. Any other approach to arriving at a single, „neutral“ PoP gives considerable credence to the very fact of the mother’s accusation, which is hardly part of the genetic analysis.*
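Kaye’s arithmetic in this passage can be reproduced directly; the paternity index of 19.4 and the population of 100,000 men are his illustrative figures:

```python
# Reproducing Kaye's illustration: a uniform prior over 100,000 men.
paternity_index = 19.4
prior_odds = 1 / 99_999                          # prior odds of 1 to 99,999
posterior_odds = paternity_index * prior_odds    # i.e. 19.4 to 99,999
pop = posterior_odds / (1 + posterior_odds)
print(f"{pop:.5f}")  # 0.00019, i.e. about 0.019%
```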
Professor Kaye has underlined his view by disputing the reasoning of the Kentucky Supreme Court in Butcher v. Commonwealth. Other, but not all, authorities in the field support him. Professor Richard D. Friedman (2000) has backed him up vigorously in a fulminant rejoinder to Judge Richard Posner’s assertion that an unbiased fact-finder should begin consideration of a disputed case with prior odds of one to one as the neutral starting point. I come back to this below (4.).
3. Two German Paternity Cases
3.1 Case of Contradictory Scientific Findings Explained by a New Mutation
In 1990 the Bundesgerichtshof decided a case in which, according to a series of concurring expert opinions, paternity was at the same time excluded and highly probable! The fatherhood of the alleged man was excluded on the ground of the so-called Pi system (his blood was missing the characteristic Pi (M3), which could only have come from the father). But according to all the other various systems there was conformity, resulting in a PoP (50) of 99.999904%. The lower courts had brought in three serological expert opinions and one DNA expert opinion. The DNA expert had declared the alleged man to be „without any doubt“ the father of the child. The Federal High Court confirmed the decision of the lower courts affirming paternity. The experts explained the contradictory evidence by a new mutation of the Pi gene, the human mutation rate being of an order of magnitude of µ = 10⁻⁵ or µ = 10⁻⁶. Including these mutation rates in the calculus, the PoP (50) decreased to 99.5851% and 95.9999%, respectively, according to the experts.
If we take into account this overall probability of the case and if our decision mark is fixed at 99.73%, we are bound to deny paternity. This does not seem sound. The Federal High Court did not discuss this aspect. It simply ruled out the evidence against paternity on the ground that it was a feigned exclusion, a false bit of pseudo-evidence arising out of mutation.
I do not agree with the ruling out of the evidence of exclusion. These findings and their explanation by a mutation of the Pi gene are scientific findings of the same nature as the other PoP (50) findings. Therefore the overall PoP (50) of 99.5851% or 95.9999%, respectively, which embraces the (im)probability of a mutation, represents the comprehensive final outcome of the combined blood/tissue and DNA testing. And this figure must, first, be integrated into the whole of the evidence of the particular case, and, second, be put on the balance for the ultimate fact-finding.
Let’s keep in mind:
(1) Notwithstanding the apparently clear logic of an exclusion of paternity by scientific findings from a blood/tissue or DNA test, the findings yield mere probability knowledge and may be rebutted by equally strong or even stronger evidence.
(2) Evidence of the same nature has to be dealt with in the same manner. It is a flaw if a part of the scientist’s findings is held to be absolutely true (as the German courts did, and are still inclined to do, with regard to the evidence of exclusion).
(3) The test „probability on the limit of certainty“ is far from being an unmistakable instruction. Regarding merely its precision, this test is even less unequivocal than the p>0.5 rule, because the criterion „probability on the limit of certainty“ is not a clear-cut mark on our scale.
3.2 Case of Identical Twins as Rivals
The Federal High Court decided a case in which the respondent defended himself by naming his identical twin brother as a rival. The calculus of the expert serologist gave probability rates of 99.9994% and 99.9995% for the paternity of the respondent and of his identical twin brother, respectively. The magistrate court had held: there is a probability on the limit of certainty that one of the two is the father; but since the probability rate in respect of each of them is equal, the legal action against the respondent must be dismissed. The court of appeal did not change the verdict. The Federal High Court quashed the verdict and instructed the lower court to find out whether the twin brother had in fact had intercourse with the mother during the biological conception period.
What may we abstract from this case?
(1) Again: However high the PoP (50) based on scientifically scrutinized genetic evidence may be, it can be watered down (or even be rebutted) by new relevant (and stronger) evidence.
(2) For the issue between the two identical twin brothers, genetic evidence of any kind is irrelevant, irrespective of a higher or lower standard of proof. We must rely on the other evidence of the particular case. If all things are found equal in those other respects too (for instance intercourse during the most likely conception period, absence of contraceptives, potency), there is a genuine deadlock or „non liquet“, and God alone knows the truth. To decide such a case the court will have to resort to an additional legal rule.
(3) The case with identical twin brothers is a borderline case. In practice such a genuine deadlock is rare. As mentioned, the legal system may or may not produce this sort of situation by enlarging the mark on the scale, or indeed by prescribing two marks that enclose a grey area (above 1.3).
Moving away from identical twins to ordinary brothers, we find that here the genetic evidence is not irrelevant, but of much less discerning power than amongst unrelated alleged fathers. In such a case the relevant standard of proof becomes very meaningful. The superiority of the p>0.5 rule as far as the ultimate decision is concerned is due to the fact that it exploits the given evidence better than the high standard, which rules out information that is relevant but beneath the defined threshold.
4. Interdependence of Prior Probability and Standard of Proof
Now we have to consider the interdependence between the experts’ prior probability and the standard of proof. The matter is of particular interest and concern if the proposed prior probability is 0.5 and coincides with the general standard of proof in the common law, the p>0.5 rule.
4.1 Momentum of the Prior Probability
To demonstrate the importance of the prior probability for the outcome, let us take a thought experiment: the person who committed the crime has left traces fit for genetic analysis at the scene. Expert analysis concludes: only one person in a million possesses exactly this genetic profile. Suppose that the set of persons who could have committed the crime, and thus have left the trace, embraces all the inhabitants of a big city of, say, one million persons; then the prior probability is 1 in 1,000,000. The police have detained a person possessing the relevant genetic profile. What is now the posterior probability that this person is in fact the one who left the trace at the scene? Bayes’ formula gives a posterior probability of 50% that the person detained by the police is in fact the person who left the trace. Thus the momentum of the prior probability is very considerable, notwithstanding that it diminishes continuously as fresh evidence of high probative value is added.
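The arithmetic of this thought experiment can be sketched as follows (the figures are the assumed ones from the example: one million possible perpetrators, a random-match probability of one in a million, and a test that is certain to match the true source):

```python
def posterior(prior, p_match_if_source=1.0, p_match_if_not=1e-6):
    """Bayes' theorem for the source hypothesis, given a reported match."""
    num = prior * p_match_if_source
    return num / (num + (1 - prior) * p_match_if_not)

# Prior of 1 in 1,000,000: the match alone leaves the hypotheses evenly balanced.
p = posterior(prior=1 / 1_000_000)
print(round(p, 4))  # 0.5
```

The point of the sketch is only to make the momentum of the prior visible: a one-in-a-million match probability, confronted with a one-in-a-million prior, yields a posterior of about 50%, not near-certainty.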
Our thought experiment is meant as a warning against the danger of overestimating DNA evidence and the like (the prosecutor’s fallacy). It is not supposed to be a sound approach in a real case. The assumption that anybody in town may have left the material is as far from reality as is D. H. Kaye’s uniform distribution over all men in the relevant population (above 2.3).
R. Meester and M. Sjerps (2003) discuss in great detail the impact of the prior probability, or prior odds, and recommend that in court proceedings the expert should report likelihood ratios only together with a table showing the relation between prior probability and posterior probability. They claim (p. 85), rightly I think, that their
tabular approach has three advantages: (1) you can use it with any of the hypotheses sets discussed; (2) it guides the jurors to combining the likelihood ratio with the prior odds; (3) it may prevent the infamous prosecutor’s fallacy.
However, the tabular approach is in fact quite modest: as the expert refrains from definitively choosing a specific prior, there is no single posterior probability but a range of probabilities which could be confronted with the standard of proof. Consequently, this comparison has to be drawn on an intuitive basis, which is, I feel, not at all unsound.
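A minimal sketch of such a table can be generated mechanically; the likelihood ratio of 100,000 below is an assumed figure, not one from Meester and Sjerps:

```python
def posterior_from_odds(prior, lr):
    """Combine a prior probability with a likelihood ratio via the odds form of Bayes."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * lr
    return post_odds / (1 + post_odds)

LR = 100_000  # assumed likelihood ratio reported by the expert
for prior in (1e-6, 1e-4, 0.01, 0.5):
    print(f"prior {prior:>8g}: posterior {posterior_from_odds(prior, LR):.4f}")
```

The table makes the modesty of the approach visible: the same likelihood ratio yields posteriors from roughly 0.09 (prior one in a million) up to 0.99999 (prior 0.5), and the choice among the rows remains with the fact-finder.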
4.2 Expert’s Prior Probability and Presumption of Innocence
Common ground is, I suppose, that the expert’s starting point has to be unbiased (or neutral). Otherwise the expertise can hardly claim to be a scientific device. The point of controversy is not neutrality but how neutrality may be achieved. Which prior probability represents neutrality? And (or) do we need a special prior to establish a fair (neutral) starting point for the expert?
First: On the familiar probability scale from zero to one (zero: „non-A is certain“; one: „A is certain“) the neutral start is „0.5“ or 50% (on the likelihood ratio scale from zero to infinity the deadlock mark is „1“).
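The correspondence between the two scales just mentioned is a simple change of coordinates, sketched here for illustration:

```python
def to_odds(p):
    """Probability scale (0, 1) -> odds scale (0, infinity)."""
    return p / (1 - p)

def to_probability(odds):
    """Odds scale (0, infinity) -> probability scale (0, 1)."""
    return odds / (1 + odds)

print(to_odds(0.5))          # 1.0 — the neutral start 0.5 maps to the deadlock mark 1
print(to_probability(1.0))   # 0.5 — and back again
print(round(to_odds(0.99)))  # 99 — a near-certainty probability maps to odds of 99:1
```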
Second: Do we need a special prior to establish a fair (neutral) starting point for the expert?
Neither the court nor the expert can start from zero (of „guilt“). If the expert chooses as her prior probability the prior odds that reflect the other, non-genetic evidence of the particular case, she invades the domain of the court and thus does not inform the court properly about the isolated probative value of her findings. R. D. Friedman (2000) apparently holds that the prior as a rule should be a descriptive one and that a prior of 0.5 is unfairly high and utterly unrealistic. Although he clearly recognises that in criminal trials a (descriptively) realistic prior would be above 0.5, to counter the prejudice attaching to the defendant he opts for a prior at the bottom of the scale. Thus he slides into the presumption of innocence, which is a prescriptive device.
Indeed, in a criminal hearing a heavy burden of prejudice weighs unmistakably on the shoulders of the defendant. To reach a fairly neutral start the law places an equally heavy burden on the prosecution: the presumption of innocence. It protects the defendant against the strong prejudice resulting simply and only from his role in the proceedings. The presumption is a normative device, and an utterly important one for getting the fact-finder into the right position. It is the imperative starting point for the court.
Analytically the presumption of innocence and the criminal standard „proof beyond a reasonable doubt“ not only share the same aim, namely protecting the defendant and the community from miscarriages of justice; they are one and the same threshold bearing different names at the beginning and at the end of the trial. This double warning underlines its importance.
Now, is this presumption also appropriate for the expert? Or does she, in applying it, again invade the court’s domain? I think the answer to the second question is „yes“, she does, and she should not do so!
In order to enable the judiciary to develop little by little a feeling for the probative weight of the expert’s figures, her starting point should be one agreed upon between the judiciary and the community of scientists, and it should be one and the same, if not in all cases, at least in all comparable cases, i. e. in all cases that are to be decided on the same standard of proof. This is another strong reason why the expert’s starting point should be normative, not descriptive.
Note: Even if there is more than one man who could have begotten the child, in my view the issue remains a twofold alternative just the same: has this defendant fathered the child, or is somebody else the father? And this approach is to be applied to every alleged father separately, one after the other, i. e. the expert has to investigate the relevant test materials each time without considering the other possible fathers and without including them in her calculations. In average cases the non-fathers will be excluded by the expert’s findings. If more than one man cannot be excluded, it is up to the court to draw conclusions from the findings of the expert as well as from the other evidence of the particular case. The expert, using the same prior probability for all alleged fathers, indicates her preference by the posterior probability or likelihood ratio attributed to each of them. Peter Donnelly and Richard D. Friedman (1999) write in this respect:
We are inclined to believe that the binary approach, either declaring a match or not according to the previously prescribed criteria, is a satisfactory method of summarizing the data, even though some information is lost in this way. In any event, this is the way the data is usually presented, and we will not challenge it here.
4.3 Prior of 0.5, the p>0.5 Rule and Suggestions for Loosening the Knot Concerning the Expert’s Fair Prior Probability
Another strong reason for D. H. Kaye’s and R. D. Friedman’s misgivings concerning the PoP (50) approach is the p>0.5 rule as the general standard of proof in the common law. „Proof“ being a threshold, where is the threshold if the prior probability itself fulfils the criterion of the ultimate decision? Is it sound to start with a prior probability of 0.5 if the criterion of the ultimate decision is 0.5? Is it coherent to support both the p>0.5 rule on the one hand and the PoP (50) approach on the other, as I am inclined to? In a system with „probability at the limit of 100%“ as the general standard of proof, misgivings of this kind are much weaker or even absent: the threshold is the distance between 0.5 and 0.99 or more.
Before I come to my conclusions I will draw attention to viewpoints that may loosen the knot a little.
First I put two questions:
(1) An alternative proposal in paternity cases is to let the expert start by distributing the prior probability uniformly over all men in the relevant racial and geographic population. As we have seen, this approach demonstrates the great momentum of the prior probability. But is it the suitable approach in a real case? Do we have any rational reason to suppose that the mother, when identifying the defendant, has made a random choice among all men in the relevant racial and geographic population, regardless of the contacts she had during the relevant period? Should that assumption really be the starting point for the court or the expert?
As far as the information relating to the frequency of the characteristics of the particular child/mother/alleged father constellation within the relevant population is concerned, this information is the heart of the expert’s message to the court (it equals in fact the likelihood of the defendant not being the father). Does it make sense to use as the expert’s starting point prior odds that are likewise based on the assumption of a random choice among the same population, thus offsetting, as it were, the posterior probabilities?
(2) I think we all agree that the expert should present to the court simply her information about the genetics relevant in the particular case, and nothing else. But she must compute the whole of her expert knowledge into one figure. For this computation there are, as far as I can see, two main devices: the Essen-Möller formula and the likelihood ratio (paternity index = PI). Kaye criticizes the use of the Essen-Möller formula, pointing out that it starts implicitly with a prior of 0.5. He proposes to use simply the PI. But how may the expert explain to the court the meaning of a likelihood ratio without comparing the likelihood ratio scale (from zero to infinity) with the probability scale (from zero to one), and without showing how the PI may be integrated into the whole of the evidence of the particular case?
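The relation between the two devices can be sketched numerically. Under a prior of 0.5 the Essen-Möller value W reduces to W = PI/(PI + 1); the PI of 400 below is an assumed figure, not taken from a real case:

```python
def essen_moeller_w(pi, prior=0.5):
    """Posterior probability of paternity from a paternity index (likelihood ratio).

    With the default prior of 0.5 this reproduces the implicit assumption
    Kaye criticizes: W = PI / (PI + 1)."""
    odds = (prior / (1 - prior)) * pi
    return odds / (1 + odds)

pi = 400  # assumed paternity index
print(f"{essen_moeller_w(pi):.4f}")             # 400/401 -> 0.9975
print(f"{essen_moeller_w(pi, prior=0.1):.4f}")  # the same PI under a lower prior
```

The sketch makes Kaye’s point tangible: the PI is the prior-free piece of information, while the familiar percentage already contains a choice of prior.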
Second: I have to confess that it took me some time to understand why the scale of the likelihood ratio (from zero to infinity) should provide a more appropriate measure of statistical evidence than a number on the well-established probability scale (from zero to one) that measures the degree of certainty or uncertainty. Meanwhile I agree that by communicating the likelihood ratio the expert gives her statistical information without making any assumptions about the prior odds.
Third: To me, under the p>0.5 rule the threshold is the amount (or value) of evidence speaking in favour of the respondent’s case. If the respondent does not offer any evidence whatsoever, and if the general knowledge and experience of the court together with the plaintiff’s story do not give any clue to a flaw in that story, how can the court avoid following the plaintiff’s account?
To some extent at least the controversy seems to be a communication problem caused by different language games intertwined with different backgrounds (L. Wittgenstein). I give an example: to demonstrate how Bayes’ theorem provides a way to escape from the prosecutor’s fallacy, M. Redmayne (1997) shows how an answer to a question of type (1) „What is the probability of finding the DNA evidence given that the defendant is innocent?“ is to be changed into an answer to a question of type (2) „What is the probability that the defendant is innocent, given the DNA evidence?“, and he gives an example:
Suppose a juror has heard some evidence in a trial (say, that the defendant had threatened to kill the victim the week before the victim was found dead) and, on this evidence thinks that it is ten times more probable that the defendant is innocent than guilty.
This sentence bewildered me for quite a while, the bit of evidence heard at trial indicating guilt and not innocence. But the context makes it clear that there is no confusion of words. Obviously Redmayne’s juror started with a strong presumption of innocence (perhaps 100:1) which was lowered, but not rebutted, by the (soft) evidence of guilt to the level of 10 to 1. My difficulty in understanding M. Redmayne’s demonstration is, I submit, due to a communication barrier caused by (subconscious) standards of proof in the back of my mind.
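On this reading the juror’s position is a single odds update; the numbers below are the assumed ones from the discussion (a 100:1 presumption of innocence, and an item of evidence ten times more probable under guilt):

```python
# Juror's prior odds of innocence, assumed at 100:1.
prior_odds_innocent = 100.0

# Likelihood ratio of the threat evidence: P(evidence | guilty) / P(evidence | innocent).
lr_guilt = 10.0

# Evidence of guilt divides the odds of innocence; it lowers but does not rebut them.
post_odds_innocent = prior_odds_innocent / lr_guilt
print(post_odds_innocent)  # 10.0 — "ten times more probable that the defendant is innocent"
```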
5. Conclusions and Final Remarks
5.1 Expert’s Prior Probability and Standard of Proof
Regarding the criminal standard Alex Stein (1996) writes: 
The prior probability problem can possibly be resolved by unfolding the ‚beyond reasonable doubt‘ requirement. … This means that sometimes a person may be convicted when innocent. … The social choice besetting this requirement clearly needs to be explicated by the permissible percentage of erroneous convictions. … This percentage can be employed as prior probability in all criminal cases. Prior probability would thus be postulated normatively, which appears to be an attractive approach. To postulate prior probability normatively, as, say, 1/100, would certainly be more feasible than to determine it through quasi-empirical conjectures. Bayesian evaluation of evidence may thus take off.
I agree. For the court the (first) prior probability has to be normative, not empirical. And Alex Stein connects, or even identifies, this starting point with the standard of proof modelled on the Condorcet formula. Thus he extends his concept to all kinds of standards of proof. I perfectly agree with this extension too. But the approach should not be extended to the expert, for the reason I mentioned above: she assists the fact-finder best if she does not invade the court’s domain. And she would do so either by using a descriptive prior probability taken from the given case or by using a normative prior probability reflecting the relevant legal presumption (or, what is the same, the corresponding ultimate burden of proof).
So what is she to do? The remaining alternative for her is to use the centre as her start, i. e. the 0.5 mark on the probability scale, or to refrain from using any prior probability, i. e. to give, as D. H. Kaye suggests, simply the likelihood ratio – an approach that appears to me appropriate only if the court is able to integrate this item of information properly into the whole of the evidence.
5.2 Ultimate decision-making of the fact-finder
Suppose all the evidence from both sides is in and has been carefully scrutinized. The fact-finder now proceeds from the stage of gathering and scrutinizing to the stage of the ultimate decision making: she or he compares the overall probability in the given case with the relevant standard of proof. In doing so the case tilts one way or the other: either abruptly, if the evidence falls significantly short of or significantly exceeds the relevant standard of proof, or after some oscillation, if the evidence is evenly balanced or parsimonious.
Note: The difficulty in close cases is by no means a characteristic of the p>0.5 rule. The deadlock emerges from evenly balanced evidence on the one hand and a certain lack of precision of the decision mark (standard of proof) on the other, and this coincidence can happen wherever we have fixed our mark (see the German non liquet discussed above, 1.5).
Putting the deadlock constellation aside, the fact-finder is confronted with a simple twofold issue: guilty or not guilty? Has the respondent begotten this child or has he not?
Applying probability theory to fact-finding in court means using an artificial scale representing degrees of certainty and uncertainty in view of the hidden (or better: only partially revealed) twofold reality. At the stage of gathering, scrutinizing and indeed computing the numerous items of evidence we finally arrive at the overall probability of the given case. Applying the standard of proof, we shift to the next stage: the ultimate fact-finding. This move can be seen as a step from a probability state to a somewhat safer state: facts of this kind are hardened; disputing them is formalized and thus rendered more difficult. „The movement from a probability state to one of certainty represents a ‚leap of faith‘ rather than a logical consequence.“ Having passed from the stage of gathering and scrutinizing the evidence to the stage of the ultimate judicial fact-finding, this leap does not look irrational or shady to me.
A common saying is: statistics concern sets, not individuals. However, combining all the evidence concerning the alleged fact in issue by calculating its overall probability means enclosing it in its most specific reference class (Einschluss des Falles in die speziellste Wahrscheinlichkeitsklasse). We perform this in our daily life all the time by intuition. The probability calculus is a device for controlling, and sometimes rectifying, our intuition.
Judges and magistrates, members of a jury and lawyers deliberate and decide on the ground of common sense. Gottlob Frege (1879), father of formalized modern logic, held in his book’s preface that the relation between mathematical logic and common language is to be compared with that between a microscope and the naked eye: formalized reasoning is indeed much more precise than ordinary reasoning, but it is much less comprehensive and flexible. Some believe that the Bayesian formula may (under certain conditions) be helpful in analysing subjective guesses objectively by forcing the fact-finder to transform the various components of the evidence into numbers fit for the probability calculus. Like D. Hodgson (1995) and many others I am sceptical in this respect. To use the probability calculus in court on non-scientific evidence might even be harmful: the very preciseness of numbers yields an illusory reliability and feigned probative force. And formalisations may distract the judge or juror from his or her job, namely to keep an open mind during the whole of the trial and follow the day in court with free-floating attention.
„Chacun appelle idées claires celles qui sont au même degré de confusion que les siennes propres.“ (Everyone calls clear those ideas which are at the same degree of confusion as his own.) With this proviso a good deal of the extremely rich English-language writing on evidence and proof of the last decades is perfectly clear to me, and quite a lot of this research has impressed me deeply. However, I finally agree with D. A. Schum’s (1994) summing up:
Probabilistic reasoning is an enormously rich intellectual activity. The more I observe such activities in many different contexts, the richer they appear to me. I simply do not believe it possible to capture all of this behavioral richness within the confines of any single formal system of probabilities that now exists or that any of us is likely to invent.
To achieve accurate factfinding the best evidence is needed, scientific evidence included. Courts have to confront extraordinary cases. They are aiming at certainty in an ocean of uncertainties. From time to time new promising islands like DNA evidence arise. To make sound use of them the court needs sound support from experts. But the stage of the ultimate factfinding is for the court alone, not for the expert.
My starting point was the striking difference between the civilian and the common law system concerning the standard of proof in civil disputes. Separating the stage of gathering and scrutinizing the evidence from the stage of the ultimate fact-finding, we recognize that the criterion for the optimal search (the sensible amount) of evidence and the criterion for the final decision making are not the same. The Condorcet formula gives the lawmaker a hint for determining the latter. For this formula, however, we need to know the disutilities of both kinds of possible error, and determining them is fuzzy. Therefore this hint is in my view an ordinal propensity device and not a cardinal decision mark.
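The weighing of error disutilities behind the Condorcet (or Kaplan) relationship can be sketched as follows. Minimizing the expected disutility of error yields the threshold p* = D(wrongly finding for the claim) / (D(wrongly finding for the claim) + D(wrongly rejecting it)); the concrete ratios below are illustrative assumptions, not values from the text:

```python
def standard_of_proof(d_false_positive, d_false_negative):
    """Decision threshold that minimizes expected error disutility.

    Find for the claim only if the overall probability exceeds this value."""
    return d_false_positive / (d_false_positive + d_false_negative)

# Equal disutilities of both errors yield the civil p>0.5 rule.
print(standard_of_proof(1, 1))   # 0.5

# If a false conviction is deemed, say, 99 times worse than a false
# acquittal, the threshold rises to a criminal-style standard.
print(standard_of_proof(99, 1))  # 0.99
```

The sketch also shows why the hint remains ordinal: the 99:1 ratio is stipulated, not measured, and a different stipulation shifts the mark.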
Aitken, Colin G. G. and Franco Taroni (2004) „Statistics and the evaluation of evidence for forensic scientists“, 2nd ed. Chichester, England.
Allen, Ronald (1997) „Rationality, algorithms and juridical proof: a preliminary inquiry“ 1 International Journal of Evidence and Proof (= E & P) p. 254-275.
Allen, Ronald (1997) „Reasoning and its foundations: some responses“,1 E & P p. 343-347.
Allen, Ronald (2002) „The error of expected loss minimisation“ Law, Probability and Risk, 2, p. 1-7.
Ball, V. C. (1961) „The Moment of Truth: Probability Theory and Standards of Proof“, in: Essays on Procedure and Evidence, ed. by Roady and Covington, Nashville 1961 p. 84–107 = Vanderbilt Law Review 14 (1961) p. 807-830.
Clermont, Kevin M. and Emily Sherwin (2002) „A Comparative View of Standards of Proof“ 50 The American Journal of Comparative Law p. 243-275.
Clermont, Kevin M. (2003) „Standards of Proof in Japan and the United States“ Cornell Law School Working Papers Series 5 http:/lsr.nellco.org/cornell/clsops/papers/5
Donnelly, Peter and Richard D. Friedman (1999) „DNA Database Searches and the Legal Consumption of Scientific Evidence“ 97 Michigan Law Review, Issue 4, p. 931-985.
Frege, Gottlob (1879) „Begriffsschrift. Eine der arithmetischen nachgebildete Formelsprache des reinen Denkens“, Halle (Saale), Germany; reprint 1971.
Friedman, Richard D. (2000) „A Presumption of Innocence, Not of Even Odds“ 52 Stanford Law Review p. 873-887.
Garrett, Anthony J.M. (2004) „Probability Theory: the Logic of Science, by E. T. Jaynes“, 3 Law, Probability and Risk p. 243-246.
Hodgson, D. (1995) „Probability: the Logic of the Law – a Response“ 15 Oxford Journal of Legal Studies p. 51-68.
Kaplan, John (1968) „Decision Theory and the Factfinding Process“ 20 Stan. Law Review p. 1065-1092.
Kaye, D. H. (1989) „The probability of an ultimate issue: the strange cases of paternity testing“ 75 Iowa Law Review p.75-109.
Kaye, D. H. (1999) „Clarifying the burden of persuasion: what Bayesian decision rules do and do not do“ 3 E & P p.1-28.
Kaye, D. H. (2000) „Bayes, burdens and base rates“ 4 E & P p.260-267.
Kaye, D. H. (2002) „The error of equal error rate“ 1 Law, Probability and Risk p. 3-8.
Kaye, D. H. (2003) „Two theories of the civil burden of persuasion“ 2 Law, Probability and Risk p. 3-8.
Kaye, D. H. and Koehler, Jonathan J. (2003) „The misquantification of probative value“ 27 Law and Human Behavior p. 645-659.
Kaye, D. H., David E. Bernstein and Jennifer L. Mnookin (2004) „The New Wigmore. A Treatise on Evidence. Expert Evidence“, 2005 Supplement.
Krauss and Hertwig (2000) „Muss DNA-Evidenz schwer verständlich sein? – Der Ausweg aus einem Kommunikationsproblem“ MschrKrim p. 155-162.
Mauer, Michael (1978) „Zur Interpretation und zum Beweiswert serostatistischer Abstammungsuntersuchungen“ 9 Rechtstheorie p. 348-372.
Meester, Ronald, and Sjerps, Marian (2004) „Why the effect of prior odds should accompany the likelihood ratio when reporting DNA evidence“, 3 Law, Probability and Risk p. 51-62.
Meintjes-van der Walt, L. (2001) „Ruling on Expert Evidence in South Africa. A Comparative analysis“ 5 E & P p. 227.
Motsch, Richard (1978) „Vom Prozeß als Beweis zum Überwiegensprinzip“, in: Gedächtnisschrift für Jürgen Rödig, U. Klug, Th. Ramm, F. Rittner, B. Schmiedel, Editors (Berlin-Heidelberg-New York) p. 334-348.
Motsch, Richard (1983), „Vom rechtsgenügenden Beweis. Zur Entscheidung von Zivilsachen nach Wahrscheinlichkeit unter besonderer Berücksichtigung der Abstammungsfeststellung“ (Berlin).
Motsch, Richard (2007) „Gerichtlicher Beweis in Deutschland und England“, in: Festschrift des Fachbereichs Rechtswissenschaft zum 400jährigen Gründungsjubiläum der Justus-Liebig-Universität Gießen (Tübingen) p.247-258.
Murphy, Peter (2003) „Evidence, Proof, and Facts – An Introductory Essay“, in: Evidence, Proof, and Facts, A Book of Sources, ed. by P. Murphy, Oxford/New York.
Nijboer, J.F. (1993) „Common Law Tradition in Evidence Scholarship Observed from a Continental Perspective“ The American Journal of Comparative Law p. 299-338.
Pope, Robin Elizabeth (1988) „The Bayesian Approach: Irreconcilable with expected Utility Theory?“ in: B. Munier, Ed., Risk, Decision and Rationality, Reidel, p. 221-230.
Posner, R. A. (1999) „An Economic Approach to the Law of Evidence“ 51 Stanford L. Review p. 1477-1546 = University of Chicago Law School, John M. Olin Law & Economics Working Paper No. 66. http://ssrn.com/abstract=165176.
Redmayne, Mike (1997) „Presenting probabilities in Court: the DNA experience“ 1 E & P p. 187-214.
Redmayne, Mike (1999) „Standards of proof in civil litigation“ 62 Modern L. Rev. p. 167-195.
Rödig, Jürgen (1973) „Die Theorie des gerichtlichen Erkenntnisverfahrens“ (Berlin/Heidelberg/New York).
Rüßmann, Helmut (1985) „Indizien, Kausalität und Wahrscheinlichkeit“, in: Pasternack, ed. „Erklären, Verstehen, Begründen“ tome 1, 1985, p. 139 (reprint in: R. Alexy/H.-J. Koch/L. Kuhlen/H. Rüßmann „Elemente einer juristischen Begründungslehre“ (Baden-Baden 2003) p. 419seq.)
Schuberth, Lutz (2001) „Zur Verarbeitung unsicherer und unvollständiger Informationen“ HADES (internes Memorandum).
Schum, D. A. (1994) „The Evidential Foundations of Probabilistic Reasoning“ (New York).
Scotchmer, Suzanne (1998) „Rules of evidence and statistical reasoning in court“, in: The New Palgrave. Dictionary of Economics and Law, ed. P. Newman, vol. 3 (London/New York) p. 389-393.
Selten, Reinhard (2001) „What is Bounded Rationality?“, in: G. Gigerenzer and R. Selten, Bounded Rationality: The Adaptive Toolbox, MIT Press, p. 13-36.
Shafer, Glenn (1986) „The construction of probability arguments“, in: P. Tillers and E. D. Green, eds. (1988) p. 185-204.
Stein, Alex (1996) „Judicial fact-finding and the Bayesian method: the case for deeper scepticism about their combination“ 1 E & P p. 25-47.
Tillers, Peter and E. D. Green, eds. (1988) „Probability and inference in the law of evidence: the use and limits of Bayesianism“ (Symposion April 1986 at Boston, Mass.).
Tillers, Peter (1986) „Mapping inferential domains“, in: P.Tillers and Green (eds.) 1988, p. 277-336.
Tillers, Peter (1988) „The value of evidence in law“ 39 Northern Ireland Law Quarterly No. 2.
* Dr. Robin Elizabeth Pope, Prof. David H. Kaye, Michael Mauer and Lutz Schuberth gave me most valuable help and advice.
 Note esp. their footnotes 56 and 58. In the text they observe: „But strangely, nobody seems to make much of the difference, in the United States or in Europe“, my study – Motsch (1983) – having slipped them. For a comparison of the standards in Japan and the United States (Japan belonging in this respect to the civil law system) see Clermont (2003). Another informative piece of research is J. F. Nijboer’s (1993). See also L. Meintjes-Van der Walt (2001).
 In the 2005 Supplement of Kaye’s and others The New Wigmore. Expert Evidence (2004) § 8.4 p. 303 there is an analysis of recent literature advocating or purporting to demonstrate a different standard of review for scientific evidence in civil as opposed to criminal cases. Expert evidence appears to be more often excluded in civil than in criminal cases. The authors however conclude, rightly, I think: „But the statistics on civil and criminal rulings on the expert evidence do not establish that a double standard is in place. And even if they were indicative of such a difference, it would not be justified. The same heightened scrutiny is appropriate in both civil and criminal cases.“
 The term optimum is here meant in a common sense way: the sensible amount of information to collect. If the term optimum is too literally interpreted within an expected utility theory an endless regress problem may arise – see R. Selten (2001)
 Richard A. Posner (1999) p.1484seq. offers a formula saying „the search for the evidence should be carried to the point at which the last bit of evidence obtained yields a reduction in error costs equal to the cost of obtaining the evidence.“ He discusses the issue thoroughly including the interdependence of the prior odds in a given case, the standard of proof and the benefit of additional evidence (id. p.1486seq.).
 To call it a formula may be a little presumptuous, it being finally nothing more than the rule of three. However, it was developed by the mathematics professor Ernst Ruh of Bonn University and myself thirty years ago. In substance it is identical with the formula of John Kaplan (1968). A. Stein (1996) p. 37, footnote 25, offers the formula with reference to J. Kaplan. But the relationship was already put in mathematical shorthand by Antoine de Condorcet in his „Essai sur l’application de l’analyse à la probabilité des décisions rendues à la pluralité des voix“ p. 41seq. (Paris 1785; reprint New York 1972). Thus it seems fair enough to baptize it the Condorcet formula. For further details see Motsch (1978) and Motsch (1983), p. 82seq. and p. 103seq. Instead of „error disutilities“ one can speak of „higher stakes in some criminal cases (or more precisely, the disparity in the impact of factually incorrect verdicts) (that) justifies a heightened burden of persuasion (proof beyond a reasonable doubt) in those cases, …“ see D. H. Kaye (2004/2005) § 8.4 p. 302 note 10 with reference to Roger C. Park, 33 Seton Hall L. Rev. p. 1113 (2003).
 For references to other authors see D. H. Kaye (2002) footnotes 1-5 and S. Scotchmer (1998) p. 391 (she alludes to the standard of proof determined by the effort of minimizing the expected subjective cost of errors as „endogenous“).
 Especially if the disutilities of both kinds of error are great, and if the evidence is either parsimonious or evenly balanced, it is very natural and reasonable to look for a „third way“ to escape from the dilemma.
 In a case decided by the German Federal High Court of Justice (Bundesgerichtshof) on January 12, 1994 (NJW 1994 p. 1348) – the rate deduced from the blood test alone was 99.99999998%, and including the DNA-evidence 99.9999999999%.
 The same was true for instance in New Jersey according to N.J.S.A. 2A:83-3, which was repealed in 1983 by the N.J. Parentage Act, L. 1983, c. 17, § 23 – see Supreme Court N. J. January 5, 1993, 617 A.2d 247 (N.J. 1993).
 In a judgment pronounced on July 13th, 1988, the Bundesgerichtshof (FamRZ 1988 p. 1037) commented on a rate of 99.9996% as follows: „Unlike the exclusion of paternity a positive indication is always the result of a probability calculus which cannot exclude completely the possibility – however slim it may be – of a different affiliation of the child. Therefore additional evidence that may shed light on the case ought not to be dismissed as unfit.“ Here the respondent, an Iranian, had applied to hear a compatriot as a witness for having had sexual intercourse with the mother during the period of conception. The Court of Appeal had rejected the application on the grounds that it would unduly delay the proceedings.
 Kaye’s view is shared by the Supreme Court of New Jersey, January 5, 1993 617 A.2d 247 (N.J.1993) with a penetrating and comprehensive discussion of the subject and rich references. The court groans: „The intensity and complexity of the dispute is mind boggling on occasion for those other than mathematical experts. Indeed, even the experts have difficulty with it.“
 Kaye (2004/2005 § 12.4.3 p. 492seq.) discusses Butcher v. Commonwealth 96 S.W.3d 3 (Ky. 2002) – a rape case concerning paternity among closest relatives (the mother of the child was the defendant’s sister, and the father of both was alleged as possibly being the father too).
 M. Redmayne (1999 p. 184): „…the fact-finder must start by according a small prior probability to the plaintiff’s case.“ – but how small?; C. G. G. Aitken and F. Taroni (2004) § 9.8.1 p. 315: „Notice that the assumption Pr(Hp) = Pr(Hd) = 0.5 is unrealistic and can lead to breaches of the law of probabilities…“. F. Taroni and A. Biedermann (2005) p. 95: „Clearly, a predefined assumption of a 50% chance of the suspect being the source of the crime stain, prior to the receipt of the DNA evidence, appears quite unfavourable here.“ – with further references.
 In Germany (in contrast for instance to Switzerland, where a decision mark of 99.8% is fixed by law) no numerical limit is stated in law for paternity proceedings. The 99.73% mark corresponds to the 3-standard-deviation point of the Gauss-Laplace bell-shaped curve. As a standard of proof it is just numerology. But faute de mieux it still plays a rather prominent part in the thinking of German experts and courts.
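The 99.73% figure can be checked directly: it is the probability mass of a normal distribution lying within three standard deviations of the mean (a quick sketch using the error function; the function name is ours):

```python
from math import erf, sqrt

def within_k_sigma(k: float) -> float:
    """Share of a normal distribution within k standard deviations
    of its mean: erf(k / sqrt(2))."""
    return erf(k / sqrt(2))

print(round(within_k_sigma(3) * 100, 2))  # 99.73
```

The figure is thus a property of the bell curve itself, not of the evidence in any particular case – which is the sense in which the text above calls it numerology.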
 I don’t think it is fair to leave the child without a father (as the magistrate court in effect did). Shall both of them be stated to be fathers? No. At the most both should contribute to maintenance. In my view, the one of the two identical twins alleged by the mother should be stated as being the father, presuming that she knows best who fits best.
 But see also the critical comments of David J. Balding, A. P. Dawid, and Christopher M. Triggs and John S. Buckleton, and the response of the authors, (2004) 3 Law, Probability and Risk p. 63-86. – „Bayes’ tables“ have been proposed earlier – see M. Redmayne (1997) p. 201 with further references, for instance to Kaye (1993) and to the US National Research Council’s Report „The Evaluation of Forensic DNA Evidence“ (1996).
 According to very experienced German experts such as K. Hummel or O. Prokop, the prior of 0.5 is unrealistic in German paternity cases too – but because it is much too low! A more realistic, still cautious approach would be a prior of 0.75.
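How much the choice between a prior of 0.5 and 0.75 actually matters can be sketched with Bayes' theorem (the likelihood ratio below is an assumed, illustrative figure, not taken from the cases or experts cited):

```python
def posterior_paternity(prior: float, likelihood_ratio: float) -> float:
    """Posterior probability of paternity via Bayes' theorem:
    posterior odds = likelihood ratio x prior odds."""
    odds = likelihood_ratio * prior / (1.0 - prior)
    return odds / (1.0 + odds)

lr = 10_000  # assumed likelihood ratio of the genetic evidence
for prior in (0.5, 0.75):
    print(prior, posterior_paternity(prior, lr))
# With strong genetic evidence both priors yield posteriors above 99.9%;
# with weak evidence (a small likelihood ratio) the prior dominates.
```

The sketch illustrates the „momentum of the prior probability“ discussed in the text: the stronger the genetic evidence, the less the disputed choice of prior changes the outcome.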
 In England the standard of proof for all paternity cases is the p>0.5 rule, at least since July 1970 when sect. 26 Family Law Reform Act 1969 entered into force. For qualifications see M. Redmayne (1999) p. 178seq.
 According to P. Donnelly and R. D. Friedman (1999), both reports of the US National Research Council (NRC I and NRC II) drastically understate the value of DNA evidence in the so-called trawl cases. The caution in those reports, they conclude, is unwarranted:
No downward adjustment in the force of the evidence is appropriate. Indeed, we will show that the confirmation case and the trawl case are really poles of a continuum, and that the analysis of the two cases is fundamentally the same. It may well be that the total weight of the evidence is less in the trawl case than in the confirmation case – both because there may be less inculpatory evidence in the trawl case* and because the identification of the defendant in that case may taint later-developed evidence by suggestiveness. But this is an entirely different matter, and one that can be left to the ordinary process of arguments to the jury. ….
 One may still be inclined to ask whether the concept of likelihood ratios is not as such a „one-to-one“ approach par excellence, putting two opposed likelihood estimations on one and the same footing, the factor „one“ covering the prior – perhaps under a more perfected camouflage? See also G. Shafer’s critical comment on the likelihood ratio as a general tool for explicating and correcting legal doctrine (1986/1988 p. 202).
 Ludwig Wittgenstein’s – posthumously published – notes „On Certainty“ – edited by G. E. M. Anscombe and G. H. von Wright in German and English (1969 Hongkong – © Basil Blackwell, Oxford) – are from the last years of his life (1949-1951); they were stimulated by G. E. Moore’s „Proof of an External World“, in: Proceedings of the British Academy 1939, and „A Defense of Common Sense“, in: Contemporary British Philosophy, 2nd Series, 1925.
 H. Rüßmann (1985) p. 425seq. recommends suspending decision if the given information does not fulfil the criterion of the (available) maximum of accuracy (Bestimmtheit), a criterion „better known under the postulate of selecting the closest reference class in statistics.“
 The German Federal High Court of Justice commented on the use of the Bayesian theorem in a car accident case concerning damages for negligence – March 28th, 1989, ZZP 103 (1990) p. 62-72 (with a commentary by Rüßmann): „If the weighing of the evidence is fit to be duplicated and free of mistakes in logic there is no need for controlling by a probability calculus in the frame of the so called Bayesian theorem.“ And the English Court of Appeal said in R v Adams (1996) 2 Cr App R p. 467, at p. 482: „To introduce Bayes’ theorem, or any similar method, into a criminal trial plunges the jury into inappropriate and unnecessary realms of theory and complexity deflecting them from their proper task.“ This view was strongly supported by Lord Bingham C.J. in R v Adams (N° 2) (1998) 1 Cr App R p. 377, who ended his speech as follows: „We are very clearly of opinion that in cases such as this, lacking special features absent here, expert evidence should not be admitted to induce juries to attach mathematical values to probabilities arising from non-scientific evidence adduced at the trial.“
The Victorian Court of Appeal (Australia) underlined the necessity to remove from the minds of the jury the possible misconception that the criminal standard of proof could be expressed as a per cent probability; in R v Cavikic (2005) VSCA 182 the court set aside verdicts of murder because the trial judge had answered a note from the jury asking „Definition of reasonable doubt. Also expressed as ratio, i. e. 70 per cent or 80 per cent sure?“ simply by a redirection of the jury in the terms of his first – correct – summing up: per Vincent JA, R v Cavikic (2005) VSCA 182 at (227): „It is inherent in the expression of the standard by reference to a percentage chance of guilt or by some assessment of the odds as in a wager, that some doubt must exist that is to be disregarded once the arbitrarily fixed percentage or rate is reached. The law has never proceeded on that basis and, whilst it is for the jury to determine what constitutes a reasonable doubt in their minds, that task cannot be seen to have been undertaken by the adoption of a process which, in effect, enables them to avoid grappling with their own genuine doubts or perhaps, submerging them in a decision which essentially rests upon a pseudo mathematical assessment of probabilities.“ See (2006) E & P p. 136-138.