In recent days, I was informed of a change in management at the blog, Uncommon Descent, which materially altered circumstances there.
In that context, I decided to revisit the blog, and give greetings and well wishes.
I then spent some time monitoring developments there, which were significant and in my opinion, generally positive.
On the strength of some issues that seemed to require inputs of the sort I had made when I was formerly a regular commenter [as part of a now more or less completed recon in force on trends affecting the Caribbean in the context of the TKI's remit], I have in recent days made some balancing and corrective remarks on two threads: horrid doubts and a multiverse discussion. (You will observe from the former that there is a problem where off-thread links are often not followed up, even by principals in an exchange; this is in fact the immediate context for the length of the deleted post reproduced below.)
However, this is all in a fairly narrow context. Many who come here may be wondering what the fuss is all about.
So, for those needing an overview of the significance of the Intelligent Design issue and/or balancing remarks on why it is in fact -- many criticisms by objectors notwithstanding -- a legitimate scientific enterprise, I present just below the synopsis to my online briefing note. (NB: This note was always linked through my handle in every post I made at UD.):
SYNOPSIS: The raging controversy over inference to design too often puts out more heat and smoke than light. However, as Cicero pointed out [ . . . ], the underlying issues are of such great importance, that all educated people need to think about them carefully. Accordingly, in the below we shall examine a cluster of key cases, starting with the often overlooked but crucial point that in communication systems, we first start with an apparent message, then ask how much information is in it. This directly leads to the question first posed by Cicero, as to what is the probability that the received apparent message was actually sent, or is it "noise that got lucky"? The solution to this challenge is in fact an implicit inference to design, and it is resolved by essentially addressing the functionality and complexity of what was received, relative to what it is credible -- as opposed to logically possible -- that noise could do. That is, the design inference has long since been deeply embedded in scientific thought, once we have had to address the issue: what is information? Then, we look at several key cases: informational macromolecules such as DNA, the related origins of observed biodiversity, and cosmological finetuning. Thereafter, the issue is broadened to look at the God of the gaps challenge. Finally, a technical note on thermodynamics and several other technical or detail appendices are presented: (p) a critical look at the Dover/ Kitzmiller case (including a note on Plato on chance, necessity and agency in The Laws, Bk X), (q) the roots of the complex specified information concept, (r) more details on chance, necessity and agency, (s) Newton in Principia as a key scientific founding father and design thinker, (t) Fisherian vs Bayesian statistical inference in light of the Caputo case, and (u) the issue of the origin and nature of mind. Cf. Notice below. (NB: For FAQs and primers go here. This Y-zine also seems to be worth a browse.)

The -- still very relevant -- c. 50 BC remark by Cicero is:
Is it possible for any man to behold these things, and yet imagine that certain solid and individual bodies move by their natural force and gravitation, and that a world so beautifully adorned was made by their fortuitous concourse? He who believes this may as well believe that if a great quantity of the one-and-twenty letters, composed either of gold or any other matter, were thrown upon the ground, they would fall into such order as legibly to form the Annals of Ennius. I doubt whether fortune could make a single verse of them. How, therefore, can these people assert that the world was made by the fortuitous concourse of atoms, which have no color, no quality—which the Greeks call [poiotes], no sense? [Cicero, THE NATURE OF THE GODS BK II Ch XXXVII, C1 BC, as trans Yonge (Harper & Bros., 1877), pp. 289 - 90.]

Then, yesterday, I saw a thread on a matter long ago discussed at length with Professor PO [cf my briefing note appendix on the issue here and another on some linked broader issues here].
I in particular noted that commenter R thought that he had undercut the general validity of Dembski's Explanatory Filter.
IMO, he is mistaken, having made key logical errors: strawman misrepresentations of the filter and its use, confusion on the issue of false positives and false negatives in statistical inference testing, and, most importantly, the self-referential inconsistency of using the filter intuitively even as he set out to "refute" it.
But, such arguments are often persuasive, especially when they are not promptly and fully rebutted.
No such rebuttal was present.
I therefore thought that a response at a somewhat deeper level than was present at the time of my post was indicated, and built one based on my general briefing note, in light of an earlier, well presented comment by commenter GP.
Alas -- even as I was typing up and posting -- GP's comment was deleted by the poster of the original post, on the claim that its length was inappropriate.
Sadly, GP's apt remarks on the use of epidemiological statistical analyses by Behe to identify the empirically observed boundary to darwinian style evolutionary mechanisms [NB: the malaria parasite undergoes in one year about as many reproductive events as all mammalia would have on the usually accepted bio-geological timeline], and much more, have evidently now been permanently lost. [At first I thought it had been captured by my save-off of the thread on posting, but it had already been deleted by the thread owner, and subsequently no-one has been able to capture a cached copy. UPDATE: I see that it has been captured by ES58, and was briefly put back up to allow GP to copy it, by DS. Further update: the comment is now published here and also appears in the comment below, courtesy ES58.]
Shortly thereafter, the comment reproduced below was also deleted, on the same reported grounds. (Of course, this, if it is UD policy, is a new one; one previously unknown to both GP and myself and contrary to the customary practices at the blog. UD has long been known for the importance of comment exchanges, many of them quite substantial.)
On observing the remark by commenter Tribune that I had a good answer, which he hoped someone had saved, and on receiving a separate request for a copy of the comment, I have decided not to let the matter rest, but to put the comment up here, and to make a brief announcement in the thread that it may be seen here.
I trust this will be acceptable to all.
GEM of TKI
PS: I see that subsequently commenter R has gone on to make remarks on Bayesian probability issues. I therefore refer him to the notes here in the appendix to my general note on Information, Design Science etc. As he will see from the below, Fisherian-style statistical inference by elimination [to some confidence level] is still a very relevant approach in real world decision making, at intuitive and formal levels. Indeed, in the lost GP post, he noted that for most real world epidemiological work, that is what is used; and on the very simple and solid grounds that, for a reasonably scaled sample, distribution tails beyond a reasonable cutoff SHOULD not -- on probabilistic grounds -- show up in observations. Thence, we see the significance of the Dembski style explanatory filter across chance, necessity and agency. [Note to R and DS: I have simply linked for those willing to investigate further for themselves; I will make no attempt to carry out a clumsy cross-blogs debate on a technical matter.]
_________________
APPENDIX: The deleted post, slightly cleaned up
[Nov 26, 2008, UD thread on Prof PO's ID critique paper at http://www.uncommondescent.com/intelligent-design/some-thanks-for-professor-olofsson/ ]
ribczynski:
I think you have unfortunately set up a strawman and are indulging in some selective hyperskepticism, which then leads to self-referential inconsistency on your part:
ID needs its own Eddington to come up with experiments that support ID while disconfirming NDE . . . . If Dembski’s filter indicates design where there is none — if it cries wolf, in other words — then we can’t trust it, and our question about the existence of design in the biological world remains unanswered . . . .

1. Without accurate probability distributions, Dembski’s method cannot avoid false positives.
2. Dembski concedes that if his method were to generate false positives, it would be useless.
3. To form an accurate probability distribution, we must know all of the possible ways in which the structure in question could have arisen.
4. As DaveScot points out, this is impossible, and so we can never be sure we have an accurate probability distribution.
5. Therefore, according to Dembski’s own criteria, his method is useless.
I will comment:
1 –> Let’s start with a basic fact: you are plainly inferring that the functionally specific, complex patterns of alphanumeric characters encountered in this thread [in the form of "posts"] are the products of intelligence, not what I have elsewhere called “lucky noise.”
2 –> But, in fact, as I discuss here, there is nothing in the physics or the logic of the situation to prevent all the above apparent messages from being the product of mere noise.
3 –> Q: So, why do you infer that you are dealing with messages, not noise?
4 –> ANS: By making an intuitive estimate that it is a far better, and more probable explanation, that the messages above are the product of intelligent agents, rather than mere noise. Namely:
Test 1: Lo/hi contingency — if lo or no, that would most likely be lawlike regularity tracing to mechanical necessity. (Cf: where we have heat, air and fuel, reliably due to mechanical necessity, we have a fire. Similarly, unsupported heavy objects fall.)

Result: But in fact high contingency — e.g. 128-state ASCII characters — so explanations should revert to [a] chance [evidently undirected contingency] or [b] design [purposefully directed contingency]

Test 2: Complex [UPB or a relevant looser criterion] and [especially functionally] specified?

–> Here, the issue is that on the presumption of undirected contingency, we have no reason to prefer any one configuration over another, so the relative statistical weights of clusters will dominate likely observations if chance is the driving force.

–> And if we have some shift from the Laplacian indifference, we can factor that in. [For instance, we routinely do so in assessing codes. In English, E is most common, and X rather unlikely, and Q is usually followed by U. But, in the 1930's someone wrote an entire book in which E does not appear even once.]

–> By contrast, we know from experience and observation that designers direct contingencies to fulfill purposes; so clusters of functionally specific complex configurations that would otherwise be overwhelmingly unlikely to occur, are seen

–> So, if we can identify clusters of accessible configurations that are vastly unlikely to occur by chance, but are functionally specified, we can be confident [note, not "certain"; demanding such would be selectively hyperskeptical] that design was at work
RESULT: in this case, we see sufficiently long passages of ASCII text in English to conclude “design” with high confidence.
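(For onlookers who prefer to see the above logic laid out step by step, here is a minimal sketch in Python of the filter as just applied. The function and the 500-bit working threshold are illustrative assumptions for the sketch, not a quotation of Dembski's formal definitions:)

```python
# Minimal sketch of the explanatory filter's decision logic as used intuitively above.
# The 500-bit working threshold is an illustrative assumption for this sketch.

def explanatory_filter(high_contingency, info_bits, functionally_specified,
                       threshold_bits=500):
    # Test 1: low/no contingency points to lawlike, mechanical necessity.
    if not high_contingency:
        return "necessity (lawlike regularity)"
    # Test 2: complex (beyond the working threshold) AND functionally specified
    # points -- provisionally but reliably -- to design.
    if info_bits > threshold_bits and functionally_specified:
        return "design (directed contingency)"
    # Default: chance (undirected contingency).
    return "chance (undirected contingency)"

# E.g. a 150-character ASCII post at 7 bits per character (128 states):
print(explanatory_filter(True, 150 * 7, True))    # -> design
print(explanatory_filter(False, 0, False))        # -> necessity
```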
5 –> IMPLICATION: In short, you have made a common-sense level design inference to a provisional, probabilistic — but on experience, reliable — conclusion. (For instance, I am highly confident that you have not characterised, modelled and proved out the relevant probability distributions to an indisputable degree of precision and accuracy.) So, simply by participating in this thread, you have not only applied but demonstrated your trust in the explanatory filter at an intuitive level. To then object to the same filter when it delivers inconvenient results is both self-referentially inconsistent and selectively hyperskeptical.
6 –> Moreover, far from being useless, the filter is a necessity of real life.
7 –> The above therefore illustrates how we may — and indeed must set out to — credibly and reliably and responsibly know, decide and act beyond what we can prove beyond all rational dispute; especially on matters of momentous fact.
8 –> This is not new, as we may see from section 5 of the introduction to Locke’s Essay on Human Understanding, circa 1690 (pardon the Biblical cites and allusions in Locke’s classic text, Mr Scot):
Men have reason to be well satisfied with what God hath thought fit for them, since he hath given them (as St. Peter says [NB: i.e. 2 Pet 1:2 - 4]) panta pros zoen kai eusebeian, whatsoever is necessary for the conveniences of life and information of virtue; and has put within the reach of their discovery, the comfortable provision for this life, and the way that leads to a better. How short soever their knowledge may come of an universal or perfect comprehension of whatsoever is, it yet secures their great concernments [Prov 1: 1 - 7], that they have light enough to lead them to the knowledge of their Maker, and the sight of their own duties [cf Rom 1 - 2, Ac 17, etc, etc]. Men may find matter sufficient to busy their heads, and employ their hands with variety, delight, and satisfaction, if they will not boldly quarrel with their own constitution, and throw away the blessings their hands are filled with, because they are not big enough to grasp everything . . . It will be no excuse to an idle and untoward servant [Matt 24:42 - 51], who would not attend his business by candle light, to plead that he had not broad sunshine. The Candle that is set up in us [Prov 20:27] shines bright enough for all our purposes . . . If we will disbelieve everything, because we cannot certainly know all things, we shall do muchwhat as wisely as he who would not use his legs, but sit still and perish, because he had no wings to fly.
9 –> In short, far from being useless, the more formalised explanatory filter as elaborated by Dembski et al, is an extension of how we reason in commonsensical situations.
10 –> More particularly, and as GP pointed out at 16 above, it is an extension of how Fisher and many other investigators have routinely used the concept of clusters of possible outcomes bearing differing relative weights, to identify rejection regions for null hypotheses where they could model and parameterise the relevant distributions.
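(To make the Fisherian logic concrete, take again the Caputo ballot-line case referenced in my briefing note, with the figures as usually reported -- 40 of 41 draws favouring one party. A quick, illustrative tail computation:)

```python
from math import comb

# Caputo-style illustration: 41 ballot-order draws, 40 favouring one party.
# Under the null hypothesis H0 of a fair draw (p = 0.5), the Fisherian
# upper-tail probability P(X >= 40) is:
n, p = 41, 0.5
tail = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(40, n + 1))
print(tail)  # ~1.9e-11 -- deep inside any reasonable rejection region, so H0 is rejected
```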
11 –> As my adverting to configurations, contingencies and relative statistical weights hints at [and as was discussed in earlier threads at length with Prof PO and others], the filter is also building on the foundational principles of statistical thermodynamics; principles that give the statistical underpinnings to, say, the second law of thermodynamics. In effect, there is a direction of spontaneous — i.e. undirected — change in systems with large numbers of microstates: towards clusters with higher relative statistical weights. (A classic illustration is Hoyle’s tornado in a junkyard in Seattle. Such is utterly unlikely to assemble a flyable 747; but, such an outcome is not strictly forbidden by the relevant physics or logic.)
12 –> Further, we see from the above that one may profitably, effectively and reliably use intuitive estimates of probabilities, bounds on probabilities, general order of magnitude estimates, statistical models inferred from observations of the real world that almost invariably do not precisely fit such ideal models, and a priori criteria for calculating such probabilities. And, we do so successfully often enough that we place great reliance on statistics in the sciences, in management and in life generally.
13 –> Worse, many highly useful and commonly applied statistical tests are vulnerable to false negatives and positives, which are often traded-off the one against the other in the design of the test. But that does not make them useless; it simply means that we assign degrees of confidence to our conclusions.
14 –> Further to this, the Dembski UPB is deliberately chosen such that it is very resistant to false positives, cheerfully accepting the prospect of false negatives given the significance of the cases in which it will rule. For, given that the number of possible quantum states in the observed universe across its lifetime is several orders of magnitude below 10^150, if we have a reasonable estimate that a particular configuration (or more properly the island and/or archipelago of function on which it sits) is less than 1 in 10^150 of the reasonably estimated accessible space, then we can be highly confident [indeed, morally certain] that a functional outcome in that island is most unlikely indeed to have been by chance.
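(For those wishing to see where the 10^150 figure comes from, the back-of-envelope derivation as it is usually presented multiplies three generous cosmic-scale bounds; the snippet below simply redoes that arithmetic:)

```python
# Usual back-of-envelope bound: ~10^80 elementary particles in the observed cosmos,
# ~10^45 Planck-scale state transitions per second, ~10^25 seconds of cosmic history.
# The count of actual quantum-state transitions will fall some way below this ceiling.
particles, transitions_per_sec, seconds = 10**80, 10**45, 10**25
upb = particles * transitions_per_sec * seconds
print(f"{upb:.0e}")  # 1e+150 -- the universal probability bound
```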
15 –> For instance, let us consider the genomes needed to get to first life:

The first challenge of abiogenesis is to start from the 0 square [genome length zero], and in a plausible chemical environment, (i) get to a viable and sustainable prebiotic soup with the requisite monomers in it, then (ii) move to the first islands of function [in the vast pacific of in principle possible configurations]. (For the moment, we will simply assume for the sake of our argument that once a proto-genome reaches the shores of a viable island, it may then proceed through hill-climbing processes such as “random variation” and “natural selection” [i.e. culling based on differential average reproductive success], to move to the mountains of peak performance in that bio-functional niche.)

The immediate problem is that the first such observed islands are of order 100,000 - 1,000,000 base pairs; and this in a context where the organisms at the lower end are in effect typically dependent on more complex life forms to make life components they cannot make for themselves. The relevant 1 million chain-length sub-space has about 1.010 * 10^698,970 configurations, which is again a very large number, one which will easily swamp the search resources of the observed cosmos. Even if we take 100,000 elements as a working lower limit, that puts us at 1.001 * 10^69,897; still well beyond the available search resources of the observed cosmos . . . .

Why is that so?

First, biofunction is observed to be code based and specific, i.e. it is vulnerable to perturbation. For instance, three of the 64 possible three-letter codons code for STOP. So, immediately, if we form a substring of three DNA letters at random, the odds are just under 5% that they will be a stop codon . . . . Consider a hypothetical genome that requires 100 “necessary” proteins, each with just 100 residues, using altogether 10,000 codons, or 30,000 DNA base pairs. This will require 10,000 codons without an accidental stop in the wrong place, to get the required complement of proteins. The fraction of such 30,000-length genomes that would not be truncated by wrongly placed stop codons is (61/64)^10,000 ~ 1 in [3 * 10^208]. This in itself would make it maximally unlikely that we would get by chance concatenation of DNA elements to a single such minimally functional genome, on the gamut of our observed universe across its typically estimated lifetime. (Nor will autocatalysis of RNA molecules in a hypothetical RNA world get us to bio-functional, protein-making DNA codes.)
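(As a cross-check on the stop-codon arithmetic in the toy 100-protein genome just sketched -- my own quick computation, on the simplifying assumptions already stated:)

```python
from math import log10

# Toy genome from the illustration above: 100 proteins x 100 residues = 10,000 codons.
# 3 of the 64 codons are STOP, so the chance that a random codon is NOT a premature
# stop is 61/64; for 10,000 codons in a row:
codons = 10_000
log10_fraction = codons * log10(61 / 64)
print(f"un-truncated fraction ~ 10^{log10_fraction:.1f}")  # ~10^-208.5, i.e. about 1 in 3 x 10^208
```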
16 –> So, we see here how order of magnitude and bound calculations are more than adequate to show just how tellingly relevant the Dembski filter is for real world situations.
There is no need to demand “accurate” probability distributions, nor is there any need to accept the super-task of specifically identifying all possible pathways for observed configurations to emerge. (Onlookers: Notice the selectively hyperskeptical shift of burden of proof here.)
17 –> Moreover, we know from experience that the filter works: where we have more than 500 - 1,000 or so bits of effective storage space, we see that functionally specified complex information [FSCI] is a reliable artifact of agency, not of chance.
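(To see why 500 - 1,000 bits is a reasonable working threshold, note the configuration counts involved relative to the ~10^150 bound discussed above; a quick check:)

```python
from math import log10

# 500 bits of storage already specifies 2^500 distinct configurations -- beyond the
# ~10^150 universal probability bound; 1,000 bits is vastly beyond it.
for bits in (500, 1000):
    print(f"2^{bits} ~ 10^{bits * log10(2):.1f}")
# 2^500  ~ 10^150.5  (about 3 x 10^150)
# 2^1000 ~ 10^301.0
```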
18 –> Finally, the issue is not that the explanatory filter may make false negatives, but that in cases where we have contingencies beyond the UPB, it rules DESIGN, and per our observations it does so reliably. Indeed, it would be an interesting challenge to see if R can come up with a case where:
[a] the EF rules “design” and we directly know the causal story separately well enough to see that it has committed a false positive, or

[b] we see an entity storing more than 500 - 1,000 bits of information and showing functional specification, where the filter will rule “NOT DESIGNED” where in fact it is designed.
19 –> In the case of, say, Mt Rushmore, the faces each store well beyond 1 k bit of information, and fit the specification of resembling four specific individuals who are well-known historical figures. The filter rules design, and we know independently that this is so. There is no false positive there that undermines our confidence in the filter.
________
As they say in folk dances here in the Caribbean: wheel and tun and come again.
G’day
GEM of TKI
_______________
I trust the above will be helpful. END
4 comments:
KF,
I don't think gpuccio has a copy of his own post; I tried to re-post it, but it was deleted again; I tried to make a post to tell him I had it, but that disappeared too;
here's the post - I guess a quest for truth also has to fit into a certain "length" now, or it's not valid?
[I guess he has a point, because I could post wikipedia and claim the answer is in there; but not everyone has other options]
anyway, I'm making it available to you and you can post it or not, but this is my last attempt: No warranties on my successful cut/paste, so he might want editing rights before it's posted;
Thanks for the opportunity.
es58
gpuccio
11/25/2008
6:38 pm
I have read PO’s essay and, while recognizing the correctness of the general tone, I am really disappointed by the incorrectness of the content. With this I do not mean, obviously, that PO does not know his statistics, but that he uses it completely out of context. And many of his errors derive essentially from apparently not being really familiar with biology.
I will try to make some comments.
PO’s arguments are essentially dedicated first to Dembski, and then to Behe. I think he fails in both cases, but for different reasons.
In the first part, he argues against Dembski’s approach to CSI and his explanatory filter.
The first, and main, criticism that he makes is the following: “He presents no argument as to why rejecting the uniform distribution rules out every other chance hypothesis.”
I’ll try to explain the question as simply as possible, as I see it. Dembski, for example, when applying his approach to biological issues like the sequence of amino acids in proteins, correctly assumes a uniform probability distribution. Obviously, such an assumption is not true in all generic statistical problems, but Dembski states explicitly, in his works, that it is warranted when we have no specific information about the structure of the search space. This is a statistical issue, and I will not debate it in general. I will only affirm that, in the specific case of the sequence of amino acids in proteins, as it comes out from the sequence of nucleotides in the genome through the genetic code, it is the only possible assumption. We have no special reason to assume that specific sequences of amino acids are significantly more likely than others. There can be differences in the occurrence of single amino acids due to the asymmetric redundant nature of the genetic code, or a different probability of occurrence of the individual mutations, but that can obviously not be related to the space of functional proteins. There is really no reason to assume that functional sequences of hundreds of amino acids can be in any way more likely than non functional ones. This is not a statistical issue, but a biological one. So, PO’s criticism may have some theoretical ground (or not), but it is totally irrelevant empirically.
His second criticism is the following:
“As opposed to the simple Caputo example, it is now very unclear how a relevant rejection region would be formed. The biological function under consideration is motility, and one should not just consider the exact structure of the flagellum and the proteins it comprises. Rather, one must form the set of all possible proteins and combinations thereof that could have led to some motility device through mutation and natural selection, which is, to say the least, a daunting task.”
In general, he affirms that Dembski does not explicitly state how to define the rejection region.
Let’s begin with the case of a single functional protein. Here, the search space (under a perfectly warranted hypothesis of practically uniform probability distribution) is simply the number of possible sequences of that length (let’s say, for a 300 aa protein, 20^300, which is a really huge space). But what is the “rejection region”? In other words, what is the probability of the functional target? That depends on the size of the set of functional sequences. What is that size, for a definite protein length?
It depends on how we define the function. We can define it very generically (all possible proteins of that length which are in a sense “functional”, in other words which can fold appropriately and have some kind of function in any known biological system). Or, more correctly, we can define it relative to the system we are studying (all possible proteins of that length which will have a useful, selectable function in that system). In the second case, the target set is certainly much smaller.
It is true, however, that nobody, at present, can exactly calculate the size of the target set in any specific case. We simply don’t know enough about proteins.
So, we are left with a difficulty: to calculate the probability of our functional event, we have the denominator, the search space, which is extremely huge, but we don’t have the numerator, the target space. Should we be discouraged?
Not too much. It is true that we don’t know the numerator exactly, but we can have perfectly reasonable ideas about its order of magnitude. In particular, we can be reasonably certain that the size of the target space will never be so big as to give a final probability which is within the boundaries, just to make an example, of Dembski’s UPB. Not for a 300 aa protein. And a 300 aa protein is not a very long protein. (I will not enter into details here for brevity, but here the search space is 20^300; even if it were 10^300, we would still need a target space of at least 10^150 functional proteins to ensure a probability for the event of 1:10^150, and such a huge functional space is really inconceivable, in the light of all that we know about the constraints on protein function.)
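[For scale, a quick illustrative computation of the numbers just cited:]

```python
from math import log10

# Search space for a 300-residue protein over the 20 amino acids:
print(f"20^300 ~ 10^{300 * log10(20):.0f}")   # ~10^390
# Even on the generous rounding down to a 10^300 search space, a 1-in-10^150 chance
# of hitting the functional set would require that set to hold ~10^(300-150) = 10^150
# sequences -- the point made above.
```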
That reasoning becomes even more absolute if we consider not one protein, but a whole functional system like the flagellum, made of many proteins of great length interacting for function. There, while it is true that we cannot calculate the exact size of the target space, proposing, as PO does, that it may be even remotely relevant to our problem is really pure imagination. Again, I am afraid that PO has too vague a notion of real biological systems.
So, again, PO’s objections have some theoretical grounds, but are completely irrelevant empirically, when applied to the biological systems we are considering. That is a common tactic of the darwinian field: as they cannot really counter Dembski’s arguments, they use mathematicians or statisticians to try to discredit them with technical and irrelevant objections, while ignoring the evident hole which has been revealed in their position by the same arguments. PO should be more aware that here we are discussing empirical science, and, what is more important, empirical biological science, which is in itself very different from more exact sciences, like physics, in the application of statistical procedures.
The last point against Dembski regards his arguments in favor of frequentist statistics against the Bayesian approach. This part is totally irrelevant for us, who are not pure statisticians. Indeed, it will be enough to say that, practically in all biological and medical sciences, the statistical approach is Fisherian, and is based on the rejection of the null hypothesis. So, Dembski is right for all practical applications. Indeed, PO makes a rather strange affirmation: “A null hypothesis H0 is not merely rejected; it is rejected in favor of an alternative hypothesis HA”. That is simply not true, at least in biology and medicine. H0 is rejected, and HA is tentatively affirmed if there is no other causal model which can explain the data which appear not to be random. So, the rejection of H0 is done on statistical terms (improbability of the random nature of the data), but the assertion of HA is done for methodological and cognitive reasons which have nothing to do with statistics.
The second part of PO’s essay is about Behe’s TEOE, and the famous problem of malaria resistance.
Here, PO’s arguments are not only irrelevant, but definitely confused. I’ll give some examples:
“The reason for invoking the malaria parasite is an estimate from the literature that the set of mutations necessary for chloroquine resistance has a probability of about 1 in 10^20 of occurring spontaneously.”
Yes, Behe makes that estimate from epidemiological data in the literature. But he also points out that the likely reason for that empirical frequency is that chloroquine resistance seems to require at least two different coordinated mutations, and not only one, like resistance to other drugs. Indeed, Behe’s point is that the empirical occurrence of the two kinds of resistance is in good accord with the theoretical probability for a single functional mutation and a double coordinated functional mutation. Again, PO seems to be blind to the biological aspects of the problem.
“Any statistician is bound to wonder how such an estimate is obtained, and, needless to say, it is very crude. Obviously, nobody has performed huge numbers of controlled binomial trials, counting the numbers of parasites and successful mutation events.”
But Behe’s evaluation is epidemiological, not experimental, and that is a perfectly valid approach in biology.
“Rather, the estimate is obtained by considering the number of times chloroquine resistance has not only occurred, but taken over local populations — an approach that obviously leads to an underestimate of unknown magnitude of the actual mutation rate, according to Nicholas Matzke’s review in Trends in Ecology & Evolution.”
Here PO seems to realize, somewhat late, that Behe’s argument is epidemiological, and so he makes a biological argument at last. Not so relevant, and from authority (Matzke, just to be original!). But yes, maybe there is some underestimation in Behe’s reasoning. Or maybe an overestimation. That’s the rule in epidemiological and biological hypotheses. Nobody has absolute truth.
“Behe wishes to make the valid point that microbial populations are so large that even highly improbable events are likely to occur without the need for any supernatural explanations”
No, he only makes the correct point that random events are more likely to occur in large populations than in small populations. If they are not too “highly improbable”, of course. In other words, a two amino acid coordinated functional mutation “can” occur (and indeed occurs, although rarely) in the malaria parasite. But it is almost impossible in humans. What has that to do with supernatural explanations?
“but his fixation on such an uncertain estimate and its elevation to paradigmatic status seems like an odd practice for a scientist.”
Uncertain estimates are certainly not an odd practice for a biologist. And anyway, Behe does not elevate his estimate to “paradigmatic status”: he just tries to investigate a quantitative aspect of biological reality which darwinists have always left in the dark, conveniently for them I would say, and he does that with the available data.
“He then goes on to claim that, in the human population of the last 10 million years, where there have only been about 10^12 individuals, the odds are solidly against such an unlikely event occurring even once.”
For once, that’s correct.
“On the surface, his argument may sound convincing.”
It is convincing.
“First, he leaves the concept “complexity” undefined — a practice that is clearly anathema in any mathematical analysis.”
That’s not true. He is obviously speaking of the complexity of a functional mutation which needs at least two coordinated mutations, like chloroquine resistance. That is very clear if one reads TEOE.
“Thus, when he defines a CCC as something that has a certain “degree of complexity,” we do not know of what we are measuring the degree.”
The same misunderstanding. We are talking of mutational events which require at least two coordinated mutations to be functional, like chloroquine resistance, and which in the natural model of the malaria parasite seem to occur with an approximate empirical frequency of 1-in-10^20.
“As stated, his conclusion about humans is, of course, flat out wrong, as he claims no mutation event (as opposed to some specific mutation event) of probability 1 in 10^20 can occur in a population of 10^12 individuals (an error similar to claiming that most likely nobody will win the lottery because each individual is highly unlikely to win).”
Here confusion is complete. Behe is just saying a very simple thing: that a “functional” mutation of that type cannot be expected in a population of 10^12 individuals. PO, like many, equivocates on the concept of CSI (functional specification) and brings out, for the nth time, the infamous “deck of cards” or “lottery” argument (improbable things do happen; OK, thank you, we know that).
“Obviously, Behe intends to consider mutations that are not just very rare, but also useful,”
Well, maybe PO understands the concept of CSI, after all. But then why does he speak of “error” in the previous sentence?
“Note that Behe now claims CCC is a probability; whereas, it was previously defined as a mutation cluster”
That’s just being fastidious. OK, Behe meant the probability of that cluster…
“A problem Behe faces is that “rarity” can be defined and ordered in terms of probabilities; whereas, he suggests no separate definition of “effectiveness.” For an interesting example, also covered by Behe, consider another malaria drug, atovaquone, to which the parasite has developed resistance. The estimated probability is here about 1 in 10^12, thus a much easier task than chloroquine resistance. Should we then conclude atovaquone resistance is a 100 million times worse, less useful, and less effective than chloroquine resistance? According to Behe’s logic, we should.”
Now I cannot even find a logic here. What does that mean? Atovaquone resistance has an empirically estimated probability of 1 in 10^12, which is in accord with the fact that it depends on a single amino acid mutation. What has that to do with “usefulness”, “effectiveness”, and all the rest?
“But, if a CCC is an observed relative frequency, how could there possibly have been one in the human population? As soon as a mutation has been observed, regardless of how useful it is to us, it gets an observed relative frequency of at least 1 in 10^12 and is thus very far from acquiring the magic CCC status.”
Here, PO goes mystical. CCC is an observed relative frequency in the malaria parasite. That’s why we cannot reasonably “expect” that kind of mutation as an empirical cause of functional variation in humans. What is difficult in that? Obviously, we are assuming that the causes of random mutations are similar in the malaria parasite and in humans. Unless PO wants to suggest that humans are routinely exposed to hypermutation.
“Think about it. Not even a Neanderthal mutated into a rocket scientist would be good enough; the poor sod would still decisively lose out to the malaria bug and its CCC, as would almost any mutation in almost any population.”
I have thought about it, and still can find no meaning in such an affirmation. The point here is not a sporting competition between the malaria parasite and the human race. We are only testing scientific hypotheses.
“If one of n individuals experiences a mutation, the estimated mutation probability is 1/n. Regardless of how small this number is, the mutation is easily attributed to chance because there are n individuals to try. Any argument for design based on estimated mutation probabilities must therefore be purely speculative.”
That’s just the final summary of a long paragraph which seems to make no sense. PO seems to miss the point here. We have two different theories which try to explain the same data (biological information). The first one (darwinian evolution) relies heavily on random events as causal factors. Therefore, its model must be consistent with statistical laws, both theoretically and empirically. Behe has clearly shown that that is not the case. His observations about true darwinian events (microevolution due to drug pressure) in the malaria parasite, both theoretical (number of required coordinated functional mutations and calculation of the relative probabilities) and empirical (frequency of occurrence of those mutations in epidemiological data) are in accord with a reasonable statistical model. The same model, applied to humans, cannot explain the important novel functional information that humans exhibit vs their assumed precursors. Therefore, that functional information cannot be explained by the same model which explains drug resistance in the malaria parasite.
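[The contrast drawn here can be put in one line of arithmetic, taking the ~10^20 parasite exposures implicit in the 1-in-10^20 empirical estimate and the ~10^12 humans cited earlier; an illustrative computation:]

```python
# Expected occurrences of an event with probability ~1 in 10^20:
p_ccc = 1e-20
for label, population in (("malaria parasites exposed", 1e20), ("humans, last ~10 My", 1e12)):
    print(f"{label}: expected occurrences ~ {population * p_ccc:.0e}")
# malaria parasites exposed: ~1e+00 (about once -- matching the observed resistance events)
# humans, last ~10 My:       ~1e-08 (effectively never on that scale)
```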
Does that seem clear? It is.
In the end, I will borrow PO’s final phrase:
“Careful evaluation of these arguments, however, reveals their inadequacies.”
kf,
This is Patrick from UD. Dave no longer has an Admin or Publisher account, so he cannot ban anyone, but he does have an Author account which does allow him to edit comments. My personal policy is to not argue with fellow site admins--even if I disagree with them on a decision--and to defer to the site owner even if he overrules my decisions. As such, I would suggest contacting Barry if you think he, as site owner, needs to make a decision on what the site policy should be in this regard.
Thanks Patrick
I surmised as much, following a further experience in the thread, with post 59.
Perhaps there is need for some clarification on policy.
GEM of TKI
I just received an anonymous post in I believe Russian, that begins after the salutation:
>> Если вы интересуетесь немного политикой, то должны были заметить - эти резкие волнения в странах Африки . . . >> [Roughly: "If you take even a slight interest in politics, you should have noticed these sharp upheavals in the countries of Africa . . ."]
I will not pass from moderation a comment in a language I do not understand, so could the poster try again in English?
Assuming this is not just a spam post . . .
GEM of TKI