In recent days, I was informed of a change in management at the blog, Uncommon Descent, which materially altered circumstances there.
In that context, I decided to revisit the blog, and give greetings and well wishes.
I then spent some time monitoring developments there, which were significant and in my opinion, generally positive.
Some issues there seemed to require inputs of the sort I had made when I was formerly a regular commenter [as part of a now more or less completed recon in force on trends affecting the Caribbean, in the context of the TKI's remit]. So, in recent days I have made some balancing and corrective remarks on two threads: one on horrid doubts, and a multiverse discussion. (You will observe from the former that off-thread links are often not followed up, even by principals in an exchange; that is in fact the immediate context for the length of the deleted post reproduced below.)
However, this is all in a fairly narrow context. Many who come here may be wondering what the fuss is all about.
So, for those needing an overview of the significance of the Intelligent Design issue, and/or balancing remarks on why it is in fact -- many criticisms by objectors notwithstanding -- a legitimate scientific enterprise, I present just below the synopsis of my online briefing note. (NB: This note was always linked through my handle in every post I made at UD.):
SYNOPSIS: The raging controversy over inference to design too often puts out more heat and smoke than light. However, as Cicero pointed out [ . . . ], the underlying issues are of such great importance, that all educated people need to think about them carefully. Accordingly, in the below we shall examine a cluster of key cases, starting with the often overlooked but crucial point that in communication systems, we first start with an apparent message, then ask how much information is in it. This directly leads to the question first posed by Cicero, as to what is the probability that the received apparent message was actually sent, or is it "noise that got lucky"? The solution to this challenge is in fact an implicit inference to design, and it is resolved by essentially addressing the functionality and complexity of what was received, relative to what it is credible -- as opposed to logically possible -- that noise could do. That is, the design inference has long since been deeply embedded in scientific thought, once we have had to address the issue: what is information? Then, we look at several key cases: informational macromolecules such as DNA, the related origins of observed biodiversity, and cosmological finetuning. Thereafter, the issue is broadened to look at the God of the gaps challenge. Finally, a technical note on thermodynamics and several other technical or detail appendices are presented: (p) a critical look at the Dover/ Kitzmiller case (including a note on Plato on chance, necessity and agency in The Laws, Bk X), (q) the roots of the complex specified information concept, (r) more details on chance, necessity and agency, (s) Newton in Principia as a key scientific founding father and design thinker, (t) Fisherian vs Bayesian statistical inference in light of the Caputo case, and (u) the issue of the origin and nature of mind. Cf. Notice below. (NB: For FAQs and primers go here. 
This Y-zine also seems to be worth a browse.)

The -- still very relevant -- c. 50 BC remark by Cicero is:
Is it possible for any man to behold these things, and yet imagine that certain solid and individual bodies move by their natural force and gravitation, and that a world so beautifully adorned was made by their fortuitous concourse? He who believes this may as well believe that if a great quantity of the one-and-twenty letters, composed either of gold or any other matter, were thrown upon the ground, they would fall into such order as legibly to form the Annals of Ennius. I doubt whether fortune could make a single verse of them. How, therefore, can these people assert that the world was made by the fortuitous concourse of atoms, which have no color, no quality—which the Greeks call [poiotes], no sense? [Cicero, THE NATURE OF THE GODS BK II Ch XXXVII, C1 BC, as trans Yonge (Harper & Bros., 1877), pp. 289 - 90.]

Then, yesterday, I saw a thread on a matter long ago discussed at length with Professor PO [cf. my briefing note appendix on the issue here, and another on some linked broader issues here].
I noted in particular that commenter R thought he had undercut the general validity of Dembski's Explanatory Filter.
IMO, he is mistaken: he makes key logical errors, including strawman misrepresentations of the filter and its use, errors on the issue of false positives and false negatives in statistical inference testing, and, most importantly, a self-referential inconsistency in using the filter intuitively even as he set out to "refute" it.
But, such arguments are often persuasive, especially when they are not promptly and fully rebutted.
No such rebuttal was present.
I therefore thought that a response at a somewhat deeper level than was present at the time of my post was indicated, and built one based on my general briefing note, in light of an earlier, well presented comment by commenter GP.
Alas -- even as I was typing up and posting -- GP's comment was deleted by the poster of the original post, on the claim that its length was inappropriate.
Sadly, GP's apt remarks on Behe's use of epidemiological statistical analyses to identify the empirically observed boundary to Darwinian-style evolutionary mechanisms [NB: the malaria parasite undergoes in one year about as many reproductive events as all Mammalia would have on the usually accepted bio-geological timeline], and much more, seemed to have been permanently lost. [At first I thought the comment had been captured by my save-off of the thread on posting, but it had already been deleted by the thread owner, and subsequently no one was able to capture a cached copy. UPDATE: I see that it was captured by ES58, and was briefly put back up by DS to allow GP to copy it. Further update: the comment is now published here, and also appears in the comment below, courtesy ES58.]
Shortly thereafter, the comment reproduced below was also deleted, on the same reported grounds. (Of course, if this is UD policy, it is a new one, previously unknown to both GP and myself, and contrary to customary practice at the blog. UD has long been known for the importance of its comment exchanges, many of them quite substantial.)
On observing the remark by commenter Tribune that I had a good answer, which he hoped someone had saved, and on receiving a separate request for a copy of the comment, I have decided not to let the matter rest, but to put the comment up here, and to make a brief announcement in the thread that it may be seen here.
I trust this will be acceptable to all.
GEM of TKI
PS: I see that commenter R has subsequently gone on to make remarks on Bayesian probability issues. I therefore refer him to the notes here, in the appendix to my general note on Information, Design, Science etc. As he will see from the below, Fisherian-style statistical inference by elimination [to some confidence level] is still a very relevant approach in real-world decision making, at intuitive and formal levels. Indeed, in the lost post, GP remarked that for most real-world epidemiological work that is what is used; and that is on the very simple and solid ground that, for a reasonably scaled sample, distribution tails beyond a reasonable cutoff SHOULD not -- on probabilistic grounds -- appear. Thence, we see the significance of the Dembski-style explanatory filter across chance, necessity and agency. [Note to R and DS: I have simply linked for those willing to investigate further for themselves; I will make no attempt to carry out a clumsy cross-blog debate on a technical matter.]
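The point about tails beyond a reasonable cutoff can be illustrated with a small calculation (a sketch in Python; the sample size and cutoff are my illustrative choices, not figures from the original exchange):

```python
from math import comb

# Fisherian-style elimination: under the "chance" null hypothesis
# (here, a fair coin), how likely is a sample far out in the tail?
n = 500        # sample size (illustrative choice)
cutoff = 300   # observed number of heads (illustrative choice)

# Exact upper-tail probability P(X >= cutoff) for X ~ Binomial(n, 1/2)
p_tail = sum(comb(n, k) for k in range(cutoff, n + 1)) / 2**n

print(f"P(X >= {cutoff} of {n}) = {p_tail:.3e}")
# A tail probability this small is why, in practice, such samples
# SHOULD not appear by chance; observing one warrants rejecting the
# chance hypothesis at the chosen confidence level.
```

The exact binomial sum plays the role of the rejection-region probability in a formal significance test; the same logic underlies the intuitive judgment that a run of 300 heads in 500 tosses was not produced by a fair coin.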
APPENDIX: The deleted post, slightly cleaned up
[Nov 26, 2008, UD thread on Prof PO's ID critique paper at http://www.uncommondescent.com/intelligent-design/some-thanks-for-professor-olofsson/ ]
ID needs its own Eddington to come up with experiments that support ID while disconfirming NDE. . . . . If Dembski’s filter indicates design where there is none — if it cries wolf, in other words — then we can’t trust it, and our question about the existence of design in the biological world remains unanswered . . . .

1. Without accurate probability distributions, Dembski’s method cannot avoid false positives.
2. Dembski concedes that if his method were to generate false positives, it would be useless.
3. To form an accurate probability distribution, we must know all of the possible ways in which the structure in question could have arisen.
4. As DaveScot points out, this is impossible, and so we can never be sure we have an accurate probability distribution.
5. Therefore, according to Dembski’s own criteria, his method is useless.
Test 1: Lo/hi contingency — if low or no contingency, the best explanation is a lawlike regularity tracing to mechanical necessity. (Cf: where we have heat, air and fuel, we reliably have a fire, due to mechanical necessity. Similarly, unsupported heavy objects fall.)

Result: But in fact we see high contingency — e.g. 128-state ASCII characters — so explanation should revert to [a] chance [evidently undirected contingency] or [b] design [purposefully directed contingency].

Test 2: Complex [beyond the UPB or a relevant looser criterion] and [especially functionally] specified?

–> Here, the issue is that on the presumption of undirected contingency, we have no reason to prefer any one configuration over another, so the relative statistical weights of clusters of configurations will dominate likely observations if chance is the driving force.

–> And if we have some shift from Laplacian indifference, we can factor that in. [For instance, we routinely do so in assessing codes. In English, E is the most common letter, X is rather unlikely, and Q is usually followed by U. But, in the 1930's someone wrote an entire book in which E does not appear even once.]

–> By contrast, we know from experience and observation that designers direct contingencies to fulfill purposes; so, clusters of functionally specific, complex configurations that would otherwise be overwhelmingly unlikely to occur are in fact seen.

–> So, if we can identify clusters of accessible configurations that are vastly unlikely to occur by chance, but are functionally specified, we can be confident [note: not "certain"; demanding certainty would be selectively hyperskeptical] that design was at work.
RESULT: in this case, we see sufficiently long passages of ASCII text in English to conclude “design” with high confidence.
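The two-test cascade just described can be sketched as a small program (Python; the 500-bit threshold is the commonly cited universal-probability-bound figure, while the function signature and the boolean specification input are my illustrative stand-ins, not Dembski's formal apparatus):

```python
import math

def explanatory_filter(states_per_element: int, length: int, is_specified: bool) -> str:
    """Toy cascade over the three causal factors: necessity, chance, design.

    states_per_element: states each element can take (1 => no contingency)
    length: number of elements in the observed configuration
    is_specified: does the configuration sit in an independently
                  describable (e.g. functional) target zone?
    """
    # Test 1: low/no contingency -> lawlike regularity (mechanical necessity)
    if states_per_element <= 1:
        return "necessity"

    # Test 2: complex AND specified -> design; otherwise default to chance.
    info_bits = length * math.log2(states_per_element)
    if info_bits > 500 and is_specified:
        return "design"
    return "chance"

# A 143-character ASCII string carries 143 * 7 = 1001 bits; functionally
# specified as readable English text, it passes both tests.
print(explanatory_filter(128, 143, True))   # design
print(explanatory_filter(128, 10, True))    # too short -> chance
print(explanatory_filter(1, 100, False))    # no contingency -> necessity
```

Note the deliberate asymmetry: a short or unspecified configuration defaults to "chance", so the filter is biased toward false negatives rather than false positives, which is the point at issue in the exchange above.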
Men have reason to be well satisfied with what God hath thought fit for them, since he hath given them (as St. Peter says [NB: i.e. 2 Pet 1:2 - 4]) panta pros zoen kai eusebeian, whatsoever is necessary for the conveniences of life and information of virtue; and has put within the reach of their discovery, the comfortable provision for this life, and the way that leads to a better. How short soever their knowledge may come of an universal or perfect comprehension of whatsoever is, it yet secures their great concernments [Prov 1: 1 - 7], that they have light enough to lead them to the knowledge of their Maker, and the sight of their own duties [cf Rom 1 - 2, Ac 17, etc, etc]. Men may find matter sufficient to busy their heads, and employ their hands with variety, delight, and satisfaction, if they will not boldly quarrel with their own constitution, and throw away the blessings their hands are filled with, because they are not big enough to grasp everything . . . It will be no excuse to an idle and untoward servant [Matt 24:42 - 51], who would not attend his business by candle light, to plead that he had not broad sunshine. The Candle that is set up in us [Prov 20:27] shines bright enough for all our purposes . . . If we will disbelieve everything, because we cannot certainly know all things, we shall do muchwhat as wisely as he who would not use his legs, but sit still and perish, because he had no wings to fly.
The first challenge of abiogenesis is to start from the 0 square [genome length zero] and, in a plausible chemical environment, (i) get to a viable and sustainable prebiotic soup with the requisite monomers in it, then (ii) move to the first islands of function [in the vast pacific of in-principle possible configurations]. (For the moment, we will simply assume for the sake of argument that once a proto-genome reaches the shores of a viable island, it may then proceed through hill-climbing processes such as “random variation” and “natural selection” [i.e. culling based on differential average reproductive success] to move to the mountains of peak performance in that bio-functional niche.)

The immediate problem is that the first such observed islands are of order 100,000 - 1,000,000 base pairs; and that in a context where the organisms at the lower end are typically dependent on more complex life forms to make life-components they cannot make for themselves. The relevant 1-million-element chain-length sub-space has about 1.010 * 10^698,970 configurations, a very large number indeed, which will easily swamp the search resources of the observed cosmos. Even if we take 100,000 elements as a working lower limit, that puts us at 1.001 * 10^69,897 configurations; still well beyond the available search resources of the observed cosmos . . . .

Why is that so?

First, biofunction is observed to be code-based and specific, i.e. it is vulnerable to perturbation. For instance, three of the 64 possible three-letter codons code for STOP. So, immediately, if we form a substring of three DNA letters at random, the odds are just under 5% that they will be a stop codon . . . . Consider a hypothetical genome that requires 100 “necessary” proteins, each with just 100 residues, using altogether 10,000 codons, or 30,000 DNA base pairs. This will require 10,000 codons without an accidental stop in the wrong place, to get the required complement of proteins.
The fraction of such 30,000-length genomes that would not be truncated by wrongly placed stop codons is (61/64)^10,000 ~ 1 in [3 * 10^208]. This in itself would make it maximally unlikely that we would get by chance concatenation of DNA elements to a single such minimally functional genome; on the gamut of our observed universe across its typically estimated lifetime. (Nor will autocatalysis of RNA molecules in a hypothetical RNA world, get us to bio-functional, protein-making DNA codes.)
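The figures above can be checked with logarithms, which avoid astronomically large integers (a Python sketch; note that the configuration counts quoted in the post correspond to five states per chain element, which I take as given rather than argue for here):

```python
import math

def count_as_sci(states: int, n: int):
    """Express states**n in scientific notation (mantissa, exponent) via log10."""
    log10_total = n * math.log10(states)
    exponent = int(log10_total)
    mantissa = 10 ** (log10_total - exponent)
    return mantissa, exponent

# Configuration counts for 1,000,000- and 100,000-element chains, 5 states each
m1, e1 = count_as_sci(5, 1_000_000)
print(f"5^1,000,000 ~ {m1:.3f} * 10^{e1}")   # ~ 1.010 * 10^698970
m2, e2 = count_as_sci(5, 100_000)
print(f"5^100,000   ~ {m2:.3f} * 10^{e2}")   # ~ 1.001 * 10^69897

# Stop-codon survival: fraction of 10,000-codon reading frames with no
# misplaced stop is (61/64)^10,000; again computed in log space.
log10_frac = 10_000 * math.log10(61 / 64)
print(f"(61/64)^10,000 ~ 10^{log10_frac:.1f}")  # ~ 10^-208.5, i.e. 1 in ~3*10^208
```

Working in log space is the standard trick for such combinatorial estimates: the exponent falls out directly, and the mantissa is recovered from the fractional part of the logarithm.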
There is no need to demand “accurate” probability distributions, nor is there any need to accept the super-task of specifically identifying all possible pathways for observed configurations to emerge. (Onlookers: Notice the selectively hyperskeptical shift of burden of proof here.)
[a] the EF rules “design” and we directly know the causal story separately, well enough to see that it has committed a false positive; or,

[b] we see an entity storing more than 500 - 1,000 bits of information and showing functional specification, where the filter will rule “NOT DESIGNED” where in fact it is designed.
I trust the above will be helpful. END