Thursday, February 26, 2009

Following up a comment thread exchange

Arising from an incident at a blog discussing the design controversy, I was asked to intercede on behalf of a commenter from the pro-Darwinism side.

Having done so, an exchange on some substantial points has now emerged.

I put up the relevant comments below:

________________

KF: Maya:

[ . . . ]

. . . I think the man at the Clapham bus stop, presented with the case of a falling and tumbling die, would find it reasonable to perceive the aspects that are mechanical forces at work [falling] and those that are contingent, and to see that there is a difference between uncontrolled, undirected stochastic contingency [a fair die] and contingency that is directed [a loaded die].

Similarly, on being presented with a functional informational string of 1,000 bits or more of capacity, and observing its functionality, such an ordinary man would at once realise that lucky noise is a far inferior explanation to design. [Cf. this comment . . . ]

Thirdly, if an ordinary man were to stumble across a computer in a field, he would infer on "like causes like" to design. Biologists exploring the cell have stumbled across an autonomous, self-assembling nanomachine-based computer, complete with sophisticated information storage and processing beyond human technical capacity at present. On inference to best -- and empirically anchored -- explanation, you would have to come up with very strong points to lead such a person in a fair forum to conclude that the result is credibly due to chance + necessity only.

So, I am not at all sure that Rob's arguments at the thread in question -- very similar to those he has up currently in several other threads at UD -- would carry the day [ . . . ]

-----------

R0b: kairosfocus, first of all, thanks for getting my back on UD.

and to see that there is a difference between uncontrolled, undirected stochastic contingency [a fair die] and contingency that is directed [a loaded die].

Quite right. I, like most people, have no problem telling the difference between a loaded die and a fair die.

Similarly, on being presented with a functional informational string of 1,000 bits or more of capacity, and observing its functionality, such an ordinary man would at once realise that lucky noise is a far inferior explanation to design.

I agree, but with a caveat: Without a closed definition of "functional", who's to say that random strings aren't functional? They're certainly useful.

Of course, if we measure information as Durston does, namely -log2(M/N), then even very long random strings have very little information. But my sense is that most FSCI proponents measure the amount of information in a binary string by simply counting the number of binary digits.
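
To make the contrast concrete, here is a minimal Python sketch (illustrative only: M is taken as the number of sequences that perform the function and N as the total number of possible sequences of that length, with made-up figures rather than anything from the original exchange):

    import math

    def neg_log_ratio_info(m_functional, n_total):
        # Information as -log2(M/N): how sharply the functional subset
        # is carved out of the space of all possible sequences.
        return -math.log2(m_functional / n_total)

    def bit_count_info(binary_string):
        # Information measured by simply counting binary digits.
        return len(binary_string)

    length = 1000
    n_total = 2 ** length  # all possible 1,000-bit strings

    # If, illustratively, half of all strings count as "useful randomness",
    # the -log2(M/N) measure assigns the string almost no information ...
    print(neg_log_ratio_info(m_functional=n_total // 2, n_total=n_total))  # 1.0 bit

    # ... whereas counting digits assigns it the full 1,000 bits.
    print(bit_count_info("0" * length))  # 1000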

In any case, I agree with the principle that you're stating. Complex functional systems do not come about by lucky noise. There need to be some deterministic, or partially deterministic, causal forces at work.

Thirdly, if an ordinary man were to stumble across a computer in a field, he would infer on "like causes like" to design.

I would certainly infer design, but I don't know about the "like causes like" logic. Isn't it ID's position that computers, which execute strictly according to law+chance, cannot be designers?

Biologists exploring the cell have stumbled across an autonomous, self-assembling nanomachine-based computer, complete with sophisticated information storage and processing beyond human technical capacity at present. On inference to best -- and empirically anchored -- explanation, you would have to come up with very strong points to lead such a person in a fair forum to conclude that the result is credibly due to chance + necessity only.

There are two issues here: The question of whether such biological systems are designed, and the question of whether design is itself a case of chance+necessity. I haven't addressed the first question. As a non-biologist, I'm unequipped to do so, but I will say that I put more stock in the consensus opinion of experts than in that of Clapham bus riders.

As for the second question, I'll spare you any further beating on that drum.

[ . . . ]

------------

KF: On this thread:

A string of off-topic comments has been entertained.

They now need to become a separate thread if they are to continue.

I will remark on the above as follows:

1] M --> It seems DS has rescinded his decision. Besides, I have no authority on threads at UD.

2] Rob:

The core issue is very simple: like causes like.

(That is, I start from the general uniformity principle on which the founders of modern science, such as Newton, built our whole experimentally and observationally anchored approach to understanding the way the world works. [They saw laws of nature as just that: the overarching decrees of the Pantocrator for the general governing of physical creation, while leaving room for His direct actions as appropriate.] For instance, cf. Newton's General Scholium to his famous Principia, the greatest of all modern scientific works.)

So, when we see that there is a reasonable -- and longstanding -- factorisation of causal forces across chance, necessity and design, with well-recognised characteristics [necessity --> natural regularity; chance --> undirected, stochastic contingency; intelligence --> directed contingency], then that factorisation applies here as well.


A falling die falls because of necessity, and tumbles contingently to a value; if fair, in an undirected, stochastic fashion. If loaded, the contingency is significantly directed.

So also, if one were to stumble across a computer in a field, the source would be obvious, per empirically well-supported inference to best explanation. What has happened, though, is that we have stumbled across a computer in the heart of the cell. So, where does its design come from? [The above suggestion that a response that computers are not originators of designs answers this is a fallacy of irrelevance. Computers plainly manifest that they are designed; the functionally specific complex information [FSCI] in their storage media and in the hardware structures and interfaces found in them speaks eloquently to that.]

As to Durston's metric of information, you are mistaking an early citation in his 2007 paper, taken from someone else, for his real metric. He and others have developed a metric for functional sequence complexity, and have published a peer-reviewed table of 35 values of it.

Namely:
>> The measure of Functional Sequence Complexity, denoted as ζ, is defined as the change in functional uncertainty [ H(Xf(t)) = -∑ P(Xf(t)) log P(Xf(t)) . . . Eqn 1 ] from the ground state H(Xg(ti)) to the functional state H(Xf(ti)), or

ζ = ΔH (Xg(ti), Xf(tj)). [Eqn. 6] >>
Also, the context of such function is fairly easy to see: it is in this case algorithmic, and in other relevant cases it often lies in the description of a functional structure. (Random strings or structures are seldom useful in the cores of life forms, or in aircraft or arrowheads.)
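
For concreteness, here is a minimal Python sketch of that calculation, assuming a uniform ground state of log2(20) bits per aligned site (my reading of the paper's null-state approach) and summing ΔH site by site; the four-sequence alignment below is a toy illustration, not Durston's data:

    import math
    from collections import Counter

    ALPHABET_SIZE = 20  # the 20 standard amino acids

    def site_uncertainty(column):
        # Shannon uncertainty H = -sum p * log2(p) over the residues
        # observed at one aligned site of the functional sequence set.
        counts = Counter(column)
        total = sum(counts.values())
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    def functional_sequence_complexity(alignment):
        # zeta = Delta H from the ground (null) state to the functional state,
        # summed site by site; the ground state is taken as uniform.
        h_ground = math.log2(ALPHABET_SIZE)
        return sum(h_ground - site_uncertainty(col) for col in zip(*alignment))

    # Toy alignment of four hypothetical functional protein fragments:
    alignment = ["MKVL", "MKVI", "MKAL", "MKVL"]
    print(f"FSC of the toy set: {functional_sequence_complexity(alignment):.2f} fits")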

As to the idea that forces of necessity can join with chance to give rise to such functionally specific, complex information [FSCI] at the 1,000 functional-bit threshold: this is false.

PROGRAMS do that, when triggered, but they are expressions of a higher level of directed contingent action. And, here, we are dealing with a degree of required contingency such that for 1,000 bits we have ~ 10^301 possible configurations, i.e. over ten times the square of the ~ 10^150 quantum states that the 10^80 or so atoms in our observed universe can take up across a reasonably generally held estimate of its lifespan.
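
A quick arithmetic check of those figures (a sketch only; the ~10^150 state count is taken as an assumed input, as cited above):

    # 1,000 bits of capacity give 2^1000 distinct configurations.
    config_count = 2 ** 1000
    print(len(str(config_count)))              # 302 digits, i.e. about 1.07 x 10^301

    # Compare with the square of the ~10^150 quantum-state estimate cited above.
    quantum_states = 10 ** 150
    print(config_count / quantum_states ** 2)  # about 10.7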

Until we get to that level of configuration, no relevant function is there to address; and in the case of life, observed life forms have DNA cores that effectively start at 600,000 bits. It is only at that level that we have independent cells that reproduce themselves, and so become subject to the probabilistic culler based on differential reproductive success that is commonly called natural selection, which is often held to be a manifestation of "necessity." Natural selection may help explain the survival of the fittest, but it is helpless to explain the arrival thereof.

So, if instead one wants to say that the DNA-ribosome-enzyme etc. engine in the heart of the cell was somehow written into the laws of our cosmos through (as yet unobserved . . . ) laws of spontaneous complex organisation, one is effectively saying that the cosmos was programmed to trigger such life forms on getting to a plausible prebiotic soup.

The first problem with that is the obvious one: origin of life [OOL] researchers testify that not even under unrealistically generous prebiotic soup conditions does one see life systems emerging. And it is a defining characteristic of natural laws that, once circumstances are right, they act more or less immediately: just drop a die.

Indeed, we see instead that what we expect from the well-supported principles of statistical thermodynamics happens: low-complexity, energetically favourable molecules, and only rather short chains, tend to form. Indeed, famed OOL researcher Robert Shapiro, in words that inadvertently also apply to his own metabolism-first model, acidly remarks on the popular RNA-world OOL hypothesis:
>>RNA's building blocks, nucleotides, are complex substances as organic molecules go. They each contain a sugar, a phosphate and one of four nitrogen-containing bases as sub-subunits. Thus, each RNA nucleotide contains 9 or 10 carbon atoms, numerous nitrogen and oxygen atoms and the phosphate group, all connected in a precise three-dimensional pattern. Many alternative ways exist for making those connections, yielding thousands of plausible nucleotides that could readily join in place of the standard ones but that are not represented in RNA. That number is itself dwarfed by the hundreds of thousands to millions of stable organic molecules of similar size that are not nucleotides . . . . inanimate nature has a bias toward the formation of molecules made of fewer rather than greater numbers of carbon atoms, and thus shows no partiality in favor of creating the building blocks of our kind of life . . . I have observed a similar pattern in the results of many [Miller-Urey-type] spark discharge experiments . . . .

The analogy that comes to mind is that of a golfer, who having played a golf ball through an 18-hole course, then assumed that the ball could also play itself around the course in his absence. He had demonstrated the possibility of the event; it was only necessary to presume that some combination of natural forces (earthquakes, winds, tornadoes and floods, for example) could produce the same result, given enough time. No physical law need be broken for spontaneous RNA formation to happen, but the chances against it are so immense, that the suggestion implies that the non-living world had an innate desire to generate RNA. The majority of origin-of-life scientists who still support the RNA-first theory either accept this concept (implicitly, if not explicitly) or feel that the immensely unfavorable odds were simply overcome by good luck. [Scientific American, Feb. 2007.] >>
Such is also at least suggested by the fact that our galactic neighbourhood, an obvious habitable zone, seems to be rather quiet for such a zone (starting with our own solar system), if there is indeed a blind, life-facilitating program in the laws of nature that naturally promotes the origin of life and its diversification up to intelligent life such as we manifest.

Moreover, such a program, if it were to be observed as a law of nature, would strongly point to an extra-cosmic intelligence as the designer of the observed cosmos. (That is, it would be a case of front-loading design into the very fabric of the cosmos as a mechanism for design; rather than being an alternative to design.)

In short, for excellent reasons, chance + necessity is simply not a plausible designer of a sophisticated information system.

Finally, in this context, biologists are not only not experts on information systems, but -- sadly -- thanks to the implications of decades of Lewontinian a priori materialism being embedded into biological education and into the institutions of science, are not likely to look at the evidence with the rough and ready, open-minded common sense approach of the man at the Clapham bus stop. (This, BTW, is one of the most credible explanations for the sharp gap between the views of most people [including a lot of people who are knowledgeable and experienced on what it takes to develop information systems and on related information theory and computer science] and relevant groups of such materialistically indoctrinated scientists.)

For, as we may read:
>> Our willingness to accept scientific claims that are against common sense is the key to an understanding of the real struggle between science and the supernatural. We take the side of science in spite of the patent absurdity of some of its constructs, in spite of its failure to fulfill many of its extravagant promises of health and life, in spite of the tolerance of the scientific community for unsubstantiated just-so stories, because we have a prior commitment, a commitment to materialism. It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door. [Lewontin, NY Review of Books, 1997; sadly, now enforced officially through decrees of the US National Academy of Sciences, etc.] >>
Plainly, "consensus" in such a question-begging, ideologised context -- as has been known for 2400 years -- is worse than useless; it may actually hinder our ability to seek the best explanation for what we have stumbled upon, not in a field but in the heart of the cell: a computer.

_______________

Any further discussion should be addressed in this thread. END