Thursday, February 26, 2009

Following up a comment thread exchange

Arising from an incident at a blog discussing the design controversy, I was asked to intercede on behalf of a commenter from the pro-Darwinism side.

Having done so, an exchange on some substantial points has now emerged.

I put up the relevant comments below:

________________

KF: Maya:

[ . . . ]

. . . I think the man in the Clapham bus stop, presented with the case of a falling and tumbling die, would find it reasonable to perceive the aspects that are mechanical forces at work [falling], those that are contingent, and to see that there is a difference between uncontrolled, undirected stochastic contingency [a fair die] and contingency that is directed [a loaded die].

Similarly, on being presented with a functional informational string of 1,000 bits or more of capacity, and observing its functionality, such an ordinary man would at once realise that lucky noise is a far inferior explanation to design. [Cf this comment . . . ]

Thirdly, if an ordinary man were to stumble across a computer in a field, he would infer on "like causes like" to design. Biologists exploring the cell have stumbled across an autonomous, self-assembling nanomachine-based computer, complete with sophisticated information storage and processing beyond human technical capacity at present. On inference to best -- and empirically anchored -- explanation, you would have to come up with very strong points to lead such a person in a fair forum to conclude that the result is credibly a chance + necessity only one.

So, I am not at all sure that Rob's arguments at the thread in question -- very similar to those he has up currently in several other threads at UD -- would carry the day [ . . . ]

-----------

R0b: kairosfocus, first of all, thanks for getting my back on UD.

and to see that there is a difference between uncontrolled, undirected stochastic contingency [a fair die] and contingency that is directed [a loaded die].

Quite right. I, like most people, have no problem telling the difference between a loaded die and a fair die.

Similarly, on being presented with a functional informational string of 1,000 bits or more of capacity, and observing its functionality, such an ordinary man would at once realise that lucky noise is a far inferior explanation to design.

I agree, but with a caveat: Without a closed definition of "functional", who's to say that random strings aren't functional? They're certainly useful.

Of course, if we measure information as Durston does, namely -log2(M/N), then even very long random strings have very little information. But my sense is that most FSCI proponents measure the amount of information in a binary string by simply counting the number of binary digits.
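To make the contrast concrete, here is a toy Python sketch; the counts M and N below are made-up illustrative numbers, not figures from any paper:

```python
from math import log2

# Toy illustration (the numbers are hypothetical): two ways of assigning
# "information" to a binary string of length L.

L = 100                  # length of the string in binary digits
N = 2 ** L               # total number of possible strings of that length
M = 2 ** 60              # assumed number of those strings that perform the function

capacity_bits = L                 # (a) naive measure: just count the binary digits
functional_bits = -log2(M / N)    # (b) -log2(M/N): 40 bits for these assumed counts

print(capacity_bits, functional_bits)
```

Measure (b) tracks how tightly the function constrains the string, which is why even very long random strings can score low on it; measure (a) gives every string its full length regardless.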

In any case, I agree with the principle that you're stating. Complex functional systems do not come about by lucky noise. There need to be some deterministic, or partially deterministic, causal forces at work.

Thirdly, if an ordinary man were to stumble across a computer in a field, he would infer on "like causes like" to design.

I would certainly infer design, but I don't know about the "like causes like" logic. Isn't it ID's position that computers, which execute strictly according to law+chance, cannot be designers?

Biologists exploring the cell have stumbled across an autonomous, self-assembling nanomachine-based computer, complete with sophisticated information storage and processing beyond human technical capacity at present. On inference to best -- and empirically anchored -- explanation, you would have to come up with very strong points to lead such a person in a fair forum to conclude that the result is credibly a chance + necessity only one.

There are two issues here: The question of whether such biological systems are designed, and the question of whether design is itself a case of chance+necessity. I haven't addressed the first question. As a non-biologist, I'm unequipped to do so, but I will say that I put more stock in the consensus opinion of experts than in that of Clapham bus riders.

As for the second question, I'll spare you any further beating on that drum.

[ . . . ]

------------

KF: On this thread:

A string of off-topic comments has been entertained.

They now need to become a separate thread if they are to continue.

I will note on the above as follows:

1] M --> It seems DS has rescinded his decision. Besides, I have no authority on threads at UD.

2] Rob:

The core issue is very simple: like causes like.

(That is, I start from the general uniformity principle on which the founders of modern science such as Newton built our whole experimentally and observationally anchored approach to understanding the way the world works. [They saw the laws of nature as just that: the overarching decrees of the Pantocrator for the general governing of physical creation, while leaving room for his direct actions as appropriate.] For instance, cf. Newton's General Scholium to his famous Principia, the greatest of all modern scientific works.)

So, when we see that there is a reasonable -- and longstanding -- factorisation of causal forces across chance, necessity and design, with well-recognised characteristics [necessity --> natural regularity; chance --> undirected, stochastic contingency; intelligence --> directed contingency], then that applies.


A falling die falls because of necessity, and tumbles contingently to a value; if fair, in an undirected, stochastic fashion. If loaded, the contingency is significantly directed.

So also, if one were to stumble across a computer in a field, the source would be obvious, per empirically well supported inference to best explanation. What has happened, though, is that we have stumbled across a computer in the heart of the cell. So, where does its design come from? [The above suggestion that a response that computers are not originators of designs answers this is a fallacy of irrelevancy. Computers plainly manifest that they are designed; the functionally specific complex information [FSCI] in their storage media and in the hardware structures and interfaces found in them speaks eloquently to that.]

As to Durston's metric of information, you are mistaking an early citation in his 2007 paper from someone else for his real metric. He, with others, has developed a metric for functional sequence complexity, and has published in the peer-reviewed literature a table of values thereof for 35 protein families.

Namely:
>> The measure of Functional Sequence Complexity, denoted as ζ, is defined as the change in functional uncertainty [ H(Xf(t)) = -∑P(Xf(t)) logP(Xf(t)) . . . Eqn 1 ] from the ground state H(Xg(ti)) to the functional state H(Xf(ti)), or

ζ = ΔH (Xg(ti), Xf(tj)). [Eqn. 6] >>
Also, the context of such function is fairly easy to see: it is in this case algorithmic, and in other relevant cases is often in the description of a functional structure. (Random strings or structures seldom are useful in the cores of life forms or in aircraft or arrowheads.)
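To illustrate the gist of that Eqn 6 metric, here is a rough Python sketch under simplifying assumptions: the aligned "functional" sequences below are a made-up toy set, and the published method's handling of null states and sequence weighting is omitted:

```python
from math import log2
from collections import Counter

# Toy Durston-style calculation: sum, over aligned sites, of the drop in
# Shannon uncertainty from the ground state (maximal, log2(20) per site)
# to the functional state (estimated from residue frequencies at that site).

toy_alignment = [        # hypothetical aligned functional sequences
    "MKVLA",
    "MKVLG",
    "MRVLA",
    "MKILA",
]
ALPHABET_SIZE = 20       # amino acids

def site_uncertainty(column):
    """H = -sum p * log2(p) over residues observed at this aligned site."""
    counts = Counter(column)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

ground_H = log2(ALPHABET_SIZE)                       # H(Xg) per site
zeta = sum(ground_H - site_uncertainty(col)          # ΔH summed over sites,
           for col in zip(*toy_alignment))           # in functional bits ("fits")

print(round(zeta, 2))
```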

As to the idea that forces of necessity can join with chance to give rise to such functionally specific, complex information [FSCI], at the 1,000 functional bit threshold, this is false.

PROGRAMS do that, when triggered, but they are expressions of a higher level of directed contingent action. And, here, we are dealing with a degree of required contingency such that for 1,000 bits we have ~10^301 states, i.e. over ten times the square of the number of quantum states that the 10^80 or so atoms in our observed universe could run through across a reasonably generally held estimate of its lifespan.
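(A quick check of that arithmetic; the 10^150 total-states figure below is the commonly used estimate assumed in this argument, an input to the illustration rather than something computed here:)

```python
# 1,000 bits give 2^1000 distinct configurations; compare with the square of
# the assumed 10^150 total-states figure for the observed cosmos.

config_count = 2 ** 1000
universe_states = 10 ** 150          # assumed figure used in the argument above

print(len(str(config_count)))                        # 302 digits, i.e. ~1.07 x 10^301
print(config_count > 10 * universe_states ** 2)      # True: over ten times the square
```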

Until we get to that level of configuration, no relevant function is there to address, and in the case of life, observed life forms have DNA cores that effectively start at 600,000 bits. It is only at that level that we have independent cells that are reproducing themselves, and so will be subject to the probabilistic culler based on differential reproductive success that is commonly called natural selection; which is often held to be a manifestation of "necessity." Natural selection may help explain the survival of the fittest, but it is helpless to explain the arrival thereof.

So, if instead, one wants to say that the DNA-ribosome-enzyme etc engine in the heart of the cell was somehow written into the laws of our cosmos through (as yet unobserved . . . ) laws of spontaneous complex organisation; one is effectively saying that that cosmos was programmed to trigger such life forms on getting to a plausible prebiotic soup.

The first problem with that is the obvious one: origin of life [OOL] researchers testify that not even under unrealistically generous prebiotic soup conditions does one see life systems emerging. And, it is a defining characteristic of natural laws, that once circumstances are right, they act more or less immediately: just drop a die.

Indeed, we see instead, that what we expect from the well-supported principles of statistical thermodynamics happens: low complexity, energetically favourable molecules and only rather short chains tend to form. Indeed, famed OOL researcher Robert Shapiro, remarking on the popular RNA world OOL hypothesis (in words that inadvertently also apply to his own metabolism first model) acidly remarks:
>>RNA's building blocks, nucleotides, are complex substances as organic molecules go. They each contain a sugar, a phosphate and one of four nitrogen-containing bases as sub-subunits. Thus, each RNA nucleotide contains 9 or 10 carbon atoms, numerous nitrogen and oxygen atoms and the phosphate group, all connected in a precise three-dimensional pattern. Many alternative ways exist for making those connections, yielding thousands of plausible nucleotides that could readily join in place of the standard ones but that are not represented in RNA. That number is itself dwarfed by the hundreds of thousands to millions of stable organic molecules of similar size that are not nucleotides . . . . inanimate nature has a bias toward the formation of molecules made of fewer rather than greater numbers of carbon atoms, and thus shows no partiality in favor of creating the building blocks of our kind of life . . . I have observed a similar pattern in the results of many [Miller-Urey-type] spark discharge experiments . . . .

The analogy that comes to mind is that of a golfer, who having played a golf ball through an 18-hole course, then assumed that the ball could also play itself around the course in his absence. He had demonstrated the possibility of the event; it was only necessary to presume that some combination of natural forces (earthquakes, winds, tornadoes and floods, for example) could produce the same result, given enough time. No physical law need be broken for spontaneous RNA formation to happen, but the chances against it are so immense, that the suggestion implies that the non-living world had an innate desire to generate RNA. The majority of origin-of-life scientists who still support the RNA-first theory either accept this concept (implicitly, if not explicitly) or feel that the immensely unfavorable odds were simply overcome by good luck. [Scientific American, Feb. 2007.] >>
Such is also at least suggested by the fact that our galactic neighbourhood, an obvious habitable zone, seems to be rather quiet for such a zone (starting with our own solar system), if there is a blind life-facilitating program in the laws of nature that naturally promotes origin of life and diversification up to intelligent life such as we manifest.

Moreover, such a program, if it were to be observed as a law of nature, would strongly point to an extra-cosmic intelligence as the designer of the observed cosmos. (That is, it would be a case of front-loading design into the very fabric of the cosmos as a mechanism for design; rather than being an alternative to design.)

In short, for excellent reasons, chance + necessity is simply not a plausible designer of a sophisticated information system.

Finally, in this context, biologists are not only not experts on information systems, but -- sadly -- thanks to the implications of decades of Lewontinian a priori materialism being embedded into biological education and into the institutions of science, are not likely to look at the evidence with the rough and ready, open-minded common sense approach of the man at the Clapham bus stop. (This, BTW, is one of the most credible explanations for the sharp gap between the views of most people [including a lot of people who are knowledgeable and experienced on what it takes to develop information systems and on related information theory and computer science] and relevant groups of such materialistically indoctrinated scientists.)

For, as we may read:
>> Our willingness to accept scientific claims that are against common sense is the key to an understanding of the real struggle between science and the supernatural. We take the side of science in spite of the patent absurdity of some of its constructs, in spite of its failure to fulfill many of its extravagant promises of health and life, in spite of the tolerance of the scientific community for unsubstantiated just-so stories, because we have a prior commitment, a commitment to materialism. It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door. [Lewontin, NY Review of Books, 1997; sadly, now enforced officially through decrees of the US National Academy of Sciences, etc.] >>
Plainly, "consensus" in such a question-begging, ideologised context -- as has been known for 2400 years -- is worse than useless; it may actually hinder our ability to seek the best explanation for what we have stumbled upon, not in a field but in the heart of the cell: a computer.

_______________

Any further discussion should be addressed in this thread. END

14 comments:

Anonymous said...

KF:

1] M --> It seems DS has rescinded his decision.

I just checked the Complex Specified Information thread dated February 20th and see no evidence of that. Has he spoken of it in another thread?

Thanks,

Maya

GEM of The Kairos Initiative said...

Maya

I don't know; I may have read too much into Rob's remarks as shown above.

GEM of TKI

Ilíon said...

KairosFocus: "Similarly, on being presented with a functional informational string of 1,000 bits or more of capacity, and observing its functionality, such an ordinary man would at once realise that lucky noise is a far inferior explanation than design."

Rob: "I agree, but with a caveat: Without a closed definition of "functional", who's to say that random strings aren't functional? They're certainly useful.

Of course, if we measure information as Durston does, namely -log2(M/N), then even very long random strings have very little information. But my sense is that most FSCI proponents measure the amount of information in a binary string by simply counting the number of binary digits.
"

Rob, you're making a mistake about 'information' (and perhaps GEM is, too).

No string of bits, no matter how long or complex, is inherently informational -- 'information' exists only "within" (for lack of better word) minds. On the contrary, all such strings are inherently meaningless. What you're doing here, whether you realize it or not, is equating information *about* the hypothetical string with the string itself.

Allow me to offer a concrete example.

Before the discovery of the Rosetta Stone, our scholars did not know how to read ancient Egyptian inscriptions. To use the frustratingly imprecise (and thus, misleading) language which people insist upon using, we were ignorant of the information contained within ancient Egyptian inscriptions (*).

However, scholars could describe the symbols the Egyptians used. They could assign names to each discrete shape, and thus make it easier to talk about any specific inscription. Scholars could, had they chosen, have held learnéd conferences concerning this or that inscription, all the while having no idea at all what information the ancient Egyptians had intended to convey by any inscription.

Now, these acts our scholars could have undertaken are analogous to what you're talking about in the statement I quoted. The scholars, while being ignorant of the symbolic conventions used by the ancient Egyptians, could nonetheless create information about the symbols and the inscriptions. Likewise, even while being ignorant of whether some specific string is/was intended to convey information, and/or what that information is, we can nonetheless create information about the string.

When someone does the sort of thing you're mentioning, he isn't "measuring the amount of information" contained in the string (which contains no information, in any event). Rather, he's creating new information which happens to be about the string in which he's interested.



(*) The truth is, of course, that ancient Egyptian inscriptions no more "contain" information than does this post. Rather, the inscriptions, and the content of this post, symbolize information -- inherently meaningless shapes are being used by some agents/minds to stand for some information or other; if some other agent/mind knows the conventions of how the inherently meaningless symbols are being used, then he can use his knowledge to re-create the information for himself.

GEM of The Kairos Initiative said...

Ilion:

An observation or two:

1: No string of bits, no matter how long or complex, is inherently informational -- 'information' exists only "within" (for lack of better word) minds. On the contrary, all such strings are inherently meaningless. What you're doing here, whether you realize it or not, is equating information *about* the hypothetical string with the string itself.

"Information," of course, has several meanings.

In one of those senses, an algorithmically functional digital bit string may properly be said to be informational, by contrast with, say, a typical at-random, stochastically generated string of similar length, or a string that endlessly repeats a short sequence such as . . . khtkhtkht . . .

This is what Trevors and Abel characterise in their discussion of functional vs random vs orderly sequence complexity.

Functionality, in this setting, is also strongly contextual: a program that executes a given step in a given machine needs a particular kind of string of a particular length at a given time. (So, by far, most text strings from the space of configurations will be non-functional.)

So, we are speaking of information in largely an algorithmically functional context, not a semantic interpretation one.

2] Before the discovery of the Rosetta Stone, our scholars did not know how to read ancient Egyptian inscriptions. To use the frustratingly imprecise (and thus, misleading) language which people insist upon using, we were ignorant of the information contained within ancient Egyptian inscriptions

They were able to recognise them as functional in a communicative context, and not merely decorative. That is why they sought a key to their meaning, one in the main provided by the Rosetta Stone.

Thus, they recognised that a code was at work, and that it expressed likely functional content, though likely linguistic rather than algorithmic.

3] When someone does the sort of thing you're mentioning, he isn't "measuring the amount of information" contained in the string (which contains no information, in any event). Rather, he's creating new information which happens to be about the string in which he's interested.

To measure something is indeed to create "new" information about it, which can then be expressed and communicated in linguistic symbols and even used algorithmically.

It is a reasonable use of the term to see that such a measure is information, and that the string that was measured was also informational, requiring a certain bit capacity, in a context in which it has to be functional. So, to measure in functional bits is reasonable usage, not a projection of that which has merely subjective reference.

Of course, our knowing and measuring etc. have a subjective component, as we are subjects. But that does not entail that there is no truth about a situation that we discover rather than merely imagine or invent and project outwards. That is, objective truth is a reasonable construct, the denial of which lands one in absurdities. But truth and meaning are at the same time inescapably mental rather than merely symbolic. Information bridges the physical and the mental worlds.

Hence the many controversies that surround it in an era that deprecates that which suggests that some things are more than merely material and/or that other things are more than merely subjective.

4] Rather, the inscriptions, and the content of this post, symbolize information -- inherently meaningless shapes are being used by some agents/minds to stand for some information or other; if some other agent/mind knows the conventions of how the inherently meaningless symbols are being used, then he can use his knowledge to re-create the information for himself.

And so, we see that there is truth about the situation that is discoverable not merely invented. That is, there is objectivity at work. And, the symbols and their characteristics can be measured and compared with an imaginary random string generator that has the similar symbolic frequencies of occurrence, thence resulting in a metric of information capacity.

In parallel, we can conceive of the string of length X as sitting in a multidimensional space of all possible configurations of bit strings of that length [2^X cells], and characterise the islands or archipelagos within that space that would be functional in the context of the string. From this we can also make up metrics of functionality of information, as has Durston.
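For illustration, a toy Python sketch of that configuration-space picture; the "functional" predicate below is purely invented, and the string is kept short so the whole space can be enumerated:

```python
from math import log2
from itertools import product

X = 16                                     # short length so all 2^X cells can be enumerated
TARGET_HEADER = (1, 0, 1, 1, 0, 0, 1, 0)   # invented requirement standing in for "function"

def is_functional(bits):
    # hypothetical criterion: the string must begin with the fixed 8-bit header
    return bits[:len(TARGET_HEADER)] == TARGET_HEADER

space_size = 2 ** X
island_size = sum(1 for bits in product((0, 1), repeat=X) if is_functional(bits))

print(island_size, "of", space_size, "configurations are 'functional'")        # 256 of 65536
print(-log2(island_size / space_size), "functional bits for this toy target")  # 8.0
```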

GEM of TKI

Ilíon said...

GEM: "So, we are speaking of information in largely an algorithmically functional context, not a semantic interpretation one."

"Meaningless information" is an oxymoron.


GEM: ""Information," of course, has several meanings."

That some persons, even that some highly educated persons, misuse and/or misapply the word does not change reality. Though, it does greatly contribute to pervasive misunderstanding.


GEM: "This is what Trevors and Abel characterise in their discussion of functional vs random vs orderly sequence complexity."

Which is something else again from 'information.'

GEM of The Kairos Initiative said...

Hi again Ilion:

Algorithmic functionality has a meaning, but it is not meaning in the usual sense of how we rational, communicating creatures interact. Bit arrangements that trigger step-by-step algorithmic processes are not generally the same as natural language, but they are important and meaningful.

Similarly, a random sequence is in a meaningful -- Shannon -- sense information, but it is not the same as functional info.

Okay -- been busy overnight on cosmology issues

G

Ilíon said...

GEM: "Similarly, a random sequence is in a meaningful -- Shannon -- sense information, but it is not the same as functional info."

If you really understood Shannon, you'd not be engaging in this pointless "arguing" against what I've said.

Ilíon said...

Here is Shannon's paper in .PDF format: A Mathematical Theory of Communication

GEM of The Kairos Initiative said...

Thanks Ilion. The pdf format Shannon paper is helpful.

G

Ilíon said...

Will possessing this .PDF change the truth of what I'd said (and which you choose to suppress)?

And, if after you digest what Shannon was actually about you do decide to *engage* what I'd said in my first post, what then? How is one to take that? How is one to understand, or deal with, the fact that you will not attend to the criticisms I offer unless you see that some "authority" says the same thing(s)?

GEM of The Kairos Initiative said...

Ilion:

I can see that, being busy over the past few days (on multiple fronts) I evidently missed one of your comments.

I looked in and saw it, and put it up along with your follow-up comment.

On Shannon, I note that his metric is best understood as information capacity towards the theoretical limit on error-free information transfer rates of channels.

The H = - SUM pi log2 pi average information per symbol "informational entropy" metric in particular is sensitive to redundancy in a signal, so that real codes will -- as they have a certain degree of redundancy in them -- come up, oddly, as lower in info capacity than an absolutely random bit sequence.
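A small Python sketch of that redundancy point, using an invented sample string:

```python
from math import log2
from collections import Counter

# Average information per symbol, H = -sum p_i * log2(p_i), estimated from
# observed symbol frequencies, versus the flat-random maximum on the same alphabet.

def avg_info_per_symbol(text):
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * log2(c / n) for c in counts.values())

sample = "the cat sat on the mat and the rat ran"   # redundant, code-like text (invented)
alphabet = set(sample)

print(round(avg_info_per_symbol(sample), 2))   # empirical H: pulled down by redundancy
print(round(log2(len(alphabet)), 2))           # H for a flat-random source on the same symbols
```

The redundant text comes out below the flat-random figure, which is the sense in which a real code registers, oddly, as lower in Shannon info capacity than pure noise.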

Trevors and Abel et al have in recent years done some important work on the distinction between orderly, random and functional sequence complexity, which speaks to the matter in more detail than I can here or have elsewhere. In brief summary:

1] Order will be very repetitive and highly redundant, thus will be high on algorithmic compressibility AND low on complexity (i.e. simply and briefly describable in a short summary program; a simplistic example: "type a string of 150 A's").

2] Random sequences will have lowest compressibility and highest complexity. Basically you have to repeat the sequence as a rule, to get the "message."

3] Algorithmically functional sequences will be of somewhat lower complexity than a RS of the same bit length, as there will normally be some redundancy in it [symbols will not be in a flat random or near flat-random distribution]. They will not be very K-compressible. And of course they will have high algorithmic functionality.

It is in the context of that functionality that such sequences will be meaningful. That is, they make sense to some system, and they make some difference to the performance thereof. This is observable and, in relevant contexts, measurable, as Durston et al have now published for 35 protein families.
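As a rough, illustrative-only check of that three-way contrast, a general-purpose compressor can stand in for algorithmic (K-)compressibility; all three sample strings below are invented:

```python
import zlib
import random

random.seed(1)

orderly = b"A" * 150                                            # highly repetitive order
random_seq = bytes(random.getrandbits(8) for _ in range(150))   # flat-random byte sequence
functional = (                                                  # code-like functional text
    b"def gcd(a, b):\n"
    b"    while b:\n"
    b"        a, b = b, a % b\n"
    b"    return a\n"
    b"\n"
    b"def lcm(a, b):\n"
    b"    return a * b // gcd(a, b)\n"
    b"\n"
    b"print(gcd(1071, 462), lcm(4, 6))\n"
)

for label, s in [("orderly", orderly), ("random", random_seq), ("functional", functional)]:
    ratio = len(zlib.compress(s)) / len(s)
    print(label, round(ratio, 2))
# Expected pattern: orderly compresses most, random least, functional in between.
```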

If you want to read a bit more on the FSCI and FSC issue, in my online note, appendix 3, I discuss the matter, and actually use a key diagram from T & A.

I trust that this will be helpful. Shannon is important, but he is not the whole story on information theory and on the emerging theory of FUNCTIONAL information.

GEM of TKI

PS: T & A et al focus on strings. As the use of computers shows, essentially any digital data structure can be mapped to bit strings. Indeed a computer memory is usually mapped as a stack of short [say 8-bit] strings. Since also analogue information can be digitised, this approach is obviously without loss of relevant generality.

Ilíon said...

Thanks. It was just so odd that the one post I'd made was posted but that the one I'd made just a moment before was not.

Ilíon said...

GEM: "On Shannon, I note that his metric is best understood as information capacity towards the theoretical limit on error-free information transfer rates of channels."

Not at all. He doesn't *care* about "information" (*); he cares about the faithful transmission of a signal or message, which may be (or may not be, but this is irrelevant to him) intended to convey information.

From the paper: "The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem. The significant aspect is that the actual message is one selected from a set of possible messages. The system must be designed to operate for each possible selection, not just the one which will actually be chosen since this is unknown at the time of design."

(*) Despite that he sometimes uses the term 'information,' that is not at all his concern. His concern is the meaningless symbols which may stand for some information or other.

GEM of The Kairos Initiative said...

Ilion:

I have been busy, so it is very possible I simply overlooked it.

On relevant points, I simply said that Shannon information is about capacity to convey/store etc. information, as I discussed, not content.

That's why I said:

>> On Shannon, I note that his metric is best understood as information capacity towards the theoretical limit on error-free information transfer rates of channels. >>

We have not said things that are very far apart on that. My note on error-free bit rates is on the usual immediate application of his metric that is done in telecomms courses.

The wider context of course is the use of digital information so conveyed or stored etc. And that is what T & A etc speak about.

GEM of TKI