Tuesday, April 10, 2007

Blog Visits, 8: On Thermodynamics and origin of life etc

Over the past couple of years, I have visited various blogs for a time, to examine and discuss important issues: partly to understand the dynamics and signs of our times, and partly to test my analysis of how issues are being addressed -- especially in the lands subjected to the wave of de-Christianising forces from the North.

Currently, I have been visiting the Uncommon Descent blog of Prof William Dembski, and as I have been winding down my visit there, I have been discussing the issue of thermodynamics and the origin of life.

One of the commenters, Pixie, has asked me to continue that discussion here. This thread, DV, is designed to accommodate the request.

The core issue posed by Sewell is the reasonableness of open systems spontaneously acquiring the complex specified information that is characteristic of the systems of life. Pixie takes the view that open systems are not subject to any particular constraints on acquiring such information; I have held that there is a major probabilistic hurdle connected to the second law of thermodynamics in its statistical form, namely the overwhelming dominance of what Nash called the predominant configuration: a cluster of microstates in the close neighbourhood of equilibrium.

I have also pointed out that, from the first example of thermodynamics:
Isol System:
| | (A, at Thot) --> d'Q, heat --> (B, at T cold) | |
. . . we can see that, for the closed systems within it [open only to energy transfer], injection of raw random energy normally increases entropy. To avert this, we need instead a heat engine (or more broadly an energy converter):
| | (A, heat source: Th): d'Qi --> (B', heat engine, Te): -->
d'W [work done on say D] + d'Qo --> (C, sink at Tc) | |
So, by exporting enough waste heat and coupling the rest into work, B can now import energy without necessarily increasing its entropy. Indeed, suitably programmed work can decrease its entropy. In some cases such heat engines form naturally [e.g. a hurricane], but in cases where the engines exhibit functionally specific, complex information [I revert to my favoured abbreviation and terminology here: FSCI], where we observe the origin directly, they are always the product of intelligent agents. Thence, the debates on FSCI and its origins, especially in life systems. A vat thought experiment has played a major role in the last part of the discussion at UD, and will probably resurface here.
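The heat-engine bookkeeping above can be sketched numerically. All values below are arbitrary illustrative choices, not measurements:

```python
# Entropy bookkeeping for B acting as a heat engine (all values below are
# arbitrary illustrative choices: heat in J, temperatures in K).
def entropy_changes(Q_in, W_out, T_hot, T_cold):
    """Return (dS_source, dS_sink, dS_net) when heat Q_in is drawn from a
    source at T_hot, W_out is coupled into work, and the waste heat
    Q_in - W_out is exhausted to a sink at T_cold."""
    Q_waste = Q_in - W_out
    dS_source = -Q_in / T_hot        # the source loses entropy
    dS_sink = Q_waste / T_cold       # the sink gains entropy from waste heat
    return dS_source, dS_sink, dS_source + dS_sink

# Raw transfer, no work extracted: the net entropy change is positive.
print(entropy_changes(1000.0, 0.0, 500.0, 300.0))
# Coupling part of the heat into work: the waste-heat export still keeps
# the net change non-negative, as the second law requires.
print(entropy_changes(1000.0, 300.0, 500.0, 300.0))
```

The importing body's own entropy need not rise once enough waste heat is exhausted, which is the point made in the text.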

As background, I think onlookers will probably need to read:
1] The thread at UD, along with Dr Sewell's linked works [UPDATE, Apr 13: Cf his discussions here, here, and here]. Excerpting the first of these, Dr Sewell's essential argument is:
_______

The second law is all about probability, it uses probability at the microscopic level to predict macroscopic change: the reason carbon distributes itself more and more uniformly in an insulated solid is, that is what the laws of probability predict when diffusion alone is operative. The reason natural forces may turn a spaceship, or a TV set, or a computer into a pile of rubble but not vice-versa is also probability: of all the possible arrangements atoms could take, only a very small percentage could fly to the moon and back, or receive pictures and sound from the other side of the Earth, or add, subtract, multiply and divide real numbers with high accuracy. The second law of thermodynamics is the reason that computers will degenerate into scrap metal over time, and, in the absence of intelligence, the reverse process will not occur; and it is also the reason that animals, when they die, decay into simple organic and inorganic compounds, and, in the absence of intelligence, the reverse process will not occur.
The discovery that life on Earth developed through evolutionary "steps," coupled with the observation that mutations and natural selection -- like other natural forces -- can cause (minor) change, is widely accepted in the scientific world as proof that natural selection -- alone among all natural forces -- can create order out of disorder, and even design human brains, with human consciousness. Only the layman seems to see the problem with this logic. In a recent Mathematical Intelligencer article ["A Mathematician's View of Evolution," The Mathematical Intelligencer 22, number 4, 5-7, 2000] I asserted that the idea that the four fundamental forces of physics alone could rearrange the fundamental particles of Nature into spaceships, nuclear power plants, and computers, connected to laser printers, CRTs, keyboards and the Internet, appears to violate the second law of thermodynamics in a spectacular way.1 . . . .

What happens in a closed system depends on the initial conditions; what happens in an open system depends on the boundary conditions as well. As I wrote in "Can ANYTHING Happen in an Open System?", "order can increase in an open system, not because the laws of probability are suspended when the door is open, but simply because order may walk in through the door.... If we found evidence that DNA, auto parts, computer chips, and books entered through the Earth's atmosphere at some time in the past, then perhaps the appearance of humans, cars, computers, and encyclopedias on a previously barren planet could be explained without postulating a violation of the second law here . . . But if all we see entering is radiation and meteorite fragments, it seems clear that what is entering through the boundary cannot explain the increase in order observed here." Evolution is a movie running backward, that is what makes it special.

THE EVOLUTIONIST, therefore, cannot avoid the question of probability by saying that anything can happen in an open system, he is finally forced to argue that it only seems extremely improbable, but really isn't, that atoms would rearrange themselves into spaceships and computers and TV sets . . .

[NB: Emphases added. They show, too, that Dr Sewell has in mind an objection to the evolutionary materialistic concept that chance and natural regularities acting on random initial conditions will lead to the spontaneous emergence of functionally specific, complex information-based systems -- more or less, from hydrogen to humans. That objection is based on the issue that a sufficiently rare microstate is probabilistically isolated from searches that are not intelligently directed, on any reasonable scope of probabilistic resources in the gamut of the observed universe. The qualifier "reasonable" points to the currently popular assertion of a quasi-infinite array of sub-universes, such that the speculative rise in probabilistic resources swamps the configuration-states issue. As I discuss in my online note, that is of course both an admission of the cogency of the probability argument and a resort to speculative metaphysics as opposed to science. Such a resort is open, on pain of worldviews-level question-begging, to the stricture of comparative difficulties; and on that basis, inference to agency is at least as credible on its face as inference to an unobserved, vastly larger cosmos as a whole. So suppression of such a discussion is questionable.]
__________

2] My online note on Information, Design, Science and Creation, especially the appendix on thermodynamics. [BTW, part of why I am engaging in this dialogue is that I intend to use the result to update this note a bit.]

3] This note has in it a significant number of onward links that will give wider background information that should also be followed up.

4] The three online chapters of Thaxton, Bradley and Olsen's The Mystery of Life's Origin. Bradley's ASA peer-reviewed paper here will be a good follow up, though the scan is in need of a major clean-up.

5] If you can get it, a glance at Robertson's Statistical Thermophysics will also be helpful. [Fair warning: this is an advanced textbook.]

6] The online excerpt from Brillouin here on the "negentropy" link between thermodynamics and information theory will also be nice. I excerpt and discuss it here.

7] Some of course may need to go look up further on basic thermodynamics. Standard textbooks and online resources may help, but take fair warning that sometimes there is an agenda in even the most "neutral" of sources. Keep your worldviews analysis cap on.
As a note, commenters should realise that I recently had to impose a moderation policy in this blog due to abusive commentary. So, the pace will be a bit slow -- I have to personally approve comments before they appear. That usually means, when I am on insomnia patrol, maybe 3 - 5 am local time here in Montserrat. But, unless a comment is abusive, I will put it up -- within [generous] reason on length.
Okay, I trust this will be helpful. END

ADDENDUM, April 14:
Here is the thought experiment being discussed, from comments 42 and 43 at the Specified Complexity thread at UD. The experiment responds to the challenge that isolating a clumping and a configuring decrement in entropy, on making say a protein, does not make thermodynamic sense. I have used dSclump to conform to how the onward discussion developed; cf. point j below:
_________

THOUGHT EXPT:

a] Consider the assembly of a Jumbo jet, which plainly requires intelligently designed, physical work in all actually observed cases. That is, orderly motions were impressed by forces on selected, sorted parts, in accordance with a complex specification. (I have already contrasted the case of a tornado in a junkyard: it is logically and physically possible for it to do the same, but the functional macrostate is so rare relative to non-functional ones that random search strategies are maximally unlikely to access it, i.e. we see here the 2nd LOT at work.)

b] Now, let us shrink the example to a nano-jet, so small that the parts are susceptible to brownian motion, i.e. they are of sub-micron scale and act as large molecules, say a million of them, some the same, some different, etc. This is in principle possible. Do so also for a car, a boat, a submarine, etc.

c] In several vats of a convenient fluid, decant examples of the differing nanotechnologies, so that the particles can then move about at random.

d] In the control vat, we simply leave nature to its course. Will a car, a boat, a sub or a jet, etc., or some novel nanotech, emerge at random? [Here, we imagine the parts can cling to each other if they get close enough, in some unspecified way similar to molecular bonding; but that the clinging is not strong enough for them to clump and precipitate.] ANS: Logically and physically possible, but on statistical thermodynamics grounds the equilibrium state will overwhelmingly dominate: high disorder.

e] Now, pour a cooperative army of nanobots into one vat, capable of recognising jet parts and clumping them together haphazardly. [This is, of course, work, and it replicates bonding at random. We see here dSclump.] After a time, will we be likely to get a flyable nano-jet?

f] In this vat, call out the random-cluster nanobots, and send in the jet-assembler nanobots. These recognise the parts and rearrange them to form a jet, doing configuration work. A flyable jet results: a macrostate with a much smaller statistical weight of microstates, probably of order ones to tens or perhaps hundreds. [We see here the separated dSconfig.]

g] In another vat we put in an army of clumping and assembling nanobots, so we go straight to making a jet based on the algorithms that control the nanobots. Since entropy is a state function, we see here that direct assembly is equivalent to clumping and then reconfiguring from random "macromolecule" to configured, functional one. That is:

dStot = dSclump + dSconfig
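The state-function additivity can be checked numerically. The statistical weights below are hypothetical, chosen only to illustrate the algebra:

```python
import math

k_B = 1.380649e-23  # Boltzmann's constant, J/K

# Hypothetical statistical weights for the three macrostates in the vat
# (chosen only to illustrate state-function additivity, not measured):
W_scattered = 1e120   # parts dispersed across the vat
W_clumped = 1e60      # parts clumped in a random arrangement
W_config = 1e2        # the handful of flyable configurations

def S(W):
    return k_B * math.log(W)   # S = k ln W

dS_clump = S(W_clumped) - S(W_scattered)    # negative: clumping step
dS_config = S(W_config) - S(W_clumped)      # negative: configuring step
dS_total = S(W_config) - S(W_scattered)     # direct assembly, one step

# Entropy is a state function, so the two-step path matches the direct one:
print(abs(dS_total - (dS_clump + dS_config)) < 1e-30)  # True
```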

h] Now, let us go back to the vats. For a large cluster of vats, we use direct-assembly nanobots, but in each case we let the control programs vary at random -- say, hit them with noise bits generated by a process tied to a zener noise source. We put the resulting products in competition with the original ones, and if there is an improvement, we allow replacement. Iterate. Given the complexity of the relevant software, will we be likely, for instance, to come up with a hyperspace-capable spacecraft or some other sophisticated and unanticipated technology? Justify your answer on probabilistic grounds. My prediction: we will have to wait longer than the universe exists to get a change that requires information generation on the scale of 500 - 1000 or more bits. [See the info-generation issue over macroevolution by RM + NS?]
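For a rough sense of the waiting-time claim, one can compare a 500-bit configuration space against commonly cited order-of-magnitude figures for the universe's probabilistic resources. All constants below are round estimates, not measured values:

```python
import math

# Order-of-magnitude comparison for the 500-bit claim.
# All constants are round, commonly cited estimates, not measured values.
config_states = 2 ** 500                  # configurations of 500 bits
atoms = 10 ** 80                          # atoms in the observed universe
age_seconds = 10 ** 17                    # rough age of the universe in s
ticks_per_second = 10 ** 43               # rough Planck-time events per s

trials = atoms * age_seconds * ticks_per_second   # generous event budget

print("log10(configurations):", round(math.log10(config_states)))
print("log10(available trials):", round(math.log10(trials)))
print("space exceeds trials:", config_states > trials)
```

On these round figures the configuration space exceeds the available trials by some ten orders of magnitude, which is the point of the prediction.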

i] Try again, this time to get to the initial assembly program by chance . . . See the abiogenesis issue?

j] In the actual case, we have cells that use sophisticated machinery to assemble the working macromolecules, direct them to where they should go, and put them to work in a self-replicating, self-maintaining automaton. Clumping work [if you prefer that to TBO's term, chemical work, fine] and configuring work can be identified and applied to the shift in entropy through the s = k ln w equation. This, through Brillouin, TBO link to information, citing as well Yockey and Wicken's work at the time and their similar definition of information. [As you know, I have pointed to Robertson on why this link makes sense -- and BTW, it also shows why energy converters that use additional knowledge can couple energy in ways that go beyond the Carnot efficiency limit for heat engines.]

In short, the basic point made by TBO in Chs 7 - 8 is plainly sound. The rest of their argument follows . . .

_________

89 comments:

The Pixie said...

Kairosfocus

Thanks for starting this thread. I was getting pretty frustrated with the anti-spam at UD (though I appreciate the need for it). Hopefully this will be smoother.

The Second Law of Thermodynamics is about Thermodynamics
"The core issue posed by Sewell, is on the reasonableness of open systems spontaneously acquiring the complex specified information that is characteristic of the systems of life. Pixie takes the view that open systems are not subject to any particular constraints on the acquiring of such information..."
To be a little more specific I take the view that the second law of thermodynamics is about thermodynamics (the movement and distribution of energy) and not about information at all, whether in an open, closed or isolated system.

It is my belief that the entropy of DNA (of a specific length) is the same, whether it is a random sequence, a simple repeating sequence or human DNA. The third law of thermodynamics says that at absolute zero entropy is zero. The reason is simple enough: at absolute zero, there is no energy in the system. No matter what the energy levels are for the molecule, if there is no energy to distribute around those energy levels, then there is only one possible arrangement; one microstate. From Boltzmann, S = k ln W, if W is 1, then S is zero. It is zero for helium, it is zero for water, it is zero for DNA no matter what its configuration. At absolute zero, the entropy is independent of the information content. If this is true at absolute zero, why should it be different at higher temperature? Is information dependent on temperature?
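The third-law point can be checked directly from Boltzmann's formula; a minimal sketch:

```python
import math

k_B = 1.380649e-23  # Boltzmann's constant, J/K

def boltzmann_entropy(W):
    """S = k ln W for W accessible microstates."""
    return k_B * math.log(W)

# On this argument, at absolute zero every substance has W = 1, so the
# entropy is zero whatever the molecule's information content:
for substance in ("helium", "water", "random DNA", "human DNA"):
    print(substance, boltzmann_entropy(1))   # 0.0 in every case
```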

The Second Law and Probability
"... I have held that there is a major probabilistic hurdle connected to the second law of thermodynamics in its statistical form; namely, that what Nash called the predominant configuration, a cluster of microstates in the close neighbourhood of equilibrium."
Actually, I do agree that the second law ultimately rests on probabilities of different macrostates. What I object to is the claim that any and all probabilistic arguments are connected to the second law of thermodynamics.

For example, it is very improbable that I will win the lottery on Saturday (this is the UK lottery, by the way). I could say that there are 14 million "macrostates" corresponding to the withdrawal of any combination of six balls, and only one "macrostate" results in my winning big time. But just labelling the outcomes as "macrostates" does not make it thermodynamics. Winning the lottery has nothing to do with the second law of thermodynamics. The "complexity" part of specified complexity is how improbable the event is. Just because something is highly complex (i.e., highly improbable), that does not imply that it is limited by the second law.
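The "14 million" figure is the count of 6-from-49 combinations; a quick check:

```python
import math

# The "14 million macrostates": combinations in a UK-style 6-from-49 draw.
combinations = math.comb(49, 6)
print(combinations)        # 13983816, i.e. roughly 14 million
print(1 / combinations)    # probability of any one specific line winning
```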

Sewell's argument (at UD and elsewhere) seems to be that a certain process is highly improbable, therefore it is restricted by the second law of thermodynamics, therefore it is impossible.

The Second Law in a Closed System
"I have also pointed to the point that from the first example of thermodynamics:
Isol System:
| | (A, at Thot) --> d'Q, heat --> (B, at T cold) | |
. . . we can see that, for the closed systems within it [open only to energy transfer], injection of raw random energy normally increases entropy."

I disagree. The isolated system has two closed systems within it, A and B. The entropy in B increases, the entropy in A decreases. No heat engine is required; it happens spontaneously. Entropy can decrease in a closed system without machinery, intelligence, heat engines, etc.

The Second Law and Thaxton et al
I believe Thaxton et al got it wrong in chapter eight. They discuss a configurational entropy, which seems completely unsupported and which they describe as the "mass distribution". I have never seen anyone else use configurational entropy (in the sense they use it) in thermodynamic calculations. I believe this is because it is superfluous; they just made it up. Thermodynamics is about accounting; energy in must equal energy out. If there really was a discrepancy corresponding to this configurational entropy, how come no one noticed?

Thermodynamics is about the energy distribution. Thermodynamics is not about mass distribution. Think about the Boltzmann equation, S = k ln W. W is composed of several parts, all of which correspond to energetic modes. There is no room for mass distribution in there. How about classical thermodynamics? Entropy can be determined for a material experimentally by heating a sample from near absolute zero and measuring the heat input. The entropy is given by integrating dS = dQ/T. Where does the mass distribution feature in there?
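The calorimetric route can be sketched numerically; the heat capacity below is an arbitrary constant chosen only so the result can be checked against the closed form:

```python
import math

# Sketch of the calorimetric route: S(T) = integral of Cp/T dT.
# Cp here is an arbitrary constant (J/mol K) chosen only so the numerical
# result can be checked against the closed form Cp * ln(T_end/T_start).
def calorimetric_entropy(Cp, T_start, T_end, steps=100_000):
    """Integrate dS = dQ/T = Cp dT / T by the midpoint rule."""
    dT = (T_end - T_start) / steps
    S = 0.0
    for i in range(steps):
        T = T_start + (i + 0.5) * dT
        S += Cp * dT / T
    return S

S_num = calorimetric_entropy(Cp=30.0, T_start=1.0, T_end=298.0)
S_exact = 30.0 * math.log(298.0 / 1.0)
print(S_num, S_exact)   # the two agree closely
```

Nothing in the integral refers to mass distribution, which is the objection being made.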

Their use of configurational entropy is arbitrary. They calculate it based on the order of amino acids. If they had calculated it from the position of the atoms, they would have got a much higher figure.

As an aside, their probability calculation is based on the probability of a certain sequence of 100 amino acids forming from a pool of 100 amino acids. Surely a more realistic calculation would be to assume an effectively infinite pool of amino acids?

Gordon said...

Hi Pixie:

I see you posted a comment. Following up from the thread at UD, and in response to several points:

1] I take the view that the second law of thermodynamics is about thermodynamics (the movement and distribution of energy) and not about information at all, whether in an open, closed or isolated system.

As soon as one goes to the microscopic, statistical view and marks the distinction between heat and work, information issues surface. Excerpting Robertson again:

[[. . . the standard assertion that molecular chaos exists is nothing more than a poorly disguised admission of ignorance, or lack of detailed information about the dynamic state of a system . . . . If I am able to perceive order, I may be able to use it to extract work from the system, but if I am unaware of internal correlations, I cannot use them for macroscopic dynamical purposes. On this basis, I shall distinguish heat from work, and thermal energy from other forms [pp. Vii – viii] . . . . the assignment of probabilities to a set represents information, and that some probability sets represent more information than others . . . if one of the probabilities say p2 is unity and therefore the others are zero, then we know that the outcome of the experiment . . . will give [event] y2. Thus we have complete information . . . if we have no basis . . . for believing that event yi is more or less likely than any other [we] have the least possible information about the outcome of the experiment . . . . A thermodynamic system is characterized by a microscopic structure that is not observed in detail . . . We attempt to develop a theoretical description of the macroscopic properties in terms of its underlying microscopic properties, which are not precisely known. We attempt to assign probabilities to the various microscopic states . . . based on a few . . . macroscopic observations that can be related to averages of microscopic parameters. Evidently the problem that we attempt to solve in statistical thermophysics is exactly the one just treated in terms of information theory. It should not be surprising, then, that the uncertainty of information theory becomes a thermodynamic variable when used in proper context. [pp. 3 – 7, 36, Stat Thermophysics, PHI] ]]

--> So, heat/work is about a certain degree of [lack of] information about micro-particles and their behaviour [and, often, location].

2] It is my belief that the entropy of DNA (of a specific length) is the same, whether it is a random sequence, a simple repeating sequence or human DNA.

Not at all, as the vat + nanobot thought experiment shows. The work to collect and clump components of a complex system can in principle be defined and measured. This will include the resulting bonding energy [pressure-volume work being here irrelevant]. The further work to rearrange the components to form a macroscopically recognisable, functional whole is incremental to that of clumping.

The first decrement in entropy, we both recognise. The second is just as real, as the nanobots doing configuration work demonstrate.

First, the scattered micro-components of the flyable jet are clumped together from all over the vat. Then, the randomly clumped elements are reconfigured to form a flyable mini-jet. The clump is utterly unlikely to be flyable by chance, but the jet, once the assembly program is correct, will be.

Similarly, a DNA strand at random has similar bonding energy to an informational, functional macromolecule. But, by overwhelming probability, the randomly clumped DNA is macroscopically distinguishable from organised DNA -- it does not work with the cellular nanotechnology to do anything functional, i.e. it is nonviable. There is recognisable configurational work to get from clumping to functioning.

3] The third law of thermodynamics says that at absolute zero entropy is zero.

One way to state the third law is: as temperature goes to 0, the entropy S approaches a constant S0. Furthermore, it guarantees that the entropy of a pure, perfectly crystalline substance is 0 if the absolute temperature is 0.

Trivially, as T --> 0, S --> its zero-point value. But this is irrelevant to the matter being discussed:

a] No finite number of refrigeration cycles can get to T = 0. So, it cannot be a proper target of a thought experiment, absent reason to believe we can induce a contradiction in physics; that is not on the table. (Thought experiments should conform to the known laws of physics, as a basic rule.)

b] We are looking at decrements in entropy, which are relevant at accessible temperatures. And, for the component related to specificity of configuration, having clumped components, we can calculate entropy as s = k ln w or equivalent. If a state is unique, its entropy component goes as ln 1, which is zero. [If you look back at the thread at UD you will see that I looked at up to hundreds of flyable configurations as a possibility.]

c] By contrast, for the clumped state -- which is overwhelmingly likely to be non-functional -- with real-world component counts of order 10^6, we are looking at a vast configuration space, comparable to 10^6! [we divide down by the factorials of identical elements . . . as TBO do].

d] That space is in turn dwarfed by the space of the scattered components throughout the vat. [Just convert a cubical 1 cu m vat into 1-micron location cells. That gives 10^18 locations. 10^6 items scattered at random across 10^18 locations is an astronomical space.]
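The sizes of these two spaces can be put on a common log scale; a sketch using the round numbers in points c] and d]:

```python
import math

# Order-of-magnitude sizes of the configuration spaces in points c] and d]
# (illustrative arithmetic with the round numbers used in the text).
n_parts = 10 ** 6
n_cells = 10 ** 18   # one-micron cells in a 1 cu m vat

# Clumped-at-random space: ~ n_parts! orderings (before dividing down by
# the factorials of identical parts).  log10(n!) via lgamma:
log10_clumped = math.lgamma(n_parts + 1) / math.log(10)
print("log10(10^6!) ~", round(log10_clumped))

# Scattered space: placements of the parts among distinct cells, roughly
# n_parts * log10(n_cells) in log10 terms since n_cells >> n_parts:
log10_scattered = n_parts * math.log10(n_cells)
print("log10(scattered arrangements) ~", round(log10_scattered))
```

The scattered space (around 10^(1.8 x 10^7) arrangements) indeed dwarfs the clumped space, as point d] claims.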

4] At absolute zero, the entropy is independent of the information content. If this is true at absolute zero, why should it be different at higher temperature? Is information dependent on temperature?

And, the physics tells us we cannot get to zero, which is consistent.

Information, as Robertson discusses, is in fact in part a function of temperature.

It is a commonplace that one can induce a breakdown of structure and loss of information by injecting raw, random energy. This of course explodes the accessible configuration space and provides randomly available energy to move away from the sort of tightly specified configurations that are based on and store information.

Trivial E.g.: write a poem by scratching it into ice, then heat until it melts.

5] What I object to is the claim that any and all probabilistic arguments are connected to the second law of thermodynamics.

But no such claim was made by me, by TBO, or by Sewell.

What we have spoken to is information that is stored in improbable configurations of elements. We have implicitly or explicitly pointed to the scope of clumped states and scattered states, then pointed out that there is a drop in the statistical weight of the macrostate in moving from scattered to clumped, then a further drop in W in moving from clumped-at-random to configured in some macroscopically recognisable, functional way.

The drops in W directly imply drops in S = k ln W, and are associated with the increase in information, especially in the second stage. Cf. TBO's citation of Brillouin on the point in TMLO ch. 8.
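The Brillouin-style link between an entropy drop and an information gain can be sketched as follows, with hypothetical statistical weights:

```python
import math

k_B = 1.380649e-23  # Boltzmann's constant, J/K

# Brillouin-style bookkeeping: a drop in entropy S = k ln W corresponds
# to a gain of -dS / (k ln 2) bits of information.  The statistical
# weights below are hypothetical, chosen only to illustrate the algebra.
W_clumped = 1e60      # randomly clumped arrangements
W_functional = 1e2    # macroscopically functional arrangements

dS = k_B * (math.log(W_functional) - math.log(W_clumped))  # negative
bits_gained = -dS / (k_B * math.log(2))
print(round(bits_gained))  # log2(1e58), roughly 193 bits
```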

So, this is a highly specific probabilistic argument, similar to the classic ones in textbooks, e.g. the discussion in my copy of Yavorski and Detlaf of 10 red and 10 black beads in two rows. There is but one state in which the 10 reds are on top and the 10 blacks on the bottom. The number of states in which roughly 5 + 5 are in each row is vastly larger, and one can estimate changes in statistical weights in moving from one state to the other.

It is obvious that it is easier to move at random from the highly specified state to the predominant cluster, but to get back from it to the unique state at random is far less likely. And that is for tiny configuration spaces, relatively speaking!
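The bead example is small enough to count exactly; a quick sketch:

```python
import math

# The two-row bead example: 10 red and 10 black beads in 20 positions.
all_patterns = math.comb(20, 10)   # choose which positions hold red beads
print(all_patterns)                # 184756 distinguishable colour patterns

# Exactly one pattern has all reds in the top row:
p_ordered = 1 / all_patterns

# Patterns with a 5 + 5 split of reds between the rows dominate:
five_five = math.comb(10, 5) ** 2
print(five_five)                   # 63504 such patterns
```

Even in this tiny configuration space, the near-even split outweighs the fully ordered state by five orders of magnitude.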

6] just labelling the outcomes [of a lottery] as "macrostates" does not make it thermodynamics. Winning the lottery has nothing to do with the second law of thermodynamics.

But we are not looking at lotteries. We are looking at configurations that are specified by inherent function based on the arrangement of components within the system: a flyable mini-jet, a computer, an enzyme molecule, a DNA strand, or the like.

We know from experience and observation, that these systems are subject to the influences of random energy injections, and that this overwhelmingly moves such towards predominant configurations that are non-functional. That is why maintenance is such a big issue in such systems.

When we turn to the origin of such systems, we find that chance is maximally improbable as an account of their origin, because of the same pattern -- the available probabilistic resources are simply not credibly sufficient to achieve the desired outcome at random within any reasonable time. But we do see that such functionally specified, complex systems are routinely produced by intelligent agents. So, on an inference-to-best-explanation basis, such systems are most credibly explained as the product of such agency.

This we routinely do in, for instance, inferring that this blog comment is not an instance of lucky noise that just happened to make sense. [The relevant issues and likelihoods are quite similar.]

So, there arises the challenge that the real issue is not thermodynamics as such but philosophy: an institutionally dominant worldview in contemporary science as practised is being challenged. If the IBE result is acceptable in other cases, for obvious reasons, why is it suddenly not acceptable when even more sophisticated specified complexity is on the table, apart from worldviews-level question-begging?

7] Sewell's argument (at UD and elsewhere) seems to be that a certain process is highly improbable, therefore it is restricted by the second law of thermodynamics, therefore it is impossible.

This is a distorted, strawmannish caricature of Sewell's argument. Excerpting:

[[ The second law is all about probability, it uses probability at the microscopic level to predict macroscopic change: the reason carbon distributes itself more and more uniformly in an insulated solid is, that is what the laws of probability predict when diffusion alone is operative. The reason natural forces may turn a spaceship, or a TV set, or a computer into a pile of rubble but not vice-versa is also probability: of all the possible arrangements atoms could take, only a very small percentage could fly to the moon and back, or receive pictures and sound from the other side of the Earth, or add, subtract, multiply and divide real numbers with high accuracy. The second law of thermodynamics is the reason that computers will degenerate into scrap metal over time, and, in the absence of intelligence, the reverse process will not occur; and it is also the reason that animals, when they die, decay into simple organic and inorganic compounds, and, in the absence of intelligence, the reverse process will not occur.

The discovery that life on Earth developed through evolutionary "steps," coupled with the observation that mutations and natural selection -- like other natural forces -- can cause (minor) change, is widely accepted in the scientific world as proof that natural selection -- alone among all natural forces -- can create order out of disorder, and even design human brains, with human consciousness. Only the layman seems to see the problem with this logic. In a recent Mathematical Intelligencer article ["A Mathematician's View of Evolution," The Mathematical Intelligencer 22, number 4, 5-7, 2000] I asserted that the idea that the four fundamental forces of physics alone could rearrange the fundamental particles of Nature into spaceships, nuclear power plants, and computers, connected to laser printers, CRTs, keyboards and the Internet, appears to violate the second law of thermodynamics in a spectacular way.1 . . . .

What happens in a closed system depends on the initial conditions; what happens in an open system depends on the boundary conditions as well. As I wrote in "Can ANYTHING Happen in an Open System?", "order can increase in an open system, not because the laws of probability are suspended when the door is open, but simply because order may walk in through the door.... If we found evidence that DNA, auto parts, computer chips, and books entered through the Earth's atmosphere at some time in the past, then perhaps the appearance of humans, cars, computers, and encyclopedias on a previously barren planet could be explained without postulating a violation of the second law here . . . But if all we see entering is radiation and meteorite fragments, it seems clear that what is entering through the boundary cannot explain the increase in order observed here." Evolution is a movie running backward, that is what makes it special.

THE EVOLUTIONIST, therefore, cannot avoid the question of probability by saying that anything can happen in an open system, he is finally forced to argue that it only seems extremely improbable, but really isn't, that atoms would rearrange themselves into spaceships and computers and TV sets . . . ]]

That final highlight of course shows that the rearranging Sewell has in mind is spontaneous, not directed by an external intelligent agent.

8] The isolated system has two closed systems within it, A and B. The entropy in B increases, the entropy in A decreases. No heat engine is required; it happens spontaneously. Entropy can decrease in a closed system without machinery, intelligence, heat engines, etc.

Notice how the above refusal to recognise the underlying issue of configuration now leads to a classic confusion. A hot body cooling off will naturally end up in a state with fewer configurations. But there is utterly no reason to believe that the relevant state will be one in which -- presto, without intelligently directed internal rearrangement -- the system is likely, relative to the probabilistic resources available, to emerge in a functionally configured state that is far, far away from the predominant cluster.

What is highly relevant in my example from the classic Clausius case, is that when a body accepts an injection of random thermal energy, its internal entropy naturally increases; indeed that is the basis for Clausius asserting that the overall entropy of the isolated system enclosing A and B will increase, as the rise in B's entropy, d'Q/Tcold, exceeds the fall in A's entropy induced by its cooling off, d'Q/Thot.

For B to reliably -- with high probability -- produce a rise in order [much less complexity], some of that energy injected has to be converted into work, and enough waste heat exhausted to the surroundings to compensate. In short, B is now a heat engine.

Such heat engines do arise in nature based on boundary conditions and dissipative structures, e.g. hurricanes, vortices, etc. [which comes up in TBO's discussion], but in the relevant case, the energy converters are functionally specific at the microscopic level, and embed rich information. Such systems have never been observed to emerge from randomly clumped or scattered components by themselves. Instead, we routinely observe agents producing such systems through application of intelligence. And, as discussed above, the basic facts about the relevant configuration spaces tell us why.
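The Clausius bookkeeping above can be sketched numerically. A toy illustration only: the temperatures and heat quantity are arbitrary values picked for round numbers, and the helper names are mine, not from any textbook:

```python
# Toy numbers for the Clausius case: heat dQ leaves hot body A and enters cold body B.
def entropy_changes(dQ, T_hot, T_cold):
    """Entropy fall in A, rise in B, and the net change for the isolated whole."""
    dS_A = -dQ / T_hot   # A cools off: d'Q/Th leaves its account
    dS_B = dQ / T_cold   # B warms up: d'Q/Tc arrives, and Tc < Th
    return dS_A, dS_B, dS_A + dS_B

def engine_net_dS(dQ_in, dW, T_hot, T_cold):
    """Net entropy change when B is replaced by a heat engine B' that
    converts dW of the intake into work and exhausts the rest at T_cold."""
    dQ_out = dQ_in - dW
    return -dQ_in / T_hot + dQ_out / T_cold

# Raw heat transfer: net entropy rises, as Clausius requires
print(entropy_changes(1000.0, 500.0, 250.0))       # (-2.0, 4.0, 2.0) J/K

# With a Carnot-limit engine, dW = dQ_in * (1 - Tc/Th), the net change is zero
print(engine_net_dS(1000.0, 500.0, 500.0, 250.0))  # 0.0 J/K
```

The second function shows the point in the text: B can import energy without a net entropy rise only if enough waste heat is exhausted to a sink, i.e. only if B is acting as an energy converter.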

9] I believe Thaxton et al got it wrong in chapter eight. They discuss a configurational entropy, which seems completely unsupported . . . I believe this is because it is superfluous; they just made it up. Thermodynamics is about accounting; energy in must equal energy out. If there really was a discrepancy corresponding to this configurational entropy, how come no one noticed?

This is disappointing. First, "someone[s]" have noticed, going back to Brillouin, and including Yockey-Wickens and Shapiro. Read the discussion again, and notice the historical chain of thought. Excerpting Brillouin [have you read the links, even just to my appendix 1? Onward links are there . . .]:

[[ Physics enters the picture when we discover a remarkable likeness between information and entropy . . . The connection between information and entropy was rediscovered by C. Shannon [relative to Szilard's pioneering discussion] . . . information must be considered as a negative term in the entropy of a system; in short, information is negentropy. The entropy of a physical system has often been described as a measure of randomness in the structure of the system . . . . Every physical system is incompletely defined. We only know the values of some macroscopic variables, and we are unable to specify the exact positions and velocities of all the molecules contained in a system. We have only scanty, partial information on the system, and most of the information on the detailed structure is missing. Entropy measures the lack of information; it gives us the total amount of missing information on the ultramicroscopic structure of the system. [Cf Robertson's similar remark as already cited.]

This point of view is defined as the negentropy principle of information , and it leads directly to a generalization of the second principle of thermodynamics, since entropy and information must, be discussed together and cannot be treated separately . . . The essential point is to show that any observation or experiment made on a physical system automatically results in an increase of the entropy of the laboratory. It is then possible to compare the loss of negentropy (increase of entropy) with the amount of information obtained. The efficiency of an experiment can be defined as the ratio of information obtained to the associated increase in entropy. This efficiency is always smaller than unity, according to the generalized Carnot principle. Examples show that the efficiency can be nearly unity in some special examples, but may also be extremely low in other cases. ]]

Second, as again discussed above, the distinction is not at all unsupported, once one accepts that there is a difference between a scattered state, a randomly clumped state and a functional, tightly configured state.

--> The vats and assembly thought experiment shows that we can distinguish the work of clumping and that of configuring, which of course relates to a redistribution of mass of the components.

--> Further, it shows there is an identifiable difference: a random clump, with all but certain probability, will not fly. The properly configured components will.
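The two decrements can be sketched with the s = k ln W expression used later in this thread. The microstate counts below are purely illustrative placeholders, not measured values:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def config_entropy(W):
    """Configurational component of entropy, s = k ln W,
    for W equally probable configurations."""
    return k_B * math.log(W)

# Placeholder counts -- chosen only to make the direction of the change visible:
W_clumped = 1e60   # ways to clump the parts together at random
W_flyable = 500.0  # "up to hundreds" of flyable configurations, per the UD thread

dS = config_entropy(W_flyable) - config_entropy(W_clumped)
print(dS < 0)  # True: on these assumptions, configuring is a further entropy decrement
```

Whatever the actual counts, so long as flyable configurations are vastly outnumbered by mere clumps, the sign of the difference is the same.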

Third, as a footnote, you are confusing conservation of energy with rise/fall of entropy, which is not a conserved quantity -- the AVAILABILITY of energy tends to decrease in the universe, so, entropy is time's arrow.
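Returning to Brillouin's negentropy principle excerpted above: his exchange rate between information and entropy is k ln 2 per bit, and his "efficiency" compares the information obtained against the entropy generated. A rough sketch, with an arbitrary entropy cost chosen purely for illustration:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def negentropy_of_bits(bits):
    """Entropy equivalent of a quantity of information: k ln 2 per bit."""
    return bits * k_B * math.log(2)

def brillouin_efficiency(bits_gained, lab_entropy_increase):
    """Brillouin's efficiency: information obtained (in entropy units) over the
    associated entropy increase; < 1 by his generalized Carnot principle."""
    return negentropy_of_bits(bits_gained) / lab_entropy_increase

# One bit of information corresponds to k ln 2 ~ 9.57e-24 J/K:
print(negentropy_of_bits(1))
# Gaining 1 bit while generating 1e-22 J/K of entropy in the lab: efficiency ~ 0.1
print(brillouin_efficiency(1, 1e-22))
```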

10] Thermodynamics is about the energy distribution. Thermodynamics is not about mass distribution . . . W is composed of several parts all of which correspond to energetic modes. There is no room for mass distribution in there.

At the microscopic level, the two are closely integrated, but as the vats example shows, they can be separated in principle, and in special cases in practice. The vats example of a micro-jet assembled by nanobots, as opposed to an un-flyable clump of components, is such a case, and DNA and proteins are another. For that matter, so is digital information storage under certain conditions -- the state of the component is tied to the distribution of both mass and energy and yields an identifiable difference in function.

11] As an aside, their [TBO's] probability calculation is based on the probability of a certain sequence of 100 amino acids forming from a pool of 100 amino acids. Surely a more realistic calculation would be to assume an effectively infinite pool of amino acids?

Infinity is of course impossible in the real world, and TBO discuss a case of a planet-wide solution with unimolar concentration in the relevant amino acids.

In fact, their discussion is far more sophisticated than you suggest, starting with the non-online chapters. But more to the point, in discussing the amino acids they consider a planet full of nicely selected amino acids of the right type, then go on to work out the concentration of the relevant configuration, using standard reaction kinetics. They point out that in so doing they are deliberately suppressing the selecting and sorting work and setting aside chain-stopping reagents, etc. They use Fermi-Dirac statistics to estimate the selection of a protein having in it, for ease of calculation, 5 each of the 20 acids of life, and thus reduce the configuration space -- otherwise they would be looking at 20^100, not 100!/[(5!)^20]. Then they look at unimolar concentration in each of the 20 acids -- vastly, unrealistically generous, given the relevant issues discussed in earlier chapters, where they show the concentration of the acids in realistic prebiotic situations will be close to pure water.

In short, at each stage they are generous to the abiogenesis case and still end up at a result: there is little chance of abiogenesis through spontaneous synthesis of macromolecules of life -- before you get to clustering these molecules to get anywhere with making integrated life functions work.
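The counting behind that estimate is easy to verify directly. This checks only the combinatorics (20^100 versus 100!/(5!)^20), not TBO's full kinetic treatment:

```python
import math

# All 100-residue sequences over the 20 protein-forming amino acids:
unconstrained = 20 ** 100

# TBO's simplification: exactly 5 copies of each of the 20 acids, so the
# number of distinguishable arrangements is 100! / (5!)^20
constrained = math.factorial(100) // math.factorial(5) ** 20

print(round(math.log10(unconstrained), 1))  # 130.1
print(round(math.log10(constrained), 1))    # 116.4 -- reduced, but still vast
```

So the Fermi-Dirac-style restriction shrinks the configuration space by some fourteen orders of magnitude, yet still leaves roughly 10^116 arrangements for the probabilistic resources to search.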

_______

Okay, trust this helps -- and you see just how much I too have been working under handicaps on length and citation or links . . . I do miss the ability to do blockquotes.

GEM of TKI

Zachriel said...

The Pixie: "It is my belief that the entropy of DNA (of a specific length) is the same, whether it is a random sequence, a simple repeating sequence or human DNA."

kairosfocus: "Not at all, as the vat + nanobot thought experiment example shows."

Any thought-experiment that is contrary to the facts may need rethinking. The entropy of DNA can and has been measured. There is no significant difference based on sequence (assuming a rough equivalence in the number but not the order of bases). That's because entropy is primarily a measure of the energy involved in twisting and stretching bonds between the atoms. Even if you were to count microstates, there are vastly more ways to stretch and twist the individual bonds than there are ways to arrange the bases.

The Pixie said...

Hi Kairosfocus
"So, heat/work is about a certain degree of [lack of] information about micro-particles and their behaviour [and, often, location]. "
Okay, I can see that. Now how does that relate to information in a DNA sequence? Does the entropy of the DNA sequence change if we determine what the sequence is? I think not, so this is not about lack of information then.
Robertson is saying the entropy can be related (inversely) to how much we can know about the energy distribution. At absolute zero, we know exactly how zero energy is distributed, so the entropy is zero. At higher temperatures, there are more and more ways, so we know less and less, i.e., entropy is higher.
But this is a very specific information measure: it is how much we do not know about energy distribution. Of course it is; the second law is about energy distribution.
"Not at all, as the vat + nanobot thought experiment example shows."
The problem with that thought experiment is that I have no idea about the thermodynamics of nanobots. Do you?
"The work to collect and clump components of a complex system can be in principle defined and measured. This will include the resulting bonding energy [pressure-volume work being here irrelevant]. The further work to rearrange the components to form a macroscopically recognisable functional whole is incremental to that of clumping.
The first decrement in entropy, we both recognise. The second is just as real, as the nanobots doing configuration work demonstrate."

Sorry, but I have no idea how we can determine the amount of work the nanobots must do. Why should the process
D-A-B-C -> A-B-C-D
be any different thermodynamically to the process
A-B-C-D -> D-A-B-C
Talk me through ΔH, ΔG and ΔS.
"Similarly, a DNA strand at random has similar bonding energy to an informational, functional macromolecule. But, by overwhelming probability, the randomly clumped DNA is macroscopically distinguishable from organised DNA -- it does not work with the cellular nanotechnology to do anything that will work -- i.e. it is nonviable. There is a recognisable configurational work there to get from clumping to functioning."
Can you measure that difference? If you burn the functional DNA do you get more energy released or less? Entropy is a measurable quantity. If you measure it, what will the difference be?
"a] No finite number of refrigeration cycles can get to T = 0. So, it cannot be a proper target of a thought experiment, absent reason to believe we can induce a contradiction in physics. That is not on our table. (Thought experiments should conform to the known laws of physics, as a basic rule.) "
But people can and do routinely measure the absolute entropy of materials! Therefore, I think it not unreasonable that my thought experiment asks what the entropy of three different DNA sequences would be.
"b] We are looking at decrements in entropy, which are relevant at accessible temperatures. And, for the component related to specificity of configuration, having clumped components, we can calculate entropy as s = k ln w or equivalent. If a state is unique, its entropy is relative to ln 1 which is zero, for that component. [If you look back at the thread at UD you will see that I looked at up to hundreds of flyable configurations as a possibility.]"
But entropy is about how energy is distributed, the macrostates correspond to ways of arranging energy across energy levels. Your clump can only have entropy zero if the temperature is zero.
"Information, as Robertson discusses, is in fact in part a function of temperature."
So do you believe the configurational entropy or the informational entropy increases when temperature increases? That is what Robertson claims, with his perspective that entropy is the lack of information about the system. As the temperature goes up, you have more energy to distribute, and so know less about that distribution.
But the sequence of the DNA is constant. Any entropy that depends on the information in the sequence is independent of temperature. This is the very crux of the matter; this is why the configurational entropy of Thaxton et al should be rejected. It is not a function of temperature.
"It is a commonplace that one can induce a breakdown of structure and loss of information by injecting raw, random energy. This of course explodes the accessible configuration space and provides randomly available energy to move away from the sort of tightly specified configurations that are based on and store information.
Trivial E.g.: write a poem by scratching it into ice, then heat until it melts."

I agree. How is this relevant?
"But, no such claim was made by either myself or TBO or Sewell.
...
When we turn to the origin of such systems, we find that chance is maximally improbable to account for their origin because of the same pattern -- the available probabilistic resources are simply not credible to achieve the desired outcome at random within any reasonable time."

I see here a probabilistic argument, not a thermodynamic one. What am I missing?
"But we do see that such functionally specified, complex systems are routinely produced by intelligent agents. So on an inference to best explanation basis, such systems are most credibly explained as the product of such agency."
'I have only ever seen white swans, therefore all swans are white.' Is that good logic? I think not.
How about we rephrase the argument. But we do see that such functionally specified, complex systems are routinely produced by naturalistic agents. So on an inference to best explanation basis, such systems are most credibly explained as the product of such agency. Is that reasonable? Why not?
"So, there arises the challenge that the real issue is not thermodynamics as such but philosophy: an institutionally dominant worldview in contemporary science as practised, is being challenged. If the IBE result is acceptable in other cases, for obvious reasons, why is it suddenly not acceptable when even more sophisticated specified complexity is on the table, apart from worldview-level question begging?"
Because I have direct experience of intelligent agents capable of producing blog comments, and the non-intelligent explanation seems so unlikely, I will go with the former. There is no reason to suppose the existence of an intelligent agent capable of producing life, so science judges any explanation that invokes one to be rather unlikely. And we do have a competing explanation, that seems to hold up well.
"This is a distorted, strawmannish caricature of Sewell's argument. Excerpting:"
Your excerpt seemed remarkably close to my strawmannish caricature.
"The discovery that life on Earth developed through evolutionary "steps," coupled with the observation that mutations and natural selection -- like other natural forces -- can cause (minor) change, is widely accepted in the scientific world as proof that natural selection -- alone among all natural forces -- can create order out of disorder, and even design human brains, with human consciousness."
I do not think any scientist would say this is proven. But most (>95% I guess) would say it was the best explanation we currently have. Anyone who knows anything about crystal formation will know that natural forces can create order out of disorder.
"What happens in a closed system depends on the initial conditions; what happens in an open system depends on the boundary conditions as well. As I wrote in "Can ANYTHING Happen in an Open System?", "order can increase in an open system, not because the laws of probability are suspended when the door is open, but simply because order may walk in through the door.... If we found evidence that DNA, auto parts, computer chips, and books entered through the Earth's atmosphere at some time in the past, then perhaps the appearance of humans, cars, computers, and encyclopedias on a previously barren planet could be explained without postulating a violation of the second law here . . . But if all we see entering is radiation and meteorite fragments, it seems clear that what is entering through the boundary cannot explain the increase in order observed here." Evolution is a movie running backward, that is what makes it special."
A note to the casual reader: Sewell uses "closed" and "open" differently from kairosfocus. I have adopted kairosfocus' convention here.
Sewell misses the point that the second law of thermodynamics is about thermodynamics, i.e., the movement of energy. What walks through that open door is energy.
Remember that isolated system, with the hot A and the cold B. Consider A a closed system (an open system for Sewell). There is a metaphorical open door between A and B, and heat energy walks from A to B, resulting in the entropy in A going down.
"THE EVOLUTIONIST, therefore, cannot avoid the question of probability by saying that anything can happen in an open system, he is finally forced to argue that it only seems extremely improbable, but really isn't, that atoms would rearrange themselves into spaceships and computers and TV sets . . . ]] "
See, to my mind this is an argument about probability, not entropy. I am only guessing, because Sewell says "question of probability" but says nothing about energy or entropy.
"Notice how the above refusal to recognise the underlying issue of configuration now leads to a classic confusion. A hot body cooling off will naturally end up in a state with fewer configurations. But there is utterly no reason to believe that the relevant state will be one in which -- presto, without intelligently directed internal rearrangement -- the system is likely, relative to the probabilistic resources available, to emerge in a functionally configured state that is far, far away from the predominant cluster."
Right. The hot body, A, ends up in a state with fewer configurations for the energy. It has lower entropy. This happens because overall A and B together have more configurations at the same temperature than if they are at different temperatures (indeed, that is what being at the same temperature means). So far so good.
Now I do not recall claiming any more than that.
"What is highly relevant in my example from the classic Clausius case, is that when a body accepts an injection of random thermal energy, its internal entropy naturally increases; indeed that is the basis for Clausius asserting that the overall entropy of the isolated system enclosing A and B will increase, as the rise in B's entropy, d'Q/Tcold, exceeds the fall in A's entropy induced by its cooling off, d'Q/Thot.
For B to reliably -- with high probability -- produce a rise in order [much less complexity], some of that energy injected has to be converted into work, and enough waste heat exhausted to the surroundings to compensate. In short, B is now a heat engine.
Such heat engines do arise in nature based on boundary conditions and dissipative structures, e.g hurricanes, vortices etc [which comes up in TBO's discussion], but in the relevant case, the energy converters are functionally specific at microscopic level, and embed rich information. Such systems have never been observed to emerge from randomly clumped or scattered components by themselves. Instead, we routinely observe agents producing such systems through application of intelligence. And, as discussed above, the basic facts about the relevant configuration spaces tell us why."

I was talking about A. For A to produce a rise in order - energetic order, the stuff entropy is based on - it has to lose heat energy. It does. Spontaneously, without machinery, intelligence, etc.

The Pixie said...

... continuing
"[[ Physics enters the picture when we discover a remarkable likeness between information and entropy . . . The connection between information and entropy was rediscovered by C. Shannon [relative to Szilard's pioneering discussion] . . . information must be considered as a negative term in the entropy of a system; in short, information is negentropy. The entropy of a physical system has often been described as a measure of randomness in the structure of the system . . . . Every physical system is incompletely defined. We only know the values of some macroscopic variables, and we are unable to specify the exact positions and velocities of all the molecules contained in a system. We have only scanty, partial information on the system, and most of the information on the detailed structure is missing. Entropy measures the lack of information; it gives us the total amount of missing information on the ultramicroscopic structure of the system."
Shannon's information entropy is completely different to what Robertson is talking about. Shannon was thinking about loss of data during the transmission of a message; the entropy has to increase. The entropy is a measure of degradation; you cannot receive a message that is more than perfect, it can only be perfect or less than perfect. Thus Shannon entropy must increase. It has nothing to do with thermodynamics or even probability.
"The efficiency of an experiment can be defined as the ratio of information obtained to the associated increase in entropy. This efficiency is always smaller than unity, according to the generalized Carnot principle. Examples show that the efficiency can be nearly unity in some special examples, but may also be extremely low in other cases."
This sounds interesting. Can you give a couple of examples of experiments? I am particularly interested in the near unity cases. The amount of information in a gram of material must be immense if the information entropy is to be comparable to the thermodynamic entropy, given the number of macrostates.
Or to put it another way, I do not believe the claim...
"At microscopic level, the two are closely integrated, but asthe vats example shows, they can be separated in principle and in special cases in praxis. The vats example of a micro-jet assembled by nanobots, as opposed to an un-flyable clump of components is such a case, and DNA and proteins are another. For that matter, so is digital information storage under certain conditions -- the state of the component is tied to the distribution of both mass and energy and yields an identifiable difference in funciton."
Remember that we do not have nanobots, so the vats example is not that helpful.
Why do you believe mass and energy distribution are closely integrated at the microscopic level? I agree there is some connection, but not in the way TBO claim. Perhaps if you can explain?
"They use Fermi-Dirac statistics to estimate the selection of a protein having in it for ease of calculation purposes 5 each of the 20 acids of life, and thus reduce the configuration space -- otherwise they would be looking at 20^100, not 100!/([51}^20)."
Can you talk me through this? According to Wiki:
"In statistical mechanics, Fermi-Dirac statistics is a particular case of particle statistics developed by Enrico Fermi and Paul Dirac that determines the statistical distribution of fermions over the energy states for a system in thermal equilibrium. In other words, it is a probability of a given energy level to be occupied by a fermion. Fermions are particles which are indistinguishable and obey the Pauli exclusion principle, i.e., no more than one particle may occupy the same quantum state at the same time. Statistical thermodynamics is used to describe the behaviour of large numbers of particles. A collection of non-interacting fermions is called a Fermi gas."
I really do not think their calculation is about "the statistical distribution of fermions over the energy states for a system in thermal equilibrium". It is not about fermions. It is not about energy states. It is not necessarily about a system in thermal equilibrium.

Gordon said...

Pixie and Zachriel:

I came by this afternoon.

Published comments -- remember, the thread is moderated given problems with abusive comments some while back. Really nasty stuff.

I will comment on a few points -- turns out there were more than a few so pardon being a bit summary on several of them -- I have an offline life, and an 8 yo who needs attention:

1] Z: entropy is primarily a measure of the energy involved in twisting and stretching bonds between the atoms.

We are looking at the differential work done to sequence the DNA, relative to a random sequence. This too may be estimated and it is not negligible. Cf TBO's work in CH 8.

Further to this, cf the issue of clumping and configuring a micro-jet to see that we are not doing anything in violation of physics -- but a flyable micro-jet is vastly different from mere at-random clumped together micro-jet parts, methinks.

2] P: Does the entropy of the DNA sequence change if we determine what the sequence is?

The MACROSTATE changes rather dramatically on going to a biofunctional state, collapsing w rather dramatically. Cf again the difference between clumped and configured functional states.

3] Robertson is saying the entropy can be related (inversely) to how much we can know about the energy distribution.

Not at all, he also includes locational issues -- as "dynamical state" and "microscopic structure" entail. Consider for instance the issue of degrees of freedom accessible to a system -- it in part depends on locational issues, not just energy, e.g. freezing out of modes of vibration or rotation etc. Also consider the difference between water and ice at the same temp.

More on the point, we have seen the significance of configuration, not just clumping, and the accessing, as a result, of a functional macrostate by intelligent/information-driven action -- a state that would not be credible for chance (clumping at random) to access.

4] The problem with that thought experiment is that I have no idea about the thermodynamics of nanobots.

We know enough to know that the nanobots can recognise the jet parts and move them around, in the context of the usual viscous forces etc. That requires work: both to search and to acquire information, then to process it, and finally to carry out mechanical work to clump then assemble the micro-jet's parts, which, recall, are ~ 1 micron in the vat example. None of that is against physics. [Cf Maxwell's demons . . .]

The result of clumping then configuring is first an (overwhelmingly probably) non-functional clump of parts, then a flyable micro-jet. These are macroscopically distinguishable states. Further to this, the clumped state has far more microstates associated with it than the flyable one. Apply s = k ln w and the decrements in entropy are obvious. In short, there is good reason to make the distinctions in the TdS term that TBO make.

Further to this, we know that in the real-world, more complex case of the cell, there is parts recognition, sequentially governed assembly, precise folding and key-lock fitting, all duly energised. And the biofunctional state is quite plainly distinguishable from random-sequenced biopolymers.

5] Why should the process D-A-B-C -> A-B-C-D be any different thermodynamically to the process
A-B-C-D -> D-A-B-C?


Already done: a random cluster state is by overwhelming probability non-functional. The intelligently configured one is in the nanobots example, a flyable micro-jet.

That makes a very large difference in w for, say, n ~ 10^6 parts. [Similarly, the scattered parts in a vat ~ 1 cu m are again in a yet vastly expanded configuration space relative to the clumped state. The decrements in w are plain in both cases -- clumping being comparable to bonding at random, and configuring being comparable to getting to a biofunctional macromolecule. Think about how complex the cellular nanomachinery is, and why!]

The key changes are in the eqns in the UD blog and/or TMLO ch 8: we are looking at decrements in the split-dS terms, and I am not worried about the energy used up by the nanobots for housekeeping etc at the moment.

Indeed, it was precisely because you had difficulty seeing how dH = dE + pdV could be left alone and TdS split that I pointed to the thought experiment to show that it makes sense to analytically split T dS as TBO do.

6] Can you measure that difference? If you burn the functional DNA do you get more energy released or less?

Burning DNA would of course simply release bond energy. A random DNA or protein chain and a configured chain as per the earlier discussion will have more or less equivalent chemical energy available for release. (Similarly, burning a book or a carving will release energy similar to burning a lump of wood. But it takes a lot more intelligently directed energy to configure cellulose into book or carving than into lump.)

The relevant number of microstates is in principle measurable/countable, and from that the entropy decrement on configuration is a simple direct function, from Boltzmann. That decrement is also observable in the complexity of the cellular nanotechnology to ensure configurations of biopolymers.

7] But entropy is about how energy is distributed

And so, in part, it is also about how mass is distributed. Compare the entropy of water and ice at the same temperature. Think of the energetics of different stereo-isomers and of vibrational or rotational modes of freedom -- notice how mass distribution is relevant there. Then look back at the difference between a random clump of micro-jet parts and a functional, flyable jet. And, between a random molecule using biomonomers and a DNA strand or protein that is functional, with the same set of monomers.

8] I see here a probabilistic argument, not a thermodynamic one.

You have taken things out of context. Observe, I have highlighted that configurations that are functional and rare in the configuration space can be accessed intelligently through assembly processes. I have also pointed out -- and the calculation is quite similar to that of all the molecules in a room running off to one end at random -- that the access to such states through the simple chance processes which lie at the heart of stat thermoD is maximally improbable across the gamut of available time and material resources. Probability is part and parcel of statistical thermodynamics.

I then went on to point out that access to such configurations through intelligent action is far more credible.
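The "all the molecules run to one end of the room" comparison mentioned here can be made concrete. Treating each molecule's half-of-room location as a fair coin flip, the probability that all N land in one half is 2 x (1/2)^N; a small sketch (the N values are illustrative assumptions):

```python
import math

# Probability that all N gas molecules are found in one half of a room,
# treating each molecule's side as an independent fair coin flip.
def log10_prob_all_one_side(N):
    # P = 2 * (1/2)**N  ->  log10 P = (1 - N) * log10(2)
    return (1 - N) * math.log10(2)

print(log10_prob_all_one_side(100))    # ~ -29.8: hopeless already at N = 100
print(log10_prob_all_one_side(10**27)) # astronomically negative for a real room
```

Even for a toy N of 100 the odds are about 1 in 10^30; for a realistic N ~ 10^27 the exponent itself is of order 10^26.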

9] I have only ever seen white swans, therefore all swans are white.' Is that good logic? I think not.

We are dealing with inductive not deductive logic. Scientific arguments run along the lines:

If Theory, then Observations; Observations, so Theory. This, strictly, affirms the consequent.

But when there is a multiplication of cases and circumstances, we see that the empirical support makes the inference to a general pattern far more credible, and we provisionally infer to a law etc. That is a case where we routinely argue, in short that we know of enough cases of white swans to trust the claim that -- pending a counter-example -- all swans are white. We have built an entire grand edifice on such reasoning.

Kindly show a case where you know independently the origin of a system that exhibits functionally specified complex information [FSCI], and the observed cause is not an intelligent agent. (Similarly, the laws of thermodynamics are not established deductively -- just go build yourself a perpetual motion machine and they stand defeated.)

I posit that in all known cases of FSCI, intelligent agency is involved, and that we see that the accessing of such systems in cases of interest through chance plus necessity only is maximally improbable relative to the available resources. So, intelligent agency is their best, most credible explanation.

10] Because I have direct experience of intelligent agents capable of producing blog comments, and the non-intelligent explanation seems so unlikely, I will go with the former. There is no reason to suppose the existence of an intelligent agent capable of producing life

REALLY! The probabilities are in fact comparable.

Perhaps, the highlighted part may reflect your being a philosophical materialist who is willing to posit any alternative to the massively empirically supported, known cause of FSCI. But, it is a routine matter that on very similar probabilities, the strings of digital characters in this post and those in DNA say, are in tiny islands of functionality within a vastly larger configuration space.

In short the very observation of FSCI is strong evidence of intelligent agency at work in life.

11] Sewell:

Read the excerpt again. [And Sewell does use the more engineering oriented convention on isolated, closed, open systems. But then if there is no agreement on even the algebraic sign of work in the 1st law, the rest is a matter of being careful as you read.]

Also, thermodynamics is about energy and about the micro-particles that bear it, so cf above.

12] I was talking about A.

And thus changing the subject from the relevant case, B -- which even Sewell points out is a case of a system that accepts importation of energy [and matter].

13] Shannon's information entropy is completely different to what Robertson is talking about.

In fact he shows that the problems are the same and end up at the same point -- cf my excerpt and his book, I am unable to reproduce several pages of derivation here or even in my page on the matter. Observe especially his remark:

[[A thermodynamic system is characterized by a microscopic structure [implying both energies and positions of microparticles, NB] that is not observed in detail . . . We attempt to develop a theoretical description of the macroscopic properties in terms of its underlying microscopic properties, which are not precisely known. We attempt to assign probabilities to the various microscopic states . . . based on a few . . . macroscopic observations that can be related to averages of microscopic parameters. Evidently the problem that we attempt to solve in statistical thermophysics is exactly the one just treated in terms of information theory. It should not be surprising, then, that the uncertainty of information theory becomes a thermodynamic variable when used in proper context.]]

In short, I am pointing out that there is a whole other school of thought on statistical thermodynamics out there, with a line of pedigree tracing to Gibbs, and stopping off at Brillouin and Jaynes along the way. Robertson gives a good intro.

14] This sounds interesting.

Why not look up and read Brillouin's book -- starting with the linked source for the excerpt, which is quite long? [He was particularly interested in Maxwell's Demons -- for which my nanobots are, blush, blush, a stand-in in more modern technology.]

The basic idea is that to make the measurements you have to use enough energy so you never get away from the entropy issues. No perpetual motion machines by using smart nanobots [or tiny enslaved demons], I am afraid.

15] Why do you believe mass and energy distribution are closely integrated at the microscopic level? I agree there is some connection, but not in the way TBO claim. Perhaps if you can explain?

Already discussed. Particles carry energy bound up in their vibrational, rotational, etc modes. That already ties micro-location and energy issues tightly.

But also as discussed, there are functional configurations that collapse the configuration space and can effectively only be accessed through intelligent programs, whether on the ground in cells or in thought by nanobots [and demons too . . .].

16] Fermi Dirac:

They are using the exclusionary states, as opposed to Bose-Einstein statistics. [In short, we cannot collapse all into the ground state. Think of the Fermi level in say semiconductors.]

GEM of TKI

Gordon said...

A bit of a PS:

I should note on 16 supra, and other things, as follows. For it seems, from remarks above, that unless an excerpt is presented there will be the impression that the dG, dS etc. work has not been done:

a] TBO's clumping space calc . . .

In TMLO ch 8 we may read, using w for omega [which, oddly enough, was Boltzmann's own symbol, it seems]:

[[Brillouin20 has shown that the number of distinct sequences one can make using N different symbols and Fermi-Dirac statistics is given by

w = N! (8-6)

If some of these symbols are redundant (or identical), then the number of unique or distinguishable sequences that can be made is reduced to

wc = N! / (n1!n2!n3!...ni!) (8-7)]]

8-7 is of course the calc for arranging N things, nj of each of i different kinds.

b] Random polypeptide, 100 elements 5 of each kind vs unique functional sequence polymer:

They go on to calculate, using s = k ln w:

[[ wcr = N! / (n1!n2!...n20!) = 100! / (5!5!...5!) = 100! / (5!)^20

= 1.28 x 10^115 (8-8)

The calculation of equation 8-8 assumes that an equal number of each type of amino acid, namely 5, are contained in the polypeptide. Since k, or Boltzmann's constant, equals 1.38 x 10^-16 erg/deg, and ln [1.28 x 10^115] = 265,

Scr = 1.38 x 10^-16 x 265 = 3.66 x 10^-14 erg/deg-polypeptide . . . .

If only one specific sequence of amino acids could give the proper function, then the configurational entropy for the protein or specified, aperiodic polypeptide would be given by

Scm = k ln wcm
= k ln 1
= 0
(8-9)]]

Note: they go on to observe that by suppressing selecting and sorting work, they have been conservative in this estimate.
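As a numerical check on eqs. 8-6 to 8-9, here is a minimal Python sketch. (Caveat: a direct evaluation of 100!/(5!)^20 gives ~2.4 x 10^116 rather than the printed 1.28 x 10^115; since entropy depends only on ln w, though, the resulting Scr ~ 3.7 x 10^-14 erg/deg is essentially TBO's 3.66 x 10^-14 figure.)

```python
import math

k_B = 1.380e-16  # Boltzmann's constant, erg/K, as used by TBO

# Distinguishable sequences of a 100-mer built from 20 amino acid types,
# 5 of each (TMLO eqs. 8-7/8-8): w = N! / (n1! n2! ... n20!)
w_random = math.factorial(100) // (math.factorial(5) ** 20)

S_random = k_B * math.log(w_random)  # configurational entropy, erg/K
S_unique = k_B * math.log(1)         # one functional sequence only (eq. 8-9)

print(f"w  = {float(w_random):.2e}")   # ~2.4e116
print(f"Sc = {S_random:.2e} erg/K")    # ~3.7e-14, near TBO's 3.66e-14
print(f"Sc(functional) = {S_unique}")  # 0.0
```

The multinomial is exact integer arithmetic here; only the final entropy uses floating point.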

c] TdSconfig

The decrement in entropy from randomly clumped to configured states is of course the difference.

[[The configurational entropy work (-T dSc) at ambient temperatures is given by

-T dSc = - (298 K) x (-3.66 x 10^-14) erg/deg-polypeptide
= 1.1 x 10^-11 erg/polypeptide
= 1.1 x 10^-11 erg/polypeptide x [6.023 x 10^23 molecules/mole] / [10,000 gms/mole] x [1 cal] / 4.184 x 10^7 ergs

= 15.8 cal/gm (8-11)

where the protein mass of 10,000 amu was estimated by assuming an average amino acid weight of 100 amu after the removal of the water molecule.]]

They do a similar calc for DNA, yielding TdSconfig = 2.39 cal/g
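The unit conversion in eq. 8-11 can likewise be reproduced; a minimal sketch using TBO's own constants (the small difference from 15.8 comes from their rounding of 1.09 x 10^-11 up to 1.1 x 10^-11):

```python
# Reproducing TBO's eq. 8-11: configurational "coding" work per gram of
# protein, from the per-molecule configurational entropy decrement.
T = 298.0            # K, ambient temperature
dSc = -3.66e-14      # erg/K per polypeptide, from eqs. 8-8/8-9
N_A = 6.023e23       # molecules per mole, value as printed in TMLO
M = 10000.0          # g/mole, protein mass (100 residues x ~100 amu each)
erg_per_cal = 4.184e7

work_per_molecule = -T * dSc                               # erg/polypeptide
work_per_gram = work_per_molecule * N_A / M / erg_per_cal  # cal/g

print(f"{work_per_molecule:.2e} erg/polypeptide")  # ~1.09e-11
print(f"{work_per_gram:.1f} cal/g")                # ~15.7, vs TBO's 15.8
```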

d] Energy requisites of making a configured polymer:

Going back to dG = dH - T dS, and relative to work cited on empirical heats of formation etc, cf. earlier in CH 8 where they note:

[[Wickens17 has noted that polymerization reactions will reduce the number of ways the translational energy may be distributed, while generally increasing the possibilities for vibrational and rotational energy. A net decrease results in the number of ways the thermal energy may be distributed, giving a decrease in the thermal entropy according to eq. 8-2b (i.e., dSth < 0). Quantifying the magnitude of this decrease in thermal entropy (dSth) associated with the formation of a polypeptide or a polynucleotide is best accomplished using experimental results.]]

. . . they work out that this translates into:

[[The work to code this random polypeptide into a useful sequence so that it may function as a protein involves the additional component of T dSc "coding" work, which has been estimated previously to be 15.9 cal/gm, or approximately 159 kcal/mole for our protein of 100 links with an estimated mass of 10,000 amu per mole.]]

e] Protein formation in a "generous" prebiotic soup:

TBO here use a unimolar concentration in each amino acid solution -- implausibly high relative to their own work in earlier chapters on the likely prebiotic geology and atmosphere etc.

[[It was noted in Chapter 7 that because macromolecule formation (such as amino acids polymerizing to form protein) goes uphill energetically, work must be done on the system via energy flow through the system. We can readily see the difficulty in getting polymerization reactions to occur under equilibrium conditions, i.e., in the absence of such an energy flow.

[[Under equilibrium conditions the concentration of protein one would obtain from a solution of 1 M concentration in each amino acid is given by:

K = [protein] x [H2O] / ([glycine] x [alanine] x ...) (8-15)

where K is the equilibrium constant and is calculated by

K = exp [ - dG / RT ] (8-16) . . . .

dG = 459 kcal/mole for our protein of 101 amino acids. The gas constant R = 1.9872 cal/deg-mole and T is assumed to be 298 K. Substituting these values into eqs. 8-15 and 8-16 gives

protein concentration = 10^-338 M (8-18)]]

That is, effectively nil.
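Eq. 8-18's figure can be checked in log space, since the exponential itself underflows ordinary floating point. Note one assumption of mine to close eq. 8-15: [H2O] ~ 55.5 M, the standard concentration of liquid water, which is not stated in the excerpt:

```python
import math

# Equilibrium protein concentration from TMLO eqs. 8-15/8-16, in log space.
dG = 459000.0   # cal/mole, for the 101-residue protein
R = 1.9872      # cal/(deg mole), gas constant
T = 298.0       # K
H2O = 55.5      # mol/L for liquid water -- my assumption, not in the excerpt

log10_K = -dG / (R * T * math.log(10))
# With 1 M in each amino acid, eq. 8-15 rearranges to [protein] = K / [H2O]:
log10_protein = log10_K - math.log10(H2O)

print(f"[protein] ~ 10^{log10_protein:.0f} M")  # ~10^-338, matching eq. 8-18
```

With the water term included, the log-space arithmetic lands on the book's 10^-338 M.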

f] Ch 9

This goes on to discuss the various energy flow through etc scenarios, and shows that such will in general not avert the basic issue just shown.

Subsequently, of course Shapiro has sharply corrected the tendency to think that an RNA world cuts across the problems:

[[A careful examination of the results of the analysis of several meteorites led the scientists who conducted the work to a different conclusion: inanimate nature has a bias toward the formation of molecules made of fewer rather than greater numbers of carbon atoms, and thus shows no partiality in favor of creating the building blocks of our kind of life. (When larger carbon-containing molecules are produced, they tend to be insoluble, hydrogen-poor substances that organic chemists call tars.) I have observed a similar pattern in the results of many spark discharge experiments . . . . no nucleotides of any kind have been reported as products of spark discharge experiments or in studies of meteorites, nor have the smaller units (nucleosides) that contain a sugar and base but lack the phosphate.

To rescue the RNA-first concept from this otherwise lethal defect, its advocates have created a discipline called prebiotic synthesis. They have attempted to show that RNA and its components can be prepared in their laboratories in a sequence of carefully controlled reactions, normally carried out in water at temperatures observed on Earth . . . . Unfortunately, neither chemists nor laboratories were present on the early Earth to produce RNA . . . .

The analogy that comes to mind is that of a golfer, who having played a golf ball through an 18-hole course, then assumed that the ball could also play itself around the course in his absence. He had demonstrated the possibility of the event; it was only necessary to presume that some combination of natural forces (earthquakes, winds, tornadoes and floods, for example) could produce the same result, given enough time. No physical law need be broken for spontaneous RNA formation to happen, but the chances against it are so immense, that the suggestion implies that the non-living world had an innate desire to generate RNA. The majority of origin-of-life scientists who still support the RNA-first theory either accept this concept (implicitly, if not explicitly) or feel that the immensely unfavorable odds were simply overcome by good luck.]]

But then that simply brings us back to the issues over metabolism first scenarios as just outlined.

In short, the issue of getting to islands of functionality without intelligence faces formidable energy-related barriers. The easy assertion that there is "no evidence" that life originated other than by undirected chemical evolutionary pathways resting on chance and natural regularities is thus revealed as a statement of faith in the teeth of the evidence, which points compellingly in a different direction.

For, FSCI, in EVERY case where we do see the origin directly, is the product of agency. And, given the issue of needing to generate coding schemes, information processing nanotechnology and the key-lock fitting molecules of life, we are looking at serious issues about searching vast coordinated configuration spaces, and assembly of the results of the alleged blind evolution to form the first cells of life.

The overwhelming, abundantly accessible empirical evidence is that information-based, sequentially controlled systems can do that, but in fact there is "no evidence" that chance plus necessity can credibly do so in the face of the configuration spaces involved -- an issue that is at the heart of the statistical form of thermodynamics. Indeed, it is no surprise to see that the energy issues of forming the macromolecules under commonly asserted prebiotic conditions are so unfavourable.

GEM of TKI

Gordon said...

Pixie

There seems to be a comment in mod that for some reason does not show up in my email box, and on hitting publish from blogger, seems to lock up.

Resend?

GEM of TKI

Gordon said...

On inspection, seems to have already been sent through. Did you double-send?

GEM of TKI

The Pixie said...

I have posted two more Comments (on Saturday); have they got lost?

Gordon said...

Hi Pixie:

I am quite puzzled. On checking I saw two comments in the mod pool, including the one you sent to ask about missing comments. I published right away, but something seems to have gone missing if there are comments that you sent but have not appeared.

Kindly re-send.

(BTW, a check of my bulk items folder turned up a legitimate message from someone else! Thanks for the prodding . . .)

GEM of TKI

Gordon said...

Hi Pixie:

Seems there is a comment that came through on Friday and has been double-posted above. I find it in my inbox this AM again.

Could you check back above then see if we can resolve it?

GEM

Gordon said...

Pixie:

A follow-up point, on:

My: heat/work is about a certain degree of [lack of] information about micro-particles and their behaviour [and, often, location].

Thus, Yr: Okay, I can see that. Now how does that relate to information in a DNA sequence? Does the entropy of the DNA sequence change if we determine what the sequence is? I think not, so this is not about lack of information then.

1] I was of course summarising a point Robertson makes, that due to lack of information about the particular microstate of a system, one has to treat it as if the particles were acting at random, thence the usual heat engine limit. More energy than that can be harvested if one has a better knowledge of the relevant microstate, and a viable coupling mechanism to take advantage of the knowledge.

2] In DNA [and protein -- a bit more complex of course . . .] sequences, the key point is that the specific arrangement of monomers is often tied to a strongly observable macroscopic effect: bio-functionality. That is, there is now a macroscopically recognisable state that takes up a tiny fraction of the overall configuration space for a macromolecule of that length and proportions of the various monomers.

3] In short, once we know we are in the biofunctional state, w in s = k ln w has collapsed, sometimes to as near 1 as makes no effective difference [the case TBO calculate]. For comparison, ln 100 ~ 4.61, ln 1000 ~ 6.91, etc.

4] By sharpest contrast, the w-value for the randomly arranged polymer is far larger: w = N!/[n1! * n2! . . . nk!], for k component monomers present in the proportions n1, n2 etc. That is a far higher value, as with DNA, where N ~ 500k and up, severely up. [TBO work with a case where N ~ 4 * 10^6.]
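To see the scale of the contrast in point 4 for a DNA-sized polymer, here is a hedged sketch using log-gamma, since the factorials are far too large to evaluate directly. The equal four-way base split is an illustrative assumption of mine, not TBO's exact case:

```python
import math

k_B = 1.380e-16  # Boltzmann's constant, erg/K

def ln_multinomial(N, counts):
    """ln[ N! / (n1! n2! ... nk!) ] via log-gamma, for factorials far too
    large to compute directly."""
    return math.lgamma(N + 1) - sum(math.lgamma(n + 1) for n in counts)

# DNA-scale example: N ~ 4e6 bases, four base types in equal proportions
# (the even split is an assumption for illustration only).
N = 4 * 10**6
ln_w_random = ln_multinomial(N, [N // 4] * 4)

S_random = k_B * ln_w_random      # entropy of the random-sequence macrostate
S_functional = k_B * math.log(1)  # a single functional sequence: w = 1, S = 0

print(f"ln w (random) ~ {ln_w_random:.3g}")  # ~5.5e6, i.e. w ~ 10^(2.4 million)
print(f"S decrement on configuring: {S_random - S_functional:.2e} erg/K")
```

The per-molecule entropy decrement is tiny in erg/K terms, but the configuration-space ratio it encodes is of order 10^(2.4 million) to 1.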

5] Thus, the shift in information on knowing that we are in the bio-functional macrostate makes a big difference to entropy. (The bio-functional outcome in the random state is so remote that we can for practical purposes ignore it. This point is supported by the almost Rube Goldbergian complexity of cellular synthesis. BTW, Rube shows that coupling co-opted elements to make a novel system requires quite careful, often non-intuitive design. I recall once constructing a soldering iron mini stand for students from metal orange juice caps, lengths of Al wire from hurricane Hugo-felled power cables, and purchased cellulose sponges that were cut up. (During the practical exams that followed, I recall seeing the external examiner carefully take up and inspect one of the stands, and nod his head.) I co-opted and adapted, but that hardly evades the issue of design.)

6] In the micro-jet case above, a similar outcome is seen: a randomly clumped set of jet parts -- even after selecting them from the other possible parts in the vat -- is most unlikely to be functional. But an intelligently assembled set of parts will very likely fly. We see a specific way to analyse the decrements in entropy as we first clump then configure the micro-jet state.

7] And, of course the point on OOL studies is to account for the functionally specified, complex information exhibited by bio-systems in a context where effectively random boundary conditions and compositions are at work under the natural regularities of Chemistry and Physics. The resulting configuration spaces and required cluster of precisely targeted, collaborating informational molecules, makes the spontaneous generation of life under any plausible prebiotic conditions utterly improbable beyond reason.

8] Or, equivalently, the thermodynamically constrained equilibrium concentrations of the required molecules [as TBO calculate] -- much less the odds of clumping and configuring them together to get the systems of life -- are so near zero on a planetary or even observed-cosmos scale that there is no viable current chance + necessity only prebiotic model.

In short, since we know that intelligent agency routinely produces FSCI beyond the plausible reach of chance + necessity only, we have pretty good reason to infer on a best and most likely explanation basis that cell-based life is the product of such agency. (Note how this is an empirically based, knowledge based, inference to intelligence as opposed to an inference to the supernatural as such.)

GEM of TKI

The Pixie said...

I resubmitted my first comment last night (I did not save my second comment), before your follow-up point was there. Did it get lost again?

Gordon said...

Hi Pixie

I confess, I do not understand what is going on.

I did see a comment recently, but it is this one, which Blogger will not let me put up again as it is already through:

Hi Kairosfocus
"So, heat/work is about a certain degree of [lack of] information about micro-particles and their behaviour [and, often, location]. "
Okay, I can see that. Now how does that relate to information in a DNA sequence? Does the entropy of the DNA sequence change if we determine what the sequence is? I think not, so this is not about lack of information then.
Robertson is saying . . .


That comment apparently appears above twice, at:

1] At Fri Apr 13, 10:38:00 AM, The Pixie said...

Hi Kairosfocus
"So, heat/work is about a certain degree of [lack of] information about micro-particles and their behaviour [and, often, location]. "
Okay, I can see that . . .


And 2] immediately following at:

At Fri Apr 13, 10:39:00 AM, The Pixie said...

Hi Kairosfocus
"So, heat/work is about a certain degree of [lack of] information about micro-particles and their behaviour [and, often, location]. "
Okay, I can see that . . .


Following these is:

3] At Fri Apr 13, 10:49:00 AM, The Pixie said...

... continuing
"[[ Physics enters the picture . . .


I responded to these two comments At Fri Apr 13, 08:06:00 PM

Is this what you are looking for? If not, could you -- sorry [and I am real puzzled over Blogger now . . .] -- try again? [As to the part about Saturday, I have no record of comments submitted Sat that did not make it.]

Or, maybe you want to pick up from where I left off responding? (I wish we could resolve this bug stuff and get back to the subject in the main -- or are we having a lesson on the point that sometimes things go wrong, and the hypothetical lucky noise scenario may have something to it practically . . .?)

GEM of TKI

Gordon said...

H'mm:

Brainwave -- could your comment be too long? [Here, I note that the short remarks are obviously getting through!]

Try a split-up if it is not the one that got through twice already.

Then, if necessary, we will see what to say to blogger. [BTW, I see you are also signed up in blogger -- what is your blog?]

GEM of TKI

The Pixie said...

Kairosfocus

This is a second repost of my first lost comment.
"Not at all, he also includes locational issues -- as "dynamical state" and "microscopic structure" entail. Consider for instance the issue of degreed of freedom accessible to a system -- it in part depends on locational issues not just energy, e.g freezing out of modes of vibration or rotation etc. Also consider the difference between water and ice at the same temp."
No, entropy does not depend on location. It does depend on degrees of freedom, but that is how much the molecules can move, not where they are (think about the speedo in your car; it tells you how you are moving, not where you are). The difference between ice and water is that the water molecules can freely move around; this implies that the energy levels associated with movement (translation and rotation) are much lower, much more accessible. There is far more scope for variation in distributing energy among ten energy levels per molecule than three (I do not know the real numbers), so water has much more entropy than ice.
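This scaling of microstate counts with accessible modes can be illustrated with a standard stars-and-bars count (a minimal sketch with illustrative numbers only, not the actual water/ice energy levels):

```python
from math import comb

# Stars-and-bars count: ways to distribute q indistinguishable energy
# quanta among m accessible modes = C(q + m - 1, q)
def microstates(q, m):
    return comb(q + m - 1, q)

q = 20  # quanta of energy to share out (illustrative figure)
print(microstates(q, 3))   # 231      -- few accessible modes ("ice-like")
print(microstates(q, 10))  # 10015005 -- many accessible modes ("water-like")
```

The jump from hundreds to millions of microstates for the same energy, just by opening up more modes, is the point at issue: more accessible modes means larger W and hence larger S = k ln W.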
"We know enough to know that the nanobots can recognise the jet parts and move them around, in the context of the usual viscous forces etc. That requires work, both to search and to acquire information, then to process it and finally to carry out mechanical work to clump and then assemble the micro-jet's parts, which recall are ~ 1 micron in the vat example. None of that is against physics. [Cf Maxwell's demons . . .]"
I did not say it was against physics, I said we do not know the thermodynamics. How much work does it take to do ten rearrangements to make a working nano-plane, compared to the work to do ten rearrangements to make junk?
"Pix: 5] Why should the process D-A-B-C -> A-B-C-D be any different thermodynamically to the process
A-B-C-D -> D-A-B-C

Already done: a random cluster state is by overwhelming probability non-functional. The intelligently configured one is in the nanobots example, a flyable micro-jet.

That makes a very large difference in W for say n ~ 10^6 parts. [Similarly, the scattered parts in a vat ~ 1 cu m are again in a vastly expanded configuration space relative to the clumped state. The decrements in W are plain in both cases -- clumping being comparable to bonding at random, and configuring being comparable to getting to a biofunctional macromolecule. Think about how complex the cellular nanomachinery is, and why!]

The key changes are in the eqns in the UD blog and/or TMLO ch 8: we are looking at decrements in the split-dS terms, and I am not worried about the energy used up by the nanobots for housekeeping etc at the moment.

Indeed, it was precisely because you had difficulty seeing how dH = dE + pdV could be left alone and TdS split that I pointed to the thought experiment to show that it makes sense to analytically split T dS as TBO do."

Maybe I have missed it, but as far as I can see your argument is based on the assumption that the two processes are different thermodynamically. Let us break this up into individual steps, and you can talk us through it, maybe.
D-A-B-C + nanobot -> nanobot-[D-A-B-C] ... (nanobot associates with sequence)
nanobot-[D-A-B-C] -> nanobot-D + A-B-C ... (nanobot takes off out-of-sequence component)
nanobot-D + A-B-C -> A-B-C + D-nanobot ... (nanobot moves out-of-sequence component to other end)
A-B-C + D-nanobot -> [A-B-C-D]-nanobot ... (nanobot attaches out-of-sequence component)
[A-B-C-D]-nanobot -> A-B-C-D + nanobot ... (nanobot departs)
Which steps have a different deltaS or deltaH compared to this process:
A-B-C-D + nanobot -> nanobot-[A-B-C-D] ... (nanobot associates with sequence)
nanobot-[A-B-C-D] -> nanobot-A + B-C-D ... (nanobot takes off random component)
nanobot-A + B-C-D -> B-C-D + A-nanobot ... (nanobot moves random component to other end)
B-C-D + A-nanobot -> [B-C-D-A]-nanobot ... (nanobot attaches random component)
[B-C-D-A]-nanobot -> B-C-D-A + nanobot ... (nanobot departs)
"Burning DNA would of course simply release bond energy. A random DNA or protein chain and a configured chain as per the earlier discussion will have more or less equivalent chemical energy available for release. (Similarly, burning a book or a carving will release energy similar to burning a lump of wood. But it takes a lot more intelligently directed energy to configure cellulose into book or carving than into lump.)"
So you also believe a book is low entropy? Would it follow that a book of random characters has higher entropy than a book of prose? Does this mean we can measure the literary worth of a book thermodynamically? Somehow I doubt it, but this would seem to be the implication if you are correct.
"We are dealing with inductive not deductive logic. Scientific arguments run along the lines:"
Okay. But science restricts its conclusions to areas it knows about. Thermodynamics is a good example. Originally the second law was only applied to heat transfer. It was only extended to include chemical reactions once it had been established that it applied in a broad set of diverse chemical reactions. It would have been bad science for Carnot to do a set of experiments on heat engines, and then on that basis alone claim that entropy must go up in chemical reaction too.
Yes, we see humans produce functionally specified, complex systems. Those systems have only a passing resemblance to organisms, so it is dubious to say that therefore humans must have produced functionally specified, complex systems. Oh, wait, you are picking one specific quality of humans, and declaring that it is that quality, and none of the others, that is important. So even more of a stretch.
"I posit that in all known cases of FSCI, intelligent agency is involved, and that we see that the accessing of such systems in cases of interest through chance plus necessity only is maximally improbable relative to the available resources. So, intelligent agency is their best, most credible explanation."
And I posit that in all known cases of FSCI, a non-supernatural agency is involved. And I note you ignored this last time around.
"And thus changing the subject from the relevant case, B -- which even Sewell points out is a case of a system that accepts importation of energy [and matter]."
We both agree that entropy can go up in a closed system, so B is not under contention.
Why is B relevant, but not A? Seems to me you are trying to sweep A under the carpet. The fact is that A decreases in entropy, without machinery, etc.
"Why not look up and read Brillouin's book -- starting with the linked source for the excerpt, which is quite long? [He was particularly interested in Maxwell's Demons -- for which my nanobots are, blush, blush, a stand-in in more modern technology.]"
Why not just tell me one near unity example? I do not believe you!
Are you aware that Maxwell's demon does not actually exist?
"Already discussed. Particles carry energy bound up in their vibrational, rotational, etc modes. That already ties micro-location and energy issues tightly. "
No, vibrational, rotational, etc modes describe how they move, not where they are.

Gordon said...

Pixie:

At last, the comment gets through. Now, on points:

1] No, entropy does not depend on location. It does depend on degrees of freedom, but that is how much the molecules can move, not where it is

In this case, location is a relevant degree of freedom, as configuration makes a macrostate difference. Remember, we are dealing with functional/non-functional "polymers" based in part on the arrangement of components. In the ice/water example, the point is that in water there is a "semi-polymerisation" which allows for "holes" that give fluidity. [Water is a really odd molecule!] Pull out the latent heat of fusion, and the polymer locks up in a solid structure.

2] Thermodynamics and micro-jets:

Here, we differ. We may not for the moment know the specific value of the energy numbers, but we do know enough to see that TdSclump and TdSconfig can be separated. That is all that is needed to highlight the underlying point.

3] Which steps have a different deltaS or deltaH compared to this process . . .

It seems that the real difference is in the meaning of S = k ln W. Once we have a recognisable macrostate, its entropy is known from the number of microstates that are compatible therewith. That is a generally accepted principle of stat mech, and is used in even elementary presentations, starting with arranging red and black balls in two rows:

10R/10B has but one config,

5 + 5 R & B / 5 + 5 R & B has a far larger number [how many ways 10 things in each of 2 rows may be arranged, 5 of each kind per row: {10!/(5! * 5!)}^2 = 252^2 = 63,504]

--> Thus appears the phenomenon of the predominant cluster
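That count can be checked directly (a minimal sketch of the combinatorics above; nothing here is thermodynamic beyond the counting itself):

```python
from math import factorial

# One row of 10 positions holding 5 red and 5 black balls:
# distinguishable arrangements = 10! / (5! * 5!) = C(10, 5)
one_row = factorial(10) // (factorial(5) * factorial(5))  # 252

# Two independent rows, each 5R/5B: the counts multiply
two_rows = one_row ** 2                                   # 63,504

# The fully sorted macrostate (10R in one row, 10B in the other)
# is compatible with exactly one arrangement
sorted_rows = 1

print(one_row, two_rows, sorted_rows)  # 252 63504 1
```

The mixed macrostate is compatible with 63,504 microstates against the sorted macrostate's one, which is exactly the "predominant configuration" phenomenon: the macrostate with the most microstates overwhelmingly dominates under random shuffling.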

The at-random clumped state of micro-jet parts has a far larger W than the functional, configured one, and predominates under a clump-at-random algorithm. THAT has to be recognised and accounted for, and the nanobots, in moving from any clumped state to the [intelligently designed and targeted] micro-jet configuration, are doing work that reduces the entropy of the cluster of parts.

By dismissing pressure-volume work as essentially irrelevant and accepting that bonding energies etc are more or less the same in all clusters, dH issues take a back seat.

4] Books and entropy . . .

The case of books and carvings -- or, for that matter, my second case, a micro-bridge -- shows that there is such a thing as a specifically configurational element in entropy, which in a digital world is calculable in principle by means similar to the above: how many configurations of ink spots, cellulose fibres, thread etc. fall into the macrostate "book" vs how many would be a random mess? [In short, the statistical view and the macroscopic, heat-flow view bring different things to the table, though of course they are linked, as heat flow is a transfer of random molecular-level energy distributed across accessible degrees of freedom.]

If you treat a book [or maybe, better for this point, a punch-paper roll] as an at-random cluster of molecules subject to combustion, you will get a certain amount of energy released by burning. But if the same tape were say a program, one could use it to inform a machine to harvest energy in one of those informed ways that beats the Carnot limit.

5] we see humans produce functionally specified, complex systems. Those systems have only a passing resemblance to organisms, so it is dubious to say that therefore humans must have produced functionally specified, complex systems.

First, have you read my long since linked note on information and FSCI? If so, then how could you conflate intelligence with human intelligence?

Second, as I discussed in my note, there is far more than a passing resemblance between human artifacts and the nanotechnologies in the cell. The cell is a self-replicating, self-regulating, self-maintaining automaton that uses information expressed in sophisticated codes and integrated information-processing elements based on molecular nanotechnologies. We can in part understand how it works, and it evinces all the signs of highly sophisticated nanotechnology that would put our theoretical micro-jet to shame!

Since we know that intelligent agents do produce FSCI-rich systems, the obvious inference from such an observed case that is far beyond the reach of chance on the scale of the observed cosmos, is that it was intelligently produced. [Mere natural regularities cannot account for such a highly contingent structure, so the credible material causal forces are chance and/or agency.]

6] And I posit that in all known cases of FSCI, a non-supernatural agency is involved. And I note you ignored this last time around.

Kindly, observe the distinction between an empirically based inference to intelligent agency, and a worldviews level inference to a supernatural entity. Empirical data supports the inference that agency is a known -- indeed, the only known -- source of FSCI. Statistical reasoning in a thermodynamics context supports the point that such FSCI is most unlikely to emerge by chance, the only significant alternative source. So, while -- as I have discussed in detail in the linked -- the raw physics does not rule out incredibly improbable things happening by chance, the probabilities are such that we routinely infer in commonsense and scientific contexts to FSCI as a reliable sign of agency. For instance, we do not account for complex digital strings in this blog that are informational and specified as more or less being in English as, absent absolute proof otherwise, the fruit of lucky noise.

7] We both agree that entropy can go up in a closed system, so B is not under contention.
Why is B relevant, but not A? Seems to me you are trying to sweep A under the carpet. The fact is that A decreases in entropy, without machinery, etc.


REALLY!

In the case of an isolated system embracing closed subsystems A and B, I have explicitly considered, in the linked and in the excerpt in the lead post above, the stories of A and B. I then extended B to the case of an energy converter: it imports energy, then exports heat and work. Methinks it is plain that micro-jets, cells and precursors to cells are obviously energy importers and/or energy converters.

In my analysis in the above and the linked, I have long since noted that A has a reduction in entropy due to exporting heat. This is relevant to cases such as water freezing by cooling off. The ice takes up a structure that is driven by the molecular forces and structure of the H2O molecule.

I then pointed out that we have cases such as hurricanes, where flows self-organise into natural heat engines. Again, shaped by pre-existing, naturally occurring boundary conditions.

I then pointed out the relevant case: highly contingent systems based on multiple small parts that are joined together in functionally specific, complex, information-rich ways to make energy converters that process information to yield useful structures and functions.

In every case that we directly know, such systems are the product of agency. Chance-based processes, in light of the predominant clusters of microstates, are maximally unlikely to reach these functional configurations on the gamut of the observed cosmos. The case of the micro-jet shows why, and how this is tied to shifts in S = k ln W once we recognise that the jet state is macroscopically distinguishable from the overwhelmingly probable random cluster outcome.

That hardly counts as sweeping under the carpet.

8] Why not just tell me one near unity example? I do not believe you!

This being essentially irrelevant to the discussion in the main, I have referred you to Brillouin. He is speaking of measurement systems, and of the cases that follow from that.

See his defence of his case -- it is not my case.

9] Are you aware that Maxwell's demon does not actually exist?

First, with nanobots, we are close. Certain features of the cell, and indeed the PN junction, come close to Maxwell's demon concept. [The barrier potential at such a junction appears as differential diffusion across the junction leads to charge separation. Work can in principle be harvested from such an asymmetry, and it can exceed the Carnot efficiency limits; cf. how a silicon PV cell works.]

10] vibrational, rotational, etc modes describe how they move, not where they are.

The ability of a molecule -- including macromolecules -- to access vibrational, rotational, etc. states, and to become bio-functional, is closely tied to the structure of the molecule, i.e. to the locations of component atoms and monomers.

The case of interest is the biofunctional one. E.g. consider the specificity of the chain of an enzyme, which folds to give the energetically and chemically active clefts. That is, we have a macroscopically recognisable state dependent on a relatively narrow and specific configuration of components that is not credibly accessed by random chaining. We can intuitively see that Sconfig is much lower than Sclump for such an enzyme. TBO give us calculations on the entropy and reaction equilibria that would be relevant to simply forming such a biofunctional molecule at random in a prebiotic soup, relative to those entropy numbers.

GEM of TKI

Gordon said...

H'mm:

There was a problem with the link to the excerpt proper: here.

The section in which Brillouin makes the remark:

_______________

Every physical system is incompletely defined. We only know the values of some macroscopic variables, and we are unable to specify the exact positions and velocities of all the molecules contained in a system. We have only scanty, partial information on the system, and most of the information on the detailed structure is missing. Entropy measures the lack of information; it gives us the total amount of missing information on the ultramicroscopic structure of the system.

This point of view is defined as the negentropy principle of information, and it leads directly to a generalization of the second principle of thermodynamics, since entropy and information must be discussed together and cannot be treated separately. This negentropy principle of information will be justified by a variety of examples ranging from theoretical physics to everyday life. The essential point is to show that any observation or experiment made on a physical system automatically results in an increase of the entropy of the laboratory. It is then possible to compare the loss of negentropy (increase of entropy) with the amount of information obtained. The efficiency of an experiment can be defined as the ratio of information obtained to the associated increase in entropy. This efficiency is always smaller than unity, according to the generalized Carnot principle. Examples show that the efficiency can be nearly unity in some special examples, but may also be extremely low in other cases.

This line of discussion is very useful in a comparison of fundamental experiments used in science, more particularly in physics. It leads to a new investigation of the efficiency of different methods of observation, as well as their accuracy and reliability.

An interesting outcome of this discussion is the conclusion that the measurement of extremely small distances is physically impossible. The mathematician defines the infinitely small, but the physicist is absolutely unable to measure it, and it represents a pure abstraction with no physical meaning. If we adopt the operational viewpoint, we should decide to eliminate the infinitely small from physical theories, but, unfortunately, we have no idea how to achieve such a program.
_________

He is of course essentially measuring information as negative entropy, and is asserting that the information gained at the lab level can approach, but never exceed, the rise in entropy of the lab system as a whole [lab machines are energy converters].
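The quantitative core of that principle is just the Boltzmann-constant conversion between bits of information and thermodynamic entropy (a sketch; the conversion factor k ln 2 is standard, but the 1-megabit tape is an illustrative figure of mine, not Brillouin's):

```python
from math import log

k = 1.380649e-23  # Boltzmann constant, J/K

# Negentropy principle: gaining one bit of information about a system
# costs at least k*ln(2) of entropy generated elsewhere (the lab)
entropy_per_bit = k * log(2)  # about 9.57e-24 J/K

# Minimum entropy cost of reading out, say, a 1-megabit tape
bits = 1.0e6
min_cost = bits * entropy_per_bit
print(f"{entropy_per_bit:.2e} J/K per bit; >= {min_cost:.2e} J/K for 1 Mbit")
```

The tiny size of k ln 2 is why these costs are invisible at everyday scales, while the inequality itself (efficiency below unity) is what blocks the demon.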

On the related demonology, observe Wiki's simple summary:

_______

Several physicists have presented calculations that show that the second law of thermodynamics will not actually be violated, if a more complete analysis is made of the whole system including the demon. The essence of the physical arguments is to show by calculation that any demon must "generate" more entropy segregating the molecules than it could ever eliminate by the method described. That is, it would take more effort to gauge the speed of the molecules and allow them to selectively pass through the opening between A and B than the amount of energy saved by the difference of temperature caused by this.

One of the most famous responses to this question was suggested in 1929 by Leó Szilárd and later by Léon Brillouin. Szilárd pointed out that a real-life Maxwell's demon would need to have some means of measuring molecular speed, and that the act of acquiring information would require an expenditure of energy. The second law states that the total entropy of an isolated system must increase. Since the demon and the gas are interacting, we must consider the total entropy of the gas and the demon combined. The expenditure of energy by the demon will cause an increase in the entropy of the demon, which will be larger than the lowering of the entropy of the gas. For example, if the demon is checking molecular positions using a flashlight, the flashlight battery is a low-entropy device, a chemical reaction waiting to happen. As its energy is used up emitting photons (whose entropy must now be counted as well!), the battery's chemical reaction will proceed and its entropy will increase, more than offsetting the decrease in the entropy of the gas.

[. . . . ]

Real-life versions of Maxwellian demons occur, but all such "real demons" have their entropy-lowering effects duly balanced by increase of entropy elsewhere.

Single-atom traps used by particle physicists allow an experimenter to control the state of individual quanta in a way similar to Maxwell's demon.

Molecular-sized mechanisms are no longer found only in biology; they are also the subject of the emerging field of nanotechnology.

A large-scale, commercially-available pneumatic device, called a Ranque-Hilsch vortex tube separates hot and cold air. It sorts molecules by exploiting the conservation of angular momentum: hotter molecules are spun to the outside of the tube while cooler molecules spin in a tighter whirl within the tube. Gas from the two different temperature whirls may be vented on opposite ends of the tube. Although this creates a temperature difference, the energy to do so is supplied by the pressure driving the gas through the tube.

[. . . . ]

In the 1 February 2007 issue of Nature, David Leigh, a professor at the University of Edinburgh, announced the creation of a nano-device based on this thought experiment. This device is able to drive a chemical system out of equilibrium, but it must be powered by an external source (light in this case) and therefore does not violate thermodynamics.

Previously, other researchers created a ring-shaped molecule which could be placed on an axle connecting two sites (called A and B). Particles from either site would bump into the ring and move it from end to end. If a large collection of these devices were placed in a system, half of the devices had the ring at site A and half at B at any given moment in time.

Leigh made a minor change to the axle so that if a light is shone on the device, the center of the axle will thicken, thus restricting the motion of the ring. It only keeps the ring from moving, however, if it is at site A. Over time, therefore, the rings will be bumped from site B to site A and get stuck there, creating an imbalance in the system. In his experiments, Leigh was able to take a pot of "billions of these devices" from 50:50 equilibrium to a 70:30 imbalance within a few minutes.[3]

__________

So, our friendly demon is indeed a real prospect!

GEM of TKI

The Pixie said...

Gordon

1] In this case, location is a relevant degree of freedom, as configuration makes a macrostate difference.
Ah, you mean relative location within the molecule. I had not appreciated that (though in hindsight that makes sense). So for a DNA sequence that is a simple repeating pattern, compared to human DNA of the same length, what is the difference in the degree of freedom? I would suggest that it is the same. The fact that one molecule describes a man has no impact on how much the atoms can move around.

2] We may not for the moment know the specific value of the energy numbers, but we do know enough to see that TdSclump and TdSconfig can be separated. That is all that is needed to highlight the underlying point.
I will come back to this in a later post; it all hangs on whether Sconfig is a valid quantity

3] It seems that the real difference is in the meaning of S = k ln W. Once we have a recognisable macrostate, its entropy is known from the number of microstates that are compatible therewith. That is a generally accepted principle of stat mech, and is used in even elementary presentations, starting with arranging red and black balls in two rows:
The arrangement of red and black balls is an analogy to how energy can be distributed in thermodynamics.

The at-random clumped state of micro-jet parts has a far larger W than the functional, configured one, and predominates under a clump-at-random algorithm. THAT has to be recognised and accounted for, and the nanobots, in moving from any clumped state to the [intelligently designed and targeted] micro-jet configuration, are doing work that reduces the entropy of the cluster of parts.
With you for the first part. It is the work the nanobots do on rearranging that I do not follow. Look back at my last post, and the steps I describe for this process. Why does this require more work to go random sequence to specified sequence, rather than vice versa (assuming the same number of steps)?

But also, how does the degrees of freedom differ between the at-random clumped microjet parts to the functional one? What is the difference in their rotational, vibrational, etc. modes?

By dismissing pressure-volume work as essentially irrelevant and accepting that bonding energies etc are more or less the same in all clusters, dH issues take a back seat.
But the fundamental definition of entropy is dS = dQ/T, so what we are concerned with is that dQ term. This would seem to fit with your "doing work". Please explain the energetic differences between the two stepwise processes I described, because to me, they look the same.

If you treat a book [or maybe, better for this point, a punch-paper roll] as an at-random cluster of molecules subject to combustion, you will get a certain amount of energy released by burning. But if the same tape were say a program, one could use it to inform a machine to harvest energy in one of those informed ways that beats the Carnot limit.
So energetically the meaningful book is identical to the nonsense book?

5] First, have you read my long since linked note on information and FSCI? If so, then how could you conflate intelligence with human intelligence?
I had not noticed that. To be clear, I am using human intelligence as a subset of intelligence, so not conflating it.

Second, as I discussed in my note, there is far more than a passing resemblance between human artifacts and the nanotechnologies in the cell. The cell is a self-replicating, self-regulating, self-maintaining automaton that uses information expressed in sophisticated codes and integrated information-processing elements based on molecular nanotechnologies. We can in part understand how it works, and it evinces all the signs of highly sophisticated nanotechnology that would put our theoretical micro-jet to shame!
Right. So compare that to a bicycle, which is not a self-replicating, self-regulating, self-maintaining automaton that uses information expressed in sophisticated codes and integrated information-processing elements based on molecular nanotechnologies. Not too similar.

6] Kindly, observe the distinction between an empirically based inference to intelligent agency, and a worldviews level inference to a supernatural entity. Empirical data supports the inference that agency is a known -- indeed, the only known -- source of FSCI. Statistical reasoning in a thermodynamics context supports the point that such FSCI is most unlikely to emerge by chance, the only significant alternative source. So, while -- as I have discussed in detail in the linked -- the raw physics does not rule out incredibly improbable things happening by chance, the probabilities are such that we routinely infer in commonsense and scientific contexts to FSCI as a reliable sign of agency. For instance, we do not account for complex digital strings in this blog that are informational and specified as more or less being in English as, absent absolute proof otherwise, the fruit of lucky noise.
I do not follow this. Did you read what I wrote: "And I posit that in all known cases of FSCI, a non-supernatural agency is involved. " You seem to be arguing against a supernatural agency.

7] I then pointed out the relevant case: highly contingent systems based on multiple small parts that are joined together in functionally specific, complex, information-rich ways to make energy converters that process information to yield useful structures and functions.
I have never seen a statement of the second law that mentions these features. As far as I was aware, the second law says entropy goes up in a closed system. There is nothing about "functionally specific", "complex", "information-rich", "process information" or "useful structures and functions". All it says is:
S[final] > S[initial]

In every case that we directly know, such systems are the product of agency.
In every case that we directly know, such systems are the product of physical, non-supernatural agency.

9]First, with nanobots, we are close.
Really? I did not think nanobots were real either. Do you have any links?

10] The case of interest is the biofuncitonal one. E.g. consider the specificity of the chain of an enzyme, which folds to give the energy and chemically active clefts. That is, we have a macroscopically recognisable state dependent on a relatively narrow and specific configuration of components that is not credibly accessed by random chaining.
So you are saying that the vibrational, rotational, etc. modes are significantly different for a protein that folds to give a biologically functional enzyme, compared to those of a protein that folds into an inactive form?

"We can intuitively see that Sconfig is much lower than Sclump for such an enzyme."
No we cannot! Please talk me through it. Surely this is the fundamental point upon which we disagree!

Gordon said...

Hi Pixie:

I will comment on a few points:

1] for a DNA sequence that is a simple repeating pattern, compared to human DNA of the same length, what is the difference in the degree of freedom?

I see we have been at cross-purposes: in the case of the micro-jet, and the DNA or protein molecules, the location and orientation within the cluster of parts makes a difference that is discernible through functionality/non functionality.

In the DNA case, there is all the difference in the world, functionally, between simple repetitive patterns and those that bear code for life function. The latter have to be aperiodic, and information bearing. Cf the peer-reviewed discussion in Trevors and Abel on this, here and here.

Nor is this a new, ID-produced inference, as indeed TBO discuss in ch. 8. By the early 1980's, men like Polanyi, Yockey, Wicken and Orgel, in looking at OOL, had come to the distinction between order, complexity and specified complexity. It is only this last that is capable of carrying the information required for generation of the sort of functionality we are looking at.

The constraints, then, do not lie in the chemistry but in the informational functionality. Clumped at random, the sort of chain required to carry out biofunction is maximally improbable. And indeed, in the cell, such chains are very carefully formed and maintained.

2] it all hangs on whether Sconfig is a valid quantity

That has long since been shown: the functional macrostate has its own small cluster of microstates, thus we can infer from S = k ln W. Note, because at-random clumped parts are utterly unlikely to arrive at this island of functionality on the gamut of the observed cosmos, we can mark a macroscopically observable difference between the states. To move from one to the other requires work, as the nanobots thought experiment shows.

3] The arrangement of red and black balls is an analogy to how energy can be distributed in thermodynamics.

Not at all. It is an introductory case on how macrostates emerge and are linked to microstates as a matter of the possible distributions of micro-entities consistent with a given macrostate. Such distributions are connected to both energy and mass distributions, in general. [Cf the cases of a micro-jet or DNA or protein. FUNCTIONALITY distinguishes the relevant macrostate from what overwhelmingly would happen by simply randomly clumping.]

4] Why does this require more work to go random sequence to specified sequence, rather than vice versa (assuming the same number of steps)?

To get to a random clump, simply select and bring parts together haphazardly. Any old clumped set of parts will do. Apart from microscopic inspection, we cannot distinguish the states. So for 10^6 or so parts, the number of acceptable configurations for such a state is astronomical: in, say, a 10^-2 m cube [within the 1 cu m vat], there are [10^4]^3 = 10^12 one-micron-sided cells to place parts in; then, on orientation, a coarse 45-degree grid gives 8^3 = 512 possible orientations per part.
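The counting just described can be run directly. A minimal sketch, assuming a single part, the one-micron cell grid and the coarse 45-degree orientation grid given above:

```python
import math

# Toy count of configurations for ONE part in the clump region, using the
# figures from the text: a 10^-2 m cube gridded into one-micron cells, and
# a coarse 45-degree orientation grid (8 steps about each of 3 axes).
cell_side = 1e-6            # m, one-micron cells
clump_side = 1e-2           # m, side of the clump region
position_cells = round((clump_side / cell_side) ** 3)   # (10^4)^3 = 10^12
orientations = (360 // 45) ** 3                         # 8^3 = 512

k_B = 1.380649e-23          # J/K, Boltzmann constant
w_per_part = position_cells * orientations
s_per_part = k_B * math.log(w_per_part)

print(f"cells: {position_cells:.0e}, orientations: {orientations}")
print(f"W per part ~ {w_per_part:.2e}, s = k ln W ~ {s_per_part:.2e} J/K")
```

For 10^6 parts the joint count is (roughly) this per-part figure raised to the millionth power, which is the "astronomical" number in question.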

By sharpest contrast, parts have to be identified and moved to specific relative locations and orientations for a functional configuration to emerge. [In effect I am viewing the nanobots as accessing a supercomputer sitting next to the vat, through a wireless network. A friend of mine was actually doing research on in effect "smart paint" where an army of smart nanobots would do a lot of interesting things.]

The clumping nanobots simply identify "jet parts", and pull them together until they stick together. Configuring nanobots have to identify the specific part, move it to the proper relative location, and orient it so it sticks in a way consistent with function. Function is easily recognisable and highly specified. Only a relatively few configurations will fly, implying there is also an Sconfig that is calculable in principle; probably fewer than 10^4 states.

Now of course you could let loose the clumping nanobots on the jet like a bull in a china shop, and indeed they would do work, but that would be in principle no different from what happens in a crash. [That is, they would simply accelerate the disintegration into non-functionality that is the natural across-time fruit of random events on the microscale.]

So the issue is not just expenditure of energy and thermal energy [within limits -- melting will destroy the jet], but the resulting macro-state and its statistical weight.

5] the fundamental definition of entropy is dS = dQ/T

The microscopic view is at least as "fundamental."

Indeed, look at the d'Q part. In effect we are saying that, other things being equal, a small increment of random molecular energy in a body at a given temp [itself based on random distributions of energy across molecular degrees of freedom!] will increase its micro-scale randomness. In short the micro-view is implicit in the macro-view. [The project of stat mech is to connect the two. Of course historically, that was at first controversial, as the objectivity of atoms was suspect in some quarters; indeed, it in part led to Boltzmann's sad fate. Recall, Einstein's explanation of Brownian motion was a material factor in his Nobel Prize -- as this was accepted as direct evidence of molecules at work.]

The nanobots do just the opposite of injecting random, undirected molecular-scale motion. They follow a program, whether the simple one to clump, or the complex one to configure a flyable jet. Thus, given the viscous forces etc, they have to impart ordered motion by impressing directed forces, whether to the clumping site, or to the relative location and orientation for a flyable jet within that site.

Programmed work to go to a functional macrostate that is complex and specified.

That is, in turn quite similar in effect to the actual nanobots in the cell. Observe the almost Rube Goldbergian complexity at work.

6] So energetically the meaningful book is identical to the nonsense book?

Let's say that we have a computer that can read the text and fulfill a function based on it -- e.g. a bar-coded program. [About 20 years ago Wireless World if memory serves tried to distribute programs that way.] An at-random state is distinct from the one that works in the information system, and so can be energetically distinguished from it.

In short, we can see a distinction between at-random clumped states and the configured ones.

But if you simply view the books as lumps of cellulose [a high energy molecule produced by the FSCI within cells . . .], then you can release some of that bond energy by burning both.

You are in that case doing what Robertson pointed out: lacking specific coupling information you are treating the entity as at random, and harvesting the energy available for use in a heat engine that implicitly makes that at-random assumption.

7] compare that to a bicycle, which is not a self-replicating, self-regulating, self-maintaining automaton that uses information expressed in sophisticated codes and integrated information-processing elements based on molecular nanotechnologies.

The common bicycle is a macroscopic technology that is based on rather clever and in some cases counter-intuitive engineering. [Consider: why is it that leaning it causes it to turn? The vector calculus to explain that is not at all intuitive!]

The cell is a far MORE sophisticated technology, but it is recognisably a technology from a class identified in the 1930's by Turing. [Indeed, I think it was von Neumann who in the 1940's predicted that the cell would turn out to use the sort of systems architecture Turing had identified. This counts as an ID prediction . . .]

We can analyse such systems, but do not as yet know how to design and build them. Actually, reverse-engineering life systems is the likeliest prospect for making such systems de novo.

In short, in the materially relevant sense, a bicycle and a cell are comparable.

8] non-supernatural agency is involved

We do not know that at all. Whether human beings are wholly within the physical cosmos is a matter for worldviews debate. Let's just say that the most direct experience we have is of mind, and that within physicalist frames, this is in effect inexplicable. [Cf my summary here.]

What we can discuss with profit, is that INTELLIGENT action leaves empirical traces, e.g. FSCI.

So when we see a case of even more sophisticated FSCI whose causal story is not directly known, we may credibly infer to intelligent agency, and for the moment leave it to the philosophers to account for that credible fact in their worldview systems.

9] As far as I was aware, the second law says entropy goes up in a closed system.

We are looking at the microstate view on entropy, and we are applying that view to a case where we happen to see FSCI at work. We do not expect that to be written into the laws from the outset.

That is, we know that microscopic entities generally behave based on random energy distributions. We know that such random distributions of mass and energy give vast configuration spaces [phase spaces], which can be broken into cells, astronomical numbers of cells.

Within these spaces for the cases of interest, we see a macroscopically distinguishable functional state. But, such islands of functionality [w is very small relatively speaking] cannot be credibly accessed by random walks in phase space within the gamut of the observed cosmos. However, we routinely see that intelligent agents produce such entities, e.g. consider the state of a functional hard disk as opposed to a trashed one.

So in light of the logic of inference to best explanation, FSCI in the relevant systems is best explained as the product of such agency. This is an application of thermodynamic reasoning, not any claimed statement of the laws, and is consistent with the point that entropy increases overall in interactions among components in an isolated system.

10] nanobots

The technology is being developed, mostly as extensions to the ones used to make integrated circuits. I cited some simple cases yesterday. Cf here for ideas and debates etc.

11] Please talk me through it [Sconfig vs Sclump]. Surely this is the fundamental point upon which we disagree!

In effect already done, starting with the thought expt. In summary, clumping collapses the configuration space relative to submicron parts jouncing around in a vat under Brownian motion. Configuring further collapses the space to the microstates consistent with a flyable plane.

The decrements in s can in principle be estimated using cells of appropriate physical scale, and orientational configurations within the cells. That work has to be done to get these decrements is plain. [Of course that leads to dissipation of energy all over the vat, so I am not at all doubting Brillouin's point that the compensating losses net increase entropy. But such is compatible with the production of localised zones of FSCI.]

The point is, due to the isolation of such zones in the configurational space, intelligent agency is the only credible way to reliably and reasonably access them. Thus, W, s and probability issues are all relevant, as we are not dealing with constrained systems where natural regularities force the outcome -- FSCI production is not equivalent to, say, crystallisation of ice from the vat by inserting a tiny freezer unit.

GEM of TKI

Gordon said...

PS: Here is an even more interesting link on the practical progress projected:

_______

A genuine nanoscale fabrication capability might arrive soon that would transform industrial society, though not in the fashion initially envisioned, according to a study by Chris Phoenix, director of research at the Center for Responsible Nanotechnology (New York). Phoenix proposes a desktop nanofabrication system that could build industrial components from the molecular level up under programmable control. The concept blends traditional mass-production techniques with an assembler that would use a combination of chemistry and physical mechanics to assemble objects from individual atoms.

Such fabricators, Phoenix believes, could arrive as soon as 2010 and certainly before 2020.
Pivotal system
The key component that's still needed to ramp up nanofactories rapidly is a nanoscale fabricator that could itself build other nanoscale fabricators. This pivotal nanofabricator would be designed using mechanosynthesis, a process that operates at the atomic level using atomic-force microscope techniques to position components and molecular milling systems to shape objects at that scale. The first step in building such a system was demonstrated last year at Osaka University by a research group that used an AFM to pick up silicon atoms on a surface and move them to any desired location.

The significance of the experiment was that no special chemical or physical properties were required of the atoms in order to complete the operation . . . .

Once a basic fabricator has been built, it would be able to build a small number of copies of itself. Those copies could be aligned to perform a series of nanofabrication steps, just as conventional factories perform specialized operations and then pass the assembly along to the next station.

The assembly process then could be used to create fabricators that would operate on a larger scale; for example, designer molecules could be created. A hierarchy of machines could be created wherein the machines would operate at successively larger scales.

That hierarchy of machines would be assembled into a desktop-sized unit that would have software input for controlling the factory and special containers of basic materials used for manufacturing.

Phoenix believes rapid development would follow the creation of the first nanofabricator, since the remaining steps are well-known from conventional factory design. In addition, nanofactories could be put to work building more nanofactories, so the technology would spread quickly, with perhaps only a year passing from the first nanofactory to worldwide deployment.

It may seem impractical to build materials from the atomic level up, since there is such a huge number of atoms in objects on the human scale. Small-scale machines move at extremely high speed, however, and biology is one example of nanofabrication that produces working systems -- ranging in size from bacteria to blue whales -- in a reasonable time, Phoenix pointed out.

____________


Of course, they go on to say: "The nanobot concept has blinded the public to the more practical routes to nanofabrication." But in fact, we already have nanobots based on polymer and associated information technologies that self-assemble macroscopic functional systems all the time -- we call them "cells."

I put my bet on the nanobots as an extension to mechatronics, maybe in 10 - 20 years or so.

GEM of TKI

Gordon said...

PPS: A glance at UD this morning reveals the Paley Watch Co post, which makes the point about configuration spaces very well.

GEM of TKI

The Pixie said...

Hi Gordon

1] I see we have been at cross-purposes: in the case of the micro-jet, and the DNA or protein molecules, the location and orientation within the cluster of parts makes a difference that is discernible through functionality/non functionality.
...
The constraints, then, do not lie in the chemistry but in the informational functionality. Clumped at random, the sort of chain required to carry out biofunction is maximally improbable. And indeed, in the cell, such chains are very carefully formed and maintained.

But entropy is about the degrees of freedom, not the functionality. The only way that location impacts on entropy is through the degrees of freedom, which affects the energy levels, and so affects how energy can be distributed. The functionality does not affect the energy distribution, so does not affect the entropy, so is irrelevant to the second law.

2] That has long since been shown: the functional macrostate has its own small cluster of microstates, thus we can infer from s = k ln w.
But that alone is not enough to make it entropy, in the thermodynamic sense. Does it make sense to say that the second law prevents me from winning the lottery? I think not, but the logic would seem to be the same, if we consider a winning ticket to be functional. Thus we can determine S[lottery] from k ln w. Do you think S[lottery] is a valid thermodynamic quantity? If not, why not?
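Pixie's "lottery entropy" can be computed directly; a 6-from-49 draw is assumed here purely for illustration, since the discussion does not specify a format:

```python
import math

# "S[lottery] = k ln W" with W the number of possible tickets.
# A 6-from-49 lottery is ASSUMED for illustration only.
k_B = 1.380649e-23                    # J/K
tickets = math.comb(49, 6)            # 13,983,816 possible tickets
s_lottery = k_B * math.log(tickets)

print(f"W = {tickets}, S_lottery = {s_lottery:.2e} J/K")
# Thermodynamic entropies are of order N*k with N ~ 10^23 particles,
# so a quantity of order 10^-22 J/K is vanishingly small by comparison.
```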

3] It is an introductory case on how macrostates emerge and are linked to microstates as a matter of the possible distributions of micro-entities consistent with a given macrostate.
Hmm, well then let us consider the arrangement of 12 books on a shelf. I put work into the system, and arrange them alphabetically by title. I calculate that originally the entropy was S = k ln(12!). After my sorting, the entropy has decreased to S = k ln(1).

Fifteen minutes later, my wife comes along, and is shocked to find that the books she had just arranged alphabetically by author are now completely out of order. She demands to know why the entropy has increased.

What is going on? Did the entropy go up or go down? It depends on your perspective. Which, for me, is enough to reject this sort of entropy from thermodynamics.
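The bookshelf numbers are easy to check. A minimal sketch of the S = k ln W arithmetic for the 12-book case, showing why the assignment is perspective-dependent:

```python
import math

# 12 books, S = k ln W. Both "sorted by title" and "sorted by author" pick
# out exactly one of the 12! arrangements, so each observer assigns W = 1
# to their own preferred ordering and W = 12! to everything else.
k_B = 1.380649e-23                    # J/K
arrangements = math.factorial(12)     # 479001600

s_unsorted = k_B * math.log(arrangements)
s_sorted = k_B * math.log(1)          # zero, for whichever sort you prefer

print(f"12! = {arrangements}")
print(f"S(unsorted) = {s_unsorted:.2e} J/K, S(sorted) = {s_sorted} J/K")
```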

4] To get to a random clump, simply select and bring parts together haphazardly.
Please go back to my earlier post and tell me specifically at what step the energy is different - or devise your own stepwise system.

By sharpest contrast, parts have to be identified and moved to specific relative locations and orientations for a functional configuration to emerge.
So you are saying that it requires extra energy to identify a part? Or that it requires extra energy to move a part to a specific location rather than a random location?

5] The microscopic view is at least as "fundamental."

Indeed, look a the d'Q part. In effect we are saying, other things being equal, a small increment of random molecular energy in a body at a given temp [itself based on random distributions of energy across molecular degrees of freedom!] will increase its micro-scale randomness. In short the micro-view is implicit in the macro-view. [The project of stat mech is to connect the two...]

What Boltzmann did was show that entropy from S = k ln W is the same as entropy from dS = dQ/T. If your entropy is different to dS = dQ/T, then it must therefore follow that your entropy is different to Boltzmann's.
If there is no energy change...

dQ = 0
=> dS = 0
=> S is constant
=> k ln W is constant
=> W is constant

This is why I keep asking you to explain what is happening to the energy in each step of the process for your nanobots. It is the movement of energy that is fundamental to understanding what is happening to the entropy. If there is no energy movement then it must be the case that the number of microstates is the same. Which tells me something about macrostates.
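The claimed equivalence of the two definitions can be illustrated with a standard textbook case (not drawn from the discussion above): isothermal reversible doubling of the volume of an ideal gas, where the classical dS = Q_rev/T and the statistical dS = k ln(W2/W1) must agree:

```python
import math

# Bridging the two entropy definitions with a standard textbook case:
# isothermal reversible doubling of volume for N ideal-gas molecules.
# Classical: dS = Q_rev/T with Q_rev = N k T ln(V2/V1).
# Statistical: each molecule's accessible cells double, so W2/W1 = 2^N,
# and dS = k ln(W2/W1) = N k ln 2. The two routes give the same number.
k_B = 1.380649e-23               # J/K
N = 6.02214076e23                # one mole of molecules
T = 300.0                        # K (the value cancels out)

Q_rev = N * k_B * T * math.log(2)
dS_classical = Q_rev / T
dS_statistical = N * k_B * math.log(2)

print(f"classical   dS = {dS_classical:.4f} J/K")
print(f"statistical dS = {dS_statistical:.4f} J/K")
```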

Programmed work to go to a functional macrostate that is complex and specified.
What has that to do with the second law?

6] Let's say that we have a computer that can read the text and fulfill a function based on it -- e.g. a bar-coded program. [About 20 years ago Wireless World if memory serves tried to distribute programs that way.] An at-random state is distinct from the one that works in the information system, and so can be energetically distinguished from it.
So in certain prescribed situations we can energetically distinguish between something meaningful and nonsense, but this is not the general case. I believe it takes the same amount of energy to produce a book of nonsense as a book of prose, or a DVD full of information as one full of nonsense. And I maintain that the entropy of the system comes down to dQ!

7] In short, in the materially relevant sense, a bicycle and a cell are comparable.
Sorry, I missed something here. You seem to have listed some features that each has, but that they do not share, aside from the debatable claim that the cell is technology. Your argument would seem to be that cells are technology because they share features with manmade technology, specifically that they are technology.

8] Pix: non-supernatural agency is involved
We do not know that at all.
We do know that for all the technology we know the origins of.

Surely that was your point too. I thought your claim was that an intelligent agency was involved for all the technology we know the origins of, therefore we can suppose this is true for other technology too.

Whether human beings are wholly within the physical cosmos is a matter for wordviews debate.
Well, okay then, I shall rephrase: I posit that in all known cases of FSCI, an agency with a physical body was involved. Therefore the natural inference is that in all cases of FSCI, an agency with a physical body is involved.

Do you find that logic acceptable?

What we can discuss with profit, is that INTELLIGENT action leaves empirical traces, e.g. FSCI.
No, agencies with physical bodies leave empirical traces, eg FSCI.

9] We are looking at the microstate view on entropy, and we are applying that view to a case where we happen to see FSCI at work. We do not expect that to be written into the laws from the outset.
Okay, fine. At the outset we have the observation that entropy goes up in all known cases, and we also have this one case of FSCI at work.

Within these spaces for the cases of interest, we see a macroscopically distinguishable functional state.
Yes, we see that. But the energy does not. Energy distributed around a DNA molecule has no idea if that is a simple repeating pattern or human DNA.
But, such islands of functionality [w is very small relatively speaking] cannot be credibly accessed by random walks in phase space within the gamut of the observed cosmos.
There goes that improbability argument. This is very improbable, therefore it is limited by the second law. That logic is not valid. It does not matter if you call them macrostates. It does not matter if you talk about random walks, cells and phase spaces. Yes, the second law relies on probability, but it is wrong to conclude that improbable implies the second law. And I thought we had already settled this.

The Pixie said...

Nanobots
The technology is being developed, mostly as extensions to the ones used to make integrated circuits. I cited some simple cases yesterday. Cf here for ideas and debates etc.
The Wiki page you linked to does not mention the word "nanobot", and neither did the page on nanotechnology by Foresight.

PS: Here is an even more interesting link on the practical progress projected:
Look at the article linked from that page, Nanotechnology: Nanobots Not Needed.
"Nanobots have plagued nanotechnology from the beginning...
Studies have shown that most readers don't know the difference between molecular manufacturing, nanoscale technology, and nanobots. Most nanoscale technologies use big machines to make small products. Molecular manufacturing is about tiny manufacturing systems. But those manufacturing systems are not nanobots.3 Modern plans for molecular manufacturing do not involve self-contained nanoscale construction robots at all."
Emphasis in the original. Sounds like nanotechnology is just around the corner. And nanobots are still restricted to Star Trek et al.

we already have nanobots based on polymer and associated information technologies that self assemble macroscopic functional systems all the time -- we call them "cells."
My prediction is that genetic engineering of existing cells is the only way we will have anything like "nanobots" within 20 years.

The Pixie said...

By the way...

I think the problem was that I was not putting in the verification word. I was putting it in, then clicking to review, then checking through, then clicking publish. Meanwhile, the s/w was expecting me to do a second verification word, back at the top of the page (though not always).

I do not have my own blog; I signed up at Google blogs to post here, not quite knowing what I was doing. I do occasionally (three times anyway) blog at Science Just Science.

Gordon said...

Hi Pixie:

As per usual, things are controversial wherever there is progress!

1] Nanobots:

Actually, they EXIST -- and have for a very long time. We call them "cells." That is part of the promise -- and of the threat.

In terms of the evolving technology, first steps are being made, but there is a lot of "expectations management" on one side [so the "failure" label does not get attached -- my cynical marketing view intrudes . . .].

There is also the implication that nanobots can become a form of artificial bio-scale weapon -- in effect artificial viruses and bacteria, that could disassemble their victims -- not just machines of war and civilian infrastructure but people too. (There is somebody out there who is shouting loudly that the Chinese are investing heavily in this "dark side.")

On the other, significant steps of progress are being made as the excerpts show. Here is Wiki's brief discussion.

2] My prediction is that genetic engineering of existing cells is the only way we will have anything like "nanobots" within 20 years.

That is my precise expectation -- using and adapting the known effective technologies to implement targeted systems.

For instance, I play around with fishing. Cell-derived technologies could build on spider silk production systems to get us to smart fishing lines. Other smart fibres are in view, and that opens up a whole new world for materials and structures.

One application is: new building materials and composites, which transform the current technologies. (Here my thoughts run to: if we could come up with a modular assembly technique that could build starter homes based on sturdy fire resistant materials, then that would help transform 3rd world development.)

3] the problem was that I was not putting in the verification word.

Okay, that makes sense. I wish I could make posting comments easier -- given my Caribbean location. But I have had to deal with really abusive commentary. [BTW, Caribbean people tend to find my email address and email me instead of commenting on the blog! There is a way to do that through my reference web site link.]

4] I do occasionally (three times anyway) blog at Science Just Science.

Followed up the link. Note that "prediction" is sometimes used to embrace what some call "retrodiction." [This is especially relevant to sciences that try to reconstruct the past or explain observed systems.]

The issue then becomes inference to best explanation or abduction.

GEM of TKI

Gordon said...

Pixie:

Re comments on thermodynamics issues:

1] The constraints, then, do not lie in the chemistry but in the informational functionality . . . . But entropy is about the degrees of freedom, not the functionality. The only way that location impacts on entropy is through the degrees of freedom, which affects the energy levels, and so affects how energy can be distributed. The functionality does not affect the energy distribution

I note again: entropy from the statistical viewpoint relates in the first instance to the statistical weight of macrostates. This can have quite direct informational implications [as Brillouin and Robertson discussed], that speak not just to bond energies and vibrational-rotational-translational modes, but also to structural configurations.

In the case of functional nanotechnologies, whether nanobots or micro-jets as thought expts, or real-world DNA, proteins, enzymes, ribosomes and cells, the information in the structure puts the elements and systems into definable macrostates that have rather low statistical weights, comparatively speaking.

The effort seen in nature to assemble these elements shows just how low-entropy they are and how inaccessible to mere chance processes.

2] But that alone is not enough to make it entropy

Observe, s = k ln w, again. And, that w relates to identifiable, observable macrostates.

For, we are able to define a functional macrostate, which is energetically related [i.e. the system correctly configured behaves in a certain specific, orderly energy-to-work fashion]. NB: Work is done when orderly motion is imposed by acting forces on matter. In this case, we see the work to assemble, and also the work shown in the functionality.

3] Lottery:

Lotteries of course are in part governed by thermodynamic processes, as is economic activity in general. For instance, consider the cage with balls rolled around at random until one falls into the slot for the next number in the winning combination.

But on your direct point: the configuration space of outcomes of lottery balls does not have the sort of direct functionality such that we can define a macrostate with associated statistical weights of microscopic states of molecular scale particles in the relevant sense.

The red/black ball example, by contrast again, is about observable states with numbers of individual outcomes consistent with them. It shows how at molecular level, configurations can be such that certain states are more or less accessible to chance distributions. In that sense, there is a link to the lottery, as this is a case where a certain outcome is for arbitrary reasons targeted (though the targeting is done after the choosing of numbers to bet on . . .).

Proposed molecular lotteries connected to OOL, of course, have such overwhelmingly large numbers of accessible states that the functional outcomes are by and large inaccessible to chance shuffling.

4] the arrangement of 12 books on a shelf

There is of course the issue of inherent functionality in the arrangements.

And by the way, your wife is right: there is a reason why libraries [and bibliographies] arrange books by author and subject, not title. This is a more functional arrangement than the accident of just what letter of the alphabet a book title begins with. When you have a few dozen books you can get away with almost any arrangement, but when you have millions to deal with and you need to make a targeted selection, then a very different constraint applies -- indeed, to scatter books at random in a library is one way to get yourself expelled!

In the case of molecules and quasi-molecules, the arrangements in question are functional in the context of energy conversion. And the energy conversion/non-conversion is observable. So we can define relevant macrostates and count associated statistical weights. Entropy metrics based on s = k ln w or equivalent forms, are relevant and important.

They also make sense out of a great scientific puzzle: why certain things are not observed to happen spontaneously through the known random agitation of molecules. Namely, the lottery is too steeply improbable, so the outcomes are simply not seen.

5] Stepwise process.

I have already summarised that to clump then configure has in it two steps of incremental ordering work:

--> pulling together the scattered jet parts [which requires search and recognition, then navigation], then

--> configuring the clumped parts to a targeted clumped whole that is functional independent of how it was arrived at [this requires recognition of parts, location in the relative configuration, and orientation].

I think that is plain enough to show just how w collapses in two steps: [1] from scattered to clumped in the phase space, [2] from clumped to configured in a functional configuration. There are three identifiable and in principle countable macrostates, and only one of them is likely to have in it a flyable micro-jet.

Since s = k ln w, and we have identifiable macrostates with countable statistical weights, the decrements dS_clump [duly integrated of course] and dS_config are separable and calculable.
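A toy calculation of the two decrements, with statistical weights that are assumed purely for illustration (the true W values are not computed anywhere in this discussion; only the < 10^4 figure for flyable configurations is suggested above):

```python
import math

# Toy illustration of the two entropy decrements, with ASSUMED statistical
# weights: W_scattered for parts loose in the 1 m vat, W_clumped for parts
# stuck together in the ~10^-2 m region, W_config for flyable arrangements.
# Only the ratios matter for the decrements dS = k ln(W_after / W_before).
k_B = 1.380649e-23            # J/K
W_scattered = 10.0 ** 100     # assumed, for illustration only
W_clumped = 10.0 ** 60        # assumed, for illustration only
W_config = 10.0 ** 4          # upper bound suggested in the text

dS_clump = k_B * (math.log(W_clumped) - math.log(W_scattered))   # negative
dS_config = k_B * (math.log(W_config) - math.log(W_clumped))     # negative

print(f"dS_clump  = {dS_clump:.2e} J/K")
print(f"dS_config = {dS_config:.2e} J/K")
```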

6] you are saying that it requires extra energy to identify a part? Or that it requires extra energy to move a part to a specific location rather than a random location?

As a rule, yes. It requires energy to reliably communicate information. It requires energy to search a space to find relevant particles: jet/non-jet. It requires work to clump them. It requires work to further identify and orient then configure the parts to form a flyable micro-jet.

7] What Boltzmann did was show that entropy from S = k ln W is the same as entropy from dS = dQ/T. If your entropy is different to dS = dQ/T, then it must therefore follow that your entropy is different to Boltzmann's.

What Boltzmann did was to first identify a statistical weight of macrostate metric, then he linked it to classical entropy through a log measure and appropriate constant of proportionality.

8] It is the movement of energy that is fundamental to understanding what is happening to the entropy. If there is no energy movement then it must be the case that the number of microstates is the same. Which tells me something about macrostates.

Of course, equally, I keep showing you just how the energy is moving to go from scattered to clumped then configured states. In each step, the number of accessible microstates consistent with the emerging macrostate falls. First, we move from the number of cells in a vat ~ 1 m across, to the clumped state that is of order ~ 10^-2 m or less across; then we move from the still astronomical number of clumped microstates, to the relative few that are flyable.

9] Programmed work

A 1 m across vat full of ~ 10 - 100 million particles small enough to undergo Brownian motion will by overwhelming improbability not be likely to -- sitting by itself -- produce a clumped and configured micro-jet that flies.

Clumping work assembles the parts together into a much smaller space where they will bond together. But still there is an overwhelming improbability blocking random access to the jet.

Configuring work in accordance with an assembly program will reliably produce such a jet. [Compare the program controlled chemistry of the cell with the frustrations of the OOL researchers.]

This speaks strongly to how such a spontaneous reduction in the entropy of the vat or even the clumped lump of parts is not likely to occur. In turn that goes back to the point that the entropy of classically scaled systems is net at least constant despite spontaneous energy flows. Programmed energy flows, of course, can produce local regions of increased order or reduced entropy, but as Brillouin points out, the price in entropy elsewhere more than compensates.

This is all relevant to OOL issues etc, as the OOL prebiotic soup and similar scenarios, and the clumped-monomers-in-a-meteorite examples, are all looking to get the spontaneous local reduction without intelligent intervention. The microscopic view shows why such attempts are overwhelmingly likely to fail.

My micro-jet thought experiment brings out the result in a more familiar setting.

10] in certain prescribed situations we can energetically distinguish between something meaningful and nonsense, but this is not the general case

I am not arguing about the general case, but about certain relevant applications of the relevant physics of microstates consistent with given, functionally identifiable macrostates.

11] Bicycles, cells and intelligent agents

Again, bicycles show functionally specific, complex information. They are one of many known cases that show that intelligent agency produces FSCI. The cell also exhibits FSCI, and indeed, at a far more sophisticated level than in a bicycle.

Step by step:

a] In all observed cases, FSCI is the product of intelligent agency, and

b] we see that chance + natural regularities is maximally unlikely to generate it on the gamut of the observed universe.

c] There are three known causal forces: chance, natural regularities [deterministic cause-effect bonds], intelligent agents. (They can of course interact, e.g. a falling object is based on the NR, gravitation. If it is a die, the face that is up is a matter of chance. If it is tossed as part of a game, that is agency using chance plus NR to fulfill its purposes.)

d] On an inference to best explanation [IBE] basis we can rule NR out as a dominant force as we are dealing with contingency -- outcomes from a large configuration space here.

e] Chance is also not likely as the islands of functionality are too sparse.

f] Agency is known to routinely generate FSCI-based functional systems, so

g] Agency is the best explanation relative to what we DO know.

h] Whether or not the agents are in physical bodies is irrelevant, as the key issue is information, which is not locked up to any particular physical expression, especially an atom-based one. (In short, we know that agents exist, but we have no reason to force the inference that agents are necessarily located in material, atom-based bodies. Indeed, given that the physical universe as observed is a contingent entity in itself, and shows the marks of FSCI, it points to the possibility of agents beyond the physical cosmos.)

12] Energy distributed around a DNA molecule has no idea if that is a simple repeating pattern or human DNA.

That is why information is a relevant issue, and it is deeply connected to localised entropy reduction, as discussed.

13] This is very improbable, therefore it is limited by the second law. That logic is not valid.

The second law in statistical form is connected to the point that there is a predominant cluster of microstates, which overwhelms the probabilities of the system being spontaneously found by chance in a special state. It is overwhelmingly unlikely to be in that state, unless it is constrained to go to that state.

My thought experiment shows how that can be done, and its thermodynamic implications.

14] Improbability and 2nd LOT

In this context, improbability does indeed lead us to the second law. Here is Yavorsky and Pinsky on the point, as excerpted from Physics [I choose a 1970's era Russian book to make the point that this is not an ID inference]:

________

"The thermodynamic probability of a given state of a certain system of bodies is the number of combinations of the separate elements of the system by means of which the state is realised . . . . the irreversibility of the process of mixing balls [R/B example . . .] is determined by the TP of the states in which the given system can exist. The process of transition from a state with low TP to one with a higher probability is spontaneous. The reverse process of transition from a disordered . . . distribution of the elements of a system . . . to an ordered state practically never occurs by itself precisely because the probability of such a process is entirely negligible . . . . diffusion . . . is similar to the mixing of the balls . . . . the reason for the irreversibility of diffusion is . . . [that] the TP of a state with a uniform mixture is incommensurably greater than one in which the two components are separated . . . . The TP serves to indicate the direction of thermal processes . . . However, the calculation of the TP is a very complex task . . . For this reason . . . entropy is needed in thermodynamic calculations . . . . Boltzmann found that entropy is proportional to the log of TP . . . . entropy increases in irreversible processes . . . . [Considering the Body A - B case] the process of heat exchange . . . is irreversible: energy (in the form of heat) flows by itself from a hotter body to a colder one . . . "
________
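The mixing-of-balls example in the excerpt is easy to make concrete. A minimal sketch (the setup is assumed: 10 red and 10 blue balls, 10 per side; the macrostate is how many reds sit on the left):

```python
from math import comb

def W(k_red_left, n=10):
    # Thermodynamic probability (number of microstates) of the macrostate
    # "k_red_left red balls on the left side": choose which reds go left,
    # and which blues fill the remaining left-hand slots.
    return comb(n, k_red_left) * comb(n, n - k_red_left)

separated = W(10)  # all red on the left: exactly 1 way
mixed     = W(5)   # even mixture: 252 * 252 = 63504 ways
```

Even at a mere 20 balls the mixed macrostate outweighs the separated one by four orders of magnitude; at molecular numbers the disproportion becomes the "entirely negligible" reverse probability Yavorsky and Pinsky describe.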

In short, I am observing that the sort of FSCI we see in micro-jets [as a thought expt] or cells [in the real world] is most credibly explained by agency not fluctuations or other essentially random processes or boundary conditions.

GEM of TKI

The Pixie said...

Hi Gordon

I do appreciate you have no control over the blogging software!

1] Nanobots: Actually, they EXIST -- and have for a very long time. We call them "cells." That is part of the promise -- and of the threat.
Well, yes, we call them cells, not nanobots. You predicted we would have nanobots within a few years, implying that we do not have them now. None of the pages we linked to mentioned genetic engineering of existing cells, and I think the reason for this is that that would be considered bioengineering, rather than nanotechnology. The Wiki page on nanobots is a start, I will accept, but it is a long, long way from a simple sensor to a working nanobot.

Gordon said...

Pixie:

Three interesting links on the topic thermal vs configurational entropy:

1] Here -- "Mass, Energy, and Freedom - The Coins of Thermodynamics" Gary L. Bertrand, University of Missouri-Rolla. Gives a useful survey. Uses free expansion of a gas as a case in point where thermal entropy does not shift whilst configurational entropy does.

Excerpting his early discussion:

__________

Most of us have a general idea of what mass and energy are, and we may have a fair understanding of how we can quantify them, or to say how much of them we have. Freedom is a more complicated concept. The freedom within a part of the universe may take two major forms: the freedom of the mass and the freedom of the energy. The amount of freedom is related to the number of different ways the mass or the energy in that part of the universe may be arranged while not gaining or losing any mass or energy . . . . The thermodynamic term for quantifying freedom is entropy, and it is given the symbol S. Like freedom, the entropy of a system increases with the temperature and with volume. The effect of volume is more easily seen in terms of concentration, especially in the case of mixtures. For a certain number of atoms or molecules, an increase in volume results in a decrease in concentration. Therefore, the entropy of a system increases as the concentrations of the components decrease. The part of entropy which is determined by energetic freedom is called thermal entropy, and the part that is determined by concentration is called configurational entropy.

__________

He is of course considering cases in which the internal config of molecules or quasi-molecules is not in question.

In our cases, it is, as we can distinguish macrostates by the resulting functionality. The point is that there are certain configs that are quite rare in the overall config space, and which have independently specified and observable function. The states are complex and specified, so both useful and hard to get to by random searches or things that in effect reduce to such searches. But we know that routinely, intelligent agents create such entities, using information technologies.

2] Here -- students at NUS do a term paper under supervision with a useful exposition.

Observe the discussion from about p 14 on . . .

3] Here -- a midterm test at U Oregon, with solutions. Notice the little point on the two phases of Ba as a crystalline solid.

I hope this will help on the significance of the point that the issue is available configurations of both matter and energy at micro-level, however many challenges there may be over how such can be counted. [Remember, a not-less-than count is in its own place quite legitimate . . .]

GEM of TKI

The Pixie said...

Re comments on thermodynamics issues:

1] I note again: entropy from the statistical viewpoint, relates in the first instance to the statistical weight of macrostates. This can have quite direct informational implications [as Brillouin and Robertson discussed], that speak not just to bond energies, and vibration-rotation-translation modes, but also to structural configurations.
But only where those structural configurations have energetic implication. As I showed mathematically last time, if there is no change in energy, there can be no change in entropy.

In the case of functional nanotechnologies, whether nanobots or micro-jets as thought expts or real world DNA, proteins, enzymes, ribosomes and cells, the information in the structure puts the elements and systems into definable macrostates that have rather low statistical weights, comparatively speaking.
But this is only relevant for macrostates that are energetically different.

The effort seen in nature to assemble these elements shows just how low-entropy they are and how inaccessible to mere chance processes.
Perhaps you can take me through the logic here.

Scientists have done experiments with mice recently in which they knock out whole sections of "junk" DNA. The mice still grow normally. Would you say that the replication of junk DNA requires less effort than the replication of coding DNA? I think not. I believe that the replication process is identical in either case. Think about what happens during that process. The replication system has no way of knowing if it is producing junk DNA or coding DNA. It just behaves the same either way. It takes no more work to produce coding DNA. There is no less entropy in coding DNA.

3] But on your direct point: the configuration space of outcomes of lottery balls does not have the sort of direct functionality such that we can define a macrostate with associated statistical weights of microscopic states of molecular scale particles in the relevant sense.
But think of the DNA replication. In that specific system, neither the junk DNA nor the coding DNA have direct functionality. It is only when the DNA has moved on, out of the replication system, that functionality is an issue.

DNA can be synthesised in the lab (eg see here). You can produce DNA with known functionality, or DNA that has no functionality. The DNA has no direct functionality, of course. One DNA defines a protein that is functional, the other a protein that is non-functional. I believe the work required is the same either way.

Hmm, suppose we stop the DNA synthesis of the functional DNA when it is nearly at the end... This will now produce a non-functional protein (let us suppose). So the DNA is, if we believe you, high in entropy, just like the original non-functional DNA. It is only when the DNA sequence is finished that the entropy suddenly drops. What this means is that you could measure the entropy of the DNA sequence as you build it, and determine whether or not it was functional, without the coded protein ever existing, either naturally or not.

The very fact that people do not routinely do this to discover new functional proteins tells me this is wrong.

5] Stepwise process. I have already summarised that to clump then configure has in it two steps of incremental ordering work:
Yes, but I was specifically looking at the second stage, configuring the clumped parts, and further breaking that into steps.


6] It requires energy to reliably communicate information. It requires energy to search a space to find relevant particles: jet/non-jet. It requires work to clump them. It requires work to further identify and orient then configure the parts to form a flyable micro-jet.
Okay, so go through each step that I listed before and explain why the energy is different when rearranging pieces to make a functional nano-plane, as opposed to junk.

7] Pix: What Boltzmann did was show that entropy from S = k ln W is the same as entropy from dS = dQ/T. If your entropy is different to dS = dQ/T, then it must therefore follow that your entropy is different to Boltzmann's.

What Boltzmann did was to first identify a statistical weight of macrostate metric, then he linked it to classical entropy through a log measure and appropriate constant of proportionality.
Are you agreeing with me here or disagreeing? The constant of proportionality just converts to conventional units.

8] Of course, equally, I keep showing you just how the energy is moving to go from scattered to clumped then configured states. In each step, the number of accessible microstates consistent with the emerging macrostate falls.
I keep asking you to show me how the energy is moving, you keep talking about microstates and macrostates.

First, we move from the number of cells in a vat ~ 1 m across, to the clumped state that is of order ~ 10^-2 or less m across, then we move from the still astronomical number of clumped microstates, to the relatively few that are flyable.
See again nothing about how energy is moving.

9] A 1 m across vat full of ~ 10 - 100 million particles small enough to be part of brownian motion will by overwhelming improbability not be likely to -- sitting by itself -- produce a clumped and configured micro-jet that flies.
Okay. But this is a discussion about the second law. What is the change in entropy? What is the movement of energy?

This speaks strongly to how such a spontaneous reduction in the entropy of the vat or even the clumped lump of parts is not likely to occur. In turn that goes back to the point that the net entropy of classically scaled systems is at least constant despite spontaneous energy flows. Programmed energy flows, of course, can produce local regions of increased order or reduced entropy, but as Brillouin points out, the price in entropy elsewhere more than compensates.
I am not entirely following this. Yes, the clumping of parts is unlikely to occur, as the thought experiment is set up in that way. I do not get the next sentence. And non-programmed energy flows can produce regions of reduced entropy, think back to A in the isolated A and B system. If A is a hot iron block, and B is cold water, then A will not only decrease in entropy, it will end up with less entropy than the water (though obviously the same temperature).

My micro-jet thought experiment brings out the result in a more familiar setting.
Well, no, it does not. Micro-jets and nanobots do not exist, and I am not certain they ever will. So how can you describe that as a familiar setting?

10] I am not arguing about the general case, but about certain relevant applications of the relevant physics of microstates consistent with given, functionally identifiable macrostates.
Great. So we are agreed that there is no general law that relates the entropy of the system to its information content.

11] Again, bicycles show functionally specific, complex information. They are one of many known cases that show that intelligent agency produces FSCI. The cell also exhibits FSCI, and indeed, at a far more sophisticated level than in a bicycle.
What you seem to be doing is picking a very specific property that bikes and cells apparently share (FSCI), noting that bikes have a second very specific property (designed by an intelligent agent), and inferring that cells must therefore also share this property. Kind of like noting that tables and cows both have four legs, therefore you can get milk from a table. Yes, okay, that is an absurd example, but as far as I can see it is quantitatively different, not qualitatively.

Step by step:

a] In all observed cases, FSCI is the product of agency with a physical body

b] Debatable.

c] Would you say the second law was NR or chance?

d] On an inference to best explanation [IBE] basis we can rule NR out as a dominant force as we are dealing with contingency -- outcomes from a large configuration space here.
Talk me through this. I would have thought NR can cope with contingency. If the rock gets too close to the edge then it will inevitably fall. And I am not sure how the last bit relates to that.

e] We really do not know enough to be able to estimate the probability reliably. And what about the combination of chance and NR? That has to be the favourite, and yet you are ignoring it.

f] Agency is known to routinely generate FSCI-based functional systems, so
g] Agency is the best explanation relative to what we DO know.
Humans are known to routinely generate FSCI-based functional systems. Therefore the best explanation we have is that humans created life on Earth. Personally, I doubt that, but that is where the logic takes us.

14] "The thermodynamic probability of a given state of a certain system . . . "
So what? I know the second law is based on improbability. Which bit of the text supports your claim? I think the last bit supports my case that entropy is about energy and energy only (accepting that entropy can be related to a measure of our knowledge of that energy).

The Pixie said...

Hmm, has my last post got lost? I am sure I submitted it properly. Anyway, a quick response to your latest, with the three links...

Bertrand says "The part of entropy which is determined by energetic freedom is called thermal entropy, and the part that is determined by concentration is called configurational entropy." So far so good. But later he says: "If the volume of the gas is increased while the temperature remains constant, the energy of the gas does not change and there is no change in thermal entropy. The increase in volume lowers the concentration and there is an increase in configurational entropy. "
Bertrand relates configurational entropy to concentration, not to information! I linked before to a Wiki page on statistical mechanics (I think; anyway, here it is again). From that page: "Configurational energy refers to that portion of energy associated with the various attractive and repulsive forces between molecules in a system." I believe Bertrand is referring to this sort of configurational entropy, given that this will change when the concentration changes, while the information does not.

I read the report from NUS, and I agree that they support your position. It did, however, strike me that their claims get stuck with oil and water. It is of course well known that oil and water do not mix. Shake them up vigorously, and they spontaneously separate! And yet (on page 25) they say: "Since mixing always corresponds to an increase in configurational entropy, and mixing involves the dilution or spread of matter in space, we conclude that: Spreading of matter in space is spontaneous as configurational entropy increases. " Their prediction fails! We could also try this prediction for saturated salt solution dissolving more salt. They predict the salt will dissolve.

As for the exam question, why do you think that supports your position?

Gordon said...

Hi Pixie:

I respond on points:

1] only where those structural configurations have energetic implication

Actually, free expansion is a case in point where there is no shift in energy per molecular degree of freedom etc, but due to config shift, there is a rise in entropy. Besides, in the micro-jet and cellular cases, there is an energetic difference related to functionality/non functionality.
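The free expansion case can be put in numbers. For an ideal gas the standard result is that U and T do not change and no heat flows, yet entropy rises through the configurational term alone; a minimal sketch (one mole doubling its volume, values assumed for illustration):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def dS_free_expansion(n_mol, V1, V2):
    # Free expansion of an ideal gas: no heat flow, no work, no change
    # in U or T, yet entropy rises purely configurationally:
    # dS = n R ln(V2 / V1)
    return n_mol * R * math.log(V2 / V1)

dS = dS_free_expansion(1.0, 1.0, 2.0)  # doubling the volume
# ~ +5.76 J/K for one mole, with no shift in energy per degree of freedom
```

That is the textbook illustration that entropy is not exhausted by energy bookkeeping alone.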

In short, I am pointing out that entropy is not only a measure of energy unavailability, but also of degree of disorder, linked to availability of degrees of freedom of behaviour.

Being constrained to a particular configuration tied to observable functionality leads to a tightly defined, i.e. quite constrained, macrostate. This is of low entropy compared to the clumped state, much less the scattered one.

This further reveals itself in the energy and information required to process the components to get to the functional states. [Which also manifests itself as in effect being endothermic in formation.]

2] Scientists have done experiments with mice recently in which they knock out whole sections of "junk" DNA. The mice still grow normally. Would you say that the replication of junk DNA requires less effort than the replication of coding DNA?

This is not relevant to the point we are discussing in the main, as we have not fully decoded DNA, but already know enough to know there is redundancy connected to error correction capacity. "Junk" is a misleading term, as there is growing evidence that a lot of the regulatory functions required are in these regions [starting with the high degree of conservation of the code in some of these regions etc.].

We do know that in a great many cases, single point errors in coding regions so-called, have serious consequences, a point also seen in the ease with which proteins can be destabilised by random substitutions.

In the relevant case, the micro-jet: we see that it takes information-based, highly energy consumptive processes to reliably [or even feasibly!] get from scattered to clumped states, and thereafter to functional configurations.

The case with the cellular assembly mechanisms underscores this as not a mere happenstance.

3] In that specific system, neither the junk DNA nor the coding DNA have direct functionality. It is only when the DNA has moved on, out of the replication system, that functionality is an issue . . . . The DNA has no direct functionality, of course.

DNA is a core part of the nano-assembly system of the cell.

To focus on the process of its replication and to assert that there it is "non-functional," or to assert that it has no "direct function," makes no sense.

If the replication mechanism proved incapable of preserving and transmitting the information from generation to generation, then you would have a point -- but obviously, an enormously complex and energy-consumptive process is used to preserve and transmit DNA and its information.

Further to that, in the cell, DNA has a very direct and vital function: information storage and transmission, which in large part controls the energy-converting mechanisms that run the cell.

Where there is an issue, it does not run the way you imagine. For, we see here a CODE at work, and the associated algorithms that make it operate. That is, we are seeing layers of software complexity on top of the physical complexity of the molecular hardware -- and without which the molecular hardware is in effect pointless. The implied further improbabilities of chance origination and of the known source of software [intelligent action], are telling.

5] It is only when the DNA sequence is finished that the entropy suddenly drops

We are looking at a functional entity, which exhibits complex, specified information. That is inherent in its macrostate as a molecule.

6] Okay, so go through each step that I listed before and explain why the energy is different when rearranging pieces to make a functional nano-plane, as opposed to junk.

I have already outlined the relevant states to creating a functional entity. Again, the functionality is relevant to identifying the macrostate. [That is, there is a specific, functional difference between a clumped at random state and a functional configuration; connected to the collapse in the range of possible microstates associated with the two alternatives. So hard is it to access the functional state by chance that the only practical way to get there is by assembly based on programmed information, which is of course manifested in the resulting flyable micro-jet.]

6] Boltzmann:

By providing [with Gibbs] a statistical framework, he was giving the underpinnings of the classical result, refining and extending it. For instance, consider the case of free expansion and why it results in a rise of entropy -- the number of ways matter and energy can be arranged at micro-level has increased, so entropy rises.

7] I keep asking you to show me how the energy is moving, you keep talking about microstates and macrostates.

That is a little unfair.

I have from the very outset at UD -- and excerpted above -- showed that work is required to assemble particles from scattered to clumped at random states, then to further go on to the configured state. I then pointed out how the statistical weight of the macrostates collapses as the numbers of possible microstates associated with the different stages changes.

This in turn has been quite consistent with the points made by Sewell, as excerpted and linked.

Specifically, as discussed in earlier points in this post: the macrostate is connected to observable outcomes: scattered/clumped, non-functional [by overwhelming probability]/functional. To shift from one to the other in a way that collapses W requires work -- spontaneously, either the clumped or the flyable states will eventually revert to the scattered, i.e. we see why maintenance is so important. Work is done when forces move their points of application in orderly ways. In this case, the work is countering random kinetic energy [the micro-jet parts are in effect giant molecules subject to Brownian motion] and against viscosity [which is in large part another effect of molecular randomness].

So, I HAVE given -- several times -- an account of how the work is done to clump then configure the parts. And, if ABCD is functional, but BCDA is not, then moving from the second to the first is part of the work of collapsing the macrostate towards functionality.

If we use nanobots to randomly reassemble the flyable jet [ABCD to any other config, for instance], that would be comparable to loosing a bull in a china shop -- accelerating the increase in the degrees of freedom by putting in perverse "work."

By the way, that is precisely one of the major concerns about nanobots: their potential as weapons of a peculiarly horrible kind. Sadly, that is a major human enterprise, using old fashioned technologies -- bombs etc.

8] this is a discussion about the second law. What is the change in entropy? What is the movement of energy?

First, entropy, as already repeatedly pointed out, excerpted and linked, goes beyond just energy movements to also configurations and availability of degrees of freedom.

So, work that collapses the available numbers of microstates in observably different macrostates is flowing to reduce entropy. For S = k ln W, and W relates to the statistical weight of a given macroscopically observable state. What Boltzmann showed was that his definition -- arguably more fundamental than the Clausius one as it gives its underpinnings in molecular behaviour -- is consistent with the classical result and shows its limitations [i.e. the question of fluctuations].

9] Yes, the clumping of parts is unlikely to occur, as the thought experiment is set up in that way. I do not get the next sentence. And non-programmed energy flows can produce regions of reduced entropy, think back to A in the isolated A and B system

At the first step, you accept that there is an energetically uphill, improbable process involved in clumping scattered parts. Now, equally, if the parts are merely clumped at random, that is ALSO energetically and functionally downhill from a flyable micro-jet.

In the case of an isolated system with a hot metal body immersed in cold water, the loss of entropy on the part of A is simply due to cooling -- transfer of random molecular energy to B, which so increased its number of accessible microstates that the overall entropy is higher at the end.
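That entropy bookkeeping can be sketched numerically (the masses, specific heats and temperatures below are assumed, purely for illustration): the hot block loses entropy as it cools, the water gains more, and the isolated system's total rises.

```python
import math

# Assumed: 1 kg hot iron block at 500 K dropped into 1 kg water at 300 K
m_c_iron  = 1.0 * 450.0   # mass * specific heat of iron, J/K
m_c_water = 1.0 * 4186.0  # mass * specific heat of water, J/K

# Final common temperature from the energy balance
Tf = (m_c_iron * 500.0 + m_c_water * 300.0) / (m_c_iron + m_c_water)

dS_iron  = m_c_iron  * math.log(Tf / 500.0)  # negative: the block cools
dS_water = m_c_water * math.log(Tf / 300.0)  # positive: the water warms

dS_total = dS_iron + dS_water  # net entropy of the isolated system rises
```

So A's local entropy drop is real but paid for, with interest, by B -- which is the point both sides accept about non-programmed flows.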

But, this is not the case we are examining. We are looking at FUNCTIONAL CONFIGURING and its impact on microstates. The micro-jet parts are not merely heated or cooled [in fact the work done on them would very slightly heat them up, which would in large part then go to the liquid in the vat, helping to raise its entropy] but clumped then configured to a functional entity.

In so doing the numbers of microstates compatible with the observable macrostate collapse dramatically. So long as we have observable macrostates and relevant shifts in statistical weights we can make reasonable statements about movements in entropy based on S = k ln W. That, I have repeatedly done.

10] Do nanobots or micro-jets exist?

We are dealing with thought experiments along the lines of the Maxwellian demons, and the Einsteinian taking a ride on a beam of light.

Going back further, Galileo established the principle of inertia by starting with U-shaped grooves in which balls rolled. He observed that the smoother the groove, the more the ball tended to rise back to its original level on release to roll down the U. Then he said: what if we stretched out the U so that in effect there is no rise? (Thus, we see that there would be a natural continuation of motion in the absence of frictional forces [which of course tie back into entropy . . .]. Inertia of motion.)

Frictionless, infinitely long grooved tracks are no more real than are men who can ride beams of light or demons who can watch molecules and open trapdoors to let them pass through. But, by contrast, micro-jets and nanobots are a matter of time to create the technology.

More to the point, such thought experiments have played, repeatedly, vital roles in the progress of physics. I have in my analysis violated no laws of physics, and have only showed that clumping and configuring collapse W in S = k ln W. That, together with the "need" for programmed energy flow to "practically" achieve the result, makes my point.

10] Kind of like noting that tables and cows both have four legs, therefore you can get milk from a table. Yes, okay, that is an absurd example, but as far as I can see it is quantitatively different, not qualitatively.

The relevant comparison is: Bikes and micro-jets and DNA molecules etc have FSCI, which -- IN EVERY OBSERVED CASE WHERE WE KNOW THE CAUSAL STORY -- is known to be a product of agency. Further to this, we know that due to the statistical behaviour of matter and the comparison of statistical weights of macrostates, it is maximally improbable to get to FSCI by mere random processes or search techniques reducible thereto. So, in the cases where we do not directly observe cause-effect, we make an inference to the best explanation: agency.

Tables do not have to have four legs [my old family dining tables had two], not even cows [amputations . . .]. Getting milk from a cow is not directly related to how many legs she has. For excellent biological reasons, it is connected with her status as a female, mature mammal who has had a calf [in the usual case].

In the case of FSCI, I am similarly not confusing correlation with causation, but have given a causal account tied to the issue of searching configurational spaces.

FSCI implies that we cannot reasonably search the config spaces at random or by means reducible to that, but we do know that agents directly and even routinely generate FSCI-based systems. In fair comment: for all the objections about the work to configure -- which have been repeatedly answered -- you have simply never properly addressed this from your perspective.

The objection that FSCI is produced in observed cases by agents that have physical bodies is irrelevant to the nature of intelligence and agency, as I have already pointed out. In summary: agency and corporeality are not at all necessarily connected, starting with the issue that the observed cosmos is a contingent, finitely old being.

So, to insist on corporeality of agents in absence of addressing the relevant alternatives on a comparative difficulties basis, is to beg a metaphysical/worldviews question. For instance, evolutionary materialist views cannot credibly arrive at a mind recognisable from our experience of consciousness and choice.

12] Second law

This is anchored in the statistical properties of matter at micro level, as I have repeatedly discussed. After what has already happened above, the implication from your point, that I don't know that, is snide.

13] Combining chance and natural regularities

This of course routinely happens in all thermodynamically relevant situations.

In the case of OOL, Kenyon in the 1960's argued that there was biochemical predestination, but took occasion of the publication of TMLO to publicly recant. This is because Bradley et al showed that there is not a credible pattern of bonding across protein monomers that is tied to bonding forces. Cf Ch 9.

More generally, ability to store and transmit information is based on contingency, which is exactly what NR does not by itself do. Contingency is produced by two known forces: chance and agency.

Chance is incapable of searching the config spaces in view efficiently enough to be a credible source. This is seen from the microjets case and from the OOL issues. As a parallel case, the infamous million monkeys banging away at random on keyboards will not create even one page of Shakespeare in the lifetime of the known cosmos. The point was first highlighted in a similar case by Cicero -- as appears at the very top of my always linked page.
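
The scale of the improbability in the monkeys-at-keyboards case can be sketched with a quick log-scale estimate. The page length, alphabet size and trial budget below are illustrative placeholders, not figures from the discussion:

```python
import math

# Illustrative assumptions: a "page" of 2000 characters drawn from a
# 27-symbol alphabet (26 letters plus space).
ALPHABET = 27
PAGE_LEN = 2000

# Size of the configuration space of possible pages: 27^2000.
log10_space = PAGE_LEN * math.log10(ALPHABET)

# A deliberately over-generous trial budget: ~10^80 atoms, each
# "typing" 10^45 pages per second for ~10^25 seconds.
log10_trials = 80 + 45 + 25

print(f"config space ~ 10^{log10_space:.0f}")
print(f"P(success) < 10^{log10_trials - log10_space:.0f}")
```

Even with that absurdly generous budget, the exponent of the success probability stays enormously negative, which is the point being made about searching config spaces at random.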

14] has my last post got lost?

Yesterday was Sunday, and it was broadcast service, so we headed out extra early . . . I am trying to take a net break on Sundays.

15] Etc.

The NUS case is of course a student paper. I cited it as a secondary example, to show that the concept of configurational entropy is a commonly seen one.

Similarly, the mid-term test.

The key point in that one was the Ba phases -- reconfiguration of a metal crystal structure can shift its entropy, at the same temp; of course with due injections of energy to promote the shift in state. The states are observable through X-ray crystallography, of course.

In my cases the states are observable through optical microscopes and functionality. [Ironically, an optical microscope is macroscopic in our sense! Brillouin has a point in using ULTRA-microscopic . . .]

GEM of TKI

The Pixie said...

Hi Gordon

Not sure if it is clear, but I number my points the same as yours (hopefully to make it easier to reference your point). I have missed quite a few this time around as I would just be repeating myself.

1] Actually, free expansion is a case in point where there is no shift in energy per molecular degree of freedom etc, but due to config shift, there is a rise in entropy.
Free expansion lowers the energy levels associated with translation, so they can be more readily filled; there are therefore more ways the energy can be distributed. Thus the entropy increase.
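
For the classical ideal gas, that entropy rise has a simple closed form, dS = nR ln(V2/V1), which both the energy-level picture and the configuration picture must reproduce. A minimal sketch, with the one-mole, volume-doubling numbers as illustrative choices:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def free_expansion_entropy(n_mol, volume_ratio):
    """Entropy change for isothermal free expansion of an ideal gas.

    No heat flows and no work is done, yet entropy rises because each
    molecule gains access to more volume: dS = n * R * ln(V2/V1).
    """
    return n_mol * R * math.log(volume_ratio)

# One mole doubling its volume:
print(free_expansion_entropy(1.0, 2.0))  # ~5.76 J/K
```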

Besides, in the micro-jet and cellular cases, there is an energetic difference related to functionality/non functionality.
There is? What is that?

In short, I am pointing out that entropy is not only a measure of energy unavailability, but also of degree of disorder, linked to availability of degrees of freedom of behaviour.
Yes, because the degrees of freedom determine how readily energy can be distributed.

Being constrained to a particular, observable configuration tied to functionality leads to a tightly defined -- i.e. quite constrained -- macrostate. This is of low entropy compared to the clumped state, much less the scattered one.
Only if that configuration has an impact on how energy can be distributed. If it stops the molecule rotating, then that cuts down on the degrees of freedom, so entropy will be lower. I cannot see why a functional nano-plane will have fewer degrees of freedom for the energy modes than the non-functional assemblies.

Yes, there is a restriction on what configurations are allowed, but that is entirely different.

3] Pix: In that specific system, neither the junk DNA nor the coding DNA have direct functionality. It is only when the DNA has moved on, out of the replication system, that functionality is an issue . . . . The DNA has no direct functionality, of course.
DNA is a core part of the nano-assembly system of the cell.
Yes, I know. Perhaps I should ask the question: what exactly do you mean by "direct functionality"?

5] Pix: It is only when the DNA sequence is finished that the entropy suddenly drops
We are looking at a functional entity, which exhibits complex, specified information. That is inherent in its macrostate as a molecule.
Not sure what your point is here. Are you saying cells have low entropy because they are functional entities, even though all their component parts may be high entropy, as each on its own is non-functional? That would imply that human DNA is not of lower entropy than the simple repeating pattern, which would seem to be at odds with your earlier posts.

Bear in mind I was talking about in vitro synthesis of DNA. Are you saying that any such DNA will have the same entropy (which I would agree with); it is only when it is in a functional system that it becomes low entropy? I.e., the entropy of the molecule depends on the system that it is in?

6] Again, the functionality is relevant to identifying the macrostate.
So suppose we get a new batch of nanobots, but these are faulty. They do the same pattern recognition, determine what rearrangement is needed just the same, but due to an error, they do random rearrangements. What is the difference in energy and entropy at each step of the process now? Here are the steps again, for reference:

D-A-B-C + nanobot -> nanobot-[D-A-B-C] ... (nanobot associates with sequence)
nanobot-[D-A-B-C] -> nanobot*-[D-A-B-C] ... (nanobot determines that D must be moved to other end)
nanobot-[D-A-B-C] -> nanobot-C + D-A-B ... (nanobot takes off random component)
nanobot-C + D-A-B -> D-A-B + C-nanobot ... (nanobot moves random component to other end)
D-A-B + C-nanobot -> [D-A-B-C]-nanobot ... (nanobot attaches random component)
[D-A-B-C]-nanobot -> D-A-B-C + nanobot ... (nanobot departs)
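
The contrast being asked about can be sketched as a toy simulation. The four-component chain and the move-a-random-component rule below are illustrative assumptions, not anyone's stated model; the point is that the faulty nanobot does the same recognition work, yet only one of its possible moves lands on the target configuration:

```python
import random

START = ["D", "A", "B", "C"]
TARGET = ["A", "B", "C", "D"]

def programmed_step(seq):
    """Programmed nanobot: moves the component at the front (D) to
    the other end, as the pattern recognition dictated."""
    return seq[1:] + seq[:1]

def faulty_step(seq, rng):
    """Faulty nanobot: same recognition work, but the rearrangement is
    random -- a randomly chosen component is moved to the end."""
    i = rng.randrange(len(seq))
    return seq[:i] + seq[i + 1:] + [seq[i]]

rng = random.Random(42)
print(programmed_step(START) == TARGET)  # True, every time

hits = sum(faulty_step(START, rng) == TARGET for _ in range(10_000))
# Only moving D itself (1 move in 4) yields the target, so the faulty
# nanobot succeeds roughly a quarter of the time on this tiny chain --
# and that fraction collapses as the chain gets longer.
print(hits / 10_000)
```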

9] In the case of an isolated system with a hot metal body immersed in cold water, the loss of entropy on the part of A is simply due to cooling -- transfer of random molecular energy to B, which so increased its number of accessible microstates that the overall entropy is higher at the end.
Yes, I know it is due to cooling. Nevertheless, it is an example of entropy decreasing without programming. And whether programming is present or not, the overall entropy must be higher at the end. So what is the relevance of programming?

10] More to the point, such thought experiments have played, repeatedly, vital roles in the progress of physics. I have in my analysis violated no laws of physics, and have only shown that clumping and configuring collapse w in s = k ln w. That, together with the "need" for programmed energy flow to "practically" achieve the result, makes my point.
The problem with your nanobots is that I have little idea of their thermodynamics. Einstein's beam of light is very simple in its properties, so works well as a thought experiment. In contrast, Maxwell's demon, like your nanobots, is an unknown quantity. Maxwell proposed his demon to disprove the second law. However, I believe there are no exceptions to the second law; I believe Maxwell's demon smuggled some trickery into the system, which let it hypothetically overcome the second law. I am concerned that your nanobots do the same (not by deliberate effort on your part, just as I doubt Maxwell was deliberately setting out to trick people with his demon).

10] So, to insist on corporeality of agents in absence of addressing the relevant alternatives on a comparative difficulties basis, is to beg a metaphysical/worldviews question. For instance, evolutionary materialist views cannot credibly arrive at a mind recognisable from our experience of consciousness and choice.
Surely it draws on our direct experience. Just as we know that manufactured goods have all been designed by an intelligent agent, we know they were all produced by material agents. I believe there is good reasoning to suppose that only a physical entity can produce physical entities. What is the difference in the logic?

Are you sure you are not begging an analogous metaphysical/worldviews question by insisting on intelligence to design? What is the difference?

12] Pix: Would you say the second law was NR or chance?
This is anchored in the statistical properties of matter at the micro level, as I have repeatedly discussed. After what has already happened above, the implication from your point, that I don't know that, is snide.
It was not meant to be snide. The second law is based on the idea that energy is randomly distributed. And yet at the macroscopic level, it is entirely deterministic. Is the second law NR or chance? Clearly it is both. And yet, in your point 11 in an earlier post, you eliminated purely chance situations (in e) and purely NR situations (in f), without bothering with the combination (despite mentioning it earlier in c). I find that odd, especially as in your last post you mention the combination again with respect to OOL.

15] The NUS case is of course a student paper. I cited it as a secondary example to show that the concept of configurational entropy is a commonly seen one.
People use "configurational entropy" to mean different things. It can mean the energy mode associated with intermolecular bonding (as in the Wiki page I linked to). In proteins it usually refers to the entropy of when a specific protein curls up in a certain way. This paper uses the term in another manner, which I believe is erroneous. They predict oil and water will mix. Common experience tells us they separate.

The key point in that one was the Ba phases -- reconfiguration of a metal crystal structure can shift its entropy, at the same temp; of course with due injections of energy to promote the shift in state. The states are observable through X-ray crystallography, of course.
Yes, I get that. I just do not see how this supports your position.

Gordon said...

Hi Pixie:

Seems one of your comments went missing -- the one about "having no control." [I approved it, but it has not shown up. Just as there is a grey band in my display of this thread and suspiciously odd behaviour below it -- maybe we have hit a wall for length of threads?]

Also, I am simply numbering excerpts for convenience . . .

Anyway, let me try to respond on points:

1] Free expansion lowers the energy levels associated with translation,

Quantum case -- I was looking at classical, which is also relevant.

2] Energetic differences in Micro-jets: I cannot see why a functional nano-plane will have less degrees of freedom for the energy modes than the non-functional assembles.

First, the increment of work to get to each stage affects the energy of the parts. E.g. clumping removes a lot of the freedom of translational motion, but increases vibrational states. (Vibrations are of course a bane of all serious engineering design -- and contribute e.g. to fatigue in "normal"-size aircraft.) For these "macromolecules" rotation will not be an issue -- the parts are locked together.

Further to this, the configured state will share this same pattern of vibration in general [though different in particular]. Of course the critical, observable difference is that, to a high degree of probability, the clumped state won't fly. The jet state will. That is a vital, energy-related difference: capacity to undergo orderly translation under control.

Associated with scattered --> clumped --> Jet, we see a collapse in W.
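
That collapse can be put into numbers via Boltzmann's s = k ln W. The statistical weights below are pure placeholders for the scattered, clumped and configured macrostates, chosen only to show the direction of the effect:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_entropy_from_log10W(log10_W):
    """s = k ln W, with W supplied as 10^log10_W so that the
    astronomically large weights never overflow a float."""
    return K_B * log10_W * math.log(10)

# Placeholder exponents -- illustrative only, not computed weights:
s_scattered = boltzmann_entropy_from_log10W(1000)  # parts dispersed
s_clumped = boltzmann_entropy_from_log10W(500)     # clumped at random
s_jet = boltzmann_entropy_from_log10W(10)          # flyable configuration

# Each further constraint shrinks W, so the entropy steps down:
print(s_scattered > s_clumped > s_jet)  # True
```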

3] What exactly do you mean by "direct functionality" [of DNA]? . . . . Are you saying cells have low entropy because they are functional entities, even though all their component parts may be high entropy, as each on its own is non-functional?

DNA is an integral, thus functional, component of a sophisticated information-processing system, which is either designed or in Dawkins' coinage: "designoid."

The component parts of cells are low entropy, on the grounds that they are in highly constrained states relative to the available at-random configuration states. The resulting system exhibits functionally specified complex information and integrated information processing that is arguably often irreducibly complex. We cannot properly assess the state apart from that known integration.

That is, through knowing that there is a key-lock and coded interlocking of components, we know W is constrained relative to an at-random configuration space. Note how the components of the cell work together to constrain reaction pathways from being at-random -- a strong indicator that such would be ruinous to biofunction. It also implies that there is a sharply reduced W operating, hence s falls.

The in vitro proposed synthesis of DNA chains [are we talking about 500 k and up there?] is irrelevant to the observed functionality and constraints on W. I suspect, we are here looking at at best clumping, or replication through the "zipper" chain effect of DNA.

4] suppose we get a new batch of nanobots, but these are faulty. They do the same pattern recognition, determine what rearrangement is needed just the same, but due to an error, they do random rearrangements

Already addressed in the original thought expt as above: at-random changes in the programming. This puts the system into a higher entropy [less constrained] state. The consequence of that, is degradation then loss of function. [This has serious implications for body-plan level macro-evolution. Cf. the cite from McDonald in Meyer as excerpted in my remarks here.]

Of course, by and large the observed cases of micro-evolution are based on partial loss of information and functionality. Beyond a certain point, the changes become lethal. That is why the classic exercises to enhance mutation rates by irradiating pine trees etc, led to no particularly impressive result.

5] whether programming is present or not, the overall entropy must be higher at the end. So what is the relevance of programming?

The programming, at the expense of course of degrading the environment, creates a zone of order and organisation which is low-entropy. Such a zone, due to the complexity involved, would not otherwise come into existence, owing to a very high probability barrier.
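
The second-law bookkeeping implied here can be made explicit: a local drop in entropy is paid for by exporting at least as much entropy, via waste heat, to the surroundings. The joule and kelvin figures below are invented purely for illustration:

```python
def total_entropy_change(dS_zone, q_waste, t_sink):
    """Bookkeeping for a programmed converter that builds a zone of
    order while dumping waste heat into its surroundings.

    dS_zone : entropy change of the ordered zone, J/K (may be < 0)
    q_waste : waste heat exhausted to the surroundings, J
    t_sink  : temperature of the surroundings, K

    The 2nd LOT only demands that the returned TOTAL be >= 0.
    """
    return dS_zone + q_waste / t_sink

# Invented numbers: the zone's entropy falls by 2 J/K while 900 J of
# waste heat is exhausted into a 300 K environment (+3 J/K there).
print(total_entropy_change(-2.0, 900.0, 300.0))  # 1.0 -- net rise
```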

6] The problem with your nanobots is that I have little idea of their thermodynamics.

Maxwell's demon posed a paradox based on intelligent action. (No trickery was involved.)

Brillouin and others showed that the relevant factor is that once the demon needs to interact physically with the system to manipulate it, he feeds entropy and so waste energy into the wider system that more than compensates for the order he creates.

Oddly, this raises the interesting point that a non-physical omniscient, omnipotent entity that knows everything directly can reorder systems without increasing entropy; that is, God is not constrained by 2nd LOT! (Maxwell would have been happy with that. Maybe there is a reason he spoke of demons . . .?)

The existing nanobots in the cell, we know, are based on physical and chemical interactions, and we can estimate the relevant entropy and energy relationships. The proposed nanobots would be similar, as I have discussed -- nowhere have I claimed that they frustrate 2nd LOT; just the opposite.

At the in-the-large level, we know that people and computer-controlled systems, in working, are able to create zones of order but exhaust waste heat and materials, aka pollution. The key further point is that such systems exhibit FSCI and raise the issue of intelligent origin.

7] Just as we know that manufactured goods have all been designed by an intelligent agent, we know they were all produced by material agents. I believe there is good reasoning to suppose that only a physical entity can produce physical entities.

Do we know that? [Your assertion implies just what we do not know: an infinite chain of material cause-effect bonds.]

We observe that the cosmos as we experience it is credibly finite in duration, going back -- was it 13.7 BY on the typical estimates? That is, we have reason to infer it is contingent -- not necessarily existing.

That in turn raises the question, whence?

When it comes to explaining contingent beings in a contingent cosmos, that leads to the issue of necessary beings, through the principle of sufficient reason. In effect, if something begins to exist, it has a cause, ultimately tracing to a necessary being that explains its own existence. The notion that that necessary being is a wider material cosmos as a whole runs into serious difficulties, much more so than the inference to a Creator. [Cf my brief discussion here and here.]

That is, I have set up the comparative difficulties issues discussion here [as I also previously outlined in this thread] and am therefore precisely not begging metaphysical questions.

As also noted above, the mind we have and use also raises similar questions that lead on materialist assumptions to very hot philosophical waters.

Since I am not committed to evo mat, I can simply take the datum for granted: we experience the reality of agent action every day, and see direct reason to believe that mind can influence matter. (Indeed Max, the demon's usual nickname, gives us a good model of thought on that!)

Finally on this, information is not a simple physical entity -- it has properties that are vastly different from physical bodies' properties: it can be in multiple places at once, it is true/false in some cases, etc etc. It may be stored and transmitted using physical entities and phenomena, but it is not itself the same sort of thing. And, we cannot operate as intelligent beings without using information . . .

8] Design inference and IBE

Observe, I have asserted that design is the best explanation, i.e on a comparative basis across comparative difficulties. These include factual adequacy, coherence and explanatory elegance vs simplisticness or ad hocness concerns. The reasoning is provisional and defeatable in principle, though formidable to overthrow in fact.

This invitation to IBE relative to comparative difficulties is just the opposite of begging phil [and linked phil of science and linked science] questions.

9] at the macroscopic level, it is entirely deterministic. Is the second law NR or chance

The 2nd LOT is a probabilistically anchored natural regularity that holds with increasing reliability as the number of accessible microstates in the components of a system rises. In short, fluctuations damp down as that predominant cluster of microstates emerges.
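
The damping can be given a rough scale: for N weakly coupled particles, relative fluctuations about the predominant macrostate go as roughly 1/sqrt(N). This is a textbook stat-mech estimate, sketched here with illustrative particle counts:

```python
import math

def relative_fluctuation(n_particles):
    """Rough relative size of fluctuations about the predominant
    cluster of microstates: ~ 1/sqrt(N) for N weakly coupled parts."""
    return 1.0 / math.sqrt(n_particles)

# From 100 particles up to a mole: fluctuations shrink from ~10%
# to a part in a trillion, which is why the law looks deterministic
# at laboratory scale.
for n in (1e2, 1e6, 6.022e23):
    print(f"N = {n:.3g}: relative fluctuation ~ {relative_fluctuation(n):.1e}")
```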

For most practically scaled systems of interest, the classical calculation is more than good enough.

Stat mech goes beyond the veil to reveal the underlying probabilistic issues.

This, I have repeatedly discussed or pointed to or implied or excerpted. A first course in statistical mechanics will address it, and more and more so will a first course in thermodynamics that touches on 2nd LOT.

10] Referring to Sat Apr 21 post: in your point 11 in an earlier post, you eliminated purely chance situations (in e) and purely NR situations (in f), without bothering with the combination (despite mentioning it earlier in c). I find that odd

You will see that I set the comment in the context of accounting for CONTINGENT situations. NR dominates in situations that are deterministic, and non-contingent at the relevant scale.

For instance, once the necessary and sufficient conditions: heat, fuel and oxidiser are present, a fire will exist. This is NR [and one that through the role of heat ultimately rests on the link from the micro to the macro-worlds!].

In a contingent situation, many possible outcomes can happen. For instance, with the 26 letters many sentences can be composed or many random chains of text can be put down. In such situations agency or chance -- or sometimes both, predominate. That is why I spoke to chance and agency as the critical issues on Sat last.

You will also recall my often-used e.g. of heavy objects falling under the NR gravity. If the object is a die, then the number on top is chance-driven [for all practical purposes], and the die can be tossed as part of a game. That is, the three can be integrated.

But, once contingency is there, it is chance or agency that prevail. Where FSCI exists, as repeatedly shown, chance cannot credibly access the islands of functionality in the config space in the gamut of the observed cosmos. Thence, we note that agents routinely produce such FSCI.

So on IBE, I infer to design to explain e.g. OOL, and body-plan level biodiversity, as well as cosmogenesis of a finely tuned cosmos. I find this the elegant solution.

11] People use "configurational entropy" to mean different things . . .

The common core to these things is that entropy is not just tied to micro-distributions of energy but also to microdistributions of mass.

I have pointed out, repeatedly, that in certain relevant cases, we can macroscopically (and energetically) distinguish between states dependent on mass configurations. When we look at the associated statistical weights, we see that clumping and configuring collapse W and so reduce s.

The Ba phase issue is a case in point: indeed, iron forms different crystal structures too, with different energetic properties.

I have given adequate reason to see that TBO [and others] are right in identifying micro-distributions of matter as relevant to degree of order/disorder, and having implications for the chemical and physical behaviour of the relevant entities.

The thought experiment and the observed nano-tech of life are examples in point. Ba and other metals etc show the same point. Indeed, the common observed phase change phenomenon is along the same lines.

More broadly, I have gone through the reasons why it is credible to treat functionally constrained changes in configuration as relevant to estimates of entropy and of changes in it.

They show that the thinking in TMLO [which BTW seems to be a commonplace among OOL researchers of differing schools of thought on the question] is sound.

GEM of TKI

Gordon said...

H'mm:

1] Phases:

That oh so humble source, Wiki, on phases:

____________

If there are two regions in a chemical system that are in different states of matter, then they must be different phases. However, the reverse is not true -- a system can have multiple phases which are in equilibrium with each other and also in the same state of matter. For example, diamond and graphite are both solids but they are different phases, even though their composition may be identical. A system with oil and water at room temperature will be two different phases of differing composition, but both will be the liquid state of matter . . . . In general [notes on exceptions . . .], two different states of a system are in different phases if there is an abrupt change in their physical properties while transforming from one state to the other. Conversely, two states are in the same phase if they can be transformed into one another without any abrupt changes . . . .

In more technical language, a phase is a region in the parameter space of thermodynamic variables in which the free energy is analytic; between such regions there are abrupt changes in the properties of the system, which correspond to discontinuities in the derivatives of the free energy function. As long as the free energy is analytic, all thermodynamic properties (such as entropy, heat capacity, magnetization, and compressibility) will be well-behaved, because they can be expressed in terms of the free energy and its derivatives. For example, the entropy is the first derivative of the free energy with temperature.

When a system goes from one phase to another, there will generally be a stage where the free energy is non-analytic. This is a phase transition. Due to this non-analyticity, the free energies on either side of the transition are two different functions, so one or more thermodynamic properties will behave very differently after the transition . . .

___________

In short, really odd possibilities exist, and are tied to micro-distributions of mass not just energy. Both are as a rule found together.

2] NUS and oil-water:

BTW, I cannot find in the NUS paper, any discussion in which they "expect" oil and water to mix: cf your: They predict oil and water will mix.

From p. 18 on they discuss FAVOURABLE mixing in general terms, using the spreading out of a dye droplet [presumably in water].

They then go to a spatial cell model for configuration space, similar to my spreading out of micro-jet etc parts, and then to a simpler model of cells, where they do some calculations.

Kindly point out where they predict that oil and water will under simple circumstances mix.

(And, even if they make an error there, it does not undermine the general force of the discussions from pp. 14 on and 18 on. Are you putting up a strawman here, probably inadvertently? It is easy to find a flaw in a case and then dismiss the case as a whole, even though the flaw may not affect the material force of the argument. I once had to deal with a cultic group that specialised in that . . . More on point, in Kepler's classic work on the elliptical orbits of the planets there was a crucial arithmetical error early on, just about balanced out by an oppositely directed error later in the work; Koestler makes quite a discussion of that in his The Sleepwalkers. As a personal note, I was once embarrassed in a 4th form physics class to find that I had over the years inadvertently conflated location and displacement in my thinking -- it turns out this makes no difference to the onward discussion of kinematics or dynamics . . . locations are changing, so displacement is simply a way of insisting on moving bodies! Similarly, pardon the typos of this old dyslexic -- I have learned the hard way that our visual imaging systems are active, not passive.)

[BTW, too, emulsified oils do spread out in a way in water that is relevant to my discussion i.e. microparticles. Milk is the classic example.]

3] Config and informational molecules and micro jets

My point is to show through the microjets case that configuration can make an entropy difference tied to shifts in W through changing macrostate.

From the macro behaviour changes -- clumped jet parts precipitate, configured ones are flyable -- we know we have changed state. The bonding between particles is more or less similar for both clumped states, but by configuring the particles we can make a difference in how the thing behaves, including energetically -- we have constructed an energy converter.

The similar configuring of atoms and monomers in cells and their parts tells the same story. In short, all the way back to the beginning, TBO have a right to make the analysis they do, and it explains observations in interesting ways.

Not least, we see an explanation for the ongoing -- ~25 years later -- frustration of the OOL researchers.

BTW, Bradley has updated the analysis in the 2004 book edited by Dembski and Ruse, Debating Design. In so doing he addresses Cytochrome C and speaks in information terms directly. Cf his presentation here.

My argument is that the two approaches are both valid, and end up at the same basic point. For, as Robertson, Brillouin et al point out [even though it can be debated, they have the telling point IMHCO], information and entropy issues are indeed linked, though not precisely equivalent.

[Cf Robertson's nuanced discussion. Information is a broader issue than entropy in physical systems. Observe how in his derivation he interprets the mathematical results physically. I have briefly excerpted; I assume you can at least get an interlibrary loan.]

GEM of TKI

The Pixie said...

Hi Gordon

3] Pix: What exactly do you mean by "direct functionality" [of DNA]?
DNA is an integral, thus functional, component of a sophisticated information-processing system, which is either designed or in Dawkins' coinage: "designoid."
Can I assume from this that when you say "direct functionality" for a component in a thermodynamic system, you mean any component that is an integral component of a sophisticated information-processing system, which is either designed or designoid?

From this, I assume that DNA produced synthetically, whatever the sequence, does not have "direct functionality" as it is not part of a sophisticated information-processing system. It is only when the DNA is placed in the right context inside a cell that it gains the property "direct functionality" (if it is functional DNA).

A lottery ticket with the correct numbers on is an integral component of winning the lottery. Thus, I suppose it is a functional component. Hmm, perhaps the lottery sophistication is too low to qualify... What is the threshold? Hhmm, what are the units of sophistication, I wonder (my point being, is there a place in a scientific definition for terms like "sophistication" that are essentially unquantifiable).

The component parts of cells are low entropy, on the grounds that they are in highly constrained states relative to the available at-random configuration states.
This would be true if we used those nanobots to place the component parts at specific locations, whether that resulted in a working cell or not...

The resulting system exhibits functionally specified complex information and integrated information processing that is arguably often irreducibly complex. We cannot properly assess the state apart from that known integration.
But we can with the cell. We can consider what the entropy of the DNA is outside the cell, and inside the cell. I get the impression that, in your opinion, the entropy of the DNA rises abruptly when it is removed from the cell, and drops again when it is replaced. Suppose I put back instead the DNA of another organism; this does not work in the cell, so although the new DNA has the same freedom of movement in the cell as the other, the same energy, the same energy levels, etc., you seem to believe that the entropy is quite different.

The in vitro proposed synthesis of DNA chains [are we talking about 500 k and up there?] is irrelevant to the observed functionality and constraints on W. I suspect, we are here looking at at best clumping, or replication through the "zipper" chain effect of DNA.
I believe DNA sequences can be built up one base at a time, in a computer-controlled synthesis to produce the exact sequence desired. This might be restricted to short lengths, but to extend the idea to 500k and up for a thought experiment seems far more reasonable than supposing the existence of nanobots!

4] Pix: suppose we get a new batch of nanobots, but these are faulty. They do the same pattern recognition, determine what rearrangement is needed just the same, but due to an error, they do random rearrangements
Already addressed in the original thought expt as above: at-random changes in the programming. This puts the system into a higher entropy [less constrained] state. The consequence of that, is degradation then loss of function.
But these nanobots are doing the same work as the ones producing nano-planes. They do the same recognition, the same information processing. They then do rearrangements. The only difference is that the rearrangement is random, ignoring the results of the information processing. As far as I can see, they do exactly the same work, they expend and shift around exactly the same energy. So why is the resultant nano-plane lower in entropy?

Suppose we did this under low-pressure conditions, such that the nano-planes cannot fly. In the system, one set of nanobots are constructing nanoplanes that are non-functional due to the conditions in the vessel, while the other set are constructing nanoplanes that cannot fly in any conditions. What is the entropy of the two types of non-functional nanoplanes? I must assume the same. It is only when the pressure in the vessel rises that, at some abrupt point, one set of nanoplanes becomes functional, and entropy goes down.

5] The programming, at the expense of of course degrading the environment, creates a zone of order and organisation which is low-entropy. Such a zone, due to the complexity involved, would not otherwise come into existence, by a very high probability barrier.
So are you saying that such a zone cannot happen because the second law forbids it? I do not believe that can be the case. The programming has to work within the second law; it can only achieve a local decrease in entropy if the total entropy goes up, just as with a non-programmed process.

Sure, there are plenty of things that a programmed process can do, but a non-programmed process cannot, like build computers. But that is not because the second law forbids it.

This is the fundamental error in Sewell's articles. He argues that a non-programmed process cannot build computers because it is highly improbable; therefore (as I read it) the second law must be stopping non-programmed processes from building computers.

6] Maxwell's demon posed a paradox based on intelligent action. (No trickery was involved.)
But the demon does not exist! It is like a thought experiment involving a perpetual motion machine: "Here is my hypothetical perpetual motion machine. But the second law says a perpetual motion machine is impossible. Therefore the second law is wrong." Personally, I find that reasoning suspect. But it is the same as Maxwell's, as far as I can see.

Brillouin and others showed that the relevant factor is this: once the demon needs to interact physically with the system to manipulate it, he feeds entropy, and so waste energy, into the wider system, more than compensating for the order he creates.
Brillouin spotted the trick.

Oddly, this raises the interesting point that a non-physical omniscient, omnipotent entity that knows everything directly can reorder systems without increasing entropy; that is, God is not constrained by 2nd LOT!
Well of course. An omniscient, omnipotent entity can do anything. That is what omnipotent means!

7] Pix: Just as we know that manufactured goods have all been designed by an intelligent agent, we know they were all produced by material agents. I believe there is good reasoning to suppose that only a physical entity can produce physical entities.
Do we know that? [Your assertion implies just what we do not know: an infinite chain of material cause-effect bonds.]
As I said, "Just as we know..." I.e., we know they were all produced by material agents just as well as we know manufactured goods have all been designed by an intelligent agent. So how does that leave your argument?

When it comes to explaining contingent beings in a contingent cosmos, that leads, through the principle of sufficient reason, to the issue of necessary beings. In effect, if something begins to exist, it has a cause, ultimately tracing back to a necessary being that explains its own existence. The notion that that necessary being is the wider material cosmos as a whole runs into serious difficulties, much more so than the inference to a Creator. [Cf my brief discussion here and here.]
That is way outside this discussion, but I think you miss the combination of random and NR.

9] Pix: at the macroscopic level, it is entirely deterministic. Is the second law NR or chance?
The 2 LOT is a...
This was originally a rhetorical question meant to illustrate that it is really not that helpful dividing the universe into NR, chance and intelligent. There are too many things that blur the lines. The second law is one, evolution is another, a game of poker another.

10] You will see that I set the comment in the context of accounting for CONTINGENT situations. NR dominates in situations that are deterministic, and non-contingent at the relevant scale.
I think I was misunderstanding your use of contingent; I was thinking of "dependent for existence, occurrence, character, etc., on something not yet certain; conditional", whereas you seem to use it to mean "happening by chance or without known cause; fortuitous; accidental" (a sense I had not heard of, and which seems quite opposite to the other meaning) (both definitions from here).

However, that leaves me wondering how you have addressed the combination of chance and NR (accepting that I am still not too sure quite what you mean by contingency).

You will also recall my often-used e.g. of heavy objects falling under the NR of gravity. If the object is a die, then the number on top is chance-driven [for all practical purposes], and the die can be tossed as part of a game. That is, the three can be integrated.
Well, yes, that was the point. You seem to have eliminated chance and NR on their own, but not the combination.

11] People use "configurational entropy" to mean different things . . .
The common core to these things is that entropy is not just tied to micro-distributions of energy but also to microdistributions of mass.
Only in so far as that distribution impacts on energy, in at least two cases. The configurational entropy of proteins, depending on how they curl up, depends on the energy released or taken in when going from one form to another. The configurational entropy that Wiki talks about is down to intermolecular bond energy.

In all the cases when configurational entropy has a legitimate use in thermodynamics, it relates to energy.

I have pointed out, repeatedly, that in certain relevant cases, we can macroscopically distinguish (and energetically distinguish) between states dependent on mass configurations.
If you can energetically distinguish them, I am right with you. E.g., the configurational entropy of proteins.

The Ba phase issue is a case in point: indeed, iron forms different crystal structures too, with different energetic properties.
Right. Different energetic properties. Lots of things do this, like ice, which has many known crystal structures.

I have given adequate reason to see that TBO [and others] are right in identifying micro-distributions of matter as relevant to the degree of order/disorder, and as having implications for the chemical and physical behaviour of the relevant entities.
But to support that you give examples that have, as you say, different energetic properties. In comparison, your argument for DNA is about its being part of a functional system. That is entirely different, and it is bizarre that you think one supports the other.

2] I cannot find in the NUS paper any discussion in which they "expect" oil and water to mix: cf your: They predict oil and water will mix.
By "their prediction", I mean the inevitable consequences, if their theory is correct. They do not mention the oil and water case; if they had, they might have realised their mistake! Page 25 (bold in the original):
"This result based on the cell model is satisfactory to ideal gases, liquid solutions and solutions of solids...

After this healthy dose of mathematics, we finally arrive at equation (48). What we need now is an understanding of how we can conclude that the spatial spreading of matter is favourable. This next step is relatively easy, to much relief. Since mole fractions are always less than one, therefore the term “ln Xi” is always negative. This implies, from equation (48), that S is always positive. Since mixing always corresponds to an increase in configurational entropy [31], and mixing involves the dilution or spread of matter in space, we conclude that: Spreading of matter in space is spontaneous as configurational entropy increases. In essence, the concept of configurational entropy applied to mixing processes helps explain the tendency of matter to spread in space. It is therefore no coincidence that a solution of well-mixed dye does not separate into solute and solvent spontaneously – both substances tend to spread out in space to maximise configurational entropy."

Thus, for the process:

oil + water -> oil/water mixture

They say S is positive, that the "Spreading of matter in space is spontaneous as configurational entropy increases". And yet, put oil and water in a thermos flask (an isolated system), shake it up a lot, and the reverse happens: the oil and water separate out. Either entropy has decreased in the isolated system and the second law has an exception at the macroscopic scale, or the NUS report is wrong. I favour the latter.
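For concreteness, the ideal-solution form of their equation (48) does always give a positive mixing entropy; here is a minimal sketch (Python; the equimolar mixture and the function name are mine, for illustration only):

```python
import math

R = 8.314  # molar gas constant, J/(mol*K)

def ideal_mixing_entropy(mole_fractions, n_total=1.0):
    """Delta S_mix = -n*R * sum(x_i * ln x_i): the ideal-solution mixing entropy."""
    return -n_total * R * sum(x * math.log(x) for x in mole_fractions if x > 0)

# Hypothetical equimolar two-component mixture:
dS = ideal_mixing_entropy([0.5, 0.5])
print(f"ideal Delta S_mix = {dS:.2f} J/K per mole")  # about +5.76 J/K: always positive
```

Oil and water are the exception precisely because the ideal-solution assumption (negligible interaction energy between unlike molecules) fails there: the unfavourable interaction energy outweighs this configurational term, and the mixture demixes. The formula alone cannot decide the direction of such a process.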

And, even if they make an error there, it does not undermine the general force of the discussions from pp 14 on and 18 on.
Yes, it does. The oil and water case is an instance of S being negative for the mixing process, which contradicts their hypothesis. That implies there is something wrong with their hypothesis. If you can tell me what that is, and why it is not relevant, then all well and good. I believe the problem is fundamental: they are mistakenly trying to apply the second law to a distribution of mass, when it can only be applied to a distribution of energy.

Gordon said...

Hi Pixie:

Let's pause and look back. Granville Sewell made some remarks, which I have amplified by excerpting. [Note, no-one has as yet shown an observationally verified case where his remark is inaccurate: that is, he has raised a serious problem.]

In responding to objections, I mentioned TBO's TMLO. You claimed there was a flaw. In response, I showed that their splitting up of TdS was in accordance with the reasonable irrelevance of dH here, and with the state-function nature of entropy [so that one can in effect superpose different vectors to the same point in entropy space], in the further context that there is an entropy of configuration. I provided a thought experiment, and ultimately links and excerpts showing that the concept is a valid and even reasonably frequently used one.

In that context, I showed that there is a functionally related specification of the states: scattered --> clumped --> functionally configured, and thence showed that W collapses as we move down the cascade. When the microjet is in the last state, it is macroscopically distinguishable. Similarly for biofunctional DNA etc.: we can see that once it is biofunctional, we are in an island that is hard to reach at random in the configuration space. Indeed, we see that the number of corresponding microstates that are functional, as opposed to at random, is very small.

In short, there is every reason to see that Sewell, and TBO are correct.

Now on points you raised overnight:

1] I assume that DNA produced synthetically, whatever the sequence, does not have "direct functionality" as it is not part of a sophisticated information-processing system.

The question of functionality is a matter of specification, and the issue is whether the synthetic DNA in view is or is not in an island of functionality. That can be tested by inserting it in an organism. If it is functional, it is in a tightly specified configuration, which means that its entropy of configuration is low. [The real issue is that biofunctional DNA would in the case in point obviously be again intelligently created. At-random DNA chaining would with very high probability fail the test. Clumping vs configuring again.]

And of course the "lotteries" in question for DNA have in them vastly more tickets than there are atoms in the observed cosmos -- or, more to the point, than the credible number of quantum states in that gamut. For they have in them more than 500 bits of information, the Dembski upper bound. The "lottery" is infeasible -- not to mention that a lottery simply targets a match between a ticket number and a set of balls that fall out of a spinning cage or the like. Here we are talking about corresponding to a code and to machines in the cell that process such coded information.
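The scale of such a "lottery" can be made concrete; here is a minimal sketch (Python; the 250-base-pair length and the ~10^80 atom count are illustrative assumptions, chosen because 250 bp at 2 bits per base gives the 500-bit threshold):

```python
import math

BASES = 4                  # A, C, G, T
LENGTH = 250               # base pairs; an illustrative assumption
ATOMS_IN_COSMOS = 10**80   # commonly quoted order-of-magnitude estimate

sequences = BASES ** LENGTH          # size of the configuration space
bits = LENGTH * math.log2(BASES)     # information capacity in bits

print(f"{bits:.0f} bits")                            # 500 bits at 250 bp
print(f"~10^{math.log10(sequences):.0f} sequences")  # ~10^151 possible strands
print(sequences > ATOMS_IN_COSMOS)                   # True: far more tickets than atoms
```

Even a modest 250-bp strand thus has a configuration space some seventy orders of magnitude beyond the atom count of the observed cosmos, which is the point about exhausting probabilistic resources.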

2] This would be true if we used those nanobots to place the component parts at specific locations, whether that resulted in a working cell or not

The point is that targeted functionality as a specification automatically and discernibly shows that we are in such a tightly constrained state. That is, we are perfectly willing to leave off lotteries in which the winning ticket is just that, in favour of lotteries where the winning number just happens to open, say, Howard Hughes' bank vault.

3] We can consider what the entropy of the DNA is outside the cell, and inside the cell. I get the impression that, in your opinion, the entropy of the DNA rises abruptly when it is removed from the cell, and drops again when it is replaced.

Not at all. I am pointing to an issue of macroscopic state specification and observability, tied to relatively small statistical weights; and I believe I have done so long since in clear enough terms. FYI, if the microjet is in a flyable configuration, or a synthesised DNA strand is in a biofunctional configuration, it would be as a result of intelligent action on your own suggested example, i.e. it substantiates Sewell's point. [I would further assert that an at-random DNA strand of sufficient length to be relevant -- 250 to 500 or so base pairs would do -- will with maximal probability be non-functional. In short, I am here back to TBO's issue of improper investigator interference that inadvertently puts in the information that it was hoped would emerge by chance. Dawkins' Methinks program is a classic illustration, though of course computer simulations strictly are not experiments.]

A note on rhetoric: Your remark here is putting foolish words in my mouth that do not belong there, and is unworthy of serious dialogue. Cho man do betta dan dat!

4] I believe DNA sequences can be built up one base at a time, in a computer-controlled synthesis to produce the exact sequence desired.

See my point? Where does the 'puter get the smarts to do the job?

5] But these nanobots are doing the same work as the ones producing nano-planes. They do the same recognition, the same information processing.

No, they do not. They use a similar quantum of energy, but they do not do the same specified, functional, so-called "shaft" work.

(One of the subtleties in the definition of "work", as opposed to other forms of energy transfer [e.g. heat], is that in work, forces move their points of application to produce orderly motion; this links to work in the economic sense -- in short, there is a bit of human context in this part of physics, which is what helps make it relevant. That is, there is an implicit specification on the sort of motion that can properly be described as "work." If one piece of work targets a functional state, and another is like a bull at random in a china shop, there is a vast and discernible difference. We must not confuse the number of joules expended with whether or not a specified bit of work has been done.)

6] Suppose we did this under low-pressure conditions, such that the nano-planes cannot fly.

This is irrelevant to the context of functional specification -- the macrostate is observable in the appropriate environment. In short, we are looking at a test here: is the result flyable?

If so, its macrostate is specified, and there is good reason to see that the number of associated microstates is relatively low. That is independent of the conditions in the vat -- and besides, the vat has to have in it a sufficiently viscous and dense liquid to suspend the parts. Such a fluid would support "flight" within it. [But also, I am speaking in a context of flyable in the common garden-variety air we are sitting in as we interact.]

Indeed, by coincidence, over the past few days, my 8 yo son has been experimenting [on his own initiative] with paper air-foils of classic NACA-style form with cylindrical paper fuselages attached. He has been having quite a frustrating time looking at the problems of stalling and dropping out of the air on one hand, and diving uncontrollably on the other, also on the problem of precise balancing to get the gliders to fly straight. Quite a lesson in the subtleties of aerodynamics, control and flight.

And, that is before we get to the issue of propulsion! (I have seen it argued that Taylor's engine may have been even more important in getting the 1903 Flyer to go than the air-frame. Indeed, so far as controllability and stability were concerned, the 1902 airframe was a better one . . .)

7] are you saying that such a zone cannot happen because the second law forbids it? I do not believe that can be the case.

Notice what I actually said, in the context of the microscopic, microstate underpinnings of 2nd LOT: The programming, at the expense, of course, of degrading the environment, creates a zone of order and organisation which is low-entropy. Such a zone, due to the complexity involved, would not otherwise come into existence, owing to a very high probability barrier.

In short, as in my always-linked online note, I am not arguing for absolute forbidding based on logical or physical impossibility.

I am speaking instead of probabilistic barriers that are so extreme that random searches are maximally unlikely to achieve functionality, once we take on board the concept that all accessible microstates are equiprobable, which is foundational to stat mech. At the same time, agents routinely scan such spaces and create functionality that has that degree of complexity, e.g. the text of this post, which is long since past 500 - 1000 bits of information.

Where 2nd LOT comes in is that it shows that, once all microstates in the configuration space are more or less equally accessible -- the clumped state in the terms we have been discussing -- the most likely outcome, by overwhelming probability, is non-functional [i.e. the predominant cluster of so-called equilibrium microstates dominates the config space].
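Nash's "predominant configuration" can be illustrated with a toy two-state system; a minimal sketch (Python; the 1000-component size and the band cutoffs are illustrative assumptions, not figures from the thread):

```python
from math import comb

N = 1000            # two-state components; a toy stand-in for a molecular system
total = 2 ** N      # all microstates, taken as equiprobable (the stat mech postulate)

# Fraction of microstates within +/-5% of the 50/50 equilibrium split:
near_eq = sum(comb(N, k) for k in range(450, 551)) / total

# Fraction in the single largest exactly specified macrostate:
single = comb(N, 500) / total

# Fraction in a far-from-equilibrium "island" (fewer than 100 of one kind):
island = sum(comb(N, k) for k in range(100)) / total

print(f"near equilibrium: {near_eq:.4f}")            # close to 1
print(f"largest single macrostate: {single:.4f}")    # only a few percent
print(f"far-from-equilibrium island: {island:.1e}")  # astronomically small
```

Even at a mere 1000 components, the cluster of microstates near the equilibrium split captures essentially all of the statistical weight, while a far-from-equilibrium island is effectively inaccessible to a random search; at thermodynamic particle counts (~10^23) this dominance becomes incomparably sharper.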

I find it interesting just how hard it is to get that basic point through. (I keep getting the impression that you have a working assumption that I simply cannot properly understand 2nd LOT, never mind how I have outlined its discovery in the classical sense and how I have pointed to the stat mech view that modifies the classical understanding.)

8] He argues that non-programmed process cannot build computers because it is highly improbable, therefore (as I read it) the second law must be stopping non-programmed processes from building computers.

Again, show how this Mt Improbable can be climbed credibly by non-agent processes. Sewell, TBO, Shapiro, other OOL researchers, and I are all pointing to the problem of overwhelming improbability. Then, I and other design thinkers are pointing out that FSCI is known to be routinely produced by agents. So, absent a priori assumptions and assertions, the best explanation for the origin of FSCI in life forms as we observe them, is agency.

In short, those who use the term -- and note that I do not [note my "maximally improbable" and the like] -- are using a soft sense of "forbid": something so overwhelmingly improbable that it is, for practical purposes, even on the scale of the observed cosmos, inaccessible to random processes.

One can of course always claim to believe in probabilistic miracles. But you cannot coherently live that way -- it is logically and physically possible that all of the posts "I" have made in this thread and at UD are simply random noise that has somehow constructed coherent interactive text through the internet. But, once we get up from the armchair world of abstract possibilities, we routinely infer to agents as the most likely and credible cause of such FSCI.

To revert to the armchair when the implications are inconvenient is gross inconsistency. Cf my remarks on the fallacy I have descriptively termed selective hyperskepticism. (Or, is this simply more "lucky noise"?)

9] But the demon does not exist!

Go back above and see where I have linked and excerpted to show that functionally equivalent cases do exist, and how Brillouin's point on why 2nd LOT still holds has been sustained.

The more basic point is also plain: the demon and its equivalents create local order outside the range of fluctuations, and do so by resort to intelligent action.

BTW, if you can build a PMM of the 2nd kind, it would overthrow 2nd LOT. The point is that stat mech shows why such is not at all likely to happen within the physical world.

But an agent that has direct knowledge of reality without need for physical measurement is not bound by 2nd LOT. This is an interesting insight . . .

And, there was no "trick" on Maxwell's part: he was several decades before the quantum view came on board, and worked in the tradition of "Simon," Laplace's demon, who knows the positions, initial speeds and forces on all particles in the universe and so can predict its future state . . .

Demonology is a traditional part of physics . . .

10] An omniscient, omnipotent entity can do anything. That is what omnipotent means!

The omnipotent cannot do the logically impossible. A square circle is incoherent, i.e. a self-contradiction, so one cannot be made even by an omnipotent person.

But if one knew particle states etc without need for quanta of investigation and resulting disturbances then just maybe Albert was right: the Old One does not play dice!

11] I said "Just as we know..." I.e., we know they were all produced by material agents just as well as we know manufactured goods have all been designed by an intelligent agent.

Physical materiality and information exist on different dimensions. We do not -- absent worldview-level question-begging -- know that we are, insofar as we exhibit agency, entirely material entities.

Agency is of course the most immediate experience we all have, and we access the universe we observe through such agency. But, if we are the product of unconscious forces that drive and control our mentality in ways driven by natural regularities and/or chance, we have no good reason to trust the deliverances of our minds. [This I have linked on previously. Follow up here and in the onward links. Evolutionary materialism is fundamentally logically incoherent. It is logically impossible . . .]

A glance there will show that, for coming on 20 years now, I have analysed precisely this at the worldviews level and in the associated science, in terms of the link between chance and natural regularities. Alvin Plantinga agrees with me on the subject.

In our own immediate context, chance or agency predominates over NR in cases where contingency is material, as I discussed. All you have done is brush that point aside. Kindly address it.

As to the idea that it is unhelpful to think in terms of agency, chance and necessity, let us just note that this framework has been a major context for human thought since Plato and beyond. Every time we come across a mysterious death we ask: chance, natural circumstance, or agency? The same occurs when you see this blog post -- you eliminate NR, as the text is contingent. Chance cannot credibly access the scope of FSCI, so you infer to agency. And so on; i.e., sadly, selective hyperskepticism is surfacing again.

12] Contingency

Here I am speaking of non-deterministic outcomes, whether metaphysically so or for practical purposes.

Practically speaking, if you toss a die, the outcome is not predominated by the NR of gravity, or by the dynamics of collisions, friction etc. Thanks to eight corners and twelve edges, the outcome is incalculable for us and so, for practical purposes, a matter of chance. A lot of quantum noise is held to be fundamentally not just incalculable but actually random. Our experience of agency is that the decisions we make are real -- and positions that reject that end up in incoherent self-referential contradictions.

In short, Monod's famous Chance and Necessity -- note the echo of Plato in his Laws, Bk 10, here -- fails to explain the world.

Contingency is, in effect, connected to freedom of action. So, is that freedom taken up at random, or by intelligent choice? That is the question. You can reject freedom, but then you end up in incoherence.

BTW, note how TBO in CH 8 outline how this came out of the state of OOL research in the 70s - 80's, and led to the issue of defining complexity and specification so that one distinguishes a crystal from an at random polymer from an informational polymer.

Chance, agency and necessity routinely act together, but in highly contingent situations it is chance or agency that is material to the outcome. (And on this, apart from the tautology, so-called natural selection is actually based on chance and is fundamentally a chance process. Differential reproductive success is not a deterministic force. Indeed, it is not a force at all; it is a label for the fact that, because of environmental conditions and happenstance, some animals etc. have more descendants. It is incapable of introducing information -- it only eliminates what does not work sufficiently well to reproduce often enough.)

13] In all the cases when configurational entropy has a legitimate use in thermodynamics, it relates to energy.

And how does the presence or absence of biofunction or jet function not relate to energy?

The objection is vacuous. We can see that targeted work can create states that function -- that use and convert energy -- in certain ways that clumped-at-random states [to overwhelming probability] will not. The functional macrostates have quite low statistical weights relative to the clumped ones.

They therefore have much lower configurational entropy. QED.

14] By "their prediction", I mean the inevitable consequences, if their theory is correct.

They are discussing favourable mixing, and the access to more microstates on mixing. Their analysis is correct.

Observationally, dye drops never unmix from vats by themselves, as that is overwhelmingly improbable. They simply show this through a model system based on locational cells.

You have here plugged in an example which is out of context: oils, unless emulsified, do not mix with water, due to electrostatic forces tied to the difference in molecular structure and bonding.

If something will mix freely, it will go to the predominant configuration -- that is the whole study of diffusion, which BTW is foundational to the action of transistors in the ICs in the computers we are working on.
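The drift to the predominant configuration via diffusion is easy to see in a toy random walk; a minimal sketch (Python; the particle count, step count and seed are arbitrary illustrative choices):

```python
import random

random.seed(1)  # reproducible toy run

# 10,000 particles all start at the origin (the freshly added "dye drop"):
positions = [0] * 10_000

for _ in range(500):  # each step, every particle hops one unit left or right
    positions = [p + random.choice((-1, 1)) for p in positions]

# The mean square displacement grows roughly linearly with step count (~500 here),
# i.e. the cloud spreads toward the predominant, spread-out configuration:
msd = sum(p * p for p in positions) / len(positions)
print(f"mean square displacement ~ {msd:.0f}")
```

Run it for more steps and the spread only grows; the walk never spontaneously re-concentrates at the origin, matching the observation that a mixed dye does not unmix.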

The objection clearly falls to the ground.

GEM of TKI

Gordon said...

PS: On the CSI section of TMLO, Ch 8:

___________

Only recently [circa 1984] has it been appreciated that the distinguishing feature of living systems is complexity rather than order.4 This distinction has come from the observation that the essential ingredients for a replicating system---enzymes and nucleic acids---are all information-bearing molecules. In contrast, consider crystals. They are very orderly, spatially periodic arrangements of atoms (or molecules) but they carry very little information. Nylon is another example of an orderly, periodic polymer (a polyamide) which carries little information. Nucleic acids and protein are aperiodic polymers, and this aperiodicity is what makes them able to carry much more information. By definition then, a periodic structure has order. An aperiodic structure has complexity. In terms of information, periodic polymers (like nylon) and crystals are analogous to a book in which the same sentence is repeated throughout. The arrangement of "letters" in the book is highly ordered, but the book contains little information since the information presented---the single word or sentence---is highly redundant.

It should be noted that aperiodic polypeptides or polynucleotides do not necessarily represent meaningful information or biologically useful functions. A random arrangement of letters in a book is aperiodic but contains little if any useful information since it is devoid of meaning.

[NOTE: H.P. Yockey, personal communication, 9/29/82. Meaning is extraneous to the sequence, arbitrary, and depends on some symbol convention. For example, the word "gift" means a present in English and poison in German, and is meaningless in French.]

Only certain sequences of letters correspond to sentences, and only certain sequences of sentences correspond to paragraphs, etc. In the same way only certain sequences of amino acids in polypeptides and bases along polynucleotide chains correspond to useful biological functions. Thus, informational macro-molecules may be described as being aperiodic and in a specified sequence.5 Orgel notes:

Living organisms are distinguished by their specified complexity. Crystals fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity.6

Three sets of letter arrangements show nicely the difference between order and complexity in relation to information:

1. An ordered (periodic) and therefore specified arrangement:

THE END THE END THE END THE END

Example: Nylon, or a crystal.

[NOTE: Here we use "THE END" even though there is no reason to suspect that nylon or a crystal would carry even this much information. Our point, of course, is that even if they did, the bit of information would be drowned in a sea of redundancy].


2. A complex (aperiodic) unspecified arrangement:

AGDCBFE GBCAFED ACEDFBG

Example: Random polymers (polypeptides).

3. A complex (aperiodic) specified arrangement:

THIS SEQUENCE OF LETTERS CONTAINS A MESSAGE!

Example: DNA, protein.

Yockey7 and Wickens5 develop the same distinction, that "order" is a statistical concept referring to regularity such as might characterize a series of digits in a number, or the ions of an inorganic crystal. On the other hand, "organization" refers to physical systems and the specific set of spatio-temporal and functional relationships among their parts. Yockey and Wickens note that informational macromolecules have a low degree of order but a high degree of specified complexity. In short, the redundant order of crystals cannot give rise to specified complexity of the kind or magnitude found in biological organization; attempts to relate the two have little future.

____________

This happens to be the very first subsection of the chapter, following the introductory remarks and before the equations you originally objected to. It shows the sort of contingency I have been discussing [though it does not use the term explicitly] and is part of the background context for the discussions.

(In my always-linked discussion, an excerpt from Dan Peterson on essentially the same example addresses contingency in this sense explicitly.)

GEM of TKI

Gordon said...

PPS: Here is the Dan Peterson example:

___________

. . . . suppose my computer keyboard had only one key, and all I could type was:

AAAAAAAAAAAAAAAAAAAAAA

My computer would be incapable of producing contingency. This is rather like the operation of many physical laws in nature . . . . The sequence of 22 letters:

KAZDNHF OPZSJHQL ZXFNV

is complex in a certain sense, because that exact pattern is highly unlikely to be produced by chance . . . The total number of unique sequences of [27] characters that could be produced would be 27 multiplied by itself 22 times, or 27 to the 22nd power . . . If we . . . generate random strings 22 characters long . . . [with] a trillion tries every second, the odds would still be against producing this exact sequence by chance in 20 billion years . . . .

The third criterion is specification. Here's another 22-character sequence:

THE AMERICAN SPECTATOR

. . . . [which] is complex . . . It is also specified in relation to a pre-existing standard or function; in this case, the rules, spelling, and vocabulary of the English language . . . . In every case in which we know the "causal story" underlying complex specified information (writing a sonnet, creating a computer program, or sculpting Mount Rushmore) we know that it has been produced by an intelligence . . .

____________
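Peterson's arithmetic checks out; a minimal verification (Python, using his stated figures of a 27-character alphabet, 22-character strings, a trillion tries per second, and 20 billion years):

```python
import math

ALPHABET = 27                 # 26 letters plus the space
LENGTH = 22
space = ALPHABET ** LENGTH    # distinct 22-character strings

tries_per_second = 10 ** 12             # "a trillion tries every second"
seconds = 20e9 * 365.25 * 24 * 3600     # 20 billion years
total_tries = tries_per_second * seconds

print(f"search space ~ 10^{math.log10(space):.1f}")        # ~10^31.5
print(f"total tries  ~ 10^{math.log10(total_tries):.1f}")  # ~10^29.8
print(total_tries < space)    # True: the space outruns the search, as he says
```

The search space exceeds the total number of tries by a factor of roughly fifty, so the expected outcome is indeed no hit on the exact target sequence within that time.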

I trust this helps make the issue clear.

GEM of TKI

The Pixie said...

Hi Gordon

1] The question of functionality is a matter of specification, and the issue is whether the synthetic DNA in view is or is not in an island of functionality.
But when I asked you What exactly do you mean by "direct functionality"? you replied "DNA is an integral, thus functional, component of a sophisticated information-processing system, which is either designed or, in Dawkins' coinage, 'designoid'." According to the earlier definition, DNA in a test tube is not directly functional; according to your new definition it is. Thermodynamics is an exact science. How can you claim to be using thermodynamics when your terms are as vague as this?

That can be tested by inserting it in an organism. If it is functional, it is in a tightly specified configuration, which means that its entropy of configuration is low. (The real issue is that biofunctional DNA would in the case in point obviously be again intelligently created. At-random DNA chaining would with very high probability fail the test. Clumping vs configuring again.)
Wait, this is "biofunctional" now?

2] Pix: This would be true if we used those nanobots to place the component parts at specific locations, whether that resulted in a working cell or not
The point is that targetted functionality as a specification automatically and discernibly shows that we are in such a tightly constrained state. That is, we are perfectly willing to leave off lotteries in which the winning ticket is just that, in favour of lotteries where the winning number just happens to open say Howard Hughes' bank vault.
What?

Suppose the target was non-functional but specified (i.e., the nanobots place the components at specific locations, but build specified junk). Okay, this does not "automatically and discernibly" show the constraints, but nevertheless the constraints are still there. Does this system have the same entropy or not? Is the entropy only low when the tight constraints are discernible, automatic, both, or neither? Or is this even relevant?

3] Not at all. I am pointing to an issue of macroscopic state specification and observability, tied to relatively small statistical weights; and I believe I have done so long since in clear enough terms.
Well, no, because now "direct functionality" means something different! Before you said it was about being a component of a system, this time around it is about specification. So no, your terms are not clear.

4] Pix: I believe DNA sequences can be built up one base at a time, in a computer-controlled synthesis to produce the exact sequence desired.
See my point? Where does the 'puter get the smarts to do the job?
It was built and programmed by an agent with a physical body.

Does that mean that the DNA sequence it produces must be low entropy whatever the sequence? I assume not, but what is your point otherwise?

5] Pix: But these nanobots are doing the same work as the ones producing nano-planes. They do the same recognition, the same information processing.
No, they do not. They do a similar quantum of energy use, but not the same specified functional so-called "shaft" work.
Here again I am getting confused. The nanobots that do the rearrangements do two things. First they inspect a clump and decide what rearrangements need doing. Second, they do the rearrangements. Your working nanobots and my faulty ones both do the first stage. They both inspect clumps, they both decide what rearranging needs doing.

So I am thinking it is the second stage where the difference in work comes in. Your nanobots do the right rearrangements; mine receive the same instructions from the first stage, but they get consistently scrambled, so they do the wrong rearrangements. But here again, the "work" required would seem to be the same. Both are pulling components off one place, moving the component, sticking it somewhere new. Why does one require more work than the other?

6] This is irrelevant to the context of functional specification
Yes, but before you were saying they were only "directly functional" if the system actually worked.

7] Notice what I actually said, in the context of the microscopic, microstate underpinnings of 2nd LOT: The programming, at the expense of of course degrading the environment, creates a zone of order and organisation which is low-entropy. Such a zone, due to the complexity involved, would not otherwise come into existence, by a very high probability barrier.
So you are arguing that it is unlikely. I thought we had established that that did not imply a necessary connection to the second law. And yet it keeps coming back. And yes, I know improbability underpins the second law. Nevertheless, not all improbabilities are due to the second law.

8] Again, show how this Mt Improbable can be climbed credibly by non-agent processes. Sewell, TBO, Shapiro, other OOL researchers, and I are all pointing to the problem of overwhelming improbability.
Nothing to do with the second law though!

9] Pix: But the demon does not exist!
Go back above and see where I have linked and excerpted to show that functionally equivalent cases do exist and how Brillouin's point on why 2nd LOT still holds, has been sustained.
The demon is a hypothetical exception to the second law. How can you claim functionally equivalent cases exist, but accept that they are not exceptions to the second law?

11] In our own immediate context, chance or agency predominates over NR in cases where contingency is material, as I discussed. All you have done is to brush that point aside. Kindly address it.
...
12] Contingency
Here I am speaking of non-deterministic outcomes, whether metaphysically so or for practical purposes.

Contingency implies non-deterministic outcomes. So of course "chance or agency predominates over NR in cases where contingency is material". It is true by definition. What more is there to say? I have no idea where that gets us...

As to the idea that it is unhelpful to think in terms of agency, chance and necessity, let us just note that this framework has been a major context for human thought since Plato and beyond. Every time we come across a mysterious death we are asking: chance, natural circumstance, or agency? The same occurs when you see this blog post -- you eliminate NR as the text is contingent.
The process that gets the text from your mind to the blog database, and later from there to my screen, is due to NR. Count the steps involved, and consider how many were due to NR. Being intelligent agents, we tend to overlook that, and just concentrate on the intelligent part that was involved in writing it. You will find that forensic investigations of a murder start with the unwritten assumption that everything followed the laws of nature, that most of what happened was NR (brain sends electrical signal to muscle in hand, muscle in hand contracts, trigger is pulled, spark ignites explosion, which propels bullet, and so on; NR again and again). If there is a gunshot wound then NR requires that somewhere there is a gun. We do this all the time. NR pervades everything we do, so we ignore it. If the investigator of a car crash decides it was an accident, does that imply chance? Or was the accident inevitable given the circumstances? Was it human error? Is that an intelligent cause, or chance? If the driver was drunk, then human error might be considered NR.

My objection here is that your model is way too simplistic. So much of life depends on a complexity of processes that to think we can divide them neatly into three groups like that is, to my mind, ridiculous.

Practically speaking, if you toss a die, the outcome is not predominated by the NR of gravity, or that of the dynamics of collisions, friction etc. Thanks to eight corners and twelve edges, the outcome is incalculable for us and so is for practical purposes a matter of chance.
So you assign this to chance not because it is random, but because it is hard to calculate. How difficult must a deterministic process be exactly before you decide it is now random? Conversely, at what scale does the randomness of the microstates become the NR of the second law? Is there a non-arbitrary boundary in either case?

Chance, agency and necessity routinely act together, but in highly contingent situations, it is chance or agency that are material to the outcome.
Can you give an example of that? I think you are just ignoring NR because it is always present (for example, you mentioned making a blog post; would that be possible without NR?).

(And on this, apart from the tautology, so-called natural selection is actually based on chance and is fundamentally a chance process. Differential reproductive success is not a deterministic force. Indeed, it is not a force; it is a label for the fact that, because of environmental conditions and happenstance, some animals etc. have more descendants. It is incapable of introducing information -- it only eliminates what does not work sufficiently well to reproduce often enough.)
Natural selection is one of those crazy chance and NR processes!

Yes, it is not a proper force, and yes it does not introduce information (but other processes do).

13] And how does the presence or absence of biofunciton or jet function not relate to energy?
The presence of biofunction does not change the energy of a DNA sequence in a test tube.

14] They are discussing favourable mixing, and the access to more microstates on mixing. Their analysis is correct.
I hope I am missing something here. You seem to be saying that they predict mixing will be favourable in those situations where mixing is favourable. Hardly a bold or useful prediction.

You have here plugged in an example which is out of context: oils unless emulsified do not mix with water due to electrostatic forces tied to the difference in molecular structure and bonding.
That is right. Oil and water do not mix ultimately for energetic reasons. Alcohol and water will mix because the thermodynamic entropy goes up; oil and water separate because the thermodynamic entropy goes up when they separate. You can use thermodynamic entropy to predict both. You can only use configurational entropy to make a prediction if you already know the answer!

Me, I will stick with the thermodynamic entropy.
"Only recently [circa 1984] has it been appreciated that the distinguishing feature of living systems is complexity rather than order."
Right. And the second law is about order, not complexity.

ENDENDENDENDENDENDENDENDENDENDEND
AGDCBFEGBCAFEDACEDFBGKLDSAJDHGTHCYY
THISSEQUENCEOFLETTERSCONTAINSAMESSAGE
I would be curious to know what the entropy is of these three sequences. I would guess it is the same for the first and third, as both are (we assume) specified, while the second is random. Or is it? Maybe the second is some sort of code. Interestingly, this would imply that the entropy depends on what meaning we can discern in the message, i.e., it is subjective. I think most people would say thermodynamic entropy can be measured directly and objectively - it is a property of state, not mind, after all.

Gordon said...

Pixie:

It is becoming plain that this thread is reaching a point of diminishing returns, though in so doing it is showing me just how compelling the case made by TBO is. For, if in the teeth of clear evidence that entropy relates not only to vibration, rotation and translation but also to configuration of the micro-level mass in question, you are forced to deny the objective point to resist TBO's case, then TBO have got a telling point.

I will comment:

1] DNA in a test tube is not directly functional, according to your new definition it is. Thermodynamics is an exact science. How can you claim to be using thermodynamics when your terms are as vague as this? . . . . Before you said it was about being a component of a system, this time around it is about specification. So no, your terms are not clear . . . .

You are insisting on a rhetorical distinction without a difference.

DNA is in reality a name for a family of related informational molecules, which in life systems take on particular configurations to store information used in constructing the molecules of life. The same system of chaining can be used to store arbitrary strings of the 4-state elements, at random, or as targetted by an experimenter [at least in principle].

I have pointed out that DNA is functional as an integral component of life systems, in the former case, and that this has implications for the number of microstates of the GCAT etc chain that are consistent with life function. Namely, that the functional state is relatively very small and isolated in the space of possible configurations. Following Brillouin, we can therefore deduce an informational measure linked to the configurational component of the entropy, as TBO do in TMLO ch 8. The rest of their analysis follows directly.

My micro-jet example simply shows how the process works, using a more familiar technology.

And, from the very beginning, in addressing DNA, I have looked at the macrostate defined by its bio-function, as do TBO. The same for polypeptide chains. Being a functional component of a tightly integrated information- and energy-processing system plainly implies having functionality that is both direct and specific.

2] Is the entropy [of a macrostate?] only low when the tight constraints are discernible, automatic, both, or neither? Or is this even relevant?

You will observe that I have consistently spoken in terms of macro-level observability as a basis for recognising that a system is in a given macrostate. Without observability, we cannot distinguish any one microstate from any other, apart from micro-level inspection, which as "Max" shows, through Brillouin's analysis, is self-defeating energetically. The entropy of a macrostate is a simple log measure of its associated microstates. And in the case of configurations of micro-components, that component is quite easily countable.
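To make that log measure concrete, here is a minimal Python sketch of s = k ln W; the microstate counts are made-up numbers of my own, purely for illustration:

```python
import math

K_B = 1.380649e-23  # Boltzmann's constant, J/K

def boltzmann_entropy(w: int) -> float:
    """Entropy of a macrostate with statistical weight w: s = k ln w."""
    return K_B * math.log(w)

# Hypothetical counts: a tightly specified (functional) macrostate versus a
# loosely specified (clumped-at-random) one.
s_tight = boltzmann_entropy(10)         # few microstates -> low entropy
s_loose = boltzmann_entropy(10 ** 40)   # astronomically many -> higher entropy
print(s_tight, "<", s_loose)
```

The point of the sketch is only that entropy scales with the logarithm of the number of microstates consistent with the observed macrostate, so a functionally confined state carries a much lower configurational entropy.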

Further, in that context, if nanobots are programmed to specify an arbitrary microstate, then by virtue of that programming it is "specified." But if it is non-functional, it is most unlikely to be macroscopically discernible. You will note from my always linked note that I have always spoken of FUNCTIONALLY specified, complex information [FSCI -- note the subtle difference from CSI] when addressing bio-information and the fine-tuning of the cosmos. (I have recently noted where, I think, one of the principals at UD took up this more specific phrasing.)

My reason has always been thermodynamic/physical: there are a whole lot of possible microstates out there, and macro-level observations are going to give you in general not a lot of information that confines micro-state. But, functionality is a very different kettle of fish: it can in some cases confine a configuration down to a single micro-state, and in any case, to a relatively small island in the configuration-space.

3] It was built and programmed by an agent with a physical body. Does that mean that the DNA sequence it produces must be low entropy whatever the sequence? I assume not, but what is your point otherwise?

If a particular DNA strand is produced and controlled by a program to start out and continue to be in a particular state -- i.e. there is a maintenance-of-configuration issue here, DNA being a high-energy, endothermically formed macromolecule and thus at best metastable -- with more than 250 GCAT elements in it, then by definition it has in it complexity and specification. But unless it is also functional in some context that we can observe, we will have a hard time defining a macrostate at macro-level for s = k ln W to work beyond that first moment when the molecule comes out of the nanomachinery. [In short, I am pointing out that the microstate is prone to shift to other chemically accessible configurations which are not so high-energy.]
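For scale, a quick sketch (my own back-of-envelope figures; the 250-element threshold is the one mentioned above) of how large the configuration space of such a chain is:

```python
import math

# A chain of N four-state (GCAT) elements has 4^N possible configurations.
N = 250
configs = 4 ** N
print(f"4^{N} is about 10^{math.log10(configs):.1f}")  # ~10^150.5
print(f"raw storage capacity: {N * math.log2(4):.0f} bits")
```

Even this modest chain length gives a configuration space on the order of 10^150, which is the scale that drives the probabilistic argument.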

I also note that "an agent with a physical body" is not to be conflated with an agent who IS a physical configuration, i.e. a body. We have bodies, and we have minds too, by which we become aware of and use our bodies. If we try to reduce our minds to epiphenomena of underlying physical processes, we end up in inescapable self referential inconsistencies as long since linked, also cf here and here. In short, we have strong evidence from our own consciousness, that minds act as agents independent of the physical subsystems they may use.

4] The nanobots that do the rearrangements do two things. First they inspect a clump and decide what rearrangeents need doing. Second, they do the rearrangements. Your working nanobots and my faulty ones both do the first stage. They both inspect clumps, they both decide what rearranging needs doing.

They are not pursuing the same target, and in the random target case are in general unlikely to produce a functional macrostate, namely a flyable micro-jet. By inspection of the targetting subroutines in the program, we could see what the targetted state is, and so it is complex and specified; but, by overwhelming probability, it is likely to be non-functional. [By contrast, the simple clumping nanorobots are only targetting clumped vs scattered. Any old clumping will work, like collecting rubbish and moving it to a junkyard at random.]

We have been specifically looking at a functionally defined, targetted, flyable macrostate. Given the ticklish nature of aerodynamics and control as well as the nature of aero engines, such a state is going to be rare in the configuration space, and is highly likely to be functionally isolated from other states that may have some other function. I.e. we are looking at not only functionally specific states, but also ones with irreducibly complex cores.

5] again, the "work" required would seem to be the same. Both are pulling components off one place, moving the component, sticking it somewhere new. Why does one require more work than the other?

Both may expend a similar number of joules in aggregate, but precisely because configuration counts, the work done is different.

One reliably creates a functional state, the other does not. That difference is not only discernible from inspection of the subroutines, but from the simple observation of the result. (That is a great advantage of focussing on FSCI.)

6] "Unlikely" [my clipboard has mysteriously turned off here . . .]

It is standard stat mech reasoning to account for the classical result that certain entropy increasing processes are "irreversible," by reference to relative statistical weights of microstates and also to refer to the principle that all accessible microstates are equiprobable. [Cf for instance my clip from Yavorski and Detlaf above.] That is what I have again done in the case you are objecting to.

Thus, your objections about "necessary connections" are simply specious. Not all probability arguments tie to 2nd LOT, but some do, including the ones I have been making. And, for the specific reasons I have pointed out.

7] Mt improbable

Ditto.

8] "Max"

Again, as long since excerpted, even so humble a source as Wiki points out that such demons exist -- though of course so long as we are looking at needing to physically act in the world to find out the information required to manipulate the trapdoor or whatever, the energy to do this more than offsets the energy converted from random thermal motions or the like. (In short, Max, if he is regarded as a corporeal entity, does not escape the point of 2 LOT. If he is a purely mental critter able to directly know the information without exerting physical forces, then 2 LOT does not apply.)

9] Contingency

It gets us quite far. We are dealing with discrete elements that are multi-state and multi-component [i.e. digital] -- whether jet parts or DNA monomers or protein residues. For such entities we can construct a configuration space, which in general turns out to be very large.

If this is to be searched at random or in ways that are tantamount to it, then we are most unlikely to arrive at functionally specified configurations by chance. And that, through standard thermodynamics-style reasoning. [Indeed, the Gibbs tactic of splitting up phase space into cells is a digitalisation of what was, to him, an analogue space. Some years later we discovered that the space IS inherently digital, thanks to discovering the quantum.]
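A toy random-search experiment along these lines is easy to run. The model below is entirely my own construction: a single hypothetical "functional" target string over the four-letter GCAT alphabet, sampled at random:

```python
import random

# Toy model: treat one target string over the GCAT alphabet as the sole
# 'functional island' and sample the configuration space at random.
ALPHABET = "GCAT"
TARGET = "GATTACA"                         # hypothetical 7-element functional config
space_size = len(ALPHABET) ** len(TARGET)  # 4^7 = 16,384 configurations

random.seed(0)
trials = 200_000
hits = sum(
    "".join(random.choice(ALPHABET) for _ in TARGET) == TARGET
    for _ in range(trials)
)
print(f"hit rate {hits / trials:.2e} vs expected {1 / space_size:.2e}")
# At 7 elements random search still succeeds occasionally; at 250 elements
# the space is 4^250 and no feasible number of trials will find the island.
```

The exponential growth of the space with chain length is what the thermodynamics-style reasoning above is pointing at.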

But, we know, routinely, and ever since Cicero, that on a comparative likelihood basis, it is far more credible to account for such digital complexity [aka information storage capacity -- Shannon information lurks here] that is functional on agent grounds than chance grounds.

Complex, functional contingency calls out for agent explanation. And, as noted and exemplified, agents commonly use both natural regularities and chance in their work; indeed they have to in most cases. So, that additional objection -- count the number of cases in which NR is used in the contingent situation -- also falls to the ground.

Similarly, the core of assigning the identification, murder, rests on INTENT, not on the NRs and chance processes that intervened. That objection too, falls to the ground.

Nor am I being simplistic, as I have from the outset pointed out that agents routinely use NR and chance but in so doing introduce a third factor: intelligent action. One of the reliable hallmarks of such action is that it is purposeful, and produces states that are unlikely to be a result of chance initial conditions and NR acting alone [i.e. back to the thermodynamics-style reasoning . . .]. In particular, when FSCI results, we are looking at a strong indicator of agent action indeed, for many reasons long since adduced, discussed and linked.

10] Chance and random cases

The die is of course a classic example of sensitive dependence on initial conditions. The result of this, commonly called "chaos," is that the outcomes are, in the long run, effectively unpredictable to the finite and fallible. [In some cases someone has shown that a slight shift in the position of an electron in the Andromeda galaxy would be enough to disturb the result.]

Thus, for all practical purposes tossing a fair die is a chance process.

power cut so pause

GEM of TKI

Gordon said...

Power just came back . . .

11] Chance + NR + Agency in action

I have long since given examples.

I have pointed out that, by definition of contingency, the outcome is non-deterministic relative to some set of possible outcomes. For instance, the states of text in a long message. The agency is material to there being a message.

As already noted, NR is not material to the outcomes observed where contingency is an important consideration. [That you type any one particular string of keys is material to what message you send. Apart from troubleshooting, the train of mechanical and electronic events so entrained -- the actuating/proximate and material causes that produce the result -- is less important than the first [agent] and purposive [final] causes.]

In short, there are four causal factors here:

a] First cause -- agent who decides and acts.

b] proximate -- the instruments he uses to actuate the situation to achieve his purpose.

c] Material -- the matter or substance he exploits [cf. the physical layer in the classic layer-cake ISO telecomms system model and other similar models; cf. my diagram in the always linked], so to speak. At this level, physical regularities and chance processes at micro level are often implicated. For instance, transistor action strongly depends on diffusion, which is a classic example of configurational entropy in action.

d] Final -- purpose. WHY the agent acts, often discernible from the classic: who benefits?

--> This is the opposite of being simplistic! (Nor is it ad hoc.)

12] Yes, it [NS] is not a proper force, and yes it does not introduce information (but other processes do).

Yay, I can cut and paste again -- maybe a power cut did some good after all. I'll have to tell this one to Mr Hoyte the night engineer! (Living in a SMALL community, you get to know people . . .)

Of course, the point is that, beyond a certain reasonable level of configurational complexity, it is not credible [here, cf. the underpinnings of 2 LOT in the stat mech form] that chance introduces the functional information required. Agents routinely produce that level of functional complexity.

13] The presence of biofunction does not change the energy of a DNA sequence in a test tube.

It changes the known macrostate, once the functionality is identified. That automatically specifies statistical weight, hence a low configurational entropy measure. [And, BTW, note that in conventional chemical entropy and enthalpy measurements, major components irrelevant to most operations are ignored, e.g. nuclear binding energy contributions, gravitational contributions etc. My point is that the specific configuration of these particular macromolecules is important.]

14] You seem to be saying that they predict mixing will be favourable in those situations where mixing is favourable. Hardly a bold or useful prediction.

This sounds like going out of one's way to make an objection, i.e. selective hyperskepticism.

First, you accused these students [and by implication their supervisor] of PREDICTING -- an active, specific act -- that oil and water would mix in general. I looked, and no such statement was there.

You shifted to claiming that their remarks IMPLY such a prediction. But I had anticipated that by pointing out that they were discussing FAVOURABLE mixing and did so by giving an illustrative example, dye drops in water -- something I did in 1st form as I recall. Dye drops diffuse and do not unmix, reliably. PN junctions work. And more.

Why is that? ANS: because the opening up of configuration space allows the molecules to scatter into location cells all across the beaker or test tube. By overwhelming probability, there is a predominant cluster of microstates: scattered. And, reliably, the dye drop goes to that state.
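The predominant-cluster claim can be illustrated by counting statistical weights directly. This sketch uses my own toy numbers (M molecules, C location cells across the beaker, c cells in the original drop); it is an illustration of the reasoning, not a calculation from the NUS notes:

```python
from math import comb

# Statistical weights: M indistinguishable dye molecules over C location cells
# (the whole beaker) versus confined to the c cells of the original drop.
M, C, c = 20, 10_000, 50
w_scattered = comb(C, M)   # microstates with the molecules anywhere
w_clumped = comb(c, M)     # microstates with the molecules still in the drop
print(f"scattered outnumbers clumped by ~{w_scattered / w_clumped:.1e} to 1")
# The predominant configuration is overwhelmingly 'scattered', which is why
# the dye reliably diffuses and never spontaneously un-mixes.
```

Even with these small toy numbers the scattered macrostate outweighs the clumped one by tens of orders of magnitude, which is the statistical content of "the dye drop reliably goes to that state."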

Oil, absent an emulsifier, faces a potential well problem -- the electrostatic forces it generates are unable to cling sufficiently to the water molecules to diffuse rapidly. But in fact if we are real patient, we will see a slow distribution of the oil molecules in the water [but with a rather low feasible concentration -- or they will clump and fall out of solution]. An emulsifier will allow the oil to diffuse into the water; e.g., that is similar to how soaps and detergents work.

Indeed, observe my thought expt at the head of the thread and at UD: decanting the micro-parts into the vat leads to scattering, and as I noted the chances of clumping, much less of a functional configuration, at random are next to nil. There is a logical and physical possibility, but the probability goes so strongly against it that the classical result is reliable.

The NUS students are explaining the phenomenon of observed mixing, and they are doing so by using stat mech reasoning. It is specious to drag across the track the red herring of oil and water -- a case which, by speaking of FAVOURABLE mixing and by giving a sufficiently clear example, they are simply not addressing.

What they are addressing is highly material, and should be considered on the merits, not impatiently brushed aside by following a red herring trail to a convenient strawman misrepresentation that one then beats up on.

15] Alcohol and water will mix, because the thermodynamic entropy goes up, oil and water separate because the thermodynamic entropy goes up when they separate.

You have this the wrong way around. The micro-level dynamics come first, then the entropy accounting. [Once we are not dealing with, "we have this magic number that tells us which way things will go . . ."]

Alcohol has a polar molecule, so it will not face the potential barrier just discussed. Random vibe-rot-trans forces then lead to scattering as the natural result. Oil faces the potential barrier, so there is a rather low limit on solubility in water. [VERY few things will not dissolve to some extent in water -- ever tasted oil in water that did not have even a sheen on the surface?]

The NUS students were discussing the former sort of case, and the latter is in your hands a -- I suspect inadvertent -- distraction from their legitimate point.

16] You can only use configurational entropy to make a prediction if you already know the answer! Me, I will stick with the thermodynamic entropy.

Not at all. We look at the underlying micro-dynamics, and link the micro world to the macro one through identifying causal chains and associated statistical patterns. That is the project of Stat mech. And, a successful one it has been, too.

As has long since been pointed out, configurational entropy is real, and has significant applications once we look at diffusion and related phenomena.

Further to this, in the case of informational macromolecules, configurations of monomers is a functional issue and allows us to make significant analytical progress. [Judging by your remarks, the problem you seem to have is that the progress in question is not pointing where you would like it to.]

17] I would be curious to know what the entropy is of these three sequences. I would guess it is the same for the first and third, as both are (we assume) specified, while the second is random.

As TBO actually calculated in relevant cases -- i.e. it seems your reading of the chapter has been selective -- the values can be worked out.

The first is non-contingent, so it has but one configuration, W = 1, so k ln W = 0: its configurational entropy [assuming for the sake of argument we are dealing with molecular-scale chemical coding] is zero, but vibration etc. states are still accessible, so overall entropy will be non-zero at accessible temperatures. The config entropy of a perfect and pure crystal is zero.

If you are dealing with the entropy of bits of ink on paper or dots on a computer screen, one would need to work out the statistical weight of a macrostate containing a screen or sheet with "the end . . ." on it. In principle, far less than an at-random state, but in practice a serious and involved physical calculation.

Back to the molecular codes. A random string of 27-state elements of length N has in the raw 27^N possible configs, so s_config = N k ln 27. Factoring in compositional constraints we go to working off:

W = N!/[n1! . . . n27!]

As the NUS students and TBO work out, that leads to the value through s_config = k ln W.

In the third case, due to the tight specification, at molecular level there would be just one config again: s_config = k ln 1 = 0. A functionally specified [here, grammatically functionally specified . . .] unique configuration has s_config = 0.

Using the Brillouin principle of negentropy, we can deduce the increment of information in moving from the random state to the configured one by doing a ratio measure, as TBO do in ch 8 for their 100 monomer hypothetical protein.
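That ratio measure can be sketched in code. The following is my own illustration of the W = N!/[n1! . . . n27!] count and the Brillouin-style increment, applied to the 35-character random string quoted earlier; it is not TBO's actual chapter 8 calculation:

```python
import math
from collections import Counter

K_B = 1.380649e-23  # Boltzmann's constant, J/K

def config_entropy(sequence: str) -> float:
    """s_config = k ln W, with W = N!/(n1!...nk!) from the letter counts."""
    n = len(sequence)
    ln_w = math.lgamma(n + 1) - sum(
        math.lgamma(count + 1) for count in Counter(sequence).values()
    )
    return K_B * ln_w

random_string = "AGDCBFEGBCAFEDACEDFBGKLDSAJDHGTHCYY"
s_random = config_entropy(random_string)
s_specified = 0.0  # a unique specified configuration: W = 1, so k ln 1 = 0

# Brillouin-style increment: information gained in moving from the random
# macrostate to the specified one, expressed in bits.
bits = (s_random - s_specified) / (K_B * math.log(2))
print(f"s_config(random) = {s_random:.3e} J/K, i.e. about {bits:.0f} bits")
```

The log-gamma form is just ln N! computed without overflow; the difference between the random and specified configurational entropies, divided by k ln 2, gives the information increment in bits.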

In the updated work on cytochrome c, Bradley does a theoretical calc on a flat probability distribution, then brings in various empirical factors, yielding an estimate in the general ball park of those by leading OOL researchers. Cf his slide show as linked yesterday.

In short we can answer the problem cogently and credibly.

18] Maybe the second is some sort of code. Interestingly, this would imply that the entropy depends on what meaning we can discern in the message

Of course, the "hidden code" issue would be bound to turn up. [Long since addressed in my always linked, in light of long experience on this topic . . .]

The answer is simple: unless the code is shown to function, we have no observationally anchored reason to assign the macrostate to other than a random clumped one.

So we can happily accept the possibility that measures of entropy are -- as all things in science -- provisional.

Show the functionality and specificity, then we can, based on such metrics, infer to W thence s_config. [Note that metrics often come in different flavours of scaling: e.g. ratio, interval, ordinal, nominal.]

By massive statistical weight, random states are reliably non-functional. That is why clumping micro-jet parts is unlikely to get to a flyable plane. Configuring in accord with a functionally specified program will, with high reliability. The two macrostates are easily distinguishable, and we can see the relevant statistical weights.

Thence we see the entropy shifts. And, the situation with biopolymers is similar.

Unfortunately, this is a recircling to a point you made much earlier in the discussion; i.e., Pixie, we are going in circles back to already long since cogently addressed objections. That is not a mark of a progressive discussion.

Let's see if it makes sense to go on further.

You have underscored to me the strength of the underlying point I and others have made. For that, I must thank you for your time and effort. But, absent real progress -- and I don't mind if the path is spiral -- we are going in circles.

Why not summarise points where progress has been made and identify points for further discussion that would be progressive?

I guess I can start:

a] the first point of progress should be that not all ID-supporting thinkers who raise thermodynamics issues are technically incompetent to examine the issues.

b] As a second point, "open systems can reduce entropy" is not a good enough response on the merits.

c] Third, the origin of functionally specified complex information in certain energy converting devices is significant as a problem.

d] where contingency is an issue, chance, natural regularities and agency have potentially all got a seat at the table.

e] configurational entropy is a credible component of entropy, and all of this makes the issue of design vs designoid a significant and longstanding one, not one to be brushed aside rhetorically or by begging worldview-level questions.

f] we differ on the conclusion but not on the question, and the evidence is not as easy to assess as some imagine.

GEM of TKI

The Pixie said...

Hi Gordon

First, this "direct functionality" issue.
1] Pix: DNA in a test tube is not directly functional, according to your new definition it is. Thermodynamics is an exact science. How can you claim to be using thermodynamics when your terms are as vague as this? . . . . Before you said it was about being a component of a system, this time around it is about specification. So no, your terms are not clear . . . .
You are insisting on a rhetorical distinction without a difference.
No, I am not. And I thought it was clear that I was not. DNA in a test tube is not a component of a system, so according to your first definition, it is not "directly functional". But human DNA is specified, and so according to your second definition, it is "directly functional", whether in the cell or the test tube. If your two definitions have different implications, they are, well, different. This is no "rhetorical distinction"! I am surprised that you cannot see the difference.

My earlier comments two posts ago were based on the definition that DNA is "directly functional" because it is part of a system. You said I was wrong. Very clearly the reason I was wrong is that the definition you gave me was wrong!

DNA is in reality a name for a family of related informational molecules, which in life systems take on particular configurations to store information used in constructing the molecules of life. The same system of chaining can be used to store arbitrary strings of the 4-state elements, at random, or as targetted by an experimenter [at least in principle].

I have pointed out that DNA is functional as an integral component of life systems, in the former case, and that this has implications for the number of microstates of the GCAT etc chain that are consistent with life function. Namely, that the functional state is relatively very small and isolated in the space of possible configurations. Following Brillouin, we can therefore deduce an informational measure linked to the configurational component of the entropy, as TBO do in TMLO ch 8. The rest of their analysis follows directly.

My micro-jet example simply shows how the process works, using a more familiar technology.

And, from the very beginning, in addressing DNA, I have looked at the macrostate defined by its bio-function, as do TBO. The same for polypeptide chains. Being a functional component of a tightly integrated information- and energy-processing system plainly implies having functionality that is both direct and specific.

When I asked you "What exactly do you mean by "direct functionality"?" you replied, for DNA: "DNA is an integral, thus functional, component of a sophisticated information-processing system, which is either designed or in Dawkins' coinage: "designoid."". You seem to think I must be at fault here for believing that what you wrote was exactly what you meant by "direct functionality".

Please think about this from my point of view. Can you see how it looks to me? I ask you to say "exactly" what you mean, you reply, and then next time around you say something else; now it is about specification. If you had said you had forgotten to mention that, or you had changed your mind, well, we could just move on (sure, I might comment on you changing your mind about a definition). But instead you are insisting that I am at fault here!

Thermodynamics is an exact science. The terms are defined precisely. To be honest, right now it looks as though "direct functionality" means whatever you want it to mean at any particular moment. And that is disappointing.

The Pixie said...

Okay, back to the thermodynamics. Yes, I agree the debate has pretty much run its course. I wonder if anyone else has bothered to read all the way down to here?

2] Without observability, we cannot distinguish any one microstate from any other, apart from micro-level inspection, which as "Max" shows, through Brillouin's analysis, is self-defeating energetically.
There is a difference between what we actually see, and what is potentially observable. The second law still happens on Mars, even though the processes are not observed. In theory, those macrostates can be distinguished. It does not matter if that is done or even practical. This is important later on.

But if it is non-functional, it is most unlikely to be macroscopically discernible.
In the same way that macrostates on Mars are not macroscopically discernible? These faulty nanobots are consistently producing the same configuration. We can discern that configuration under an electron microscope. So yes, they are macroscopically discernible from other configurations, despite being non-functional.
You will note from my always linked note, that I have always spoken to FUNCTIONALLY specified, complex information
Sure, but the question we are trying to answer is how FSCI relates to entropy. It remains to be proven; it would be circular reasoning to build your argument on that assumption.

But, functionality is a very different kettle of fish: it can in some cases confine a configuration down to a single micro-state, and in any case, to a relatively small island in the configuration-space.
In thermodynamics a micro-state is a particular distribution of energy. The only time you can have a single micro-state is when there is no energy present, and entropy is therefore zero, i.e., at absolute zero (which cannot be achieved anyway).

3] If a particular DNA strand is produced and controlled by a program to start out and continue to be ... in a particular state, with more than 250 GCAT elements in it then by definition, it has in it complexity and specification. But unless it is also functional in some context that we can observe, we will have a hard time defining a macrostate at macro-level for s = k ln w to work beyond that first moment when the molecule comes out of the nanomachinery. [In short, I am pointing out that the microstate is prone to shift to other chemically accessible configurations which are not so high-energy.]
The macrostate (if we allow configurational entropy) is the sequence we put into the device in the first place. It is possible to observe this macrostate; DNA sequencing is done routinely.

Thus we have the functional DNA and the non-functional DNA, both slowly falling apart, but we can see how much it has fallen apart in both cases. Or we can pop it in a freezer. I think the DNA should last a fair time in there.

So I am left wondering about the answer to my question. And why you ducked it completely. The question is still valid, even if we just consider the DNA sequences straight from the process (it is irrelevant if they promptly fall apart). So what is the difference in entropy between the two DNA sequences?

Fortunately, you answer this (effectively) in point 17; a simple repeating pattern has the same entropy as a sequence with a message in it.
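For scale, the configuration space behind the "250 GCAT elements" figure quoted in point 3 can be counted directly; a short Python sketch:

```python
import math

# A chain of 250 four-state (G/C/A/T) elements has 4**250 distinct
# configurations -- the raw space any single specified sequence sits in.
n = 250
w = 4 ** n                 # exact integer count of configurations
bits = n * math.log2(4)    # information if exactly one sequence is specified

print(len(str(w)))  # 151 decimal digits
print(bits)         # 500.0
```

That is the whole space; how much of it counts as "functional" is precisely what the two sides here dispute.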

4] They are not pursuing the same target, and in the random target case are in general unlikely to produce a functional macrostate, namely a flyable micro-jet. By inspection of the targetting subroutines in the program, we could see what the targetted state is, and so it is complex and specified, but by overwhelming probability it is likely to be non-functional.
Right.

We have been specifically looking at a functionally defined, targetted, flyable macrostate. Given the ticklish nature of aerodynamics and control as well as the nature of aero engines, such a state is going to be rare in the configuration space, and is highly likely to be functionally isolated from other states that may have some other function. I.e. we are looking at not only functionally specific states, but also ones with irreducibly complex cores.
It is a bit late in the discussion to discuss the impact of IC on entropy...

Er, so where does that leave us? How does this support your position? I have not argued that we will hit the nano-plane by accident, so this seems irrelevant.

5] Both may expend a similar number of joules in aggregate, but because precisely the configuration counts, the work done is different.
I was using "work" in the technical sense of energy expended, so the work done would be the same. And I have spent several posts trying to get you to consider the energy differences, so this does feel a little bit like another dodge.

But okay, we both agree that the number of joules expended is the same. So we have three vats, under vacuum (so no flying in any case). All three have clumping nanobots; one also has working nanobots, the second faulty nanobots, and the third nanobots that produce a specific aesthetically pleasing arrangement. All three produce complex specified aggregates. All three expend the same amount of energy. All three produce aggregates that do not fly under the current conditions. All three produce aggregates of a configuration that can be observed under a microscope, so each is macroscopically observable.

But you say aggregates from the first vat have much lower entropy than those from the second. What about the third? Later, in point 17, you say the entropy of a simple repeating sequence is identical to that of a "functional" message (i.e., with meaning). That would lead me to suppose that the aesthetically pleasing aggregate has the same entropy as the nanoplanes.

One reliably creates a functional state, the other does not. That difference is not only discernible from inspection of the subroutines, but from the simple observation of the result. (That is a great advantage of focussing on FSCI.)
I can see that. But I do not see why that leads us to suppose a difference in the entropy. And that really is the point, surely?

6] It is standard stat mech reasoning to account for the classical result that certain entropy increasing processes are "irreversible," by reference to relative statistical weights of microstates and also to refer to the principle that all accessible microstates are equiprobable. [Cf for instance my clip from Yavorski and Detlaf above.] That is what I have again done in the case you are objecting to.
Only if those microstates relate to energy! And there is no reason for them to be equiprobable, by the way. Check out the Boltzmann distribution.
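The Boltzmann distribution just mentioned can be sketched numerically. It applies to a system exchanging energy with a heat bath (the equiprobability postulate concerns the microstates of an isolated system); the three evenly spaced energy levels below are an arbitrary illustration:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_probs(energies, T):
    """Occupation probabilities p_i proportional to exp(-E_i / kT) for a
    system exchanging energy with a bath at temperature T."""
    weights = [math.exp(-E / (K_B * T)) for E in energies]
    Z = sum(weights)  # partition function
    return [w / Z for w in weights]

# Three levels spaced by k_B*T at 300 K: occupation is far from equal.
T = 300.0
levels = [0.0, K_B * T, 2 * K_B * T]
print([round(p, 3) for p in boltzmann_probs(levels, T)])  # [0.665, 0.245, 0.09]
```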

9] If this is to be searched at random or in ways that are tantamount to it, then we are most unlikely to arrive at functionally specified configurations by chance. And, that, through standard thermodynamics-style reasoning.
That depends on the search and the search space. If we suppose a fitness landscape, and a search routine that tries random increments, rejecting those of lower fitness, the search will inevitably reach a fitness peak.
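That search routine can be written in a few lines of Python; the one-peak landscape, step size, and iteration count below are arbitrary illustrations:

```python
import random

def hill_climb(fitness, x0, step=0.1, iters=10_000, seed=1):
    """Try random increments, keeping only moves that do not lower
    fitness; on a smooth landscape this climbs to a (local) peak."""
    rng = random.Random(seed)
    x = x0
    for _ in range(iters):
        candidate = x + rng.uniform(-step, step)
        if fitness(candidate) >= fitness(x):
            x = candidate
    return x

# A single-peak landscape with its maximum at x = 2: the search starts
# far away and reliably ends up near the peak.
peak = hill_climb(lambda x: -(x - 2.0) ** 2, x0=-5.0)
print(round(peak, 3))
```

As with any hill-climber, it only finds a local peak of whatever landscape it is given; whether functional islands are connected enough for this to work is the substantive question in dispute.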

Similarly, the core of assigning the identification, murder, rests on INTENT, not on the NRs and chance processes that intervened. That objection too, falls to the ground.
Of course! That is the specific issue in a murder investigation: who was the intelligent agent. Nevertheless, in any murder NR plays a big role (try shooting a gun in a universe with no natural laws, or natural laws that vary wildly moment by moment, and see how far you get).

Nor am I being simplistic, as I have from the outset pointed out that agents routinely use NR and chance but in so doing introduce a third factor: intelligent action.
The objection is that you are being simplistic in not considering chance and NR (e.g. as used in the search routine above).

12] Of course, the point is that beyond a certain reasonable level of configurational complexity, it is not credible [here, cf. the underpinnings of 2 LOT in the stat mech form] that chance introduces the functional information required. Agents routinely produce that level of functional complexity.
Ah, what level of configurational complexity is that? Is this a well established law, with experimental and theoretical data to support that threshold? I am going to guess... No. In fact, I bet that actually no one has proposed a value for this supposed threshold, let alone found data to support it.

Which would be odd for a science like thermodynamics, so firmly rooted in maths. Why might that be?

14] First, you accused these students [and by implication their supervisor] of PREDICTING -- an active, specific act -- that oil and water would mix in general. I looked, and no such statement was there.
They make a general prediction: "Since mixing always corresponds to an increase in configurational entropy, and mixing involves the dilution or spread of matter in space, we conclude that: Spreading of matter in space is spontaneous as configurational entropy increase." I applied their prediction to the specific case of oil and water. In this instance the prediction fails.

You shifted to that their remarks IMPLY such a prediction. But I had anticipated that by pointing out that they were discussing FAVOURABLE mixing and did so by giving an illustrative example, dye drops in water -- something I did in 1st form as I recall. Dye drops diffuse and do not unmix, reliably. PN junctions work. And more.
Why is that? ANS: because the opening up of configuration space allows the molecules to scatter into location cells all across the beaker or test tube. By overwhelming probability, there is a predominant cluster of microstates: scattered. And, reliably, the dye drop goes to that state.

No, because opening up the space gives the molecules more freedom to move, lowering energy levels, making them more accessible for energy distribution, raising entropy.

Oil, absent an emulsifier, faces a potential-well problem -- the electrostatic forces it generates are unable to cling sufficiently to the water molecules to diffuse rapidly. But in fact, if we are really patient, we will see a slow distribution of the oil molecules in the water [but with a rather low feasible concentration -- or they will clump and fall out of solution]. An emulsifier will allow the oil to diffuse into the water; e.g., that is similar to how soaps and detergents work.
That is right. To understand why some things mix and some do not, the configurational entropy must be discarded -- it tells us everything mixes! Instead, use the thermodynamic entropy, which gets it right every time.
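The "it tells us everything mixes" point can be checked directly: the ideal (purely configurational) entropy of mixing is positive at every mole fraction. A short Python sketch for a binary mixture:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def ideal_mixing_entropy(x):
    """Ideal (configurational) entropy of mixing per mole of a binary
    mixture at mole fraction x: positive for all 0 < x < 1."""
    return -R * (x * math.log(x) + (1 - x) * math.log(1 - x))

for x in (0.1, 0.5, 0.9):
    print(round(ideal_mixing_entropy(x), 2))  # all positive; maximum at x = 0.5
```

By itself this term predicts mixing in every case; whether mixing actually happens depends on the full free-energy balance.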

Indeed, observe my thought expt at the head of the thread and at UD: decanting the micro-parts into the vat leads to scattering, and as I noted, the chances of clumping, much less of a functional configuration at random, are next to nil.
I thought you defined them as nil. Some things do clump together (e.g. a drop of water on your window pane).

15] You have this back way around. The micro-level dynamics come first, then the entropy accounting. [Once we are not dealing with, "we have this magic number that tells us which way things will go . . ."]
The value of deltaS for an isolated system, or deltaG otherwise, is indeed a "magic number" that tells us which way things will go. That is the point. The second law makes definite predictions about what can and cannot happen.

If we cannot use the second law to predict what can and cannot happen, then how could you use it to show evolution cannot happen?

And the micro-level dynamics is the entropy.
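The "magic number" point can be made concrete with deltaG = deltaH - T*deltaS. The figures below are illustrative stand-ins, not measured values, chosen only to show how the sign decides the direction:

```python
def delta_G(delta_H, delta_S, T):
    """Gibbs free energy change, J/mol: a negative value predicts a
    spontaneous process at constant temperature and pressure."""
    return delta_H - T * delta_S

T = 298.0  # K
# Dye dispersing in water (illustrative): entropy gain, negligible deltaH.
dye = delta_G(0.0, +50.0, T)
# Oil in water (illustrative): the hydrophobic ordering of water makes
# the net entropy change negative, so bulk mixing does not go.
oil = delta_G(0.0, -50.0, T)
print(dye < 0, oil < 0)  # True False
```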

The NUS students were discussing the former sort of case, and the latter is in your hands a -- I suspect inadvertent -- distraction from their legitimate point.
Okay, maybe I read more into it than was there. It seemed to me they were doing science. They had a theory about configurational entropy, they looked at the implications, and made the prediction: "Since mixing always corresponds to an increase in configurational entropy, and mixing involves the dilution or spread of matter in space, we conclude that: Spreading of matter in space is spontaneous as configurational entropy increase." Apparently what they were doing was noting that mixing always involves an increase in configurational entropy. And sometimes things mix, and sometimes they do not.

16] Pix: You can only use configurational entropy to make a prediction if you already know the answer! Me, I will stick with the thermodynamic entropy.
Not at all. We look at the underlying micro-dynamics, and link the micro world to the macro one through identifying causal chains and associated statistical patterns. That is the project of Stat mech. And, a successful one it has been, too.
But that is thermodynamic entropy!

Whatever their point, it is apparent from the work at NUS that you cannot make reliable predictions about mixing from the configurational entropy. You need to consider the energy situation. As you did.

Further to this, in the case of informational macromolecules, the configuration of monomers is a functional issue and allows us to make significant analytical progress. [Judging by your remarks, the problem you seem to have is that the progress in question is not pointing where you would like it to.]
No, it is not. I was hoping to see some maths, some data, some reasoning to support this claim. Instead I see vague and shifting definitions, irrelevancies, and thought experiments about systems I find it hard to understand, with no explanation of the details of what is going on in them.

17] The first is non-contingent, so it has but one configuration, W = 1, so k ln W = 0: its configurational entropy [assuming for the sake of argument we are dealing with molecular-scale chemical coding] is zero...
In the third case, due to the tight specification, at molecular level there would be just one config again: S_config = k ln 1 = 0. A functionally specified [here, grammatically functionally specified] unique configuration has S_config = 0.

So the entropy of the simple repeating pattern is identical to the entropy of the complex message! I can ask for no more than that. I therefore assume that the entropy of DNA with a simple repeating pattern is identical to that of human DNA?
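The arithmetic behind point 17 is a one-liner; a minimal Python sketch:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def s_config(W):
    """Configurational entropy S = k ln W for W accessible configurations."""
    return K_B * math.log(W)

# A single tightly specified configuration -- whether a simple repeat or a
# message-bearing sequence -- has W = 1, so S_config = 0 in both cases.
print(s_config(1))  # 0.0
```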

The Pixie said...

Okay, back to the thermodynamics. Yes, I agree the debate has pretty much run its course. I wonder if anyone else has bothered to read all the way down to here?

2] Without observability, we cannot distinguish any one microstate from any other, apart from micro-level inspection, which as "Max" shows, through Brillouin's analysis, is self-defeating energetically.
There is a difference between what we actually see, and what is potentially observable. The second law still happens on Mars, even though the processes are not observed. In theory, those macrostates can be distinguished. It does not matter if that is done or even practical. This is important later on.

But if it is non-functional, it is most unlikely to be macroscopically discernible.
In the same way that macrostates on Mars are not macroscopically discernable? These faulty nanobots are consistently producing the same configuration. We can discern that configuration under an electron microscope. So yes they are macroscopically discernable from other configurations, despite being non-functional.
You will note from my always linked note, that I have always spoken to FUNCTIONALLY specified, complex information
Sure, but the question we are trying to answer is how FSCI relates to entropy. It remains to be proven; it would be circular reasoning to build your argument on that assumption.

But, functionality is a very different kettle of fish: it can in some cases confine a configuration down to a single micro-state, and in any case, to a relatively small island in the configuration-space.
In thermodynamics a micro-state is a particular distribution of energy. The only time you can have a single micro-state is when there is no energy present, and entropy is therefore zero, i.e., at absolute zero (which cannot be achieved anyway).

3] If a particular DNA strand is produced and controlled by a program to start out and continue to be ... in a particular state, with more than 250 GCAT elements in it then by definition, it has in it complexity and specification. But unless it is also functional in some context that we can observe, we will have a hard time defining a macrostate at macro-level for s = k ln w to work beyond that first moment when the molecule comes out of the nanomachinery. [In short, I am pointing out that the microstate is prone to shift to other chemically accessible configurations which are not so high-energy.]
The macrostate (if we allow configurational entropy) is the sequence we put into the device in the first place. It is possible to observe this macrostate; DNA sequencing is done routinely.

Thus we have the functional DNA and the non-functional DNA, both slowly falling apart, but we can see how much it has fallen apart in both cases. Or we can pop it in a freezer. I think the DNA should last a fair time in there.

So I am left wondering about the answer to my question. And why you ducked it completely. The question is still valid, even if we just consider the DNA sequences straight from the process (it is irrelevant if they promptly fall apart). So what is the difference is entropy between the two DNA sequences?

Fortunately, you answer this (effectively) in point 17; a simple repeating pattern has the same entropy as a sequence with a message in it.

4] They are not pursuing the same target, and in the random target case are in general unlikely to produce a functional macrostate, namely a flyable micro-jetBy inspection of the targetting subroutines in the program, we could see what the targetted state is, and so it is complex and specified, but it is by overwhelming probability, likely to be non-functional.
Right.

We have been specifically looking at a functionally defined, targetted, flyable macrostate. Given the ticklish nature of aerodynamics and control as well as the nature of aero engines, such a state is going to be rare in the configuration space, and is highly likely to be functionally isolated from other states that may have some other function. I.e. we are looking at not only functionally specific states, but also ones with irreducibly complex cores.
It is a bit late in the discussion to discuss the impact of IC on entropy...

Er, so where does that leave us? How does this support your position? I have not argued that we will hit the nano-plane by accident, so this seems irrelevant.

5] Both may expend a similar number of joules in aggregate, but because precisely configuration counts, the work done is different.
I was using "work" in the technical sense of energy expended, so the work done would be the same. And I have spent several posts trying to get you to consider the energy differences, so this does feel a little bit like another dodge.

But okay, we both agree that the number of joules expended is the same. So we have three vats, under vacuum (so no flying in any case). All three with clumping nanobots, one also has working nanobots, the second faulty nanobots and the third which produces a specific asthetically pleasing arangement. All three produce compex specified aggregates. All three expend the same amount of energy. All three produce aggragates that do not fly under the current conditions. All three produce aggregates of a configuration that can observed under a microscope, s is macroscopically observable.

But you say aggregates from the first vat have much lower entropy than those from the second. What about the third? Later, in point 17 you say the entropy of a simple repeating sequence is identical to a "functional" message (i.e., with meaning). That would lead me to suppose that the asthetically pleasing aggregate is the same entropy as the nanoplanes.

One reliably creates a functional state, the other does not. That difference is not only discernible from inspection of the subroutines, but from the simple observation of the result. (That is a great advantage of focussing on FSCI.)
I can see that. But I do not see why that leads us to suppose a difference in the entropy. And that really is the point, surely?

6] It is standard stat mech reasoning to account for the classical result that certain entropy increasing processes are "irreversible," by reference to relative statistical weights of microstates and also to refer to the principle that all accessible microstates are equiprobable. [Cf for instance my clip from Yavorski and Detlaf above.] That is what I have again done in the case you are objecting to.
Only if those microstates relate to energy! And there is no reason for them to be equiprobable, by the way. Check out the Boltzmann distribution.

9] If this is to be searched at random or in ways that are tantamount to it, then we are most unlikely to arrive at functionally specified configurations by chance. And, that, through standard thermodynamics-style reasoning.
That depends on the search and the search space. If we suppose a fitness landscape, and a search routine that tries random increments, rejecting those of ower fitness, inevitably the search will get to a fitness peak.

Similarly, the core of assigning the identification, murder, rests on INTENT, not on the NRs and chance processes that intervened. That objection too, falls to the ground.
Of course! That is the specific issue in a murder investigation; who was the intelligent agent. Nevertheless, in any murder NR plays a big role (try shooting a gun in a universe with no natural laws or natural laws that vary wildly moment by moment, and see how far you get.

Nor am I being simplistic,as I have from the outset pointed out that agents routinely use NR and chance but in so doing introduce a third factor: intelligent action.
The objection is that you are being simplistic in not considering chance and NR (eg as used in the search routine above).

12] OF course, the point is, that beyond a certain reasonable level of configurational complexity, it is not credible [here, cf the underpinnings of 2 LOT in the stat mech form] that chance introduces the functional information required. Agents routinely produce that level of functional complexity.
Ah, what level of configurational complexity is that? Is this a well established law, with experimental and theoetical data to support that threshold? I am going to guess... No. In fact, I bet that actually no one has proposed a value for this supposed threshold, let alone found data to support it.

Which would be odd for a science like thermodynamics, so firmly rooted in maths. Why might that be?

14] First, you accused these students [and by implication their supervisor] of PREDICTING -- an active, specific act -- that oil and water would mix in general. I looked, and no such statemetn was there.
They make a general prediction: "Since mixing always corresponds to an increase in configurational entropy, and mixing involves the dilution or spread of matter in space, we conclude that: Spreading of matter in space is spontaneous as configurational entropy increase." I applied their prediction to the specific case of oil and water. In this instance the prediction fails.

You shifted to that their remarks IMPLY such a prediction. But I had anticipated that by pointing out that they were discussing FAVOURABLE mixing and did so by giving an illustrative example, dye drops in water -- something I did in 1st form as I recall. Dye drops diffuse and do not unmix, reliably. PN junctions work. And more.
Why is that? ANS: because the opening up of configuration space allows the molecules to scatter into location cells all across the beaker or test tube. By overwhelming probability, there is a predominant cluster of microstates: scattered. And, reliably, the dye drop goes to that state.

No, because opening up the space gives the molecules more freedom to move, lowering energy levels, making them more accessible for energy distribution, lowering entropy.

Oil, absent an emulsifier, faces a potential well problem -- the electrostatic forces it generates are unable to sufficiently cling to the water molecules to diffuse rapidly. But in fact if we are real patient, we will see a slow distribution of the oil molecules in the water [but with a rather low feasible concentration -- or they will clump and fall out of solution]. An emulsifier will allow the oil to diffuse into the water, e.g that is similar to how soaps and detergents work.
That is right. To understand why somethings mix and some do not the configurational entropy must be discarded - it tells us everything mixes! Instead, use the thermodynamic entropy, which gets it right every time.

Indeed, observe my thought expt at the head of the thread and at UD: decanting the micro-parts into the vat leads to scattering,and as I noted the chances of clumping much less functional configuration at random, are next to nil.
I thought you defined them as nil. Somethings do clump together (eg drop of water on your window pane).

15] You have this back way around. The micro-level dynamics come first, then the entropy accounting. [Once we are not dealing with, "we have this magic number that tells us which way things will go . . ."]
The value of deltaS for an isolated system or deltaG otherwise is indeed a "magic number" that tells us which way things we go. That is the point. The second law makes definite predictions about what can and cannot happen.

If we cannot use the second law to predict what can and cannot happen, then how could you use it to show evolution cannot happen?

And the micro-level dynamics is the entropy.

The NUS students were discussing the former sort of case,and the latter is in your hands a -- I suspect inadvertent -- distraction from their legitimate point.
Okay, may be I read more into it than was there. It seemed to me they were doing science. They had a theory about configurational entropy, they looked at the implications, and made the prediction: "Since mixing always corresponds to an increase in configurational entropy, and mixing involves the dilution or spread of matter in space, we conclude that: Spreading of matter in space is spontaneous as configurational entropy increase." Apparently what they were doing was noting that mixing always involves an increase in configurational entropy. And sometimes things mix, and sometimes they do not.

16] Pix: You can only use configurational entropy to make a prediction if you already know the answer! Me, I will stick with the thermodynamic entropy.
Not at all. We look at the underlying micro-dynamics, and link the micro world to the macro one through identifying causal chains and associated statistical patterns. That is the project of Stat mech. And, a successful one it has been, too.
But that is thermodynamic entropy!

Whatever their point, it is apparent from the work at NUS that you cannot make reliable predictions about mixing from the configurational entropy. You need to consider the energy situation. As you did.

Further to this, in the case of informational macromolecules, configurations of monomers is a functional issue and allows us to make significant analytical progress. [Judging by your remarks, the problem you seem to have is that the progress in question is not pointing where you would like it to.]
No it is not. I was hoping to see some maths, some data, some reasoning to support this claim. Instead I see vague and shifting definitions, irrelevancies and thought experiments about systems I find it hard to understand, with no explanation of the details of what is going on in them.

17] The first is non-contingent, so it has but one configuration, W = 1, so k ln W = 0, its configurational entropy [assuming for the sake of argument we are dealing with molecular scale chemical coding] is zero...
In the third case, due to the tight specification, at molecular level there would be just one config again. Sconfig = k ln 1 = 0. A functionally specified [here grammatically functionally specified . .] unique configuration has Sconfig = 0

So the entropy of the simple repeating pattern is identical to the entropy of the complex message! I can ask for no more than that. I therefore assume that the entropy of DNA with a simple repeating pattern is identical to that of human DNA?
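The arithmetic both sides keep invoking here is just S_config = k ln W. A minimal sketch (the W values are placeholders, not measurements) shows why a fully specified simple repeat and a fully specified complex message come out the same:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def s_config(W):
    """Configurational entropy S = k ln W for W equally weighted configs."""
    return K_B * math.log(W)

# A unique, fully specified configuration: W = 1 gives S = 0,
# whether the sequence is a simple repeat or a complex message.
unique = s_config(1)

# A loosely specified state admitting many configurations has S > 0.
loose = s_config(10**6)
```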

Gordon said...

Pixie:

It is pretty clear this thread has now run its course, and has shown that TBO's case is strong.

A few notes on points where I think some clarification will help:

1] Distinction without a difference

I am afraid, it is. DNA is in effect a chemical carrier wave. The material issue is: what is the message? Messages, by definition, function.

2] In the same way that macrostates on Mars are not macroscopically discernible?

Actually, first, a great many macrostates on Mars are discernible, starting with spectroscopically and through the known mass and composition of the planet; then on through the actual samples that have been taken. This sort of thing is why thermodynamics considerations play a part in astrophysics.

Second, again the cases are not relevantly comparable. Nanobots targeting any one at-random microstate of micro-jet parts will, as I noted, be getting to a specified and complex state. But, by overwhelming probability, it will be non-functional, and so macroscopically indiscernible. [Indeed, that is precisely the issue: body-plan level random changes are vastly more likely to be non-functional than creative.]

The functional macrostate will be observable and is isolated in the config space. That is what is decisive on the issue of getting there by chance and/or natural regularities only.

3] The question we are trying to answer is how FSCI relates to entropy. It remains to be proven; it would be circular reasoning to build your argument on that assumption.

Crying "circular argument" is not at all the same as showing it.

What I have done, and TBO and others in the OOL field before them have done, is to point out the nature of the functionally specified, complex information found in life systems [and in similar situations]. It turns out that there is a considerable sensitivity to perturbation of the stored information or the structurally expressed information - that is, we are looking at isolated islands of functionality in config space. Chance and NR are not credibly able to access those islands in the gamut of the observable cosmos. But agents routinely and as a matter of observation do so. So, on IBE, agency is the best explanation of FSCI.

In terms of entropy, I have taken the time to show that configurational entropy as used in the OOL field by TBO and others is not a dubious novelty. For, we can macroscopically/functionally discern the relevant states, and we can reasonably assign their statistical weights. The entropy numbers follow, and they plainly decrease dramatically as one goes from scattered to clumped to configured states.

The objection to configurational entropy as used by TBO etc fails. The rest of their argument follows, and is reasonable relative to what we observe and experience.

Nor is it fair to say that I have "ducked" your question on DNA. I have noted that a targeted, sequenced DNA strand of sufficient length would indeed be complex and specified, and would therefore have in it a high degree of information, though I noted too that its stability would be an issue, and so the information stored in the test tube as opposed to the active site in the cell is an open question. That is the context in which I raised the issue of the observability of macrostate in vitro as opposed to in vivo. And that is a relevant issue.

Similarly, a specified, "simple" repeating pattern in DNA has the same Shannon-measure of what properly is information-carrying capacity. In the space of configurations, we see isolated islands of functionality, and that is where the issue of functionality and associated relatively low statistical weights of macrostates becomes a decisive issue, one in favour of TBO's points and implications that chance plus necessity cannot reasonably get to the sort of integrated systems of life within the gamut of a planet or even the whole observable cosmos.

BTW on entropy, note your acknowledgement of the deterioration in DNA triggered by random change, i.e. we are seeing entropy at work through rearranging atomic configurations.

Also, irreducibly complex systems are, in many cases [the sufficiently complex ones, in the sense of a big enough config space], a subset of FSCI systems.

4] we have three vats, under vacuum (so no flying in any case)

This is simply a distraction from the issue of observable functionality.

Whether or not the vats are originally in a vacuum, the resulting clumps can be tested in the appropriate environment; thus functionality is plainly observable. The macrostate-microstate pattern follows, and the rest of the point too.

Macrostate assignment is of course a matter of the observability of the relevant state-defining variables etc. As long since noted, the project of stat mech is to examine the statistical, micro level underpinnings that give rise to the macro-level overall patterns.

Further to this, I again point to Robertson's point on the issue that since macro-observations typically do not specify microstate within any great narrowness, the entropy measure is a metric of that lack of specification, which in energy-harvesting terms forces us to an assumption of randomness and thence loss of potential harvest. In the FSCI systems, we see how the functionality narrows down microstate dramatically, and allows for action based on that precision, i.e. the enhanced information we have. This is a very reasonable and well-anchored point -- one that it would be fair comment to note you have consistently "ducked," to use your own word from above.

5] Equiprobable . . .

As to the equiprobable distributions of accessible microstates, I think you should go reread the stat mech literature.

If there is no reason to prefer any one state, then the relevant states are viewed as equiprobable. That INCLUDES the Boltzmann distribution -- and gives rise to the predominant cluster of microstates as usually observed.

Even Wiki notes:

"The fundamental postulate in statistical mechanics (also known as the equal a priori probability postulate) is the following:

Given an isolated system in equilibrium, it is found with equal probability in each of its accessible microstates.

This postulate is a fundamental assumption in statistical mechanics - it states that a system in equilibrium does not have any preference for any of its available microstates. Given Ω microstates at a particular energy, the probability of finding the system in a particular microstate is p = 1/Ω. [Let's see if omega makes it and not my substitute W]

This postulate is necessary because it allows one to conclude that for a system at equilibrium, the thermodynamic state (macrostate) which could result from the largest number of microstates is also the most probable macrostate of the system.

The postulate is justified in part, for classical systems, by Liouville's theorem (Hamiltonian), which shows that if the distribution of system points through accessible phase space is uniform at some time, it remains so at later times.

Similar justification for a discrete system is provided by the mechanism of detailed balance.

This allows for the definition of the information function (in the context of information theory):

I = Σ_i ρ_i ln ρ_i = ⟨ln ρ⟩

When all rhos are equal, I is minimal, which reflects the fact that we have minimal information about the system. When our information is maximal, i.e. one rho is equal to one and the rest to zero (we know what state the system is in), the function is maximal.

This "information function" is the same as the reduced entropic function in thermodynamics . . ."

Note the link to information theory that they bring up.
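The quoted passage is easy to check numerically. In this sketch the distribution sizes are arbitrary; the point is only that I = Σ ρ ln ρ is lowest for a uniform distribution (minimal information) and highest, namely zero, when one microstate is certain:

```python
import math

def info_function(rhos):
    """I = sum_i rho_i * ln(rho_i), with 0 * ln(0) taken as 0."""
    return sum(r * math.log(r) for r in rhos if r > 0)

omega = 4  # arbitrary number of accessible microstates
uniform = [1.0 / omega] * omega   # no preference among microstates
certain = [1.0, 0.0, 0.0, 0.0]    # microstate known exactly

I_uniform = info_function(uniform)  # equals -ln(omega), the minimum
I_certain = info_function(certain)  # equals 0, the maximum
```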

6] The energy context

And by the way, functioning in a context of physical information processing, i.e. work is involved, is in a context of energy. The relevant macrostate is observable physically, and that is also a context of energy. There is information involved, and information storage and processing are in a context of energy. Chemical reactions are involved [but are precisely controlled at atomic/molecular levels -- what we usually can't routinely do in a chem lab, absent our own nanobot technology . . .] and that is a context of energy.

That is, your energy objection falls to the ground.

BTW, the anticipated creation of nanobots will move chemistry out of the dark ages of random mixes in test tubes and process units driven by brute force constraints, up to something beginning to look like the sophisticated technologies in life systems.

Then physicists like me will become interested in chemistry! (Maybe, we will simply take over chemistry as a new branch of applied physics . . .)

--> Of course, I am just kidding you on the usual quarrel between physicists and chemists . . . which seems to be a part of the underlying issue in debate BTW.

7] If we suppose a fitness landscape, and a search routine that tries random increments, rejecting those of [l]ower fitness, inevitably the search will get to a fitness peak.

SNAP goes the Behe mousetrap!

First, as you can see here from Bradley of TBO, the problem is that the fitness landscape for biofunctional molecules is inverted: the proteins etc require sophisticated, sequenced and controlled energy and matter inputs, going energetically uphill. Spontaneous chemistry reliably disassembles such. [Indeed, TBO remarked on the energetic favourability under Miller's conditions to get to some amino acids; though more realistic early earth atmospheres get to the opposite and presto, no amino acids etc. Moving from amino acids to proteins or the like is the problem. And for RNA, you are still stuck at the monomer synthesis level, much less the chains . . . not to mention chirality in a context where the basic chemical energy numbers are the same for D and L as a rule.]

Second, the issue is that the sort of "natural selection" that eliminates lower fitness levels you anticipate is based on the assumption of at minimum a self-replicating molecule. That is, you are at RNA world, and are running up against the points Shapiro so recently highlighted: you can't get to the first level of functionality and coding.

I won't bother to say much on pre-biotic natural selection other than what Dobzhansky warned: a contradiction in terms.

In the NS situation in general, filtering assumes prior synthesis. And the problem is that for FSCI, the levels of complexity, coding and integration [Behe's IC issue!] put you beyond the reach of chance as a creation mechanism for the required body plan level innovations.

In short, as my always linked online article long since has summarised, you can't get TO life and you can't get to body plan level biodiversity. And, all thanks to the isolation of the FSCI in the relevant config spaces.

In short, "chance plus NR" lacks the creative, information-generating capacity required to get to the levels of information we are dealing with. But agency plainly does have that capacity, as is massively empirically supported. That is why I can safely say that I have not ducked the issue; you have.

8] what level of configurational complexity is that?

Let's see: how many times have I pointed to 500 - 1000 bits of information-carrying capacity?

For a unique state . . .

for clusters there is a related calculation, which I will now excerpt from my always linked (and the onward popular-level presentation by Peterson, the original source, which is helpfully clear) -- I get the impression you have never really read it before asserting that I have not addressed this or that or the other point:

________

Dembski has formulated what he calls the "universal probability bound." This is a number beyond which, under any circumstances, the probability of an event occurring is so small that we can say it was not the result of chance, but of design. He calculates this number by multiplying the number of elementary particles in the known universe (10^80) by the maximum number of alterations in the quantum states of matter per second (10^45) by the number of seconds between creation and when the universe undergoes heat death or collapses back on itself (10^25). The universal probability bound thus equals 10^150, and represents all of the possible events that can ever occur in the history of the universe. If an event is less likely than 1 in 10^150, therefore, we are quite justified in saying it did not result from chance but from design. Invoking billions of years of evolution to explain improbable occurrences does not help Darwinism if the odds exceed the universal probability bound.
__________

10^150 ~ 2^500. Hence my 500 - 1000 bits. For DNA, 1 DNA element is 2 bits of capacity, thence we see that 250 - 500 functional sites in a DNA strand are at or beyond the initial level of the bound. Of course real proteins etc are not usually so tightly constrained all along their strands, so we are looking at a cumulative calculation across say the enzymes etc in the DNA-ribosome-enzyme system. [This is the only observed system that is capable of basic life function. It is of course an excellent example of nanobots . . .]
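The arithmetic in the excerpt can be checked directly. A sketch, using only the numbers quoted above:

```python
import math

particles = 10**80   # elementary particles in the known universe (quoted)
rate = 10**45        # max quantum-state transitions per second (quoted)
seconds = 10**25     # seconds of cosmic history (quoted)

bound = particles * rate * seconds   # the universal probability bound, 10**150
bits = math.log2(bound)              # ~498.3, i.e. roughly 2**500

# At 2 bits of capacity per DNA base, ~250 fully specified bases
# are enough to reach the bound.
bases_needed = bits / 2
```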

For such a cluster, if the relative statistical weight of the functional macrostate to the clumped one gives a similar ratio we are back in the same ballpark. For multi-component systems, we are looking at compounding the individual probabilities or statistical weight ratios.

As can be imagined, given the utter complexity of life systems, the Dembski type bound has long since been exceeded in life systems.

The same holds for micro-jets, the actual example I have been discussing as it helps make the whole point far more clear.

In short, you lost your bet because you did not do your homework.

9] "Since mixing always corresponds to an increase in configurational entropy, and mixing involves the dilution or spread of matter in space, we conclude that: Spreading of matter in space is spontaneous as configurational entropy increase."

Diffusional MIXING, of course, is an observed event, and occurs in certain, fairly well known cases, which the NUS students exemplify. The mistake is yours, not the NUS students'.

They are quite correct to point out that the root of mixing is that the scattered microstates correspond to an enhanced probability relative to the concentrated dye drop, and that the resulting spontaneous trend is obvious.

And dS and dG are not magic numbers, but emerge from that underlying process, though of course historically the observations at micro level came later and explained why the magic numbers worked.

10] the entropy of the simple repeating pattern is identical to the entropy of the complex message! I can ask for no more than that. I therefore assume that the entropy of DNA with a simple repeating pattern is identical to that of human DNA?

You are of course first confusing information-carrying capacity with functionally specific, complex information. These relate to contingent situations -- e.g. DNA in the generic case is the chemical equivalent of a carrier wave. It is the "modulation" that stores the functional information, and a complex, aperiodic pattern carries the relevant bio-functional information.

Second, you conflate order and complexity. I have already pointed to Trevors and Abel on that, but even TBO as cited above summarise that OOL researchers realised the key difference a quarter century ago.

A simple repeating pattern is like modulating a carrier wave with a sine function -- the information-carrying capacity is the same, but the content does not use that capacity in any reasonable functional way. The sine is specified and complex in a certain sense [easily corrupted by noise -- a perfect sine is HARD to get physically . . . even in a classic oscillator, which depends on nonlinearities to stabilise amplitude], but non-functional.

We are dealing with functional macrostates, which are just as isolated in the config space as are repeating patterns. Note the triple issue: specified, complex and functional.

So, since a perfect repeating "simple" pattern of say 500 K to 3 bn base pairs is complex in the sense of isolated in config space, it is indeed just as constraining on W as is a life-functional DNA strand. But again, it is non-functional, and it is artificially produced. That is, we see again that complex specified information is most credibly the product of agency.

On "equality" of entropy measures: a specific repeating pattern of DNA strands of 3 bn length, is in fact MORE tightly constrained than human DNA -- which obviously takes in all persons living and dead. But both are again vastly beyond the credible reach of chance plus necessity without agency.

To see why, work out the numbers on the config space of a "Simple" cell with 500 k base pairs in its DNA: 4^500k ~ 9.90 *10^301,029.

There are "only" about 10^80 atoms in the known observed universe; the number is way beyond being merely astronomical. Try to search that space, using only chance and natural regularities of chemistry and get credibly to the DNA-Ribosome-Enzyme mechanism, going energetically uphill all the way beyond some basic monomers.
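The 4^500k figure can be verified in logarithms. A sketch (the 500,000 base pair genome size is the assumption stated in the text above):

```python
import math

bases = 500_000                         # assumed DNA length in base pairs
log10_configs = bases * math.log10(4)   # log10 of 4**500000

exponent = int(log10_configs)                 # 301,029
mantissa = 10 ** (log10_configs - exponent)   # ~9.90

# So 4**500000 ~ 9.90 * 10**301,029, as stated above.
```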

11] The only time you can have a single micro-state is when there is no energy present, and entropy is therefore zero, i.e., at absolute zero (which cannot be achieved anyway).

I am speaking on clustered configurations, not on overall energy. Poor phrasing on my part.

Okay, it seems there was a repetition.

GEM of TKI

The Pixie said...

Gordon, quick reply to one point for the moment.
7] Pix: If we suppose a fitness landscape, and a search routine that tries random increments, rejecting those of [l]ower fitness, inevitably the search will get to a fitness peak.
First, as you can see here from Bradley of TBO, the problem is that the fitness landscape for biofunctional molecules is inverted: the proteins etc require sophisticated, sequenced and controlled energy and matter inputs, gong energetically uphill.
First, please note that I was talking about an abstract process, a search algorithm that might be conducted on, say, a computer (if you like, we could link the computer to your nanobots, and the fitness is then the fitness of the nanobots). Sure, it will take a lot of iterations to get to a nano-plane, but it is possible, if the fitness landscape is suitable (for nano-planes it may not be, but that hardly refutes evolution). You might be confusing the fitness landscape with the energy landscape.
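The abstract search described above can be sketched in a few lines. The fitness function below is a made-up single-peak landscape chosen purely for illustration; the point is only the accept/reject rule (keep random increments unless fitness drops):

```python
import random

def hill_climb(fitness, x0, steps=10_000, scale=0.1, seed=1):
    """Random-increment search: propose a small random step,
    keep it only if fitness does not decrease."""
    rng = random.Random(seed)
    x, f = x0, fitness(x0)
    for _ in range(steps):
        cand = x + rng.uniform(-scale, scale)
        fc = fitness(cand)
        if fc >= f:          # reject lower-fitness moves
            x, f = cand, fc
    return x, f

# A toy landscape with a single peak at x = 3 (an assumption).
peak = 3.0
landscape = lambda x: -(x - peak) ** 2
x, fx = hill_climb(landscape, x0=0.0)
# The search climbs from x = 0 toward the peak near x = 3.
```

On a rugged or "inverted" landscape the same routine stalls on whatever local peak it first reaches, which is where the two sides of this exchange disagree.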

Second, this is Darwinian evolution, rather than abiogenesis. Abiogenesis might involve such a search, I do not know. I guess we will have to wait to see what evidence the OOL researchers can produce of abiogenesis, and compare that to the evidence for an intelligence existing at that time with the capability to do the deed.

In short, as my always linked online article long since has summarised, you can't get TO life and you cant get to body plan level biodiversity. And, all thanks to the isolation of the FSCI in the relevant config spaces.
I found nothing in your last post about getting body plan level biodiversity, and nothing convincing in your linked page. From there, Behe's IC is a very dubious concept that he has been forced to redefine from "By irreducibly complex I mean a single system composed of several well-matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning" (which you quote) to "An irreducibly complex evolutionary pathway is one that contains one or more unselected steps (that is, one or more necessary-but-unselected mutations). The degree of irreducible complexity is the number of unselected steps in the pathway." That is a huge difference. The first is easy to prove (my car has an IC core; take certain bits out and it stops working), but it is very difficult to prove an IC system could not evolve. By the second meaning an IC system cannot evolve by definition! But that leaves you with a big problem showing something is IC in the first place. We have to hope that no one ever uses one definition to prove a system IC, then switches to the other definition to "prove" it could not evolve... Or that anyone tries the old "shift the burden of proof" trick - declare a system is IC, then require the evolutionists to prove otherwise.

Gordon said...

Hi Pixie:

I am having that strange cut paste problem again, don't know why . . . even affects Yahoo mail so I can't cut-paste from there.

Let's try anyways:

1] Fitness landscapes

The problem there is the required complexity and vulnerability to perturbation of the required information.

In short, unless you get it pretty much right, software of that complexity does not work. There is a reason Bill hires a lot of smart programmers. (The information is energetically metastable relative to perturbations. That is in part why life systems have in them DNA repair subsystems -- active protection.)

2] Darwinian Evo vs OOL:

BOTH run into the same information-generation hurdle as my discussion of the Cambrian revolution in the always linked summarises and excerpts.

3] Body-plan level biodiversity

As the linked cites, when we look at the increment in the genome to go to say an arthropod, and to the associated cell types etc, we are looking at the same scope of beyond-the-Dembski-bound hurdle.

I know that evo mat thinkers shrug and say it happened anyway, but there I find they are going far beyond what is credible to come about based on chance shifts in genomes, or in plausible steps.

McDonald's comment (in my always linked) is especially telling, as excerpted and summarised by Meyer in that famous peer-reviewed paper that passed proper peer review by "renowned scientists":

". . . genes that are observed to vary within natural populations do not lead to major adaptive changes, while genes that could cause major changes--the very stuff of macroevolution--apparently do not vary. In other words, mutations of the kind that macroevolution doesn't need (namely, viable genetic mutations in DNA expressed late in development) do occur, but those that it does need (namely, beneficial body plan mutations expressed early in development) apparently don't occur.6 "

4] Beating up on Behe's IC concept

A bit tangential to topic, but I will comment for the sake of onlookers, at minimum.

It is also related to the information generation issue that the thermodynamics of config spaces points to.

Have you ever had to design a serious engineered system based on several interacting parts, the absence of any one of which causes functionality to collapse?

I have, and equally, I have routinely had to troubleshoot and repair such things. We rely on the principle that just one failure-point is enough. [And in development, a major complication is multiple failure points, which can interact.]

Behe's point is quite familiar and very well defined indeed to those who have had to deal with engineering. You acknowledge this on your car, since you know the issue of getting the spare parts.

Indeed, too, the mousetrap as designed and examined by him exhibits this familiar pattern.

Also, if you have ever looked at Rube Goldberg's cartoons, you will know that co-opting already existing parts is a very tricky process and does not subvert the underlying point: that there is a core in many systems where several coupled parts must all work together or the system fails.

This is part of why I do not take the rhetoric that seeks to beat up on Behe seriously.

Next, in fact, his cases have stood up to serious scrutiny, and in particular the flagellum stands unscathed, Miller's premature announcements notwithstanding, as I noted.

Turns out, the TTSS uses a subset of the flagellar genes to implement the venom injector; the rest of the genes for the flagellum are there in Y pestis, but are not used. Cf Scott Minnich's empirical, lab based, peer-reviewed research in the context of the debate. (This was presented at a conference in Greece that was looking for reverse engineering nature's nanomachines . . . pant, pant, pant . . .) That the plans for System A also have in them, as a proper subset, the plans for System B COMPLICATES the origin of information issue. This is integrated code reuse, Pixie. A very high art in microcontroller design that I have never even attempted! [Thank God memory has been cheap enough to not have to reuse code in my life as a designer!]

5] Definitionitis

As to the attempt to cast one look at defining IC against another: in fact they have always been two sides of the same coin; observe how ISCID puts three looks on the same page!

Why I say that:

a] precisely because NDT-style evo is blind and only rewards present functionality, intermediates that are non-functional will be selected against. (These are the "unselected steps" in the metric on the "degree of IC." In short, Behe is here moving to a measure . . .)

b] Then, the mutations required at each step in the evolution have to be relatively small [relative to the 250 base pair limit; of course, most observed muts are single point -- and information-destroying; antibiotic resistance being a classic case in point] for all sorts of reasons to do with islands of functionality and the metastability of information discussed above.

c] The resulting hurdle against getting the cluster of mutations and co-optings etc to get to a complex functioning entity then goes the same way as the general FSCI issue: exhausting the credible probabilistic resources.

d] Notice again the soft-form rejection of a hyp: what is in the raw logically and physically possible becomes so improbable relative to available resources that we can rule it out as utterly unlikely relative to chance mutations plus differential reproductive success.

6] "It is very difficult to prove that an IC system could not evolve . . ."

Selective hyperskepticism, I am afraid. [Similar to the reservation in Darwin's "could not possibly have evolved" line . . .]

In the world of observed facts, we do not ever get to demonstrative proofs, so to shift to such a standard for a particular case is the Clifford-Sagan mistake of claiming "'extraordinary' claims need 'extraordinary' evidence."

So, let's ask: what makes such a claim 'extraordinary' and thus requiring 'extraordinary' evidence? ANS: in praxis -- it cuts across my favoured position. AKA, opening the door to that ever popular resort of skepticism, the fallacy of the closed mind.

Soo . . .

Nope, empirical claims need only ADEQUATE -- non-worldviews-level question-begging -- evidence for accepting fact claims as credible. [Cf my linked for why.] While of course anything that is logically and physically possible is -- possible -- the fact is, we routinely infer on a best-explanation basis in ways that rule out many, many such possibilities.

For instance, consider Russell's five-minute-world hypothesis: what if the world was created in an instant five minutes ago, complete with our memories of a past that did not actually happen, and apparent artifacts etc.?

Such a world is possible, and is empirically indistinguishable from the world we think we live in. So, why do we reject this and infer that the world is as we think it is? [See my discussion.]

In short, there is more to Behe than meets the eye, especially the eye prejudiced by the usual biased and sometimes uncivil reporting on Behe.

Okay, trust that we can deal with a summary of where this has now gone.

GEM of TKI

The Pixie said...

Gordon, I am looking to make a final post soon and bow out. Before I do, I am curious how you respond to this argument, so I can better represent your position.

The point, I think, of the nanobot analogy is to prove the validity of configurational entropy, as used by Thaxton. I suggested a thought experiment in which there are clumping nanobots, and two types of rearranging nanobots, both reading the configuration and deciding what needs to be done, but one then scrambles that information, and produces a non-functional assembly. What is the difference in energy? You seem to agree with me that there is no difference: "Both may expend a similar number of joules in aggregate" Why should we suppose the entropy is any different?

So when we look at how these things were made, the thermodynamics (the movement of energy) is the same! It would seem that the manufacturing process is identical in both cases - so we can safely ignore it. The whole nanobot thing would seem to be irrelevant, all we are really concerned with is what they spit out at the end.

I have used DNA sequence synthesis (a technology we have here today, in contrast to the nanobot idea) in a thought experiment of my own. I believe that the energy requirements to synthesize a specific DNA of a certain length are identical whether the DNA is functional or not. From the above, it seems you may agree with me that the thermodynamics of the DNA synthesis is identical. The only difference is in the DNA sequence produced.

But wait. You said:
"The first is non-contingent, so it has but one configuration, W = 1, so k ln W = 0, its configurational entropy [assuming for the sake of argument we are dealing with molecular scale chemical coding] is zero...
In the third case, due to the tight specification, at molecular level there would be just one config again. Sconfig = k ln 1 = 0. A functionally specified [here grammatically functionally specified . .] unique configuration has Sconfig = 0
"

Both the functional and non-functional DNA sequences were specified, so the Sconfig is zero for both.

Sure, the probability of the non-functional nano-assembly flying, or of the non-functional DNA having a random function, is extremely small, but that is not a second law issue.

Gordon said...

Pixie:

There are obviously several points that need to be clarified:

1] The point, I think, of the nanobot analogy is to prove the validity of configurational entropy, as used by Thaxton

NOT AT ALL.

Configurational entropy, as long since shown in multiple ways, is a well-established concept in thermodynamics in general and OOL research in particular. I only set out to illustrate how it applies in the sort of setting that TBO use.

I have plainly succeeded in that exercise, as judged by for instance the strained objection to a senior paper by professorially supervised students at NUS -- and this on a common issue, diffusion. Observe again the utility of locational cells in specifying microstates and macrostates.

2] I suggested a thought experiment in which there are clumping nanobots, and two types of rearranging nanobots, both reading the configuration and deciding what needs to be done, but one then scrambles that information, and produces a non-functional assembly. What is the difference in energy?

As I have repeatedly highlighted, energy and WORK are obviously different. The former is in effect potential to achieve the latter, and is in general measured by (partially as a rule) converting it into the latter. Work, by definition [and in the context of its utility for human society as well as science and engineering] is achieved when forces move their points of application in orderly fashions.

That brings in a quantitative measure -- so many joules. It also subtly implies a qualitative observation: work can be specified, and so end in different outcomes. That means that a similar number of joules may be expended in two pieces of work, yet end in very different results.

In the cases in view, the work is specified by two different programs, one targeting a functional state -- a flyable micro-jet. The other targets a certain specific but otherwise unconstrained config-at-random. That means that the former ends in an observable, identifiable macrostate with a relatively small number of possible configurations, thus a low entropy.

The latter, ends up in some arbitrary, and by overwhelming probability non-functional configuration that cannot be identified through other than microscopic observation or disassembling the object program. While in principle specific, it is in practice simply an at random clump, the randomness being moved up one level in the software.

Now, macrostates are, by definition, macroscopically observable from the system itself. So, while a similar number of joules has been expended [and one could in principle work out the number of joules involved in molecular collisions in a solution to achieve a precipitation reaction . . .] the resulting macrostates are utterly different. One is a flyable jet, the other an at-random clump that happens to have been specified by the bowels of an object program that we can disassemble -- i.e. by microscopic inspection.

What then is the result? We can compare the entropy of the scattered, clumped and configured MACRO-states, and it is plain that the entropy decrements sharply at each stage. If you will, and can, identify the at-random target MICROstate, you can call it a "macrostate" and work out an entropy number for such a unique config, but that does not evade the point: this is simply one arbitrary config from the clumped macrostate.

In short, the objection fails, but for subtle reasons. This comes out in . . .

3] when we look at how these things were made, the thermodynamics (the movement of energy) is the same! . . . the manufacturing process is identical in both cases - so we can safely ignore it.

Not at all. The WORK is different in its specification, and in the MACROstate that it ends up in. As just summarised.

4] I believe that the energy requirements to synthesise a specific DNA of a certain length are identical whether the DNA is functional or not.

Again, note the difference between energy flow and work. Also, as already noted, you end up in two very different MACROstates, one being bio-functionally specified [which is the material issue] and thus observable in the large; the other being arbitrary and only observable in the small. Any specific state will of course be uniquely specified, and if the DNA chain is long enough it will be complex.

Further to this, observe what has happened: to get to that specific config, your imaginary lab has conducted an information-based, programmed exercise. This is evidence that intelligent agency can in principle [and sometimes in praxis] get to specific DNA chains, but it is not good evidence that at-random configurations in some pre-biotic soup [dropping for the moment issues over chirality and synthesis of certain monomers] will have any reasonable probability of achieving the same result. In TBO's terms, improper investigator interference.

Similarly, this is not evidence that in existing life-forms, body-plan level mutations at random will achieve novel and functional body plans.

5] you may agree with me that the thermodynamics of the DNA synthesis is identical.

Again, no. You are conflating several distinct concepts. Mere quantity of energy in Joules is not the same as work ending up in identifiable -- macroscopically observable -- distinct MACRO-states.

All you have done in the jet case is to change where the random behaviour occurs, from in effect the molecular noise level to a shuffling routine in some program. In the DNA chain case, you have substituted the intelligent work of a lab for the random process, to get to an arbitrary specified sequence. In neither case have you shown that configurational entropy is an improper concept.

6] Sure, the probability of the non-functional nano-assembly flying, or of the non-functional DNA having a function by chance, is extremely small, but that is not a second law issue.

That is just the point: they belong to the in-effect clumped-at-random, or at any rate arbitrary, state. By contrast the functional states are MACROSCOPICALLY distinguishable, and directly so -- they work as advertised, as a flyable jet or as functioning DNA in vivo. No detailed micro-level inspection is required to see that. [Cf. the already identified difference between complexity and specified complexity, then my "refinement": FUNCTIONALLY specified complexity.]

I take it that identifying MACROstates and estimating their statistical weights are highly relevant to the Boltzmann case on entropy: s = k ln w, the w being the statistical weight of a macrostate. By chaining sequential states, I have shown that scattered to clumped to configured-and-flyable gives a progressive collapse in the value of S, at the expense of the nanobots' work to search out, capture, move and clump, then configure the parts. This is compatible with Brillouin's point that such a process will in the end expend energy in such a way that the net entropy of the vat increases.
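For concreteness, the progressive collapse of s across the three macrostates can be sketched numerically. This is a minimal illustration only: the W values below are made-up statistical weights, chosen simply to show the direction of the effect, not real microstate counts for any vat.

```python
import math

k_B = 1.380649e-23  # Boltzmann's constant, J/K

def config_entropy(W):
    """Configurational entropy s = k ln w for a macrostate of statistical weight W."""
    return k_B * math.log(W)

# Illustrative (made-up) statistical weights for the three macrostates:
macrostates = [
    ("scattered",  1e60),  # parts dispersed anywhere in the vat
    ("clumped",    1e30),  # parts together, but in any arrangement
    ("configured", 1.0),   # the one (or very few) flyable configuration(s)
]

for name, W in macrostates:
    print(f"{name:10s} W = {W:8.1e}  S_config = {config_entropy(W):.3e} J/K")
```

As W collapses from scattered to clumped to configured, s falls, reaching zero for a unique configuration (k ln 1 = 0).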

All of this is highly and deeply relevant to the force of the 2nd LOT.

It also shows how it takes a lot of smarts to target a functional state and get to it; both from my thought expt and as observed in the complexity of the functional molecular machines of life at the cellular level. That is the force of the evidence we have, and it sits well with my point way back: when the energy converters that do a bit of work exhibit FSCI, the issue then is where they came from, in light of the sparseness of functional configurations in the overall config space of such component parts taken at random.

Further, one may use such nanobots or the equivalent to target an arbitrarily specified, and/or at-random, MICROstate. One may then point out that we have achieved a unique state -- but it is a microstate, not a macrostate. (Similarly, a "snapshot" at any moment in time of a vat full of Brownian-motion-fluctuating component parts for jets, cars, subs etc., if we could take it, would be highly complex and in a unique state -- one that, if we were to make a movie, we could assign to the water [etc.] molecules acting as nanobots to change one initial state into another complex one. But the resulting state will in general not be simply describable or functionally specified, apart from being an arbitrary or random state. We are looking at imaginary snapshots of microstates, not at MACROstates, here.)

In short, you have raised an irrelevancy: in effect, as you just admitted, the targeted MICROstate is one of a very large number in the clumped-at-random [which is tantamount to arbitrary -- any one is as good as any other, cf. Robertson's comments] macrostate: clumped, but any one config is as good as any other, for practical purposes.

And, if you were to instead, in the DNA case, specify a state where ". . . GCAT . . ." would repeat itself over and over along the chain, all you would do is specify a unique state that happens to have a short specification, i.e. is specific, and by definition low-entropy. That would be in accordance with s = k ln w with w = 1. The configurational component of entropy would be small, in fact 0, but that would be under the constraint that such was intelligently achieved -- which is part of my point.

[Notice too the implied scale of the challenge to start from an arbitrary scattered state and move to a FUNCTIONALLY specified, configured one. In that case the config entropy would indeed be larger than in the GCAT case [which is unique], but there is the further constraint -- functionality. If it takes smarts to reliably get to GCAT, what will it take to get to such a functional state within the gamut of the observable cosmos?]
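The GCAT arithmetic is easy to check. A sketch, assuming the usual 4-letter alphabet, treating each distinct chain as one config and ignoring the chemistry; the chain length N is chosen purely for illustration:

```python
import math

N = 300                      # illustrative chain length
W_any_chain = 4 ** N         # statistical weight of "some chain of length N"
W_specified = 1              # a fully specified chain, e.g. GCAT repeated

# Configurational entropy in units of k (i.e. ln W):
S_any = math.log(W_any_chain)    # = N * ln 4, large for complex chains
S_gcat = math.log(W_specified)   # = ln 1 = 0

print(S_any, S_gcat)
```

The unique specified chain sits at ln 1 = 0, while "any chain of length N" carries N ln 4 (about 416 in units of k for N = 300); the functional macrostate, with more than one but still relatively few members, would fall between these extremes.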

In short, TBO's point stands, and so does Mr Sewell's -- not to mention Mr Dembski's.

GEM of TKI.

Gordon said...

As a footnote: you may find this discussion by Dembski on specification to be helpful. You will see many connexions to our own discussion on macrostates and their statistical weight, i.e. the sets of relevant microstates.

[Notice my point on complexity, without getting into technical details. In effect, the description of an arbitrary microstate is highly complex, requiring in effect listing out the specific parts and their arrangement. But a specification in the sense we are using has in effect a low complexity. For instance, functional DNA has an externally simple specification -- "it works in some life form." GCAT has a simple specification: "repeat GCAT." An arbitrary DNA chain has a high complexity -- you have to in effect list out the parts one by one. Similarly, a flyable micro-jet is complex [long configured parts list] but is simultaneously simply describable/observable -- it flies. The arbitrary clumped state has only the complexity. It is the COMBINATION of complexity and specification, especially functional specification, that is relevant to the design inference.]
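The contrast between a short specification and a part-by-part listing can be illustrated with ordinary compression, as a rough stand-in for description length (a sketch only; zlib is not a rigorous complexity measure, and the sequence lengths here are arbitrary):

```python
import random
import zlib

random.seed(1)
n = 4000

# "Repeat GCAT": a short description regenerates the whole chain.
repeating = ("GCAT" * (n // 4)).encode()

# An arbitrary chain: it must in effect be listed out base by base.
arbitrary = "".join(random.choice("GCAT") for _ in range(n)).encode()

print(len(zlib.compress(repeating)))  # compresses to a few dozen bytes
print(len(zlib.compress(arbitrary)))  # stays close to its information content
```

The repeating chain compresses to a tiny fraction of its length, while the arbitrary chain resists compression; functional DNA would be long like the arbitrary chain yet also simply describable from the outside ("it works"), which is the combination at issue.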

Also, WD's note on searching large spaces is relevant.

GEM of TKI

The Pixie said...

Hi Gordon

I disagree with your comments on IC systems, but do not want to get bogged down in a discussion on that too. Still not sure I understand what you are claiming for entropy...

That brings in a quantitative measure -- so many joules. It also subtly implies a qualitative observation: work can be specified, and so end up in different outcomes. That means that a similar number of Joules may be expended in two pieces of work, but end up at very different results.
You seem to be using "work" in a non-thermodynamic sense (or at least a sense that I would not), but I think I see the difference. You use "work" to be more like "task". A number of tasks can use the same energy, but achieve a different result.
In the cases in view, ... up one level in the software.
Agreed.
Now, macrostates are, by definition, macroscopically observable from the system itself. So, while a similar number of joules has been expended [and one could in principle work out the number of joules involved in molecular collisions in a solution to achieve a precipitation reaction . . .] the resulting macrostates are utterly different. One is a flyable jet, the other an at-random clump that happens to have been specified by the bowels of an object program that we can disassemble -- i.e. by microscopic inspection.
But your labelling of the different results as "macro-states" does not by itself prove they are macrostates in the thermodynamic sense.

The at-random assembly is specified, as you note, and is observably different (we could see that the assemblies are in this configuration with an electron microscope). What we have is two vats producing specified assemblies, both of which can be identified with an electron microscope. Yes, the nano-plane can be identified in another way, but not in this system, as the system is under low pressure; in this system neither the at-random assembly nor the nano-plane can fly. Both are specified, both are observable, neither is functional in this system.

So your objection seems to be that the specification for the at-random assembly can only be found by microscopic inspection of the nanobots, while the specification for the nanoplanes is on a blueprint somewhere. True, but ...

(1) I have never heard of that as an issue in the calculation of entropy. There is no part of the Boltzmann maths that takes account of where the specification is (or is there? If so, can you produce the maths?). Certainly in classical thermodynamics, the location or nature of the specification is absent from dS = dQ/T.

(2) I lied about the nanobots; they were not faulty at all. They were building assemblies to a blueprint I have in my hand. The message was not scrambled, but was carefully modified, to produce a very specific assembly that, by design, is non-functional. Now the entropy of the non-functional assemblies is the same as that of the nanoplanes; both assemblies are prescribed by blueprints we can inspect.

Alternatively, we must assume that the assemblies "know" whether or not they will become functional in the future.

What then is the result? We can compare the entropy of the scattered, clumped and configured MACRO-states, and it is plain that the entropy decrements sharply at each stage.
Clearly it is not plain (for the clumped to configured bit), as we have been arguing this point for many days.

If you will, and can identify the at-random target MICROstate, you can call it a "macrostate" and work out an entropy number for such a unique config, but that does not evade the point: this is simply one arbitrary config from the clumped macrostate.
Ah, but now it is not arbitrary. I have now revealed that this was according to a blue print I have right here.

Not at all. The WORK is different in its specification
The task done is different, but the energy movement, as far as I can see, is identical. This suggests to me that the thermodynamics is the same.

I appreciate I may not be getting your point, and I think the problem is that I do not understand how you are using the word "work". Can you explain the difference in the movement in energy in these two vats? Can you either give a very exact definition of what you mean by "work" or perhaps explain the difference in the vats without using the word at all?

Again, note the difference between energy flow and work. Also, as already noted, you end up in two very different MACROstates, one being bio-functionally specified [which is the material issue] and thus observable in the large; the other being arbitrary and only observable in the small.
Right, so it sounds as though we might agree that the energy movement is the same, and what is happening is that we have set the DNA sequencer to do different tasks. One task is to produce a certain functional DNA, the other task is to produce a certain non-functional DNA. Please note that in both cases the specification was written down, then typed into the machine. We have a blueprint both times, a blueprint we can inspect, and compare against the macrostate.

Also note that the DNA sequences that come out are not in cells. Neither of them is functional in the current context. As you note, they will start to fall apart; it is quite possible that neither sequence will ever be in a context in which it is functional.

So now the difference is whether the DNA sequence is in theory biofunctional, rather than whether it will ever be used biologically. So I have to assume that the DNA sequence is low entropy if it is functional in an alien organism. If you do not believe in life on other planets, then that does not matter, as long as the DNA sequence is functional in some hypothetical organism (it is not as if we could test it in alien organisms anyway).

Further to this, observe what has happened: to get to that specific config, your imaginary lab has ...
Ah, so when it is my thought experiment, involving technology we have available right now, it is my "imaginary lab". And when it is your thought experiment, involving entities that people working in the field distance themselves from, this is a "more familiar setting".

Disappointing that you feel the need to ridicule my thought experiment in this way, when it would seem rather more feasible than your own.

Further to this, observe what has happened: to get to that specific config, your imaginary lab has conducted an information-based, programmed exercise. This is evidence that intelligent agency can in principle [and sometimes praxis]get to specific DNA chains, but it is not good evidence that at-random configurations in some pre-biotic soup [dropping for the moment issues over chirality and synthesis of certain monomers] will have any reasonable probability of achieving the same result. In TBO's terms, improper investigator interference.
I am not suggesting that it is good evidence that at-random configurations in some pre-biotic soup will have any reasonable probability of achieving the same result. What I am saying is that there will be no discernible difference in entropy between the two DNA sequences.

Similarly, this is not evidence that in existing life-forms, body-plan level mutations at random will achieve novel and functional body plans.
Again, I am not suggesting that. Please do not put words in my mouth.

All you have done in the jet case is to change where the random behaviour occurs, from in effect the molecular noise level to a shuffling routine in some program. In the DNA chain case, you have substituted the intelligent work of a lab for the random process, to get to an arbitrary specified sequence. In neither case have you shown that configurational entropy is an improper concept.
At the moment I am still trying to understand the concept as you use it.

By contrast the functional states are MACROSCOPICALLY distinguishable and directly so -- they work as advertised as a flyable jet, or as a functioning DNA in vivo.
So the DNA sequence must somehow "know" that when it is put into a cell it will be able to do something! The nano-assembly must somehow "know" that when the air pressure is high enough it will be able to fly.

No detailed micro-level inspection is required to see that.
The DNA sequencer produces two vials of DNA. Which is the functional one? The only way to tell is to put them in cells... and let the cells do micro-level inspection of each sequence.

It also shows how it takes a lot of smarts to target a functional state, and get to it; both from my thought expt and as observed in the complexity of the functional molecular machines of life at cellular level.
I have never observed that the complexity of the functional molecular machines of life at cellular level takes a lot of smarts to achieve, actually. To be honest, I doubt you have.

I also reject the idea that there is such a target in biology.

And, if you were to instead in the DNA case say specify a state where ". . . GCAT . . ." would repeat itself over and over along the chain, all you would do is to specify a unique state that happens to have a short specification, i.e is specific, and by definition low-entropy. That would be in accordance with s = k ln w with w = 1. The configurational component of entropy would be small, in fact 0, but that will be under the constraint that such was intelligently achieved -- which is a part of my point.
So we agree that the simple repeating-pattern DNA sequence is low entropy, even though it is non-functional.

This would seem to be the breaking point of your argument. You spend most of your post emphasising that the big difference here is whether the thing is functional:
* "In the cases in view, the work is specified by two different programs, one targeting a functional state -- a flyable micro-jet. The other targets a certain specific but otherwise unconstrained config-at-random. That means that the former ends in an observable, identifiable macrostate with a relatively small number of possible configurations, thus a low entropy."
* "Also, as already noted, you end up in two very different MACROstates, one being bio-functionally specified [which is the material issue] and thus observable in the large; the other being arbitrary and only observable in the small."
* "That is just the point: they belong to the in-effect clumped-at-random, or at any rate arbitrary, state. By contrast the functional states are MACROSCOPICALLY distinguishable, and directly so -- they work as advertised, as a flyable jet or as functioning DNA in vivo."

And yet you clearly say that the entropy of the DNA strands is the same, even though one is functional and one is not.

Gordon said...

Pixie:

It seems we have gone a long way, but right back to the beginning.

I will note on points, basically for the record:

1] I disagree with your comments on IC systems

I simply cited and linked the proper definitions and gave a summary of how they are connected, coming from the principals involved. (Methinks that it is they who should know what they mean, not those who twist -- in your case I believe inadvertently -- the statements into strawman caricatures.)

The key relevant point is that non-functional parts, or parts fulfilling a different function, cannot be selected for the relevant novel function until it all comes together at once.

That requires considerable innovation and/or adaptation all at once, well beyond the credible reach of random variation and natural selection within the relevant ambit of a few thousand million years on earth, and especially the 5 to 10 million year window of the Cambrian life revolution.

Further to this, the IC concept is in fact deeply embedded in the lab practice of molecular biology: knockout studies are premised on the idea that once a part is knocked out one sees the effect in the loss of function.

2] You seem to be using "work" in a non-thermodynamic sense . . . You use "work" to be more like "task".

I am simply pointing out that work as used in PHYSICS, has subtleties embedded in its meaning, which are thermodynamically relevant, and are decisive for the discussion we have.

Namely, as I learned long ago and have taught many a class: work is done when forces move their points of application along their lines of action. [NB: This includes when the force retards, as with braking forces.]

The QUANTITY of work is the product [a vector dot product, actually] of the magnitude of the force and the distance moved along the line of action. But that is not the same as dismissing the quality involved: work takes its meaning and relevance from the fact that humans [and other creatures . . .] are involved in tasks, as you say, which impart specific configurations to objects by doing targeted work. (That is why it is a rather special quantity used in physics.)

The work done to impart one configuration is not at all the same as that done to impart another. Indeed, we should note that the fact that vectors are involved should tell us that: forces and displacements are different in both magnitude and direction, which through the vector dot product, affects even the quantity of work done. There is more to it than the number of joules involved.
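The vector point can be made concrete with a minimal sketch of work as the dot product W = F · d, showing that the same force magnitude and the same path length can yield quite different work (the numbers are purely illustrative):

```python
def work_done(force, displacement):
    """Work as the vector dot product F . d: sum of component-wise products.
    For forces in newtons and displacements in metres, the result is in joules."""
    return sum(f * d for f, d in zip(force, displacement))

F = (10.0, 0.0, 0.0)        # 10 N along x
d_along  = (2.0, 0.0, 0.0)  # 2 m along the line of action
d_across = (0.0, 2.0, 0.0)  # 2 m perpendicular to it

print(work_done(F, d_along))   # 20.0 J
print(work_done(F, d_across))  # 0.0 J
```

Same force, same distance moved, but the work done (and hence the configuration imparted) differs; this is the sense in which a joule count alone under-describes the work.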

3] But your labelling of the different results [e.g. of clumping micro jet parts at random vs to a flyable config] as "macro-states" does not by itself prove they are macrostates in the thermodynamic sense.

Macrostate is short for macroscopically observable state, whilst microstate is short for a specific individual config [usually at molecular or smaller scale] compatible with the relevant macrostate. As the diffusion example shows, a concentrated drop is one macrostate; the state with the molecules or particles of ink dispersed is a different macrostate, based in effect on the accessibility of locational cells. In short, there is a relevant entity, configurational entropy, based on counts of the statistical weights of the relevant macrostates: s = k ln w, w being the microstate count of the relevant macrostate.
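The diffusion example admits a toy microstate count via locational cells. A sketch, assuming n indistinguishable particles over M cells with at most one particle per cell; the cell counts are purely illustrative:

```python
from math import comb, log

n = 10               # ink particles
M_drop = 20          # cells accessible in the concentrated drop
M_vat = 10_000       # cells accessible once dispersed through the vat

W_drop = comb(M_drop, n)   # ways to arrange the particles within the drop
W_vat = comb(M_vat, n)     # ways to arrange them across the whole vat

# Entropy change on dispersal, in units of k:
delta_S = log(W_vat) - log(W_drop)
print(delta_S)  # positive: dispersal raises configurational entropy
```

Opening up more locational cells multiplies the number of compatible microstates, so the dispersed macrostate has the higher w and thus the higher configurational entropy, which is why the drop spreads and does not spontaneously reconcentrate.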

Applying that to the case of a micro-jet: if particles as in the given thought expt at the head of the thread [as copied from the earlier post at UD] are decanted into a vat, they will disperse through diffusion as they undergo Brownian motion [i.e. act as giant molecules]. Thus, we see config space in action. They will not spontaneously reassemble, by overwhelming probability.

Clumping nanobots can do a work of clumping that changes the macrostate to a clumped one.

In so doing the work sharply reduces the entropy of the micro-jet component of the vat as a relevant thermodynamic system. [Of course at the expense of offsetting increases in entropy in the rest of the vat etc.]

Configuring nanobots can then configure to yet another observable macrostate, the flyable jet. As the clumping and configuring happen, the number of accessible microstates compatible with the observed macrostates falls, so that the entropy of the micro-jet falls through the application of intelligent action.

The same pattern of thought is thermodynamically relevant and extends to protein synthesis and DNA synthesis, especially under the OOL conditions as discussed by TBO in TMLO and subsequently by Bradley on the case of Cytochrome C.

All of this is very familiar from experience, and it is the sort of thing that Mr Sewell spoke to in the excerpts above.

My labelling is extremely consistent with the concept of what a macrostate is and what associated microstates are.

4] Both are specified, both are observable, neither are functional in this system [a vat without enough atmosphere to see the microjet fly/fail to fly].

Of course, I have particularly emphasised functionally specified complex informational systems, which are directly observable and so the macrostate is quite easy to identify. Your attempt here is to remove the issue of functionality so that any particular microstate is now just the same as any other -- the macrostates are no longer relevant.

Immediately, that implies the relevance and cogency of what I have again just now summarised. That is, the collapse of w in going from scattered to clumped at random to flyable configured states is implicitly acknowledged to identify three distinct macrostates, with the entropy falling with W as we go towards the configured state. That is, strictly, all that is needed for TBO's point to carry.

They are like me, dealing with observable macrostates. And such observability is implicit or explicit in the very definition of W, the statistical weight of a given macrostate: how many configs at micro level are consistent with the observed macrostate?

The rest of their argument follows.

Now, too, when we say "micro" we are looking at the ultramicroscopic level. Your electron microscope observability, in general, is still "macro" relative to that, though of course one cannot identify, say, flyability by looking at a cluster of micro-jet parts through a microscope; there is a functional test involved that is sensitive to things not accessible to such inspection. In short, flyability is not observable through a microscope, so the proposed state test fails.

And, unless one can identify a macrostate observably, one cannot count associated microstates -- apart from being reduced to Robertson's at-random state in which one config is as good as another, so one has to treat the system energetically as if it were behaving at random. That is, the microscope can tell we are in the clumped state [more or less . . . missed parts . . .] but it cannot identify the flyable state.

Your counter expt fails.

5] There is no part of the Boltzmann maths that takes account of where the specification is

The Boltzmann math depends on the recognisability of a given macrostate. That issue has just been addressed again. To wit, what happens if the observed clustered state has a few misplaced parts or missing parts that are not obvious to the microscope but are very visible if we try to fly the jet?

The objection fails.

6] The message was not scrambled, but was carefully modified, to produce a very specific assembly that, by design, is non-functional.

Again, your proposed state is inscrutable relative to function/non-function, so it is part of the observable at-random clumped state. That you have a blueprint [or equivalently a part-by-part list in the software] is not relevant to functionality, or to whether the blueprint and the clumped state correspond.

And, at most -- note here the further point, that a "blueprint state" may be observable under certain circumstances -- if your state is in fact observable, it would then be "functional" in the sense of observably corresponding to a predefined state.

It would also in that case be observable and so identified as a macrostate accessed by intelligent action. That goes right back to my point and before me TBO and Sewell.

7] Can you explain the difference in the movement in energy in these two vats?

Again, note the configurational implications of the vectors involved in work.

Vectors differ in both magnitude and direction, so the cumulative effect of work done that targets different configs is different. The quantity of energy expended to achieve the work will be comparable, but the actual work achieved will be different, just as the quantity of work to gather house parts at a site is comparable [quantitatively it can be more, equal or less, but it is measured in the same units, joules] to that required to subsequently assemble the house, more or less; but the results achieved are very different, functionally different. [Cf. my second thought expt on the micro-bridge, back at UD.]

8] it sounds as though we might agree that the energy movement is the same, and what is happening is that we have set the DNA sequencer to do different tasks. One task is to produce a certain functional DNA, the other task is to produce a certain non-functional DNA.

The quantity of energy expended may be comparable, but the results are FUNCTIONALLY diverse, i.e. we have observably distinct macrostates, with relevant microstate counts. [And, for the moment, conforming to a given blueprint is a sort of function, and I suppose a PCR test can make it observable.]

On the -- note just now introduced -- premise of that distinct observability, we are looking at of course the collapse of w from scattered monomers, to at-random clumped chains to configured chains that are observably in given macrostates with one or relatively few microstate members. [I am here thinking of the shift to ensembles as a way to count microstates; consider this to be implicit in what has gone before . . .]

The same result obtains as has long since been pointed out. TBO are right.

9] now the difference is whether the DNA sequence is in theory biofunctional, rather than whether it will ever be used biologically.

Again, it is observability that identifies a macrostate. Once we have that degree of function, as noted above, we can do microstate counts and see how W is collapsing, thus entropy is decreasing, due to configurational issues.

Function in this sense is not just biofunction; i.e. beware of equivocating meanings and contexts.

The relevant states for the point -- that clumped vs biofunctionally configured is another incremental collapse in w, thus s -- are as I have long since pointed out. Once the macrostates are identifiable, the microstate counts can proceed. [In this case we can in effect breed our ensembles.]

--> Nor am I at all intending ridicule of any thought expt ["imaginary lab" is another term for the expt in your head, on paper, in your computer, etc.]. Note, too, that nanobots have long since existed: we call them cells. Apologies for inadvertent offence.

10] there will be no discernible difference in the entropy between the two DNA sequences.

The formation of two observable, specified macrostates will result in relevant microstate counts, and if the states are unique they will have the same config entropy value, driven by the value of W = 1.

This is irrelevant to the point that a clumped-at-random macrostate can be functionally distinguished from a functionally configured state, and that the w-count falls in moving to the latter, thence s falls. In short, again, TBO are right.

11] The DNA sequence must somehow "know" that when it is put into a cell it will be able to do something!

I have pointed out, repeatedly, that macrostates are based on observation of relevant physical conditions, and we count microstates associated with given macrostates to get w, thence s. Observations are written into the process, explicitly or implicitly.

The DNA chain's state or non-state of consciousness is irrelevant to that. I have simply brought out what should be a familiar concept in the quantum and relativity era: observability is relevant to scientific work, and often crops up in our quantities, explicitly or implicitly. [Cf Robertson's long since cited remarks . . . ]

12] The DNA sequencer produces two vials of DNA. Which is the functional one? The only way to tell is to put them in cells... and let the cells do micro-level inspection of each sequence . . . . we agree that the simple repeating pattern DNA sequence is low entropy, even though it is non-functional . . . . you spend most of your post emphasising that the big difference here is whether the thing is functional:

And the resulting functionality/ non-functionality is typically macroscopically observable: e.g. one spot on the petri dish medium will grow apace, the other will not. So, we can observe the state.

In the case of a specified repeating sequence, we could [as just noted above] do a PCR or the like and in some way inspect the state at a human level of observation. At least, in principle. But the point is, you have a specified and observable state so can do a microstate-count and assign a value of w. The result is that a sufficiently complex entity in a specified state is in a low entropy condition so far as configurational entropy is concerned. That is my point.

And, finally, I am emphasising that macrostate assignment is a matter of observability. Flyability in the case of the microjet, or biofunction in the case of a cell's DNA, are observable. With some effort and imagination, we can identify a given microstate specifically and perhaps observe it macroscopically. Once that can be done, we can do a microstate count. The result will be as I have pointed out and discussed.

Now, too, if the state is not macroscopically observable [look above on the nearly but not flyable microjet to see what I am looking at] the state would properly have to be counted as part of the clumped state, which is observable. Its w value is far higher than that of the functional configuration, as noted.

You will note again and again how important using functionality to cleanly identify a state at macro level is.

13] I have never observed that the complexity of the functional molecular machines of life at cellular level takes a lot of smarts to achieve

In EVERY case where we directly know the causal story of an FSCI entity, we see that it takes smarts to get to a relatively rare functional config.

Indeed, in our work with say internet messages, we implicitly accept that [the famous 500 bit limit issue].

But, then, when many see an even more sophisticated case -- despite the obvious issues on the configuration space -- the inference is made that, no, it is the "exception." Why? Because of a prior, worldviews-level commitment that "demands" it.
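As an aside on the scale behind that 500-bit figure, here is a quick back-of-envelope check (the comparison figure for atoms is a common rough estimate, not a measured datum):

```python
import math

bits = 500
configs = 2 ** bits            # size of the 500-bit configuration space

print(math.log10(configs))     # ~150.5, i.e. about 3.3e150 configurations
# Compare a common estimate of ~1e80 atoms in the observable universe:
print(configs > 10 ** 80)      # True
```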

14] you clearly say that the entropy of the DNA strands is the same, even though one is functional and one is not.

Again: once a macrostate is identifiable -- i.e. observable -- it can be associated with a statistical weight of associated microstates, w, thence config entropy number Sconfig. If a specification chosen at random or arbitrarily is so identifiable, it will have a micro-state count corresponding to low configurational entropy. So will a functional state, which is directly observable.

In short, scattered to clumped-at-random to specific, observable config [which is happening in both cases] collapses W in stages, yielding that Sclumped is greater than Sconfig. And that is my point; it is TBO's point; and, beyond us both, Sewell's.

In sum: the point made by TBO and Sewell stands.

GEM of TKI

The Pixie said...

What Is - And Is Not - Entropy?

I have just come across an excellent paper by Prof. Frank L. Lambert. Lambert has a lot of stuff about entropy on the internet, eg here and here. His CV (from here):

Frank L. Lambert graduated with honors from Harvard University and received the doctorate from the University of Chicago (under Professor M. S. Kharasch). After military service in WWII and industrial research and development, he joined the faculty of Occidental College. His primary concern was teaching and his publications in the field included a call for abandonment of the standard lecture system (because Gutenberg lives on in printed texts), the first article showing how lecture-size organic molecular models and atomic orbital models could be made from Styrofoam, and, in quite different vein, the first article on thermodynamics and theology in Zygon. For many years he taught "Enfolding Entropy" a course for non-science majors. His research in the synthesis and polarography of organic halogen compounds was always designed for undergraduate collaboration and all but one of his papers were published with student co-authors. Professor emeritus, he became the scientific advisor to the J. Paul Getty Museum and continued as a consultant when the Getty Conservation Institute was established and until it grew to have a staff of 14 scientists.

In summary, this is a very well respected scientist, with a track record in thermodynamics and in educating people about entropy. (Yes, this is an argument from authority, but this guy is an excellent authority on entropy; argument from authority is only a fallacy when you quote someone who is not an authority in the specific subject.)

The paper I found is here and I am going to quote several chunks of it. Bold mine, italics in original.

The definition, “entropy is disorder”, used in all US first-year college and university textbooks prior to 2002, has been deleted from 15 of 16 new editions or new texts published since 2002 [2]. Entropy is not ‘disorder’ [3] nor is entropy change a change from order to disorder. (Messy papers on a desk or shuffled cards are totally irrelevant to thermodynamic entropy).

Nearly all US general chemistry textbooks prior to 2000 had an illustration or a description of a disorderly desk, or messy student room, or shuffled cards as an example (a minority of texts said “an analogy”) of an increase in entropy. This is absurd. Macro objects do not undergo any thermodynamic change of state by simply being rearranged in space as was established in a 1999 article [11]. Due to that article, no US college/university chemistry text published since 2004 has such a misleading illustration.

That second paragraph is actually a note at the end that relates to the first one, if you are looking for it in the paper.

We discussed the arrangement of books on a shelf earlier. Lambert is clearly saying that whether the order is random or alphabetical by author is irrelevant to the entropy (as I was trying to argue). So what is entropy?

Then, examine the molecular thermodynamics of the modern interpretation of Boltzmann: deltaS = k_B ln(W/W_0) = k_B ln W, where W is the number of microstates available for the system at its temperature; a microstate being one arrangement (of all the motional energies of the molecules, i.e., of the quantum states) in which the total energy of the system might be at one instant.

Entropy is about how energy is dispersed. Not about the distribution of mass, please note.

The process continues until both flasks and the membrane are at exactly the same temperature. What has happened? Energy – the greater average motional energy of the molecules – in the warmer flask has been dispersed or spread out to the motional energy of the molecules in the slightly cooler flask. What could be more obvious?

A second classic example (that made me despair of ever understanding entropy when I was a student of 18) was that of a gas spontaneously expanding isothermally into an evacuated chamber. But then the professor told us that there was no change in ‘q’ involved. Therefore, we did not have a ‘q’ to insert in the equation, yet because the process was spontaneous, the entropy had increased! Why were we not also told that the significant event that occurred was the initial motional energy of the gas, the motions of its molecules, in one chamber had spontaneously become dispersed or spread out over the entire volume of the two chambers?

To the question that innumerable students have asked, “What is entropy, really?” a clear answer is now “Entropy measures the dispersal of energy: How much energy is spread out in a process, or how widely spread out it becomes – at a specific temperature.
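As a quick numerical sketch of that dispersal picture (my own illustrative numbers: two equal, temperature-independent heat capacities):

```python
import math

C = 10.0                        # heat capacity of each flask, J/K (illustrative)
T_hot, T_cold = 310.0, 300.0    # initial temperatures, K
T_final = (T_hot + T_cold) / 2  # equal capacities equilibrate at the mean

# Integrating dQ/T = C dT/T gives dS = C ln(T_final/T_initial) per flask
dS_hot = C * math.log(T_final / T_hot)    # negative: the hot flask loses entropy
dS_cold = C * math.log(T_final / T_cold)  # positive: the cool flask gains more
print(dS_hot + dS_cold)  # small but strictly positive: net dispersal
```

The cooler flask gains slightly more entropy than the warmer one loses, so the total rises, with no appeal to "disorder" at all.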

------

Here is another page by Lambert.

The conclusion is the same from another molecular viewpoint. There are many many more microstates for a warmer object or a flame than for a cooler object or substance. However, the transfer of energy to a cooler object causes a greater number of additional microstates to become accessible for that cooler system than the number of microstates that are lost for the hotter system. So, just considering the increase in the number of microstates for the cooler system gives you a proper measure of the entropy increase in it via the Boltzmann equation. Because there are additional accessible microstates for the final state, there are more choices for the system at one instant to be in any one of that larger number of microstates – a greater dispersal of energy on the molecular scale.

The second big category of entropy increase isn’t very big, but often poorly described in general chemistry texts as "positional" entropy (as though energy dispersal had nothing to do with the change and the molecules were just in different ‘positions’!) It involves spontaneous increase in the volume of a system at constant temperature. A gas expanding into a vacuum is the example that so many textbooks illustrate with two bulbs, one of which contains a gas and the other is evacuated. Then, the stopcock between them is opened, the gas expands. In such a process with ideal gases there is no energy change; no heat is introduced or removed. From a macro viewpoint, without any equations or complexities, it is easy to see why the entropy of the system increases: the energy of the system has been allowed to spread out to twice the original volume. It is almost the simplest possible example of energy spontaneously dispersing or spreading out when it is not hindered.
From a molecular viewpoint, quantum mechanics shows that whenever a system is permitted to increase in volume, its molecular energy levels become closer together. Therefore, any molecules whose energy was within a given energy range in the initial smaller-volume system can access more energy levels in that same energy range in the final larger-volume system: Another way of stating this is "The density of energy levels (their closeness) increases when a system's volume increases." Those additional accessible energy levels for the molecules' energies result in many more microstates for a system when its volume becomes larger. More microstates mean many many more possibilities for the system's energy to be in any one microstate at an instant, i.e., an increase in entropy occurs due to that volume change. That's why gases spontaneously mix and why they expand into a vacuum or into lower pressure environments.

Hey, that is what I said! He summarises:

Energy of all types changes from being localized to becoming dispersed or spread out, if it is not hindered from doing so. The overall process is an increase in thermodynamic entropy, enabled in chemistry by the motional energy of molecules (or the energy from bond energy change in a reaction) and actualized because the process makes available a larger number of microstates, a maximal probability.
The two factors, energy and probability, are both necessary for thermodynamic entropy change but neither is sufficient alone. In sharp contrast, information ‘entropy’ depends only on the Shannon H, and ‘sigma entropy’ in physics (σ = S/kB) depends only on probability as ln W.
Entropy is not "disorder". [See http://www.entropysite.com/cracked_crutch.html ]
Entropy is not a "measure of disorder".
Disorder in macro objects is caused by energetic agents (wind, heat, earthquakes, driving rain, people, or, in a quite different category, gravity) acting on them to push them around to what we see as "disorderly" arrangements, their most probable locations after any active agent has moved them. The agents (other than gravity!) undergo an increase in their entropy in the process. The objects are unchanged in entropy if they are simply rearranged.
If an object is broken, there is no measurable change in entropy until the number of bonds broken is about a thousandth of those unchanged in the object. This means that one fracture or even hundreds make no significant difference in an object's entropy. (It is only when something is ground to a fine powder that a measurable increase or decrease in entropy occurs -- the sign of change depending on the kinds of new bonds formed after the break compared to those in the original object.)
Even though breaking a mole-sized crystal of NaCl in half involves slight changes in hundreds to thousands of the NaCl units adjacent to the fracture line, in addition to those actually on such a line, there are still at least 10^6 bonds totally unaffected. Thus we can see why a single fracture of a ski (unhappy as it is to the skier), or a house torn apart in to ten thousand pieces by a hurricane (disastrous as it is to the homeowner), represent truly insignificant entropy changes. [www.shakespeare2ndlaw.com] The only notable scientific entropy change occurs in the agent causing the breaks. Human concepts of order are misplaced in evaluating entropy.
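The free-expansion passage quoted above can even be checked quantitatively. A minimal sketch (one mole, volume doubling, standard constants): the macro route via the matching reversible path and the micro route via counting microstates give the same number:

```python
import math

R = 8.314462618      # gas constant, J/(mol K)
k_B = 1.380649e-23   # Boltzmann constant, J/K
N_A = 6.02214076e23  # Avogadro's number, 1/mol

# Macro view: one mole of ideal gas doubles its volume isothermally;
# using the q_rev of the matching reversible path, dS = R ln(V2/V1)
dS_macro = R * math.log(2)

# Micro view: each molecule's accessible volume doubles, so W gains a
# factor of 2 per molecule and dS = k_B ln(2**N_A) = N_A k_B ln 2
dS_micro = N_A * k_B * math.log(2)

print(dS_macro, dS_micro)  # both ~ 5.76 J/K
```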


------

And another page, on another web site, by Lambert, specifically about how the messy-room and similar analogies are actually misleading:

To aid students in visualizing an increase in entropy many elementary chemistry texts use artists' before-and-after drawings of groups of "orderly" molecules that become "disorderly". This has been an anachronism ever since the ideas of quantized energy levels were introduced in elementary chemistry. "Orderly-disorderly" seems to be an easy visual support but it can be so grievously misleading as to be characterized as a failure-prone crutch rather than a truly reliable, sturdy aid.
After mentioning the origin of this visual device in the late 1800s and listing some errors in its use in modern texts, I will build on a recent article by Daniel F. Styer. It succinctly summarizes objections from statistical mechanics to characterizing higher entropy conditions as disorderly (1). Then after citing many failures of "disorder" as a criterion for evaluating entropy — all educationally unsettling, a few serious, I will urge the abandonment of order-disorder in introducing entropy to beginning students. Although it seems plausible, it is vague and potentially misleading, a non-fundamental description that does not point toward calculation or elaboration in elementary chemistry, and an anachronism since the introduction of portions of quantum mechanics in first-year textbooks.
Entropy's nature is better taught by first describing its dependence on the dispersion of energy (in classic thermodynamics), and the distribution of energy among a large number of molecular motions, relatable to quantized states, microstates (in molecular thermodynamics). Increasing amounts of energy dispersed among molecules result in increased entropy that can be interpreted as molecular occupancy of more microstates. (High-level first-year texts could go further to a page or so of molecular thermodynamic entropy as described by the Boltzmann equation.).


Also:

But it was not a social scientist or a novelist — it was a chemist — who discussed entropy in his textbook with "things move spontaneously [toward] chaos or disorder".7 Another wrote, "Desktops illustrate the principle [of] a spontaneous tendency toward disorder in the universe…".7 It is nonsense to describe the "spontaneous behavior" of macro objects in this way: that things like sheets of paper, immobile as they are, behave like molecules despite the fact that objects' actual movement is non-spontaneous and is due to external agents such as people, wind, and earthquake. That error has been adequately dismissed (7).

So what is entropy?

The general statement about entropy in molecular thermodynamics can be: "Entropy measures the dispersal of energy among molecules in microstates. An entropy increase in a system involves energy dispersal among more microstates in the system's final state than in its initial state." It is the basic sentence to describe entropy increase in gas expansion, mixing, crystalline substances dissolving, phase changes and the host of other phenomena now inadequately described by "disorder" increase.

Gordon said...

PS: There is a very thermodynamic way to discuss the difference between the quantity of energy involved in doing work and the work that results. Namely, the observation that work is a PATH FUNCTION, not a state function.

As my Sears & Sal puts it, p. 62: "the work d'W of a force F when its point of application is displaced ds [at an angle theta to F] is defined as F cos[theta] ds . . ." Of course here s is displacement, not entropy.

The d'W indicates that the work is not a proper differential, i.e. it is a path function, not a state function; i.e. it is like heat flow, d'Q. Work can appear in other guises which reduce to the above, e.g. deflection of bodies, electrical processes in circuits, and magnetic effects.

Paths are of course linked to configurations that result from those paths.

On following pages we can see Section 3.4 "Work depends on the path" and 3.5 "configuration work and dissipative work"

In this part, p. 70 - 71, free expansion is cited as a case of config change without work being done [there is no resistance to the free expansion, so no pressure-volume work in effect]. Of course entropy increases as the config shifts in this case.
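To put a number on the path dependence (a minimal sketch with an illustrative temperature; both paths connect the very same pair of end states):

```python
import math

R = 8.314462618    # gas constant, J/(mol K)
T = 300.0          # K (illustrative)
V1, V2 = 1.0, 2.0  # only the volume ratio matters

# Path A: reversible isothermal expansion does work against the piston
W_reversible = R * T * math.log(V2 / V1)  # ~ 1729 J per mole

# Path B: free expansion into vacuum meets no resistance
W_free = 0.0                              # no pressure-volume work at all

# Entropy is a state function: identical for both paths
dS = R * math.log(V2 / V1)                # ~ 5.76 J/(mol K) either way
print(W_reversible, W_free, dS)
```

Same dS, very different work: that is what "path function" means operationally.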

The Pixie said...

Hi Gordon

I have half written a summing up, but your argument seems to be evolving daily, with new claims such as that work involves direction and meaning, and about what counts as functional.

2] The QUANTITY of work is the product [it's a vector dot product actually] of the magnitude of the force and the distance moved along the line of action. But that is not the same as to dismiss the quality involved: work takes its meaning and relevance from the fact that humans [and other creatures . . .] are involved in tasks as you say, which impart specific configurations to objects by doing targetted work. (That is why it is a rather special quantity used in physics.)
The work done to impart one configuration is not at all the same as that done to impart another. Indeed, we should note that the fact that vectors are involved should tell us that: forces and displacements are different in both magnitude and direction, which through the vector dot product, affects even the quantity of work done. There is more to it than the number of joules involved.

So explain what those differences are! Can you provide a web page that describes how they might be different? I found this one. From it:

"When the force acting on a moving body is constant in magnitude and direction, the amount of work done is defined as the product of just two factors: the component of the force in the direction of motion, and the distance moved by the point of application of the force. "

I see nothing here to suggest that the magnitude of the work can change while the energy expended is the same. What units do you measure the magnitude of the work in, by the way?
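For concreteness, here is the standard dot-product computation (my own numbers); note that the result is a single scalar in joules, whatever the angle:

```python
import math

def work(force, displacement):
    """Work as the dot product F . d, in joules, for 2-D vectors."""
    return sum(f * d for f, d in zip(force, displacement))

F = (10.0, 0.0)          # a 10 N force along x
d_along = (3.0, 0.0)     # 3 m displacement along the line of action
d_angled = (3.0 * math.cos(math.pi / 3),
            3.0 * math.sin(math.pi / 3))  # 3 m at 60 degrees to F

print(work(F, d_along))   # 30.0 J: the full |F||d|
print(work(F, d_angled))  # ~15.0 J: |F||d| cos(60 deg), same distance
```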

Can you go through the differences in the direction of the work for the two types of nanobots? If this is important, as you say, then we need to be quite clear on what is happening. Please be specific, so it is quite clear what is happening in each case (e.g., I get that a vector plus a vector going the opposite way gets you back to where you started; I do not get why we should suppose this is happening in one case and not the other). Otherwise, I am afraid I shall suspect you of deliberately muddying the waters; adding needless complications to hide the fact that you are wrong.

And I am not clear on the bit "work takes its meaning and relevance from the fact that humans are involved in tasks". Does this mean that if humans (etc.) are not involved, then no work is done, or less work, or what? If I push a rock off a cliff, is the work done by the falling rock any different than if it fell by natural causes? If that is what you are claiming, then please find some evidence to support the claim, because to be honest, I find that idea to be ridiculous (but you are a physicist, so you must know some web sites to back that up, right?). And if you think the rock does the same work either way, please explain why you felt the need to introduce a new irrelevancy.

We have been discussing the imaginary nanobots for weeks now, and I keep asking you to go through the thermodynamics. Only today do I find out the direction of the work is significant. Why is that?

Please go through the thermodynamics of every step and sub-step for these two nanobots and state exactly how they are different, specifically what those differences are (yes, even what direction), why they must be different in the two examples, and what the impact is on the thermodynamics.

You might also like to explain why you have not done this already, as surely this is the heart of the thought experiment!

3] Macrostate is short for macroscopically observable state, whilst microstate is short for specific individual config [usually at molecular or smaller scale] compatible with the relevant macrostate.
See previous post, with numerous quotes from Prof. Lambert!

4] Of course, I have particularly emphasised functionally specified complex informational systems, which are directly observable and so the macrostate is quite easy to identify. Your attempt here is to remove the issue of functionality so that any particular microstate is now just the same as any other -- the macrostates are no longer relevant.
Correct.

Immediately, that implies the relevance and cogency of what I have again just now summarised.
I thought the opposite. By removing functionality, surely I have shown that it is not relevant.

That is, the collapse of w in going from scattered to clumped at random to flyable configured states is implicitly acknowledged to identify three distinct macrostates, with the entropy falling with W as we go towards the configured state. That is, strictly, all that is needed for TBO's point to carry.
Sure. But we have yet to determine if the W you are using is the same one used in thermodynamics. The quotes by Lambert in my last post tell us that in fact you are using a different W. I could use W for books arranged on a shelf - but that would not be thermodynamics. In thermodynamics, as Lambert said, W is about energy distribution.

5] The Boltzmann math depends on the recognisability of a given macrostate. That issue has just been addressed again. To wit, what happens if the observed clustered state has a few misplaced parts or missing parts that are not obvious to the microscope but are very visible if we try to fly the jet?
In Boltzmann's maths, a macrostate is the same as a state in classical thermodynamics. From Wiki:

In statistical mechanics, a microstate describes a specific detailed microscopic configuration of a system, that the system visits in the course of its thermal fluctuations.
In contrast, the macrostate of a system refers to its macroscopic properties such as its temperature and pressure. In statistical mechanics, a macrostate is characterized by a probability distribution on a certain ensemble of microstates.
This distribution describes the probability of finding the system in a certain microstate as it is subject to thermal fluctuations.


There is nothing in macrostates that requires a specification, or functionality. Just standard thermodynamic things like temperature and pressure.

The functional assemblies in their vat and the non-functional assemblies in the other vat are at the same temperature and pressure. The functionality is not relevant to their macrostate (and both are non-functional in the vat, under low pressure). So why the difference in thermodynamic entropy?

6] Again, your proposed state is inscrutable relative to function/non function so is part of the observable at random clumped state. That you have a blue print [or equivalently a part by part list in the software] is not relevant to functionality, or to whether the blueprint and the clumped state correspond.
I agree, that was my point.

And, at most -- note here the further point, that a "blueprint state" may be observable under certain circumstances -- if your state is in fact observable, it would then be "functional" in the sense of observably corresponding to a predefined state.
It is observable - with the electron microscope.

So, next thought experiment. Back to the faulty-but-not-by-design nanobots. They keep producing these non-flying, non-specified, high entropy assemblies. I am curious how they do that, so I carefully study the assembly, then design a new set of nanobots that builds the same assembly. The assemblies are still non-flying, but now they are specified, so must be low entropy. Same assemblies, different entropy.

But entropy is a state function! It does not depend on how you got to a certain state, only on that state. How can that possibly be?

7] Pix: Can you explain the difference in the movement in energy in these two vats?
Again, note the configurational implications of the vectors involved in work.
I keep asking you to explain the difference, and now, weeks into the debate, you bring up vectors and direction. Sounds just a bit like you are making it up as you go along.

Vectors differ in both magnitude and direction, so a cumulative effect of work done that targets different configs is different. The quantity of energy expended to achieve the work will be comparable, but the actual work achieved will be different, just as the quantity of work to assemble house parts at a site is comparable [quantitatively, it can be more, equal or less, but is measured in the same terms, joules] to that to subsequently assemble the house, more or less, but the results achieved are very different, functionally different. [Cf my second thought expt on the micro-bridge, back at UD.]
Hand waving! Come on and show the maths for the two vats of nanobots. What is the difference between them, and why? Otherwise, sorry, but I will not believe you.

8] The quantity of energy expended may be comparable, but the results are FUNCTIONALLY diverse i.e we have observably distinct macrostates, with relevant microstate counts. [And, for the moment conforming to a given blueprint is a sort of function, and I suppose a PCR test can make it observable.]
Yes, I know. One set of assemblies were built to a pre-existing specification, the other was not (that is what you mean by "the results are FUNCTIONALLY diverse i.e we have observably distinct macrostates, with relevant microstate counts", right?). I just do not get the difference in thermodynamics.

10] The formation of two observable, specified macrostates will result in relevant microstate counts, and if the states are unique will have the same config entropy value driven by the value of W = 1.

This is irrelevant to the point that a clumped at random macrostate can be functionally distinguished from a functionally configured state and that the w-count falls in moving to the latter, thence s falls.

Sure, when the going gets tough, reject DNA sequences and go for imaginary nano-planes!

The w-count falls whether we move to a functional configuration, or a non-functional one, if both are specified. Thus, "functional" is irrelevant, and we should only consider whether it is specified, when calculating the configurational entropy.

11] And the resulting functionality/ non-functionality is typically macroscopically observable: e.g. one spot on the petri dish medium will grow apace, the other will not. So, we can observe the state.
We can observe the state of the non-functional one using DNA sequencing. No problem there. On the other hand, putting a spot of human DNA on a Petri dish will not show us anything.

In the case of a specified repeating sequence, we could [as just noted above] do a PCR or the like and in some way inspect the state at a human level of observation. At least, in principle. But the point is, you have a specified and observable state so can do a microstate-count and assign a value of w. The result is that a sufficiently complex entity in a specified state is in a low entropy condition so far as configurational entropy is concerned. That is my point.
Sounds like the complexity is irrelevant too, then; it is just the specification. If it is specified, the entropy is zero. That said, the complexity may be a consideration in the entropy of the at-random instance (as Dembski uses it, complexity is the same as improbability, so that fits).

And, finally, I am emphasising that macrostate assignment is a matter of observability. Flyability in the case of the microjet or biofucntion in the case of a cell's dna, are observable. With some effort and imagination, we can identify a given microstate specifically and perhaps observe it macroscopically. Once that can be done, we can do a microstate count. The result will be as I have pointed out and discussed.
This is not how macrostates are defined in conventional thermodynamics!

You will note again and again how important using functionality to cleanly identify a state at macro level is.
On the contrary, I am coming to the conclusion that functionality is irrelevant, and what is important is specification.

13] Pix: I have never observed that the complexity of the functional molecular machines of life at cellular level takes a lot of smarts to achieve
In EVERY case where we directly know the causal story of an FSCI entity, we see that it takes smarts to get to a relatively rare functional config.
Name one instance of functional molecular machines of life at cellular level where we know the causal history. I know I cannot.

Indeed, in our work with say internet messages, we implicitly accept that [the famous 500 bit limit issue].
Not exactly "functional molecular machines of life at cellular level" though.

14] Again: once a macrostate is identifiable -- i.e. observable -- it can be associated with a statistical weight of associated microstates, w, thence config entropy number Sconfig. If a specification chosen at random or arbitrarily is so identifiable, it will have a micro-state count corresponding to low configurational entropy. So will a functional state, which is directly observable.
Again and again, it seems to be specification that is relevant, not functionality.

Gordon said...

Pixie:

Your two posts interleaved with my last comment. I will again respond on points, observing that the circularity of the discussion now largely overwhelms any progress being made:

1] What is entropy:

Now, of course, certifications etc., while generically important, are subject to the stricture that no authority -- including Mr Lambert, you, and yours truly -- is better than his/her underlying facts, assumptions and reasoning.

So, let us go back to basics.

Entropy is a metric of the disorder of systems at micro level, as is entailed by Clausius' classic:

ds >/= d'Q/T.

d'Q is of course importation of random molecular scale motion [i.e. thermal energy].

This brings us to the Boltzmann formulation, where s = k ln w. That is, we identify a macroscopically observable state of a system, then do a micro-state count and calculate s.

In turn that implicates both energy and matter/mass distributions at the relevant micro levels -- you do not have energy distribution without mass distribution, and configurational entropy is still a very relevant concept starting with free expansion and diffusion. Entropy also implicates probabilities [and thus, per Robertson et al, information], as the fundamental principle on a macrostate is that all accessible microstates are equiprobable. And, macro-states are in fact therefore linked to the micro level.

I would also observe that ruling a datum line and saying that once objects get "big enough" we will not use entropy concepts in discussing their behaviour falls of its own weight as special pleading: kindly identify "big enough," and what that means operationally, to see my point.

That quantum states approach one another in bodies as molecules approach each other [thence bands in solids etc], and that as scale goes up Q states also become closer, is a trend and not a qualitative difference. Finally, the classical view is still relevant, i.e. the place of the ideal gas etc and its behaviour is a subject for thermodynamics, not least because of the common utility of this model and the correspondence principle, i.e. the Q-theory result must approach the classical one as scale becomes appropriate. This is built into the Copenhagen principles, and guarantees that Q results will encompass classical ones as a limiting case, necessary for empirical adequacy.

As Wiki summarises in its Entropy article, and appropriately I might add:

"The concept of entropy (Greek: εν (en=inside) + verb: τρέπω (trepo= to chase, escape, rotate, turn)) in thermodynamics is central to the second law of thermodynamics, which deals with physical processes and whether they occur spontaneously. Spontaneous changes occur with an increase in entropy. Spontaneous changes tend to smooth out differences in temperature, pressure, density, and chemical potential that may exist in a system, and entropy is thus a measure of how far this smoothing-out process has progressed. In contrast, the first law of thermodynamics deals with the concept of energy, which is conserved. Entropy change has often been defined as a change to a more disordered state at a molecular level. In recent years, entropy has been interpreted in terms of the "dispersal" of energy. Entropy is an extensive state function that accounts for the effects of irreversibility in thermodynamic systems.

Quantitatively, entropy, symbolized by S, is defined by the differential quantity dS = δQ / T, where δQ is the amount of heat absorbed in an isothermal and reversible process in which the system goes from one state to another, and T is the absolute temperature at which the process is occurring.[3] Entropy is one of the factors that determines the free energy of the system . . . . In terms of statistical mechanics, the entropy describes the number of the possible microscopic configurations of the system. The statistical definition of entropy is the more fundamental definition, from which all other definitions and all properties of entropy follow. Although the concept of entropy was originally a thermodynamic construct, it has been adapted in other fields of study, including information theory, psychodynamics, thermoeconomics, and evolution.[4][5][6]"

I note on this, that the reason we generally do not address configurational entropy of say a cluster of lego bricks vs a house made of said bricks, or how books in a library are organised, is that the quantitative measure of the difference is negligible or irrelevant for our purposes, not that the concept is inherently absurd. Besides, the direct informational approach is more fruitful in such cases for most of us, but that is in turn linked to the thermodynamics issues, as Robertson summarised. But, consider here: as the storage and functional media get smaller and smaller in unit element scale, the thermodynamics issues become ever more apparent in information processing.

In short, there is an arbitrary datum line issue here, that Mr Lambert is missing. [E.g scale up particles in a liquid, and ask yourself if strictly at any level, the particles cease to undergo fluctuation-triggered Brownian motion, or is it that as the relative scale of fluctuations falls, we can safely ignore the issue as it is observationally irrelevant for our usual purposes.]

You will also observe that in the case of the micro jet thought expt, I deliberately chose the scale to be such that the particles in question are in effect large-end molecules capable of visibly participating in the thermal agitation, through Brownian motion. In short, debates on the application of entropy concepts to macro-level objects are strictly irrelevant to our discussion in the main.

In that context, Mr Lambert's remarks are therefore by and large irrelevant to our discussion. (It is also apparent that he is from one end of the spectrum of the debates on the links to information theory, and on that I will simply say that Brillouin, Robertson et al, IMHCO, on objective grounds, have the better of the debate.)

In short, you have not settled the core issue by citing Mr Lambert as an authority. So, you need to re-address the two sides of the issue, and for that I here cite again Robertson's summary:

__________

[. . . the standard assertion that molecular chaos exists is nothing more than a poorly disguised admission of ignorance, or lack of detailed information about the dynamic state of a system . . . . If I am able to perceive order, I may be able to use it to extract work from the system, but if I am unaware of internal correlations, I cannot use them for macroscopic dynamical purposes. On this basis, I shall distinguish heat from work, and thermal energy from other forms [pp. Vii – viii] . . . . the assignment of probabilities to a set represents information, and that some probability sets represent more information than others . . . if one of the probabilities say p2 is unity and therefore the others are zero, then we know that the outcome of the experiment . . . will give [event] y2. Thus we have complete information . . . if we have no basis . . . for believing that event yi is more or less likely than any other [we] have the least possible information about the outcome of the experiment . . . . A thermodynamic system is characterized by a microscopic structure that is not observed in detail . . . We attempt to develop a theoretical description of the macroscopic properties in terms of its underlying microscopic properties, which are not precisely known. We attempt to assign probabilities to the various microscopic states . . . based on a few . . . macroscopic observations that can be related to averages of microscopic parameters. Evidently the problem that we attempt to solve in statistical thermophysics is exactly the one just treated in terms of information theory. It should not be surprising, then, that the uncertainty of information theory becomes a thermodynamic variable when used in proper context. [pp. 3 – 7, 36, Stat Thermophysics, PHI]
__________

--> In sum, there is a bridge to information theory, based on the foundational concepts of stat mech reasoning. And, that reasoning scales quite well -- indeed, Robertson has an interesting discussion on an imaginary air control system, where we do know locations of the jets; then he scales down to the point where we are reduced to statistical approaches, and deduces the whole project of stat thermodynamics from that . . .

2] your argument seems to be evolving daily, such as work involves direction and meaning, and what counts as functional.

Not so. I am explaining underlying points as the issues come up for clarification.

In the case of work, it has long been understood in Physics as the dot product of force and displacement vectors, which brings in direction, orientation and location issues as well as magnitude issues. I realised that this point needed to be made explicitly as your objections on configuration were multiplied. As a chemist, did you do the level of physics where this was addressed? [Just curious . . . the level of physics that requires calculus and some exposure to vectors, e.g. Halliday-Resnick's classic text.]
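As a minimal numerical sketch of that definition (my own illustration): the same force magnitude acting over the same displacement magnitude does different work depending on relative direction:

```python
def work(force, displacement):
    """Work as the dot product of force and displacement vectors (J)."""
    return sum(f * d for f, d in zip(force, displacement))

# Same magnitudes (|F| = 10 N, |d| = 2 m), different relative directions:
w_aligned = work((10.0, 0.0, 0.0), (2.0, 0.0, 0.0))     # parallel: 20 J
w_orthogonal = work((10.0, 0.0, 0.0), (0.0, 2.0, 0.0))  # perpendicular: 0 J
w_opposed = work((10.0, 0.0, 0.0), (-2.0, 0.0, 0.0))    # antiparallel: -20 J
```

Magnitude alone does not fix the outcome; the spatial relationship between force and displacement enters directly.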

You will note that I have long since in my always linked discussed functionality to include issues such as meaningfulness and/or correspondence with the syntax etc of a language -- cf my "more or less in English" remark.

As you have raised further objections and cases, I have brought out some implications, to highlight that your cases lead to two outcomes: in some cases we can operationally define a macrostate resulting from the manipulation, so we can proceed to count W and see that s falls. In other cases, we cannot do so, and so we are forced to leave the macrostate as random clumped, and count W relative to that. This still leaves us on the point that as we move from scattered to clumped to configured and flyable [or, biofunctional] states, we have reduced entropy by changing configuration of micro-particles associated with in principle identifiable macrostates.

3] explain what those differences are!

Insofar as the overall sum of the vector increments underlying d'W differs, the work done in one path necessarily differs from that in another path. The same quantum of energy [think of using up the same amount of fuel in a tiny fuel cell in the nanobots here] may be implicated in the two cases, but the end states are often observably -- macroscopically observably -- different in location, orientation and configuration. [E.g clumped at random vs a flyable jet.]

That difference makes a difference to w and so to s. Of course, the context is that s falls as we move from clumped to functionally configured states.

4] I see nothing here to suggest that the magnitude of the work can change while the energy expended is the same.

Vectors bring in spatial contexts and concepts into the picture. They may differ in magnitude and in direction.

In the case of displacement, it is the vector change in location, and we can define a related metric of angular orientation of an extensive non-isotropic body [yaw-pitch-roll; think aeroplane or ship here], both being relevant to our cases.

So, magnitude and location must be held in view and not conflated.

In the case of random clustered jet parts, just sticking them together at random, in location cells of order 10^-6 m across, and in any old orientation, is vastly different from a specific, flyable configuration that locks parts to parts in functional ways tied to relative location and orientation. This, I have repeatedly discussed for weeks, literally.

In the macromolecules of life, the issue of precise geometry of molecules and the relevance of same to biofunctionality is a longstanding commonplace. Underlying this 3-D lock-and key fit issue is the precise chaining of protein monomers based on coded DNA strands, again a commonplace. Life systems expend enormous complexity and energy in making sure of that functionality.

In this light, and given your ill-judged claim on the point, I think it is fair comment for me to ask here that maybe you should look in a mirror and ask again the question of just who is "muddying the waters."

5] Work concept:

Again, physics is a cultural activity, relevant to human interests and needs. Its terms and concepts are freighted with human significance. That is why I noted that we identify and define a concept "work" based on its significance to humans.

That is, the connexion between physical work and economic work -- thence, the importance of "energy" in current debates -- is not a coincidence. Indeed, the energy intensity of an economy is connected to the technologies in use that exert planned forces on material objects to create and use goods that deliver energy-based services. (For instance, in a certain institution that I worked in, energy use for air conditioning exploded over the 1990's as computers were introduced in large numbers, to the point where at the end of that period, 70% or so of daytime energy use was for cooling services. Since the institution was consuming about 5 MW, that raised the issue of cogeneration to utilise the waste heat locally, to run centralised absorption chillers and cold water pipelines, raising energy efficiency to about 70% from the more typical 25%. Across the multidecade working life of the project the resulting savings after meeting capital etc costs would be in the ballpark of US$ 200 mn at circa y2k prices and moderate projections on costs -- long since exceeded -- net present value being about 1/7 of that. In short, I have had to deal with this link rather explicitly in an energy and sustainable development context.)

Further, that is why there is a clash between proposed large scale energy use reduction and associated implications for economic activity, i.e we could trigger a depression and even wars if we go about addressing climate change issues without thoroughly thinking through what we are doing.

6] I keep asking you to go through the thermodynamics. Only today do I find out the direction of the work is significant. Why is that?

PLEASE! Do better than that, man. Look back -- what do you think configuration has been about all along? Discussion of specific orientations and relative locations to make a functional as opposed to non-functional whole?

As to the notion that I should in effect write the pseudocode for clumping and configuring nanobots, on what is obvious from the very basic level description . . . that is plainly not needed, once you are willing to see that configuration counts on functionality of microjets, houses, bridges etc. And it counts for cells too.

7] By removing functionality, surely I have shown that it is not relevant.

You have precisely NOT removed the relevance of functionality. You have tried to substitute a case where you shift to an environment that makes it hard to observe flyability just above the vat -- and a shift to a suitable environment is an obvious fix for that, making possible the observation of the resulting macrostate: clumped vs flyable config.

But also, you substituted a test of electron microscope observability which then failed to distinguish functional from non functional clusters. On DNA, I pointed out that the use of testing techniques capable of observing chains would make your cases specific enough to show that entropy indeed falls when we have complex molecules with the criterion that they meet a specification.

I have already answered as to why Mr Lambert's remarks are irrelevant and in material part in error.

8] There is nothing in macrostates that requires a specification, or functionality. Just standard thermodynamic things like temperature and pressure.

The essential issue of macrostate definition is observability at macro-level. The variables and observations required are a matter of the specific situation in hand and can get quite esoteric.

Functionality in a specific context is one way to make such an observation that distinguishes a macrostate and allows us to do microstate counts. In the micro-jet case, comparatively few configurations of parts will be flyable. In the case of DNA and proteins etc, comparatively few linear combinations of monomers will function properly in a cell.

Functionality is an example of how a macrostate may be specified, and for instance is relevant to the inner workings of say a diesel engine, which is a practical approximation to the Carnot cycle.

To make the matter a stat mech one only requires us to generate an ensemble, empirically or as a thought expt -- cf Gibbs here. (In short you are ruling an arbitrary and indefensible datum line on observability.)

9] Back to the faulty-but-not-by-designed nanobots. They keep producing these non-flying, non-specified high entropy assemblies. I am curious how they do that, so I carefully study the assembly, then design a new set of nanobots that builds the same assembly. The assemblies are still non-flying, but now they are specified, so must be low entropy.

I pointed out that if the nanobots make a specified -- note the reference to "blueprint" -- observable macrostate, then the config component of entropy is low, as w is going to be small: at or close to the blueprint configuration, taking in the possibility of assembly error and tolerances etc. If the resulting state cannot be observationally discriminated from a clumped-at-random state, the entropy will be large, as W is high.

You will observe that the entropy is indeed a STATE function, depending only on: [1] can you observationally identify a macrostate so that microstates relevant to it can be reliably counted, [2] what is the count, w, [3] s = k ln w.
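That three-step recipe can be put directly in code. A toy illustration of my own (four labelled parts, with "functional" defined as one specific working order, versus "clumped" admitting any order):

```python
import math
from itertools import permutations

k_B = 1.380649e-23  # Boltzmann constant, J/K

def config_entropy(w):
    """Step [3]: s = k ln w, given a microstate count w from step [2]."""
    return k_B * math.log(w)

# Step [1]: identify observable macrostates. Here "functional" means the
# four parts sit in exactly one working order; "clumped" admits any order.
parts = ("A", "B", "C", "D")
all_orders = list(permutations(parts))              # 4! = 24 microstates
functional = [p for p in all_orders if p == parts]  # 1 microstate

# Step [2]: count microstates per macrostate.
w_clumped = len(all_orders)
w_functional = len(functional)

# Step [3]: compute s for each; the functional macrostate has lower s.
s_clumped = config_entropy(w_clumped)        # k ln 24
s_functional = config_entropy(w_functional)  # k ln 1 = 0
```

With real part counts the gap between the two w values is astronomically larger, but the bookkeeping is the same.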

10] I just do not get the difference in thermodynamics.

Cf. just above, and before that, going back weeks.

From my perspective, it seems that you have been looking for then recycling already long since answered objections rather than seeking a mutual understanding with points of agreement and disagreement, and so this discussion has become largely circular.

I respond that:

--> I have identified from the outset a case where we can plainly distinguish the scattered, clumped and functionally configured macrostates, and make w-counts.

--> The counts show that S decrements as we move towards the functional config.

--> The case also shows why it makes sense to distinguish dH and TdS terms, and to split up TdS into a clumping/thermal portion and a configurational portion, the "thermal' part relating to micro distributions of energy [e.g bonding in the chain regardless of how the chain is sequenced], and the configuration relating to the fact that not just any cluster of parts or monomers will do -- functionally specifying the configured state.

--> Strictly, as of that point the issue was over. But the concept of configuration and related energy flows became an issue.

--> I then discussed how configuration is a relevant and generally accepted part of the thermodynamics [Lambert notwithstanding] and have as time progressed, given more and more discussion and explanation, culminating in addressing the vector nature of work as opposed to the quantum of energy converted in doing the work.

--> The work having been done by our notional nanobots in a vat, or by real-world molecular machines in a cell, we end up in observable states, which can in principle at least be set in cells in an ensemble in phase or config space.

--> Counting the appropriate microstates we see that S falls as we move towards functional states. But, in the clumped cases in view [random and configured] bonding energies, vibrational energies etc are sufficiently comparable to be set to one side relative to the focus on configurational contribution, i.e. we can separate out configurational entropy and discuss it with profit, as did TBO 25 years ago -- and, as was well received by thermodynamically competent reviewers such as Robert Shapiro.

--> Shapiro's recent point that a golf ball does not play itself around a golf course simply because energy is added, is relevant: i.e. specific configuration counts, and that is tied to the vector nature of work, which imparts ordered, directed motion that accesses in this case meaningful locations.

11] The w-count falls whether we move to a functional configuration, or a non-functional one, if both are specified. Thus, "functional" is irrelevant, and we should only consider whether it is specified, when calculating the configurational entropy.

I repeat, I have used functionality as a particular, materially relevant type of specification.

12] putting a spot of human DNA on a Petri dish will not show us anything.

I gave an example of observability of the relevant config. Even many single-point mutations are lethal, and an at-random global DNA config is overwhelmingly most unlikely to grow on a petri dish. For, functional DNA exhibits highly specific codes and regulatory mechanisms.

13] Sounds like the complexity is irrelevant too, then, it is just the specification. If it is specified, the entropy is zero.

Not at all: complexity in the sense of embracing 500+ bits of information is relevant to whether such a functional config is likely enough to be accessible by chance to credibly attribute its origin to chance plus necessity only. Both parts are required for the explanatory filter.

They are also relevant to the point that there is a dramatic drop in entropy on going from clustered at random to functionally configured.
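For scale, a back-of-envelope sketch of my own of why the 500-bit figure functions as a threshold in this argument (10^150 is the rough cap on physical events commonly cited in these debates):

```python
import math

# A 500-bit specification singles out one configuration among 2^500.
config_space = 2 ** 500
p_single_trial = 1 / config_space  # chance of one random draw hitting it

# 2^500 is roughly 10^150.5:
decimal_orders = math.log10(config_space)

# Even granting a generous 10^150 random trials, the expected number of
# successes stays below one.
max_trials = 10 ** 150
expected_hits = max_trials * p_single_trial
```

That is, 500 bits is chosen precisely so that the whole supposed trial resource of the observable universe is not expected to find the configuration even once by chance.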

14] Name one instance of functional molecular machines of life at cellular level where we know the causal history. I know I cannot.

You have changed the subject (as usual on this point). We know from observation the causal story of many systems exhibiting FSCI.

In each and every case we observe, where FSCI exists, we see that the system is the product of intelligent agency. Every instance. And, given the issue of clumped vs functionally specified -- note the use of an adjective here -- states in sufficiently complex systems, we see why: random chance is maximally unlikely to get to a functional configuration. But agents, through use of creative imagination, knowledge, expertise and reasoning, eliminate most of the config space and consider only clusters that are likely to function as intended. They plan or program, in effect. They design, in short.

So they then go out and configure systems in a much smaller sub-space than the raw config-at-random space. Then, through troubleshooting -- a nontrivial exercise, I assure you from painful experience -- they access the targeted functionality. The assembly plan then drives manufacturing to create functional entities in that small region of the overall configured space.

Your resort to changing the subject is an implicit acknowledgement of the cogency of this point.

The inferential step I have taken is that, on an inference-to-best-explanation basis, and comparing the relative likelihoods of accessing such islands of functionality through chance clustering versus intelligent action, when I see FSCI, even when I do not directly know the causal story, I confidently infer to design.

So do you -- in cases where worldview level prior commitments challenged by this principle do not come into play. (And thereby hangs a long tale on the philosophy, rhetoric and politics involved, not just the raw science. This, I discuss here, as is always linked.)

GEM of TKI

The Pixie said...

Does Thermodynamic Entropy Depend on Mass Distribution?

This seems to be the most fundamental difference between us, so I want to focus on that for this post, and address other issues in a second post. For one thing, you seem to have drifted off into information and entropy, which is quite a different issue. This post is purely about the significance of mass distribution in thermodynamics.

For clarity, my position is that entropy is a measure of the dispersion of energy, rather than the dispersion of both energy and matter. To put it another way, to calculate the entropy using Boltzmann, you need only know about how energy is distributed; you need know nothing about mass distribution.

Now, of course certifications etc, while generically important, are subject to the stricture that no authority -- including Mr Lambert, you and yours truly -- is better than his/her underlying facts, assumptions and reasoning.
I agree. But here we have a situation where you claim the distribution of mass is a factor in the thermodynamic entropy, and I say thermodynamic entropy is just about distribution of energy. Now, we can see who can support his claim, but I have a world-class thermodynamicist clearly stating that I am right. Perhaps you should consider the possibility that you are wrong?

So, let us go back to basics.
Entropy is a metric of the disorder of systems at micro level, as is entailed by Clausius' classic:
ds >= d'Q/T.
d'Q is of course importation of random molecular scale motion [i.e. thermal energy].

Right there you build your conclusion into the first line, when you "go back to basics". Entropy is a measure of the distribution of energy at the quantum level, as Lambert clearly states. This is why Clausius' classic inequality is about energy.
This brings us to the Boltzmann formulation, where s = k ln w. That is, we identify a macroscopically observable state of a system, then do a micro-state count and calculate s.

In turn that implicates both energy and matter/mass distributions at the relevant micro levels ...

No, that does NOT imply "both energy and matter/mass distributions at the relevant micro levels". Clearly the Clausius inequality is about energy, and not mass. For the Boltzmann equation (including macrostates and microstates), Lambert and I claim that W is about distribution of energy, and not about how mass is distributed at all; you offer no reason to suppose otherwise. So how can you conclude from that that mass distribution is relevant?
... you do not have energy distribution without mass distribution...
Yes you can. Say we have 10 molecules, and 10 lumps of energy; there are numerous ways those 10 lumps of energy can be distributed among 10 molecules - especially when we consider vibrational, rotational, translational and other modes. It does not matter where those molecules are, they might be bound together in a crystal, or floating as a gas; you can still distribute the energy.
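That count can be made concrete. A sketch of my own, treating the lumps as indistinguishable quanta shared among distinguishable molecules, Einstein-solid style:

```python
from math import comb

def multiplicity(quanta, molecules):
    """Ways to distribute indistinguishable energy quanta among
    distinguishable molecules: the stars-and-bars count
    C(quanta + molecules - 1, quanta)."""
    return comb(quanta + molecules - 1, quanta)

# 10 lumps of energy over 10 molecules: C(19, 10) = 92378 arrangements,
# all available regardless of where the molecules physically sit.
w = multiplicity(10, 10)
```

The count depends only on how many quanta and how many molecules there are, not on the molecules' positions, which is the point at issue.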

More importantly, though, this is about whether the mass distribution is used to *calculate* entropy. I claim that we can calculate the entropy without knowing how (even in vague terms) the mass is distributed.

... and configurational entropy is still a very relevant concept starting with free expansion and diffusion.
And yet Lambert explains both free expansion and diffusion in terms of energy distribution only.

Entropy also implicates probabilities [and thus, per Robertson et al, information], as the fundamental principle on a macrostate is that all accessible microstates are equiprobable. And, macro-states are in fact therefore linked to the micro level.
So?

I would also observe that ruling a datum line and saying that once objects get "big enough" we will not use entropy concepts in discussing their behaviour falls of its own weight as special pleading: kindly identify "big enough," and what that means operationally, to see my point.
What does that relate to? At the molecular level, entropy can go up and down. At the macroscopic scale, the system is "big enough" that entropy only goes up. Are you really disagreeing with that?

As Wiki summarises in its Entropy article, and appropriately I might add:
In particular, did you read in that quote: "In recent years, entropy has been interpreted in terms of the 'dispersal' of energy"? I found nothing in there that I disagree with, or that supports your claim that the distribution of mass is relevant.

I note on this, that the reason we generally do not address configurational entropy of say a cluster of lego bricks vs a house made of said bricks, or how books in a library are organised, is that the quantitative measure of the difference is negligible or irrelevant for our purposes, not that the concept is inherently absurd.
Well the world-class thermodynamicist disagrees with you, and published a paper to back that up ( Lambert, F. L. Shuffled Cards, Messy Desks, and Disorderly Dorm Rooms — Examples of Entropy Increase? Nonsense! J. Chem. Educ. 76, 1385 (1999) ).

In short, there is an arbitrary datum line issue here, that Mr Lambert is missing.
Sorry, I am not getting this. Can you explain what you mean?

In that context, Mr Lambert's remarks are therefore by and large irrelevant to our discussion.
How can you say that? A major (and perhaps the major) fundamental difference in our claims is whether mass distribution directly affects thermodynamic entropy. If not, Sewell's claims about "carbon order" become nonsense, Thaxton's claims about configurations of proteins become nonsense. Lambert clearly states that thermodynamic entropy is about energy distribution only.

And yet you declare that irrelevant!
(It is also apparent that he is from one end of the spectrum of the debates on the links to information theory, and on that I will simply say that Brillouin, Robertson et al IMHCO, on objective grounds, have the better of the debate.)
We are debating the second law of thermodynamics, not information theory. I will stick with the world-class thermodynamicist.

In short, you have not settled the core issue by citing Mr Lambert as an authority. So, you need to re-address the two sides of the issue, and for that I here cite again Robertson's summary:
So you reject Lambert as an authority, and impose your own? Well, let us see how well Robertson supports your position. Hmm, not that well. I see talk about energy ("I shall distinguish heat from work, and thermal energy from other forms"), and plenty about information, but precious little about mass distribution. The best in the text you selected is "A thermodynamic system is characterized by a microscopic structure that is not observed in detail". Does microscopic structure refer to mass distribution or energy? Rather ambiguous.

--> In sum, there is a bridge to information theory, based on the foundational concepts of stat mech reasoning. And, that reasoning scales quite well -- indeed, Robertson has an interesting discussion on an imaginary air control system, where we do know locations of the jets; then he scales down to the point where we are reduced to statistical approaches, and deduces the whole project of stat thermodynamics from that . . .
Hmm, well I guess that explains why Robertson gives no support to your position that thermodynamic entropy depends in part on mass distribution. Seems you are now talking about information. I will agree that there is a link between some very specific information and thermodynamic entropy.

Now, forget information, and let us get back to the distribution of mass. Because I find nothing in your post that even attempts to address this fundamental issue.

You mention the textbook Sears and Sal (I think). I used Physical Chemistry, Second Ed., by PW Atkins. Section 21.1 is How to calculate the partition function. From there, for the translational contribution, q = (2.Pi.m.k.T/h^2)^(3/2).V. Note that the translational partition function depends on V, the volume, so that will account for free expansion. The approximate rotational partition function for a linear molecule is q = 2.I.k.T/(sigma.h[bar]^2). For the vibrational we have q = 1 / [1 - exp(-hcv/kT)], and for electronic, q = exp(-beta.epsilon^e), which in most cases is q = 1. Please note that these all relate to energy distribution, not mass distribution. As I am sure you are aware, the partition functions are vital in calculating W for S = k ln W. Atkins does not mention a partition function that corresponds to the distribution of matter. If you are right, then surely we need that? Can you tell me what that partition function is?
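As a numeric sketch of the translational term (my own illustration; argon's mass and the physical constants are standard values, and the thermal-wavelength form is equivalent to the expression above):

```python
import math

h = 6.62607015e-34  # Planck constant, J s
k_B = 1.380649e-23  # Boltzmann constant, J/K

def q_translational(mass_kg, T, V):
    """Translational partition function q = (2 pi m k T / h^2)^(3/2) * V,
    written as V / Lambda^3 with Lambda the thermal de Broglie wavelength."""
    thermal_wavelength = h / math.sqrt(2 * math.pi * mass_kg * k_B * T)
    return V / thermal_wavelength ** 3

# Argon (m ~ 6.63e-26 kg) in a 1 L vessel at 298 K: q comes out on the
# order of 1e29, i.e. an enormous number of thermally accessible
# translational states.
q_ar = q_translational(6.63e-26, 298.0, 1.0e-3)
```

Note the inputs: total volume, temperature, mass of one molecule; nothing about where the molecules sit within the vessel.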

Can you find a web page of a respected scientist who backs up your position that the mass distribution is required? If you can, then I suggest the following challenge... You e-mail Prof Lambert, and explain your position to him. I will e-mail whoever you find, and explain to him why I think he is wrong. You will need to find a scientist with an email address, of course, for this to work. I tried to find an e-mail address for the NUS people, with the intention of pointing out their error, but could not find one.

The Pixie said...

3] Insofar as the overall sum of the vector increments underlying d'W differs, the work done in one path necessarily differs from that in another path. The same quantum of energy [think of using up the same amount of fuel in a tiny fuel cell in the nanobots here] may be implicated in the two cases, but the end states are often observably -- macroscopically observably -- different in location, orientation and configuration. [E.g clumped at random vs a flyable jet.]
Yes, I get they will be different. I want to know what those differences are, and why they are different. That is what I mean by explaining the differences. I know you believe they are different, I just do not understand why. And you keep not explaining this.

Here we are having a discussion revolving around the difference in thermodynamics between two vats of nanobots, and I still do not know what you think those differences are. All you seem able to do is repeat that they are different. Surely you can see that that is what we are trying to establish.

We never did anything about vectors in thermodynamics, but I have some familiarity from school maths and physics. So please say why these vectors are different.

4] Vectors bring in spatial contexts and concepts into the picture. They may differ in magnitude and in direction.
Okay, yes. I got that about half way through writing that post.

In the case of random clustered jet parts, just sticking them together at random, in location cells of order 10^-6 m across, and in any old orientation, is vastly different from a specific, flyable configuration that locks parts to parts in functional ways tied to relative location and orientation. This, I have repeatedly discussed for weeks, literally.
But I have nanobots making pre-specified assemblies that are just as unlikely as yours, and other nanobots making random assemblies. In what way is the thermodynamics of rearranging a component "vastly different" for these assemblies, compared to yours?

Yes, you have repeatedly discussed this for weeks, but you never get to the thermodynamic differences in the process. We only get that it is different, and arguments about how the resultant assembly is different. Until you can explain, rather than assert, how these are "vastly different", I am afraid I will continue to think you are muddying the waters.

5] Work concept
Irrelevant.

6] PLEASE! Do better than that, man. Look back -- what do you think configuration has been about all along? Discussion of specific orientations and relative locations to make a functional as opposed to non-functional whole?
I think that has been about the entropy of the resultant assembly, rather than the entropy change during the assembly process.

Hmm, have I missed something here? I thought maybe we were discussing the thermodynamics of the rearrangement process (I am a chemist, it is what we do). Surely that was the point of the nanobots, that they do this process. Are we really only concerned with what comes out the end, and the vat and nanobots are just a "black box"?

You have precisely NOT removed the relevance of functionality. You have tried to substitute a case where you shift to an environment that makes it hard to observe flyability just above the vat -- a shift to a suitable environment is an obvious fix to that, to make the observation of the resulting macrostate: clumped vs flyable config?
And for my non-flying assemblies, moving them into an electron microscope environment is an obvious fix too, giving an observable macrostate. And indeed this is true of any random assembly; in the right environment, all have observable macrostates. What is the difference?

But also, you substituted a test of electron microscope observability which then failed to distinguish functional from non functional clusters.
That is right, the electron microscope observability fails to distinguish functional from non functional clusters. But the clusters are observable and discernable, so each is a particular macrostate (according to your thermodynamics, as I understand it).

Perhaps you could have another go at defining "functional". I am not clear why the nanoplane in a vacuum is functional but the non-flying, but observable assembly is non-functional. I can see there is a difference, but what the specific difference is, what rule we can apply to all situations to decide if a thing is functional in the thermodynamic sense, I still do not get.

8] The essential issue of macrostate definition is observability at macro-level. The variables and observations required are a matter of the specific situation in hand and can get quite esoteric.
Perhaps you can find a web page to back that up. My web link says otherwise.

9] I pointed out that if the nanobots make a specified -- note the reference to "blueprint" -- observable macrostate then the config component of entropy is low, as w is going to be at or close to the blueprint, taking in the possibility of assembly error and tolerances etc. If the resulting state cannot be observationally discriminated from a clumped at random state, the entropy will be large as W is high.

You will observe that the entropy is indeed a STATE function, depending only on: [1] can you observationally identify a macrostate so that microstates relevant to it can be reliably counted, [2] what is the count, w, [3] s = k ln w.

But I specifically mentioned two cases, one with assemblies produced without blueprint, so high in entropy, the second with the same assemblies, now made to a blueprint, so low in entropy. And yet you agree this is a state function, so both must have the same entropy. I feel this is contradictory. Can you explain this?

Also, just moments ago you were making a big deal about functionality. Now it seems specification is what is important. Again, there seems this contradiction.
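The "s = k ln w" bookkeeping both sides keep invoking can be made concrete with a toy count. This is my own illustrative sketch (the cell counts are arbitrary assumptions, not numbers from either party):

```python
import math

k = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_entropy(W):
    """S = k ln W, for a macrostate with statistical weight W."""
    return k * math.log(W)

# Toy count: 10 distinguishable parts scattered over 1000 location cells,
# versus locked into a single blueprint-specified arrangement
# (assembly tolerances ignored for simplicity).
W_scattered = 1000 ** 10  # each part free to sit in any cell
W_specified = 1           # exactly one configuration matches the blueprint

S_scattered = boltzmann_entropy(W_scattered)
S_specified = boltzmann_entropy(W_specified)  # ln 1 = 0

print(S_scattered, S_specified)
```

The arithmetic itself is uncontroversial; the dispute in the thread is over whether "matches the blueprint" is a legitimate, observable macrostate whose W may be counted this way.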

10] Pix: I just do not get the difference in thermodynamics.
I have identified from the outset a case where we can plainly distinguish the scattered, clumped and functionally configured macrostates, and make w-counts.
This does sound like the nanobots are really just a black box, so maybe I misunderstood your argument.

The counts show that S decrements as we move towards the functional config.
I really do not get the point of the nanobots; it all seems like a distraction. Just calculate the Sconfig at each stage, and who cares if they are nanobots or yeast or demons or whatever.

So it comes down to whether configurational entropy is a valid quantity in thermodynamics.

The case also shows why it makes sense to distinguish dH and TdS terms, and to split up TdS into a clumping/thermal portion and a configurational portion.
No, no, no. Assuming you are talking about Gibbs' deltaG = deltaH - TdeltaS, then that makes no sense at all. deltaH is a function of the entropy change in the surroundings, TdeltaS is a function of the entropy change in the system. It makes no sense at all to split them up into a clumping portion and a configurational portion.

Thaxton makes this same error in equation 8.5, describing the terms as:
(Gibbs free energy) = (Chemical work) - (Thermal entropy work) - (Configurational entropy work)
... indicating that they believe deltaH is the "chemical work", and thereby showing their own faulty understanding. See the derivation on Wiki, to see that deltaH comes from Sext.
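The reading of the Gibbs equation being argued here -- that at constant T and P the deltaH term carries the surroundings' entropy change, so deltaG is just -T times the total entropy change -- can be shown with numbers. The values below are hypothetical, chosen only to make the identity visible:

```python
# At constant T and P: dS_surroundings = -dH/T, so
#   dG = dH - T*dS_system = -T * (dS_system + dS_surroundings) = -T * dS_total
T = 298.0        # K
dH = -50_000.0   # J/mol: heat released to the surroundings (assumed value)
dS_sys = -100.0  # J/(mol K): the system becomes more ordered (assumed value)

dS_surr = -dH / T            # entropy gained by the surroundings
dS_total = dS_sys + dS_surr
dG = dH - T * dS_sys

print(dG, -T * dS_total)     # the two expressions agree
```

Here an ordering process (negative dS_sys) is still spontaneous (negative dG) because the heat dumped to the surroundings raises their entropy by more than the system loses.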

11] I repeat, I have used functionality as a particular, materially relevant type of specification.
You really need to find a very specific definition of "functional" as used in thermodynamics. In fact, I suggest you find a web page with it on. That would be so much more convincing. It will be interesting to see if you can find one. What is the definition in your physics textbook?

13] Pix: Sounds like the complexity is irrelevant too, then, it is just the specification. If it is specified, the entropy is zero.
Not at all: complexity in the sense of embracing 500+ bits of information is relevant to whether such a functional config is likely enough to be accessible by chance to credibly attribute its origin to chance plus necessity only. Both parts are required for the explanatory filter.
But not relevant to the second law. That is what we are talking about, remember.

They are also relevant to the point that there is a dramatic drop in entropy on going from clustered at random to functionally configured.
This would be so much more convincing if you had explained why, rather than just asserting it. Oh well.

14] You have changed the subject (as usual on this point). We know from observation the causal story of many systems exhibiting FSCI.
You said earlier: It also shows how it takes a lot of smarts to target a functional state, and get to it; both from my thought expt and as observed in the complexity of the functional molecular machines of life at cellular level. I might have got things wrong, but it sounded as though you were claiming we had observed the causal story of cellular machines, and knew it involved "smarts".

The inferential step I have taken is that, on an inference to best explanation basis, and comparing the relative likelihoods of accessing such islands of functionality relative to chance clustering and intelligent action, when I see FSCI, even when I do not directly know the causal story, I confidently infer to design.
Good for you. I do not, for reasons already stated.

So do you -- in cases where worldview level prior commitments challenged by this principle do not come into play.
Of course! I see design in places where a designer is known to be. I think my car is designed, and that is based, in part, on a certain knowledge of the existence of potential designers of cars in the right place at the right time. You look at a cell and see design. Great! You know as an article of faith that there was a potential designer of cells in the right place at the right time. Just worldviews...

Gordon said...

Pixie:

I will note on points again:

1] Does Thermodynamic Entropy Depend on Mass Distribution? This seems to be the most fundamental difference between us

The answer to that is obvious, starting with diffusion and free expansion, as pointed out. Yes, mass distribution [i.e. of particles etc] is relevant to the number of ways a macrostate can be achieved, and so is relevant to entropy.

2] you seem to have drifted off into information and entropy, which is quite a different issue

The "differen[ce]," unfortunately, lies in your opinion, not in fact. The link of course has been there, explicitly, in what I have said and linked, right from the beginning.

I contend, with reasons, that Robertson's core point -- that entropy, probability distributions and information are intimately connected -- lies at the heart of the issues.

With evidence, you may show him wrong, but that has to be after reckoning with his argument, which I have outlined. That, I have never seen you engage, only dismiss. That looks a lot like the fallacy of the closed mind, I am afraid.

So, instead, let us pause and look at some of the initial points he made, step by step:

a] "If I am able to perceive order, I may be able to use it to extract work from the system, but if I am unaware of internal correlations, I cannot use them for macroscopic dynamical purposes": That is, if one knows certain things about the micro level it can make a difference at macro-level, i.e information is tied to energy considerations.

b] "On this basis, I shall distinguish heat from work, and thermal energy from other forms ": That is, based on the degree of information in hand, from observational sources at macro-level, we can/cannot take advantage of micro level configurations in extracting work. In particular, heat is inferred to be the random distribution of energy across degrees of freedom at molecular etc levels, i.e it is a state of relatively little knowledge . . .

c] "the assignment of probabilities to a set represents information, and that some probability sets represent more information than others . . . if one of the probabilities say p2 is unity and therefore the others are zero, then we know that the outcome of the experiment . . . will give [event] y2. Thus we have complete information": premise is given, that probability distributions are tied to information, and the upper extremum is given, perfect knowledge.

d] "if we have no basis . . . for believing that event yi is more or less likely than any other [we] have the least possible information about the outcome of the experiment . . . . A thermodynamic system is characterized by a microscopic structure that is not observed in detail ": Observe the lack of information implications of the equiprobability of accessible microstates assumption. But in some cases we have more information than that, e.g through observing functionally specified behaviour. [A vortex would be an example, of order as opposed to organised complexity, under Prigogine's dissipative structures concept.]

e] "We attempt to develop a theoretical description of the macroscopic properties in terms of its underlying microscopic properties, which are not precisely known. We attempt to assign probabilities to the various microscopic states . . . based on a few . . . macroscopic observations that can be related to averages of microscopic parameters. Evidently the problem that we attempt to solve in statistical thermophysics is exactly the one just treated in terms of information theory. It should not be surprising, then, that the uncertainty of information theory becomes a thermodynamic variable when used in proper context." [Cf my always linked for the summary info theory calcs, and the book for the details. In these, HR first deduces the info theory results for entropy and partition function etc,t hen provides a stat mech interpretation/application. He then makes the judicious comment I have again cited.

--> In short, the relevance and importance of the info-probability-entropy link is on the table, and with a strong argument that goes back to fundamentals.
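Robertson's points c] and d] -- a certain outcome means complete information, equiprobable outcomes mean the least possible information -- can be illustrated with the standard Shannon uncertainty measure. A sketch of my own, not code from Robertson:

```python
import math

def shannon_uncertainty(probs):
    """H = -sum p_i log2 p_i, in bits; terms with p = 0 contribute nothing."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Point c]: one probability is unity -> complete information, zero uncertainty
H_certain = shannon_uncertainty([1.0, 0.0, 0.0, 0.0])

# Point d]: no basis to prefer any outcome -> equiprobable, maximum uncertainty
H_uniform = shannon_uncertainty([0.25] * 4)

print(H_certain, H_uniform)  # 0.0 bits and 2.0 bits
```

The equiprobable case is exactly the assumption made for the accessible microstates of a thermodynamic system observed only at macro level, which is the bridge Robertson builds between the two theories.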

3] here we have a situation where you claim the distribution of mass is a factor in the thermodynamic entropy, and I say thermodynamic entropy is just about distribution of energy. Now, we can see who can support his claim, but I have a world-class thermodynamicist clearly stating that I am right. Perhaps you should consider the possibility that you are wrong?

This is precisely the improper appeal to authority that I have remarked on. The issue is not whether you can find world-class opinion on one or the other side of an issue in controversy, but whether the assertions, underlying assumptions, and facts and logic hold up to scrutiny.

I have summarised why I think Mr Robertson et al have made a serious point, relative to generally accepted first principles, material facts and logical connexions. If I am wrong, why, specifically?

And, I think that it should be plain from certain common and important examples of increase of entropy of configuration, namely diffusion and free expansion, that loss of constraint on distribution of mass leads to spreading out and to a rise in entropy. The reason being, that once the macro-level state-constraint is released, the number of accessible microstates explodes, and the random behaviour of microparticles in the situation leads to spontaneous migration towards the predominant cluster of microstates, i.e. towards thermodynamic equilibrium. And that, by sheer weight of probability.
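The "explosion of accessible microstates" on releasing a constraint can be made quantitative for the free expansion case. A sketch of my own: if each of N independent particles can occupy proportionally more location cells when the volume grows, then W2/W1 = (V2/V1)^N, and S = k ln W gives the textbook result:

```python
import math

k = 1.380649e-23    # Boltzmann constant, J/K
N_A = 6.02214076e23 # Avogadro's number

# Free expansion into double the volume:
#   dS = k ln(W2/W1) = k ln((V2/V1)^N) = N k ln(V2/V1)
N = N_A  # one mole of ideal gas
dS = N * k * math.log(2.0)

print(f"dS = {dS:.3f} J/K")  # R ln 2, about 5.76 J/K per mole
```

No energy is exchanged in ideal free expansion; the entropy rise here comes entirely from the counting over locations, which is the point in contention.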

In the nanobots case, the decanting of parts into a vat allows that loss of constraint, then the work of clumping nanobots constrains the parts distribution in a certain way. The configuring nanobots constrain the state even more tightly, relative to a program that builds a flyable micro-jet. Flyability of course constrains the macrostate wonderfully well. W-counts for the relevant states shows that entropy falls as we move towards the more constrained states.

While artificial nanobots with such capacities are fictional at present, they are a reasonable projection of known technologies and violate no physical laws in their projected function. So, the thought expt is relevant.

And, in the case of the cell, we have existing "nanobots" that do perform in general the sorts of functions we have just described. E.g. plants are able to take in CO2 molecules and, with water etc and photons, transform these molecules into high-energy functionally specified molecules of life, supporting all other life forms.

Broadening the point, the relevant biomolecules are functionally constrained to a wonderful degree, in ways that are related to the distribution of monomers in polymer chains, i.e. distribution of mass. In particular, observe the relevance of the right chirality to functionality, which is of course an energy-linked process. And, there is but little difference in raw energy between being D or L rotatory in the monomer. But in life it is the right "hand" or no functionality.

--> The point in the main is therefore well established as of this juncture. I will therefore break at this point, then continue on further details in a second post.

GEM of TKI

Gordon said...

Continuing . . .

On further points of note:

4] Entropy is a measure of the distribution of energy at the quantum level, as Lambert clearly states. This is why Clausius' classic inequality is about energy.

Improper appeal to authority, again.

I have pointed out supra why Clausius's d'Q is about not just energy but mass distributions, linked to the nature of heat.

I have also shown why heat and probability and information and distributions of mass are all relevant to evaluating entropy.

Dismissal by blind appeal to demonstrably incorrect authority just does not cut it.

5] the Clausius' inequality is about energy, and not mass.

The Clausius d'Q is about heat, which is a measure of energy bound up in random motions of molecules etc. That is, so long as heat has in it kinetic energy terms, it is about what is happening to locations as well as to energy involved in that behaviour.

Such has relevance to: KE of vibrations, rotations and translations, thence to structural constraints on these possibilities for motion, thence to a case in point on the relevance of mass distribution to degrees of freedom and energy distribution. You simply cannot properly sever mass and energy at micro level.

This, I have long since pointed out, and this is not exactly new or controversial information in the field.

Going on from there, I have already long since and repeatedly shown through diffusion and free expansion how locational constraints are relevant to the number of accessible microstates. To apply that point to the locations in polymer chains that are functional relative to minimum-energy folding, thence bio-function, is plainly a reasonable extension.

Similarly, with the nanobots, I have shown that locational constraints directly affect W, for observable macrostates.

Citing that "Mr Lambert says and I agree" simply fails to address the empirical data and the logical issues.

6] Say we have 10 molecules, and 10 lumps of energy; there are numerous ways those 10 lumps of energy can be distributed among 10 molecules - especially when we consider vibrational, rotational, translational and other modes.

And, am I to assume that such modes have nothing to do with how the molecules in question are structured and/or tied together in crystal structures etc?

In short, you have inadvertently underscored my point.
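The "10 lumps among 10 molecules" count itself is standard combinatorics (the Einstein-solid style count, treating each molecule as a single oscillator mode -- a simplification of mine; more modes only multiply the count):

```python
from math import comb

def energy_microstates(quanta, oscillators):
    """Ways to distribute indistinguishable energy quanta among
    distinguishable oscillators (stars and bars): C(q + N - 1, q)."""
    return comb(quanta + oscillators - 1, quanta)

# 10 indistinguishable lumps of energy among 10 molecules
W = energy_microstates(10, 10)
print(W)  # 92378 distinct distributions
```

Either party could use this count; the disagreement is over whether the spatial arrangement of the molecules contributes further factors to W beyond such energy distributions.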

7] Lambert explains both free expansion and diffusion in terms of energy distribution only.

No, he does not!

He is forced to accept that the extension of the accessible volume for the particles in question affects their behaviour, which he describes in terms of shifting quantum levels. And, where do those shifting levels come from -- from the relaxation of locational, spatial constraints. (Indeed, as such constraints run out to the classical limit, that too reflects the same point.) QED, sir.

8] Big enough again . . .

My point is that the decision that something is "big enough" is an arbitrary one. Cf my remarks of yesterday.

9] citing WIki: In recent years, entropy has been interpreted in terms of the "dispersal" of energy.

And just what reference does "dispersal" have, but in part to spatial and locational configurations? QED, again.

10] the world-class thermodynamicist disagrees with you, and published a paper to back that up

Improper appeal for modesty in the face of claimed authority, on a point where the reasoning and facts are in dispute across schools of experts. That is why I have consistently pointed to the underlying first principles, facts and reasoned inferences connected thereto.

Mr Lambert is one thermodynamicist among many, and he makes one of many possible arguments, being able to persuade peers that this is at least worth considering. That is not a demonstration of its truth.

I have addressed the ideas above, and find that Mr Robertson [equally, a published thermodynamicist . . . and member of a school that differs from Mr Lambert's views] remains correct. You cannot simply cite one expert, then apply his datum line to rule out consideration of facts that may not fit his picture -- that is the sense of irrelevancy here: blind authority is not relevant to discussion of facts and logic on the merits.

Namely, on reasonable and factually anchored grounds, macrostates are tied to information and probabilities. These are in turn linked to not only energies but locations and motions of masses at micro level, thence w-counts.

I have simply applied these basic insights to the cases in view: nanobots and micro-jets as a thought expt, and, as you raised it, macromolecules of life.

As to the reference to Mr Sewell in this context, kindly re-read the excerpt from his writings at the head of this thread. Writings which, BTW, are equally peer-reviewed! For the point being made is in fact a commonplace: highly improbable and constrained C-atom distributions do not in our observation arise by spontaneous processes, for reasons linked to probability distributions, i.e. the issue of accessibility of microstates under a macrostate.

Indeed, he is in part discussing: diffusion of C-atoms in the solid state. [The difference between solids and liquids, in the sufficiently long-term is often a matter of degree. Solid diffusion and creep under load are two cases in point.]

11] you reject Lambert as an authority, and impose your own?

I am precisely not citing Mr Robertson as a basis for appealing to blind modesty in the face of authority. I am excerpting an argument relative to facts and logic, on the merits and am saying let us examine the fundamentals on the merits.

a] "I shall distinguish heat from work, and thermal energy from other forms", and plenty about information, but precious little about mass distribution.

--> You have artfully left off HOW Mr Robertson sets out to distinguish heat and work, which I discussed in the post just above this.

--> That how is intimately connected to both mass and energy distributions, as the relevant information is in the constraints on distributions and associated probabilities.

b] The best in the text you selected is "A thermodynamic system is characterized by a microscopic structure that is not observed in detail". Does microscopic structure refer to mass distribution or energy; rather ambiguous.

--> Not if you think about the nature of the energy distributions and related constraints: vibrations, rotations, and translations of what, where, and to what extent?

--> That is, mass and energy distribution are both in view once we look at just what is being discussed.

12] Robertson gives no support to your position that thermodynamic entropy depends in part on mass distribution.

REALLY! Consider the above, again, kindly, thinking about the nature, structure and behaviour of molecules. Then, go up to Brownian motion and the associated fluctuations, observing that the behaviour is that of extra-large molecules. As scale goes up further, the degree of observed participation reduces, as the scale gradually makes it hard to visibly push these molecules around. At no point is there a threshold where Brownian motion is suddenly cut off; just that our ability to observe it, thence its relevance, fades.

Then scale further by letting the particles become bigger: even becoming a swarm of jets. Because we can directly and relatively easily observe such a swarm, we do not normally treat them as a macrostate with equiprobable microstates or even non-equiprobable microstates as an extension. But the issue is: as we scale we see more or less accessible information and so we use different models. Robertson scaled from jets to molecules, I scaled from molecules to jets.

But let's extend: spaceships in intergalactic space, with sufficient numbers that we can only look on at the overall behaviour of the ships as a body, i.e statistically, observing from a telescopic distance. We may then find it useful to use macro-micro state reasoning again.

Actually, this has been done, one of my profs used to use this approach to model stars in globular clusters.

13] the translational energy depends on V, the volume . . . The approximate rotational partition function for a linear molecule . . . For the vibrational . . . for electronic, . . . these all relate to energy distribution, not mass distribution.

See my point?

You have repeatedly used terms that are most intimately connected to spatial distributions, locational constraints on, and associated behaviour of matter, including mass.

You cannot coherently sever energy and mass distributions in thermodynamic reasoning. The point is so elementary, once you ask what is translating, vibrating and rotating, that I find it hard to believe it is actually a subject for dispute.

As for references, I have long since given you online and on-paper examples. Brillouin was not exactly a non-entity in physics, and Mr Robertson is not exactly a non-entity, if that is what you want to look at. But more to the point, my discussion is on the basic facts in question: molecules have kinetic energies of rotation and vibration and translation because of how they are behaving spatially, starting with the expression for kinetic energy

e = 1/2 m*v^2

e - energy, m - mass having the energy, v -- the magnitude of its rate of change of spatial location.

To see where this comes from look back at kinematics and dynamics, and pardon the resort to basics here:

v^2 = u^2 + 2ax, a basic kinematics result.

Multiplying through by m/2:

1/2 m [v^2] = 1/2 m [u^2 + 2ax]

i.e. 1/2 m[v^2 - u^2] = m*a*x

but F = ma, N2L

so: 1/2 m[v^2 - u^2] = F*x

That is work done to change state of motion, displacing the object through x by applying force F, is change in kinetic energy, energy of the motion of a massive object.

This basic reasoning extends to the behaviour of molecules, suitably extended through quantum ideas.
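The work-energy derivation above can be sanity-checked numerically with arbitrary values (my own check, not part of the original exchange):

```python
# Check of the work-energy derivation:
#   v^2 = u^2 + 2*a*x  =>  1/2 * m * (v^2 - u^2) = m*a*x = F*x
m, u, a, x = 2.0, 3.0, 4.0, 5.0  # arbitrary mass, initial speed, acceleration, displacement

v_squared = u**2 + 2 * a * x           # kinematics result
delta_KE = 0.5 * m * (v_squared - u**2)  # change in kinetic energy
work_done = (m * a) * x                # F = m*a (N2L), work = F*x

print(delta_KE, work_done)  # both 40.0
```

Any choice of m, u, a, x gives the same agreement, since the identity is algebraic.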

Again, where did your physics background stop?

14] I get they will be different. I want to know what those differences are, and why they are different.

When different clusters of force vectors act through space, they end up creating different configurations, which in the case in view are not functionally equivalent. That leads to different observable macrostates -- note the link to Robertson here -- and thence different statistical weights and entropy numbers.

15] we are having a discussion revolving around the difference in thermodynamics between two vats of nanobots, and I still do not know what you think those differences are.

I am astonished at this claim: surely, the difference between [a] scattered particles undergoing Brownian motion, [b] randomly clumped parts and [c] parts so constrained and configured as to be a flyable micro-jet is plain, and long since described and discussed!

Of course, if you refuse to accept that the mere quantum of energy used is not sufficient to distinguish states, then that may lead to an inability to discern the obvious.

But then, the astonishing result that you cannot tell the difference between a vat full of scattered parts, a random and overwhelmingly probably non-functional clump of precipitated material, and a flyable jet speaks to the poverty of the underlying model of thermodynamics you have advocated.

16] I have nanobots making pre-specified assemblies that are just as unlikely as yours, and other nanobots making random assemblies. In what way is the thermodynamics of rearranging a component "vastly different" for these assemblies, compared to yours?

If the macrostate cannot be distinguished observationally, then the microstate count goes to the random clumped state -- if you have set up a bull in a china shop at micro level, all you are doing is augmenting random thermal effects.

If you specify and can observe a given state, then obviously we can count the statistical weight of microstates and see that S config has fallen. This I have repeatedly pointed out.

17] I thought maybe we were discussing the thermodynamics of the rearrangement process (I am a chemist, it is what we do). Surely that was the point of the nanobots, that they do this process. Are we really only concerned with what comes out the end, and the vat and nanobots are just a "black box"?

First, entropy is a state function, not a path function. The process of getting to a given state is not directly equivalent to the estimation of the entropy of the state. (Often, we substitute a fictional ideal process for getting to a state to estimate its entropy, thence to evaluate the change in entropy from state a to state b to c to whatever.)

I have repeatedly pointed out that the nanobots have to search and locate, then attach and move, exerting forces and expending energy in so doing. What is relevant is that this, with high confidence, leads to sufficient dissipation to more than make up for the resulting tightly constrained configured jet.

At each stage in the interaction I have explicitly addressed the accessible microstates of the jet parts in [a] scattered, [b] clumped and [c] configured macrostates. EXPLICITLY. I have shown that the w-counts fall as we move from a to b to c. I have then observed that we are seeing configurational entropy fall for the jet parts as they move through the three states.

18] for my non-flying assemblies, moving them into an electron microscope environment is an obvious fix too, giving an observable macrostate. And indeed this is true of any random assembly; in the right environment, all have observable macrostates. What is the difference? . . . . the electron microscope observability fails to distinguish functional from non functional clusters. But the clusters are observable and discernable

Again, note that we are not dealing with the possibility of inspecting microstates; but with observing macrostates and how that constrains the entropy, as Robertson summarised.

In the case of the clumped state, any clumping order will do, so the fact that at any one time the jet is in one clumped state or another does not affect the fact that there is a relatively large statistical weight for the resulting clumped state -- and the microscope is unable to discern b from c.

But, shift to the configured state and look at it fly -- the result is in a tightly constrained macrostate with a low w count. Thence we see again the entropy results. This is again something pointed out over and over and over. And, as long as the micro-jet is flyable that is tied to its configuration not to whether it is in an atmosphere where it can take off and do loops.

To observe the state, simply pick it up, put it in the air and let it fly. If it cannot, it is not in c.

Observability is of course at the heart of a macrostate, as Robertson identifies. For instance P, V and T are matters of observation and define the state of an ideal gas. The many microstates compatible with the P, V and T observations are the relevant statistical weight.

Nor is this a matter of "oh, I can find someone to say the obvious on the web!"

19] just moments ago you were making a big deal about functionality. Now it seems specification is what is important. Again, there seems this contradiction.

In each context I have pointed out the relevant kind of functionally specified complexity at work. In the case in view, specification is by inspection with an electron microscope that can discern clumped from scattered, but not configured from clumped. So it has to assign the observed entity only to the clumped state; and for that, it is not the particular config that counts, but the number of such configs compatible with clumped.

If we have an entity that an electron microscope can indeed inspect and assign to a specific config, then, if that is our relevant context, indeed the statistical weight of the config is low, i.e. unique.

But this is utterly out of the context of what has been on the table from the beginning: that by clumping then configuring to flyable condition, we can discern macrostates and do w-counts, giving the point that Sconfig is a sensible idea.

Sure, it is possible in principle to generate another config that specifies a different macrostate, but this does not detract from the point just shown -- jet parts could in principle be configured to make a car that moves but can't fly. That is a different config and macrostate still, with a low W count too, but that is beside the point.

20] deltaH is a function of the entropy change in the surroundings, TdeltaS is a function of the entropy change in the system.

And so you have neatly distinguished the two haven't you . . .?

I have simply pointed out that TBO are specifically interested in the system, and split TdS into clumped and configured parts. TBO are NOT in error, despite your confident assertions.

I have to leave for now, but the trend is clear enough, and so is the balance of the case on the merits. TBO are right, Sewell is right, Robertson is right, Brillouin is right.

GEM of TKI

Gordon said...

Continuing again:

21] Observability:

Again, macrostates are defined by observation, and function is one way in which we can make an observation. For instance, a jet plane flies or it does not.

That is in turn tied to the very specific constraints on its parts, which are in this case the subject of a nanobot exercise. This, we have repeatedly discussed.

The particular functionality in view is well identified and is [in principle -- this is a thought expt, where flyability would be a property of in effect giant molecules . . .] observable, and is connected to W counts. That is, we have a proper defn of a macrostate through the issue of functionality as a flyable microjet, or more accurately a population of same.

22] Complexity and the 2nd law

Again, a complex entity in the sense we are dealing with has multiple, contingent elements that can be in multiple states. Such an entity can therefore access a vast configuration space, and this affects W counts. Then S = k ln W. By imposing observable states as constraints on the config we see the shift from scattered to clumped at random [as long since discussed, that is what TBO are mostly talking about in their "thermal entropy" etc.; I substituted clumped to make the point clear between us, as you will recall].

As we go scattered to clumped to configured, S falls, while of course the dissipation triggered by the energy at large in the work to make the jet compensates.

Looking at the 2nd Law statistically, we see that the probabilistic issues on accessible microstates make it overwhelmingly improbable to get spontaneous clumping, much less functional config as a flyable micro-jet. Programmed work is capable of clumping and configuring, under intelligent direction, with far greater likelihood than spontaneous assembly. In so bringing about a more constrained state, the working entities, the nanobots, credibly dissipate sufficient energy into the wider vat to compensate for the local entropy reduction.
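The compensation claim can be put in simple second-law bookkeeping terms. The following is my own minimal sketch with hypothetical numbers, not figures from the vat experiment itself:

```python
def total_entropy_change(dS_system, q_dissipated, T_surroundings):
    """Second-law bookkeeping: the surroundings' entropy rises by
    Q/T when waste heat Q is dumped into them at temperature T."""
    return dS_system + q_dissipated / T_surroundings

# Hypothetical numbers: a local drop of 1e-20 J/K in the configured
# parts, paid for by 1e-17 J of waste heat dissipated at 300 K.
dS_total = total_entropy_change(-1e-20, 1e-17, 300.0)
print(dS_total > 0)  # the global 2nd law is satisfied
```

The point of the sketch is only that a local entropy reduction is consistent with the second law so long as enough heat is exported; it says nothing by itself about how likely the configuring process is.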

But equally, the only reasonable way to get to the configured state in the gamut of the observable universe is by agency, not chance plus necessity only. That is, by stat thermo-d reasoning we can see why it is that when we observe FSCI-based systems that we know the causal story for, invariably in our experience, they are produced by agency.

On the strength of the pattern of stat thermod reasoning, we can therefore also confidently assert that in cases of observed FSCI where we do not directly know the causal story, on inference to best credible explanation, they are produced by agencies.

Indeed, on the evidence and reasoning we have in hand and have reason to trust, however provisionally, we can infer that FSCI is an empirical signature of agency. (We all routinely accept this, and so the sudden rejection of this common-sense and scientifically credible claim in certain cases looks a lot like worldviews-level question begging and special pleading.)

And, I have long since shown in adequate details just why W drops as we move from scattered, to clumped at random to specifically configured states, in this case testable through flyability.
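To make the W-count claim concrete, here is a toy combinatorial sketch (all numbers hypothetical, chosen only to illustrate the scattered/clumped/configured hierarchy, not drawn from the actual micro-jet figures):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

# Toy model: 10 distinguishable parts, each of which may occupy
# any of 1000 lattice cells (hypothetical numbers for illustration).
cells = 1000
parts = 10

# "Scattered": each part may sit in any cell.
W_scattered = cells ** parts

# "Clumped": parts confined to a 20-cell neighbourhood, in any order.
W_clumped = 20 ** parts

# "Configured": one specific arrangement (the flyable one).
W_configured = 1

for name, W in [("scattered", W_scattered),
                ("clumped", W_clumped),
                ("configured", W_configured)]:
    S = k_B * math.log(W)  # S = k ln W
    print(f"{name:10s}  W = {W:.3e}  S = {S:.3e} J/K")
```

As the macrostate is more tightly constrained, W falls and so does S = k ln W; whether this positional count belongs in the thermodynamic entropy is of course exactly the point in dispute.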

23] I see design in places where a designer is known to be. I think my car is designed, and that is based, in part, on a certain knowledge of the existence of potential designers of cars in the right place at the right time.

H'mm, let's see:

--> It is only because you know independently that specific design capable agents exist that you can infer to design on a car? So, what happens with SETI where we do not know independently that designers exist, apart from hoped for empirical traces boiling down to FSCI?

--> In short, so long as designers are logically possible, then the inference to design is a reasonable prospect, and one investigated under credible scientific auspices.

--> So, now too, look at the origin of a microjet. The jet does not currently exist, but is possible relative to what we know about humans and human technologies. So it is credible that agents could, with sufficient motivation, implement the thought expt. So it is reasonable to discuss it hypothetically as a thought expt. In so doing we elucidate why, on probabilistic grounds, micro-jet parts are not likely to spontaneously select and assemble themselves into a flyable jet from the scattered state. Similarly, we see why just any old clumped state is unlikely to be a flyable jet. But a configured state that was set up to be flyable is likely to get there. And, we can observationally distinguish the states and count W for each, giving the entropy reduction and a credible mechanism to trigger it: agency.

--> Now, go back to the origin of life. On RNA world and metabolism first models we run into the same probabilistic barriers, as Shapiro compared to the problem of a golf ball playing itself around a course due to random forces [he focused on RNA world but the point applies to metabolism first too].

--> By contrast we know independently that agent action can give rise to information rich and functional structures. We also have no basis apart from arbitrary worldview assertions to rule that intelligent agents at the OOL are impossible or so improbable as to be tantamount to impossible. And, life manifests the FSCI rich structures just discussed.

--> So a hyp that OOL was most likely agent directed is reasonable relative to what we do know on agents and what they can and do do. So, why is it often ruled out of bounds? ANS: because of the prior assumption and institutional power of evolutionary materialism, a certain worldview that often likes to call itself "science."

--> Further to this, the assertion that methodological naturalism is a rule of science is in fact tantamount to the same Evo Mat, as it is saying in effect that we are not permitted (by whom?) to use entities in scientific models etc that do not fit in with evo mat. This is neither historically nor logically well warranted.

--> Historically, science's rules change and indeed this MN is the product of an attempted change itself.

--> Logically, we know that chance, necessity and agency act in our world so to rule one out ahead of time is to beg the question.

24] You look at a cell and see design. Great! You know as an article of faith that there was a potential designer of cells in the right place at the right time. Just worldviews...

Au contraire, I am looking here first at FSCI as a known, empirically tested, reliable artifact of agency. So, on observing the most sophisticated cases of FSCI observed in life systems, and on seeing the relevant statistical thermodynamics to get to such FSCI by chance and natural regularities only, I see that this is maximally unlikely in the gamut of the observed cosmos.

I therefore note that agency easily explains such FSCI. And, since I have not ruled out the POSSIBILITY of such agents ahead of time on OOL, I then make the simple inference that since FSCI on agency is far more likely than FSCI on chance and/or natural regularity, I have reason to infer to agency absent clear and convincing evidence otherwise.

The resort of many evo mat thinkers to an unobserved quasi-infinite wider universe as a whole in which subuniverses are scattered at random on the parameters making for a life habitable universe, and in that narrow subset, a further tiny set just happen to include the case of the sub-universe we observe is a long-winded way of conceding the point without wanting to acknowledge its force.

Overall it is plain that this thread has now been moving in circles and is largely unproductive, not worth the additional hours of time it has eaten up in recent days, as the pattern is now simply to revisit old grounds.

But in so going back over already adequately addressed grounds, you have amply thereby confirmed the force of the points I have long seen. For that I thank you.

If you have substantially new materials to raise I will respond further on points; otherwise I will simply note that the material is going in circles on already adequately addressed grounds.

GEM of TKI

The Pixie said...

Does Thermodynamic Entropy Depend on Mass Distribution

First of all it seems I was not clear enough, and for that I apologise. It is my claim that entropy is a measure of how well energy is distributed about the energy levels of molecules. This does mean there is some connection with matter; however, the important point here is that the randomness in how that mass is distributed does not directly impact on the entropy. This is why I often said it has no direct effect on the entropy, but I must admit I was too lazy to write that every time.

If you are using S = k ln W, then W is about energy distribution, not mass distribution (accepting that energy distribution does in turn depend on how the molecules are constrained, which in turn depends on their environment, i.e., their distribution).

Compare this to TBO. They are claiming that the spatial distribution of amino acids in a protein makes a contribution to W; that is, W depends in part on the randomness of the mass distribution.

Compare to NUS. They claim that the spatial distribution of molecules in a volume makes a contribution to W; that is, W depends in part on the randomness of the mass distribution.

I hope this distinction is clear, but please ask otherwise!

The answer to that is obvious, starting with diffusion and free expansion, as pointed out. Yes, mass distribution [i.e. of particles etc] is relevant to the number of ways a macrostate can be achieved and so is relevant to entropy.
But diffusion and free expansion can be explained purely through energy dispersion - as pointed out! No problem, though, here it is again (from Lambert):

The second big category of entropy increase isn’t very big, but often poorly described in general chemistry texts as "positional" entropy (as though energy dispersal had nothing to do with the change and the molecules were just in different ‘positions’!) It involves spontaneous increase in the volume of a system at constant temperature. A gas expanding into a vacuum is the example that so many textbooks illustrate with two bulbs, one of which contains a gas and the other is evacuated. Then, the stopcock between them is opened, the gas expands. In such a process with ideal gases there is no energy change; no heat is introduced or removed. From a macro viewpoint, without any equations or complexities, it is easy to see why the entropy of the system increases: the energy of the system has been allowed to spread out to twice the original volume. It is almost the simplest possible example of energy spontaneously dispersing or spreading out when it is not hindered.

From a molecular viewpoint, quantum mechanics shows that whenever a system is permitted to increase in volume, its molecular energy levels become closer together. Therefore, any molecules whose energy was within a given energy range in the initial smaller-volume system can access more energy levels in that same energy range in the final larger-volume system: Another way of stating this is "The density of energy levels (their closeness) increases when a system's volume increases." Those additional accessible energy levels for the molecules' energies result in many more microstates for a system when its volume becomes larger. More microstates mean many many more possibilities for the system's energy to be in any one microstate at an instant, i.e., an increase in entropy occurs due to that volume change. That's why gases spontaneously mix and why they expand into a vacuum or into lower pressure environments.


Actually maybe this was useful. Note that Lambert is talking about molecules, but he is not interested in how the molecules themselves are distributed. What is important is how the energy is distributed. And even then, it is not the spatial distribution of energy - we do not care where any of the molecules are - it is only about which molecule the energy is on, and how the molecule uses that energy. Nevertheless, matter is not completely irrelevant, because it does impact on the energy levels.
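For what it is worth, both pictures give the same number for Lambert's two-bulb experiment. A minimal sketch of the standard ideal-gas result for isothermal free expansion, which neither of us disputes:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def free_expansion_dS(n_mol, V1, V2):
    """Entropy change for isothermal free expansion of an ideal gas:
    dS = n R ln(V2/V1)."""
    return n_mol * R * math.log(V2 / V1)

# One mole doubling its volume, as in Lambert's two-bulb example:
dS = free_expansion_dS(1.0, 1.0, 2.0)
print(f"dS = {dS:.3f} J/K")  # ~5.76 J/K
```

The disagreement is not over this number but over whether it is best described as energy dispersal or as a change in positional possibilities.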

2] Pix: you seem to have drifted off into information and entropy, which is quite a different issue
The "differen[ce]," unfortunately, lies in your opinion, not in fact. The link of course has been there, explicitly, in what I have said and linked, right from the beginning.
I contend, with reasons, that Robertson's core point -- that information, probability distributions and entropy are intimately connected -- lies at the heart of the issues.

But the issue is whether mass distribution is important, and Robertson's core point would seem not to address that. You go on to quote Robertson at length - again - but I found nothing there that would lead me to suppose that the mass distribution has a direct impact on entropy.

With evidence, you may show him wrong, but that has to be after reckoning with his argument, which I have outlined. That, I have never seen you engage, only dismiss. That looks a lot like the fallacy of the closed mind, I am afraid.
I have yet to see any reason why I should want to prove him wrong, because I have yet to see whether he and I disagree. I have no disagreement with Robertson's claims (I have yet to see the relevance, to be honest), but you present them as though I do!

6] Pix: Say we have 10 molecules, and 10 lumps of energy, there are numerous ways those 10 lumps of energy can be distributed among 10 molecules - especially when we consider vibrational, rotational, translational and other modes.
And, am I to assume that such modes have nothing to do with how the molecules in question are structured and/or tied together in crystal structures etc?
That is exactly the point. You can calculate the entropy from the energy modes, without knowing how the molecules are bound together. If we believe NUS and TBO, then the positions of the molecules contributes to the entropy...

NUS: According to NUS, you have to divide the volume space into a discrete number of cells - of arbitrary size - and consider how the molecules might be arranged in that space. Note that they take no notice of crystal structure! How very odd. And even more bizarre is that you specifically raise crystal structure as an objection to my thermodynamics, but ignore that for their argument. Perhaps you can explain how their maths can be extended to include crystals. And note also how the NUS entropy changes when you pick a different size for those arbitrary cells.

TBO: Let us not forget that TBO do the calculation entirely differently. As far as they are concerned we should be looking at ordering within the molecule, rather than how the molecules are distributed. That means that the configurational entropy is identical whether the material is in the gas phase or is a crystalline solid. So in fact both methodologies that use the distribution of matter ignore "how the molecules in question are structured and/or tied together in crystal structures etc".

Lambert: Now my method (and Prof. Lambert's) is to look at how the energy is distributed. Sure, that distribution is affected by matter. If the molecules are in a crystal lattice the distribution will be quite different to when they are in the gas phase, because the molecules are much more constrained, and so a lot of energy levels will be out of reach. When you consider the energy distribution, you end up taking account of what phase the material is in, unlike the TBO and NUS approaches.

8] My point is that the decision that something is "big enough" is an arbitrary one. Cf my remarks of yesterday.
Yes, I got that bit. I do not get what you are referring to.

As for references, I have long since given you online and on-paper examples. Brillouin was not exactly a non-entity in physics, and Mr Robertson is not exactly a non-entity, if that is what you want to look at. But more to the point, my discussion is on the basic facts in question: molecules have kinetic energies of rotation and vibration and translation because of how they are behaving spatially, starting with the expression for kinetic energy
Right, this is describing how they are behaving - their energy distribution - not where they are - their mass distribution.

The Pixie said...

14]When different clusters of force vectors act through space, they end up creating different configurations, which in the case in view are not functionally equivalent. That leads to different observable macrostates -- note the link to Robertson here -- and thence different statistical weights and entropy numbers.
So again the difference in thermodynamics seems to be in the output, so the vats are really just black boxes.

15] I am astonished at this claim: surely, the difference between [a] scattered particles undergoing brownian motion, [b] randomly clumped parts and [c] parts so constrained and configured as to be a flyable micro-jet are plain and long since described and discussed!!!
Yes, but you keep skipping the important bit, what is the change in entropy in each sub-step as you go from [b] to [c]. Of course, if you are regarding the vat as a black box, we can move on.

Of course, if you refuse to accept that the mere quantum of energy used is not sufficient to distinguish states, then that may lead to an inability to discern the obvious.
Oooo, sneaky! But the real issue is not whether the states are discernable, but whether they are thermodynamically discernable. Let's not conflate the two.

16] Pix: I have nanobots making pre-specified assemblies that are just as unlikely as yours, and other nanobots making random assemblies. In what way is the thermodynamics of rearranging of a component "vastly different" for these assemblies, compared to yours?
If the macrostate cannot be distinguished observationally, then the microstate count goes to the random clumped state -- if you have set up a bull in a china shop at micro level, all you are doing is augmenting random thermal effects.
I can discern them with an electron microscope. As I said before.

If you specify and can observe a given state, then obviously we can count the statistical weight of microstates and see that Sconfig has fallen. This I have repeatedly pointed out.
Well, no, not that clearly, but at last we have it.

So those faulty nanobots building a random assembly, but always the same one, which we can discern with an electron microscope, is that low S or high S? Seeing as we have no specification, I must assume high.

Then this second lot, that are designed to make that same assembly, now there is a specification, so now the assemblies are low entropy, right?

Same actual assemblies, but sometimes high entropy, sometimes low. A contradiction.

By the way, this is the third time I have described this apparent contradiction, and I have yet to see an explanation. Perhaps you need to take a good look at your own understanding and address this problem. If there really is a contradiction (and I appreciate I may well have this wrong) then your claims fail. And if you continue to dodge this question, I will have to assume you know there is a contradiction inherent in your claims, and you just want to sweep it under the carpet.

17] First, entropy is a state function, not a path function. The process of getting to a given state is not directly equivalent to the estimation of the entropy of the state.
That is fine. So we are just dealing with a black box process, and really we should just concentrate on what comes out and forget about the nanobots.

I have repeatedly pointed out that the nanobots have to search and locate, then attach and move, exerting forces and energy in so doing. What is relevant in this is that this, with high confidence, leads to sufficient dissipation to more than make up for the resulting tightly constrained configured jet.
So now you want to discuss the thermodynamics in the vat again! Look, you decide what you want to do. Either we look at the thermodynamics of each step, or we black-box it.

I have shown that the W counts fall as we move a to b to c. I have then observed that we are seeing configurational entropy fall for the jet parts as they move through the three states.
Yes you have, according to your definition of W. The issue is whether that is relevant for thermodynamics.

18] In the case of the clumped state, any clumping order will do, so the fact that at any one time the jet is in one clumped state or another does not affect the fact that there is a relatively large statistical weight of the resulting clumped state -- and the microscope is unable to discern b from c.
That is not true. The faulty nanobots are producing very specific assemblies that are indeed discernable with an electron microscope from the random assemblies of b.

But, shift to the configured state and look at it fly -- the result is a tightly constrained macrostate with a low W count. Thence we see again the entropy results. This is again something pointed out over and over. And, as long as the micro-jet is flyable, that is tied to its configuration, not to whether it is in an atmosphere where it can take off and do loops.
This is why that definition of "functional" in thermodynamics is so important.

Observability is of course at the heart of a macrostate, as Robertson identifies. For instance P, V and T are matters of observation and define the state of an ideal gas. The many microstates compatible with the P, V and T observations are the relevant statistical weight.
Just as you can observe those assemblies with an electron microscope.

Oh, wait, they are arbitrarily different. And you refuse to offer a precise thermodynamic definition of "functional" so we will not realise it.

21] Observability:
Again, macrostates are defined by observation, and function is one way in which we can make an observation. For instance, a jet plane flies or it does not.

And an electron microscope is another way we can make an observation. Again, a precise definition of macrostate, and what is and is not observable would be good. I doubt it will be forthcoming.


22] Complexity and the 2nd law
This is all built on the claim that mass distribution impacts on entropy directly, so stands and falls on earlier arguments.
________________________
There were a number of issues that I brought up (or repeated) last time around and that you seem to have missed. I want to collect them together here to draw your attention to them and to emphasise the importance I attach to them (then, in the unlikely event that they are not addressed next time around, I will know you are deliberately dodging them).

(1) As I am sure you are aware, the partition functions are vital in calculating W for S = k ln W. Atkins does not mention a partition function that corresponds to the distribution of matter. If you are right, then surely we need that? Can you tell me what that partition function is?
Specifically, this is how the configurational entropy of NUS and TBO fits into Boltzmann's maths. I do not think any such partition function exists; I invite you to prove me wrong. In particular, this is vital to TBO's claim that you can separate out Sconfig. You can only separate it out if it is there in the first place.
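For reference, the standard route from a partition function to an entropy can be sketched for the simplest possible case, a two-level system. This is my own minimal illustration of the machinery Atkins describes, not a model of TBO's Sconfig:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def two_level_entropy(eps, T):
    """Entropy per particle of a two-level system (levels 0 and eps),
    from the canonical partition function Z = 1 + exp(-eps/kT),
    via S = k ln Z + <E>/T."""
    beta = 1.0 / (k_B * T)
    Z = 1.0 + math.exp(-beta * eps)
    E_avg = eps * math.exp(-beta * eps) / Z
    return k_B * math.log(Z) + E_avg / T

# High-temperature limit: both levels equally likely, so S -> k ln 2.
S_hot = two_level_entropy(1e-22, 1e6)
print(S_hot / (k_B * math.log(2)))  # close to 1
```

Every term here comes from the energy-level spectrum; the question on the table is where a matter-distribution partition function would enter such a calculation.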

(2) But I specifically mentioned two cases, one with assemblies produced without blueprint, so high in entropy, the second with the same assemblies, now made to a blueprint, so low in entropy. And yet you agree this is a state function, so both must have the same entropy. I feel this is contradictory. Can you explain this?

(3) The essential issue of macrostate definition is observability at macro-level. The variables and observations required are a matter of the specific situation in hand and can get quite esoteric.
Perhaps you can find a web page to back that up. My web link says otherwise.

You make some significant claims about what counts as a macrostate, but there is nothing to back that up (I cited Wiki to back up my position).

(4) You really need to find a very specific definition of "functional" as used in thermodynamics. In fact, I suggest you find a web page with it on. That would be so much more convincing. It will be interesting to see if you can find one. What is the definition in your physics textbook?
This has been going on so long. Your use of "functional" seems just a bit too flexible, and perhaps I should not be surprised you do not want to be pinned down! And I feel sure you will not find any web site to help you, unless it was written by an IDist or creationist. You are talking about standard thermodynamics, as you learnt in college, right? So surely there will be something out there?

The Pixie said...

I tend to ignore the off-topic stuff, just because life is too short, and my posts too long already, but there were a few things I thought I would say this time.

23] --> It is only because you know independently that specific design capable agents exist that you can infer to design on a car? So, what happens with SETI where we do not know independently that designers exist, apart from hoped for empirical traces boiling down to FSCI?
I happen to believe that in a universe this big, ETI is pretty likely, even if I do not know it for certain. Worldviews again, I guess. That said, I think SETI is extremely unlikely to find anything.

--> In short, so long as designers are logically possible, then the inference to design is a reasonable prospect,and one investigated under credible scientific auspices.
Well then go and do the research. The SETI people have made a prediction about what they expect to see, and have then gone looking for that. IDists are free to do just that. Dembski makes plenty of money from books that could be spent on ID research if he was really interested in it.

--> Now, go back to the origin of life. On RNA world and metabolism first models we run into the same probabilistic barriers, as Shapiro compared to the problem of a golf ball playing itself around a course due to random forces [he focused on RNA world but the point applies to metabolism first too].
Note the OOL researchers are doing OOL research. This is why it wins over ID for me (in part).

--> So a hyp that OOL was most likely agent directed is reasonable relative to what we do know on agents and what they can and do do. So, why is it often ruled out of bounds? ANS: because of the prior assumption and institutional power of evolutionary materialism, a certain worldview that often likes to call itself "science."
Personally, I have no problem with ID research being scientific. As it is today, that is not the case. No IDists seem that interested in doing science - that is, proposing a specific hypothesis, drawing predictions, testing those predictions. ID is a political movement, which cannot propose a specific origins hypothesis for fear of being dropped by the creationists (if they go for 4 billion years of life) or being labelled creationists (if they go for 6000 years). Thus, they are reduced to anti-evolution arguments, such as IC, CSI and EF (note that ID does not predict IC; there is no reason to suppose design will necessarily lead to an IC system).

--> Further to this, the assertion that methodological naturalism is a rule of science is in fact tantamount to the same Evo Mat, as it is saying in effect that we are not permitted (by whom?) to use entities in scientific models etc that do not fit in with evo mat. This is neither historically nor logically well warranted.
Really? Seems science has done quite a lot despite that handicap in the last century or so.

I would be interested to hear what you think would be a better alternative.

A question I often ask IDists is: What are good criteria for deciding what is taught in schools as science? Some say they agree that ID should not be taught as science. The rest consistently duck the question. The problem, of course, is that either you restrict science education to mainstream science (which excludes ID), or you allow everything in (or perhaps just popular theories, like UFOology, astrology, etc.) or you admit it is just your personal subjective choice.

So you want to overthrow MN... How do you think science should be done?

--> Logically, we know that chance, necessity and agency act in our world so to rule one out ahead of time is to beg the question.
But of course we do not. Archaeology, forensic science and, as you already mentioned, SETI are founded on agency, so to claim MN rules it out is, well, dishonest really, given that you know full well that SETI does not.

24] Au contraire, I am looking here first at FSCI as a known, empirically tested, reliable artifact of agency.
Why has Dembski or indeed anyone not validated this test? If you really have a reliable way of detecting design, validate it with a wide range of designed and not designed objects and events, and see how it does. No IDist has shown any interest in doing this. Does that not sound alarm bells for you?

So much easier to moan about the rules than to do the leg work.

The resort of many evo mat thinkers to an unobserved quasi-infinite wider universe as a whole in which subuniverses are scattered at random on the parameters making for a life habitable universe, and in that narrow subset, a further tiny set just happen to include the case of the sub-universe we observe is a long-winded way of conceding the point without wanting to acknowledge its force.
I think that is a possibility. Are you ruling it out? On what basis?

Gordon said...

Pixie:

I will be brief, as there is very little to add today, the material overnight basically recirculating already addressed points. (I hoped -- turns out there was more than I intended at first, as I wrote . . .)

The major point of interest is evidently in response to my showing the intimate link between motion and kinetic energy, which covers translation, vibration and rotation:

1] P: it seems I was not clear enough, and for that I apologise. It is my claim that entropy is a measure of how well energy is distributed about the energy levels of molecules. This does mean there is some connection with matter, however, the important point here is that the randomness in how that mass is distributed does not directly impact on the entropy.

Now, I appreciate and accept the apology.

In fair comment I also must note on the main point at stake, that I have had to first explicitly go back to O-A level physics to show that change in KE is the work done by a force on a moving body, to mobilise this much responsiveness.

Let us move up a level.

Kinetic energy of rotation is tied to spatial distribution of mass around a centre of rotation [1/2 I omega^2, omega being angular velocity and I moment of inertia, dependent on sum of m.r^2 elements]. Similarly, the energy of vibration is in effect the sum of energy stored in "climbing" the "walls" of a potential well and the kinetic energy of the particle as it moves in the well. [That is, we see here the roots of the classic quantum problem of the particle in a box, which for instance extends to the quantum view of free expansion.] In both these cases, location plays a significant, inextricable role. There is not just "some" connexion, location and its rate[s] of change in light of the interaction between forces and inertias and fields of force/potential barriers are at the heart of energy-mass issues; including at micro-level.
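The mass-location dependence just described can be made concrete in a few lines (an illustrative sketch using arbitrary point masses, nothing more):

```python
def moment_of_inertia(masses_and_radii):
    """I = sum of m * r^2 for point masses about a rotation axis."""
    return sum(m * r**2 for m, r in masses_and_radii)

def rotational_ke(I, omega):
    """KE = (1/2) * I * omega^2."""
    return 0.5 * I * omega**2

# Two identical point masses: doubling their distance from the axis
# quadruples I, and hence the KE at the same angular velocity.
I_near = moment_of_inertia([(1.0, 0.1), (1.0, 0.1)])
I_far = moment_of_inertia([(1.0, 0.2), (1.0, 0.2)])
print(rotational_ke(I_far, 10.0) / rotational_ke(I_near, 10.0))  # 4.0
```

That is, where the mass sits relative to the axis is not incidental to the rotational energy; it enters through I itself.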

Further to this, as Robertson points out, the degree of constraint on the randomness with which matter and energy are distributed at micro-level is in fact tightly coupled to the definition of the relevant microstate and the resulting statistical weight thereof. Thence, to the entropy number for the macrostate through S = k ln W or the equivalent [-k times the sum of pi ln pi]. In turn, the entropy number is demonstrably connected to the informational view on the matter: the degree of constraint we have on the randomness of the micro-level behaviour of a body is linked to the observations on macro-level variables [which in the general case are what we have access to]. Where we have more or less constraint, we can treat the body with more or less precision in extracting energy from it in the form of work, say.

That is, the degree of observed constraint on the degree of randomness of a body or system at micro level is highly relevant to our estimation of its entropy -- indeed, it is bound up in the core of the concept: dS >/= d'Q/T, where d'Q -- heat flow -- is in effect importation into the system of random molecular motion.
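
As an illustrative aside [a toy example of my own, not drawn from any of the cited authors], the equivalence of S = k ln W and the -k sum of pi ln pi form, for equally weighted microstates, can be checked in a few lines of Python:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def entropy_boltzmann(w):
    """S = k ln W, for a macrostate with W equally weighted microstates."""
    return K_B * math.log(w)

def entropy_gibbs(probs):
    """S = -k * sum(p_i ln p_i), over the microstate probabilities p_i."""
    return -K_B * sum(p * math.log(p) for p in probs if p > 0)

w = 10 ** 6  # a (tiny!) illustrative microstate count
# With every microstate equally probable [p_i = 1/W], the two forms agree:
assert math.isclose(entropy_boltzmann(w), entropy_gibbs([1.0 / w] * w))
```

Note too that a single accessible microstate [W = 1] gives S = k ln 1 = 0, the statistical reading of the third-law reference point.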

Temperature of course is a metric of average random kinetic energy per accessible degree of freedom of the micro-particles in a given body. [I use "accessible" here to reckon with the issue of freezing out of degrees of freedom due to quantum step-size effects.]

You will therefore understand my lack of esteem for Mr Lambert's remarks as you have excerpted them, as the sort of considerations above have plainly not been properly addressed by him. Mass-locational and other related issues tied to its rates of change are deeply connected to entropy. For instance, let us look at how he addresses free expansion:

"the energy of the system has been allowed to spread out to twice the original volume. It is almost the simplest possible example of energy spontaneously dispersing or spreading out when it is not hindered. "

Observe the highlighted, which reveals the fatal defect in the chain of thought. For, the energy -- in this case, properly, the kinetic energy of translation of particles of an ideal gas or the quantal gas that links analytically to it -- "spreads out" precisely because the locational constraint on the relevant particles has been loosened. Thus, there are more accessible ways in which microstates consistent with the new macrostate can be formed, thence entropy rises. This particular case was discussed in brief in the long since linked online article by Gary L. Bertrand, which you never responded on.
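
The cell-count reading of free expansion can be made concrete with a short Python sketch [my own illustration, assuming an ideal gas whose accessible locational cells per particle simply scale with volume]:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
N_A = 6.02214076e23  # Avogadro's number, 1/mol

def delta_s_free_expansion(n_particles, volume_ratio):
    """Cell-counting view of free expansion: each particle's accessible
    locational cells scale with V, so W_new / W_old = ratio**N and
    dS = k ln(ratio**N) = N k ln(ratio)."""
    return n_particles * K_B * math.log(volume_ratio)

# One mole, volume doubled: since N_A * k = R, this recovers the
# classical dS = n R ln 2, about 5.76 J/K.
ds = delta_s_free_expansion(N_A, 2.0)
print(round(ds, 2))  # -> 5.76
```

The entropy rise here comes entirely from loosening the locational constraint; no heat need flow at all, which is the point being pressed.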

In the cases mainly in view in this discussion, the micro-jet thought expt, the macromolecules of life, and diffusion and linked cases, locational issues in various forms play a role in constraining and/or identifying the relevant macrostates of interest. Associated with these macrostates are microstate counts, which in turn give rise to entropy numbers. Mere denial of the connexion or assertion of arbitrary datum lines is not sufficient to undermine the points being made and the inferences drawn therefrom.

You may in this connexion find the discussion here interesting, excerpting:

"Paul Flory [1] states that "…perhaps the most significant structural characteristic of a long polymer chain… (is) its capacity to assume an enormous array of configurations." This means that a polymer is really defined by a dynamic feature that is closely associated with thermal motion of covalently or ionically, or even "metallically", bonded molecules. Then the definition of a polymer, as one of the three branches of Materials Science is unique in that it does not depend on static atomic features [2], but rather on a dynamic molecular feature. In fact, we can consider that chain molecules of pure sulfur display most of the structural and dynamic features that we associate with polymers and plastics and can therefore be classified as polymers . . . . since the "capacity to assume varied configurations" relies on spatial degrees of freedom, a polymer, by nature, must have a lower mass-fractal dimension compared to the space in which it is embedded. That is, a flexible chain that does not fill space can assume different configurations, that is has a non-zero configurational entropy, and is therefore a candidate to be a polymer, while a rod, which has no configurational entropy, or a protein in its native state, with a fixed configuration, are not candidates to be polymers . . . . polymers never occur in the purely crystalline state and usually display fairly complex morphologies that are related to their "structure defined by dynamics" as well as the logistics of packing chain molecules in crystalline motifs. These make an understanding of the morphology of polymers more complicated than metals and ceramics, though a much richer subject area. Properties and structure are related to a much wider range of size scales (structural levels) in polymers compared to metals and ceramics. 
A semi-crystalline polymer displays vital structural features on the atomic, nanometer, colloidal, and micron scales simultaneously and the mechanical response of this material is largely determined by the interaction between these different length scales in a dynamic sense."

Of course this author speaks to folding and refolding as relevant configurational parameters to the definition of industrially relevant polymers, and to their properties and behaviours. [He is using polymer in a rather restricted special sense, one in which proteins of fixed chemical chain composition are not polymers. In the more usual sense, they of course are, as macromolecules.]

Proteins of course are capable of diverse folding -- hence the prion disease problem [e.g. mad cow disease], whereby the misfolding propagates and is a threat to life. More to the point, proper folding and functionality are intimately related to the sequential composition of the chain, i.e. a different sense of configuration, which constrains the functionally observable state. Thence we can assign a macrostate and, through appropriate investigations, study the number of micro-configs that are compatible with that state, i.e. estimate the chain-compositional configurational entropy. Similarly, as DNA functions as a life-code-storing medium, we can define linked functional states for DNA -- a factor often used in the relevant lab investigations. Such functions are associated with both information manipulation and with energy-linked patterns of behaviour in life molecules.

So, TBO [and other then current OOL investigators who made similar calculations] again are borne out.

I add to this, that Bradley of TBO is a polymer expert, and that his remarks here are worth a further look.

On the thought experiment, we can observe differing degrees of constraint on the locations of micro-jet particles in three macrostates: [a] scattered, [b] clumped at random, [c] functionally configured as a flyable jet. W falls as we move through the states, and we can assign appropriate configurational entropy numbers. In particular, observe that the energy of formation of the two clumped states from the scattered one -- cf. the different stages in the thought expt above -- will be similar enough that it makes sense to break out a configurational term from the clumping term. [In other words, the raw work to clump is about the same in both cases; the difference is that in state c a special, separately identifiable clumping has been targetted.]

One way -- there is no "have to" here [pace your remarks on the NUS students] -- to do that W-count is to follow Gibbs and slice up the configuration space of the vat into locational cells of appropriate size [basically, small enough to contain just one part]. Then we do our counts for the states [a], [b] and [c]. It is intuitively obvious -- once one has studied permutations and combinations -- that the W-counts fall dramatically as we move from a to b to c.
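
That cell-slicing count can be sketched in a few lines of Python [a toy one-dimensional "vat" of my own devising, purely for illustration; the real thought expt is of course three-dimensional and far larger]:

```python
from math import comb, factorial

M, N = 1000, 10  # M locational cells in the "vat", N distinguishable parts

# [a] scattered: any N distinct cells, with the parts in any order
w_scattered = comb(M, N) * factorial(N)

# [b] clumped at random: parts confined to some contiguous run of N cells
# [M - N + 1 possible runs in one dimension], in any internal order
w_clumped = (M - N + 1) * factorial(N)

# [c] configured: one specific run AND one specific internal arrangement
w_configured = 1

# W falls dramatically from [a] to [b] to [c], so S = k ln W falls too
assert w_scattered > w_clumped > w_configured
```

Even with these toy numbers, the scattered count exceeds the clumped count by a factor of roughly 10^20, and the configured state is a single point in the space; the qualitative conclusion does not depend on the fine details of the slicing.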

Thus, we can -- again and again -- see that an entropy of configuration is a coherent, empirically relevant and useful concept. It points to the issue that chance plus necessity without intelligent direction are maximally unlikely to achieve FSCI states, and so where we see such states -- given the point that in every case where we know the causal story directly this is so -- on an inference to best, empirically anchored explanation, FSCI is the likely product of intelligent agency.

2] you keep skipping the important bit, what is the change in entropy in each sub-step as you go from [b] to [c].

Not at all. I keep pointing out that entropy is a STATE function, not a path function. It is therefore appropriate to ask: can we identify certain states, a, b, c? ANS: Yes. Can we do W-counts for these states? ANS: Yes, again. Can we therefore assign entropy numbers? ANS: Yes, yet again. And S falls from a to b to c. QED.

How the states are achieved is strictly irrelevant to that; though indeed, stepwise in the process of transforming b to c, the W-count falls as the parts fall into place for the emerging state c.

For in the intermediate micro states, step by step, the number of accessible configurations is partly at-random, partly constrained by the functional state to be, with the latter fraction steadily rising until the transformation is complete. [Think of the completion of a jigsaw puzzle at sub-micron scale.]

3] Can an electron microscope cleanly distinguish macrostates b and c?

I have picked this point out to highlight how an already adequately answered point is being brought back to a fruitless circle.

In principle, for the microjet, the electron microscope can inspect its surface features. That suffices to identify that -- more or less -- we have a clumped entity. But there are many states that could LOOK like a jet but will not fly -- i.e. function -- like a jet. So the electron microscope cannot adequately distinguish states b and c. Assuming all parts are clumped, it can distinguish a and b, showing a fall in accessible locational cells in the config space, thus in W-count and entropy. But it cannot show that we have achieved state c; the flyability test can.

By the way, I made a simple observation that we have reason to believe that Brillouin's remarks on the compensating creation of entropy in the other parts of the overall system apply to the assembly of a microjet. That was all that I wished to note on, on the wider 2nd LOT point -- again.

And this was already pointed out, in detail above. Why has it come back up as if the answer had not already been pointed out at least twice before? [There are many other similar cases, which I will not go over yet again to predictably similar effect.]

4] Evo mat and ID stuff.

On this, we could have a long debate on major phil and policy issues and on the power of institutions and paradigms. I have already referred you to a relevant history of sci exploration that shows that meth nat is not an adequate definition of sci method [in fact there is a far broader issue on the problem of demarking science/non-science; there are no reliable criteria]. The areas of great sci advance over the past 100 y or so are of course precisely those areas in which MN does not have a particularly pernicious effect -- especially areas where the observations and inferences are subject to lab and observatory work in the direct present. Where the big problems lurk is in the historical reconstructions. [Even the Big Bang models ran into heavy headwinds for decades, because of the phil implications of an evident beginning to the observed cosmos.]

Similarly, evo mat runs into deep hot water in addressing the credibility of mind, as also long since linked.

5] Sci teaching in schools

On the issue of teaching in schools, the answer is simple actually:

--> First, why sci ed? Actually, it is why sci-math-IT-med and technology education; we should see the whole picture. These things are of great cultural importance and intrinsic value. Much of how we work and produce wealth is tied to them.

--> Besides, they are interesting if properly taught. Just this past week-plus, my 8 yo son and I were playing with paper wings [of proper form; next step, full-blooded wing construction, it seems . . .] and gliders, opening up studies on balance, centres of pressure, gravity and stability, lift-drag effects, controllability [a whole career in itself!] etc etc etc. Now what was interesting is that he did not at first spot this as "physics" [which he is eager to study, it seems . . . maybe his dad has something to do with that . . .]

--> This brings to the fore my preference for inductive approaches to studying and learning in general, and science in particular. Cf. my teacher's prep manual for basic sci edu here.

--> As the primer discusses in brief, a fair presentation of the relevant, accurate history of science will undercut the C19 rationalist reconstructions that too often dog school science. [E.g. the Galileo tale is far more complex than is usually presented . . . I also have a pet peeve with how Aristotle is often caricatured today, but that is a broader matter . . .]

--> Awareness of this accurate history will help avoid simplistic, agenda-serving, worldview question-begging attempted redefinitions of science such as is implicit in MN. Indeed, there is no one THE scientific method. Just ask Feyerabend and co on that.

--> Some awareness of basic worldviews thinking and alternatives will help us see that science has been as is being practiced by people within all sorts of worldviews.

--> Some exposure to basic epistemology and phil of sci will help us appreciate the limitations and strengths of scientific thinking, and how it evolves across time -- by collective intelligent design no less. [Cf my remarks in my phil toolkit link, often made. I have often taught 4th form students a simple form of this, starting wth the idea of the logic of affirming the consequent and scientific testing and its limitations.]

--> Within this context, the history of major theories in biology and the phil background involved can be fairly addressed, leading to a more balanced presentation on the OOL and macroevolution issues -- along the way not confusing the micro and macro levels, through awareness of the information generation issues connected to the point. (This is what DI advocates in its "teach the controversy" recommendation -- learn MORE on evolution, more accurately and in a more balanced way, not less.)

--> The actual issue of design in science is also far broader than evolution. At phil level, chance, deterministic cause-effect bonds under natural forces, and agency all can act, and none should be ruled out a priori based on begging worldview-level questions. [Think about the link between microbiology and pharmacology here, and the implications of the note that pharmacology studies poisons in small doses, including forensic ones.]

--> In physics, for instance, the pendulum, the coil or spiral or bar spring, the whirling ball on a string, lenses and mirrors, electric circuits, molecules studied spectroscopically, etc, are as often matters of technology as they are matters of undisturbed nature. For that matter, an experiment by definition is an artificially constructed situation for studying natural phenomena. So if we see that we are studying underlying natural forces and phenomena in experimental, observational and technological studies, we can have a healthier, more relevant and more interesting approach.

--> Indeed, the very last high school level class I ever taught in physics-related matters was a guest presentation to a carpentry class, where I talked briefly on how the cutting action of carpenters' tools is often based on shearing forces and actions.

--> So, if we revamp our approach, I think we can make sci edu more relevant, more interesting and more honest [i.e. we should not pretend to a certainty that the methods are incapable of . . .].

I trust this helps,

GEM of TKI

The Pixie said...

Hi Gordon

I will be brief, as there is very little to add today, the material overnight basically recirculating already addressed points.
Then let me say from the outset how disappointed I am that you did not address the four points I particularly highlighted last time around.

(1) I have to assume that you are unable to find any partition function for mass distribution in your textbook, and that further convinces me that mass distribution does not directly affect entropy.

(2) I have to assume that you are unable to explain the contradiction in the instance where one vat produces assemblies without blueprint, so high in entropy, the second with the same assemblies, now made to a blueprint, so low in entropy.

(3) I have to assume you are unable to find any web page supporting your use of "macrostate" to allow flyable, but not allow identification by an electron microscope, to discern macrostates, and that further convinces me that this is something idiosyncratic to your understanding of thermodynamics, and quite possibly changing over the course of the argument.

(4) I have to assume that you are unable to provide an exact thermodynamic definition of "functional", and that further convinces me that this is something idiosyncratic to your understanding of thermodynamics, and quite possibly changing over the course of the argument.

The major point of interest is evidently in response to my showing the intimate link between motion and kinetic energy, which covers translation, vibration and rotation:
I agree with all that. What you really need is something that shows a link to location. Remember, I am arguing for energy, which is, as you point out, motion and kinetic energy. I thought you were arguing for mass distribution, so why no mention of how mass is distributed?

Further to this as Robertson points out, the degree of constraint on the randomness with which matter and energy are distributed ...
All the stuff you quoted from him gave no indication that he includes how matter is distributed.

...at micro-level are in fact tightly coupled to the definition of the relevant microstate and the resulting statistical weight thereof. Thence, to the entropy number for the macrostate through s = k ln w or the equivalent [sum of pi ln pi].
See, if you had found that partition function for mass distribution, I might believe you here. You did not. You very clearly ducked doing so, and I have to assume that is because you cannot, and you know you cannot.

Microstates relate to how energy is distributed (remember, I found partition functions to back that up).

In turn, the entropy number ... quantum step-size effects.]
All fine, and works perfectly without worrying about the spatial location of the molecules.

You will therefore understand my lack of esteem for Mr Lambert's remarks as you have excerpted, as the sort of considerations above have plainly not been properly addressed by him. Mass-locational and other related issues tied to its rates of change are deeply connected to entropy.
But nothing in the above related to mass-location!

For instance, let us look at how he addresses free expansion:

"the energy of the system has been allowed to spread out to twice the original volume. It is almost the simplest possible example of energy spontaneously dispersing or spreading out when it is not hindered. "

Observe the highlighted, which reveals the fatal defect in the chain of thought. For, the energy -- in this case, properly, the kinetic energy of translation of particles of an ideal gas or the quantal gas that links analytically to it -- "spreads out" precisely because the locational constraint on the relevant particles has been loosened. Thus, there are more accessible ways in which microstates consistent with the new macrostate can be formed, thence entropy rises. This particular case was discussed in brief in the long since linked online article by Gary L. Bertrand, which you never responded on.

But Lambert agrees with you. The entropy changes because the kinetic energy of translation changes (or more precisely, the energy levels change). This is not because the molecules have spread out (it cannot be, because the location is not being discussed), but because they are free to spread out. If you look at the Boltzmann math, you are not concerned with where the molecules are, but with the energy they have, and the energy levels they have. This is not about mass distribution; even though matter is significant at some level, this is about energy distribution, which is affected by the material configuration.

I have e-mailed Bertrand to see exactly what he means. If you are so sure Lambert is wrong, why not e-mail him (feedback@secondlaw.com), and explain why? I very much doubt you will, because I increasingly get the impression you know how weak your position is.

In the cases mainly in view in this discussion, the micro-jet thought expt, the macromolecules of life, and diffusion and linked cases, locational issues in various forms play a role in constraining and/or identifying the relevant macrostates of interest.
But in the articles you find to support your case (eg Bertrand's), the location is irrelevant!

You may in this connexion find the discussion here interesting, excerpting
I fail to see the relevance. When a polymer twists into one shape (or configuration), it has a certain enthalpy. Twist it around another way, and the bonding changes (not the chemical bonds, of course), so the enthalpy will be a bit different. Therefore there must be a deltaH for the conversion process, and if that is happening at temperature T, then there is a deltaS = deltaH/T. Do you think that is connected to what NUS mean by configurational entropy? It is not. Do you think that is connected to what TBO mean by configurational entropy? It is not.

Proteins of course are capable of diverse folding ...
So, TBO [and other then current OOL investigators who made similar calculations] again are borne out.

What? TBO are talking about the amino acid sequence, protein folding is entirely different. Yes, I know amino acid sequence determines what folding is likely, but very obviously, TBO do not consider that, and the protein folding you describe does not involve changes in the amino acid sequence. How can you use one to support the other?

3] Pix: Can an electron microscope cleanly distinguish macrostates b and c?
I have picked this point out to highlight how an already adequately answered point is being brought back to a fruitless circle.
In principle, for the microjet, the electron microscope can inspect its surface features. That suffices to identify that -- more or less -- we have a clumped entity. But there are many states that could LOOK like a jet but will not fly -- i.e. function -- like a jet. So the electron microscope cannot adequately distinguish states b and c. Assuming all parts are clumped, it can distinguish a and b, showing a fall in accessible locational cells in the config space, thus in W-count and entropy. But it cannot show that we have achieved state c; the flyability test can.
By the way, I made a simple observation that we have reason to believe that Brillouin's remarks on the compensating creation of entropy in the other parts of the overall system apply to the assembly of a microjet. That was all that I wished to note on, on the wider 2nd LOT point -- again.
And this was already pointed out, in detail above. Why has it come back up as if the answer had not already been pointed out at least twice before? [There are many other similar cases, which I will not go over yet again to predictably similar effect.]

These things come round and round again because your answers are vague and seemingly inconsistent. If you could give an exact description of what counts as observable, what exactly is a macrostate in your mind, we would not be doing this. Instead, you declare flyable is observable, but electron microscope is not. That seems arbitrary to me, and your reluctance to give a definite statement confirms that.

Now it just so happens that in my vats, the assemblies are flat, and so the electron microscope does indeed allow us to see exactly which configuration we have. If that is not good enough, I have designed a new set of nanobots that will seek out these specific assemblies, and plant micro-flags on them, to make them readily observable.

Somehow I imagine you will still find a way to object. Your worldview seems to hang on the idea that only your flyable nanoplanes have their own macrostate, and as soon as you admit that my assemblies have their own macrostates the whole argument comes crashing down. Thus:
(1) You must establish that under no circumstances are my assemblies observable
(2) You must avoid explicitly stating the requirements for a macrostate, so you can change the rules to ensure (1)

Perhaps it is worth pointing out that in conventional thermodynamics the issue of flyability never comes up. In reality, macrostates have nothing to do with whether something can fly.

The Pixie said...

Quick note on education...

--> As the primer discusses in brief, a fair presentation of the relevant, accurate history of science will undercut the C19 rationalist reconstructions that too often dog school science. [E.g. the Galileo tale is far more complex than is usually presented . . . I also have a pet peeve with how Aristotle is often caricatured today, but that is a broader matter . . .]
But how much time can you spend on the Galileo story; on alchemy, and why chemists nowadays reject it; on astrology, and why physicists today reject it; on creationism, and why biologists today reject it? Sure, some historical perspective is good, but I would not want the teachers of my children spending more than 20 minutes on why alchemy is wrong, for example.

--> Awareness of this accurate history will help avoid simplistic, agenda-serving, worldview question-begging attempted redefinitions of science such as is implicit in MN. Indeed, there is no one THE scientific method. Just ask Feyerabend and co on that.
I thought it was the IDists who were trying to redefine science (see Wedge document for details).

So what is your alternative? I thought I asked this last time around. Hmm, maybe I just missed your reply.

--> Some awareness of basic worldviews thinking and alternatives will help us see that science has been as is being practiced by people within all sorts of worldviews.
And that they are using MN, whatever that worldview is. Indeed.

--> Some exposure to basic epistemology and phil of sci will help us appreciate the limitations and strengths of scientific thinking, and how it evolves across time -- by collective intelligent design no less.
I agree.

--> Within this context, the history of major theories in biology and the phil background involved can be fairly addressed, leading to a more balanced presentation on the OOL and macroevolution issues -- along the way not confusing the micro and macro levels, through awareness of the information generation issues connected to the point. (This is what DI advocates in its "teach the controversy" recommendation -- learn MORE on evolution, more accurately and in a more balanced way, not less.)
So you presumably also think alchemy (both European and Chinese, of course) should be taught in chemistry. After all, by the same token, you must want a balanced approach there.

Perhaps you want your son to be taught that there is a legitimate controversy in science about whether the Sun goes around the Earth. I do not. I want my children taught mainstream science; the science that the vast majority of scientists accept.

And I really see no reason to formally teach the history of science. Okay, mention in passing who discovered what, but really, who cares what people believed before that? How does that help the children's science education? Remember, time is limited. If you are teaching them about alchemy, then you are leaving something else out. If we are teaching them the history of chemistry, then there is less time to teach actual chemistry.

--> the actual issue of design in science is also far broader than evolution. At phil level, chance, deterministic cause-effect bonds under natural forces, and agency all can act, and none should be ruled out a priori based on begging worldview-level questions.
And yet, you are doing the equivalent by trying to pigeonhole everything into these neat categories, i.e., because of your worldview, you see a need to compartmentalise, thereby ruling out a priori combinations.

Can you think of any aspect of science outside of ID/creationism where formally dividing between chance, law and agency has been of any use?

Gordon said...

Pixie:

I will note on a few points.

1] I have to assume that you are unable to find any partition function for mass distribution in your textbook

First, Zustandssummen [partition functions] are indeed generally important, but there are cases where we can directly do state counts, or do things like extending the particle in a box.

S = k ln W does not necessarily depend on the calculation of a partition function. [And BTW, had you looked at even my excerpts from Robertson, you would have seen an informationally-based general PF that would do the job nicely if that were required.]

More to the material point, I have long since shown that you cannot properly separate out energy and view entropy numbers as irrelevant to mass-distribution and configuration issues.

2] Recirculating already answered stuff.

On a related point, there is simply no contradiction: if you are able to use a blueprint to generate an observationally identifiable macrostate, then the entropy is low. If you cannot, you have simply gone the long way around to get to a member of the clumped-at-random state.

Similarly, I have long since pointed out that observability at macro-level -- which, by virtue of the fact that energy is involved in such observations, is inescapably energy- and functionality-linked -- is the distinguishing feature of a macrostate, with W-counts going to the microstates that are consistent with it.

By that criterion -- which I believe I have long since cited Mr Robertson [and more] on [as well, it is a matter of the basic definition involved] -- the distinguishing of a scattered from a clumped-at-random from a configured flyable macrostate is coherent and proper.

"Functional", of course, is an observational macrostate point, and gives a particular kind of force to "specified." FYI, there are two sorts of definition: by precising statement, and by adequate example. I have given many instances of functionality, which are entirely adequate for our purposes. [BTW, definition by recognisable example is logically prior to precising definition.]

So, your assertions of "disappointment" and "contradiction" and "you can't find a cite," "you haven't given a defn," "vague and inconsistent" etc ring rather hollow. They are yet further examples of the recirculating of points already long since repeatedly and adequately answered.

I cite these as examples of the pattern. If I judge that further points are similarly recirculatory, I will simply refer you above.

3] What you really need is something that shows a link to location.

Several more than adequate cases have been given, in aggregate many times: [1] potential wells, [2] particles in boxes [including free expansion, where the box opens up suddenly . . .], [3] folding of polymers generally, [4] folding of proteins and the resulting functionality/prion failure, [5] biofunctional composition of DNA and protein etc chains, [6] the thought expt in which specific macro-observable consequences stem from location and allow definition of macrostates.
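
On case [2], the link from location to the energy levels themselves is visible directly in the standard particle-in-a-box result, E_n = n^2 h^2 / (8 m L^2), where the box length L is a purely locational constraint. A small Python check [my own illustration, using an electron-mass particle in a nanometre-scale box purely for concreteness]:

```python
import math

H = 6.62607015e-34      # Planck constant, J s
M_E = 9.1093837015e-31  # electron mass, kg [an illustrative particle]

def box_level(n, length):
    """1-D particle-in-a-box energy level: E_n = n^2 h^2 / (8 m L^2)."""
    return (n ** 2) * H ** 2 / (8 * M_E * length ** 2)

# Doubling the box length divides every level by 4: loosening the purely
# locational constraint makes the levels lower and more closely spaced,
# i.e. more thermally accessible -- which is exactly the free-expansion point.
assert math.isclose(box_level(1, 2e-9), box_level(1, 1e-9) / 4.0)
```

So spatial constraint and the energy-level structure are not separable here; change L and the whole ladder of levels moves.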

4] Lambert again: Pixie: The entropy changes because the kinetic energy of translation changes (or more precisely the energy levels change). This is not because the molecules have spread out (it cannot be, because the location is not being discussed),

Cf. remarks yesterday. The energy of the system does not shift on free expansion; but as the distribution of mass changes due to removal of a constraint, the configuration shifts and the locations of particles shift, so that entropy increases -- accompanied, of course, by the fact that translational kinetic energy "spreads out" as some of the particles move into the previously inaccessible region.

He is shutting his eyes to the other side of the story, in sum. And, so are you: not explicitly discussing one side of a story is not the same as that it is not implicated. Translational kinetic energy is intimately connected to moving particles and their locations and changes therein.
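The free-expansion case above can be checked with a back-of-envelope calculation; a minimal sketch (mine, not from the thread), using the standard Boltzmann-counting argument for an ideal gas whose volume suddenly doubles:

```python
import math

# Free expansion of an ideal gas: N particles, volume doubles (V2 = 2*V1).
# Energy is unchanged, but each particle independently gains access to
# twice the volume, so W2/W1 = V_ratio**N and
# dS = k ln(W2/W1) = N k ln(V_ratio).
k_B = 1.380649e-23              # J/K, Boltzmann constant

def delta_S_free_expansion(N, V_ratio):
    return N * k_B * math.log(V_ratio)

N_A = 6.02214076e23             # one mole of gas
dS = delta_S_free_expansion(N_A, 2.0)
print(dS)                       # ~5.76 J/K, i.e. R ln 2
```

No d'Q crosses the boundary, yet entropy rises purely from the relaxed constraint on where the particles may be.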

BTW, mass is itself a form of energy, if you will; usually, we do not reckon with it in energy calculations, but that is a major result of Relativity: e = mc^2.

Also, location is not necessarily absolute; it can be relative, as in the folding of polymers.

5] TBO et al

I get the impression you have never really read through the online chapters.

They are talking in the context of the then-current debate, including of course the sequence composition of bio-polymers, DNA and protein being mainly in view. Their use of Brillouin information is parallel to similar work by other investigators, who developed a somewhat different informational metric.

In short, sequence configuration of such polymers is a part of the discussion at the time in question, and thermodynamics issues are relevant to it.

6] it just so happens that in my vats, the assemblies are flat, and so the electron microscope does indeed allow us to see exactly which configuration we have.

In that case you have just specified a macrostate, with a reduced w-count relative to the random clumped state. And being a flat clumped cluster of microjet parts is a functional, observable description and specification.

In short, you here show implicitly and inadvertently what I have been saying explicitly.

If you had read carefully above, you will have seen how I discussed for instance the definable macrostate where the clumped parts are able to drive around. We could identify others where they form e.g. a submarine, maybe a rowable one, in the vat, etc . . .

That is, I am not at all locked into the possibility that there is but one island of functionality in the overall clumped config space -- only, by virtue of the astronomical number of possible configurations, such islands are isolated behind random-walk search barriers due to their exhibiting FSCI.

Your objections are taking on an aura of artificiality, even desperation, here. [If I may permit myself a subjective impression.]

7] Sci Edu

In fact, there is a lot of time given over to en passant mentions of history and phil of science issues in a lot of teaching in High School and College level textbooks. They are unfortunately too often subject to the classic strictures against unexamined fact claims and metaphysics.

You will observe that I have focussed on the issue of showing the core issues in doing science, and in fact in my experience, relatively short timeframes within a class are sufficient to make the point, if in the general intro to science there is some time spent on methods and limitations, and on the basics of design and implementation of experimental and observational studies. [Here in the Caribbean, at some stage students are expected to undertake design and implementation of their own investigations for projects assessed towards certificates, the notorious SBAs.]

[BTW, many of your remarks exhibit the same C19 - 20 rationalist-evolutionary materialist influences that I am pointing to as deleterious.]

As to alternatives, I gave an actual link to an actual lesson I prepared for orienting science educators here in Montserrat, complete with ten exercises. So, to ask such as if that were not done is, frankly, disrespectful -- you have there ~ 10 pp of alternative, directed at training the trainers, from a decade ago now!

8] Perhaps you want your son to be taught that there is a legitimate controversy in science about whether the Sun goes around the Earth. I do not.

Thereby hangs a long tale on the issue of your own biases: evo mat is philosophy, not properly science, I am afraid. [E.g. what we DO know empirically is that FSCI, whenever we know the causal story directly, is produced by intelligent agency . . . and that the hurdles to accessing such islands in the relevant config spaces by chance are beyond reasonable credibility. So, it is a priori metaphysics smuggled in -- Meth Nat, in short -- that leads to the dominance of what would otherwise not even be a serious contender . . .]

Similarly, the orbiting of the earth around the sun is a matter of direct observation by instruments we have reason to believe are trustworthy; the second-level inferred dynamics of the inferred past in deep time on earth is plainly not comparably established. Indeed, from Darwin's day on, the principal evidence in the matter, fossils, has consistently failed to provide the massive documentation of transitional forms that he hoped for in answer to this telling objection. [Cf here the recent properly peer-reviewed Meyer paper and the peer-reviewed Loenig paper, both of which I excerpt and discuss here. Similarly, observe Wells's stinging and all-too-telling comments on many of the infamously misleading icons of evolution here, which BTW include the Miller-Urey experiment on OOL issues.]

What I have taught in schools and to Christian leaders in training, is that across time, we have had differing views on cosmology, and they have been anchored to the observations accessible at the time.

E.g., 300 - 400 BC, Aristotle observed the circular shadow of Earth on the moon in a lunar eclipse and inferred a spherical earth. Eratosthenes used the shadows cast by the sun at Alexandria and Syene to estimate the circumference of the earth to within several percent of the modern value. [Indeed, the actual debate at informed levels with Columbus was over the fact that his estimate was far too low relative to the long-known reasonable value. Of course, neither side realised there was a whole other continent or two out there to be discovered.]
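Eratosthenes' estimate is simple enough to reproduce; a sketch using the traditionally reported figures (the exact ancient values are debated, so these are illustrative):

```python
# Eratosthenes' method: at noon the sun was overhead at Syene while
# casting a shadow angle of ~7.2 degrees (1/50 of a circle) at
# Alexandria, traditionally ~5000 stadia to the north.
shadow_angle_deg = 7.2
distance_stadia = 5000

# The arc Alexandria-Syene is the same fraction of the full
# circumference as the shadow angle is of 360 degrees.
circumference = (360.0 / shadow_angle_deg) * distance_stadia
print(circumference)            # 250000 stadia
```

Depending on which value of the stadion one adopts, 250,000 stadia lands within a few percent of the modern ~40,000 km figure.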

Similarly, Ptolemy's Almagest -- which propounded the longest standing theory in physical science -- was anchored on the observed circulation of the stars and wandering stars [planetos] in the heavens.

Observing the fixed stars, he inferred that the scale of the earth by comparison to the distance thereto was as a mathematical point; a quasi-infinite scale for space. He explained the odd looping behaviour of the wandering stars by using a mathematical construct "to save the phenomena," namely circular orbits and circular sub-orbits feeding off the main orbits. That he used circles was driven by the metaphysics that the cosmos above that dirty sump, earth, was going to reflect perfect forms.

In the 1500s - 1600s, Copernicus was able to simplify the by-then quite elaborate scheme of "phenomena-saving" circles by 50% [from 80 to 40 circles] by putting the sun, not the earth, at centre -- raising the Occam's razor point.

As Brahe's observations -- made relative to his own compromise theory -- came in, Kepler was able to calculate that elliptical orbits would at once greatly simplify the whole scheme, and showed his three laws: elliptical orbits with the sun at one focus, the radius vector sweeps equal areas in equal times, and there is a period-radius harmonic law.
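The harmonic law is easy to verify against standard modern orbital data (values below are textbook figures, not from the thread); T^2 / a^3 comes out essentially the same for every planet when T is in years and a in astronomical units:

```python
# Kepler's third (harmonic) law: T^2 / a^3 is the same for all planets.
# Standard modern values: semi-major axis a in AU, period T in years.
planets = {
    "Mercury": (0.387, 0.241),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.862),
}

for name, (a, T) in planets.items():
    print(name, T**2 / a**3)    # each value close to 1.0
```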

Galileo advocated Copernicus' scheme [but seems largely to have missed the telling power of Kepler's work . . .], and made the telling discovery that Jupiter was a miniature Copernican-style system. Though, of course, telescope observations in that era were subject to serious distortions and optical artefacts due to the splitting up of light; a partial explanation for the controversies that met the announcements. [Newton invented the reflector telescope in despair over ever resolving the optical challenges of lenses. Partly that was because he doubted the wave theory, and nobody in that day considered TRANSVERSE light waves . . .]

But, Galileo made two telling mistakes: [1] he claimed to be finding the truth of the matter, not a better "saving of the phenomena," and [2] he ridiculed the new, scientifically minded pope who had as a cardinal defended him, when the pope underscored the inability of empirical investigations to establish indisputable truth. THAT is why he went through the infamous trial before the Inquisition. BTW, some of his arguments in his Dialogue etc were flat-out wrong, e.g. on tides. [Cf discussion here for the side to this story we seldom hear, not least in science class.]

Newton's work in his year of miracles in the 1660s -- in the famous mid-third decade of life that consistently marks the real breakthroughs in Physics -- was to connect gravitation on earth with the orbital mechanics of the moon, at a known distance. A quick calculation shows an inverse-square diminishing of the force, thence we can see his way to the famous NLG.
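That "quick calculation" -- the famous moon test -- can be sketched numerically with standard modern values (the figures below are textbook numbers, assumed here for illustration):

```python
import math

# Newton's "moon test": if gravity falls off as 1/r^2, then at the moon's
# distance (~60 earth radii) the acceleration should be g / 60^2.
# Compare that prediction with the moon's actual centripetal
# acceleration, 4*pi^2*r / T^2, from its orbit.
g = 9.81                        # m/s^2 at the earth's surface
r_moon = 3.844e8                # m, mean earth-moon distance
T = 27.32 * 24 * 3600           # s, sidereal month

predicted = g / 60.0**2                      # ~2.72e-3 m/s^2
observed = 4 * math.pi**2 * r_moon / T**2    # ~2.72e-3 m/s^2
print(predicted, observed)
```

The two numbers agree to within about one percent, which is what linked terrestrial gravitation to celestial mechanics.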

What was a lot harder was to prove it, relative to Kepler's laws, and to settle several mathematical points along the way. That brings in the co-invention of the Calculus, and several subtle geometrical arguments. For instance, he treated the planets -- known to be extensive bodies -- as points, and it was hard to prove why this was reasonable.

But with the cluster of discoveries and theories, celestial and terrestrial mechanics were linked and earth was lifted out of the sump of the cosmos up into the heavens in its own right. And it was forever seen that the heavens were changing and dynamic.

For 200 years, Newton's glory shone undiminished as subsequent discoveries fitted in. Until the issues over electrodynamics and the ether and the question of the nature of blackbody radiation came up.

All of this underscored the point:

THEORY => OBSERVATIONS
OBSERVATIONS

So, THEORY

. . . Affirms the consequent: T => O does not entail that O => T. ["If Tom is a cat, then he is an animal" does not entail that "if Tom is an animal he is a cat"! But, we often confuse implication with equivalence . . .]
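The fallacy can even be checked mechanically; a tiny sketch (mine, not from the thread) that brute-forces the two-variable truth table and finds the row where the premises hold but the conclusion fails:

```python
# Affirming the consequent: (T => O) and O do not jointly entail T.
# Enumerate all truth assignments and collect the counterexample rows,
# i.e. rows where both premises are true but T is false.
def implies(p, q):
    return (not p) or q

counterexamples = [
    (T, O)
    for T in (True, False)
    for O in (True, False)
    if implies(T, O) and O and not T
]
print(counterexamples)          # [(False, True)]: O true, yet T false
```

That single row -- observations fit, theory false -- is exactly why theories only "save the phenomena."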

In short, theoretical scientific investigations still "save the phenomena," and we should never confuse our inferences with indisputable truth. Scientific knowledge claims and methods are provisional.

I trust this will suffice to show the relevance of the sort of points I made yesterday to how we teach science today!

9] Can you think of any aspect of science outside of ID/creationism where formally dividing between chance, law and agency has been of any use?

I already have discussed SETI and pharmacology-molecular biology as cases in point. Investigations of geological processes in given environments, forensic sciences, many things in the more scientific side of archaeology etc etc all count. And, of course there is a whole theory of inventive problem solving out there, TRIZ, pioneered in the USSR.

In many scientific disciplines -- and note, here I include pure and applied science, medicine and technology under this rubric -- the basic observed fact is that cause-effect bonds trace to: chance conditions; more nearly deterministic dynamics [i.e. cases of change forces, inertia forces, rates and accumulations of change]; observed regularities that seem more or less deterministic, similar to dynamics but perhaps with no known dynamical explanation; and agent action. All of these can be relevant.

In short, you are again improperly unresponsive, and consequently circular. For I took time to give more than adequate details here.

GEM of TKI

The Pixie said...

Gordon

First, Zustandssummen [partition functions] are indeed generally important, but there are cases where we can directly do state counts, or do things like extending the particle in a box.
s = k ln w does not necessarily depend on the calculation of a Partition Function. [And BTW, had you looked at even my excerpts from Robertson, you would have seen an informationally-based general PF that would do the job nicely if that were required.]

How about instead of just asserting that, you show how you can mix these two things together. We both agree that partition functions are important. If you want to calculate the entropy of a protein, considering both the energy levels, through the partition functions, and the mass distribution, what is the maths? Surely this is in your textbook?

In fact, let me ask this directly. What does your textbook actually say about mixing energy levels and mass distribution?

On a related point, there is simply no contradiction: if you are able to use a blueprint to generate an observationally identifiable macrostate, then the entropy is low. If you cannot, you have simply gone the long way around to get to a member of the clumped-at-random state.
Hmm, are you really not getting this?

I have this vat, with faulty nanobots, making a certain, but unspecified, assembly. I collect the assemblies; they are flat, and can readily be identified under an electron microscope. Just to be sure, I build some new nanobots that identify this particular assembly, and mark it in some way which makes it readily observable. So these assemblies are "an observationally identifiable macrostate" (and you seem to accept this), but with no blueprint. According to the above, I "have simply gone the long way around to get to a member of the clumped at random state". These assemblies are, by implication, high entropy. Is that right? This is the bit that time and again you fail to confirm or deny (or rather you vaguely suggest you both confirm and deny it!).

I empty the vat, and start again with new nano-bots. These nanobots are designed to produce the same assemblies as before. Again we have "an observationally identifiable macrostate", but this time with a blueprint. Clearly this is low entropy.

But these are the same assemblies. Made one way they are high entropy, made the other they are low entropy!

You seem to be heading towards accepting that both assemblies are low entropy (so there would be no contradiction), but where does that leave your "functional" and "specified"? They would seem irrelevant.

Similarly, I have long since pointed out that observability at macro-level -- which by virtue of the fact that energy is involved in such observations, is inescapably energy- and functionality-linked -- is the distinguishing feature of a macrostate, and w-counts go to the microstates that are consistent with it.
Sure, but you use observability to mean whatever you want. Why is the electron microscope observation not good enough? Your answer seems entirely arbitrary. It is not good enough, because you do not want it to be good enough.

Functional of course, is an observational macrostate point, and gives a particular kind of force to "specified."
You make a big deal about FCSI as opposed to CSI, so I assumed functional meant something quite different to specified. In thermodynamics, a macrostate of water is defined by its temperature, pressure and mass. Is this "functional"? Not in any conventional sense.

Again, you seem to be using the term as it pleases you moment by moment.

FYI, there are two sorts of definition, by precising statement, and by adequate example.
Do you honestly think you have given adequate examples to define "functional" or "macrostate"? Adequate examples must test the borders of the definition, and say why this example is, and this is not. Your favourite adequate example is "flyable" - it is not even in the real world.

I have given many instances of functionality, which are entirely adequate for our purposes.
No they are not, not if I still do not know what you mean by it.

[BTW, definition by recognisable example is logically prior to precising definition.]
So are you really claiming that "functional" is a legitimate term in thermodynamics, but that it does not even have a precise definition?

Well, anyway, now we know. There is no precise definition of "functional" in thermodynamics.

Several more than adequate cases have been given, in aggregate many times: [1] potential wells, [2] particles in boxes [including free expansion where the box opens up suddenly . . .], [3] folding of polymers generally, [4] folding of proteins and the resulting functionality/prion failure, [5] biofunctional composition of DNA and protein etc chains, [6] the thought expt in which specific macro-observable consequences stem from location and allow definition of macrostates.
I went through all that last time.


4] Cf remarks yesterday. The energy of the system does not shift on free expansion, but as the distribution of mass changes due to removal of a constraint, configuration shifts, location of particles shifts, so that entropy increases, of course accompanied by the fact that translational kinetic energy "spreads out" as some of the particles move into the previously inaccessible region.
The energy does not change, but how it is distributed does. Entropy is a measure of the distribution of energy, remember.

By the way, have you e-mailed Prof Lambert yet to explain why he is wrong? No? Is that because secretly you are worried he might be right?

BTW, mass is itself a form of energy, if you will; usually, we do not reckon with it in energy calculations, but that is a major result of Relativity: e = mc^2.
Sure, but the energy levels associated with nuclear processes are way too high for the processes we are considering.

Also, location is not necessarily absolute, if can be relative, as in the folding of polymers.
Okay, so how do they do the maths then?

5] I get the impression you have never really read through the online chaptrers.
I have. And the first one is pretty good; I have no fault with it at all. Then they get into this configurational entropy nonsense...

They are talking in the context of the then-current debate, including of course the sequence composition of bio-polymers, DNA and protein being mainly in view. Their use of Brillouin information is parallel to similar work by other investigators, who developed a somewhat different informational metric.

In short, sequence configuration of such polymers is a part of the discussion at the time in question, and thermodynamics issues are relevant to it.

6] In that case you have just specified a macrostate, with a reduced w-count relative to the random clumped state. And being a flat clumped cluster of microjet parts is a functional, observable description and specification.
Excellent, at last I have confirmation that this is a macrostate with reduced entropy - whether it was specified or not with a blueprint before hand.

So if the entropy is low for this non-flying, non-specified assembly, why do we care about specification and functionality?

If you had read carefully above, you will have seen how I discussed for instance the definable macrostate where the clumped parts are able to drive around. We could identify others where they form e.g. a submarine, maybe a rowable one, in the vat, etc . . .
Sure, I saw that. But do you really believe these examples are enough to define "functional" or "observable macrostate"? I feel like it has been a real struggle getting you to admit that discernible with an electron microscope counts as a macrostate.

Even now, with this new example, there are still questions. Was this an "observable macrostate" before the electron microscope was invented, for example?

That is, I am not at all locked into the possibility that there is but one island of functionality in the overall clumped config space -- only, by virtue of the astronomical number of possible configurations, such islands are isolated behind random-walk search barriers due to their exhibiting FSCI.
But that is irrelevant, surely, because only moments ago you said that as long as the assemblies could be discerned under the electron microscope, they were "observable macrostates". Whether they could fly, swim, drive or talk is beside the point; my assemblies could do none of them. It has nothing to do with specification; again, my assemblies were not specified first time around.

Undoubtedly you will accuse me of circularity again. Well, have you considered that the circularity is due to your own inability to explain? Why is there no definition of "functional" or "observable macrostate" in thermodynamics, and why is your usage of the terms so flexible? This is the root of the problem, in my opinion.

Gordon said...

Pixie:

As I am up this Sunday AM, I will look at this thread . . .

A few observations:

1] Circularity:

This is an observation and correction, not an "accusation." When a matter has been adequately addressed on the merits, it should not be raised again as if it had not been so addressed.

2] Z-summe

This is a case in point . . .

S = k ln w is about finding observable macrostates and doing microstate counts. We can do this directly, when this is appropriate/easy to do. PFs help us when, in effect, we model the type of system where energy levels are accessible in accordance with some distribution for a collection of microparticles in a gas, etc.

PFs are therefore not particularly required in this particular case, and the issue has already been addressed -- also, I have given the excerpt from Robertson, and pointed to the possibility of your acquiring and reading the book yourself through interlibrary loans, etc.

Similarly, energy and mass distributions are both important and can be linked, in some cases, e.g. constraints on mass distribution/location being the more fundamental issue.

In the case of diffusion, mass distribution is important, and the issue of the utter improbability of a drop of dye reassembling itself spontaneously has long since been addressed in a context of entropy of configuration. I have long since given adequate discussion and links.

This is of course relevant to the micro-jet thought expt.

3] Identifying macrostates

Let's see: you have specified that you have a spatially flat distribution of clustered parts [which is therefore non-random to that extent].

You then go to the expense and trouble of setting up an electron microscope and running it to identify the locations and so on of ~ 10^6 parts credibly involved.

In short you have observed and thus specified a macrostate [macro-observable . . .], with in this case but one micro-config as at the point of observation.

And, you go to the expense of such an observation specifically to specify the state of the system, relative to being [a] a "flat" distribution of parts, and [b] being in one particular arrangement of such parts. Both of these are sharply constraining specifications, b being far more so than a.

And, to run an exercise in which the observed pattern in one vat of a specific flat state of assembled parts, is now the template for assembling parts in another vat, is to do the same exercise of moving to a macrostate with sharply constrained possible configs. So, both show low entropy of configuration.

By contrast, a clumped-at-random state has of the order of 10^6! possible configs as associated microstates consistent with being "clumped at random." I think the difference should be plain enough.
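To give a feel for the scale of 10^6!, here is a quick sketch (mine, not from the thread) that evaluates its logarithm -- the factorial itself is far too large to compute directly:

```python
import math

# Rough w-count for "clumped at random": ~10^6 distinguishable parts can
# be arranged in ~(10^6)! orders. Work with the logarithm, using
# lgamma(n + 1) = ln(n!).
n_parts = 10**6
ln_W = math.lgamma(n_parts + 1)
log10_W = ln_W / math.log(10)
print(log10_W)                  # ~5.57e6, i.e. W ~ 10^(5,565,709)
```

A w-count with over five million digits, versus the sharply constrained counts for the observed flat or flyable macrostates.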

Not to mention, being in a flyable config is also a particular macrostate, one compatible with perhaps thousands of specific configs.

In all of these cases, you are able to observe at macro-level a state, which is associated with a certain number of possible configs of parts. We can therefore see that we have constrained the configuration space relative to the scattered-at-random state, and have collapsed the entropy of config to a certain degree in so doing.

In the case of biomolecules, the macrostate that is biofunctional can be observed through of course the viability in organisms, and so we can explore the issue of the isolation of such islands of functionality in the available config space.

Sconfig makes a lot of sense for the relevant problems.

4] Functionality and specification:

Here we see that in the thought expt as at the head of the thread, I gave one particular functionally specified macrostate: a flyable micro-jet.

Above, you have given others; latest, flat arrangements of parts as observed through an electron microscope. This too is a specification. Being flat so that an electron microscope can observe the arrangement of parts unambiguously is indeed a functional specification, and a quite constraining one, relative to being in a random state wherein parts may be connected together in any 3-D order.

All of this, to come right back to what was plain from the beginning: functional, observable specification constrains the possibility of arranging configurations. [Full circle . . .]

5] you use observability to mean whatever you want

Not so. Observability has to have an operational definition: we must be able to measure or image or record in some way.

I have pointed out that electron microscopy -- an imaging tech, BTW -- would in the general case not be able to distinguish 3-D at-random configs that look like, but do not function like, a flyable jet from the jet itself.

You then substituted a flat arrangement of parts, and I have pointed out that the resulting state is both sharply constrained as to configuration and observable.

That is, once we give a specification [which here has the function of being flat so it can be observed through a microscope -- a different function from the more serious one I described above, but a function] to a system that is subject to a large number of possible configs, we sharply reduce the number of possible arrangements.

Thus, the number of microstates consistent with the observable macrostate falls. That is again the basic point long since made: configured states that are specified by some function or other, have a much lower w-count than clumped at random macrostates, and than scattered states.

Thus, entropy of configuration is a reasonable concept and it is applicable to the sort of assembly of multiple component parts to form functional wholes that are macroscopically observable by virtue of that function.
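The claimed collapse in w-count across the three macrostates can be illustrated numerically; a toy sketch (the numbers are taken from or assumed for the thought experiment -- notably the ~10^3 flyable configs is a pure assumption -- not a real calculation):

```python
import math

# Configurational entropy S = k ln W for three macrostates of ~10^6
# micro-jet parts in a vat (illustrative numbers only).
k_B = 1.380649e-23

# Scattered: each part free to sit in any of ~10^18 locational cells.
ln_W_scattered = 10**6 * math.log(10**18)
# Clumped at random: ~(10^6)! orders of attaching the parts.
ln_W_clumped = math.lgamma(10**6 + 1)
# Flyable: suppose only ~10^3 configs function (an assumed figure).
ln_W_flyable = math.log(10**3)

for label, ln_W in [("scattered", ln_W_scattered),
                    ("clumped", ln_W_clumped),
                    ("flyable", ln_W_flyable)]:
    print(label, k_B * ln_W)    # S falls sharply at each step
```

Whatever one makes of the wider argument, the arithmetic of the w-count collapse from scattered to clumped to functionally specified is straightforward.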

6] Examples:

Adequacy of defining examples is not evaluated by "acceptability to a determined objector . . ." [Denial does not win the day by default, for excellent reason.]

And, FYI, the examples long since cited, stand -- objections having been adequately and in some cases repeatedly been answered. [Specious objections by a determined objector who recirculates refuted objections as if they have not been adequately addressed, do not by repetition make what did not stand up the first time suddenly become a sound objection.]

7] The energy [of a body of gas undergoing free expansion] does not change, but how it is distributed does. Entropy is a measure of the distribution of energy, remember.

The distribution of mass in the case becomes more disordered, because less constrained, and as a result entropy rises.

Entropy, from s = k ln w, is also a way of counting how many microstates of a system correspond to a given macroscopically observable state.

That relates to how mass is distributed in this case [free expansion of a gas] -- opening up the previously locked off vacuum region permits more microscopic arrangements of gas molecules, so entropy rises without need to import d'Q.

I have simply applied that to the case of the microjet, and TBO to the issue of the functional results of varying the component monomers in biopolymer chains: proteins and DNA.

Going way back to the original issue you raised on my making reference to TMLO, it makes sense to identify and discuss sconfig within the TdS term in the equation:

dG = dE + pdV - TdS . . . [TMLO, eqn 8.4a]

Since pdV ~ 0, and since the internal energy of the bonded chain is more or less the same regardless of the arrangement of monomers [in the unfolded state], we can see that the key difference between a random DNA- or protein-like chain and a functional macromolecule will lie not in the energy added to the chain's monomers, but in the specific chain arrangements that provide relevant biofunctions.

Indeed, we observe the complexity of the mechanisms by which the cells work to get [then fold . . .] required chains -- mute testimony to just how isolated these are in config space singly much less in aggregate.

So, we see the credibility of separating the dS term:

dG = dE - [T (dS_clump + dS_config)]

In this case, I [and before me, TBO etc] take advantage of the state-function nature of S, to notionally form a more or less random chain, then rearrange it, to get to the final state whose entropy is of interest. In short, ch 8 is no less competent on thermodynamics issues than was ch 7 (which you acknowledge to be reasonable -- I have seen some real contortions on the part of a reviewer who had to accept technical correctness then tried to find fault . . . ).
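The flavour of the configurational-entropy term can be sketched numerically; a hedged illustration of the kind of calculation in view (the 101-residue chain length is my assumption for illustration, and this is not a reproduction of TBO's actual numbers):

```python
import math

# Configurational entropy sketch: a random protein-like chain of N
# residues drawn from 20 amino acids has W = 20**N sequence
# configurations; a single specified functional sequence has W = 1.
# The entropy given up in specifying one chain is k ln(20**N) - k ln(1).
k_B = 1.380649e-23
N = 101                          # residues (assumed chain length)

dS_config = k_B * (math.log(20**N) - math.log(1))
print(dS_config)                 # ~4.18e-21 J/K per chain
```

Tiny in joules per kelvin, but the corresponding probability factor, 20^-101 ~ 10^-131 per chain, is where the isolation-in-config-space argument bites.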

Going back even further, to Mr Sewell's point, relative to the issue of the probability of accessing a complex and functional macrostate by chance in large config spaces:

_____________

"order can increase in an open system, not because the laws of probability are suspended when the door is open, but simply because order may walk in through the door . . . . If we found evidence that DNA, auto parts, computer chips, and books entered through the Earth's atmosphere at some time in the past, then perhaps the appearance of humans, cars, computers, and encyclopedias on a previously barren planet could be explained without postulating a violation of the second law here . . . But if all we see entering is radiation and meteorite fragments, it seems clear that what is entering through the boundary cannot explain the increase in order observed here." Evolution is a movie running backward, that is what makes it special.

THE EVOLUTIONIST, therefore, cannot avoid the question of probability by saying that anything can happen in an open system, he is finally forced to argue that it only seems extremely improbable, but really isn't, that atoms would rearrange themselves into spaceships and computers and TV sets . . ."

________

Given the observed pattern that cause is driven by chance, necessity or agency [or combinations thereof], the issue is what would make the complexity we see likely.

One answer is agency. The other is to so expand the search space that the opportunities overwhelm the scope of the search. Agency is well supported empirically as a cause of FSCI. An unobserved wider cosmos as a whole is a metaphysical speculation, not science.

What, however, is not an answer is to claim that necessity plus chance reduces the odds; for, the point is that we are dealing with contingency to an extreme degree, so the hoped-for laws of spontaneous ordering are not seriously on the cards.

8] Specification and blueprints

You will observe that, first, specification is not an issue of before- or after-handedness, but of observability or describability in a relatively simple way. That is, there is a category error here on the part of this objection.

Second, in the very case you chose, you first put forth the case of observing a 3-D cluster, and I pointed out that in general a microscope is incapable of distinguishing a random clump from a flyable one. That is, it was incapable of the macrostate specification in view in the thought expt.

You resorted to another case, a planar distribution of clumped parts observable through a microscope. I have now repeatedly pointed out that this is a tightly constraining specification, whence it reduces entropy relative to clumped-at-random or scattered-throughout-the-vat states. Both of these are consistent with -- indeed illustrate -- the point I have made since over at UD.

In short, once we are able to specify a macrostate, we end up in a case where if that state confines configurations, we reduce entropy. Config entropy is sensible and useful.

Further to this, a macrostate of a drive-able config of parts is a functional and observable macrostate -- it drives.

Similarly, such a state that flies is functional and observable -- it flies.

Being flat so the arrangement can be seen in an electron microscope is again observable and functional.

9] Well, before there were electron microscopes

Then we have no basis for having an observable macrostate, do we -- i.e. we have not applied any constraining specification at all.

[Merely doing physical work on parts does not guarantee that we are in any specific macrostate that is hard to access in the config space. And the point of all this was to highlight that it is hard to access certain material kinds of configs that function in certain relevant ways by chance plus necessity only. That is: how likely is it to get a flyable jet by assembling a config under a randomly chosen blueprint? Or by just clumping anyhow? Or by assembling a flat config? Versus by targeting a known flyable, designed config? Answer: low, low, low. Vanishingly low on the gamut of the observed cosmos, as the number of bits of information involved vastly exceeds 500.]

So, the relevant number of states would be that for the clumped-at-random macrostate. [An object of order ~10^-2 to 10^-3 m, or maybe a bit less, would precipitate out of the vat. We could see it in many ways. Sending out the signal to fly would tell at once if it is flyable, too . . . vanishingly improbable for a clumped-at-random state or one clumped according to a randomly chosen blueprint, as opposed to one designed to function by being flyable. So the various strawmen have failed.]

The observable clumped state is of course strongly constrained relative to the number of possible configs in a scattered-at-random state. That is, we are in effect here looking at diffusion in reverse! And we are seeing that the drop in the number of accessible locational cells in config space means S has fallen.
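To put a rough number on the diffusion-in-reverse point, here is a quick sketch using the vat figures from the thought experiment (~10^18 locational cells, a ~10^12-cell clump region, 10^6 parts; placements are idealised as independent, a slight simplification):

```python
from math import log10

CELLS_VAT = 1e18     # locational cells in the whole vat
CELLS_CLUMP = 1e12   # cells in a ~1 cm clump region
PARTS = 1e6          # micro-jet parts

# Chance that any one diffusing part happens to sit in the clump region
p_one = CELLS_CLUMP / CELLS_VAT          # 1e-6

# Chance that ALL parts sit there at once (treated as independent placements)
log10_p_all = PARTS * log10(p_one)

print(log10_p_all)   # -6000000.0, i.e. p ~ 10^(-6,000,000)
```

On these numbers, spontaneous clumping is vanishingly beyond any reasonable probabilistic resources, which is why diffusion is not observed to run in reverse.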

That is we see S config in action yet again. QED, again.

10] as long as the assembles could be discerned under the electron microscope, they were "observable macrostates". Whether they could fly, swim, drive or talk is besides the point; my assemblies could do none of them. It has nothing to do with specification; again, my assemblies were not specified first time around.

Again, we are looking at how constrained macrostates reduce W-counts, thence S; i.e., at how Sconfig works.

Clumping in a plane so that a microscope can inspect the pattern of the parts is a constraint relative to clumping at random with access to 3 dimensions, not just one plane. It is even more constrained relative to the scattered state, which can access the whole space of the vat, with ~10^18 possible locations, and where once any one is taken up, there are still vastly many more to go. Indeed, when just one particular part is not yet "located," there are ~[10^18 - 10^6] ~ 10^18 possible locations left!

Being in a drivable or flyable or swimmable state, etc. -- all examples of FUNCTIONS that are observable! [i.e., when you drop one objection for the moment to make another, you show that you know or should know that the first one was in fact specious . . .] -- these are of course particular functionally specified macrostates that are observable.

They all have in them the same basic point: each is sharply constrained relative to [a] the number of microstates for "scattered across the vat," and [b] that for "clumped at random."

Sconfig makes sense and applies to the situation TBO used it for, and also to the point Sewell made.

In short, the objections fall of their own weight in the end.

GEM of TKI

PS: See my point on the relevance of history and phil of sci, BTW?

The Pixie said...

Gordon

Trying to confuse me by posting on a Sunday, eh? So here we are still circling "macrostates", "specification" and "functional".

Let us suppose that imaging technology has improved by the time we have nanobots and nanoplanes, and that it is possible to determine the complete configuration of any assembly; where does that leave your argument? Now the observable assemblies are not constrained to flying, 2d, etc. -- they are all observable.

TBO talk about amino acid sequences. These can be determined quite easily in a lab, using today's technology. Therefore, if you have a random sequence of amino acids, that sequence is observable. Therefore that particular sequence must surely be a macrostate, is presumably "functional", and has an entropy of zero.

Gordon said...

Pixie:

Oh, the wonders of insomnia -- I saw an unexpected comment on another thread by an old blog friend, Colin B, passed it, commented on it.

Then, I decided in fairness to get to yours, which I also saw. That turned out far longer than I at first intended!

Now, on your latest:

1] Let us suppose that . . . it is possible to determine the complete configuration of any assembly, where does that leave your argument?

The Macro-state/ Micro-state distinction is of course driven by the problem of observation of the current state and dynamics of all particles of a system.

In some cases it would be worth the cost in effort to characterise the system, and in others it would not.

So, stat thermoD analyses would still be relevant on a cost-benefits basis.

And of course beyond our level there still lurks the good old uncertainty limit, on position-momentum or -- thanks to an Einstein thought experiment during one of the more intense moments of the Solvay Conference debates -- on energy-time. [Cf. my old Prof D's analysis of globular clusters as relativistic gases. We can observe stars and could in principle run the analysis on a star-by-star basis, but it may be good enough to simply resort to a statistical view.]

On your more direct point, we would still have noted the difference in entropy in moving from a scattered to any given clumped state, i.e there is an entropy of configuration, and it measurably collapses on going to any clumped state from a scattered one.

To see that, suppose the microjet clumped states are of order 1 cm across, i.e., ~[10^4]^3 = about 10^12 cells are available, of which ~10^6 are used in any given clumped state. That selects from the 10^18 cells in the vat.

On some rather crude approximations [I am not going to bother with a full Stirling here for our purposes, just noting that a trillion less a million is more than 10^11], there are some [10^11]^[10^6] possible clumped states. Similarly, there are about [10^17]^[10^6] possible scattered states [including the clumped ones as a small subset]. In short, W collapses on going to clumped states.
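For those who want a tighter count than that crude bound, the W's can be taken as binomial coefficients (parts treated as indistinguishable -- the 10^6! factor divides out of both counts and so leaves the collapse unchanged), with log10 values computed via `math.lgamma`. A sketch:

```python
from math import lgamma, log

def log10_binom(n, k):
    """log10 of C(n, k), computed via log-gamma to dodge huge factorials."""
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)) / log(10)

PARTS = 1e6
log10_W_clumped = log10_binom(1e12, PARTS)    # 10^6 parts confined to the clump region
log10_W_scattered = log10_binom(1e18, PARTS)  # 10^6 parts anywhere in the vat

print(log10_W_clumped)                        # ~6.4 million (digits in W)
print(log10_W_scattered)                      # ~12.4 million
print(log10_W_scattered - log10_W_clumped)    # W collapses by ~10^6,000,000 on clumping
```

The direction and rough scale of the collapse match the cruder estimate above.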

Next, taking your reference at its word, back to the actual macrostate I had in mind, a flyable micro-jet.

Maybe there are tens of thousands of such configs? Maybe somewhat more? But in any case the shift in W is plain, and so is the shift in S: as the constraints increase, Sconfig goes down, sharply.

The point still stands -- and it does so even if we can do microstate observations of any and all states.

2] if you have a random sequence of amino acids, that sequence is observable. Therefore that particular sequence must surely be a macrostate, is presumably "functional", and has an entropy of zero.

This is of course a classic fallacy of equivocation, compounded by in effect putting words into mouths 25 years after the fact . . .

TBO are speaking of the difference between clumped-at-random states and specifically biofunctional ones, which, as the later work with Cytochrome C shows, are indeed pretty isolated in the config space of protein-like sequences.

Their analysis remains valid: we have a bio-functionally specified macro-state, and compare it to that in which any old sequence of the same length will do, as presumably could have formed in the pre-biotic world -- the clumped at random macrostate.

We see the identical collapse of S in moving from scattered to clumped to functional state as their context addresses. S config is valid.

In short, you are not free to freight TBO's analysis with any definitions of macrostate you may like to apply!

Of course, you can use modern sequence observing techniques to observe any and all protein sequences, but that does not lead to any material difference in the point that we have no reason to prefer any sequence over the other in the first case, and in the second, we can specify the state on ability to function in an organism.

[What you have done in short is to provide a handy tool for microstate counts relative to a macrostate that is otherwise specified . . . ]

Y'know: let's run the random chaining of a 110-or-so monomer protein, say, 10^12 times -- ignoring for the moment the issues of chain termination by other reagents, chirality, and the ~1/2 of bonds that are not protein-like.

How many of these, even under generous conditions, will be verifiably in the biofunctional Cytochrome C state?

10^12 samples is at the upper limit of human experimentation, even at ~US$1 per shot and 10 s per shot to generate the sequence and sort it out. Predictably, on inspection of all 10^12 random chains using the sequencing techniques, we will not even be at ONE Cytochrome C sequence. [To see why this is so strongly the case on probabilistic grounds, cf. Bradley's estimates and cites here, as already linked. Note just who he is citing, too; and that we are only 1/2 way to the FSCI limit set by the Dembski bound, i.e., we are here looking at a softball example: try the same trick with, say, a 300 - 500 monomer enzyme that needs to form substrate and energy-molecule slots on folding, then fit into a specific biochemical pathway . . . and then multiply by the dozens to hundreds required for functional life forms. That alone is enough to show why Hoyle et al blew the whistle on the OOL game by about 1981 or so!]
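The underlying arithmetic is a one-liner; a sketch, taking ~1 in 10^65 as the assumed per-chain chance of hitting a functional Cytochrome C sequence (a Bradley/Yockey-style figure used for illustration, not a measured one):

```python
trials = 1e12   # generous upper bound on lab-scale sampling
p_hit = 1e-65   # assumed chance a random 110-mer is functional Cytochrome C

expected_hits = trials * p_hit
# For tiny p, P(at least one hit) is essentially the expected count (Poisson limit)
print(expected_hits)   # ~1e-53: effectively zero chance of even one success
```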

So, observe: just because one may be able to relatively routinely identify a sequence does not allow us to assume that, at random, we will get to any given chain. The point I and others have made still stands: to reliably get to biofunctional molecules, and/or to other cases beyond the FSCI limit of 500 or so bits of information, we have to move beyond more or less random searches to intentional ones that collapse the config space and allow us to troubleshoot and get the bugs out at a reasonable cost in time, energy and effort.

This of course underscores the point that agency is the best explanation of FSCI, especially as the complexity exponentiates when we see the many nanomachines of life at work.

Cheerio -- and let's see if I can snatch some beauty sleep before sunup . . .

GEM of TKI

The Pixie said...

Gordon

Just looking at the amino acids sequences...
[quote]TBO are speaking of the difference between clumped-at-random states and specifically biofunctional ones, which, as the later work with Cytochrome C shows, are indeed pretty isolated in the config space of protein-like sequences.[/quote]
Interesting you use the word "biofunctional" now. I thought we had established that the bio- prefix was irrelevant.

The issue here is whether the random configurations and the specified functional configuration mean anything in thermodynamics. The random configurations are observable. Each one is an observable microstate. Therefore, as I understand you (as opposed to TBO, I suspect), each is specified, as we have now established that you do not need a prior blueprint for that. And that would seem to make them all "functional", as you use the term in thermodynamics (though as you are incapable of giving a definition, this does give you the luxury of deciding that on a case-by-case basis).

I often post on a forum called ARN, and there was a recent post there about the functionality of random proteins here, by the way.

Their analysis remains valid: we have a bio-functionally specified macro-state, and compare it to that in which any old sequence of the same length will do, as presumably could have formed in the pre-biotic world -- the clumped at random macrostate.
So now we are comparing our personal desire for this one "bio-functionally specified macro-state" (even though the bio- prefix is irrelevant) against our personal desire for "any old sequence"; it is all about our perception, our personal wants.

In short, you are not free to freight TBO's analysis with any definitions of macrostate you may like to apply!
I am trying to apply yours. Of course, you are incapable of giving the exact definition (giving you the luxury of deciding that on a case-by-case basis). Seems to me TBO use macrostates to indicate what sequence they are looking for, which, yes, is different to how you seem to use it.

Of course, you can use modern sequence observing techniques to observe any and all protein sequences, but that does not lead to any material difference in the point that we have no reason to prefer any sequence over the other in the first case, and in the second, we can specify the state on ability to function in an organism.
So it is just personal preference. We can set up this experiment saying we are looking at another particular sequence, and now TBO's protein falls into the "any old sequence" group, and becomes high entropy (in the TBO methodology).

How many of these, even under generous conditions, will be verifiably in the biofunctional Cytochrome C state?
So because all you are interested in is cytochrome-c, you will ignore all the other biofunctional proteins, i.e., any other biofunctional protein in the search space is high entropy because you have decided you are looking for cytochrome-c only. This is a subjective choice, that would, if we believe you, affect the absolute entropy of the system.

Gordon said...

Pixie:

A few remarks:

1] Functional

The specific type of observed/ observable functionality relevant to a particular macrostate is always relevant!

In particular, as just noted, by observing functionality we are seeing a defined macrostate which, to be reliably accessed, requires targeted work -- work that creates here the functional sequences of Cytochrome C.

2] The random configurations are observable. Each one is an observable microstate. Therefore, as I understand you (as opposed to TBO I suspect), each is specified,

They may indeed be observable ex post facto, but in fact there is no reason to be in any particular config, so you are simply observing the outcome of a toss where it does not matter what happens in the string. The lottery here is "won" by whatever "number" happens to turn up. [That's like painting the target around the arrow after you fired it, to use WD's own comment on this one.]

Specification is independent of that particular outcome. In this case, biofunction is independently specified and is observable. It shows that a rather small fraction of 110-monomer sequences are biofunctional, of order 1 in 10^65 to 1 in 10^70, etc.

With maximal probability, a random reaction chain will NOT get into the biofunctional region within the available probabilistic resources for a reasonable empirical test. But there is an observed system that reliably and routinely produces this biofunctional state, using a pre-programmed sequence of steps. That is, we are looking at manufacturing, just as with the micro-jet thought experiment case.

3] Article, ARN and debate

You are not there looking at random generation of proteins in prebiotic soups or the like, or the production of RNA in similar soups. In short you are looking at already existing biosystems and are tinkering with them.

Further to this, this is of course the problem of undue investigator interference, i.e the insertion of intelligent design into the discussion and the empirical work. Too often the implications of that insertion are not faced fully and frankly.

4] we are comparing our personal desire for this one "bio-functionally specified macro-state"

Not at all. I have pointed out, with supporting links and data, that there is a difference between a random protein-like chain molecule and a biofunctional one. That difference is macroscopically observable, and it is specified by the obvious difference in functionality between the manufacture and resulting behaviour of Cytochrome C and the highly predictable outcome of forming a random chain of the 20-state monomers of the same length, about 110 monomers. [And NB: this is well within the Dembski bound . . .]

In short these are different macrostates with appropriate W-counts, and the W-count collapse direction is as already repeatedly pointed out.

5] Definitionitis

A macrostate is a macroscopically observable state. One way to do this is to measure the pressure, volume and temperature of a body of gas. Magnetic field intensity may be relevant in another case. Flyability of a particular config of jet parts is yet another. So is whether a particular protein-like chain functions as Cytochrome C.

In each case, we have certain information at the macro level, based on what is observed or observable [in principle at least -- thought experiments . . .], which constrains how many microstates can count as being of the given macrostate. Thence we see W and S.

That's not so hard, is it?

6] it is just personal preference.

All human activities, and considered activities especially, have in them choices we make [starting with when we speak; indeed, there is a reason the Greek root of both logic and word -- LEG- -- is, essentially, to choose].

In the case in view, after-the-fact observation of the sequence that won the lottery to produce a random sequence is worlds apart from a process that reliably gets us to Cytochrome C. We can therefore identify that a random-chain state [whatever the particular outcome] is a different macrostate from the Cytochrome C state of the protein chain of length 110 or so.

7] because all you are interested in is cytochrome-c, you will ignore all the other biofunctional proteins, i.e., any other biofunctional protein in the search space is high entropy because you have decided you are looking for cytochrome-c only.

in fact it is highly likely that NO 110-or-so monomer biofunctional protein will result from the random chaining process, for the same reason that Cytochrome C is utterly unlikely to result.

But this is all beside the point: that is, we have defined a certain observable state, Cytochrome C. A random process is unlikely to achieve this target within the gamut of available resources for a research lab [or even all research labs presently on earth working together]. So, we see that it makes sense to identify a random-chain state and a functional molecular state, and to do the associated W-counts etc.; i.e., TBO are working soundly.

In short, for good reason we can see that the most likely outcome is just as Sewell observed: the most probable thing will happen with great reliability, given the relative odds. The maximally improbable relative to chance plus necessity only [whether accessing Cy-C or other biofunctional states] is maximally unlikely to happen within the resources of a trillion-attempt try.

But, with programmed instructions and implementing machines, we can reliably get to Cy-C. Such systems invariably manifest FSCI, and raise the point that FSCI is -- in our unexceptioned observation, whenever we directly know the causal story -- the product of agency at work.

So observation and probability are telling us the same thing . . .

GEM of TKI

The Pixie said...

Gordon

They may indeed be observable ex post facto, but in fact there is no reason to be in any particular config, so you are simply observing the outcome of a toss where it does not matter what happens in the string. The lottery here is "won" by whatever "number" happens to turn up. [That's like painting the target around the arrow after you fired it, to use WD's own comment on this one.]
Here we go circling back to those faulty and designed nanobots...

The first time, the nanobots produce a certain assembly by accident; there is no blueprint, and so no specification (according to Dembski). Is the entropy high or low?

The second time, I design the nanobots to produce that same assembly, so this time Dembski would say there is a specification, and you seem pretty sure the entropy is low.

The same assemblies are produced both times, and you agree entropy is a state property, so if the entropy is low the second time, it MUST be low the first time. But there was no specification (in the Dembski sense at least).

Specification is independent of that particular outcome. In this case, biofunction is independently specified and is observable. It shows that a rather small fraction of 110-monomer sequences are biofunctional, of order 1 in 10^65 to 1 in 10^70, etc.
And for my second set of assemblies, "functional" is specified before the event, and is observable. And the chances of arriving at it by chance are very low too.

You are not there looking at random generation of proteins in prebiotic soups or the like, or the production of RNA in similar soups. In short you are looking at already existing biosystems and are tinkering with them.
Yes, I agree that it is not the same.

Further to this, this is of course the problem of undue investigator interference, i.e the insertion of intelligent design into the discussion and the empirical work. Too often the implications of that insertion are not faced fully and frankly.
Somehow I get the feeling this will always be the objection for any successful abiogenesis experiment.

5] Definitionitis
Sorry, maybe it is different in physics. When I do science, I like to know what the terms mean. When debating, it is especially important that terms are well defined, to avoid any suspicion that the definition is changing. And when someone tells me there is no definition for "functional", well, I start to wonder (especially when he uses biofunctional to mean the same thing, thermodynamically).

A macrostate is a macroscopically observable state. One way to do this is to measure the pressure, volume and temperature of a body of gas. Magnetic field intensity may be relevant in another case. Flyability of a particular config of jet parts is yet another. So is whether a particular protein-like chain functions as Cytochrome C.
We have to use a thermometer to determine temperature, just as we have to use an electron microscope to determine the configuration of an assembly, so clearly this is a macrostate, right?

In each case, we have certain information at the macro level, based on what is observed or observable [in principle at least -- thought experiments . . .], which constrains how many microstates can count as being of the given macrostate. Thence we see W and S.
That's not so hard, is it?

So for any flat configuration, it must be a macrostate, as it can be discerned with instrumentation, and it has only one configuration, so it has a configurational entropy of zero, right? So what does "functional" really mean?

7] in fact it is highly likely that NO 110-or-so monomer biofunctional protein will result from the random chaining process, for the same reason that Cytochrome C is utterly unlikely to result.
The articles cited at ARN suggest otherwise (or are you saying that - according to current thinking in abiogenesis - cytochrome-c was one of the proteins in first life?).

But this is all beside the point: that is, we have defined a certain observable state, Cytochrome C. A random process is unlikely to achieve this target within the gamut of available resources for a research lab [or even all research labs presently on earth working together]. So, we see that it makes sense to identify a random-chain state and a functional molecular state, and to do the associated W-counts etc.; i.e., TBO are working soundly.
Well, this is my objection. Because of the way you frame the question, you have, by personal preference, defined cytochrome-c as being functional, and effectively defined not-cytochrome-c as being non-functional, regardless of whether any not-cytochrome-c protein is really functional.

Three scientists have a test tube full of human cytochrome-c, and they want to know its configurational entropy. Scientist A wants to know the entropy of a cytochrome-c protein, so calculates all the possible configurations and gets W from that. Scientist B says there is only one configuration for human cytochrome-c (and this specific configuration is an observable macrostate, of course), so he says W is 1 and the entropy is zero. Scientist C realises that actually there are a huge number of configurations that are biofunctional (though an extremely small fraction of all possible configurations), and calculates W on that basis. They end up with three different values for the entropy. Who is right and why?
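To make the three calculations concrete, here is a sketch with illustrative numbers only (a 110-residue chain of 20 amino acid types, and an assumed biofunctional fraction of 1 in 10^65; S = k ln W, worked in logs to avoid overflow):

```python
from math import log

k_B = 1.380649e-23  # Boltzmann constant, J/K

lnW_A = 110 * log(20)                  # Scientist A: every possible sequence
lnW_B = log(1)                         # Scientist B: exactly one sequence, W = 1
lnW_C = 110 * log(20) - 65 * log(10)   # Scientist C: the biofunctional subset

S_A, S_B, S_C = (k_B * lnW for lnW in (lnW_A, lnW_B, lnW_C))
print(S_A, S_B, S_C)   # three different "configurational entropies" for one test tube
```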

In short, for good reason we can see that the most likely outcome is just as Sewell observed: the most probable thing will happen with great reliability, given the relative odds.
Well, duh! The issue is whether there is a connection to the second law.

The maximally improbable relative to chance plus necessity only [whether accessing Cy-C or other biofunctional states] is maximally unlikely to happen within the resources of a trillion-attempt try.
Okay, I missed something here. Where did you calculate the probability of cytochrome-c arising through a well-regarded abiogenesis scenario (i.e., not just 110 amino acids happening to float together)? Where did you calculate the probability of an intelligent agent doing it?

The Pixie said...

A new thought experiment.

I have a large flat box, 3 identical cups and 12 identical counters, that I will use to model statistical thermodynamics. The cups are molecules, the counters are energy, the box is a volume in space.

The entropy due to mass configuration: The box is 1m by 1m (the cups can only move in 2d of course) and the cups have a diameter of 5 cm.

Attempt 1: The NUS divide the volume into a number of cells equal to the number of particles, so this would be three, each of which can have one particle. And as the particles are identical, this means W is one -- so the entropy is zero!

Attempt 2: Forget the NUS paper; let us say that the box is divided into 16 cells, each of which holds at most one cup. Now W = 16! / 3! / 13! = 560. Easy.

Attempt 3: What about if we have 400 cells, each just big enough for a cup -- what is the entropy now? W = 400! / 3! / 397! = 10,586,800.

Curiously, the entropy due to mass configuration changes depending on the arbitrary decision on how we divide the volume up! Maybe you have another way to do this calculation, which we can compare against TBO and NUS. I make no claim that I am doing it right. Indeed, my point is that there is no right way to do it!

The entropy due to energy configuration: We have 12 counters to put in 3 cups. One arrangement is to have 4 in each cup; another is to have 12 counters in one cup, none in the others. I count a total of 19 unique arrangements, so W is 19.

It is worth noting that W falls as temperature falls (i.e., you have fewer counters), of course. If you have only three counters, then W = 3. If you have no counters, W = 1. Curiously, the entropy due to mass distribution is the same at any temperature.
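These counter-in-cup W's are easy to check by brute force; a sketch that enumerates the distinct arrangements of n identical counters in 3 identical cups (it gives W = 19 for 12 counters, treating both counters and cups as interchangeable, with empty cups allowed):

```python
def count_arrangements(counters):
    """Distinct ways to put `counters` identical counters into 3 identical cups
    (empty cups allowed): i.e., partitions of `counters` into at most 3 parts."""
    seen = set()
    for a in range(counters + 1):
        for b in range(counters - a + 1):
            c = counters - a - b
            # Sort so that arrangements differing only by cup labels coincide
            seen.add(tuple(sorted((a, b, c), reverse=True)))
    return len(seen)

print(count_arrangements(12))  # 19
print(count_arrangements(3))   # 3  (W = 3, as above)
print(count_arrangements(0))   # 1  (W = 1, as above)
```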

The thing about the energy distribution is that it is not arbitrary. The energy levels are determined by the laws of nature (and the counters can only go in cups), while the cells used for calculating mass distribution were decided by personal whim.

Incidentally, if the volume of the box changes, then to correspond to statistical thermodynamics the energy levels get closer together; so in this analogy, you would replace the counters with a larger number of smaller counters. The total weight of counters (cf. total energy) is the same, but there are more ways it can be rearranged.
_______________________________________________________

I have asked you to quote your textbook to back up your claims about mass distribution and what "functional" means. I thought I would check mine. From Physical Chemistry, 2nd Ed. by PW Atkins, chapter 20, "Statistical thermodynamics: the concepts"

"The energies of atoms and molecules are confined to discrete values, and the last few chapters have shown how these energies may be calculated, determined spectroscopically, and related to sizes and shapes of molecules. The next major step is to show how a knowledge of these permitted energy levels can be used to account for the behaviour of matter and thermodynamics."
Heavy emphasis on energy and energy levels.

"Consider a system composed of N molecules. Although the total energy can be specified it is not possible to be definite about how that energy is distributed..."
There is that distribution of energy - and nothing about distribution of mass that I could find.

"When entropy was introduced in Chapter 5, it was presented as a measure of the distribution of energy. Since the partition function is also a measure of the distribution of energy, it is reasonable to suspect a relation between them. The nature of the relation can be discovered by thinking about the connection between the internal energy U and the energy levels and populations."
Again, very clearly, Atkins is talking about the distribution of energy (his emphasis, by the way). He later looks at the entropy of a monatomic gas, to derive the Sackur-Tetrode equation:
"Connection with the material of Part 1 is made by noting that the change of entropy when a perfect gas expands isothermally from Vi to Vf is given by the difference of two expressions of the form of eqn (20.3.13):
deltaS = Sf - Si = nR ln aVf - nR ln aVi = nR ln (Vf/Vi)
where aV is the collection of quantities inside the logarithm. This expression is exactly the same as that deduced on the basis of thermodynamic arguments, where only changes of thermodynamic quantities could be treated."
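As a quick numerical check of that final expression (a sketch: one mole of perfect gas doubling in volume isothermally):

```python
from math import log

R = 8.314462618  # molar gas constant, J/(mol K)

def delta_S_isothermal(n_mol, V_i, V_f):
    """Entropy change for isothermal expansion of a perfect gas: nR ln(Vf/Vi)."""
    return n_mol * R * log(V_f / V_i)

print(delta_S_isothermal(1.0, 1.0, 2.0))  # ~5.76 J/K on doubling the volume
```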

Interestingly in that chapter he says (about the word "ensemble"):
"Like so many scientific terms this has basically its everyday meaning, but sharpened and refined into precise significance."
Odd that the word "functional" is so poorly defined in thermodynamics...

Gordon said...

Pixie:

A few overall remarks are in order.

First, the very fact of this long interchange shows that the thought experiment I posted at the first carries considerable force in argument -- otherwise it would have been brushed aside long ago.

Further to this, it is long since shown that there is an observable and calculable phenomenon that can be reasonably identified as configurational entropy. In particular, the distinction between a clumped-at-random state [which in certain special cases we may observe at leisure after-hand -- painting the target ex post facto, so to speak] and a functionally configured one is plain.

Yet more, there is a clear and distinct collapse in the W-count as we move from the scattered/diffused to the clumped and then on to the configured state. The first of these is amply shown by diffusion: it does not spontaneously reverse, because of the statistical properties of the accessible microstates and the associated overwhelming probabilities. The second is shown by the observed [and potentially observable . . . thought experiment] functional difference between a clumped-at-random state and a functional one, and also by the obvious programmed, step-by-step constraints imposed in vivo to reliably achieve bio-functional macromolecules, as compared with what we may reasonably expect from realistic pre-biotic soup model environments, and what we may observe on the likely outcomes of random chaining, even when we constrain the system to achieve the proper bond types and chirality for DNA or proteins.

In short, there is a serious and unanswered case on OOL -- a notorious fact. That fact is best explained -- note my use of IBE here -- relative to the statistical form of thermodynamics, as TBO and others have inferred or worked out.

In that light, I find that the many attempts to avert the consequences of the above are all fundamentally flawed. I will briefly note some further points:

1] Nanobots assembling clumped clusters of microjet parts

The W-counts relative to the scattered state are plainly and demonstrably low. That is, configurational entropy is seen in action. Cf. diffusion.

Then, relative to the macrostate that is a flyable jet, the Sconfig of a clumped-at-random state is obviously high. Again, Sconfig in action.

Now, you have proposed a case in which an arbitrarily chosen config is manufactured using nanobots. Now, what is that config? Is it a flyable one, or is it something arbitrary, where pretty nearly any other config would do? [In short, I am again pointing to the fact that in effect you have used an intelligent process to target a random state; thus you are in the clumped-at-random state, not a functionally specified state. The only specification there is the actual pattern, and the only relevant function is conformity to an arbitrarily chosen pattern of high K-complexity. A look at Dembski on specification, as long since linked, will show that that is precisely what he does not have in mind. Indeed, Sewell's remarks on simply describable configs -- less than 1,000 words in English, etc. -- say more or less the same.]

The rest has long since been adequately discussed.

2] When I do science I like to know what the terms mean.

I am pointing to the basic point that there are primitives of definition, even in the most precise science of all, mathematics -- on pain of infinite regress. And there are philosophical reasons for that: we first recognise examples and counter-examples of something, then we develop precising descriptions thereof that reliably separate examples and non-examples.

But in a lot of cases we can neither do genus and difference, nor a necessary and sufficient descriptive statement. E.g, kindly define "life" in a way that includes all examples and excludes all non-examples. But, Biology is certainly a science. So, we have to deal with the reality of the world we study as it is, not as we would like it to be.

This holds for Physics, and it holds for chemistry too.

I also do not indulge in debate, but in critical dialogue under comparative difficulties. Debate is best defined as that wicked art that makes the worse appear the better case, being aided and abetted therein by rhetoric, the art of persuasion, not proof. [Jefferson, adapted]

On the particular issues in view, I have long since given a clear enough example of the difference between scattered, clumped and configured macrostates and the associated W-counts and entropy of config. In so doing, I have also abundantly shown that functionality is a relevant, energy-linked, observable specification that in some cases is of entities that are information-rich in the sense of beyond the 500-bit limit.

That there may be other cases of different senses of particular function is irrelevant to the force of what was shown. Indeed, I have repeatedly shown that as soon as you identify a function, you have a case of W-collapse relative to the scattered-at-random start-state. But that is enough to point out that configurational entropy is a meaningful concept.

3] ARN articles

Look more closely: they are NOT describing clumped-at-random states at all, but, as noted already, are tinkering with existing functional chains.

For 110 monomers, that would entail selecting at random [even in a homochiral mix without other possible reactants present -- i.e. after a lot of intelligently directed selecting and sorting work has long since been done -- and neglecting the ~50% of non-peptide bonds that tend to happen] from 20^110 or ~1.30*10^143 possible arrangements in the config space. [We are not here binding down to the relative proportions of the monomers that Cyt-C happens to have; we are looking at a truly random polymer chain.]

Assuming 10^10 life-forms [species, if you will] and 10^6 functional proteins of 110-monomer length in each, that gives us 1 in 1.3*10^127 being functional. The odds of hitting such a functional config on any given try are much, much worse than painting one atom in the observable universe at random, then doing a lottery and picking it by chance. Doing 10^12 runs -- beyond our current reasonable lab capacity, actually -- will not, by a very long shot, make a material difference to the outcome just noted.

And, BTW, there is an island of functionality surrounding any one Cy C config. The W-count is not 1, even among humans.
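As a cross-check on the arithmetic above, a few lines of Python reproduce the figures. (The 10^16 functional-protein count is the text's illustrative assumption, not a measured value.)

```python
# Back-of-envelope check of the 110-monomer odds discussed above.
from math import log10

chain_length = 110
configs = 20 ** chain_length          # size of the 110-monomer sequence space
functional = 10 ** 16                 # assumed: 10^10 species x 10^6 proteins

odds_exponent = log10(configs) - log10(functional)   # ~127.1
print(f"config space ~ 10^{log10(configs):.1f}")
print(f"odds per try ~ 1 in 10^{odds_exponent:.1f}")

# 10^12 independent draws still leave the hit-probability negligible:
p_any_exponent = 12 - odds_exponent                  # ~ -115
print(f"P(at least one hit in 10^12 tries) ~ 10^{p_any_exponent:.0f}")
```

The exponent arithmetic confirms that even a trillion trials shift the probability only from ~10^-127 per try to ~10^-115 overall.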

4] NUS counter-example

First, go look at Gibbs and those who followed him on making cells to identify a config space. As soon as you include degrees of freedom on momentum, you are compounding the space well beyond 3-D, I am afraid. [That is, phase space is a multidimensional entity.]

Second, we are not dealing with just unconstrained arbitrary divisions of phase space; we are looking at diffusion [with sufficiently low particle speeds that we can neglect momentum, which allows collapsing the space to a locational one].

We are also looking at exhaustive and credibly unique locations for the particles in the space -- a constraint that you are missing in your little exercise in rhetoric as science. [Notice how I used ~ 1 micron parts and locations in my vat example, which you of course neglect to address. Observe the implications of the move from scattered to clumped to configured states for that case.]

5] Atkins cites

Fine so far as they go, but materially irrelevant to the cases in view. That has always been the problem you have had.

Try out diffusion, and look again at free expansion. In this last, access to fresh locations, without changing available energy [no work is done, no heat is imported or exported], shifts the possible arrangements of matter and energy, leading to a rise in entropy. That is, entropy of configuration is relevant.

And, functionality is plainly sufficiently well defined to identify observationally, certain relevant macrostates that are very hard to access at random in config space.

BTW, I found a recent discussion on the chemical basis of life which made the point that such life would not emerge in the gaseous state, precisely for want of adequate constraint on the locations of the relevant molecules. In the solid state, molecules will by contrast be over-constrained. So, we come to the need for a fundamentally liquid medium as the Goldilocks zone. Thence, the FUNCTIONAL importance of water to life, and the associated issue of just-right constraints on configuration.

Not all arrangements of even the right biopolymers will work! Similarly, not just any config of jet parts will fly. There is a config constraint, accessible to programmed assembly but not reasonably accessible to at random clumping, and which ends up in an observable, simply specifiable, macrostate that is relative to a complex entity storing over 500 bits of information, in both cases.

TBO is right, Sewell is right, and Dembski is right.

GEM of TKI

The Pixie said...

Gordon

First, the very fact of this long interchange shows that the thought experiment I posted at the first carries considerable force in argument -- otherwise it would have been brushed aside long ago.
Hmm, doubtful reasoning. Your thought experiment is sufficiently displaced from reality that it is not clear what is happening in the vats, and it was only after some weeks that I was even able to determine that you think that unimportant; it might as well be magic as nanobots. Furthermore, I still am not clear how you resolve the inconsistency. Sure, there may not be one, but the reason this keeps going on is because you seem unable to state clearly whether the assemblies the nanobots make the first time around (no specification) or the second time (specified, non-flying) are low entropy, like your nano-planes.

Further to this, it is long since shown that there is an observable and calculable phenomenon that can be reasonably identified as configurational entropy.
Sure. In fact there are several phenomena that can be reasonably identified as configurational entropy, including NUS's and TBO's. Whether any of them are governed by the second law is, of course, still under dispute.

In particular, the distinction between a clumped-at-random state [which in certain special cases we may observe at leisure after-hand -- painting the target ex post facto so to speak], and a functionally configured one is plain.
Well, no, this is not clear at all, as I have already mentioned. But you go on to cloud the issue further...

Now, you have proposed a case in which an arbitrarily chosen config is manufactured using nanobots. Now, what is that config? Is it a flyable one, or is it something that is arbitrary and pretty nearly, any other config would do? [In short, I am again pointing to the fact that in effect you have used an intelligent process to target a random state, thus you are in the clumped-at-random state, not a functionally specified state. The only specification there is the actual pattern, and the only relevant function is conformity to an arbitrarily chosen pattern of high K-complexity. A look at Dembski on specification, as long since linked, will show that that is precisely what he does not have in mind. Indeed, Sewell's remarks on simply describable configs -- less than 1,000 words in English, etc. -- say more or less the same.]
The rest has long since been adequately discussed.

Well I have to say that until now I thought you were saying that the assemblies built to a prior specification - albeit an arbitrary and non-functional one - were low entropy. For example, previously, in respect of this example, you said:
"On a related point, there is simply no contradiction: if you are able to use a blue print to generate an observationally identifiable macrostate, then the entropy is low. If you cannot, you have simply gone the long way around to get to a member of the clumped at random state."
I thought we had established that the assemblies were in an observationally identifiable macrostate (using an electron microscope), and had been made to a blueprint, so they were low entropy.

See, this is why the nanobots keep on going. Not that they are unassailable, but that you seem to change your mind from post to post. It is not that I am attacking an impregnable fortress, but that I am attacking mirages!

E.g, kindly define "life" in a way that includes all examples and excludes all non-examples.
How about: A dynamic, complex, integrated, self-regulating system of chemicals (by complex, I suggest greater than 100 different chemicals; and yes, that is arbitrary, but then I think the border is arbitrary).

Look more closely: they are NOT describing clumped-at-random states at all, but, as noted already, are tinkering with existing functional chains.
The only point I want to make about the ARN post is that functionality is not as rare as you claim. It would seem that for a 100+ amino acid sequence, there will be plenty of functional proteins that are not cytochrome-c (though there will be far more that are not functional at all).

And, BTW, there is an island of functionality surrounding any one Cy C config. The W-count is not 1, even among humans.
As I understand it, humans would function perfectly well with yeast cytochrome-c, but all humans produce the same cytochrome-c.

So how do you resolve the problem of the three scientists determining different configurational entropies for the same thing? It is strange that when we have real-life examples to discuss, you seem not to want to; but nano-bots, you can discuss for weeks.


4] First, go look at Gibbs and those who followed him on making cells to identify a config space.
But you could not find any reference yourself? How about this. I will look for web pages that support my arguments, you look for them that support yours.

As soon as you include degrees of freedom on momentum, you are compounding the space well beyond 3-D, I am afraid. [That is, phase space is a multidimensional entity.]
I do not understand this.

We are also looking at exhaustive and credibly unique locations for the particles in the space -- a constraint that you are missing in your little exercise in rhetoric as science. [Notice how I used ~ 1 micron parts and locations in my vat example, which you of course neglect to address. Observe the implications of the move from scattered to clumped to configured states for that case.]
Well, yes, I did miss that. Can you say how you determine "exhaustive and credibly unique locations" for the simple case of helium at STP in a 1 m^3 vessel, connected to an evacuated identical vessel? I would be interested in seeing the NUS maths applied to this real application (which is about as simple as you will get).

I bet you can just quote a bit from your textbook that says how to divide the volume in some non-arbitrary way. Or maybe not.

Oh, and for the cups in the box, why were you unable to determine "exhaustive and credibly unique locations" in this trivial case? Think how much more convincing your argument would be if only you could use the concepts you are promoting. It is this inability by you to use your thermodynamics that makes me suspect you are not so certain in your position as you appear.

Fine so far as they go, but materially irrelevant to the cases in view. That has always been the problem you have had.
Yes, that has been the problem. I find credible sources stating clearly that mass distribution is not applicable, and you declare them irrelevant. It might be convincing if you could explain why.

Try out diffusion, and look again at free expansion. In this last, access to fresh locations, without changing available energy [no work is done, no heat is imported or exported], shifts the possible arrangements of matter and energy, leading to a rise in entropy. That is, entropy of configuration is relevant.
Did you see the bit I quoted that explains this?

How about where I explained it earlier myself?

Or where I quoted Lambert earlier?

Twice?

I have explained free expansion several times now, and you have raised no objection to the explanation. Frankly, I find it dishonest that you bring it up again as though it is still a problem for me. Either explain why you reject my explanation, or accept that it stands.

BTW, I found a recent discussion on the chemical basis of life which made the point that such life would not emerge in the gaseous state, precisely for want of adequate constraint on the locations of the relevant molecules. In the solid state, molecules will by contrast be over-constrained. So, we come to the need for a fundamentally liquid medium as the Goldilocks zone. Thence, the FUNCTIONAL importance of water to life, and the associated issue of just-right constraints on configuration.
Are you using "functional" here to mean the same as you have before? It does not sound like it.

Not all arrangements of even the right biopolymers will work!
Of course not. But that is no reason to suppose the non-working arrangements must necessarily have higher entropy.

Gordon said...

Pixie:

First, an admin point. The thread is now of unwieldy length, and has plainly deadlocked. Kindly prepare a closing comment, and then I will note on it my own final remarks, and we bring this thread to a close.

General points:

1] Thankfully, this discussion has been largely without the sort of incivility that has too often characterised exchanges on any of the matters of this type. Thanks are due to Pixie for that.

2] The fundamental issue remains, pretty much as TBO put it 20 years ago, and as Sewell more recently summarised. Namely:

--> For systems characterised by energy-related behaviour, and tied to configurations of matter that are associated therewith, observable macrostates are as a rule associated with a great many microscopic configurations of mass and energy, microstates.

--> By a fundamental principle of probability reasoning as applied, once there is no particular reason to prefer any particular cluster of microstates, accessible microstates are viewed as equiprobable.

--> This normally results in a predominant cluster of states dominating the observed behaviour, i.e. systems that are not constrained otherwise tend to be in "equilibrium."

--> As a direct result, when systems that are significantly away from that state are allowed to act on their own [i.e. by chance and necessity only], by balance of probabilities they spontaneously migrate towards equilibrium, i.e. to the more probable clusters. (For instance, a dye drop in a beaker tends to diffuse, and we will not normally observe the spontaneous re-gathering of the dye particles. Similarly, free expansion is irreversible.)

--> This is the statistical form of the 2nd law of thermodynamics.

--> Moreover, simply opening up a system to the injection of raw energy and/or mass does not materially shift the balance of probabilities. This can be seen in the behaviour of, e.g. subsystems within a wider isolated system, as Clausius et al worked out. Namely, bodies receiving raw injections of energy normally increase their entropy, i.e. they tend to augment the randomness of configuration and distribution of energy. By contrast, when a body is configured as a heat engine [or other energy converter], it may couple inputs of energy [and/or mass . . .] to perform work, but as a rule at the expense of exhausting sufficient waste energy that the overall entropy of the wider system goes up.

--> The same reasoning can be applied to the question of the spontaneous origin of energy processing systems based on functionally specified, complex information [FSCI]. Immediately, it shows why the observed pattern occurs: namely, that in all cases that we directly observe, FSCI-based systems are the products of agent action, not chance + necessity only.

--> Thanks to the generality of the underlying logic, we may therefore see that in cases of particular interest, namely OOL, origin of a flyable micro-jet, etc, the best explanation for the origin of the FSCI-based system/state is not their openness to raw injections of energy or mass, but instead to their openness to agent-based, programmed, information-directed action, which so orders the components that they are shaped into the sort of complex, functionally specified energy converters just discussed.

In short, the basic point made by Sewell, TBO etc stands.
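The "predominant cluster" idea in the summary above can be illustrated with a minimal toy model. (Assumed here: N particles independently on either side of a partition; this is not the vat example itself, just the simplest case of the statistical reasoning.)

```python
# Toy model of the predominant cluster: N particles, each in the left or
# right half of a box. The macrostate "k on the left" has W(k) = C(N, k)
# microstates; near-equal splits dominate overwhelmingly, and the
# "all on one side" macrostate has weight ~2^-N.
from math import comb

N = 1000
W_total = 2 ** N
W_equilib = sum(comb(N, k) for k in range(450, 551))  # within 5% of N/2
print(W_equilib / W_total)        # nearly all the probability weight
print(comb(N, 0) / W_total)       # 2^-1000: "all left" is never observed
```

The near-equal-split band carries essentially all of the probability, which is exactly why diffusion-like processes do not spontaneously reverse.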

Now on points of note:

1] Dismissals:

Nope . . .

2] Configurational entropy

Diffusion and free expansion etc are rooted in the statistical form of 2nd LOT, which is statistical-probabilistic, as again just summarised.

3] States and specifications

I have had two distinct things to say. First, that once we specify, we are targeting a particular state. Second, that if the state is arbitrarily [rather than functionally] chosen, we are not materially addressing the subject at hand -- the origin of certain FSCI-rich systems that act in certain observable ways. [In other words, an irrelevancy.]

The former, as to generally observable macrostate, is in the clumped-at-random state, which BTW is compressed sharply in terms of W-count relative to the scattered one. The latter is in the observable, functionally specified macrostate, e.g. a flyable micro-jet, which has a far lower W-count than the clumped-at-random state. [And I am making here the point that microstate inspection is different from a macrostate that is functional and observable -- note the point in the cite you highlighted to infer a "contradiction!" -- a point I have emphasised for several days at least. This is in part shaped by further thinking on WD's point on painting the target around the arrow after it has landed. Macrostate specification is INDEPENDENT of (as opposed to temporally prior to) microstate occurrence as such. W-counts relate to the number of microstates that correspond to a given macrostate. The significance of that point emerged as I noted on your series of objections.]

4] Clumped, microscope-inspected state of micro-jet parts

Here, we see that such a state is of course a different macrostate from the material one, a flyable jet. 3-D clumped collections, as I pointed out long since, cannot be distinguished by external inspection into flyable vs clumped-at-random states. You then resorted to a flat, "inspect-able" distribution.

I then pointed out that this constitutes a functional specification [even as, in turn, clumped-at-random is one, though not the one of particular interest], and that it naturally constrains the number of microstates associated therewith, relative to the clumped-at-random state. W collapses in turn from scattered to clumped-at-random to flat-clumped.

Similarly the flyable config collapses W-count relative to clumped at random, etc. [Indeed, the flat clumped state can be used as an intermediate assembly stage to inspect and cross-check that all relevant parts are gathered together, as a stage in assembling a flyable jet . . .]

--> Of course, as you raise all sorts of alternatives and issues, and as I follow them, there will be some wandering across the world of ideas. But the core is still quite plain: configuration can affect entropy numbers, and in a way that is linked to the issue of the spontaneous as opposed to the agent-directed pattern of change.

5] Re ARN: for a 100+ amino acid sequence, there will be plenty of functional proteins that are not cytochrome-c (though there will be far more that are not functional at all).

What are you talking about?

I, for the sake of argument, spoke of 110-monomer chains [the length of Cyt-C], and as a first approximation estimated 10^16 functional proteins of that single chain-length. Such a number [for a single protein chain length] was immediately and immensely swamped by the config space for polymers of that length -- so much so that I could have been off in my estimate by dozens of orders of magnitude and it would have made no practical difference to the result.

Let's go to 111-monomer chains and use the same illustrative number, 10^16. Since 20^111 ~ 2.60*10^144 -- a config space 20 times as big -- the functional monomers of chain length 111 would be even more swamped out. [This you acknowledge by implication, but I am highlighting the point's significance.]

We can repeat, going from 112 all the way up to 500 or so [or any suitable number], covering the range of typical proteins. I would estimate about 4*10^18 functional proteins within that upper bound, and by the same pattern of mathematics, that rather large number would be swamped at each and every stage -- and, of course, increasingly so. I could add in dozens of orders of magnitude at each stage and it would make little practical difference.

The point is -- and it is observationally well-warranted -- that at any given chain length, the functional states are overwhelmed by the config space, which is increasing exponentially all along the way.

What the folks at ARN etc. are doing is not looking at the config space as a whole, but at the effects of judiciously targeted perturbations to chain composition. That is, they are moving around in an island of functionality rather than accessing the config space as a whole at random. Were they to do the latter, by overwhelming probability the resulting chains would be non-functional [and there would be nothing interesting to report, on a trivially predictable result].
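A short sketch of the swamping pattern, using the text's assumed 4*10^18 upper bound for functional chains at every length (deliberately over-generous, to show that slack in the estimate does not matter):

```python
# At each chain length L the sequence space grows by a factor of 20, so a
# fixed (assumed, generous) count of functional chains occupies a
# vanishing and strictly shrinking fraction of the space.
from math import log10

functional = 4 * 10 ** 18     # assumed upper-bound estimate from the text
fractions = []
for L in (110, 111, 150, 300, 500):
    frac_log = log10(functional) - L * log10(20)   # log10(functional / 20^L)
    fractions.append(frac_log)
    print(f"L={L}: functional fraction ~ 10^{frac_log:.0f}")
```

Working in log10 avoids overflow; each extra monomer costs a further factor of 20 (about 1.3 orders of magnitude) in the functional fraction.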

6] 3 different entropy numbers issue

You have simply not looked at the way in which cells are assigned going back to Gibbs, and as I did above. While the pattern undercounts, that is acceptable in this context to make the point that is material.

7] Look it up on the web . . .

Sorry, I ALREADY gave you adequate information, and links.

I have pointed to the founding figure of the phase space view of thermodynamics, and have given an example of why I am making a simplified calc in the case of scattered microjet parts vs clumped. The first principles are there for all to see, and the result is plain too. Address that.

8] Helium free expansion

This is a materially different case from the diffusion case, not least because the He molecules are moving at some 100's of m/s on average, as opposed to small fractions of a mm/s. Thus, the phase space has to reckon with momentum issues as well as positional ones. This I discussed above explicitly.

And, the result is well known and long since summarised and linked: configurational constraints affect the entropy such that entropy rises on free expansion. Nor can you properly isolate the change in energy distribution from the change in mass distribution, pace Mr Lambert.

In the case of diffusion, the ink particles [or microjet parts] are moving at much, much lower speeds, being of order 1 micron in size, and are participating in the well-known Brownian motion. For ease of calculation, we can ignore momentum, and instead look at what we are interested in: why the particles spontaneously scatter, and do not spontaneously re-assemble.

So, dice up the vat into cells small enough that just one particle will fit. In my case, 1-micron cells: about 10^18 of them.

We have no reason to constrain the particles across time from accessing any free cell, and so there are 10^18 equiprobable cells and, in the case of the microjet, ~10^6 particles, so that we can see that there are in excess of [10^17]^[10^6] possible configurations. The number of these that can be assigned to clumped configs is, as also shown, much, much smaller. Thus we see that W-counts fall on clumping, and further yet on configuring to flyable configs. That is, entropy falls in the direction of moving to a flyable micro-jet, and in such a way that the probabilities of that occurring spontaneously in any reasonable observational scope are negligible. A standard result of stat thermodynamics reasoning.

If you will, you can equivalently work it out on assemblies of vats, frozen in the different accessible configs, and assess the proportions that are in configurations of interest. The overwhelming number will be in the scattered state, with relatively few clumped and much fewer still configured as flyable.

All of this -- circulation of states across time, or ensembles of frozen states -- is standard basic stat mech reasoning that can be accessed in any introductory-level text. The probability reasoning is the Laplacian principle of equiprobable outcomes in the absence of constraints on configurations [duly relabelled in the relevant theorems].
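For concreteness, a rough log-W comparison for the vat, treating the ~10^6 parts as interchangeable and taking an assumed clump region of ~10^7 adjacent cells (treating the parts as distinguishable multiplies both counts by the same factor, so the comparison is unaffected):

```python
# Compare log10 W for ~10^6 one-micron parts scattered over ~10^18 cells
# versus confined to an assumed clump region of ~10^7 adjacent cells.
from math import lgamma, log

def log10_choose(n, k):
    """log10 of C(n, k), via log-gamma to avoid astronomically large ints."""
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)) / log(10)

parts = 10 ** 6
scattered = log10_choose(10 ** 18, parts)  # parts free to occupy any cell
clumped = log10_choose(10 ** 7, parts)     # parts confined to the clump
print(f"log10 W (scattered) ~ {scattered:.3g}")
print(f"log10 W (clumped)   ~ {clumped:.3g}")
print(f"drop in log10 W on clumping ~ {scattered - clumped:.3g}")
```

The W-count falls by roughly ten million orders of magnitude on clumping alone, before any functional configuration is imposed.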

9] Atkins and mass distribution

Have you not paid attention to the fact that mass configuration and behaviour is VERY relevant to energy distribution at micro level, starting with the nature of kinetic energy available in translation, vibration and rotation?

Similarly, let us look at a simple potential well: a Hookean spring and mass. As we stretch/compress the spring, energy is stored in the location relative to the rest point:

F = kx, Hooke

Epot = 1/2 k x^2,

by simple integration [sum up the force-displacement increments from the rest point]

In short, potential energy is positionally constrained where there is a potential well. This extends to 3-D as well. [Think of a particle nested between springs in 3-D.]
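The integration step can be checked numerically by summing the force-displacement increments from the rest point (illustrative k and x values):

```python
# Midpoint-rule sum of F dx for a Hookean spring F = k x, checked against
# the closed form E = (1/2) k x^2.
k, x = 50.0, 0.2                       # N/m and m -- illustrative values
n = 100_000
dx = x / n
work = sum(k * (i + 0.5) * dx * dx for i in range(n))  # sum of F(x') dx'
exact = 0.5 * k * x ** 2               # 1.0 J for these values
print(work, exact)
```

Because the integrand is linear, the midpoint sum reproduces (1/2) k x^2 essentially exactly.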

In the case of diffusion, as noted, we are seeing how positional distribution accessed at random affects entropy.

In short you are reading into Atkins what is simply not there, for fundamental physical reasons. In many cases of interest it makes sense to focus on energy distributions, but in other cases it makes equal sense to focus on position and momentum [motion, linear and angular].

10] Free expansion etc

You have, sadly, plainly paid little or no long-term attention to the issue of the shift in boundary constraint that allows the particles of gas to access the wider region, leading to a wider box. This I have repeatedly pointed out.

11] Functional importance of water to life

I am of course pointing out that there is a reason why water in liquid state is generally viewed as a material basis for biochemically based life to work. That reason is linked to the just rightness of moving about but not too freely in the liquid as opposed to the gas or the solid states.

In the former there is so much freedom of motion that configurations of the required complexity cannot be sustained, and in the latter there is so little freedom of movement that life function is not possible.

I will simply ignore other than to note on it, the inference that I am equivocating. For, I have long since made myself clear to anyone interested to understand rather than to object.

GEM of TKI

The Pixie said...

Configuration Entropy vs the Second Law

I am wondering how the second law affects or limits changes in configuration entropy, as opposed to "thermal" entropy, in your understanding. I can envisage three possibilities, and I realise I cannot say which of these you believe is right (or perhaps a fourth, even).

Configuration Entropy as a Part of Total Entropy

According to TBO, configurational entropy is just one part of the total entropy, much like the entropy associated with rotational energy. You can look at how just the rotational energy changes, you can look at how just the configurational entropy changes. Similarly, you might want to split the total entropy into that in the system and that in the surroundings, or indeed just look at one component.

All well and good, but where does the second law fit in? The second law only requires that the total entropy does not decrease (at the macroscopic scale). The second law does not demand that the rotational entropy cannot decrease, or that the entropy of the system, or the surroundings or a single component cannot decrease (clearly under the right conditions any of these can indeed happen).

So this would lead me to believe that configurational entropy of this sort can go down as well as up, as long as the total entropy goes up.

Configuration Entropy as Another Entropy

You might view configurational entropy as something that stands on its own (as NUS seem to claim). In that case, we have to wonder if neither configurational entropy nor thermal entropy can decrease, or if one can decrease as long as the other cannot. Let us start with the former.

So we have thermal entropy, which can never decrease, and we have configurational entropy, which also can never decrease.

But we know this is wrong. Configurational entropy - at least as calculated by NUS - does decrease in some situations (eg condensation, or oil and water separating).


Configuration Entropy as An Alternative Entropy

So this leaves us with two types of entropy, and for any given process, either thermal entropy cannot decrease or configurational entropy cannot decrease. This would seem to be the NUS position. They use configurational entropy to show that liquids mix; as long as the configurational entropy is going up, we can ignore the thermal entropy. But for oil and water, they separate because that leads to the thermal entropy increasing, so we can ignore the configurational entropy.

However, this seems to have a couple of problems. In processes where configurational entropy goes up and thermal entropy goes down (or vice versa), what is it that determines the direction of the process? Why is it that thermal entropy trumps configurational entropy for oil and water (which separate), but configurational entropy beats thermal entropy for alcohol and water (which mix)?

The other problem, for the abiogenesis and evolution argument, is that this means we can ignore configurational entropy for any process where the thermal entropy is increasing. Or to put it another way, to prove a process is limited by the second law under this system, you must show that BOTH thermal entropy AND configurational entropy are decreasing.
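The configurational side of this trade-off is easy to put in numbers in the ideal-mixture approximation (which deliberately ignores the thermal/interaction terms that decide cases like oil and water):

```python
# Ideal (configurational) entropy of mixing for a two-component mixture:
# dS_mix = -n R (x1 ln x1 + x2 ln x2), which is always positive, so on its
# own it always favours mixing; whether mixing actually occurs depends on
# the other contributions to the free energy.
from math import log

R = 8.314  # gas constant, J/(mol K)

def ideal_mix_entropy(x1, n_total=1.0):
    """dS_mix per n_total moles of a two-component ideal mixture."""
    x2 = 1.0 - x1
    return -n_total * R * (x1 * log(x1) + x2 * log(x2))

print(ideal_mix_entropy(0.5))   # maximum, at a 50:50 mixture
```

For a 50:50 mole-fraction mixture this gives about 5.76 J/K per mole, the maximum of the curve.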

Gordon said...

Pixie

You have elected to restate objections to configurational entropy as your final post.

In short, you stake your case on the claim that freedom of dispersal or rearrangement of mass in an accessible volume is, in effect, not an entropy effect connected to the 2nd law -- contrary to the use by TBO in Ch. 8 of TMLO.

Appropriately, therefore, I will first again link to and excerpt the presentation by Prof Bertrand of U of M, Rolla, on basic thermodynamics:

----

"The universe is made up of bits of matter, and the bits of matter have mass and energy. Neither mass nor energy may be created or destroyed, but in some special cases interconversions between mass and energy may occur . . . The freedom of the universe can take many forms, and appears to be able to increase without limit. However, the total freedom of the universe is not allowed to decrease.

The energy or the mass of a part of the universe may increase or decrease, but only if there is a corresponding decrease or increase somewhere else in the universe. The freedom in that part of the universe may increase with no change in the freedom of the rest of the universe. There might be decreases in freedom in the rest of the universe, but the sum of the increase and decrease must result in a net increase . . . .

The freedom within a part of the universe may take two major forms: the freedom of the mass and the freedom of the energy. The amount of freedom is related to the number of different ways the mass or the energy in that part of the universe may be arranged while not gaining or losing any mass or energy. We will concentrate on a specific part of the universe, perhaps within a closed container. If the mass within the container is distributed into a lot of tiny little balls (atoms) flying blindly about, running into each other and anything else (like walls) that may be in their way, there is a huge number of different ways the atoms could be arranged at any one time. Each atom could at different times occupy any place within the container that was not already occupied by another atom, but on average the atoms will be uniformly distributed throughout the container. If we can mathematically estimate the number of different ways the atoms may be arranged, we can quantify the freedom of the mass. If somehow we increase the size of the container, each atom can move around in a greater amount of space, and the number of ways the mass may be arranged will increase . . . .

The thermodynamic term for quantifying freedom is entropy, and it is given the symbol S. Like freedom, the entropy of a system increases with the temperature and with volume . . . the entropy of a system increases as the concentrations of the components decrease. The part of entropy which is determined by energetic freedom is called thermal entropy, and the part that is determined by concentration is called configurational entropy."

-----

Here, we see a general discussion of the nature of entropy, and of why there is a division of labour, so to speak, between entropy as connected to energy and entropy as connected to mass. For, the degrees of freedom of mass and of energy may vary independently of one another. (And, I may add an observation: volume/concentration does not inherently relate to the macro-scale -- if a monomer in a chain molecule is locked into one point in the chain, it is confined volumetrically; but if it may move anywhere along the chain, it is released in terms of the volume it may occupy. Thus, configurational freedom or confinement within the volume of an identifiable unit is what is important, not the scale of the unit in metres or sub-multiples thereof.)

Going on to two key examples, we note:

Free expansion: "No Change in Temperature; No Change in Energy; No Change in Thermal Entropy --> Increase in [accessible] Volume; Decrease in Concentration; Increase in Configurational Entropy"

Diffusion/mixing: Here, the accessible volume opens up and of course the dye molecules [etc] spread out across the medium irreversibly.
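As an aside, the free-expansion case can be checked with a line of arithmetic. The sketch below (Python, my own illustration, not taken from any of the sources cited here) computes the configurational entropy change ∆S = nR ln(V2/V1) for an ideal gas whose accessible volume grows at constant temperature:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def free_expansion_entropy(n_moles, v_initial, v_final):
    """Configurational entropy change for free expansion of an ideal gas.

    Temperature (and hence thermal entropy) is unchanged; only the
    accessible volume grows, so dS = n R ln(V2/V1) > 0 when V2 > V1.
    """
    return n_moles * R * math.log(v_final / v_initial)

# Doubling the volume accessible to one mole of ideal gas:
dS = free_expansion_entropy(1.0, 1.0, 2.0)
print(round(dS, 3))  # n R ln 2, i.e. about 5.763 J/K
```

The sign of the result tracks the direction of spontaneous change: expansion (more accessible cells) raises configurational entropy, and the reverse never happens on its own.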

This brings us to the case of the NUS term paper by Poh and Leong [assuming they use family name first, the Chinese standard . . .], closely supervised by Chin [cf. p. 33], in which diffusion in favourable mixing cases is discussed, starting on p. 15. [The balls "model", in short, is a model of DIFFUSION (in its many guises -- I detect an eye to device physics and electronics, BTW . . .) based on the accessibility of the two layers to balls of diverse colour.]

In their discussion they introduce and discuss the issue of distinguishable macrostates and statistical weight; then, through the principle of statistical weights [which I prefer to "thermodynamic probability," following Mandl in the classic Manchester Physics series], they assign the most probable state to be observed as that with the greater weight.

On p. 17, they correctly note: "The fundamental relation between the macroscopic entropy function, S, and the number of corresponding microstates, W, is expressed by the Boltzmann equation. This equation “forms the bridge between the second law of classical thermodynamics and the atomic picture”.17 [Fast, p. 62.] . . . The equation is S = k ln W, [eqn 30.]"

A bit of discussion on how the classical and microscopic views are bridged through this log relationship leads on p. 18 to the conclusion, "The state of maximum entropy corresponds to the most probable macrostate."
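To make that bridge concrete, here is a minimal sketch (my own, in Python; the toy two-half container is an assumption for illustration, not Poh and Leong's worked example) of S = k ln W for N particles distributed between the two halves of a container. The macrostate of maximum entropy is simply the macrostate of maximum statistical weight:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def weight(n_total, n_left):
    """Statistical weight W: the number of ways to place n_left of
    n_total particles in the left half of a container."""
    return math.comb(n_total, n_left)

def entropy(w):
    """Boltzmann relation S = k ln W."""
    return K_B * math.log(w)

N = 20
weights = [weight(N, n) for n in range(N + 1)]
# The macrostate of maximum entropy is the one of maximum W:
# the even 10/10 split, not the all-on-one-side 20/0 configuration.
print(weights.index(max(weights)))  # -> 10
```

Note that the all-on-one-side macrostate has W = 1 and so S = 0, while the even split carries the overwhelming bulk of the microstates; that is the whole content of "maximum entropy corresponds to the most probable macrostate."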

All of this is actually commonplace, but is important on the points in dispute.

They then proceed to the cells model first used by Gibbs, and note:

"The next logical question, then, is how do we determine the value of W in the first place? It is meaningless to talk about microstates in entropy without specifying the type of entropy we are dealing with, for different forms of entropy entail different methods for calculating W.

There are two forms of entropy – configurational entropy and thermal entropy.19 To calculate W, we need an explicit model for each microscopic system. For configurational entropy, we use the cell model, in which “the space available to a molecular mixture is divided into cells of equal size” for molecules to be distributed spatially into the cells.20 As for thermal entropy, the relevant model is a collection of mass-spring harmonic oscillators. These terms may be unfamiliar at this stage, hence we need further elaboration." [p.19]

They then note the spreading in space of energy and of mass as the core points to be used in such modelling. [I note, as above, that so long as we can observe the difference, the length-scaling of the accessible volume is irrelevant to the underlying point of freedom of location within a region of space.]

So we come to diffusion, as an example of configurational entropy:

"Configurational entropy deals with the extent of disorder in the position of particles in space. Since configurational entropy is related to the distribution of position in space, it has a lot to do with the physical process of mixing. For example, a drop of red dye in a beaker of clear water spontaneously turns into a mixture of light red solution (though it takes some time to diffuse completely). The opposite process never happens. We never observe the light red solution separating spontaneously into clear water and a drop of red dye floating around in one localised position in the water." [p.20]

That is, they are explicitly discussing those cases where diffusion happens, then proceed to a simple cell model of dispersal across an accessible set of locational cells by molecules of X and O, deducing along the way the relationship W = N!/(x1! x2! . . .) as a way to avoid double counting of microstates.
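That multinomial counting can be sketched directly (Python, my own toy numbers, not the paper's worked case):

```python
import math

def multiplicity(*counts):
    """W = N! / (x1! x2! ...): the number of distinguishable arrangements
    of the given species counts over N = sum(counts) cells, with division
    by the factorials avoiding double counting of identical molecules."""
    n = sum(counts)
    w = math.factorial(n)
    for x in counts:
        w //= math.factorial(x)
    return w

# 10 X molecules and 10 O molecules distributed over 20 cells:
w_mixed = multiplicity(10, 10)
print(w_mixed)  # 184756 equally probable arrangements
# Any one fully segregated layout (all X in a designated half) is just
# a single one of these 184,756 arrangements, hence maximally unlikely
# to arise or persist by chance.
```

The same function handles any number of species, which is all the cell model needs for the mixing cases the paper treats.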

By p. 24, they proceed to discuss "the configurational entropy change, ∆S, when we “mix n1 moles of pure substance 1 and n2 moles of pure substance 2 to form a solution with mole fractions X1 and X2”.28" Thus, they are again discussing cases where the mixing is "favourable," as they spoke of and as has already been discussed in this thread -- i.e. cases where, say, the differences between oil and water block mixing (the cells become inaccessible, in short) are irrelevant to the subject explicitly in view; i.e. cf. red dye in clear water etc.

So, having calculated the configurational entropy for a favourable mixture by p. 24, they go on to note: "This result based on the cell model is satisfactory to ideal gases, liquid solutions and solutions of solids. This is because the cells merely represent the “average volumes that the gas or liquid molecules occupy in space”.30 " [Cf. here the use of a 1-micron cell for the microjet parts in the thought expt, noting that the part size is also just right for the parts to become part of the Brownian motion -- i.e. to act as giant molecules and disperse through the vat.]

This leads up to the context for the remark you have latched onto on p. 25. First we see eqn 48: "∆S = – R Σ (ni ln Xi) -- (48)", then we see: "What we need now is an understanding of how we can conclude that the spatial spreading of matter is favourable. This next step is relatively easy, to much relief. Since mole fractions are always less than one, therefore the term “ln Xi” is always negative. This implies, from equation (48), that ∆S is always positive."

While we may nit-pick at their phrasing, the use of mole fractions, and the context above, show that they are speaking of cases where such diffusional mixing to form mole fractions does occur. [That excludes cases where such mixing does not occur, by direct implication; i.e. we are not here concerned as to why the "molecules" in the walls of the beaker or vat, or the oil drops in the fingerprints on the inside of the vat or beaker, etc., do not by and large join in with the dye-spreading. So, let us lay such a red herring to one side.]
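Eqn (48) is easy to check numerically. The sketch below (Python, my own illustration; the equimolar binary mixture is an assumed example, not the paper's) shows that for any genuine mixture every mole fraction is below one, so every ln Xi term is negative and ∆S comes out positive:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def mixing_entropy(*moles):
    """Configurational entropy of mixing, dS = -R * sum(n_i * ln X_i),
    where X_i = n_i / n_total is the mole fraction of species i.

    Every X_i < 1 implies ln X_i < 0, so dS > 0 for any real mixture:
    spreading of matter in space is spontaneous.
    """
    n_total = sum(moles)
    return -R * sum(n * math.log(n / n_total) for n in moles)

# Equimolar binary mixture: dS = -R(0.5 ln 0.5 + 0.5 ln 0.5) = R ln 2
dS = mixing_entropy(0.5, 0.5)
print(round(dS, 3))  # about 5.763 J/K per mole of mixture
```

Any mole numbers whatever give a positive ∆S here, which is the paper's point: where the cells are mutually accessible, the mixed macrostate is the spontaneous terminus.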

Their main conclusion follows: "Spreading of matter in space is spontaneous as configurational entropy increases."

In short, we see a useful illustration of entropy of configuration at work, and note that the scale of the confining volume is irrelevant to the case, so long as the relevant cells are such that there is access to moving about one way or another.

So, we have come back to the original point: there is a spontaneous direction of change, and so long as particles are free enough to have access to such distributions, they do not spontaneously move to highly constricted macrostates that are relatively improbable. Where such states are functionally specified and based on complex information, that shows us why, in our observation, we normally only see these as the result of intelligently directed, information-based action, when we directly observe the causal events.

Okay, that's the main point.

On specific notes:

1] Is S config part of total entropy?

Answer: yes. And, as S is a state function, it is separable for analytical purposes [In the above we do not need to address the components due to rotational and vibrational states of the dye molecules etc. Nor, since translational velocities are tiny, do we need to worry unduly about that component of the state. For of course degrees of freedom are by definition independent one from the other.]

2] where does the second law fit in?

Through the bridge from micro- to macro-scales: the spontaneous direction of change is towards the higher degree of freedom; change in the opposite direction, under program control as in bio-molecule assembly or micro-jet assembly, leads to compensating changes elsewhere that keep net entropy at least constant.

Thus, by the underlying reasoning behind the macroscopic form of the law, we see why FSCI-based structures do not spontaneously emerge apart from program-directed action, in our observation.

3] configurational entropy of this sort can go down as well as up, as long as the total entropy goes up.

It can as can thermal entropy, but again the issue is the spontaneous direction of change. Chance + necessity only do not in our observation give rise to FSCI-based complex entities beyond the 500 or so bit limit.

4] You might view configurational entropy as something that stands on its own (as NUS seem to claim).

They, and I, and for that matter Prof. Bertrand and TBO, are only speaking of configurational entropy as a COMPONENT of overall entropy that may be separated out for analysis.

Further to this, obviously, both components of entropy may decrease or increase, so long as there is a compensating overall increase in entropy due to what happens elsewhere in the universe or in the relevant isolated system. [Cf. the case of interacting bodies A and B in an isolated system, and the case where B becomes a heat engine.]

This, I have long since elaborated on so it is astonishing that this should resurface. (Onlooker, start with the original post above!)

5] So this leaves with two types of entropy, and for any given process, either thermal entropy cannot decrease or configurational entropy cannot decrease.

The 2nd law, classical form is in effect that entropy of the universe or an isolated subsystem thereof, will not NET decrease. Components within such a subsystem may indeed decrease entropy so long as there are compensating changes elsewhere.

Taking the statistical view, we see that this is as a result of overwhelming balance of probabilities in favour of more heavily weighted macrostates.

When we deal with the configurational component, the point is that once accessible microstates are equiprobable, in effect, then we will see that the macrostate that tends to dominate will be that which bears the heavy statistical weight. In particular, FSCI-rich microstates [which are ultimately in view] in our observation are accessed by programmed action that assembles them through imposing constraints that lead to that outcome.
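A toy calculation (my own, with hypothetical numbers) shows how lopsided the weights become. If each of n dispersed parts may, independently, occupy any equal-sized cell with equal probability, then the chance that all of them land within some designated small fraction of the volume collapses geometrically with n:

```python
def prob_all_in_fraction(n_parts, fraction):
    """If each part independently occupies any equally probable cell,
    the chance that all n_parts simultaneously sit inside a designated
    fraction of the volume is fraction ** n_parts."""
    return fraction ** n_parts

# Chance that a mere 100 dispersed parts all re-clump, by chance,
# into a region occupying 1% of the vat:
p = prob_all_in_fraction(100, 0.01)
print(p)  # on the order of 1e-200: statistically invisible
```

And 100 parts in 100 cells is a vastly more generous setup than a flyable micro-jet or a functional protein, where the parts must not merely co-locate but take up a specific configuration.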

E.g. cf. the vats thought expt above, in which, once parts are infused, they actually disperse. By overwhelming probability they will not naturally clump, much less configure to form a flyable micro-jet. But through programmed action, we can credibly produce the same. [After all, we do that routinely on full-sized jets -- and the relevant considerations do not depend on whether the parts are micron- or metre-sized . . .]

Also, the cellular protein assembly mechanism, which leads to reliable creation of proteins through programmed, controlled action based on said complex proteins.

In short, we are seeing the onward operation of a system here that has self-replicating capacity. Its spontaneous origin is so utterly improbable on the relevant grounds, that we may safely infer that agency is the best explanation.

6] The other problem, for the abiogenesis and evolution argument, is that this means we can ignore configurational entropy for any process where the thermal entropy is increasing. Or to put it another way, to prove a process is limited by the second law under this system, you must show that BOTH thermal entropy AND configurational entropy are decreasing.

Of course, this is first based on the above flawed argument on points.

Further to this, the relevant degrees of vibrational and rotational freedom, and the distributions thereof across monomers within the chains of biologically relevant polymers, are not materially different between the clumped-at-random polymer and the biofunctional one. [They are at about the same temperature, would be destroyed by about the same increase in temperature, use about the same bonding, etc.] In short, "heat em up" would amount to the same result: "cook em."

So the basic point still holds. We see that config entropy is a relevant component of overall entropy that can be isolated for analytical purposes [as TBO did in their discussion].

Then, the issue here is freedom to move about in a certain volume: confinement to particular cells within that volume is a lower-entropy state than freedom to occupy, in effect, any cell in the volume not "already" occupied.

The cell model being valid, it relates to the dimensions of a vat or a beaker or an industrial scale reaction vessel, and also to the more confined volume of a clumped collection of microjet parts or the chain of cells in a polymer. So, once we have observable, functionally specified, complex information based macrostates, which ARE confined, and contrast them with similar but clumped at random states, then we see that the natural direction is to form the clumped or beyond that even the scattered state, not the FSCI-rich ones.

But agent intervention, direct or indirect [through program-controlled assembly steps], can access such functional states reliably.

And so, the case made by TBO long since, stands.

I do thank Pixie for the interaction, and will use it to update my online page.

FINIS

GEM of TKI

The Pixie said...

Actually I had not read your previous post when I wrote my last, so did not realise it should be my last, but so be it. I think it highlights an important problem with your argument; that configurational entropy is free to decrease, as long as thermal entropy is increasing.

So I will just say thanks for hosting the debate, and staying polite and respectful thoughout (I have had some bad experiences too).

Hmm, I wonder if anyone else has read this down to the bottom?

Gordon said...

PS: I found the just-above overnight in the inbox, and decided that it is best to publish it. Onlookers can compare for themselves what I wrote in response to the previous post by Pixie with his final words. I am not so sure they will agree with Pixie's evaluation, but that is of course my own opinion based on my own analysis of the issue.

I note that I have also updated the web page on information, design and related issues, here, mostly in the appendix on thermodynamics, which inter alia now has in it an updated form of the nanobots example. (Just above it is an updated discussion on the statistical form of the 2nd Law of Thermodynamics.)

Thanks to Pixie for his exchange of ideas, which plainly required some significant and sustained effort.

GEM of TKI

Gordon said...

PPS: It is worth a brief remark on the comment by Pixie: "configurational entropy is free to decrease, as long as thermal entropy is increasing."

The issue is not at all whether or not any particular configuration is accessible -- i.e. whether the system is logically and/or physically "free" to take it up [as TBO, Sewell and indeed I have noted right from the beginning] -- but that the balance of probabilities is such that the system is maximally unlikely to access the relevant configurations by chance plus undirected natural regularities only.

Yes, configurational entropy is "free" to decrease; only, that freedom predictably and reliably points, by probability, towards disordered states, not functionally specified, information-rich ones.

Yes, if the thermal entropy component of a system rises, the config entropy by happenstance may indeed fall, but in fact the increased thermal agitation is most likely to disorder any FSCI-rich configuration, by the balance of the same probabilities. (E.g. heating up a glass of water in which a dye drop has been infused is free to suddenly make the drop re-assemble, but in fact by overwhelming probability, the scattered state will predominate over the concentrated/clumped one. Indeed, even moreso than in the case where we did not warm up the glass, as the thermal agitations will be more energetic.)

That is, as long since noted, importing raw energy in a system is most likely by overwhelming probability to disorder the system further.

And, that is empirically extremely well supported, as the probabilistic underpinnings of the classic form of the second law, right from Clausius' first example.

GEM of TKI

Georgi Gladyshev said...

Dear Colleague,
The origin of life can be understood on the foundations of hierarchical thermodynamics, without invoking dissipative structures or any fantastic ideas. Hierarchical thermodynamics was created on the foundation of the theory of Gibbs -- the most rigorous physical theory.
I am sending you the information. Have a look at it please.
Sincerely,
Georgi Gladyshev
Professor of Physical Chemistry

As part of its applicability thermodynamics explains everything that happens in the world.
To understand the origin of life and its evolution we should use the hierarchical thermodynamics and principle of substance stability (1977). http://www.mdpi.org/ijms/papers/i7030098.pdf
Life: The principle of substance stability establishes the common life code in the universe http://endeav.net/news/19-origin-of-life.html
The origin of life can be explained through the study of thermodynamics of universe evolution http://gladyshevevolution.wordpress.com/
Origin of Life (Abiogenesis) - Darwin's selection: the survival of stable supramolecular structures.
Why does life originate and exist now? Origin of life and its evolution are the result of action of laws of hierarchical thermodynamics and the principle of substance stability. The criterion of evolution of living system is the change (during evolution) of the specific free energy (Gibbs function, G) of this living system formation. http://endeav.net/news.html
Hierarchical thermodynamics predicted the observed effects many years ago. The phenomenon of life has been written about in many articles and some monographs (for example, see: Gladyshev, Georgi P. Thermodynamics Theory of the Evolution of Living Beings. Commack, New York: Nova Science Publishers, Inc., 1997, 142 pp.; Thermodynamics optimizes the physiology of life http://ru.scribd.com/doc/87069230/Report-Ok-16-11-2011 ; Int. J. Mol. Sci. 2006, 7, 98-110 http://www.mdpi.org/ijms/papers/i7030098.pdf ).
Hierarchical thermodynamics solves the puzzle of life. The role of catalysis:
http://gladyshevevolution.wordpress.com/article/hierarchical-thermodynamics-solves-the-puzzle-of-life/
See also: http://creatacad.org/?id=48&lng=eng , http://creatacad.org/?id=57&lng=eng , http://www.eoht.info/page/Georgi+Gladyshev+%28biography%29 , http://www.statemaster.com/encyclopedia/Georgi-Pavlovich-Gladyshev
Thermodynamic evolution http://www.statemaster.com/encyclopedia/Thermodynamic-evolution

GEM of The Kairos Initiative said...

While the KF blog is not generally accepting comments these days, given problems with abuse, this seemed worth publishing. It is to be noted that no one has in fact solved the OOL challenge on any blind-watchmaker thesis, especially the origin of a code-using von Neumann self-replicating, metabolising molecular nanomachine automaton, i.e. the living cell. KF