Evolution
Dynamics is essential for understanding a variable.
Evolving systems such as
- natural evolution
- the brain
- science
- technology
- culture
- ...
have a common feature: information, i.e. variables and their values.
A general evolving system can be described with the methods of physics by matching these quantities:
| Physics | General System |
|---|---|
| space | values of a variable |
| time | selections |
| action | information (selections × values) |
| energy | information/time |
| momenta | simultaneous components of a selection |
| mass | hidden components of selections |
Introduction
The information blog placed the variable as the basic element of evolution.
A variable consists of
- values
- an exclusive selection of one of the values
The two elements cannot be split, because
- the selection is needed to create the values
- the values are needed to allow the selection
energy = information/time
The values are information, the selection is a time step. So information/time characterizes a selection. Further down this is identified with mechanical energy.
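The idea that a selection is characterized by information per time can be sketched in code. This is a toy model, not from the text: a `Variable` class (a hypothetical name) holds the static values and counts the dynamic selection steps, and "energy" is the selection rate compared to a global duration.

```python
import random

# A toy "variable": a set of values plus a selection process.
# Hypothetical model for illustration: each step exclusively
# selects one value and excludes the others.
class Variable:
    def __init__(self, values):
        self.values = list(values)   # the static part: alternative values
        self.selections = 0          # the dynamic part: count of time steps

    def select(self):
        """One exclusive selection = one local time step."""
        self.selections += 1
        return random.choice(self.values)

v = Variable(["up", "down"])
for _ in range(10):
    v.select()

# "Information/time characterizes a selection": 10 selections
# observed over a global duration of 5 units give a rate of 2 per unit.
global_duration = 5.0
energy = v.selections / global_duration
print(energy)  # 2.0
```

The numbers are arbitrary; the point is only that the "energy" here is nothing but a count of selections divided by an external time.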
How do we know the other values that were excluded by one selection? We don't, unless we continue to observe. In the observation time the physical variable gets mapped to an image variable in the brain, the model of the physical variable. For that the physical variable must repeat. One selection physically implies the excluded values, but the physical variable can cease to exist before we can find out.
Information in our brain
We need to know the variable, i.e. the alternative values.
- When learning, i.e. acquiring new information, we need to know the alternatives; only then do we understand something.
- When we know the alternatives, then we can describe our choices and choose.
A selection process in our brain is in many ways the same as the selection process of natural evolution. A sophisticated device or a work of art can be created by both: by a person or by nature.
Our memory of variables that occurred before has created a structure in our mind:
- a value space with a topology or
- values with a probability
Our memory is a map of reality that contains some truth, valid for a limited time.
We tend to see the values as static, but only through the selections (the dynamic part of a variable) can we verify that the values still belong to the variable, or that the variable is still the same.
Different names are in use with basically the same meaning as the variable/value pair:
- set/element in mathematics
- mutations/selection in biological evolution
- tries/non-errors in searching
- random variable/event in statistics
- supply/demand in economics
- choice/choosing in our brain
- class/instance in programming
- system/state in physics
Intelligence
A system that can create new structures, new concepts,... can be called intelligent:
- People that can create new concepts
- An economy that creates new devices
- Nature that creates new organisms
- ...
A concept is understood as in the FCA blog. It is basically a value in a hierarchy of values. Such a static structure corresponds to a hierarchy of interactions and time scales, as we will see further down.
Dynamic variables that have an interaction are best described together. They form a dynamic system with related times. Here system alone shall mean a dynamic system.
Two systems that do not interact, follow separate and independent evolutions. They have independent selections, i.e. independent times.
One interaction corresponds to one time. The compartmentalization into systems depends on the type of interaction one considers.
A system consists of sub-systems, which we also call particles, because they form part of the system.
Instead of the value of a system, one normally says the state of the system. For a particle, instead of value of a particle, one normally says coordinates of the particle, point of the particle, or state of the particle. A coordinate is a static/representational variable or value, representing a component of the particle's state.
The minimal number of coordinates is the dimension, also called the degrees of freedom. Dimension derives from Latin and means measure; here we mean not the measure itself, but the number of measures.
- A particle moving in a circle has one dimension, even if described by three cartesian coordinates.
- N particles can be considered separately or in a flat way with 3N space coordinates, which in our general sense here can be considered as one big particle.
The choice of representation must suit the kind of interaction.
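The circle example can be made concrete. A minimal sketch (the helper name `circle_point` is hypothetical): a single parameter φ generates all three cartesian coordinates, so the particle has one dimension despite the three-coordinate description.

```python
import math

# A particle on a circle: three cartesian coordinates, one dimension.
# The single parameter phi determines all three coordinates.
def circle_point(phi, r=1.0, z0=0.0):
    return (r * math.cos(phi), r * math.sin(phi), z0)

x, y, z = circle_point(math.pi / 2)
# One degree of freedom suffices: phi fixes (x, y, z).
print(round(x, 10), round(y, 10), z)  # 0.0 1.0 0.0
```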
dynamics vs kinematics
Just looking at the selection without its source would correspond to kinematics. In order not to split a system where it is most connected, I use dynamic, of Greek origin, meaning force. (We get mired in word play.)
system vs particle vs thread
From their etymology: Variable is something that can change. So it will change. It will evolve. Evolution is the change.
Several variables can be linked together and have a common evolution. E.g. the matter of a physical particle links the x, y, z variables with time.
This can be generalized: several representational variables whose evolution is linked we'll call a particle of the system, in a general sense.
This meaning of particle includes the particles in physics, but also every situation where a selection (time step) has more components linked together to form a part of the (evolution of the) system.
One (dynamic) variable can be specified with several (orthogonal) representational variables. Example: a number and a unit form a quantity. The number is further decomposed into digits.
In computer science and our brain a thread (of thought) represents one time axis.
Representational variables arise from the fact that one physical selection can be decomposed in the brain into several mental selections. The mental selections then need to be synchronized to simulate the physical selection. The over-specification in the brain is reduced to the physical reality by differential equations: the different observable changes dxi, which are different mental times in the brain, are reduced to one physical time.
One time is motivated by a common interaction and common evolution.
Random variable
Random variables are not considered here. They are static variables that follow from summarizing over values of (many) independent (possibly dynamic) variables. The values of random variables are not ordered by time steps.
Mechanics
Let's try to construct a link from selections to mechanics.
d
By d we mean a finite, discrete, but small change, which needs to be compared to something equally small to get a graspable number. Without resorting to infinitesimals, the exterior derivative d produces functions that map a vector (physical quantity) to a number.
Time
What is time?
Since I first came in contact with basic physics like x = vt, I was ill at ease with time as something physical, because it can only be observed via changes.
We will understand t as a count of the changes, i.e. the count of selection steps. Without changes, t does not count.
One Time = One Energy
One selection of a state of a particle is represented by one or more simultaneous selections of coordinates. Physical time is not defined by the coordinates, but by the selection of the state of the particle.
Locally
- time τ just counts the selections: dτ = 1
- the state change dI is just dI = 1.
- and the energy E is also E = 1
When comparing to some global coordinates, E is the inverse of the time step. A constant energy E stands for a homogeneous time, meaning constant time steps dt.
Information counts time steps (= selection steps).
One time line defines an independent particle. The variable is information of which a selection is the time step in which the information is reduced to 0. In other words, a point in time defines one value, i.e. one selection, of a variable. A closed system does not lose energy, but selections go from variable to variable, until all values of all variables have been cycled through. In a closed system the reduction to 0 is simultaneous (within the same time step) with the creation of one or more other variables with same information. Or we have an increase of selection speed, i.e. decrease of time step width.
The cycle time T should be seen as derived: T = I ⁄ E. For a closed system, i.e. without interaction, I, E and T are constant.
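The relation T = I ⁄ E can be illustrated with toy numbers (assumptions, not from the text): a closed system whose cycle contains a fixed count of selection steps at a constant selection rate.

```python
# Toy numbers: a closed system with total information I = 12 selection
# steps per cycle and constant energy E = 3 selections per unit of
# global time. The cycle time T is then derived, not fundamental.
I = 12.0   # total information: selection steps in one cycle
E = 3.0    # energy: selections per unit global time
T = I / E  # derived cycle time
print(T)   # 4.0
```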
One local τ value joins the values of the coordinates (representational variables) of the particle to one point. Time creates a topology.
The memory of more particles creates a topological space. The values of this space are not values of any specific particle.
The simultaneous coordinate changes dxi of a particle x at time step dτ are the momenta ẋi.
Momenta must change in direction to produce a cycle with respect to an underlying topological space in order for the particle to have finite information. Thus an inertial frame, a non-accelerating observation frame, is local, like a tangent space at a point of a manifold. The infinite version is a mental extrapolation and does not physically exist. This also means that time and energy and thus information are local quantities.
Symmetry
If ẋ is constant, it means that it is independent of the x variable. This meaning is implied by Newton's first law.
More generally if a value of a variable does not change with the value of another variable (symmetry), then the variables are independent (Noether).
Reversibility
A system that does not lose information is reversible. Physically, reversibility is also local. A completely reversible system would not interact and thus not lose any information. But such a system could never be observed. Supposing the information of the whole universe is preserved, then, due to its expansion, the density of information decreases. The reason, or consequence, is that subsystems interact, i.e. lose energy.
Since one dτ always has dxi and dẋi in parallel, the element of information is dI = dẋidxi. Locally dI = dτ = dx = dτ × dx ⁄ dτ = 1.
The potential energy V(x) can be chosen. 0 is a good choice. But the freedom can be used fruitfully to summarize interactions when concentrating on a subsystem.
Metric
The simultaneous changes create a metric on the space. Nearby time values mean nearby space values and nearby momenta values. Our physical topological space locally has a euclidean metric δij: dI = dxidẋidt = δjidxidẋj. This makes cartesian coordinates a natural choice for physics. A manifold is locally euclidean. With generalized coordinates the metric is converted to produce a cartesian volume element, in order to keep quantities like energy comparable.
Kinds of Energy
In general systems we will also have to use different kinds of information, if we summarize the detailed selections to macroscopic variables of relevance to the question at hand.
A physical particle approximately moves along cartesian coordinates with uniform speed (Newton's first law), making cartesian coordinates a natural choice for this classical reference frame. Corrections from general relativity are only rarely needed. A topological space which locally resembles a euclidean space is called a manifold.
Physical momentum and energy are defined in cartesian coordinates. When a system is described with other coordinates, one
- first transforms to cartesian
- then uses the cartesian formula
For a particle moving uniformly in a circle, just the angle φ of polar coordinates (φ, r) is a good choice of coordinate. Lagrangian and Hamiltonian are equal, because there is no potential energy.
- We could do L = H = ∬dωdφ = ∫ωdω = ω² ⁄ 2, where ∫dφ = ∫₀¹ωdt = ω − 0
- And momentum would be ∂ωL = ω = φ̇.
But that does not fit the topology of the physical space, which locally has a euclidean metric. Therefore we need to convert to cartesian coordinates first.
- L = H = ∬dpdx = m∫rωrdφ = mr²ω² ⁄ 2
- The linear momentum is p = mẋ = m(ṙφ + rω) = mrω. The momentum along ω is the angular momentum ∂ωL = mr²ω, which is conserved.
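The conversion above can be verified numerically. A sketch with arbitrary toy values for m, r, ω (assumptions, not from the text): uniform circular motion is evaluated in cartesian coordinates, and the cartesian formulas reproduce p = mrω, the kinetic energy mr²ω² ⁄ 2, and the conserved angular momentum mr²ω.

```python
import math

# Uniform circular motion, checked in cartesian coordinates.
# m, r, omega are arbitrary toy values.
m, r, omega = 2.0, 3.0, 0.5

def state(t):
    x = r * math.cos(omega * t)
    y = r * math.sin(omega * t)
    vx = -r * omega * math.sin(omega * t)
    vy = r * omega * math.cos(omega * t)
    return x, y, vx, vy

x, y, vx, vy = state(1.234)

speed = math.hypot(vx, vy)        # |p|/m = r*omega
kinetic = 0.5 * m * speed**2      # = m r^2 omega^2 / 2
angular = m * (x * vy - y * vx)   # L_z = m r^2 omega, conserved

print(round(speed, 10) == round(r * omega, 10))                   # True
print(round(kinetic, 10) == round(m * r**2 * omega**2 / 2, 10))   # True
print(round(angular, 10) == round(m * r**2 * omega, 10))          # True
```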
Via dI = ∂iIdxi the momenta ẋi = ∂I ⁄ ∂xi are linked to the minimal step along xi. With a local ∂I = 1, the momentum is the inverse of the minimal step.
Here a particle is defined by one time, i.e. also one energy. It is not restricted to one physical body, but can include more or many physical bodies. One can be interested only in some of the many coordinates. In this setup potential energy makes representational sense without compromising the idea that time is change and that energy is the inverse of a global time step, i.e. the comparison to some global change.
Let's focus on just one coordinate y = xi and its change ẏ = ẋi. Then we split the energy into
- the energy K(ẏ) = ẏ² ⁄ 2 of the coordinate y, the kinetic energy, and
- the energy of all the other coordinates, the potential energy V. V normally would be a function of all the ẋi, but since we don't look at any (xi, ẋi) ≠ (y, ẏ), we make V a function of y: V(y). So the potential energy accounts for changes we cannot see or decided to neglect.
Because E(ẏ, y) = K(ẏ) + V(y) of the particle is constant, a change of K(ẏ) is compensated by an according opposite change of V(y): ΔK = − ΔV.
Time steps Δt are change steps: ΔK ⁄ Δt = − ΔV ⁄ Δt, i.e. ẏÿ = − V′(y)ẏ, which gives Newton's second law ÿ = − V′(y) = F.
Vice versa, starting from the latter (Newton's second law), one can derive the conservation of energy.
Note, though, that Newton had no concept of kinetic energy.
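The compensation ΔK = −ΔV can be checked numerically. A minimal sketch using free fall with toy values (unit mass, g = 9.8; the helper `energies` is hypothetical):

```python
# Free fall as a check that a change of kinetic energy is compensated
# by an opposite change of potential energy (unit mass).
g = 9.8

def energies(t, y0=100.0, v0=0.0):
    y = y0 + v0 * t - 0.5 * g * t**2
    v = v0 - g * t
    K = 0.5 * v**2      # kinetic energy
    V = g * y           # potential energy
    return K, V

K1, V1 = energies(0.0)
K2, V2 = energies(2.0)

dK, dV = K2 - K1, V2 - V1
print(round(dK + dV, 10))  # 0.0  (Delta K = -Delta V)
```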
Local simultaneity produces causality. Newton's third law: the force F and the change of momentum ÿ mutually cause each other. The selection is mutual and represents a local time dτ, which associates it with an energy E = 1 ⁄ dτ, the cost of the selection. After the selection the system is in a new state, allowing new selections. Selection after selection produces a causal chain: the time evolution.
Interaction better depicts this mutuality. Interaction is a more general concept, but in its elementary manifestation it is synonymous with selection. F forms an interaction consisting of the change of selection speed per Δy. ∫Fdy is the gained selection speed (energy).
More Times = More Energies
Every interaction motivates a new time = new energy.
Lower systems (subsystems) become the particles of higher level systems. There is a hierarchy of systems, like the nodes in an FCA lattice.
Lower and higher refers to dependence and is relative to a specific system.
- Molecules depend on atoms:
- The inner life of the atom is the lower system.
- The interactions between atoms in the molecule form a higher system. The atoms become the particles of the molecule.
- Life on earth depends on the sun. Regarding the evolution of life on earth, the sun is a part(icle):
  - The sun is the subsystem.
  - Its interactions with life on earth form the higher system.
Lower level systems need to live longer to allow higher level systems to form.
The lower system has two kinds of energies:
- Internal energy of the system.
- A component of energy that quantifies the interaction of the subsystem as a particle within the higher system.
The interaction is also called interface with the subsystem or role of the subsystem.
The hierarchy might be built on a common space. The system hierarchy is the topology making the common space to a topological space. All particles have coordinates from this topological space, but can have additional coordinates, too.
The states of a particle are only a (small) subset of the common space. The common space is not infinite, other than as an extrapolation of the mind.
The whole hierarchy is limited downward as well as upward, because physically there can only be finite information. This means that
- there exists an information quantum and
- there are only a finite number of quanta in the whole system.
In physics the Planck constant ħ is an information quantum.
Global Time
A common global time t is a coordinate of the common space. It is a mental tool to describe the interactions in the system hierarchy. It does not physically exist as a specific change of the system.
The global time
- must have a higher frequency than all internal subsystem times
- will inherently have an error
- must be regularly synchronized with each subsystem through a common event, just like normal clocks
Our everyday global time
- used to be derived from the earth's rotation
- now it comes from an atomic clock using the Cs atom
- and soon it will come from a nuclear clock
The inner time tick of a particle is represented by the inner energy of the particle.
The proper times are independent and need to be transformed to each other. For physics, the Lorentz transformation is used.
When transforming from a global time t to a local time:
- Changes that seem simultaneous globally need not actually be simultaneous, i.e. belong to the same step of the local clock of a particle.
- Changes that globally happen one after another need not represent subsequent steps of the local clock of a particle. The time could be reversed locally with respect to a global time order.
Transformation from a local particle time (= causality) to a global time keeps causality of a particle intact, though. This means that no particle can move faster than the speed of light.
More dimensions of a common space allow for different ways a subsystem can cycle through its values. This gives the subsystem an additional coordinate: spin.
An interaction can be expressed as a field. A field stands for additional variables that the time of the particle fixes to more or less defined values, as it does with space values. The sharing of a common space is already a kind of interaction between particles, and can be described by a field, too. Different interactions mean different fields.
If there is interaction between the levels of the hierarchy, this can be described with a generalization of thermodynamics. The lower system's inner life can be quite secluded, though. The energy normally considered is about what's going on in the current level only. So we don't include mc² of the raw material used by a power plant, but only the part that interacts, like the falling water instead of all the water's mc².
Mass=Energy
A mass m can be seen as hidden selections synchronized with an observable selection step dτ = 1.
Inner selections of lower particles count as mass for the higher particle and slow things down. Every level has its own speed limit for interactions.
A common topological space can have effects similar to the Lorentz transformation in the physical spacetime manifold, if there is an energy limit c (=information speed=minimal δτ).
There are no hidden selections on the lowest level. c is values per time and we assume it constant. mc are the selections in dt = 1. The information step is values × selections. Energy is information per time: E = mc × c = mc².
The components of the selections, the momenta, are not constant.
If the low system moves as a particle in the higher system, the low level selections need to decrease to keep m²c² constant. From the high level system this can be seen as a mass increase of the particle by γ: m → γm.
For a particle moving with c = 1: 1 ⁄ ds = m = E = 1 ⁄ dt. Time resolution is equal to space resolution, because all that t counts are s steps. A particle that can move with c has no hidden selections, no rest mass, and is therefore a lowest level (fundamental) particle. For slower particles the relativistic momentum (E ⁄ c, p) = (1 ⁄ cdt, 1 ⁄ ds) has length equal to the rest mass mc² = m, the hidden selections.
From the higher level the dt steps appear smaller and the ds steps larger, which contravariantly (in global units) means time dilation and length contraction.
Planck Constant and Units
Planck discovered that action is quantized when describing black body radiation. Having found such basic constants of nature, he used them for natural units. The unit of information in Planck units is h ⁄ 2π = ħ = 1; energy becomes mass or the inverse of time, E = mc² = ħω → m = 2π ⁄ dt, and momentum the inverse of space, p = ħk → p = 2π ⁄ dx.
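A numeric illustration of E = ħω and of energy as the inverse of a time step (the light frequency is an arbitrary example value; ħ is the rounded CODATA value):

```python
import math

# Planck relation E = hbar*omega, evaluated in SI units.
hbar = 1.054571817e-34        # J*s: the information quantum

omega = 2 * math.pi * 5e14    # angular frequency of ~500 THz light
E = hbar * omega              # E = hbar*omega

# With hbar = 1 (natural units), energy is the inverse of the time step:
# one cycle of duration dt carries E = 2*pi/dt.
dt = 2 * math.pi / omega
print(round(E * dt / hbar, 10))  # 6.2831853072 (= 2*pi)
```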
Lagrangian and Action
How do we know that two changes are simultaneous, that is, due to one selection? We need to start by assuming that every somehow atomic change at a point in time is independent. We should not be biased and rather assume we know nothing yet. This is the principle of maximum entropy or, better, of maximum information. So we first think of every coordinate as an independent dynamic variable.
L(xi, ẋi) shall count changes as independent if we don't know better. Depending on what changes, L can depend on
- one or more variables xi and
- changes of different order thereof (ẋi, ẍi, ...)
Known independent changes must be multiplied, e.g. dxi and dẋi. Simultaneous changes are counted separately.
With a direct global time dependence of L, external influences are synchronized into the system. But we assume no global t dependence.
With yk = (x1, ẋ1, x2, ẋ2, ...) we write the change of L as dL = Lkdyk.
At one point in time:
Lkdyk = ∂L ⁄ ∂ẋidẋi counts the visible information per time step.
∂L ⁄ ∂ẋi are called momenta pi.
This change sums to kinetic energy: K = dI ⁄ dt = d∬mdxdẋ ⁄ dt = ∫mẋdẋ = mẋ² ⁄ 2. The squaring comes from seeing x and ẋ as changing independently.
Lkdyk = ∂L ⁄ ∂xidxi accounts for interactions with particles taken out of direct calculations by associating L with x instead (see above). But considering that information travels with c, the result of the interaction is detached from those particles by time and space. So one counts the information received at x.
∂L ⁄ ∂xi are called forces Fi, force particles in quantum mechanics.
Accumulated interaction per x or force F is work: W = ∫Fdx.
Finally, accumulation over time gives the total number of changes, called action.
Calculus of variations is used to get rid of non-system counting. Minimizing S = ∫Ldt is called principle of least action: The values of a particle evolve along a line of least action. Demanding this to be minimal means that symmetry breaking should be minimal, which means that the actual information in the particle should be minimal.
Varying the trajectory δx with fixed endpoints and demanding δS ⁄ δx = 0 produces the second order Euler-Lagrange equation: ∂L ⁄ ∂x − (d ⁄ dt)(∂L ⁄ ∂ẋ) = 0.
With force F = ∂xL and momentum p = ∂ẋL, the Euler-Lagrange equation is a generalization of Newton's second law F = ṗ.
Euler-Lagrange without external time.
ẋ is a selection of x, and if information is conserved then δx = δẋ. So we have ∂xL − ∂ẋL = 0. But since ẋ is compared to a global time (quotient dx ⁄ dt), we have to undo this when comparing inside the system. That is the reason for the (d ⁄ dt)(∂L ⁄ ∂ẋ) instead of ∂ẋL.
A particle's evolution along this trajectory is called on-shell. Due to an inherent synchronization error between particles, off-shell paths are also possible (virtual particle).
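The principle of least action can be sketched numerically. A toy check (helper name `action` and the perturbation are assumptions, not from the text): for a free particle with L = ẋ² ⁄ 2, the discrete action of the straight on-shell path between fixed endpoints is smaller than that of any off-shell variation.

```python
import math

# Discrete action S = sum L*dt for a free particle, L = (1/2) m xdot^2.
def action(path, dt=0.01, m=1.0):
    return sum(0.5 * m * ((b - a) / dt) ** 2 * dt
               for a, b in zip(path, path[1:]))

n = 100
straight = [i / n for i in range(n + 1)]                        # x(t) = t, on-shell
wiggled = [x + 0.05 * math.sin(math.pi * x) for x in straight]  # same endpoints, off-shell

print(action(straight) < action(wiggled))  # True
```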
To create the action with the Lagrangian, we have maximized our information about the system, i.e. we have tried to make no assumptions (maximum entropy). And through the calculus of variations we have minimized the information in the system (principle of least action).
Generalized Coordinates
How things change can sometimes be complicated, because of constraints imposed on variables. One can get rid of the constraints in these two equivalent ways
- by adding new variables (lagrange multipliers)
- by transforming to new variables (generalized coordinates).
The new variables in the latter case are often named differently:
- q instead of x
- p instead of mẋ.
- F = ∂L ⁄ ∂q is the generalized force component accounting for interactions associated with q.
- p = ∂L ⁄ ∂q̇ is the momentum component accounting for the change along q̇.
- ...
p = (∂L)/(∂q̇) is the conjugate momentum. See also canonical transformations.
The principle of least action demands a minimal finite number of total changes.
t counts forever, but the number of values of the system must be finite. Thus the values must repeat over time. Or the particle dies before it comes to a repetition.
Every cycle produces the same number of values for each coordinate. The values describe a closed curve in the phase space parametrized with t. If F = 0 in the Euler-Lagrange equation F − ṗ = 0, then q is cyclic by itself. So either there is interaction or q itself is cyclic. Suitably chosen generalized coordinates can make F = ∂L ⁄ ∂q = ṗ = 0. This codes the interaction in the curvature.
Hamiltonian
Since we assume ∂L ⁄ ∂t = 0, by the first integral of the Euler-Lagrange equation, the Beltrami identity, we can introduce a constant H (the Hamiltonian), which is the energy: H = q̇ ∂L ⁄ ∂q̇ − L.
This relation between Lagrangian and Hamiltonian is a Legendre transformation.
The first order Hamilton equations q̇ = ∂H ⁄ ∂p and ṗ = − ∂H ⁄ ∂q are equivalent to the second order Euler-Lagrange equation.
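The first order form lends itself directly to simulation. A sketch (toy harmonic oscillator H = (p² + q²) ⁄ 2, semi-implicit Euler stepping as an assumption of mine): integrating Hamilton's equations keeps H nearly constant.

```python
# Integrating Hamilton's equations qdot = dH/dp, pdot = -dH/dq for a
# unit harmonic oscillator H = (p^2 + q^2)/2. The semi-implicit
# (symplectic) Euler step keeps the energy nearly constant.
def evolve(q, p, dt=1e-3, steps=10_000):
    for _ in range(steps):
        p -= q * dt          # pdot = -dH/dq = -q
        q += p * dt          # qdot =  dH/dp =  p
    return q, p

q0, p0 = 1.0, 0.0
q, p = evolve(q0, p0)

H0 = (p0**2 + q0**2) / 2
H = (p**2 + q**2) / 2
print(abs(H - H0) < 1e-3)  # True: energy stays (nearly) constant
```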
From dS = (∂S ⁄ ∂q)dq + (∂S ⁄ ∂t)dt = Ldt
follow the Hamilton-Jacobi equations ∂S ⁄ ∂q = p and ∂S ⁄ ∂t = − H.
- Of course, the global derivative of S gives the Lagrangian L. According to our interpretation of L, it counts changes separately.
- The partial derivative of S with time gives − H: The Hamiltonian counts simultaneous changes as 1 change.
The Hamilton equations give the corresponding simultaneous conjugate change. Force − ∂H ⁄ ∂q is simultaneous with its result, the change of momentum (impulse dp = Fdt). The Hamiltonian H does not accumulate work W = ∫∂qLdq = ∫Fdq separately, but L does. If F is conservative, one can associate a potential energy with each q: V(q) = − W(q) = − ∫Fdq.
I = ∫ − ∂S ⁄ ∂tdt = ∫Hdt = H∫dt = HT gives the total number of steps (= time steps), i.e. the total information of the system.
If the total information is fix, then a smaller time period means higher energy.
A constant H is associated with every phase point H(q, p) at all times. H(q, p) does not change along the trajectory; the Hamiltonian flow XH = (∂H ⁄ ∂p, − ∂H ⁄ ∂q) does. Time is replaced by H, as any field F can be time-derived by forming the Poisson bracket: Ḟ = {F, H}.
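The bracket can be evaluated numerically with finite differences. A sketch (the function `poisson` and the sample point are my assumptions): for the harmonic oscillator H = (p² + q²) ⁄ 2 and the field F = q, the bracket {q, H} = ∂H ⁄ ∂p reproduces q̇ = p.

```python
# Time derivative via the Poisson bracket:
# {F, H} = dF/dq * dH/dp - dF/dp * dH/dq, by central differences.
def poisson(F, H, q, p, h=1e-6):
    dFq = (F(q + h, p) - F(q - h, p)) / (2 * h)
    dFp = (F(q, p + h) - F(q, p - h)) / (2 * h)
    dHq = (H(q + h, p) - H(q - h, p)) / (2 * h)
    dHp = (H(q, p + h) - H(q, p - h)) / (2 * h)
    return dFq * dHp - dFp * dHq

H = lambda q, p: (p**2 + q**2) / 2
F = lambda q, p: q

# {q, H} = dH/dp = p: the bracket reproduces qdot.
print(round(poisson(F, H, q=0.3, p=0.7), 6))  # 0.7
```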
A more intuitive description is with the two-form ω = dqi∧dpi as information element. ω(⋅, XH) becomes a one-form that produces the total derivative along the trajectory of time (the time derivative). The total derivative is the inverse of the integral that summed up the time steps = the information steps.
Structural Evolution
Maximum entropy and principle of least action can be used for general dynamic systems like economy or biological evolution or the brain.
It is not only a method to describe a system, it is also how a system evolves in its complexity.
Information is added: Variables and/or values are created (creative phase, supply)
Added information is used to
increase the number of values per variable
- increase range, variability
- increase redundancy, parallelizing: more copies of a stable subsystem (= particle) with independent further evolution
increase the number of variables by
- increasing interaction types:
  - between smaller subsystems instead of big systems
  - synthesizing, by combining values of existing variables
Information is removed: Variables and/or values are reduced again (selective phase, competition).
Here the opposite happens:
the number of values per variable is reduced
- reduced range, variability
- reduced redundancy: fewer copies of subsystems
the number of variables is reduced by
- reducing interaction types:
  - encapsulation (maximal cohesion, minimal coupling): selecting big units needs less information than selecting many little units
  - abstraction of reusable variables, orthogonally, in order to need less interaction
Considering I a generalized momentum, the action angle w = νt + β, with cycle frequency ν = 2π ⁄ T, becomes the generalized position replacing time t, as that changes from system to system. The action angle w corresponds to the phase in a circular motion. Action and angle variables (I, w) are the canonical variables.
Adding and then removing energy from a system is also called annealing. It can also be described as oscillation of energy between the system and its environment.
The cycles in canonical variables are related to observable variables qk via Fourier series.
Adding information I (action) means an inward energy flow that more or less keeps the existing cycles of subsystems. It rather adds values or produces new variables: TΔS (see Action vs Entropy).
Constant energy keeps a system in a constant cycle, up to quantization resolution. Through annealing, system and subsystems become more complex. Annealing creates new environments for subsystems.
The amount of energy in a system is independent from goal or no-goal. A goal for a system is ultimately linked to survival for the sake of survival, which is controlled by the environment. Those systems that did not survive do not exist any more. What is system and what is environment is relative. One can swap the names.
An individual change is random, but with selection mechanisms in place, subsystems change in a more or less sophisticated or intelligent way, because that is how they were selected by the environment to still exist.
Through selections, over time, subsystems change. Basically they become new subsystems. In quantum mechanics this is described with creation and annihilation of particles, i.e. of subsystems of the system.
After a change the role in the higher level
- can be taken over by the new particle or
- can be taken over by another particle of the same kind, which means the change was de-selected
A de-selection is for a specific interaction, which does not necessarily mean that the changed subsystem ceases to exist for other interactions. The subsystem can continue to interact in another context.
The system change is evolutionary if the changes are due to changes of subsystems that keep the system alive. For this, either
- the subsystem change keeps the role more or less functional
- or the system has alternatives
evolution vs revolution
If a system dies, this is a revolutionary (= destructive) change. The subsystems continue to exist and will form a new system.
For a mutual dependence, what is the subsystem is a question of perspective:
- A company is a subsystem of a person's life.
- The person is a subsystem seen from the company.
Consequently a revolutionary change on one side can be seen as an evolutionary step from the other side. E.g. the French Revolution destroyed the social order of the time. But from the people's perspective it was an evolutionary step. Revolution is also an evolution.
Normally several annealings (oscillations) exist in parallel. These oscillations can be sorted by their period. The longer periods constitute a stable structure upon which higher frequency oscillations can build.
An interacting (non-closed) system has energy
- input
- output
- storage = input + output (one of them negative)
With constant input and output, removing energy from one subsystem means providing it for another subsystem.
The amount of energy exchanged within one period of an oscillation must be within a more or less narrow range, depending on the available internal storage.
- Adding too much energy per time may destroy the system instead of supporting a creative phase.
- Removing too much energy per time may also destroy the system instead of producing optimizations.
The acceptable range can be considerably widened by the storing capability of the system.
Only a fluctuating energy supply will make systems able to survive such fluctuations. The system will have adapted its storage. It will have acquired knowledge about its environment. It can predict or simulate the environment.
The input, output and storage are per interaction, i.e. per oscillation.
For an animal metabolism
- inputs: different kinds of food, oxygen
- outputs: faeces, moving, temperature
- storage: glycogen, fat
Due to the storage the energy input and output don't need to be simultaneous but can alternate within periods of limited range.
Balance between input and output keeps a system in a cycle, as if a closed system.
To produce structural evolution of a system, as opposed to just making it cycle faster, subsystems need to die and be replaced by others, which we also named revolutionary changes. Therefore the energy supply or starvation needs to regularly exceed the limits of survival for some subsystems, to produce structural changes.
Evolution is trial-and-error or better: trial and non-error or trial-and-testing. Another name for this is simply searching. Annealing is a random search algorithm for the most stable, i.e. long living, subsystems.
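Annealing as a random search can be sketched directly. A minimal simulated-annealing toy (function name, cost function, and schedule are my assumptions, not from the text): energy is added (high temperature accepts worse states, the creative phase) and then slowly removed (the selective phase), until the search settles in a stable minimum.

```python
import math, random

# Annealing as a random search for a stable (long-lived) minimum,
# on the toy cost function f(x) = (x - 3)^2.
def anneal(f, x=0.0, T=5.0, cooling=0.995, steps=5000, seed=42):
    rng = random.Random(seed)
    for _ in range(steps):
        candidate = x + rng.uniform(-1, 1)       # creative phase: try a change
        dE = f(candidate) - f(x)
        # selective phase: accept improvements, sometimes accept worse
        if dE < 0 or rng.random() < math.exp(-dE / T):
            x = candidate
        T *= cooling                             # remove energy slowly
    return x

best = anneal(lambda x: (x - 3.0) ** 2)
print(abs(best - 3.0) < 0.5)  # True: the search settles near the minimum
```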
A stable minimum
- is long living thus can become
- foundation for higher level structures.
The physical world in this way has produced and still produces gradually more complex, hierarchical structures:
- asteroids, planets, star systems, galaxies, ...
- particle, atom, molecule, ...
- prokaryote, eukaryote,...
- hand tools, machines, computers,...
- simple concepts, complex concepts
- ...
Model vs. real system
The variables involved are embedded in a structure, whose evolution one wants to describe. A description is a mapping (a model) of the actual evolution, but simplifications are needed and follow their own mental evolution.
When all the dynamics of a gas of volume V with n·N_A molecules is summarized by two intensive variables, temperature T and pressure p, this is not only a process of the mind. Nature, too, makes decisions (= selections) based on such variables. For instance the pressure can make a wall burst.
Sometimes nature uses variables of as little information as one bit, i.e. yes/no. Situations where very small variables count are normally unstable balances.
With models one often chooses to leave out certain information, because it is not of importance, not interacting anyway in a situation to describe.
Nature also has information that is hidden, not involved in an interaction, internal, like the energy contained in mass.
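The gas example can be made concrete with a small sketch (the molecule mass and the toy values for N and V are my assumptions): the macroscopic variables T and p summarize a sea of microscopic velocities.

```python
import random, math

random.seed(2)
k = 1.380649e-23                    # Boltzmann constant, J/K
m = 28 * 1.66053906660e-27          # mass of an N2 molecule, kg (assumed)
T_true, N, V = 300.0, 100_000, 1e-3 # K, molecules, m^3 (toy numbers)

sigma = math.sqrt(k * T_true / m)   # per-component velocity std deviation
v2 = [sum(random.gauss(0, sigma) ** 2 for _ in range(3)) for _ in range(N)]
mean_v2 = sum(v2) / N               # <v^2>, the only microscopic summary kept

T = m * mean_v2 / (3 * k)           # temperature from <v^2>
p = N * m * mean_v2 / (3 * V)       # kinetic-theory pressure, equals N k T / V
print(round(T, 1), p)
```

All the discarded microscopic detail (individual positions, directions, correlations) is exactly the "left out" information the paragraph above describes.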
Action vs Entropy
On a repeated event, the number of occurrences O_i of value i is additional information which, by normalization to 1, makes up a probability p_i = O_i ⁄ ∑_j O_j. The information reduces to I = −∑_i p_i log p_i, also called entropy S.
In the same way a probability p_i can be associated with the values of a column in a multi-column table (relation): the number of table entries with that value divided by the total number of entries in the table. In a general context we can think of the table as comprising all the selection steps of an interaction. The entropy summarizes the table entries relevant for the specific interaction.
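Both constructions can be sketched in a few lines (a minimal illustration; the function and variable names are mine):

```python
import math
from collections import Counter

def entropy(occurrences):
    """I = -sum p_i log p_i from raw occurrence counts O_i,
    with p_i = O_i / sum_j O_j (natural log, i.e. in nats)."""
    total = sum(occurrences)
    return sum(-(o / total) * math.log(o / total)
               for o in occurrences if o)

# Occurrences of each value of a repeated variable:
h_fair = entropy([5, 5])    # two equally likely values: log 2 nats
h_cert = entropy([10, 0])   # one value always selected: no information

# Same for one column of a multi-column table (relation):
table = [("a", 1), ("a", 2), ("b", 1), ("a", 1)]
counts = Counter(row[0] for row in table)  # entries per value of column 0
h_col = entropy(counts.values())
print(h_fair, h_cert, h_col)
```

The column version only counts how often each value occurs; everything else in the table is ignored, which is exactly the "summarizing" described above.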
In thermodynamics the partition function summarizes based on kinetic energy. T results from the kinetic energy of the particles via T = mẋ² ⁄ (2k), based on the most probable thermal velocity.
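For concreteness, the relation between T and the most probable speed can be checked numerically (taking an N2 molecule as an assumed example):

```python
import math

k = 1.380649e-23            # Boltzmann constant, J/K
m = 28 * 1.66053906660e-27  # mass of an N2 molecule, kg (assumed example)

T = 300.0
v_p = math.sqrt(2 * k * T / m)  # most probable thermal speed, m/s
T_back = m * v_p ** 2 / (2 * k) # recover T from the speed, as in the text
print(round(v_p), round(T_back))
```

The round trip speed → temperature → speed is exact, since T = mẋ² ⁄ (2k) is just the inverted definition of v_p.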
In E = TS, the temperature T is kept separate because of its practical importance: a temperature difference ΔT decides whether energy flows or not. T = ∂E ⁄ ∂S is a generalized force corresponding to F = −∂V ⁄ ∂x from mechanics.
Entropy is S = k log Ω, where Ω counts all microstates, each equally probable: p_i = 1 ⁄ Ω.
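The equiprobable case can be checked directly: for p_i = 1 ⁄ Ω the general sum −∑ p_i log p_i collapses to log Ω (a small numerical sanity check, in units of k):

```python
import math

Omega = 16                  # number of equally probable microstates
p = [1 / Omega] * Omega     # p_i = 1 / Omega for every microstate
S_over_k = sum(-pi * math.log(pi) for pi in p)
print(S_over_k, math.log(Omega))
```

So the Boltzmann form S = k log Ω is the special case of the Shannon form for a uniform distribution.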
So E = TS = kT log Ω.
Here E is in units of the kinetic energy of the particles and S in units of k. With another unit the number of microstates would come out differently. This is like using bits or bytes. So one could replace S ⁄ k with A ⁄ ħ. The latter represents the actual resolution of nature, while the former maps temperature to the particles' kinetic energy, which is of macroscopic nature.
In thermodynamics S ⁄ k is the information of relevance, because the rest does not take part in the interaction. In that sense entropy S is the action of thermodynamics.
The unit of action is [A] = Js, which we said is information. Entropy's unit [S] = J ⁄ K is also information, but at a specific encapsulation level, namely that between molecules/atoms. For every selection layer we need to introduce different units for information.
- Equilibrium thermodynamics summarizes the microscopic selections, i.e. time, via temperature or its inverse β. Seeing β as a "new time" we obtain the correspondence between entropy and action.
- The Boltzmann equation has time in the p's. The H in the H-theorem can be seen as action via dr dp = dr dt (dp ⁄ dt) = F dr dt = T dt, with T here the kinetic energy gained by the force (work–energy theorem). H is related to S in equilibrium.
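The β-as-time correspondence is usually stated as the standard Wick rotation (added here for concreteness; it matches the replacement of S ⁄ k by A ⁄ ħ discussed above):

```latex
e^{-\beta H} \;\leftrightarrow\; e^{-\frac{i}{\hbar} H t}
\qquad \text{under} \qquad t \to -i\hbar\beta,
\qquad \text{so that} \qquad \frac{S}{k} \leftrightarrow \frac{A}{\hbar}.
```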
A relation between action and entropy is given in these papers:
Maxwell Demon
The Landauer limit kT ln 2 for the energy consumption of a selection from a variable of 2 elements (a bit) assumes processing via atoms/molecules. It does not apply when comparing different layers.
A Maxwell demon consumes as much energy as it produces if it processes on the same layer. But if the processing happens on a lower layer, it can produce energy. This is how dynamic structures form.
- The biological cell is a Maxwell demon, letting high energy molecules in and low energy molecules out.
- Animals and plants are biological Maxwell demons of a higher biological layer, letting good food in and excreting waste.
- An entrepreneur is an economic Maxwell demon, letting good workers in and firing bad ones.
- ...
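The sorting that all these demons perform can be sketched abstractly (a toy in arbitrary units; it only illustrates the selection, not the energy bookkeeping of the lower layer doing the work):

```python
import random

random.seed(3)

def demon_sort(speeds, threshold):
    """Toy Maxwell demon: route fast molecules into chamber A and slow
    ones into chamber B, so the mean kinetic energy (temperature) of
    the two chambers ends up different."""
    fast = [v for v in speeds if v >= threshold]
    slow = [v for v in speeds if v < threshold]
    mean_energy = lambda vs: sum(v * v for v in vs) / (2 * len(vs))
    return mean_energy(fast), mean_energy(slow)

speeds = [abs(random.gauss(0, 1)) for _ in range(10_000)]
hot, cold = demon_sort(speeds, threshold=1.0)
print(hot, cold)
```

The selection itself creates the hot/cold difference; whether that is a net gain depends on which layer pays for the demon's decisions, as the text argues.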
But one can have a Landauer limit for every layer. The quantum Landauer limit −kT tr(ρ ln ρ) depends on ρ and has different units than the classical-mechanics Landauer limit kT ln 2. Also T = ∂E ⁄ ∂S is of a different unit in quantum mechanics than in higher-layer systems. The lowest limit is an absolute limit, though.
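The ρ-dependence can be illustrated with the von Neumann form (a sketch in units of k; since the trace depends only on the eigenvalues of ρ, a diagonal ρ suffices):

```python
import math

def von_neumann_entropy(eigenvalues):
    """-tr(rho ln rho) in units of k, computed from the eigenvalues
    of the density matrix rho (the trace depends only on them)."""
    return sum(-p * math.log(p) for p in eigenvalues if p > 0)

mixed = von_neumann_entropy([0.5, 0.5])   # maximally mixed bit
pure = von_neumann_entropy([1.0, 0.0])    # pure state: zero entropy
biased = von_neumann_entropy([0.9, 0.1])  # depends on rho, as stated above
print(mixed, pure, biased)
```

For the maximally mixed one-bit state the quantum expression reduces to ln 2, recovering the classical limit kT ln 2; for any other ρ it differs, which is the layer dependence the paragraph describes.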