So far I’ve blogged only about the finite variable and a topological structure (FCA) on it:
- a variable is a set of exclusive values, created by a selection process
- variables live in a lattice of set containment (topology, FCA)
- an extensive value is a variable by itself
The last kind I explore in a separate blog post on measures.
Here I want to ponder infinity: how it can be integrated into the current understanding, and what it is good for.
What is infinity?
Finite variables have finite information. Do infinite variables have infinite information? But that would mean that even a computer taking up the whole universe would not suffice to store the values of an infinite variable or to select a value from it. Infinity in this sense does not exist, and therefore one can also not make any statements or draw conclusions that involve infinity.
The universe can be regarded as a parallel computer consisting of a myriad of, but still finitely many, selection processes happening in parallel. Each of these selections is from a finite variable.
We, as organisms that survived by evolving adaptability through a brain able to map and simulate the world, need to be able to generate internal variables that can be mapped to the real ones.
Paradigm change to algorithms: generated variables
Instead of regarding actual variables with a finite number of values, we now start to look at algorithms that generate variables. E.g. \(ℕ\) is generated by looping \(+1\) (\(=(+1)^*\)). With this, the size of a variable (space complexity) can be moved to the time domain (time complexity). This is not only a feature of the information-processing brain but also of the world itself (\(ΔEΔt≥h\)) (action, \(h\)).
A generated variable is versatile. It can be generated to the appropriate size, whatever the physical variable asks for.
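As a minimal sketch (Python; the function name is my choice), a generated variable can be modelled as a generator that is run exactly to the size the physical variable asks for:

```python
from itertools import islice

def naturals():
    """Generate N by looping +1, i.e. (+1)*: an open-ended loop."""
    n = 0
    while True:
        yield n
        n += 1   # the (+1) step

# Space complexity is moved to the time domain: only the values
# actually generated ever exist in memory.
print(list(islice(naturals(), 5)))   # [0, 1, 2, 3, 4]
```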
Infinity is a construct of the mind and not of physical reality. Nothing in reality is infinite, neither time nor space nor anything else. Infinity is not applicable to real variables, but only to the generation of variables, i.e. to their generation algorithm. The infinity that is meant in mathematics is a loop in an algorithm that is ended when the wanted magnitude or precision is reached.
Our numbers are such a generated variable: an algorithm to create a multitude mappable to all kinds of physical variables. They are infinite, but this only means that the only limitation is the available time or space:
- numbers of \(ℕ\), generated by \((+1)*\), can be selected (written down) only if not too large
- additionally, a number in the continuum, the real numbers \(ℝ\), can be selected only with limited precision.
Infinity
Infinity is a practical notation (for the deferred decision about the stop time) of an algorithm with at least one loop.
The continuum \(ℝ\) is not only generated by an algorithm but also consists of algorithms. Operations on extensive physical values (quantities) are mapped to operations on the numbers and then made part of the numbers, to form reusable algebraic structures like groups (addition) and fields (addition and multiplication):
- Addition: adds to the size of an extensive value
- Multiplication: independent extensive values are variables that can be selected from independently. One can form the Cartesian product; multiplication gives the size of \(A×B\) (see the sketch below).
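A toy illustration of the last point in Python (the values are arbitrary):

```python
from itertools import product

A = range(3)   # a variable with 3 values
B = range(4)   # an independent variable with 4 values

AxB = list(product(A, B))           # the Cartesian product: all joint selections
print(len(AxB) == len(A) * len(B))  # True: multiplication gives the size of A×B
```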
So numbers are algorithms, and it turns out that some of them do not have finite time complexity. For example, \(√2\), the diagonal of a unit square, is an algorithm that involves an infinite loop (open loop). That the algorithm never ends is synonymous with: \(√2\) does not exist in \(ℚ\). But by including such algorithms we make the completion of \(ℚ\), which is then called the real numbers \(ℝ\): \(ℝ=ℚ∪𝕁\). The irrational numbers \(𝕁\) and the rational numbers \(ℚ\) are dense in \(ℝ\).
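A minimal sketch of \(√2\) as an algorithm (Python; the Newton iteration is my choice of method): the loop in principle never ends and is aborted when the wanted precision is reached:

```python
def sqrt2(precision=1e-12):
    """Newton iteration for x^2 = 2: an in-principle endless loop,
    aborted once the wanted precision is reached."""
    x = 1.0
    while abs(x * x - 2) > precision:   # the open loop
        x = (x + 2 / x) / 2             # Newton step
    return x                            # still a rational approximation, not sqrt(2)

print(sqrt2())   # 1.414213562373095...
```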
Why include infinite loops?
In principle they never end and thus need to be aborted, and then we are still in \(ℚ\), i.e. the limit points practically don’t exist. We can get out of this dead end by not talking any more about the limit points but rather about the algorithm: \(√2\) as an algorithm is different from \(1.4142135623730951\).
By including the algorithms, the statements given by the algebraic structures become more general. We have fewer limitations when making new calculations (closure).
Because the limit points can never actually be reached, i.e. do not exist, one must be careful about calculating with “algorithms”, i.e. in \(ℝ\), when dealing with \(0\) and \(∞\). \(0/0\) can be any number, but it is still the foundation of calculus: it matters how one approaches \(0\). L’Hôpital’s rule, and asymptotics in general, then helps to find the limit. The same is true for \(∞\) (the infinitely large): \(∞/∞\) can be any number as well. By a linear definition \(0 = \lim_{n→0} n\) and \(∞ = \lim_{n→∞} n\) one could write \(∞/∞=1\), \(0/0=1\) and \((1+1/∞)^∞ = e\). Because \(1/∞\) is approaching \(0\) slower than linearly, we have \(0\cdot∞=0\), \(∞+1=∞\), and \(3∞=(3+0)∞=3∞\). But to acknowledge L’Hôpital’s rule we would need \(3∞ ≠ ∞\) and \(3\cdot 0≠0\), which is not consistent with \(ℚ\); thus \(0/0\) and \(∞/∞\) are not defined in \(ℝ\).
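A small check (Python with sympy; the particular expressions are my examples) that the value of a \(0/0\) or \(∞/∞\) form depends on how the limit is approached:

```python
from sympy import symbols, limit, oo, sin

x, n = symbols('x n', positive=True)

# 0/0 forms: the result depends on the approach
print(limit(sin(x) / x, x, 0))    # 1
print(limit(x**2 / x, x, 0))      # 0
print(limit(x / x**2, x, 0))      # oo

# oo/oo forms behave the same way
print(limit(3*n / n, n, oo))      # 3

# the classic open loop for e
print(limit((1 + 1/n)**n, n, oo)) # E
```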
\(ℚ\) vs \(ℝ\)
In \(ℚ=\{a/b|a,b∈ℤ,b≠0\}\), even if one allows \(a_i∈ℤ\) and \(b_i∈ℤ\) to go to \(±∞\) in any possible way, also nonlinearly, i.e. even if one admits sequences of fractions in this way, one still defines only a subset of \(ℝ\). The reason is not so much proofs like
\(√2=a/b\) (with \(a/b\) in lowest terms) \(⇒ 2=a^2/b^2 ⇒ a^2\) even \(⇒ a\) even \(⇒ b\) even \(⇒\) contradiction \(⇒ √2∉ℚ\)
which hinges on
if \(a^2\) is even then \(a\) is even
and it is questionable whether that step can be taken if we can’t reach \(∞\). Remember: \(∞+1=∞\).
It is rather the comprehensive definition of \(ℝ\), which comprises, as one equivalence class, all thinkable sequences that approach the same number.
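To make the sequence idea concrete, here is a sketch in Python (using the classical Pell recurrence as my illustration) of rationals \(a_i/b_i\) with \(a_i,b_i→∞\) approaching \(√2\):

```python
from fractions import Fraction

# Pell recurrence: (a, b) -> (a + 2b, a + b) gives a/b -> sqrt(2),
# with numerator and denominator growing without bound.
a, b = 1, 1
for _ in range(6):
    a, b = a + 2 * b, a + b
    print(Fraction(a, b), "=", float(Fraction(a, b)))
# 3/2, 7/5, 17/12, 41/29, 99/70, 239/169 -> 1.41421...
```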
Infinity and Information
The information of a variable is basically the number of values (in bits: the logarithm of that number). If the values of the variable are generated by an algorithm, then one moves the complexity (the size, the information) to the number of time steps needed to generate the values.
Kolmogorov complexity looks only at the length of the algorithm and neglects the time. In the more general descriptive complexity theory, though, time is taken into account via complexity classes.
The information of an infinite variable in bits is always infinite, but one can further classify such variables via their algorithmic complexity, i.e. via the number of nested endless loops.
- \(ℕ\) has one loop and so do \(ℤ\) and \(ℚ\), because they have a bijection to \(ℕ\) (see the sketch after this list).
- \(ℝ\) has two nested loops: it is infinitely dense-in-itself and unbounded. One can think of writing a real number with infinitely many, but countably many, binary digits, and thus conclude that the cardinality of the real numbers is \(2^ℕ\). Every ever so small interval in \(ℝ\) is also of the same size. See also the continuum hypothesis.
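A minimal sketch of the single-loop claim (Python; the helper names are mine): \(ℤ\) and \(ℚ\) can each be enumerated by one open-ended loop over \(ℕ\):

```python
from fractions import Fraction
from itertools import count, islice
from math import gcd

def integers():
    """Enumerate Z with one loop over N: 0, 1, -1, 2, -2, ..."""
    yield 0
    for n in count(1):
        yield n
        yield -n

def rationals():
    """Enumerate Q Cantor-style: one endless loop over s = |numerator| + denominator;
    the inner loop is finite, so only one endless loop runs."""
    yield Fraction(0)
    for s in count(2):
        for b in range(1, s):
            a = s - b
            if gcd(a, b) == 1:       # skip duplicates such as 2/4
                yield Fraction(a, b)
                yield Fraction(-a, b)

print(list(islice(integers(), 7)))   # [0, 1, -1, 2, -2, 3, -3]
print(list(islice(rationals(), 7)))  # [0, 1, -1, 2, -2, 1/2, -1/2]
```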
axiom of choice
It is not possible to choose all elements from an infinite variable, because that would need infinite information and/or infinite time. One therefore resorts to making choice an axiom, to still be able to reason about such sets.
Practically, the information of an infinite variable depends on when one chooses to stop the infinite loop, i.e. on the precision. When modelling reality in computer software, the integer type or the floating-point type is chosen according to the needed precision. If they are not enough, one can use arbitrary-precision libraries.
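For example, with Python’s standard decimal module the stopping point of the loop, i.e. the precision, is freely chosen:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50    # choose where to stop the open loop: 50 digits
print(Decimal(2).sqrt())  # 1.4142135623730950488016887242096980785696718753769
```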
Infinity and Topology
In \(ℝ\) even a normal \(1\) means \(1.\bar{0}\). The latter is an algorithm that never ends. The short \(1\) chooses an element of \(ℕ\) with variable-length coding. The presence of infinitely close other numbers in \(ℝ\) asks for a method to distinguish the \(1\) from them. This is done by an algorithm that produces numbers ever closer to \(1\). At a certain step, e.g. \(1.000n\), the \(n\) is chosen to be \(0\) instead of \(1\) to \(9\). Every step generates a variable to allow a further choice. This infinite loop is called open.
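A sketch of this open loop (Python; the names and the decimal base are my choices): every step chooses a digit and thereby narrows the neighborhood:

```python
from itertools import islice, repeat

def approach(digits):
    """Open loop: each chosen digit narrows the neighborhood by a factor of 10."""
    x, scale = 0.0, 1.0
    for d in digits:                   # digits: an endless iterable of choices 0..9
        scale /= 10
        x += d * scale
        yield (1 + x, 1 + x + scale)   # current neighborhood around the target

# 1 = 1.000...: always choose the digit 0; the intervals shrink around 1
for lo, hi in islice(approach(repeat(0)), 4):
    print(lo, hi)   # approximately 1.0 1.1, then 1.0 1.01, 1.0 1.001, ...
```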
open
open can be interpreted as an open-ended infinite loop, open-ended in the sense that we decide later when to get out of it.
The objective of an open loop is to define an element \(x\) by approaching it. The intermediate steps of the loop form a neighborhood of \(x\). \(x\) is defined by the algorithm and is included in the set: the topological closure.
The definition of elements is given by the separation axioms:
- T0 defines a point \(x\) (element, value) via neighborhoods as described: of any two points, at least one has a neighborhood not containing the other
- T1 is when each of any two points has a neighborhood not containing the other
- T2 is when any two points can be distinguished by disjoint neighborhoods; then every filter and every net has a unique limit
If the closure of a set has missed a point, and such a point can always be separated from the closed set by disjoint neighborhoods, then this is a regular space. A regular Hausdorff space with a countable base is metrizable (Urysohn metrization theorem).
Cauchy convergence and completeness are generalized with the uniform space, which is built on axioms equivalent to those of a pseudometric, where two distinct elements do not necessarily have a nonzero distance.
If in the topology we have a metric, a neighborhood is conveniently defined as an open ball.
All these concepts to define closeness can be visualized with a finite FCA lattice and then be generated to ever finer ones ad infinitum.
space vs time
Neighborhood is normally mapped to our sense of physical space, but this provokes the misleading idea that more is selected at the same time. It is better to map a neighborhood to our sense of time, because that better depicts the fact that the points’ only reason to occur together is the selection process itself. We look at one selection process at a time. open makes this selection process an open-ended loop. It stands for the older notation \(x = \lim_{n→∞}x_n\). “\(f: A→B\) is continuous if every open \(X⊂B\) has an open \(f^{-1}(X)⊂A\)” is the same as \(\lim_{n→∞}f(x_n)=f(x)\). With a metric one can also say: for every \(ε>0\) there is a \(δ>0\) such that \(|x_n-x|<δ\) implies \(|f(x_n)-f(x)|<ε\). Note also that neighborhood does not imply nearness in the metric sense, but rather in the set containment sense.
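A toy numeric check (Python; choosing \(f=\sin\) and \(x=1\) arbitrarily) of \(\lim_{n→∞}f(x_n)=f(x)\):

```python
import math

f, x = math.sin, 1.0
for n in range(1, 6):
    xn = x + 10.0 ** (-n)         # x_n -> x as n grows
    print(n, abs(f(xn) - f(x)))   # |f(x_n) - f(x)| -> 0: continuity at x
```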
Every open cover has a finite subcover.
This property connects the infinite with the finite and thus allows global statements about the set which, at the infinitely close (local) scale, wouldn’t have much meaning, because infinity can never be reached.
That a set in \(ℝ^n\) is compact exactly if it is closed and bounded is the Heine–Borel theorem; many such global statements are proved with compactness.
Usefulness of Infinity
One such use, mentioned already, is to have a versatile multitude to map all kinds of real variables to.
Variables do not exist alone; they exist because of other variables. The functional dependence is a general characterization of the system, not so much the size of the variables (information), which changes from system to system. With infinity as defined here one makes a simulation with variables generated to the wanted precision. The algorithm needs less memory and simulations can be done with minimal generated variables. So this analytic description altogether saves a lot of memory.
One does not need to use by-chance numbers
When a physicist liberally uses infinity in his description of the world, this is an idealization justified by the wanted precision. For example, an infinite distance could be a few centimeters when describing an atomic-scale phenomenon. It is this idea that makes him use \(∞\) instead of a by-chance distance like 2 cm or 3 cm.
One can make more general statements. Such statements are shorter, i.e. need less space (information, complexity).
In a general statement the precision is unknown and so the decision about it needs to be deferred.
For example, the diagonal in a NaCl crystal will need one precision, in a KBr crystal another one.
There is often no finite algorithm to describe certain things, like the length of the diagonal of a square (\(√2\)).
Trial and error is a basic principle because it follows from selection. It is an infinite iterative algorithm that is stopped when one is content with the result. Obviously this has many applications.
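A minimal sketch (Python; the target function and tolerance are arbitrary choices of mine) of trial and error as an open loop stopped when content:

```python
import random

def trial_and_error(score, propose, good_enough=1e-3, max_trials=100_000):
    """Open-ended selection loop: keep proposing, keep the best,
    stop when content with the result (or out of patience)."""
    best = propose()
    for _ in range(max_trials):
        if score(best) <= good_enough:   # content with the result: abort the loop
            break
        trial = propose()
        if score(trial) < score(best):   # the selection step
            best = trial
    return best

# toy usage: guess a number whose square is close to 2
x = trial_and_error(lambda x: abs(x * x - 2), lambda: random.uniform(0, 2))
print(x)   # ~1.414...
```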