"Functions" in math vs. computer science

Status
Not open for further replies.

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
I'm doing an investigation into the semantic foundations of mathematics, and I keep coming upon a very strange phenomenon. I'm curious whether there are any math heads in here who could comment on this research. This is a very subtle question that requires a basic computer science and mathematics background. It may sound strange at first until you read the entire argument.

In computer science, a function is defined exclusively as a procedure that takes an input (argument) and computes or generates an output based on that input.

In C++, for example, floor() is a function that takes a floating-point number as an argument and computes and returns the value of the input rounded down to the nearest integer. E.g., floor(5.4) computes an output of 5; floor(4.4992) computes 4; floor(3.29492) computes 3; and so on.
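
(For concreteness, here is a minimal C++ sketch of those calls; it is only meant to illustrate the input-in, output-out behavior, using the standard library's floor.)

#include <cmath>
#include <iostream>

int main() {
    // Each call takes an input and computes an output: the argument rounded down.
    std::cout << std::floor(5.4)     << '\n';   // prints 5
    std::cout << std::floor(4.4992)  << '\n';   // prints 4
    std::cout << std::floor(3.29492) << '\n';   // prints 3
    return 0;
}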

"Computes" is the very important operative term. A function in this computer science context is a machine that takes an input and computes, or renders an output. You put something in, you get something else out.

Big deal—we have functions in math that compute outputs.

Or do we?

Do functions in math compute outputs or do they map elements only? One may at first think this is no big deal, until we see something very strange going on with number sets.

According to wiki, and confirmed elsewhere:

In mathematics, a function is a binary relation between two sets that simply associates each element of the first set with exactly one element of the second set. What's very important here is that the sets do not need to contain numbers; that is, no computation is involved in generating the values of the target set (the codomain).

I.e., the definition of a function is simply about ordered pairs drawn from one set and the other, not about any procedure that computes those pairs.
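
In symbols, one standard way to state that definition (added here only for reference) is: \( f \subseteq X \times Y \) is a function from \( X \) to \( Y \) iff for every \( x \in X \) there is exactly one \( y \in Y \) with \( (x, y) \in f \).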

So if we define set X = { 1, 2, 3 }
and set Y = { D, B, C, A }

We can define a function as:

\( f: X → Y \)


The function is defined NOT as a computational procedure, but merely as the set of ordered pairs:

{(1, D), (2, C), (3, C)}

Is there a computation rendering the codomain from the domain here? No. There is a pairing, or mapping, between the sets, given simply as an enumeration or listing of the sets' elements, or symbols. The pairing may be an injection, a surjection, or a bijection, and determining which has nothing to do with numbers or with computations on them.
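
To make the contrast concrete, here is a small, purely illustrative C++ sketch in which the f above is stored as nothing but a lookup table of ordered pairs; no rule computes the letters from the numbers.

#include <iostream>
#include <map>

int main() {
    // f : X -> Y stored as bare ordered pairs, not as a formula.
    std::map<int, char> f = { {1, 'D'}, {2, 'C'}, {3, 'C'} };

    for (const auto& [x, y] : f)
        std::cout << x << " -> " << y << '\n';   // 1 -> D, 2 -> C, 3 -> C
    return 0;
}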

Set A = { 1, 2, 3, 4, 5 }

and

Set B = { 2, 4, 6, 8, 10 }

These seem to be related numerically and arithmetically. However, if we pair them as this function:

{(1,2), (2,4), (3, 6), (4, 8), (5, 10)}

We do not do so as a condition of any arithmetic relationship (such as 2x). We simply say there's a bijection between both sets if we can pair the elements one-to-one irrespective of any seeming computational relationship between the numbers. I.e., we are not pairing them due to a computational procedure such as f(x)=2x. We are pairing them as symbols occupying set elements only. The function is the pairing.

Why is this such a big deal?


Because in this context we are actually judging the cardinality (total number of elements) of a set by the number of occupied indexes ONLY. A bijection exists between A and B below just as much as it does between A and C, or between B and C:

Set A = { 1, 2, 3, 4, 5 }

Set B = { 2, 4, 6, 8, 10 }

Set C = { £, ß, ç, œ, ® }

It's tempting to say we "generated set B" by the procedure f(x) = 2x. Indeed, if we put the values of set A through f(x) = 2x, we can actually generate the elements residing in set B. But a bijective function is NOT defined this way: it is defined ONLY as a one-to-one pairing between existing set elements, independent of the representation of the elements, whether alpha or numeric. Whether alpha or numeric is key here.

Even though a set in mathematics doesn't have addressable indexes like an array in computer science, the true definition of a bijection concerns what could be considered indexes and whether or not each index is occupied. This is seen above with |A| = |B| = |C|. We have 5 elements in each set, all 3 sets have the same cardinality, and a one-to-one bijection can be created between all 3.

Let's extend this logic to number sets and we'll see something very revealing. If we assume the capacity to create a unique symbol to represent any number, we've defined "base infinity" (base-∞). In this base:

How many indexes are there in ℕ? Infinite.
How many indexes are there in ℚ? Infinite.
How many indexes are there in ℤ? Infinite.
How many indexes are there in ℝ? Infinite.

Again, we do not determine whether there is a bijection between these by means of a computation. By saying there are infinite indexes, we automatically have a bijection between them. Remember, by the mathematical definition, we do not need to know the contents, whether alpha or numeric. There is simply a bijection due to the pairing of the contents of the indexes. Here we can see that set A bijects one-to-one with sets B and C, and set B can also biject to set C:

A[0] == 1 → B[0] == 2 → C[0] == £
A[1] == 2 → B[1] == 4 → C[1] == ß
A[2] == 3 → B[2] == 6 → C[2] == ç
A[3] == 4 → B[3] == 8 → C[3] == œ
A[4] == 5 → B[4] == 10 → C[4] == ®


We do not care what symbols or quantities occupy each index. Note: while there is a potential computation (2x) that links A[0] and B[0] (or any pair from A and B), there is none for A[0] and C[0], or for B[0] and C[0], and yet the bijection between them exists.
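
A rough C++ sketch of that index-by-index pairing (illustrative only): the three containers hold different kinds of elements, and the pairing is made by position alone, with no arithmetic linking the entries.

#include <array>
#include <iostream>
#include <string>

int main() {
    std::array<int, 5>         A = { 1, 2, 3, 4, 5 };
    std::array<int, 5>         B = { 2, 4, 6, 8, 10 };
    std::array<std::string, 5> C = { "£", "ß", "ç", "œ", "®" };

    // Pair the elements by index only; no formula such as 2x is consulted.
    for (std::size_t i = 0; i < A.size(); ++i)
        std::cout << "A[" << i << "] = " << A[i]
                  << "  <->  B[" << i << "] = " << B[i]
                  << "  <->  C[" << i << "] = " << C[i] << '\n';
    return 0;
}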

The important takeaway:

By this logic, all infinite sets share the same cardinality. There are no transfinites, no "continuum hypothesis". Boundlessness is extra-numeric and doesn't come in "different sizes."

In conclusion:
Indeed, if set A ⊆ set B and B ⊆ A then we have A = B.

If there is a one-to-one bijective function mapping a set P to another set Q, we say the number of elements (cardinality) of the two sets is equal, denoted by |P| = |Q|.

If we say ℕ has infinite cardinality and ℝ has infinite cardinality, the number of potential elements in both is equal; therefore |ℕ| = |ℝ|, and, as can be seen in base infinity, ℕ is just as "large" as ℝ.
 
Last edited:

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
Cantor's "diagonal argument" is based on conflating the definition of a function from computer science with that of mathematics: generating a unique number in ℝ has nothing to do with whether or not it can be mapped to ℕ, because "infinite cardinality" means "unbounded indexes", not whether or not a procedure can create a unique number in one number set vs. another.
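
(For readers who want to see the procedure being referred to, here is a toy C++ sketch of the diagonal construction on a finite list of decimal digit strings; it is only an illustration of the mechanism under discussion, not an argument either way.)

#include <iostream>
#include <string>
#include <vector>

int main() {
    // A finite stand-in for a listing of decimal expansions.
    std::vector<std::string> list = { "14159", "71828", "41421", "30103", "57721" };

    std::string diag;
    for (std::size_t i = 0; i < list.size(); ++i)
        diag += (list[i][i] == '5') ? '6' : '5';   // change the i-th digit of the i-th entry

    // diag now differs from list[i] in position i for every i.
    std::cout << "0." << diag << '\n';             // prints 0.55555 for the list above
    return 0;
}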

Further, ℕ exists in ℝ, and every point in ℝ can be mapped to between (0, 1) in ℕ. ℕ is simply another simplified version of "whole-number cuts" of ℝ. ℝ is therefore "countable" because one can biject any infinite set to another infinite set due to the true definition of "infinite cardinality" being based on "potential infinity" as "infinite indexes".
 
Last edited:

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
Welcome back Jennifer

Unless you are aiming to edit the question like last time
the answer is 42.
Hi Andrew,

For the 42nd time, I wasn’t “aiming” to edit the question, FFS. I was aiming to clarify, since my areas of research overlap with various territories of math, metamath, physics, metaphysics, and epistemology, and I’ve now made sure to make it general enough that the discussion can grow if need be. ;)
 
Last edited:

MrAl

Joined Jun 17, 2014
11,389
What are you trying to show or prove?
 

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
From what i understand here infinite sets are very different than finite sets. There could be different interpretations of how you define infinity.
So with that in mind, what have you read on this so far?
A set is defined as a rationalization, or a knowable, enumerable order of elements, while infinity represents unbounded unknowability. Therefore, the words “set” and “infinity” together are an oxymoron and an attempt at rationalizing the irrational. Consciousness, or the true ℕ, is the only “infinite set”. All supposed infinite “sets” are 100% identical in cardinality, as shown above using base infinity and reckoning identical potential indexes. “Infinite sets” are thus a delusion of Georg Cantor, which led to “transfinites,” or different-sized set cardinalities of infinity, due to conflating the mathematical definition of a function as a pairing with a computational process; this thinking leads to rationalizing the likes of “pi” as a “finite number,” when it is truly an irrational, transcendental 2D numeric expression involving the actual number 3 prepended to an unending process of infinitesimal calculation.
 
Last edited:

Delta Prime

Joined Nov 15, 2019
1,311
Hello there! :) Your avatar! What are those flowers? Are they even flowers? They're simply lovely. Are they fragrant? I mean that with the utmost respect!
In reading this thread in its entirety, I see you are developing a theoretical framework based on procedural and perceptual thinking!
Your methodology, based on quantitative data analysis, is to be saluted. Soooo, I do salute you.
The formal and the intuitive dimensional thinking you propose are in fact diametrically opposed. Hence, oxymoron! Your thought process is breaking new ground. I cannot contribute to your research. Don't tell anybody else, but you're almost beyond my capabilities. ;)
 

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
Why thank you! Yes, to answer your question, they’re flowers (it’s some kind of variety)... :—) Glad you’re able to appreciate the thinking…
 

dcbingaman

Joined Jun 30, 2021
1,065
As far as 'infinite' sets are concerned, I think the idea of 'infinity' is misleading. Take the set of all positive integers. You can have a finite list of the positive integers going upward from 1 to n. I think "a set that can increase without limit" is a more accurate description. Thus for the positive integers the rule is that, regardless of how large a finite set of positive integers from 1 to n is, there is always another set that goes from 1 to n+1. It is a process that is being used, and the process can go on indefinitely. But in the real world you always have finite sets. Another interesting thing to think about concerns irrational numbers. By definition the decimal positions go on with zero repeatability (how you prove that I don't know), but assume that is true for, say, the square root of 2. If there is zero repeatability, then every possibility is realized within such a number. Thus if we assign, say, each pair of digits in such a number to an English symbol (letters, numbers, spaces, etc.), then you will find the entire book of War and Peace encoded in all such numbers, because if there cannot be repeatability there must be infinite diversity, leading to this strange conclusion: All information knowable is encoded somewhere in every irrational number.
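
(A toy C++ sketch of that two-digits-per-symbol encoding, purely illustrative; the digit string below is a contrived placeholder, not an actual expansion of the square root of 2.)

#include <iostream>
#include <string>

int main() {
    // Placeholder digit string standing in for part of an irrational expansion.
    std::string digits = "150400020418051704";

    // Decode successive two-digit groups 00-25 as the letters 'a'-'z'.
    std::string decoded;
    for (std::size_t i = 0; i + 1 < digits.size(); i += 2) {
        int code = (digits[i] - '0') * 10 + (digits[i + 1] - '0');
        decoded += (code < 26) ? static_cast<char>('a' + code) : '?';
    }
    std::cout << decoded << '\n';                                       // prints "peacesfre"
    std::cout << (decoded.find("peace") != std::string::npos) << '\n';  // prints 1 (found)
    return 0;
}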

Now take the set of irrational numbers. This is a real set, but each element cannot be expressed with exact precision. A symbol may stand in for, say, the circumference of a circle over its diameter, but the real number that it represents cannot be expressed with exact precision. Sets have their limits.

Set theory also breaks down on this simple idea: Take the set that includes all sets that do not contain themselves. Does it contain itself? If it does it contradicts itself and if it does not it still contradicts itself.

What to take from all this? I don't have a clue. I tend to think in practical terms, and in practical terms all things are finite. Working in my wood shop I need a circle. How closely do I need to know the ratio of its circumference to its diameter? In that case, probably no more than 3.1. In a machine shop, for the same problem, using steel as my material where it will be used in a high-performance engine, I may need to know it to 3.14159 or better. In electronics, when converting, say, angular frequency to cycles per second, I need to divide by 2 times pi. Most of the time 3.14 is sufficient considering part tolerances.

Every real-world problem deals with finite limits, though it is interesting to think about 'infinite' sets. I prefer to think of them as 'unbounded' sets. A very large set of real numbers going from 1 to 10e20 is usually sufficient and can 'sit in' as an approximation of the set of real numbers when solving any real problem, and of course you can increase the size if required for some other problem in science or engineering. But we don't use infinite sets in science and engineering; they just show up for those math gurus who like to discuss such things at the local pub over a mug of beer. There is nothing wrong with that, as long as we have the beer!
 
Last edited:

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
You got it. :)
 

dcbingaman

Joined Jun 30, 2021
1,065
Thanks, though honestly I am not sure anyone can 'get' infinite sets, or the fact that the set of natural numbers from 1 onward and the set of even natural numbers from 2 upward are the same size because they can be placed in a one-to-one correspondence with each other. Or the stranger fact that even though the set of even natural numbers and the set of odd natural numbers are mutually exclusive sets with no members in common, because of the one-to-one correspondence they are both the same size as the natural numbers of which both are subsets! Reminds me of quantum mechanics: anyone who claims to actually understand it does not, according to some of the most renowned physicists.
 

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
As mentioned above, a "set" and "infinity" are oxymoronic. ℝ is the only infinite continuum, and all the other sets "exist" in ℝ, including ℕ. It's ridiculous to say that ℝ is "uncountable" because it can't biject to the ℕ within itself. That's why there's nothing to get, because insanity is untenable. :) "Actual infinity" is another oxymoron. Infinity is that which is unbounded. A set is bounded.

I discovered Steve Patterson shares my thoughts on the matter and very eloquently puts things into perspective:

http://steve-patterson.com/cantor-wrong-no-infinite-sets/
 

Deleted member 115935

Joined Dec 31, 1969
0
How's this coming along, Jennifer?

Do you need to edit the question to make things clearer, like before?
 

MrAl

Joined Jun 17, 2014
11,389
To quote:
"All information knowable is encoded somewhere in every irrational number."

I like that. So could we find pi within the square root of 2, or find sqrt(2) inside pi?

I have read that there was a proof that there are more reals than natural numbers. The proof makes sense. Or does it? I have a feeling that it depends on how you define things. It's like stop and go ... depending on where you stop, you get a different conclusion. If you keep going, then you don't ever get a conclusion.

Admittedly I have to read more on this, as it's been a long while for me now. I do like to inject a thought experiment about infinity that I posed back in the 1980s, though.

The thought is to think about how a set of humans would go about counting to infinity.
First we have the generations where each son takes over for each father as they get too old to continue the count.
Then we have the math, where we find better functions that can count to a higher number with each iteration, such as going from N+1 to N+N to N^2 to N^3 to N^N, etc. A question then comes up as to which function increases the most per iteration (see the sketch below).
Then we have technology which can help count faster (such as a computer).
So after, say, 2000 years, what is the highest number we could ever reach? Or after 10000 years?
And, given the number of atoms in the universe, is it even possible to store this number, or the number N-1, so that we could do one more iteration?
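
(A rough C++ sketch comparing one step of each of those "count faster" rules from the same starting point; doubles are used only because real counts would overflow machine integers almost immediately.)

#include <cmath>
#include <iostream>

int main() {
    double n = 10.0;   // an arbitrary current position in the count

    // One step of each successively faster counting rule applied to the same n.
    std::cout << "n + 1 = " << n + 1          << '\n';   // 11
    std::cout << "n + n = " << n + n          << '\n';   // 20
    std::cout << "n^2   = " << std::pow(n, 2) << '\n';   // 100
    std::cout << "n^3   = " << std::pow(n, 3) << '\n';   // 1000
    std::cout << "n^n   = " << std::pow(n, n) << '\n';   // 1e+10
    return 0;
}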

So it seems that infinity is a number that cannot stand alone without some preconceived notion about what it means or how it is defined.

Another example I have run across is that some problems involving infinity can be solved to within any desired accuracy/precision by using a number that is large relative to the problem at hand. Using a large number results in a conclusion that is almost the same as if we had used a theoretical infinity.
A really simple example:
y=a/(a+1)
what is the limit as a goes to infinity?
If we approximate with a=100000, then we have:
y=100000/100001
which is already very close to 1:
y=0.999990000099999000009999900000...
and if that is not accurate enough for our application then we can go to a=1000000 and get:
y=1000000/1000001=0.999999000000999999000000999999...
and we see we got another digit of accuracy.
This also works with some integrations where we have to integrate to infinity.
Of course it is a numerical calculation however.
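
(A minimal C++ sketch of that numerical approach, illustrative only: each tenfold increase in a buys roughly one more correct digit of the limit.)

#include <iomanip>
#include <iostream>

int main() {
    std::cout << std::setprecision(15);
    // y = a / (a + 1) approaches the limit 1 as a grows without bound.
    for (double a = 1e5; a <= 1e8; a *= 10.0)
        std::cout << "a = " << a << "   y = " << a / (a + 1.0) << '\n';
    return 0;
}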
 

Deleted member 115935

Joined Dec 31, 1969
0
One of the "proofs" about infinite series I remember being taught:

there is an infinite number of integers,
there is an infinite number of fractional numbers between each pair of integers,
hence there are two infinities (or more).

Both statements are true, but both are wrong:

the special numbers infinity and zero do not behave like other numbers.


Jennifer,
how do you square that circle?
 

dcbingaman

Joined Jun 30, 2021
1,065
I like your question on whether the square root of 2 contains pi, or pi contains the square root of 2. I would think it would have to. But the larger quandary: if any irrational number contains an infinite amount of information, then can it contain itself? If it does, it has to be repeatable. If it does not, where do we go with that? Seems like a logical contradiction.
 

Deleted member 115935

Joined Dec 31, 1969
0
Pi and the square root of 2 are both infinite numbers.

Thus, as with the quandary of how many fractions there are between integers, both of which are infinite series,
the answer is impossible to give.

This sounds like another of those impossible questions the OP has read or heard of, and wants to have a fireside chat over.
 