Beauty

bogosort

Joined Sep 24, 2011
696
If they do, it's quite probable that we'll never know it in its entirety:

https://en.wikipedia.org/wiki/Gödel's_incompleteness_theorems
Those theorems are about the completeness of 2nd-order (or higher) formal systems, i.e., that there exist statements that can neither be proved nor disproved within the system. This is a very different type of thing than trying to find equations that characterize the universe. Godel doesn't apply here.
 

bogosort

Joined Sep 24, 2011
696
Not sure about what you're saying. But it was he that gave mathematical proof that there are some truths that can never be proved. The reason is that to know everything about a system (in this case, the Universe) one has to be outside of the system.
FYI, this is a common misrepresentation of what the incompleteness theorems imply. It's important to note that Godel's incompleteness results apply only to formal systems, and not even all formal systems: only those of sufficient expressive power. For example, first-order logic is complete. First-order logic has the power to quantify over variables, and so is powerful enough to introduce the natural numbers, but it's not powerful enough to uniquely model the natural numbers. There are infinite possible first-order models of arithmetic, each of them perfectly complete, but none of them compatible with the others. We say that first-order systems are syntactically strong and semantically weak. Likewise, propositional logic (Boolean algebra) is complete: every possible propositional statement can be shown to be true or false.
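To make the propositional case concrete: a truth table settles any propositional formula mechanically, which is the sense in which that system is complete and decidable. A quick Python sketch (the particular formulas are just illustrations):

```python
from itertools import product

def is_tautology(formula, variables):
    """Check a propositional formula by brute-force truth-table enumeration.

    `formula` is a function from a truth assignment (a dict) to a bool.
    There are only 2**n assignments, so the question is always decidable --
    the sense in which propositional logic is complete.
    """
    return all(
        formula(dict(zip(variables, values)))
        for values in product([True, False], repeat=len(variables))
    )

# (A -> B) or (B -> A) is a tautology: true under every assignment.
taut = lambda v: (not v["A"] or v["B"]) or (not v["B"] or v["A"])
# A or B is satisfiable but not a tautology (false when A and B are false).
contingent = lambda v: v["A"] or v["B"]
```

Here `is_tautology(taut, ["A", "B"])` comes back `True` and `is_tautology(contingent, ["A", "B"])` comes back `False`: every propositional question has a mechanical answer.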

On the other hand, the added expressivity of second-order logic -- which allows for quantifying over sets -- can uniquely (up to isomorphism) model the natural numbers. Second-order systems gain semantic strength at the expense of syntactic (mechanical) strength. Godel's incompleteness applies only to semantically-rich formal systems, and only in terms of showing that there must exist statements in the system that can neither be proved nor disproved within the given axioms and rules of inference of that system.

In short, incompleteness speaks to the limits of mechanical proofs, not the limits of what we can know.
 

bogosort

Joined Sep 24, 2011
696
Yes, that analogy is not exactly perfect, but you must see that if there is a solution, we don't know what it is yet. So to be perfectly scientific about this, I think we have to say that it is only a belief that there is a solution. That's why I brought up that math problem.
This doesn't make sense to me, so I think we may be talking about different things. We already have various sets of parametrized equations that model the universe at different scales. These equations have numerical solutions that, to some order of approximation, correspond very closely to the data that we measure at that particular scale. What we don't yet have is a set of equations that produces solutions appropriate for all scales. Such a set would represent the fundamental laws of physics in this universe. It's possible that such a set does not exist, that perhaps the universe cannot be perfectly modeled with mathematics. But even in this case, we can always choose -- as we've been doing for the past few centuries -- a set of equations that approximate the universal laws of physics. The solutions of this set will have some error with respect to what we actually measure; our goal is to reduce this error as much as possible.

But in all cases the equations have solutions. This is almost tautological, because an unsolvable equation does not describe the universe!

Another way to look at this is that we might ask what if there are no *real* solutions. We don't even know what *real* is yet. From a math point of view we might say that a complex (as in complex number) solution exists, but that brings up the question again as to what is real and what is not, and that leads to even more dimensions, and we don't know if these extra dimensions are real or just math tools that help lead to a solution that, again, we have to deem real.
When you say "*real*" do you mean a real number in ℝ? I hate that mathematicians of old used the overloaded words "real" and "complex" to describe numbers in ℝ and ℂ. There is nothing more or less real about a number in ℝ than any other type, nor is there anything more or less complex or imaginary about a number in ℂ than any other type.

Anyway, with the overloaded words it's difficult for me to parse what you're trying to say. But think of how in engineering we use numbers in ℂ to represent impedances. An impedance is characterized by its magnitude and phase, so we need two independent variables to mathematically describe it. There are many, many ways to do this, but we often choose single numbers from ℂ because it is a 2-dimensional vector space, which is precisely the size we need to represent impedances. The two dimensions are associated to the two independent values of an impedance's magnitude and phase. Likewise, if our model of the universe has 42 independent variables, then the solutions to its equations will live in a 42-dimensional space. There's nothing weird or unexpected about this.
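If it helps, here's the impedance idea in a few lines of Python; the component values are arbitrary illustrations, not from anything above:

```python
import math
import cmath

# Series resistor-inductor impedance Z = R + jwL. One complex number
# carries the two independent real quantities (magnitude and phase).
R = 50.0                      # ohms (illustrative value)
L = 10e-3                     # henries (illustrative value)
omega = 2 * math.pi * 1000    # angular frequency at 1 kHz

Z = complex(R, omega * L)     # a single point in the 2-dimensional space C
magnitude = abs(Z)            # |Z| in ohms
phase = cmath.phase(Z)        # phase angle in radians
```

The point is only that ℂ is a convenient 2-dimensional container; we could just as well carry (magnitude, phase) around as a pair of reals.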

Also, keep in mind that it is possible to trick a machine that knows everything about everything into giving a false answer to a properly designed question.
Huh?

This brings us to the question of whether there is a logical statement that cannot be disproved and as such could be one of the founding definitions of the universe ... that is, a definition that can be used in an argument that defines the universe and that can NEVER be contradicted, no matter how many men look for an antithesis.
There's no such thing as "a logical statement that can not be disproved". Proof: for any statement β that is unprovable in logical system A, create the system A' that is exactly the same as A, except that it includes the axiom "β is false".

Formal systems are an entirely different subject with different kinds of goals; they don't have any bearing on the search for equations that describe the universe.
 

MrAl

Joined Jun 17, 2014
11,389
Not sure about what you're saying. But it was he that gave mathematical proof that there are some truths that can never be proved. The reason is that to know everything about a system (in this case, the Universe) one has to be outside of the system.
Hi,

That's an interesting view too. That may be the final result.
 

MrAl

Joined Jun 17, 2014
11,389
This doesn't make sense to me, so I think we may be talking about different things. We already have various sets of parametrized equations that model the universe at different scales. These equations have numerical solutions that, to some order of approximation, correspond very closely to the data that we measure at that particular scale. What we don't yet have is a set of equations that produces solutions appropriate for all scales. Such a set would represent the fundamental laws of physics in this universe. It's possible that such a set does not exist, that perhaps the universe cannot be perfectly modeled with mathematics. But even in this case, we can always choose -- as we've been doing for the past few centuries -- a set of equations that approximate the universal laws of physics. The solutions of this set will have some error with respect to what we actually measure; our goal is to reduce this error as much as possible.

But in all cases the equations have solutions. This is almost tautological, because an unsolvable equation does not describe the universe!


When you say "*real*" do you mean a real number in ℝ? I hate that mathematicians of old used the overloaded words "real" and "complex" to describe numbers in ℝ and ℂ. There is nothing more or less real about a number in ℝ than any other type, nor is there anything more or less complex or imaginary about a number in ℂ than any other type.

Anyway, with the overloaded words it's difficult for me to parse what you're trying to say. But think of how in engineering we use numbers in ℂ to represent impedances. An impedance is characterized by its magnitude and phase, so we need two independent variables to mathematically describe it. There are many, many ways to do this, but we often choose single numbers from ℂ because it is a 2-dimensional vector space, which is precisely the size we need to represent impedances. The two dimensions are associated to the two independent values of an impedance's magnitude and phase. Likewise, if our model of the universe has 42 independent variables, then the solutions to its equations will live in a 42-dimensional space. There's nothing weird or unexpected about this.


Huh?


There's no such thing as "a logical statement that can not be disproved". Proof: for any statement β that is unprovable in logical system A, create the system A' that is exactly the same as A, except that it includes the axiom "β is false".

Formal systems are an entirely different subject with different kinds of goals; they don't have any bearing on the search for equations that describe the universe.

We should probably talk about one piece of your reply at a time but let's see.

First, there are two types of approximations: those that come close, and those that can get closer and closer. If we find an approximation, I think it is only valid if we can use it to approximate to any desired accuracy.

When I say "unsolvable" I was referring to the universe itself. I think you agree that it may not be.

When I say "real" I mean: if we come up with 20 dimensions, do they really exist just because an equation describes the behavior?

In response to "huh"...
If you had a machine that could answer every single question you asked it with perfect accuracy, you would expect it to be able to answer every question, right? Yet you can devise questions that the machine will give a false answer to, even though it is built to answer every question truthfully; or rather, I should say, you can trick it into being unable to answer a question.

By a logical statement that cannot be disproved, I mean one that is always true in what it describes. If we describe gravity with a statement, then it should NEVER be disproved, unless we don't have the right statement yet.
 

bogosort

Joined Sep 24, 2011
696
First, there are two types of approximations: those that come close, and those that can get closer and closer. If we find an approximation, I think it is only valid if we can use it to approximate to any desired accuracy.
Are volt meters and tape measures valid? By your criterion, no matter how well-engineered they are, they're not. Yet somehow we find ways to use them to do incredible things. Wouldn't you agree that they must have some level of validity? Even if we can never approach zero error, there's clearly a meaningful difference between a measurement with 10% uncertainty and one with 0.001%.

If it happens to be the case that humans simply do not have the capability to completely understand the rules by which the universe behaves -- the equivalent of arbitrary accuracy -- we can still do incredible things with imperfect physics. Indeed, that's exactly what we've been doing for the past few thousand years. And as our physics gets better -- closer to 0.001% than 10% -- we've been able to do and understand more.

Seems like a valid approach to me.

When I say "unsolvable" I was referring to the universe itself. I think you agree that it may not be.
I think we've been having a language issue, using the same terminology to mean different things. As I see it, the laws of physics are a parametrized set of equations of motion. Depending on the chosen parameters and initial conditions, the solutions to these equations represent different possible universes. Our universe is one particular solution set in the space of solutions. So, the way I'm using these words, it doesn't make any sense to think of the universe as unsolvable.

I'm guessing that by an unsolvable universe, you mean the case where we cannot find the fundamental equations of motion, either because they don't exist (randomly-changing rules?) or because they are beyond our comprehension (too complicated?). This is certainly a possibility (unfortunately, probably the most likely), but it doesn't in any way prevent us from trying to find the best approximation. And since the universe does not appear to be randomly changing its physics, and we've been getting better and better at it, I believe we can get pretty damn close.

When I say "real" I mean: if we come up with 20 dimensions, do they really exist just because an equation describes the behavior?
More language issues. What do you mean by "exist"? Do the three spatial dimensions "exist"? To my way of thinking, they exist only in our minds; 3D is a convenient and sufficient abstraction for daily human activity, a way to make a mental map of all the raw sensory input we're bombarded with. Likewise, 4D is a convenient and sufficient abstraction to reason about relativity. As I see it, 3D is no more or less real than 4D (or 20D); they're all just mental models. The judgement about the "reality" of an equation should be how well it predicts physics, not how many independent variables it has.

In response to "huh"...
If you had a machine that could answer every single question you asked it with perfect accuracy, you would expect it to be able to answer every question, right? Yet you can devise questions that the machine will give a false answer to, even though it is built to answer every question truthfully; or rather, I should say, you can trick it into being unable to answer a question.
This is less meaningful than asking if an unstoppable force can move an immovable object. I say less because such a truth machine is provably impossible (see: Halting problem). So, I'm not too concerned about what an impossible machine has to say. :)

By a logical statement that cannot be disproved, I mean one that is always true in what it describes. If we describe gravity with a statement, then it should NEVER be disproved, unless we don't have the right statement yet.
Logic statements and physics statements are categorically different; it really does not help trying to shoehorn physics into formal logic. We can never prove statements about physics. Why? Because proof is syntactic: do these collections of symbols follow from these other collections of symbols, given this set of axioms and this other set of rules of inference. Proofs only exist in formal systems.

When we use math to describe physics, we are not formalizing physics. We're using a model, i.e., providing semantics by imbuing the equations with an interpretation. The equations can be proved, but the interpretation cannot. To make this abundantly clear, consider the expression "A ∨ B" in propositional logic. These symbols represent a well-formed formula in the language of propositional logic, and -- since this can be syntactically proved -- it is a theorem. But the statement has no intrinsic truth value, it is just a meaningless group of symbols until we provide a model. If we interpret the symbol 'A' as being a true statement, then the expression "A ∨ B" becomes a true statement. Note the subtle distinction: we can never prove that the expression "A ∨ B" is true (or false); to do that, we would need to prove that A (or B) is true, but the truth of A and B were interpretations, brought in from outside the formal system. Proof can only happen in the formal system itself, because it is just syntactic symbol shuffling.
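A tiny Python sketch of that distinction (the encoding is my own illustration): the string "A or B" is just symbols, and it only acquires a truth value once we supply an interpretation from outside:

```python
def evaluate(formula, interpretation):
    """Assign the formula a truth value under a given model.

    The formula itself is just a string of symbols; all of its truth
    comes from the interpretation we bring in from outside the system.
    """
    A, B = interpretation["A"], interpretation["B"]
    return eval(formula)  # fine here: we only eval our own fixed string

wff = "A or B"                          # a well-formed formula
model_1 = {"A": True, "B": False}       # one interpretation
model_2 = {"A": False, "B": False}      # another interpretation
```

Under `model_1` the same symbols evaluate to `True`; under `model_2`, to `False`. Nothing about the string itself decided that.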
 

Thread Starter

nsaspook

Joined Aug 27, 2009
13,081
https://www.technologyreview.com/s/...ts-theres-no-such-thing-as-objective-reality/
That raises some fascinating questions that are forcing physicists to reconsider the nature of reality.

The idea that observers can ultimately reconcile their measurements of some kind of fundamental reality is based on several assumptions. The first is that universal facts actually exist and that observers can agree on them.

But there are other assumptions too. One is that observers have the freedom to make whatever observations they want. And another is that the choices one observer makes do not influence the choices other observers make—an assumption that physicists call locality.

If there is an objective reality that everyone can agree on, then these assumptions all hold.
 

MrAl

Joined Jun 17, 2014
11,389
Are volt meters and tape measures valid? By your criterion, no matter how well-engineered they are, they're not. Yet somehow we find ways to use them to do incredible things. Wouldn't you agree that they must have some level of validity? Even if we can never approach zero error, there's clearly a meaningful difference between a measurement with 10% uncertainty and one with 0.001%.

If it happens to be the case that humans simply do not have the capability to completely understand the rules by which the universe behaves -- the equivalent of arbitrary accuracy -- we can still do incredible things with imperfect physics. Indeed, that's exactly what we've been doing for the past few thousand years. And as our physics gets better -- closer to 0.001% than 10% -- we've been able to do and understand more.

Seems like a valid approach to me.


I think we've been having a language issue, using the same terminology to mean different things. As I see it, the laws of physics are a parametrized set of equations of motion. Depending on the chosen parameters and initial conditions, the solutions to these equations represent different possible universes. Our universe is one particular solution set in the space of solutions. So, the way I'm using these words, it doesn't make any sense to think of the universe as unsolvable.

I'm guessing that by an unsolvable universe, you mean the case where we cannot find the fundamental equations of motion, either because they don't exist (randomly-changing rules?) or because they are beyond our comprehension (too complicated?). This is certainly a possibility (unfortunately, probably the most likely), but it doesn't in any way prevent us from trying to find the best approximation. And since the universe does not appear to be randomly changing its physics, and we've been getting better and better at it, I believe we can get pretty damn close.


More language issues. What do you mean by "exist"? Do the three spatial dimensions "exist"? To my way of thinking, they exist only in our minds; 3D is a convenient and sufficient abstraction for daily human activity, a way to make a mental map of all the raw sensory input we're bombarded with. Likewise, 4D is a convenient and sufficient abstraction to reason about relativity. As I see it, 3D is no more or less real than 4D (or 20D); they're all just mental models. The judgement about the "reality" of an equation should be how well it predicts physics, not how many independent variables it has.


This is less meaningful than asking if an unstoppable force can move an immovable object. I say less because such a truth machine is provably impossible (see: Halting problem). So, I'm not too concerned about what an impossible machine has to say. :)


Logic statements and physics statements are categorically different; it really does not help trying to shoehorn physics into formal logic. We can never prove statements about physics. Why? Because proof is syntactic: do these collections of symbols follow from these other collections of symbols, given this set of axioms and this other set of rules of inference. Proofs only exist in formal systems.

When we use math to describe physics, we are not formalizing physics. We're using a model, i.e., providing semantics by imbuing the equations with an interpretation. The equations can be proved, but the interpretation cannot. To make this abundantly clear, consider the expression "A ∨ B" in propositional logic. These symbols represent a well-formed formula in the language of propositional logic, and -- since this can be syntactically proved -- it is a theorem. But the statement has no intrinsic truth value, it is just a meaningless group of symbols until we provide a model. If we interpret the symbol 'A' as being a true statement, then the expression "A ∨ B" becomes a true statement. Note the subtle distinction: we can never prove that the expression "A ∨ B" is true (or false); to do that, we would need to prove that A (or B) is true, but the truth of A and B were interpretations, brought in from outside the formal system. Proof can only happen in the formal system itself, because it is just syntactic symbol shuffling.

Hi,

I realize that my side of the discussion is easier because you say that
everything is eventually provable, while I say that it may not be in
its most exact form.


'voltmeters'
It appears that you are accepting approximations while I am not, but there is
of course the idea that a good enough approximation is by definition good enough.
The volt meter and the tape measure are tools that rely on statistical averages,
and thus we miss the detail. If you are suggesting that they can eventually
lead to a good enough approximation, then I think you are right; but if you are
suggesting that they lead to a real approximation that is accurate down to
any choice of scale, then that remains to be seen.
For example, if we look at 1000 basketballs at a distance, they look like one big
mass. Yes, it still helps though to be able to see it at all.


'unsolvable'
But you are suggesting that we already know everything about the universe, yet we
know we don't.


'unsolvable universe'
You see a pattern of success and therefore believe that that pattern will lead to
better and better approximations. This is just a higher form of common experience.
Yes, most of us believe we will solve it eventually, but we can't be sure until we
actually solve it completely; and then, if we accept the wrong solutions, we might
still be missing something.


'exist'
Well, ha ha, this one goes deep into philosophy. But in short, I agree that a formula
may be good enough for many purposes; but what if that 'good enough' formula means we
miss something? By exist I mean that if the formula has 5 dimensions, then we can
actually find and use these dimensions for something other than the formula itself
(as well as for the formula).


'perfect machine'
That is not the point, I don't think, because this is a machine that cannot give a
false answer. But let's look at an example and maybe see why mathematics is so limited:
because it by definition encompasses itself. Maybe you said something like this before, but
here is an example.
We have a machine, and the machine is as perfect as you can get because it "knows" the
difference between true and false. A statement like
1+1=2 is true because it is true, but let's define the machine and see what happens.
This machine (or algorithm, I guess) always repeats the statement you give it if it is
true, but never repeats it if it is false. The statements will appear on the left, followed by
what the machine does on the right.
1 plus 1 equals 2 [gets repeated]
1 plus 2 equals 2 [does not get repeated]
I can not say "1 plus 2 equals 2" [repeated]
I can not say "i can not say '1 plus 2 equals 2'" twice [repeated]
I can not say "1 plus 2 equals 2", i can not say "1 plus 2 equals 2" [indeterminate]

I hope I have these statements correct, but you see the problem.


'model for physics'
Yes, but the model is what we think up given the ultra-common experience.
By ultra-common I mean after much experimentation.


But as you see from the other post, even the nature of reality is questioned, so how could we know until we know what that is? That is why I think we need to realize first that what we see may not be what we get, and so we might miss something,
and that something would be what defines the universe to any degree of approximation we care to calculate.
All we can do is hope to get there by the methods that have been used before us.
 

bogosort

Joined Sep 24, 2011
696
I realize that my side of the discussion is easier because you say that
everything is eventually provable, while I say that it may not be in
its most exact form.
I must be grossly obscuring my point if your takeaway is that I think that everything is eventually provable. As clear as I can say it: the only things that are provable are well-formed statements in formal systems of logic. Even within this extremely narrow and restricted category, there exist well-formed statements that are not provable (Godel). In other words, almost everything is not provable.

'voltmeters'
It appears that you are accepting approximations while I am not, but there is
of course the idea that a good enough approximation is by definition good enough.
The volt meter and the tape measure are tools that rely on statistical averages,
and thus we miss the detail. If you are suggesting that they can eventually
lead to a good enough approximation, then I think you are right; but if you are
suggesting that they lead to a real approximation that is accurate down to
any choice of scale, then that remains to be seen.
For example, if we look at 1000 basketballs at a distance, they look like one big
mass. Yes, it still helps though to be able to see it at all.
Again, I am saying that there is no such thing as arbitrary accuracy, i.e., there is no possible measurement where we can make the error ε as small as we care. You call such a measurement (ε → 0) a "real approximation" -- as opposed to "fake approximation"? -- but I call it a perfect measurement, and I deny its very existence. This means that we're left only with measurements that have some absolute lower bound on ε > 0. If these are not valid, as was your original implication, then science as a whole is not valid, because the only tools it has are imperfect approximations. While I recognize the possibility that science may indeed be completely wrong and therefore worse than useless, I do not believe that is a rational conclusion. If science were completely wrong, what is the probability that we'd have been able to develop, say, the cell phone? Think for a second how much we need to understand about the universe in order to make a networked phone that fits in the palm of your hand work.

'unsolvable'
But you are suggesting that we already know everything about the universe, yet we
know we don't.
How have I suggested that we already know everything about the universe? Please elaborate, because I specifically said the opposite.

Let me try to explain my point re: 'unsolvable' again. You're back in Physics 1 class and have been asked to figure out the projectile motion of a cannon ball fired from a cliff. Everything is ideal -- no friction, constant acceleration, etc. -- so the laws of physics can be reduced to a few parametrized equations of motion (the kinematic equations). Note what we have here: a set of equations represents the laws (the rules by which the universe works), and the solution space of these equations represents all physically possible projectile motions. Plugging in the single parameter (g = 9.8 m/s^2) and a few initial conditions will pick out the solution that corresponds to your particular class problem.

Now, what does "unsolvable" mean in this context? It means un-physical, i.e., some motion or path that cannot physically happen.

Of course, in the classroom example, we already knew all the laws of physics. In reality we don't know all the laws -- we haven't yet found all the equations of motion that can account for everything that happens in the universe at all scales. However, we do know that -- whatever the laws are -- there is a solution, namely, the one that leads to the universe as it looks now. As in the cannon ball example, saying that the universe is unsolvable is akin to saying that the universe is un-physical, that it cannot exist. But here it is! By virtue of the universe existing, it must be a solution to some set of equations. I concede that it may not be an exact solution, but exact solutions aren't necessary (e.g., aerodynamics and hydrodynamics are full of inexact solutions, yet we can do some pretty spectacular things with them).
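For concreteness, the classroom example can be written out; a minimal Python sketch with made-up initial conditions:

```python
import math

# Idealized cannon-ball problem. The kinematic equations of motion are
#   x(t) = v0*cos(theta)*t
#   y(t) = h0 + v0*sin(theta)*t - 0.5*g*t**2
# The parameter g plus the initial conditions pick out one solution
# from the space of all physically possible projectile motions.
g = 9.8                    # m/s^2, the single parameter
v0 = 30.0                  # m/s, initial speed (illustrative)
theta = math.radians(45)   # launch angle (illustrative)
h0 = 100.0                 # m, cliff height (illustrative)

def position(t):
    """The particular solution picked out by the parameters above."""
    x = v0 * math.cos(theta) * t
    y = h0 + v0 * math.sin(theta) * t - 0.5 * g * t ** 2
    return x, y

# Time of flight: the positive root of y(t) = 0 (quadratic formula).
vy = v0 * math.sin(theta)
t_flight = (vy + math.sqrt(vy ** 2 + 2 * g * h0)) / g
```

Any (x, y) path this produces is "physical"; a path that solves no such set of equations is exactly what I mean by un-physical.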

'unsolvable universe'
You see a pattern of success and therefore believe that that pattern will lead to
better and better approximations. This is just a higher form of common experience.
Yes, most of us believe we will solve it eventually, but we can't be sure until we
actually solve it completely; and then, if we accept the wrong solutions, we might
still be missing something.
It's difficult to truly grasp how much science has progressed in the past 5,000 years. Calling it just a "pattern of success" is really underselling it. There's absolutely no doubt that we're missing an enormous amount of details, but please consider which is the more probable:
  1. Science is slowly converging to a more accurate model of the universe; or
  2. Science was once converging, but is presently diverging from an accurate model; or
  3. Science is neither converging nor diverging; it is completely off the mark in every way.
Pick any field of study -- astronomy, electricity, biology, whatever -- think about its history and explain to me how option 1 is not the most probable.

'exist'
Well, ha ha, this one goes deep into philosophy. But in short, I agree that a formula
may be good enough for many purposes; but what if that 'good enough' formula means we
miss something? By exist I mean that if the formula has 5 dimensions, then we can
actually find and use these dimensions for something other than the formula itself
(as well as for the formula).
What does it mean to "find and use these dimensions"? Dimension is a mathematical concept; I can't dig for dimension, or build a barn out of dimension.

'perfect machine'
That is not the point, I don't think, because this is a machine that cannot give a
false answer. But let's look at an example and maybe see why mathematics is so limited:
because it by definition encompasses itself. Maybe you said something like this before, but
here is an example.
We have a machine, and the machine is as perfect as you can get because it "knows" the
difference between true and false. A statement like
1+1=2 is true because it is true, but let's define the machine and see what happens.
This machine (or algorithm, I guess) always repeats the statement you give it if it is
true, but never repeats it if it is false. The statements will appear on the left, followed by
what the machine does on the right.
1 plus 1 equals 2 [gets repeated]
1 plus 2 equals 2 [does not get repeated]
I can not say "1 plus 2 equals 2" [repeated]
I can not say "i can not say '1 plus 2 equals 2'" twice [repeated]
I can not say "1 plus 2 equals 2", i can not say "1 plus 2 equals 2" [indeterminate]
The statement "1 + 1 = 2" cannot be said to be true or false without reference to some particular model. For example, "1 + 1 = 0" is true within Boolean rings, and "1 + 1 = 1" is true within Boolean algebras. Your machine that cannot give false answers must have some domain of discourse, otherwise you'll have to spell out every possible assumption. But that's beside the point. I can't tell why your machine becomes undetermined in the last phrase, but there are several rigorous ways to get the same effect. It all starts with Godel, but the Halting problem is more in the spirit of your example. You can read Wikipedia for the full treatment, but the basic idea is that we can never design an algorithm that can determine whether an arbitrary computer program will eventually stop on some input, or keep running forever. The proof amounts to "tricking" such an algorithm by giving it itself as its own input, which leads to a paradox. Though it doesn't sound like a particularly earth-shattering result, it shows that such "never false" machines cannot exist.
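In case it helps, here's the diagonal trick in a few lines of Python. This is a sketch, not the proof, and every name in it is made up: given any candidate halting decider, we can build a program that asks the decider about itself and then does the opposite, so the decider is guaranteed to be wrong on that one input.

```python
def make_adversary(decider):
    """Build a program that defeats a claimed halting decider.

    decider(prog) is supposed to return True iff prog() halts.
    The adversary consults the decider about itself and does the
    opposite, so no decider can answer correctly for it.
    """
    def prog():
        if decider(prog):
            while True:        # decider said "halts", so loop forever
                pass
        return "halted"        # decider said "loops", so halt at once
    return prog

# One concrete (necessarily wrong) decider: predict "loops" for everything.
pessimist = lambda prog: False
adv = make_adversary(pessimist)
# pessimist(adv) is False ("it loops"), yet adv() returns "halted".
```

Swap in any decider you like; the adversary built from it always makes it wrong.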


'model for physics'
Yes but the model is what we think up given the ultra common experience.
By ultra common i mean after much experimentation.
Right, but my point is that they can neither be proved nor disproved. There is no proof in physics.

But as you see from the other post, even the nature of reality is questioned, so how could we know until we know what that is? That is why I think we need to realize first that what we see may not be what we get, and so we might miss something,
and that something would be what defines the universe to any degree of approximation we care to calculate.
All we can do is hope to get there by the methods that have been used before us.
The nature of reality has been in question since humanity first had some leisure time. For the past century, physics has been telling us that whatever the true nature of reality may be, it is nothing like our everyday experience. This doesn't mean that it's beyond our grasp -- quite the opposite! None of the so-called "unobjectivity" results goes against our present physical theories; they merely confirm what we already know. The moment Einstein showed that there is no such thing as an objective now upon which everyone can agree, we lost our "objective innocence". Quantum mechanics went further and trashed any remaining distinction between what it means to be subject or object.

But consider how amazing it is that -- even though we personally are not beholden to some kind of objective reality -- we can quantify just how unobjective things really are. In other words, science is doing for us what it always has done: allow us to transcend our empirical limitations. That the universe appears to fundamentally be some kind of crazy funhouse (relative to our everyday experience) is only more reason to lean into science. It's really the only tool we have for figuring out what the hell is going on; fortunately, it seems to be working.
 

MrAl

Joined Jun 17, 2014
11,389
I must be grossly obscuring my point if your takeaway is that I think that everything is eventually provable. As clear as I can say it: the only category of things that are provable are well-formed statements in formal systems of logic. Even within this extremely narrow and restricted category, there exist well-formed statements that are not provable (Godel). In other words, almost everything is not provable.


Again, I am saying that there is no such thing as arbitrary accuracy, i.e., there is no possible measurement where we can make the error ε as small as we care. You call such a measurement (ε → 0) a "real approximation" -- as opposed to "fake approximation"? -- but I call it a perfect measurement, and I deny its very existence. This means that we're left only with measurements that have some absolute lower bound ε > 0. If these are not valid, as was your original implication, then science as a whole is not valid, because the only tools it has are imperfect approximations. While I recognize the possibility that science may indeed be completely wrong and therefore worse than useless, I do not believe that is a rational conclusion. If science were completely wrong, what is the probability that we'd have been able to develop, say, the cell phone? Think for a second how much we need to understand about the universe in order to make a networked phone that fits in the palm of your hand.


How have I suggested that we already know everything about the universe? Please elaborate, because I specifically said the opposite.

Let me try to explain my point re: 'unsolvable' again. You're back in Physics 1 class and have been asked to figure out the projectile motion of a cannon ball fired from a cliff. Everything is ideal -- no friction, constant acceleration, etc. -- so the laws of physics can be reduced to a few parametrized equations of motion (the kinematic equations). Note what we have here: a set of equations represents the laws (the rules by which the universe works), and the solution space of these equations represents all physically possible projectile motions. Plugging in the single parameter (g = 9.8 m/s^2) and a few initial conditions will pick out the solution that corresponds to your particular class problem.
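That classroom setup can be sketched directly. The launch speed and cliff height below are made-up illustrative numbers, not values from the thread:

```python
# Projectile fired horizontally from a cliff: the kinematic equations
# are the "laws", and g plus the initial conditions pick out one solution.

g = 9.8      # m/s^2
v0 = 20.0    # initial horizontal speed, m/s (assumed for illustration)
h0 = 45.0    # cliff height, m (assumed for illustration)

def position(t):
    x = v0 * t                  # constant horizontal velocity
    y = h0 - 0.5 * g * t * t    # free fall under constant acceleration
    return x, y

# Time of flight comes from solving h0 - g*t^2/2 = 0 for t:
t_land = (2 * h0 / g) ** 0.5
x_land, _ = position(t_land)
print(f"lands after {t_land:.2f} s, {x_land:.1f} m from the base")
```

Every physically possible trajectory is some solution of these equations; the parameters merely select which one.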

Now, what does "unsolvable" mean in this context? It means un-physical, i.e., some motion or path that cannot physically happen.

Of course, in the classroom example, we already knew all the laws of physics. In reality we don't know all the laws -- we haven't yet found all the equations of motion that can account for everything that happens in the universe at all scales. However, we do know that -- whatever the laws are -- there is a solution, namely, the one that leads to the universe as it looks now. As in the cannon ball example, saying that the universe is unsolvable is akin to saying that the universe is un-physical, that it cannot exist. But here it is! By virtue of the universe existing, it must be a solution to some set of equations. I concede that it may not be an exact solution, but exact solutions aren't necessary (e.g., aerodynamics and hydrodynamics are full of inexact solutions, yet we can do some pretty spectacular things with them).


It's difficult to truly grasp how much science has progressed in the past 5,000 years. Calling it just a "pattern of success" is really underselling it. There's absolutely no doubt that we're missing an enormous amount of details, but please consider which is the more probable:
  1. Science is slowly converging to a more accurate model of the universe; or
  2. Science was once converging, but is presently diverging from an accurate model; or
  3. Science is neither converging nor diverging; it is completely off the mark in every way.
Pick any field of study -- astronomy, electricity, biology, whatever -- think about its history and explain to me how option 1. is not the most probable.


What does it mean to "find and use these dimensions"? Dimension is a mathematical concept; I can't dig for dimension, or build a barn out of dimension.




Hi,

Thanks for the discussion. I remember all that you talked about there, but I think we have to narrow the discussion context down a little in order to be able to make timely replies that are worthwhile.
Let's start with this:
It's difficult to truly grasp how much science has progressed in the past 5,000 years. Calling it just a "pattern of success" is really underselling it. There's absolutely no doubt that we're missing an enormous amount of details, but please consider which is the more probable:
  1. Science is slowly converging to a more accurate model of the universe; or
  2. Science was once converging, but is presently diverging from an accurate model; or
  3. Science is neither converging nor diverging; it is completely off the mark in every way.
Pick any field of study -- astronomy, electricity, biology, whatever -- think about its history and explain to me how option 1. is not the most probable.
This is exemplary of the point I am arguing now. I replied to this, but I guess you did not realize that my reply pertained to this very point.

I said already that most people BELIEVE that number 1 is correct. Not all, however. Some believe that as we ask questions, we get some answers and more questions. That is in the context of truly understanding the universe, though, not a question of engineering accomplishment.

When it comes to engineering accomplishment, look at what has been done knowing just classical physics. You can do a lot with just that. But as we dig deeper, we find there is more to the total puzzle. Then once we find that, we find there is even more.

Engineering is not the same as science. We can build things even without understanding science, except maybe for some very basic principles like Newton's gravity. Engineering can get away with those approximations, but science cannot.

I guess this is similar to some math approximations. The simplest might be a table vs. a function.
In a table we have a lot of entries (like a log table), and we can do a LOT with that in engineering. But in pure mathematics we want to be able to calculate the log of any number to any degree of accuracy, not limited to somebody else's idea of what is good enough.
There are examples that go even farther into this idea of approximations vs. the perfectly accurate. There is an algorithm that calculates prime numbers, or so it might seem, but there is ONE number it outputs as prime that is not actually prime. Just ONE number that is not prime, out of thousands that are. You would never notice it unless someone told you. That algorithm makes it look like you've got a prime number generator, but you really don't, because of just that one error.
I might try to look for that algorithm on the web, as it's been a long time now since I had a look, from around the mid 1980s. By now it must be on the web as a good example of what could go wrong.
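I can't identify the exact mid-1980s algorithm being recalled here, but a classic example of the same phenomenon is Euler's prime-generating polynomial n² + n + 41, which yields primes for every n from 0 to 39 and then quietly fails:

```python
# Euler's polynomial n^2 + n + 41: primes for n = 0..39, then a miss.
# (Offered as an analogue of the half-remembered example, not necessarily
# the same algorithm.)

def is_prime(m):
    if m < 2:
        return False
    return all(m % d for d in range(2, int(m ** 0.5) + 1))

def euler(n):
    return n * n + n + 41

print(all(is_prime(euler(n)) for n in range(40)))   # True: 40 primes in a row
print(euler(40), is_prime(euler(40)))               # 1681 False -- it's 41^2
```

Forty correct outputs in a row look like a prime generator, until the forty-first.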

The main point is that our common experience includes our experiments, and no matter how good they are, they are always conditional. Look up Descartes' "Evil Genius" to understand more, if you never did. Of course we don't believe in such a thing, but this is one view of how wrong we can be with common experience.

I hope this makes my point a little more clear. Remember, I don't disagree that engineering benefits from approximations, even science to some degree, but I think we need perfect accuracy to know it all. Perfect accuracy is like a log formula vs. an imperfect table of logs.
 

MrAl

Joined Jun 17, 2014
11,389
You are not. Welcome to the club ...
Hi,

Oh, sorry if I missed something, but feel free to point it out, no problem. Try to keep it short though, OK, so that we can reply with a good reply that doesn't take all night to write out :) I am involved in a number of other things that are taking a lot of my time right now, but I also like these discussions. Thanks.
 

Thread Starter

nsaspook

Joined Aug 27, 2009
13,081
Hi,

Oh, sorry if I missed something, but feel free to point it out, no problem. Try to keep it short though, OK, so that we can reply with a good reply that doesn't take all night to write out :) I am involved in a number of other things that are taking a lot of my time right now, but I also like these discussions. Thanks.
Don't worry about it.
 

MrAl

Joined Jun 17, 2014
11,389
Hi,

Oh ok no problem then :)

Thanks.
Another view, I realize, is from Laplace, who suggested that an intellect good enough to know the position and velocity of every particle at a given time would be able to predict the same for every particle at a later time, and thus would know, already in the present, everything there was to know even about the future.
THAT is perfection.

Now we can default a little to just a given set of particles, and be happy with that, I think, but we'd have to investigate the error in doing that.
An example is calculating inductance in the classical sense. The exact solution means we have to integrate out to an infinite distance from the wire, but then we realize that the sum drops off quickly after a certain distance, so we can approximate by keeping the distance large but not infinite. This means we can calculate to ANY degree of exactness, BUT once we get past a certain point we have to extend the distance again because the accuracy drops off. So what I call perfection is the original technique, where we have to go out to an infinite distance, and everything else is an approximation. I brought this up before too, but this leads to the notion of "what is good enough for all calculations for all humanity". Maybe we will discover that going out to a distance of 1 light year is good enough for all calculations we can ever hope to encounter, until one day when it isn't. Note that NOW the approximation again fails, while the perfect calculation never does.
Of course we also have to realize that this 'perfect calculation' is itself within the body of this discussion, and so is subject to the same scrutiny. For the purpose of this discussion I deem it perfect, but that is also subject to changes that could come in the future (and we know this has already happened!).
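The cutoff idea can be illustrated numerically. This is not the actual inductance integral, just a stand-in contribution that falls off like 1/r², integrated out to a finite distance R:

```python
# Illustrative stand-in for the inductance cutoff: integrate 1/r^2
# from r = 1 out to a finite distance R by the midpoint rule. The
# "perfect" answer is the R -> infinity limit, which equals exactly 1.

def truncated(R, n=100_000):
    dr = (R - 1.0) / n
    return sum(dr / (1.0 + (i + 0.5) * dr) ** 2 for i in range(n))

exact = 1.0   # integral of 1/r^2 from 1 to infinity
for R in (10.0, 100.0, 1000.0):
    print(R, exact - truncated(R))   # the error shrinks as the cutoff grows
```

Any finite R leaves a residual error of roughly 1/R, so a larger cutoff always buys more accuracy; only the infinite-distance calculation has none.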

Of course we still have at least one more view: the view of sustainability. In this view we look at the discovery of new ideas as a process, not disjointed from time the way we usually think of these things.
I posed the question about the practicality of counting to infinity. Now we know we can't, but it's interesting to look at the PROCESS of TRYING to do so.
If one man starts out counting, 1, 2, 3, etc., and lives to be 90 years old and then passes the task to his son, who takes over and lives to 90, and he passes it to his son, etc., what number would be the last number they could ever count to?
Well, for one thing, we could improve on the task by letting them use a computer and an algorithm for counting, so each one could get to a higher number before passing the task to the next generation. The thing is, though, this just increases the number attainable, and after we improve the way we do this, the question then becomes how long the human race will even last. So here, something we never thought of during the inception of the project attained complete dominance over our attempt to find the solution.
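The arithmetic of that thought experiment is easy to sketch. One number per second and a 90-year shift per counter are assumptions for illustration:

```python
# Generational counting: one number per second, 90 years per counter.
# The reachable number grows only linearly with the number of generations.

SECONDS_PER_LIFETIME = 90 * 365.25 * 24 * 3600   # roughly 2.8 billion

def highest_count(generations):
    return int(generations * SECONDS_PER_LIFETIME)

print(highest_count(1))     # one lifetime of counting
print(highest_count(100))   # a hundred generations -- still nowhere near infinity
```

Linear growth never gets any closer to infinity, no matter how long the relay lasts.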
 
Last edited:

bogosort

Joined Sep 24, 2011
696
I said already that most people BELIEVE that number 1 is correct. Not all, however. Some believe that as we ask questions, we get some answers and more questions. That is in the context of truly understanding the universe, though, not a question of engineering accomplishment.
The other two options were: 2. the accuracy of science is diverging (i.e., we're actually learning less about the universe as we go forward); and, 3. science is neither converging nor diverging (i.e., whatever we've been doing, none of it has anything to do with the universe). How can a reasonable person argue that either of these two options is more probable than option 1? That's what I was asking about. I have no idea why you brought in engineering accomplishments; I was talking specifically about our scientific knowledge: "astronomy, electricity, biology", etc.

But as we dig deeper, we find there is more to the total puzzle. Then once we find that, we find there is even more.
Yes, and this is called convergence.

I guess this is similar to some math approximations. The simplest might be a table vs. a function.
In a table we have a lot of entries (like a log table), and we can do a LOT with that in engineering. But in pure mathematics we want to be able to calculate the log of any number to any degree of accuracy, not limited to somebody else's idea of what is good enough.
FYI, by definition, a function is a table: all a function does is map elements from one set to another. You probably mean to emphasize the difference between a function on infinite sets vs a similar function defined on finite subsets. Naturally, the former will have more entries than the latter -- and so the latter can be considered a finite approximation of the former. But both are equally valid functions. Note also that science is and must always be 100% approximations, despite the seeming perfect certainty of our equations. We say that the length of the ramp is 5 m, but this is an approximation that's only valid with a large enough ruler -- what's its length when measured with molecular precision? with atomic precision? with subatomic precision? with Planck-scale precision?
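The table-vs-function distinction can be seen in a couple of lines: a finite log table is just a restriction of the same mapping that `math.log` computes on demand:

```python
import math

# A finite "log table": the same mapping as math.log, restricted to
# four chosen inputs.
log_table = {x: math.log(x) for x in (1, 2, 5, 10)}

print(log_table[2] == math.log(2))   # True: identical where the table is defined
print(7 in log_table)                # False: the table simply has no entry for 7
print(math.log(7))                   # the function covers 7 anyway
```

Both are perfectly valid functions; the table is just defined on a smaller domain.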

There is an algorithm that calculates prime numbers, or so it might seem, but there is ONE number it outputs as prime that is not actually prime. Just ONE number that is not prime, out of thousands that are. You would never notice it unless someone told you. That algorithm makes it look like you've got a prime number generator, but you really don't, because of just that one error.
Then that is a severely broken algorithm. Maybe they were using heuristics, accepting occasional errors in the interest of huge speed gains. But that's a different goal than simply (and perfectly) generating prime numbers. We've had prime number generators (that make zero errors) for over 2,000 years now. The public-key crypto algorithms in your browser do it perfectly and fast.
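The 2,000-year-old zero-error generator alluded to here is the Sieve of Eratosthenes; a minimal version:

```python
# Sieve of Eratosthenes: every number it emits is genuinely prime,
# with no probabilistic shortcuts and no exceptions.

def primes_up_to(n):
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for q in range(p * p, n + 1, p):   # cross off every multiple of p
                is_prime[q] = False
    return [p for p in range(n + 1) if is_prime[p]]

print(primes_up_to(30))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Unlike a probabilistic test, the sieve never misclassifies a composite as prime.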
 

bogosort

Joined Sep 24, 2011
696
Another view, I realize, is from Laplace, who suggested that an intellect good enough to know the position and velocity of every particle at a given time would be able to predict the same for every particle at a later time, and thus would know, already in the present, everything there was to know even about the future.
THAT is perfection.
Laplace didn't know quantum mechanics (or even thermodynamics). The goal of physics is not to be able to predict the exact state of the universe at any given time; QM shows us that this is impossible. The goal is to find a complete set of equations of motion that govern the universe at all scales. Just like the heat equation tells us the temperature dynamics of an object -- but not what any particular molecule will do -- the equations of motion would give us the dynamics of the universe, the physically possible ways it can evolve at every scale.
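The heat-equation analogy can be made concrete with a toy simulation. The grid size, time step, and initial hot spot below are all made-up illustrative values:

```python
# 1-D heat equation on a short rod by explicit finite differences.
# The equation determines the temperature field's dynamics while saying
# nothing about what any individual molecule does.

alpha, dx, dt = 1.0, 1.0, 0.2     # alpha*dt/dx^2 = 0.2 <= 0.5, so stable
T = [0.0] * 11
T[5] = 100.0                       # a hot spot in the middle of the rod

r = alpha * dt / dx ** 2
for _ in range(200):
    T = [T[i] + r * (T[i + 1] - 2 * T[i] + T[i - 1]) if 0 < i < 10 else T[i]
         for i in range(11)]       # ends held at 0 (fixed-temperature boundary)

print([round(t, 1) for t in T])    # the hot spot has diffused outward
```

The field's evolution is fully determined, yet nothing in it picks out the path of any single molecule, which is the distinction being drawn against Laplace's demon.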
 

MrAl

Joined Jun 17, 2014
11,389
The other two options were: 2. the accuracy of science is diverging (i.e., we're actually learning less about the universe as we go forward); and, 3. science is neither converging nor diverging (i.e., whatever we've been doing, none of it has anything to do with the universe). How can a reasonable person argue that either of these two options is more probable than option 1? That's what I was asking about. I have no idea why you brought in engineering accomplishments; I was talking specifically about our scientific knowledge: "astronomy, electricity, biology", etc.


Yes, and this is called convergence.


FYI, by definition, a function is a table: all a function does is map elements from one set to another. You probably mean to emphasize the difference between a function on infinite sets vs a similar function defined on finite subsets. Naturally, the former will have more entries than the latter -- and so the latter can be considered a finite approximation of the former. But both are equally valid functions. Note also that science is and must always be 100% approximations, despite the seeming perfect certainty of our equations. We say that the length of the ramp is 5 m, but this is an approximation that's only valid with a large enough ruler -- what's its length when measured with molecular precision? with atomic precision? with subatomic precision? with Planck-scale precision?


Then that is a severely broken algorithm. Maybe they were using heuristics, accepting occasional errors in the interest of huge speed gains. But that's a different goal than simply (and perfectly) generating prime numbers. We've had prime number generators (that make zero errors) for over 2,000 years now. The public-key crypto algorithms in your browser do it perfectly and fast.

Hello,

You seem to be missing the points of these statements.

For example, I don't quote a broken algorithm to have you say that it is a broken algorithm. Why would you think that reply would help, when I already stated that it is not perfect? The point was: things look perfect until we find something totally unexpected, and then we see the flaw.
What follows could be flaw after flaw, and we hope and pray that as we solve them we get to where we really want to go.

You have to understand why someone says something before you can reply with something that actually adds to the discussion and takes it further. Otherwise it will be back and forth forever. So let me repeat: you have to understand why the other person says something, not just what they say verbatim.
And let me repeat the point too: things look perfect until we find something unexpected, and that could take us down a totally unforeseen path. I used to quote a saying I made up a long time ago, with some variations:
"It seems often perfect when its errors perfectly hide".
 