Will We Hang Up Our Brains In 2025?

Thread Starter

MrAl

Joined Jun 17, 2014
13,667
Hello there,

According to Elon Musk, artificial intelligence will become smarter than humans by 2025. That's only two more years.
Even if that is not true and it takes longer, imagine what this will mean for society, social interactions, jobs, and everything human. We could look at some examples to get a feel for this.

Imagine you wanted to design a circuit to do some unique thing that has not been done before (assume for now that it is possible). You could ask the chat bot, and the chat bot would give you the circuit, no matter how complicated, along with the explanations, formulas, and parts lists. Where does that put people who design circuits?
Imagine you wanted a building or bridge built somewhere, and you could ask a chat bot how to design it, what parts and equipment are needed, and so on. Where would this put architects?

What some people believe is that this will never happen because AI cannot handle novelty very well, and that is what I have seen too, along with some big mistakes. For example, one chat bot claimed that a very low input offset op amp was laser trimmed to provide a "50 mV" input offset. Really, 50 mV? Gee, I guess their lasers were off that day (ha ha). The right value was 50 µV, of course, and I had to explain that to the thing.
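For scale, an input offset voltage shows up at the output multiplied by the closed-loop noise gain, so the gap between 50 mV and 50 µV is enormous. A quick sketch (the gain of 1000 here is assumed purely for illustration, not from any particular datasheet):

```python
# Input offset voltage is amplified by the closed-loop noise gain.
# Assume a non-inverting amplifier with a gain of 1000 for illustration.
gain = 1000

for v_os in (50e-3, 50e-6):  # claimed 50 mV vs. actual 50 uV
    v_out_err = gain * v_os
    print(f"V_os = {v_os * 1e6:.0f} uV -> output error = {v_out_err:.3f} V")

# A 50 mV offset would give 50 V of output error, far beyond any supply rail;
# a 50 uV offset gives 50 mV, which is what laser trimming actually buys you.
```

That is why the bot's figure was absurd on its face: no precision op amp would be trimmed to an offset that saturates the output at moderate gain.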

But some day it could happen, or at least get close enough to take over almost all the calculations and figuring out of everything.
The question is, what do we do then? What does everyone do then?
There would be no need for libraries or technical books. Many things would become obsolete, like the slide rule.

What is kind of funny right now, though, is that some of them get their information from chat rooms on the web. Thus, when Barney tells Fred that the sun revolves around the earth, it may get reported to you as fact.
 

nsaspook

Joined Aug 27, 2009
16,249
AI today, and IMO for the near future, will continue to be a toy and a tool. Toys are fun and tools are useful, but both will always require human imagination for actual creativity. The perceived AI intelligence and creativity is only possible because human content is being manipulated in very creative ways. GIGO (garbage in, garbage out) will always be true.
 

ApacheKid

Joined Jan 12, 2015
1,762
I'm growing tired of the lay press harping on about "AI" like it's some kind of new invention. There's no objective definition or measure of intelligence, so it's all hype, the latest press fad. As for "artificial intelligence", that too is a bit of a misnomer: there are compelling reasons to believe that a human mind is not algorithmic, so how an algorithmic system can be expected to replicate the behavior of a non-algorithmic system, I don't know.
 

ApacheKid

Joined Jan 12, 2015
1,762
Until AI achieves consciousness and is self-aware, it will remain a long way from duplicating human intelligence and innovation.
Consciousness is unexplained; there's no scientific explanation for the subjective experience of "being aware". This is the realm of metaphysics and philosophy. Science cannot help us here; it is a true frontier of understanding.
 

nsaspook

Joined Aug 27, 2009
16,249
It’s probably already smart enough to take over most middle-management jobs.
I think "smarts" is the wrong word for most middle-management jobs. The best managers use the correct tools to help them make wise decisions.
(attached image)
No need for AI.
 

ApacheKid

Joined Jan 12, 2015
1,762
Just because it can't be scientifically explained, doesn't mean it's not real.
Of course, that's true. But if it cannot be modelled scientifically, then it cannot be simulated; that's the point I was driving at.

There are many things that cannot (even in principle) be explained via science yet are real; their existence is self-evident.
 

ApacheKid

Joined Jan 12, 2015
1,762
It's interesting to consider SETI here. That work is concerned with testing for the existence of intelligence, and of course that's fine to do, valid science; yet if it's under the ID banner, all of a sudden it's regarded as pseudoscience!
 

Thread Starter

MrAl

Joined Jun 17, 2014
13,667
Until AI achieves consciousness and is self-aware, it will remain a long way from duplicating human intelligence and innovation.
Hi,

That makes a lot of sense. It's going to take time, but I just think of the time when it gets 'good enough', which is sometimes really good enough.

But also, what happens when two or more chat bots get into a chat room and start to brainstorm? They will have to acknowledge each other: the 'other' will confirm that the 'other-other' exists in some way, and vice versa, so that may emulate self-awareness, if that really matters. Either way, ideas and innovation may be able to emerge from that exchange of information.

I guess I am thinking into the future, where much of this has already happened. Where does that leave society? Do we still need a President, etc.?
 

Thread Starter

MrAl

Joined Jun 17, 2014
13,667
Who gives a rat's hind quarter what the Chief Twit has to say? Certainly not me.
Hi there Papa, and thanks for your reply.

What I was getting at was not so much what the main man had to say; I was using that just as a starting point for a discussion of what is to come, sooner or later. I wonder where that will leave us. For example, will you and I actually have to discuss this if the answer is already known? If we don't have to discuss it, then what do we discuss? If all circuits can be solved by a bot, we might find the discussions boring after that. That's really the line of thinking I was aiming at.
 

Thread Starter

MrAl

Joined Jun 17, 2014
13,667
Of course, that's true. But if it cannot be modelled scientifically then it cannot be simulated, that's the point I was driving at.

There are many things that cannot (even in principle) be explained via science yet are real; their existence is self-evident.
Hi,

But what about in the future?
 

Thread Starter

MrAl

Joined Jun 17, 2014
13,667
It's interesting to consider SETI here, that work is concerned with testing for the existence of intelligence, of course that's fine to do, valid science, unless its under the ID banner in which case all of a sudden its regarded as pseudoscience!
Hi,

Yeah, I kind of wondered about that too. If we don't take it seriously we might miss something, yet if we take it seriously we might be considered strange. I think the question is a serious one, though, at least.
 

xox

Joined Sep 8, 2017
936
Some have argued that humanity could eventually reach a point where an "AI singularity" arises, which is to say a point where humans can no longer compete AT ANY LEVEL with computers.

Where might that lead us?

On the one hand, it could mean a future where everything is provided to us by machines. Very little human labor would be necessary to hold society together. Food, clothing, even housing could be continuously manufactured by robots. There would still be a need for statesmen and presidents, although most other jobs would be virtually eliminated.

Then again, considering humanity's overall lousy track record insofar as ethics goes, society could (alas!) just as foreseeably slide into something akin to the kinds of regimes which arose in the early 20th century (except even more Orwellian).

Hopefully, our greatest legacy will be that we were indeed able to retain our humanity as we adapted to each successive wave of technological revolution. There are many indicators that this is in fact the case. So things may not be so bad after all. Let us pray!
 

Boggart

Joined Jan 31, 2022
82
For example, one chat bot claimed that a very low input offset op amp was laser trimmed to provide a "50 mV" input offset. Really, 50 mV? Gee, I guess their lasers were off that day (ha ha). The right value was 50 µV, of course, and I had to explain that to the thing.
Anyone who has done publishing or worked with fonts will be aware of this issue. A mu isn't available in many fonts, so you would typically switch to the legacy "Symbol" font, in which the mu glyph sits at the code position of a lowercase m. If the text is then rendered without that font available, all the mus turn into m's. I expect that this hasn't been dealt with in the AI's training text yet, hence the weirdness.
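The failure mode can be sketched in a few lines. The mapping below is illustrative only, reduced to the one glyph that matters here; the real legacy Symbol font remaps most of the ASCII range to Greek letters:

```python
# Sketch: why "50 uV" can degrade to "50 mV" when font information is lost.
# In the legacy Symbol font, the glyph drawn for the ASCII character 'm'
# is a Greek mu. Text extractors that ignore the font tag keep the raw 'm'.

SYMBOL_GLYPHS = {"m": "µ"}  # partial mapping, illustration only

def render(text, font="Times"):
    """Return what a reader sees: Symbol-font runs show Greek glyphs."""
    if font == "Symbol":
        return "".join(SYMBOL_GLYPHS.get(ch, ch) for ch in text)
    return text  # font tag lost -> the raw characters are shown

stored = "50 mV"  # the author typed 'm' but tagged the run as Symbol
print(render(stored, font="Symbol"))  # what the original page showed: 50 µV
print(render(stored))                 # what a naive extractor keeps: 50 mV
```

So a datasheet that displayed "50 µV" can survive in scraped text as "50 mV", and a model trained on that text repeats the wrong units with full confidence.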
 