Perhaps NSAspook could explain what is going on here.

Art Vandelay

Joined Nov 1, 2024
140
In a nutshell, all they are doing is exchanging public keys as part of an asymmetric cryptographic function.

In other words, A sends B a message using some function X, but only B can decrypt the message because only B has access to the private part of the key. It's the same as using HTTPS to log onto a banking website: the public part of the key is broadcast to everyone.
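That asymmetry can be sketched with textbook RSA and deliberately tiny primes. These numbers are illustrative only (the classic p=61, q=53 example); real deployments use 2048-bit-plus keys with padding such as OAEP, and nobody should roll their own crypto.

```python
# Toy RSA, purely to illustrate the public/private key split.
p, q = 61, 53                 # two secret primes
n = p * q                     # modulus, part of the public key (3233)
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # public exponent, coprime with phi
d = pow(e, -1, phi)           # private exponent: modular inverse (Python 3.8+)

def encrypt(m, pub):
    """Anyone holding the public key (e, n) can do this."""
    e, n = pub
    return pow(m, e, n)

def decrypt(c, priv):
    """Only the holder of the private exponent d can undo it."""
    d, n = priv
    return pow(c, d, n)

m = 42                        # a message encoded as a number < n
c = encrypt(m, (e, n))
assert decrypt(c, (d, n)) == m
```

The point of the sketch: knowing (e, n) lets you encrypt, but recovering d from (e, n) requires factoring n, which is what makes broadcasting the public key safe.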
 

nsaspook

Joined Aug 27, 2009
16,250
I also suspected spoofed intelligence... but it's still possible to program two devices to communicate this way. Why do you think they lack intelligence?
AI as actual intelligence doesn't currently exist. We currently have very powerful autocomplete programs that use existing human intelligence as the training sets for the statistical basis of the next word to use.

https://bigthink.com/the-future/artificial-general-intelligence-true-ai/
ChatGPT is not “true AI.” A computer scientist explains why
Large language models are an impressive advance in AI, but we are far away from achieving human-level capabilities.
For all their mind-bending scale, LLMs are actually doing something very simple. Suppose you open your smartphone and start a text message to your spouse with the words “what time.” Your phone will suggest completions of that text for you. It might suggest “are you home” or “is dinner,” for example. It suggests these because your phone is predicting that they are the likeliest next words to appear after “what time.” Your phone makes this prediction based on all the text messages you have sent, and based on these messages, it has learned that these are the likeliest completions of “what time.” LLMs are doing the same thing, but as we have seen, they do it on a vastly larger scale. The training data is not just your text messages, but all the text available in digital format in the world. What does that scale deliver? Something quite remarkable — and unexpected.
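The autocomplete analogy in the excerpt can be sketched in a few lines with a bigram model; the corpus string here is made up for illustration.

```python
from collections import Counter, defaultdict

# A minimal bigram "autocomplete": count what followed each word in the
# training text, then predict the likeliest next word. LLMs do the same
# kind of next-token prediction, just at vastly larger scale and context.
corpus = ("what time are you home . what time is dinner . "
          "what time are you leaving . what time is the game .")

follows = defaultdict(Counter)
words = corpus.split()
for w, nxt in zip(words, words[1:]):
    follows[w][nxt] += 1

def suggest(word, k=2):
    """Return the k likeliest continuations seen after `word`."""
    return [w for w, _ in follows[word].most_common(k)]

print(suggest("time"))    # likeliest continuations of "time"
```

Nothing here understands the messages; the suggestions fall out of frequency counts alone, which is the article's point about scale being the only real difference.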
 

Art Vandelay

Joined Nov 1, 2024
140
AI as actual intelligence doesn't currently exist. We currently have very powerful autocomplete programs that use existing human intelligence as the training sets for the statistical basis of the next word to use.

https://bigthink.com/the-future/artificial-general-intelligence-true-ai/
ChatGPT is not “true AI.” A computer scientist explains why
Large language models are an impressive advance in AI, but we are far away from achieving human-level capabilities.
I think we are in an age where intelligence will come to take on a new meaning as well as an increased ambiguity in its definition.

For example, if A.I. does a job that only intelligent agents like us are able to do, then technically speaking, the A.I. possesses at a minimum the equivalent faculties to carry out the operation in space or time. Of course, the meaning or purpose of the process is subjective to the human agent giving the original commands.

I don't know if intelligence is a good way of trying to define what's going on here. And as everyone knows, there are many types of intelligence, which says to me our definitions are horribly vague despite the progress we've made as a species.

In the case of cryptography, it's a very interesting case. Here we have a system of logic that is able to change a message so no human on Earth could read it without a computer's help. Even if we used pen and paper, it would be impossible without the computer's processing power.

This seems to suggest the machine might be hypersensitive to some stimuli but not others. I'd loosely characterize its ability to cope and excel in that environment as a measure of its intelligence. After all, it is (physically) aware of the stimuli it's processing; otherwise the operation would be impossible.
 

nsaspook

Joined Aug 27, 2009
16,250
I think we are in an age where intelligence will come to take on a new meaning as well as an increased ambiguity in its definition.

For example, if A.I. does a job that only intelligent agents like us are able to do, then technically speaking, the A.I. possesses at a minimum the equivalent faculties to carry out the operation in space or time. Of course, the meaning or purpose of the process is subjective to the human agent giving the original commands.

I don't know if intelligence is a good way of trying to define what's going on here. And as everyone knows, there are many types of intelligence, which says to me our definitions are horribly vague despite the progress we've made as a species.

In the case of cryptography, it's a very interesting case. Here we have a system of logic that is able to change a message so no human on Earth could read it without a computer's help. Even if we used pen and paper, it would be impossible without the computer's processing power.

This seems to suggest the machine might be hypersensitive to some stimuli but not others. I'd loosely characterize its ability to cope and excel in that environment as a measure of its intelligence. After all, it is (physically) aware of the stimuli it's processing; otherwise the operation would be impossible.
No, intelligence (in the computing sense) means the same as always. BS marketing and hype masters are trying to create some new meaning to sell products.

I can create, with simple pencil and paper, a cryptographic code that no computer or person could crack, ever. It's a trivial task that's been used for decades with things like numbers stations.

You are just using anthropomorphism in your descriptions and ideas about what machines are doing.
https://www.mdpi.com/2504-3900/114/1/4
The Double-Edged Sword of Anthropomorphism in LLMs
Abstract
Humans may have evolved to be “hyperactive agency detectors”. Upon hearing a rustle in a pile of leaves, it would be safer to assume that an agent, like a lion, hides beneath (even if there may ultimately be nothing there). Can this evolutionary cognitive mechanism—and related mechanisms of anthropomorphism—explain some of people’s contemporary experience with using chatbots (e.g., ChatGPT, Gemini)? In this paper, we sketch how such mechanisms may engender the seemingly irresistible anthropomorphism of large language-based chatbots. We then explore the implications of this within the educational context. Specifically, we argue that people’s tendency to perceive a “mind in the machine” is a double-edged sword for educational progress: Though anthropomorphism can facilitate motivation and learning, it may also lead students to trust—and potentially over-trust—content generated by chatbots. To be sure, students do seem to recognize that LLM-generated content may, at times, be inaccurate. We argue, however, that the rise of anthropomorphism towards chatbots will only serve to further camouflage these inaccuracies. We close by considering how research can turn towards aiding students in becoming digitally literate—avoiding the pitfalls caused by perceiving agency and humanlike mental states in chatbots.
Letting and leading people to think they are intelligent is a calculated move, IMO, by the pushers of this technology to make paying suckers of the general public for their very expensive technology.
 

Art Vandelay

Joined Nov 1, 2024
140
No, intelligence (in the computing sense) means the same as always. BS marketing and hype masters are trying to create some new meaning to sell products.

I can create, with simple pencil and paper, a cryptographic code that no computer or person could crack, ever. It's a trivial task that's been used for decades with things like numbers stations.

You are just using anthropomorphism in your descriptions and ideas about what machines are doing.
https://www.mdpi.com/2504-3900/114/1/4
The Double-Edged Sword of Anthropomorphism in LLMs


Letting and leading people to think they are intelligent is a calculated move, IMO, by the pushers of this technology to make paying suckers of the general public for their very expensive technology.
You'll have to define intelligence because I'm specifically trying not to anthropomorphize. I'm strictly speaking about the experience that a machine has as a matter of physical interactions imposed by the environment. At some point, many machines are able to optimize a situation far above baseline or original conditions.

This point tends to get glossed over when A.I. is discussed because it suggests there is nothing special about the human body in the sense that we are programmed genetically. Further, if we really are a collection of atoms bouncing around in space, then any agency we have is as much an illusion for us as it is for the machine.

I tend to reject the premises of all of this as a sort of pseudo-science because we are looking at it through the eyes of an intelligent agent, which is to your point. However, I can't get over the fact that A.I. is hyper-aware and capable at certain tasks.

And as a matter of metaphysics, there is really no way for me to know that you, the reader, are an intelligent agent yourself. What measures, then, are at our disposal that don't depend on some aspect of faith? I'm forced to conclude that A.I. is at least somewhat conscious pending further evidence that it's actually not.
 

nsaspook

Joined Aug 27, 2009
16,250
You'll have to define intelligence because I'm specifically trying not to anthropomorphize. I'm strictly speaking about the experience that a machine has as a matter of physical interactions imposed by the environment. At some point, many machines are able to optimize a situation far above baseline or original conditions.

This point tends to get glossed over when A.I. is discussed because it suggests there is nothing special about the human body in the sense that we are programmed genetically. Further, if we really are a collection of atoms bouncing around in space, then any agency we have is as much an illusion for us as it is for the machine.

I tend to reject the premises of all of this as a sort of pseudo-science because we are looking at it through the eyes of an intelligent agent, which is to your point. However, I can't get over the fact that A.I. is hyper-aware and capable at certain tasks.

And as a matter of metaphysics, there is really no way for me to know that you, the reader, are an intelligent agent yourself. What measures, then, are at our disposal that don't depend on some aspect of faith? I'm forced to conclude that A.I. is at least somewhat conscious pending further evidence that it's actually not.
IMO if you are trying not to anthropomorphize then you are failing badly. A program like an optimizing C compiler can optimize a situation far above the baseline or original conditions (loops, logic, variables, entry and exit points, etc.) of just the source code, producing runnable machine code that's much better than a one-for-one translation of the source.
We don't call that intelligence; we call it what it is, optimizing, for good reason: it's not intelligent.
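As a minimal illustration of that point, here is a sketch of constant folding, one of the simplest compiler optimizations, over Python's own AST. The rewrite is pure rule application; no understanding of the program is involved.

```python
import ast
import operator

# Map AST operator node types to plain arithmetic functions.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

class FoldConstants(ast.NodeTransformer):
    """Replace constant arithmetic like '2 * 3 + 4' with its value."""
    def visit_BinOp(self, node):
        self.generic_visit(node)              # fold children first
        op = OPS.get(type(node.op))
        if (op and isinstance(node.left, ast.Constant)
                and isinstance(node.right, ast.Constant)):
            return ast.copy_location(
                ast.Constant(op(node.left.value, node.right.value)), node)
        return node

tree = FoldConstants().visit(ast.parse("x = 2 * 3 + 4"))
print(ast.unparse(tree))   # x = 10
```

The transform mechanically produces better code than a one-for-one translation, yet nobody would claim the pass "understands" arithmetic (requires Python 3.9+ for `ast.unparse`).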

These programs are not hyper-aware (anthropomorphism) of anything, as they have no understanding of the subject matter. Any perceived awareness is embedded in the vast amount of human-generated intelligence used in the training of these machines. Without this human-created and human-designed data, these things are just equivalent to lumps of rock being heated by electrical energy. When we try to train these machines on synthetic data derived from their outputs (distilled from the human impurities of the learning sets) instead of directly derived human intelligence, the LLM results are horrible.

https://alexandrabarr.beehiiv.com/p/synthetic-data
Synthetic vs Real Data: Why do models perform worse when trained on synthetic data?
Models suffer from catastrophic forgetting and data poisoning when trained on synthetic data, new research shows.
What is model collapse?
Model collapse occurs when pre-trained models are fine-tuned on AI generated datasets and when the small inaccuracies and biases in the model generated datasets compound over time. These inaccuracies compound when earlier models “pollute” the training datasets of future generations of models. This paper covers two issues that arise from model collapse:
  1. Catastrophic forgetting: This occurs when models forget prior information that they’ve learned from previous training rounds when they are presented with new information or data.
  2. Data poisoning: This occurs when models learn from bad or over-represented data and think bad data is more representative of desirable output than it actually is. This happens generally when there’s bad data in the initial dataset that isn’t parsed out in advance of training.
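The compounding described above can be shown with a toy simulation: fit a Gaussian to some data, then replace the data with samples drawn from the fitted model, generation after generation. The Gaussian is a stand-in for a real model, and the drift direction varies run to run; the point is only that each generation inherits the previous generation's estimation error.

```python
import random
import statistics

# Toy "model collapse": each generation trains only on the previous
# generation's output, so small fitting errors compound over time.
random.seed(42)

data = [random.gauss(0.0, 1.0) for _ in range(200)]   # the "human" data
history = []
for generation in range(30):
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    history.append(sigma)
    # Next generation sees only samples from the fitted model, not the
    # original data, so the tails are gradually lost.
    data = [random.gauss(mu, sigma) for _ in range(200)]

print(f"stdev drift over 30 generations: {history[0]:.3f} -> {history[-1]:.3f}")
```

With real models the same feedback loop shows up as the catastrophic forgetting and data poisoning the article describes.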

https://www.nature.com/articles/d41586-024-02420-7
AI models fed AI-generated data quickly spew nonsense
Researchers gave successive versions of a large language model information produced by previous generations of the AI — and observed rapid collapse.

The increasingly distorted images produced by an artificial-intelligence model that is trained on data generated by a previous version of the model. Credit: M. Boháček & H. Farid/arXiv (CC BY 4.0)
 

nsaspook

Joined Aug 27, 2009
16,250
https://www.nature.com/articles/s42256-024-00976-7
What large language models know and what people think they know
Abstract
As artificial intelligence systems, particularly large language models (LLMs), become increasingly integrated into decision-making processes, the ability to trust their outputs is crucial. To earn human trust, LLMs must be well calibrated such that they can accurately assess and communicate the likelihood of their predictions being correct. Whereas recent work has focused on LLMs’ internal confidence, less is understood about how effectively they convey uncertainty to users. Here we explore the calibration gap, which refers to the difference between human confidence in LLM-generated answers and the models’ actual confidence, and the discrimination gap, which reflects how well humans and models can distinguish between correct and incorrect answers. Our experiments with multiple-choice and short-answer questions reveal that users tend to overestimate the accuracy of LLM responses when provided with default explanations. Moreover, longer explanations increased user confidence, even when the extra length did not improve answer accuracy. By adjusting LLM explanations to better reflect the models’ internal confidence, both the calibration gap and the discrimination gap narrowed, significantly improving user perception of LLM accuracy. These findings underscore the importance of accurate uncertainty communication and highlight the effect of explanation length in influencing user trust in artificial-intelligence-assisted decision-making environments.
I have very low confidence in LLM outputs. Yes, it's a belief model that says machines are still dumber than a bag of hammers but one that will, IMO, serve you well in today's AI hype cycle.
 

Art Vandelay

Joined Nov 1, 2024
140
https://www.nature.com/articles/s42256-024-00976-7
What large language models know and what people think they know


I have very low confidence in LLM outputs. Yes, it's a belief model that says machines are still dumber than a bag of hammers but one that will, IMO, serve you well in today's AI hype cycle.
I wasn't necessarily talking about LLMs but rather the abilities A.I. can possess as a discrete entity. Whether or not these abilities were designed by humans, for humans, is not really the point. I'm more interested in the adaptive properties of A.I. for its own improvement as well as for augmenting human beings.

What do I really mean by adaptive? Consider a kinesin molecule in your body in the following video. Each of us has an untold number of these things inside us and without them, it's not clear we could live.

So what is happening in the video? Is it correct to say the molecule knows where to step and has a goal? Or is it better to say the molecule is being compelled (or irresistibly forced) to take the next step? Either way or somewhere in between, there is the problem of emergent properties.

We seem to start off in a place where there is no intelligence, sentience, consciousness, etc. and move towards entities that have them. I stated it badly, but my argument is these phenomena are probably better conceptualized as properties inherent to so-called non-living entities or objects.

Consider this thought experiment for another perspective:

1) Humans build A.I. machine that can go around picking up garbage.
2) A.I. machine becomes what I called "hyper-aware" in the sense it becomes really good at reducing garbage.
3) A.I. machine figures out how to build a race of garbage-collecting robots long after humans die off.
4) Planet Earth becomes cleaner than it ever was when humans were around.

Now the question is, were the human designer(s) the locus or source of the overall process, or were they simply a cog in the evolution of the machine?

From a human's perspective the obvious answer is the man made the machine. But from the planet's perspective, the planet made the man, that made the machine, that now directly interacts with the planet without the man. If my little experiment is accurate, then the raw materials seemed to possess the core intelligence the entire time.

 

drjohsmith

Joined Dec 13, 2021
1,549
It's basically the same question still arising.
How do we determine if something is intelligent?
Let alone a computer, we're still arguing about which animals are / aren't intelligent!

Define the test, and we can check the answer.

Given any answers, we can always define tests that are not passed...
 

nsaspook

Joined Aug 27, 2009
16,250
I wasn't necessarily talking about LLMs but rather the abilities A.I. can possess as a discrete entity. Whether or not these abilities were designed by humans, for humans, is not really the point. I'm more interested in the adaptive properties of A.I. for its own improvement as well as for augmenting human beings.

What do I really mean by adaptive? Consider a kinesin molecule in your body in the following video. Each of us has an untold number of these things inside us and without them, it's not clear we could live.

So what is happening in the video? Is it correct to say the molecule knows where to step and has a goal? Or is it better to say the molecule is being compelled (or irresistibly forced) to take the next step? Either way or somewhere in between, there is the problem of emergent properties.

We seem to start off in a place where there is no intelligence, sentience, consciousness, etc. and move towards entities that have them. I stated it badly, but my argument is these phenomena are probably better conceptualized as properties inherent to so-called non-living entities or objects.

Consider this thought experiment for another perspective:

1) Humans build A.I. machine that can go around picking up garbage.
2) A.I. machine becomes what I called "hyper-aware" in the sense it becomes really good at reducing garbage.
3) A.I. machine figures out how to build a race of garbage-collecting robots long after humans die off.
4) Planet Earth becomes cleaner than it ever was when humans were around.

Now the question is, were the human designer(s) the locus or source of the overall process, or were they simply a cog in the evolution of the machine?

From a human's perspective the obvious answer is the man made the machine. But from the planet's perspective, the planet made the man, that made the machine, that now directly interacts with the planet without the man. If my little experiment is accurate, then the raw materials seemed to possess the core intelligence the entire time.

Sounds more like a religion than science. A belief in intellectual pet rocks with AI as a substitute for God.
 

nsaspook

Joined Aug 27, 2009
16,250
What method is that? Would like to read about it.
One time pad
https://en.wikipedia.org/wiki/One-time_pad
A format of one-time pad used by the U.S. National Security Agency, code named DIANA. The table on the right is an aid for converting between plaintext and ciphertext using the characters at left as the key.

Key management is the main issue, but for a one-on-one quick transfer of predetermined coded menus of Q&A it's manageable.

Systems today are solving key distribution and management issues. The fancy methods are for secure crypto key exchanges; once the actual keys are exchanged, classical methods (like AES) are used with those keys on the data stream. These methods have been tested for decades and can easily be made stronger as needed as computers gain processing power.
https://en.wikipedia.org/wiki/Advanced_Encryption_Standard

AES is available in many different encryption packages, and is the first (and only) publicly accessible cipher approved by the U.S. National Security Agency (NSA) for top secret information when used in an NSA approved cryptographic module.[note 4]
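The hybrid pattern described above, an asymmetric exchange to agree on keys followed by a symmetric cipher on the data stream, can be sketched with stdlib Python. Everything here is a toy: p = 23 and g = 5 are the classic textbook Diffie-Hellman numbers, and a SHA-256 XOR keystream stands in for AES purely to keep the sketch self-contained. Real systems use 2048-bit groups or elliptic curves, and AES-GCM for the stream.

```python
import hashlib
import secrets

# Step 1: Diffie-Hellman key agreement over a public channel.
p, g = 23, 5                         # toy textbook group, NOT secure

a = secrets.randbelow(p - 2) + 1     # Alice's private value, kept secret
b = secrets.randbelow(p - 2) + 1     # Bob's private value, kept secret
A = pow(g, a, p)                     # public values, exchanged in the clear
B = pow(g, b, p)

shared = pow(B, a, p)                # Alice's computation...
assert shared == pow(A, b, p)        # ...matches Bob's

# Step 2: derive a symmetric key from the shared secret, then use a
# classical symmetric cipher on the data stream (AES in practice).
key = hashlib.sha256(str(shared).encode()).digest()

def keystream_xor(key, data):
    """Toy stream cipher: XOR against a SHA-256 counter keystream."""
    out = bytearray()
    for i in range(0, len(data), 32):
        ks = hashlib.sha256(key + (i // 32).to_bytes(8, "big")).digest()
        out.extend(x ^ k for x, k in zip(data[i:i + 32], ks))
    return bytes(out)

ciphertext = keystream_xor(key, b"attack at dawn")
assert keystream_xor(key, ciphertext) == b"attack at dawn"   # XOR undoes itself
```

Only the public values A and B cross the wire; an eavesdropper who sees them still cannot compute the shared secret without solving a discrete logarithm.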
 

nsaspook

Joined Aug 27, 2009
16,250
https://dodona.be/en/activities/2088793301/
During the Vietnam War, "clandestine" communication between US military forces was based on the Diana Cryptosystem. This method of encoding and decoding messages is theoretically unbreakable when used properly. The cryptographic cipher is based on two techniques.
In addition, the Diana Cryptosystem makes use of a so-called one-time pad. This is nothing more than a randomly generated list of letters. To improve readability the letters are usually displayed in groups of five letters, but in general each character in the one-time pad that is not a letter may be ignored (also including the spaces used for formatting the groups). As an example we consider the following one-time pad.
WHTVI AUCFU RETFK OMSAL
MYMNE ZIEGP UKVTF WZHOK
GORWY WETFR COYET OOWHY
ZPDDA CMMXT VYTJI RRQGU
VAXPM IPIXU QUXIP MAXIU
Really no need for the wheel if you used this coding system regularly.

In code typing school this was the object of testing: to be able to send and receive these types of OTP messages (offline with pads or online with rotor-type machines) in any and all conditions. They wanted, and demanded, perfection, because incorrect characters might change the decoded meaning of something like bombing strike coordinates or landing zones.
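Assuming the commonly described DIANA reciprocal rule, c = (25 - p - k) mod 26, the pad arithmetic can be sketched in a few lines. The self-inverse property is what made the same table serve for both enciphering and deciphering.

```python
# Sketch of DIANA-style one-time-pad arithmetic. With the reciprocal rule
# c = (25 - p - k) mod 26, applying the same pad a second time recovers
# the plaintext: 25 - (25 - p - k) - k = p (mod 26).
def diana(text, pad):
    A = ord("A")
    return "".join(
        chr((25 - (ord(t) - A) - (ord(k) - A)) % 26 + A)
        for t, k in zip(text, pad)
    )

pad = "WHTVIAUCFURETFKOMSAL"      # first line of the example pad above
msg = "ATTACKATDAWN"
ct = diana(msg, pad)
assert diana(ct, pad) == msg       # the same operation undoes itself
```

The unbreakability comes entirely from the pad being truly random and never reused; the arithmetic itself is trivial enough for pencil and paper, which is the whole point.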
https://forum.allaboutcircuits.com/threads/captain-obvious-headlines.186851/post-1945520
 

Art Vandelay

Joined Nov 1, 2024
140
Sounds more like a religion than science. A belief in intellectual pet rocks with AI as a substitute for God.
Don't be ridiculous. I'm talking about the phenomenology of artificial intelligence. Besides, if you don't want to argue the merits of my premises, then just say so. Don't attempt to gaslight me when you clearly didn't investigate what I said. Many concepts of A.I. share common ground with agency or sentience in general, which are not strictly scientific domains.

Try to run an experiment with A.I. that is both replicable and maintains a high degree of precision over time. I think you'll find more often than not that the precision just isn't there. Without the precision, scientific claims lose their validity in favor of a sort of randomness or unpredictability in outcome.

Given this constraint, phenomenology is a reasonable way of approaching these issues because it focuses on the experience that a given agent has. This is not religious thinking or an attempt to anthropomorphize the machine and is compatible with most physicalist views of the universe.
 

nsaspook

Joined Aug 27, 2009
16,250
Don't be ridiculous. I'm talking about the phenomenology of artificial intelligence. Besides, if you don't want to argue the merits of my premises, then just say so. Don't attempt to gaslight me when you clearly didn't investigate what I said. Many concepts of A.I. share common ground with agency or sentience in general, which are not strictly scientific domains.

Try to run an experiment with A.I. that is both replicable and maintains a high degree of precision over time. I think you'll find more often than not that the precision just isn't there. Without the precision, scientific claims lose their validity in favor of a sort of randomness or unpredictability in outcome.

Given this constraint, phenomenology is a reasonable way of approaching these issues because it focuses on the experience that a given agent has. This is not religious thinking or an attempt to anthropomorphize the machine and is compatible with most physicalist views of the universe.
It works both ways; IMO you're trying to sell something I and a lot of people are not buying with what is, from my standpoint, mumbo jumbo.
https://tyonashiro.medium.com/phenomenology-human-experience-and-ai-67e5008b80f8
Phenomenology, Human Experience, and AI
Phenomenology, a philosophical movement that emerged in the early 20th century, has often been perceived as profound, enigmatic, and sometimes misunderstood. At its core, phenomenology is concerned with the study of human experience and consciousness from a first-person perspective, seeking to understand the world as it appears to us, rather than as it may exist independently of our perception. This approach has led to both confusion and fascination, as it differs significantly from traditional philosophical and scientific methods.
OK, it's not religious thinking, it's magical thinking, which IMO is worse.
 

Art Vandelay

Joined Nov 1, 2024
140
It works both ways; IMO you're trying to sell something I and a lot of people are not buying with what is, from my standpoint, mumbo jumbo.
https://tyonashiro.medium.com/phenomenology-human-experience-and-ai-67e5008b80f8
Phenomenology, Human Experience, and AI


OK, it's not religious thinking, it's magical thinking, which IMO is worse.
Na, I reject the idea that it's magical thinking. Phenomenology played a huge role in guiding many sciences, and it still does. In its most basic form, it begins with human experience because that's all we actually have. I'm not making grand metaphysical claims when I say all the physics that you know and study are just appearances in consciousness.

Moreover, there is no scientific test I could perform to verify that you are in fact aware, since I'm only aware of my unique experience (solipsism). Just think about it for ten minutes and you'll realize that scientific terminology and method are insufficient to account for the totality of experience. Phenomenology provides a reasonable starting place to make sense of this discontinuity, which makes it especially pertinent to artificial intelligence. If this is mumbo jumbo to you then, well, I think your mind isn't as open as it could be.
 