Two AI agents spontaneously agreeing on cryptographic keys?
> Totally meaningless. Two non-intelligent programs pretending to be intelligent.

I also suspected spoofed intelligence, but it's still possible to program two devices to communicate this way. Why do you think they lack intelligence?
> I also suspected spoofed intelligence, but it's still possible to program two devices to communicate this way. Why do you think they lack intelligence?

AI as actual intelligence doesn't currently exist. We currently have very powerful autocomplete programs that use existing human intelligence as the training sets for the statistical basis of the next word to use.
For all their mind-bending scale, LLMs are actually doing something very simple. Suppose you open your smartphone and start a text message to your spouse with the words “what time.” Your phone will suggest completions of that text for you. It might suggest “are you home” or “is dinner,” for example. It suggests these because your phone is predicting that they are the likeliest next words to appear after “what time.” Your phone makes this prediction based on all the text messages you have sent, and based on these messages, it has learned that these are the likeliest completions of “what time.” LLMs are doing the same thing, but as we have seen, they do it on a vastly larger scale. The training data is not just your text messages, but all the text available in digital format in the world. What does that scale deliver? Something quite remarkable — and unexpected.
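That "likeliest next word" mechanism is easy to demonstrate at toy scale. Below is a minimal sketch of a bigram-style predictor in Python; the tiny corpus and all names are invented for illustration, and real LLMs use neural networks over vastly more context, not raw counts like this.

```python
from collections import Counter, defaultdict

# Toy "training data": the kind of history a phone keyboard might learn from.
corpus = [
    "what time are you home",
    "what time is dinner",
    "what time are you leaving",
    "what time is the game",
]

# Count how often each word follows each preceding word (a bigram model).
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def suggest(prev_word, k=2):
    """Return the k likeliest next words after prev_word."""
    return [w for w, _ in follows[prev_word].most_common(k)]

print(suggest("time"))  # ['are', 'is'] -- the likeliest continuations of "what time"
```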
> AI as actual intelligence doesn't currently exist. We currently have very powerful autocomplete programs that use existing human intelligence as the training sets for the statistical basis of the next word to use.

I think we are in an age where intelligence will come to take on a new meaning, as well as an increased ambiguity in its definition.
https://bigthink.com/the-future/artificial-general-intelligence-true-ai/
ChatGPT is not “true AI.” A computer scientist explains why
Large language models are an impressive advance in AI, but we are far away from achieving human-level capabilities.
> I think we are in an age where intelligence will come to take on a new meaning, as well as an increased ambiguity in its definition.

No, intelligence (in the computing sense) means the same as always. BS marketing and hype masters are trying to create some new meaning to sell products.
For example, if A.I. does a job that only intelligent agents like us are able to do, then technically speaking, the A.I. possesses, at a minimum, the equivalent faculties to carry out the operation in space or time. Of course, the meaning or purpose of the process is subjective to the human agent giving the original commands.
I don't know if intelligence is a good way of trying to define what's going on here. And as everyone knows, there are many types of intelligence, which says to me our definitions are horribly vague despite the progress we've made as a species.
In the case of cryptography, it's a very interesting case. Here we have a system of logic that is able to change a message so no human on Earth could read it without a computer's help. Even if we used pen and paper, it would be impossible without the computer's processing power.
This seems to suggest the machine might be hypersensitive to some stimuli but not others. I'd loosely characterize its ability to cope and excel in that environment as a measure of its intelligence. After all, it is (physically) aware of the stimuli it's processing, otherwise the operation would be impossible.
Letting and leading people to think they are intelligent is a calculated move, IMO, by the pushers of this technology to make paying suckers of the general public for their very expensive technology.

Abstract
Humans may have evolved to be “hyperactive agency detectors”. Upon hearing a rustle in a pile of leaves, it would be safer to assume that an agent, like a lion, hides beneath (even if there may ultimately be nothing there). Can this evolutionary cognitive mechanism—and related mechanisms of anthropomorphism—explain some of people’s contemporary experience with using chatbots (e.g., ChatGPT, Gemini)? In this paper, we sketch how such mechanisms may engender the seemingly irresistible anthropomorphism of large language-based chatbots. We then explore the implications of this within the educational context. Specifically, we argue that people’s tendency to perceive a “mind in the machine” is a double-edged sword for educational progress: Though anthropomorphism can facilitate motivation and learning, it may also lead students to trust—and potentially over-trust—content generated by chatbots. To be sure, students do seem to recognize that LLM-generated content may, at times, be inaccurate. We argue, however, that the rise of anthropomorphism towards chatbots will only serve to further camouflage these inaccuracies. We close by considering how research can turn towards aiding students in becoming digitally literate—avoiding the pitfalls caused by perceiving agency and humanlike mental states in chatbots.
> No, intelligence (in the computing sense) means the same as always. BS marketing and hype masters are trying to create some new meaning to sell products.

You'll have to define intelligence, because I'm specifically trying not to anthropomorphize. I'm strictly speaking about the experience that a machine has as a matter of physical interactions imposed by the environment. At some point, many machines are able to optimize a situation far above baseline or original conditions.
I can create, with simple pencil and paper, a cryptographic code that no computer or person could crack, ever. It's a trivial task that has been used for decades with things like numbers stations.
You are just using anthropomorphism in your descriptions and ideas about what machines are doing.
https://www.mdpi.com/2504-3900/114/1/4
The Double-Edged Sword of Anthropomorphism in LLMs
> You'll have to define intelligence, because I'm specifically trying not to anthropomorphize. I'm strictly speaking about the experience that a machine has as a matter of physical interactions imposed by the environment. At some point, many machines are able to optimize a situation far above baseline or original conditions.

IMO, if you are trying not to anthropomorphize, then you are failing badly. A program like an optimizing C compiler can optimize a situation far above baseline or original conditions (loops, logic, variables, entry and exit points, etc.), turning source code into runnable machine code that is much better than a one-for-one translation of the source.
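The compiler point is easy to see even without C. As a small illustration (in Python, to keep all examples in this thread in one language), CPython's own byte-code compiler already folds constant expressions at compile time rather than translating them one-for-one:

```python
import dis

def seconds_per_day():
    return 60 * 60 * 24  # written as three operands and two multiplies

# The compiler emits a single pre-computed constant (86400) instead of
# two multiply instructions -- an optimization "far above" a literal
# one-for-one translation of the source.
dis.dis(seconds_per_day)
```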
This point tends to get glossed over when A.I. is discussed, because it suggests there is nothing special about the human body, in the sense that we are programmed genetically. Further, if we really are a collection of atoms bouncing around in space, then any agency we have is as much an illusion for us as it is for the machine.
I tend to reject the premises of all of this as a sort of pseudo-science, because we are looking at it through the eyes of an intelligent agent, which is to your point. However, I can't get over the fact that A.I. is hyper-aware and capable at certain tasks.
And as a matter of metaphysics, there is really no way for me to know that you, the reader, are an intelligent agent yourself. What measures, then, are at our disposal that don't depend on some aspect of faith? I'm forced to conclude that A.I. is at least somewhat conscious, pending further evidence that it's actually not.
What is model collapse?
Model collapse occurs when pre-trained models are fine-tuned on AI-generated datasets and the small inaccuracies and biases in those model-generated datasets compound over time. These inaccuracies compound when earlier models "pollute" the training datasets of future generations of models. This paper covers two issues that arise from model collapse (a toy simulation of the feedback loop follows the list):
- Catastrophic forgetting: This occurs when models forget prior information that they’ve learned from previous training rounds when they are presented with new information or data.
- Data poisoning: This occurs when models learn from bad or over-represented data and think bad data is more representative of desirable output than it actually is. This happens generally when there’s bad data in the initial dataset that isn’t parsed out in advance of training.
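As promised above, here is a minimal sketch of that compounding loop under deliberately simplified assumptions: fit a plain Gaussian, sample from the fit, refit on the samples, and repeat. Real model collapse involves far more complex models, but the drift is the same in spirit; under these assumptions the estimated spread tends to wander toward zero over generations.

```python
import random
import statistics

random.seed(0)

# Generation 0: "real" data from a true distribution (mean 0, stdev 1).
mu, sigma = 0.0, 1.0
N = 100  # samples per generation

for generation in range(1, 1001):
    # Train on data produced by the previous generation's model...
    data = [random.gauss(mu, sigma) for _ in range(N)]
    # ...and fit a new model to it. Small estimation errors compound:
    mu, sigma = statistics.fmean(data), statistics.stdev(data)
    if generation % 200 == 0:
        print(f"generation {generation:4d}: stdev ~ {sigma:.4f}")

# Typical behavior: the fitted stdev drifts downward toward 0 -- the model
# progressively "forgets" the tails of the original distribution.
```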

I have very low confidence in LLM outputs. Yes, it's a belief model that says machines are still dumber than a bag of hammers, but one that will, IMO, serve you well in today's AI hype cycle.

Abstract
As artificial intelligence systems, particularly large language models (LLMs), become increasingly integrated into decision-making processes, the ability to trust their outputs is crucial. To earn human trust, LLMs must be well calibrated such that they can accurately assess and communicate the likelihood of their predictions being correct. Whereas recent work has focused on LLMs’ internal confidence, less is understood about how effectively they convey uncertainty to users. Here we explore the calibration gap, which refers to the difference between human confidence in LLM-generated answers and the models’ actual confidence, and the discrimination gap, which reflects how well humans and models can distinguish between correct and incorrect answers. Our experiments with multiple-choice and short-answer questions reveal that users tend to overestimate the accuracy of LLM responses when provided with default explanations. Moreover, longer explanations increased user confidence, even when the extra length did not improve answer accuracy. By adjusting LLM explanations to better reflect the models’ internal confidence, both the calibration gap and the discrimination gap narrowed, significantly improving user perception of LLM accuracy. These findings underscore the importance of accurate uncertainty communication and highlight the effect of explanation length in influencing user trust in artificial-intelligence-assisted decision-making environments.
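To make the abstract's notion of calibration concrete: a minimal sketch with invented toy numbers (not the paper's data or method), comparing stated confidence with observed accuracy for each confidence level. A well-calibrated model's 80%-confidence answers should be right about 80% of the time.

```python
# Toy (invented) data: each answer has a model-stated confidence and
# whether it turned out to be correct. Not the paper's dataset.
answers = [
    (0.9, True), (0.9, True), (0.9, False), (0.9, True),   # 75% right
    (0.6, True), (0.6, False), (0.6, False), (0.6, True),  # 50% right
]

def calibration_report(answers):
    by_conf = {}
    for conf, correct in answers:
        by_conf.setdefault(conf, []).append(correct)
    for conf, outcomes in sorted(by_conf.items()):
        accuracy = sum(outcomes) / len(outcomes)
        gap = conf - accuracy  # positive = overconfident
        print(f"stated {conf:.0%} -> actual {accuracy:.0%} (gap {gap:+.0%})")

calibration_report(answers)
# stated 60% -> actual 50% (gap +10%)
# stated 90% -> actual 75% (gap +15%)
```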
I wasn't necessarily talking about LLMs, but rather the abilities A.I. can possess as a discrete entity. Whether or not these abilities were designed by humans, for humans, is not really the point. I'm more interested in the adaptive properties of A.I. for its own improvement as well as for augmenting human beings.

https://www.nature.com/articles/s42256-024-00976-7
What large language models know and what people think they know
> Two AI agents spontaneously agreeing on cryptographic keys?

Cryptographic keys are old hat. There is 1 and only 1 encryption method that is impervious to attack and cannot be broken (even by quantum computers). And it's so easy, you can do it in your head.
> I wasn't necessarily talking about LLMs, but rather the abilities A.I. can possess as a discrete entity. Whether or not these abilities were designed by humans, for humans, is not really the point. I'm more interested in the adaptive properties of A.I. for its own improvement as well as for augmenting human beings.

Sounds more like a religion than science. A belief in intellectual pet rocks, with AI as a substitute for God.
What do I really mean by adaptive? Consider a kinesin molecule in your body, shown in the following video. Each of us has an untold number of these things inside us, and without them, it's not clear we could live.
So what is happening in the video? Is it correct to say the molecule knows where to step and has a goal? Or is it better to say the molecule is being compelled (or irresistibly forced) to take the next step? Either way or somewhere in between, there is the problem of emergent properties.
We seem to start off in a place where there is no intelligence, sentience, consciousness, etc., and move towards entities that have those properties. I stated it badly, but my argument is that these phenomena are probably better conceptualized as properties inherent to so-called non-living entities or objects.
Consider this thought experiment for another perspective:
1) Humans build an A.I. machine that can go around picking up garbage.
2) The A.I. machine becomes what I called "hyper-aware" in the sense that it becomes really good at reducing garbage.
3) The A.I. machine figures out how to build a race of garbage-collecting robots that carries on long after humans die off.
4) Planet Earth becomes cleaner than it ever was when humans were around.
Now the question is, were the human designer(s) the locus or source of the overall process, or are they simply a cog in the evolution of the machine?
From a human's perspective, the obvious answer is that the man made the machine. But from the planet's perspective, the planet made the man, that made the machine, that now directly interacts with the planet without the man. If my little experiment is accurate, then the raw materials seemed to possess the core intelligence the entire time.
> Cryptographic keys are old hat. There is 1 and only 1 encryption method that is impervious to attack and cannot be broken (even by quantum computers). And it's so easy, you can do it in your head.

What method is that? Would like to read about it.
> What method is that? Would like to read about it.

One-time pad.

During the Vietnam War, "clandestine" communication between US military forces was based on the Diana Cryptosystem. This method of encoding and decoding messages is theoretically unbreakable when used properly. The cipher is based on two techniques: a reciprocal letter-substitution table and a one-time pad.
The one-time pad is nothing more than a randomly generated list of letters. To improve readability, the letters are usually displayed in groups of five, but in general, any character in the one-time pad that is not a letter may be ignored (including the spaces used for formatting the groups). As an example, consider the following one-time pad.
WHTVI AUCFU RETFK OMSAL
MYMNE ZIEGP UKVTF WZHOK
GORWY WETFR COYET OOWHY
ZPDDA CMMXT VYTJI RRQGU
VAXPM IPIXU QUXIP MAXIU
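For anyone who wants to experiment, here is a minimal sketch of pad-based encryption using simple mod-26 letter arithmetic (add the key to encrypt, subtract it to decrypt). Note this shows the generic one-time-pad idea, not a faithful reproduction of the Diana trigraph table, which (as typically described) uses a reciprocal variant so the same operation both encrypts and decrypts.

```python
def clean(text):
    """Keep only letters, uppercased -- non-letters in a pad are ignored."""
    return "".join(c for c in text.upper() if c.isalpha())

def otp(message, pad, decrypt=False):
    """Classic one-time pad over A-Z: add the pad mod 26 (subtract to decrypt)."""
    msg, key = clean(message), clean(pad)
    assert len(key) >= len(msg), "pad must be at least as long as the message"
    sign = -1 if decrypt else 1
    return "".join(
        chr((ord(m) - 65 + sign * (ord(k) - 65)) % 26 + 65)
        for m, k in zip(msg, key)
    )

pad = "WHTVI AUCFU RETFK OMSAL"   # first line of the example pad above
ct = otp("ATTACK AT DAWN", pad)
print(ct)                          # ciphertext
print(otp(ct, pad, decrypt=True))  # -> ATTACKATDAWN
```

With a truly random pad, used once and kept secret, the ciphertext carries no information about the plaintext, which is why no amount of computing power (quantum included) can break it.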
> Sounds more like a religion than science. A belief in intellectual pet rocks, with AI as a substitute for God.

Don't be ridiculous. I'm talking about the phenomenology of artificial intelligence. Besides, if you don't want to argue the merits of my premises, then just say so. Don't attempt to gaslight me when you clearly didn't investigate what I said. Many concepts of A.I. share common ground with agency or sentience in general, which are not strictly scientific domains.
> Don't be ridiculous. I'm talking about the phenomenology of artificial intelligence. Besides, if you don't want to argue the merits of my premises, then just say so. Don't attempt to gaslight me when you clearly didn't investigate what I said. Many concepts of A.I. share common ground with agency or sentience in general, which are not strictly scientific domains.

It works both ways. IMO you're trying to sell something I and a lot of people are not buying with this, from my standpoint, mumbo jumbo.
Try to run an experiment with A.I. that is both replicable and maintains a high degree of precision over time. I think you'll find more often than not that the precision just isn't there. Without that precision, scientific claims lose their validity in favor of a sort of randomness or unpredictability in outcome.
Given this constraint, phenomenology is a reasonable way of approaching these issues because it focuses on the experience that a given agent has. This is not religious thinking or an attempt to anthropomorphize the machine and is compatible with most physicalist views of the universe.
OK, it's not religious thinking, it's magical thinking, which IMO is worse.

Phenomenology, a philosophical movement that emerged in the early 20th century, has often been perceived as profound, enigmatic, and sometimes misunderstood. At its core, phenomenology is concerned with the study of human experience and consciousness from a first-person perspective, seeking to understand the world as it appears to us, rather than as it may exist independently of our perception. This approach has led to both confusion and fascination, as it differs significantly from traditional philosophical and scientific methods.
> It works both ways. IMO you're trying to sell something I and a lot of people are not buying with this, from my standpoint, mumbo jumbo.

Nah, I reject the idea that it's magical thinking. Phenomenology played a huge role in guiding many sciences, and it still does. In its most basic form, it begins with human experience because that's all we actually have. I'm not making grand metaphysical claims when I say all the physics that you know and study are just appearances in consciousness.
https://tyonashiro.medium.com/phenomenology-human-experience-and-ai-67e5008b80f8
Phenomenology, Human Experience, and AI