Using AI to communicate logic

Thread Starter

Futurist

Joined Apr 8, 2025
721
I recently started using ChatGPT to create representative flowcharts from existing source code. These flowcharts are then distributed to business users ahead of meetings about development progress.

The audience is expected to understand common abstractions, but no knowledge of the programming language is assumed. This has been a huge help, because we no longer get detailed functional specs (those have fallen out of fashion, and we rarely see systems analysts anymore either). We only get "requirements", and those aren't always well written.

As the code matures and we start unit testing it, it gets more solid but harder to see the business logic among all the implementation details like declarations, even for the developer. Extracting that logic as a flowchart is a HUGE help. I recommend others try this if they want to share what code does with non-programmers.

During meetings I'll hear questions like "Oh, so we still enroll the student even if we decide to switch teachers?" These are highly useful questions, helping us ensure the fundamentals are right.
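To make the idea concrete, here is a minimal sketch. The enrollment routine, field names, and Mermaid diagram below are all invented for illustration (they are not from the actual project); the point is the pairing of a small piece of business logic with the kind of language-free flowchart you might ask the model to produce from it.

```python
# Hypothetical enrollment routine -- invented for illustration only.
def enroll_student(student, course, teacher_available=True):
    """Enroll a student; if the assigned teacher is unavailable,
    switch to the backup teacher but still complete the enrollment."""
    teacher = course["teacher"] if teacher_available else course["backup_teacher"]
    return {"student": student,
            "course": course["name"],
            "teacher": teacher,
            "enrolled": True}

# The sort of flowchart a non-programmer could read once rendered
# (Mermaid syntax, here just held as a string):
FLOWCHART = """
flowchart TD
    A[Enrollment request] --> B{Assigned teacher available?}
    B -- yes --> C[Keep assigned teacher]
    B -- no  --> D[Switch to backup teacher]
    C --> E[Enroll student]
    D --> E
"""

course = {"name": "Algebra I", "teacher": "Ms. Lee", "backup_teacher": "Mr. Cho"}
result = enroll_student("Dana", course, teacher_available=False)
print(result["teacher"], result["enrolled"])
```

Note how the diagram answers exactly the kind of meeting question quoted above: the "switch teachers" branch still flows into "Enroll student".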
 

MrAl

Joined Jun 17, 2014
13,667
Hi,

I've done a lot of testing of 'AI' for various tasks, and the bottom line is that you always have to check anything it comes up with very carefully. Sometimes it will be outright wrong, and sometimes there is no way to convince it (even after a couple of hours of back-and-forth dialog) that it is wrong.
Some of its 'rules' seem to be deeply ingrained, because it keeps relying on them even when you tell it they don't apply or are simply wrong. It must treat certain references as correct at all times, so it allows no variation in that area of 'thought'. In those cases it is not possible to correct it, and the best you can do is come at it from a different angle. Even then it may not see the connection.
It also has a way of minimizing the importance of some issues. That can be fine when the importance really is low, but when an issue matters a lot it can be hard to get it to treat the issue as important unless you state that fact very explicitly. This comes up, for example, when fine-tuning optimization problems.

This reminds me of how automated reasoning works: if the statements are not all presented exactly right, the outcome can be way off, which makes it completely unusable. That usually requires going over all of the input statements again to figure out what was misstated or left out. With 'AI', sometimes you just have to start all over again.
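One cheap safeguard along these lines is to never accept a formula or routine the model suggests without checking it against an independent oracle, even a slow brute-force one. A small sketch (the "suggested" function here is invented for illustration, not a real model output):

```python
from math import gcd

# Suppose the model suggested this closed-form count of integers in
# [1, n] divisible by a or b (inclusion-exclusion). Plausible -- but verify.
def suggested_count(n, a, b):
    lcm = a * b // gcd(a, b)
    return n // a + n // b - n // lcm

# Independent brute-force oracle, used only for verification.
def brute_count(n, a, b):
    return sum(1 for k in range(1, n + 1) if k % a == 0 or k % b == 0)

# Spot-check across a spread of inputs before trusting the suggestion.
for n in (0, 1, 10, 97, 1000):
    for a, b in ((2, 3), (4, 6), (5, 7)):
        assert suggested_count(n, a, b) == brute_count(n, a, b), (n, a, b)
print("all checks passed")
```

The oracle is deliberately dumb; its only job is to be obviously correct, so a disagreement points at the model's suggestion rather than at the checker.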
 

Thread Starter

Futurist

Joined Apr 8, 2025
721
I agree, one must be cautious. But as you know, these LLMs don't "think" or have any reasoning abilities; it's fake intelligence rather than artificial. They "simply" predict the next word statistically, feed the extended text back in to predict the word after that, and repeat until they have a complete paragraph.

True intelligence doesn't need gigawatts of electrical power to hold a conversation.

Who'd have thought the Turing test would be passed by such an unintelligent system!
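In rough outline, that feedback loop looks like the toy below. This is only a sketch of the *loop structure*: real LLMs predict subword tokens with a large neural network and sampling strategies, not whole words from a bigram frequency table, and the corpus here is invented.

```python
from collections import Counter, defaultdict

# Tiny toy corpus; real models train on vastly more text.
corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which -- a crude stand-in for a trained model.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(prompt, steps=4):
    words = prompt.split()
    for _ in range(steps):
        options = follows.get(words[-1])
        if not options:
            break  # no known continuation
        # Greedy "prediction": take the statistically most likely next word,
        # append it, and feed the extended text back in -- the loop described above.
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))
```

Everything that looks like "composition" emerges from repeating that one predict-and-append step.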
 

MrAl

Joined Jun 17, 2014
13,667
Hi,

I'd be careful about believing the Turing test results. Do we know who the human judges were? I don't think so.
Even then, I believe they relied on *previous* conversations that had been written down; the judges did not get to converse directly with the 'AI'.
If that is true, then to me the 'AI' did not pass the test; it passed a pared-down version of the test.
Maybe if we are satisfied that it can *respond* the way a human could, then we could say it passed. I don't think the Turing test requires technical accuracy of any kind. That level of passing would only be a small milestone to me, however. We are lucky, though, that it can help with some things.

Also, after having a lot of conversations with 'AI' to see how it responds, I can say that I, and probably you and a lot of other members here, could tell whether we were talking to 'AI' or a human with a high success rate. It does depend partly on what questions we ask of it.
It gives some pretty good responses, but the bad responses are so bad that we'd have to be talking to a really stupid human to get some of them :)

I do like playing around with it sometimes, though. It is pretty amazing what it can come up with when we don't get too technical. In some cases it handles technical topics too.

There are also more modern tests that judge further aspects of its capabilities, like tests for creativity, knowledge of various subjects, etc.
It does reasonably well on the SAT, and it is decent at math, but it is not very strong at multistep math.
Who knows how much better these systems will get in the future, though.
 

WBahn

Joined Mar 31, 2012
32,703
The "Turing Test" is not some standardized test that is used and a system either passes or fails. It is a conceptual thought experiment proposed by Turing in a 1950 paper as a basis for the discussion in that paper regarding the philosophy of artificial intelligence. He wasn't proposing a test to determine whether a machine can "think".

https://academic.oup.com/mind/artic...33/986238?redirectedFrom=fulltext&login=false

NOTE: This is not a free site, but you can often access the content through a university library. There are also plenty of places online where you can find PDFs of the paper (it's about 22 pages long), but since I'm unsure of the copyright status of the article, I won't link to any of them.

He used a simple alternative thought experiment, the "imitation game", as a less ambiguous definition of what "thinking" is for the purpose of countering several objections that people had regarding whether machines could ever be considered to "think".

It's interesting that he took the position that the notion of whether or not machines can think is as much a social question as it is a technical one. He stated, "I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted."

I think, for better or worse, we have reached that point. Notice that he doesn't say that you can speak of machines thinking without being contradicted, but rather that you can speak of that without expecting to be contradicted. Once that point is reached, we are poised to broadly accept that machines can "think" regardless of their technical capability to actually do so, but because we will have, as a society, revised our definition of "thinking" to include whatever it is that these machines are doing.
 

Thread Starter

Futurist

Joined Apr 8, 2025
721
Very insightful post.
 

MrAl

Joined Jun 17, 2014
13,667
I think I agree with most of that.

We can probably think of this as a graded performance measure rather than anything absolute. We have trouble defining it in much detail even for humans.
 

Samantha Groves

Joined Nov 25, 2023
151
Don't trust ChatGPT. It is a neural network; it doesn't have human logic. If all of the content on the Internet were erased, it couldn't answer what 1+1 is. What it currently does is search the Internet for you and, based on the bulk of data about a subject, answer back just like a human would. But it is not sentient, and humans are infinitely better at understanding information, given enough time. It is just a smart Google search.
 

Thread Starter

Futurist

Joined Apr 8, 2025
721
Oh, I'm well aware that "AI" is not aware; I don't for one second attribute any sentience or logic to it. It just executes instructions, nothing more. It can't even make choices; it just trundles along.
 