ChatGPT

WBahn

Joined Mar 31, 2012
30,303
https://cacm.acm.org/opinion/generative-ai-and-cs-education/
Generative AI and CS Education
Increased knowledge sharing is helping CS educators and researchers accelerate change in computing education.

I was in grade school in the pre-calculator era. When calculators/computers became cheap and usable, we still had the "number sense" to use them as tools (as working agents of our knowledge) instead of as a crutch. I'm watching a daughter take college-level programming courses in C++ and now C (using K&R as the course book :D ) for her computer architecture classes while taking calculus 3 and physics. So far Generative AI has only been lightly touched on during her learning process, so the students can develop the 'code sense' of a traditional CS education. Integrating AI assistants into education will IMO be totally different from the use of calculators, because unless there is a calculation error, calculators don't hallucinate answers. The "AI" gives you an answer (without understanding) and tries hard to convince you it's correct (at face value). You can only tell whether it's a good answer or not if you're capable of writing the good answer yourself.

How to teach the 'code sense' needed to spot these hallucinated, sometimes detailed, and at times very complicated code responses from autocomplete chatbots will be an interesting process. These tools deliver code, but writing code has very little to do with "computer science".
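For instance, an assistant's answer can read as perfectly idiomatic and still be subtly wrong. A made-up illustration (mine, not an actual chatbot transcript):

```python
# Hypothetical example of confident-but-wrong assistant output, invented for illustration.
def median(values):
    """Return the median of a non-empty list of numbers."""
    ordered = sorted(values)
    return ordered[len(ordered) // 2]  # looks plausible, but wrong for even-length input


print(median([1, 2, 3, 4]))  # prints 3; the actual median is 2.5
```

Nothing about that code looks suspicious at a glance. Only someone who already knows what a median is, or who bothers to test the even-length case, will catch it.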

IMO current LLMs are not early prototypes; they are pretty much at the limits of what that technology is capable of. The reliability and trust problems we see today likely can't and won't be fixed long-term with LLM-based programming.
I actually view the use of AI and the use of calculators as being quite decent parallels at a pretty fundamental level -- AI just brings those problems to the table on steroids.

As you stated, AI gives you an answer and you can't tell if it's a good answer or a bad answer if you are not capable of writing the good answer yourself -- or at least have some basis upon which to evaluate the quality of the offered answer. But the exact same is true with a calculator. It gives you an answer and if you do not have any basis upon which to evaluate the quality of the offered answer, you are highly likely to simply accept whatever it gives at face value. I see that happen all the time.

I recently gave an exam in a computer networks class in which the students were given realistic values for link length, transmission rate, and packet size and were asked to determine the effective data transfer rate if a Stop-and-Wait protocol were used (assuming zero processing and queuing delays). I got answers that ranged from many (as in dozens of) orders of magnitude faster than the link transmission rate to answers so small that they equated to less than one bit transferred over the current age of the known universe. This is despite my having harped on tracking units (which hardly anyone did) and asking whether the answer makes sense (which even fewer did) all semester.

The sad fact is that the overwhelming majority of students have gotten to this point with zero number sense and are so reliant on calculators to do their thinking for them that they are not capable of evaluating whether the answer a calculator spews forth is reasonable or not. Even if they were willing to deal with this deficiency (and that number is extremely small, though not quite zero), they have dug themselves such a deep hole (with the educational system ever ready to hand them bigger and bigger shovels to dig with) that it would be a major step back for them to remediate it.
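For what it's worth, the sanity check I kept asking for costs almost nothing. A back-of-the-envelope Stop-and-Wait estimate with made-up but realistic numbers (not the actual exam values) looks like this:

```python
# Rough Stop-and-Wait sanity check with hypothetical values (not the exam's numbers).
PACKET_BITS = 1500 * 8    # 1500-byte packet
LINK_RATE = 100e6         # 100 Mb/s transmission rate
LINK_LENGTH = 1.0e6       # 1000 km link
PROP_SPEED = 2.0e8        # ~2/3 the speed of light in copper/fiber, m/s

t_trans = PACKET_BITS / LINK_RATE   # seconds to push one packet onto the link
t_prop = LINK_LENGTH / PROP_SPEED   # one-way propagation delay, seconds
cycle = t_trans + 2 * t_prop        # send the packet, then wait for the ACK (ACK size ignored)

effective_rate = PACKET_BITS / cycle
print(f"effective rate: {effective_rate / 1e6:.2f} Mb/s "
      f"({effective_rate / LINK_RATE:.1%} of the link rate)")
# -> roughly 1.2 Mb/s, about 1.2% of the 100 Mb/s link rate
```

The units carry you straight to the answer, and the one check that catches almost every blunder is free: the effective rate can never exceed the link's transmission rate.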

The exact same thing has happened with all kinds of technological advances that are intended to assist human endeavors but, almost invariably, have the effect of substituting for basic human thinking. There is absolutely NO reason to believe that AI is going to be ANY different -- in fact, there is every reason to expect that it will race down that inevitable path faster than anything we have seen before. We can also expect this process to be sped up by the educational system that, like the calculator and other tools before it, will merely latch onto yet another way to "bring technology into the classroom" and turn it into the biggest shovel yet for students to dig themselves holes in which they have to think even less.
 

nsaspook

Joined Aug 27, 2009
13,578
Basic agreement, but calculators still aren't guessing the next number or expression to display.
Calculators are pure GIGO. 'AI' can take Good Input (a proper query about proper facts) and still produce Garbage Output that goes beyond merely looking good and logical to the non-expert, often with imaginary garbage references that can be very verbose and initially very convincing on the surface.

It's a slick and polished trickster doing exactly what it was designed to do: emulating excellence and knowledge with no actual intelligence, instead of just being an electronic math-helper pencil that can only be misused through operator incompetence.
 

WBahn

Joined Mar 31, 2012
30,303
Basic agreement, but calculators still aren't guessing the next number or expression to display.
Calculators are pure GIGO.
Usually, but not necessarily. I once convinced a 16-year-old girl that she was only 14 -- and I have witnesses (well, a witness)! -- because she believed anything a calculator spewed forth, even when it produced clearly bad results with the correct buttons pushed (the calculator in question had some issue -- perhaps a sticking key or nearly dead batteries).
 

nsaspook

Joined Aug 27, 2009
13,578
Usually, but not necessarily. I once convinced a 16-year-old girl that she was only 14 -- and I have witnesses (well, a witness)! -- because she believed anything a calculator spewed forth, even when it produced clearly bad results with the correct buttons pushed (the calculator in question had some issue -- perhaps a sticking key or nearly dead batteries).
Wow, similar to people believing that GPS is infallible while driving up a snow-covered logging road at night. Just a little critical thinking is all it takes.
https://ca.news.yahoo.com/driver-led-astray-gps-detour-214041528.html?guccounter=1
 

nsaspook

Joined Aug 27, 2009
13,578
https://federalnewsnetwork.com/arti.../nara-bans-use-of-chatgpt-on-agency-networks/
NARA bans use of ChatGPT on agency networks
The National Archives and Records Administration has become the latest federal agency to bar its employees from using ChatGPT for work purposes, citing “unacceptable risk” to the agency’s data.

The policy decision stems from what agency officials said are concerns that any data employees enter as prompts into the commercial version of the AI service might be used not only to train the ChatGPT model, but that the same data could make its way into responses to other users.

“Various media reports indicate there is a growing amount of personally identifiable information and corporate proprietary information showing up in ChatGPT and other AI services,” Keith Day, NARA’s chief information security officer, wrote in a memo to employees Wednesday. “Employees who want to use AI to help them in their jobs often don’t realize that these types of AI services keep your input data for further training of the AI. If sensitive, non-public NARA data is entered into ChatGPT, our data will become part of the living data set without the ability to have it removed or purged.”
 

nsaspook

Joined Aug 27, 2009
13,578
This is all about the Benjamins.
https://www.theguardian.com/technol...wing-users-to-create-ai-generated-pornography
OpenAI considers allowing users to create AI-generated pornography
OpenAI, which is also the developer of the DALL-E image generator, revealed it was considering letting developers and users “responsibly” create what it termed not-safe-for-work (NSFW) content through its products. OpenAI said this could include “erotica, extreme gore, slurs, and unsolicited profanity”.

It said: “We’re exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts … We look forward to better understanding user and societal expectations of model behaviour in this area.”
 

nsaspook

Joined Aug 27, 2009
13,578
https://www.reuters.com/technology/openai-co-founder-ilya-sutskever-departs-2024-05-14/
OpenAI co-founder Ilya Sutskever to exit

Ilya Sutskever was removed from OpenAI's board in November last year, after he joined in the effort to fire Altman but later signed an employee letter demanding his return.
Altman was fired from the company in November without any detailed cause, sparking confusion about the future of the startup, but he was soon given back the reins of the company.
 

nsaspook

Joined Aug 27, 2009
13,578
https://www.axios.com/2024/05/17/google-openai-ai-generative-publishers
AI eats the web
Today's web exists because millions of people have spent decades extending it with bits of knowledge, lore and images.

  • That process is the only reason today's AI is able to know anything about anything.
They don't know anything. What they have is the ability to parrot (say the likely next word in the sequence) existing human-generated knowledge in a way that sometimes looks like knowledge.
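A toy sketch of that "likely next word" loop, using bigram counts over a handful of words (real LLMs use large neural networks over tokens, but the generation loop is the same idea, and this is only my illustration):

```python
# Toy "say the likely next word" generator: a bigram counter, not a real LLM.
from collections import Counter, defaultdict

corpus = "the cheese slides off the pizza because the glue is not food".split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(word, n=5):
    out = [word]
    for _ in range(n):
        if not follows[out[-1]]:
            break
        # Emit whichever continuation was seen most often after the last word.
        out.append(follows[out[-1]].most_common(1)[0][0])
    return " ".join(out)

print(continue_text("the"))  # "the cheese slides off the cheese" -- fluent-looking, wrong
```

It strings together statistically plausible words with no model of what cheese or glue actually is; scale the same idea up enormously and you get fluent text, not understanding.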
 

nsaspook

Joined Aug 27, 2009
13,578
https://gizmodo.com/scarlett-johansson-openai-sam-altman-chatgpt-sky-1851489592
Scarlett Johansson Says She Warned OpenAI to Not Use Her Voice
Sam Altman denies Johansson was the inspiration even though the movie star says he asked her to be the voice of ChatGPT.
OpenAI asked Scarlett Johansson to provide voice acting that would be used in the company’s new AI voice assistant, but the actress declined, according to a statement obtained by NPR on Monday. And after last week’s demo, Johansson says she was shocked to hear a voice that was identical to her own. Especially since OpenAI was asking for Johansson’s help as recently as two days before the event.

OpenAI announced early Monday it would “pause the use of Sky” as a voice option. But Johansson is threatening legal action, and her statement goes into detail about why.

“Last September, I received an offer from Sam Altman, who wanted to hire me to voice the current ChatGPT 4.0 system,” Johansson said in the statement, referring to the head of OpenAI. “He told me that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and AI. He said he felt that my voice would be comforting to people.”
 

nsaspook

Joined Aug 27, 2009
13,578
https://ca.finance.yahoo.com/news/google-ai-search-wants-glue-161858685.html?guccounter=1
Google’s AI search wants you to glue cheese to your pizza. It’s just the tip of its bad-idea iceberg
Google is learning this the hard way as it rolls out AI-generated answers into search—and the internet is not letting things slide. Social media has been loaded with examples of the AI’s flubs, which range from the sublime to the absolutely ridiculous.

Leading the pack is the now widely shared example of Google suggesting you “mix about 1/8 cup of nontoxic glue into the sauce” to keep cheese from sliding off your pizza slice. The source of this advice—which we strongly suggest you do not follow—is seemingly a joke post from Reddit made 11 years ago by a user whose name we can’t repeat in a family financial publication. (The answers are almost word-for-word.)
 