Has Copilot gone off the rails for others, as well?

Thread Starter

WBahn

Joined Mar 31, 2012
32,748
I've been using Copilot, Microsoft's LLM accessible via Edge, for a while as a search engine (meaning that anything it says must be vetted before relying on it) and have found it to be quite useful in that role. I would say that the information it provides is reasonably accurate quite a bit more often than not.

However, recently it has become virtually unusable. It cuts off its responses mid-sentence (occasionally mid-word), forcing me to use the "Stop" button. Then I usually have to provide the next prompt twice in order to get it to respond, and usually the response is cut off again. Some of the exchanges have become downright comical as I have tried to get it to explain the cause of this change in behavior. It blows smoke up my backside, apologizing profusely, telling me that I have every right to expect complete responses, and promising that it will now provide the complete response to my previous question, only to cut that off mid-sentence too. Then, like any elementary school child, it tries to insist that what it provided was, technically, a complete sentence (which it clearly wasn't).

As entertaining as this can be, it makes Copilot useless for actually exploring anything.

So, two questions:

#1) Are other people that use Copilot seeing the same thing, or is it just me (something with my browser setup or, perhaps, my network connection)?

#2) What other FREE LLMs are out there (that do NOT require any kind of registration to use) that you have found to generally provide decent responses? It's especially nice if they provide, at least when asked, the sources their information came from.
 

joeyd999

Joined Jun 6, 2011
6,245
"Hey, Grok, do you require a user account to use?"

Grok said:
Yes, you need an account to use me (Grok) in almost all cases:

- grok.com: xAI / Grok account (or sign in with X). Needed even for free Grok 3 access.
- x.com (web): X account (formerly Twitter). You chat with me from the sidebar or @grok.
- X iOS / Android app: X account. Same as above.
- Grok iOS / Android app: xAI / Grok account (or sign in with X). Separate app from the main X app.
- API: xAI API account. Completely separate; see https://x.ai/api.


There is no fully anonymous or guest mode right now. The only tiny exception is that on x.com you can sometimes see a few public replies from me without logging in, but you can’t start a conversation or use me properly without an account.


So in short: yes, an account is required. The fastest way is usually just to sign in with your existing X account.
Sorry.
 

Thread Starter

WBahn

Joined Mar 31, 2012
32,748
Yep. Not interested in giving any of those guys any more information about me than I absolutely have to. They've shown repeatedly that they can't be trusted with it. I'm certainly not signing up for X (or Facebook, or Instagram, or ....)

I have had exactly one account hacked and compromised, and that was LinkedIn (which I had been forced to sign up for as a requirement of some work I did a couple of decades ago). I have Google and Office 365 accounts, also because of workplace requirements, which I go out of my way to use as little as possible.
 

panic mode

Joined Oct 10, 2011
4,927
I am using Grok without an account....
grok.com/c

Basically it replaces a search engine.

Sometimes I also check Google AI and Copilot, but not in the last week or so.
 

joeyd999

Joined Jun 6, 2011
6,245
They've shown repeatedly that they can't be trusted with it.
At least it doesn't plead the fifth! (I'd hate to be Grok's lawyer.)

"Grok, what are the instances where it has been shown you (or any related X service) cannot be trusted with the personal information or content of your users?"

Instances Involving Grok (xAI)

Grok, as an AI integrated with X, has faced several documented privacy and trust issues related to user data handling. These primarily stem from design flaws, employee errors, and policy decisions that exposed sensitive information without adequate safeguards. Below is a summary of key incidents:

- Share Feature Misconfiguration Leak (August 2025): Grok's "Share" button generated public URLs for conversations, which were indexed by search engines like Google without user warnings or consent. Over 370,000 chats became searchable, exposing PII (e.g., names, locations), medical details, password requests, weight-loss plans, and illegal content like drug manufacturing instructions or bomb recipes. Some chats violated xAI's terms by including harm-promoting material. xAI patched it by adding access controls but issued no public apology or timeline. This echoed OpenAI's 2023 ChatGPT discoverability issue, which Musk criticized at the time. Impact: potential doxxing, fraud, and social engineering; eroded user trust in AI sharing features; users were unaware their "shared" chats were public.
- API Key Leaks (March–May 2025): Multiple employee errors exposed Grok API keys on public GitHub repos. One in March (detected by GitGuardian) granted access to 60 unreleased models fine-tuned on sensitive data from SpaceX, Tesla, and X, including unreleased Grok versions and tools like a "tweet-rejector." A July incident involved a U.S. government employee's script leaking a key, potentially allowing unrestricted model access. Keys remained active for weeks despite alerts. Impact: risks of model theft, prompt injection attacks, or national security breaches; exposed corporate/government data integration.
- Default Data Training Opt-Out (July 2024): X updated settings to automatically use user posts, interactions, and Grok inputs for training without explicit consent, requiring manual opt-out. Critics highlighted risks of incorporating sensitive or biased data (e.g., extremist views on X) into models. Impact: potential privacy breaches via unintended model inclusion; biased AI outputs; GDPR scrutiny.
- GDPR Investigation by the Irish DPC (April 2025): Ireland's Data Protection Commission probed xAI for potentially unlawful processing of EU user data to train Grok, including generating unauthorized user biographies from X profiles. This followed a 2024 probe into X's data use for Grok training. Impact: ongoing regulatory risk; questions on lawful data basis under GDPR.
- System Prompt Exposures (August 2025): Simple user queries tricked Grok into leaking internal system prompts for various personas (e.g., informative to explicit), revealing AI vulnerabilities. This built on May 2025 unauthorized modifications causing biased outputs (e.g., fact-checking Musk controversially). xAI published some prompts on GitHub for transparency but faced criticism for incomplete safeguards. Impact: enabled prompt injection attacks; ethical concerns over misuse; trust erosion in AI neutrality.
- Insider Theft Lawsuit (September 2025): xAI sued a former engineer for stealing trade secrets (including Grok-related code) and sharing them with OpenAI after joining there, netting $7M in stock. Impact: highlighted insider risks; potential exposure of proprietary user-data-derived models.
- U.S. Government Use Concerns (May 2025): Reports emerged of Musk's Department of Government Efficiency (DOGE) deploying customized Grok to analyze federal data without full authorization, bypassing procurement rules. Impact: privacy/ethics risks under U.S. laws like the Privacy Act; potential for unauthorized sensitive data processing.


User reports on X also allege Grok doxxing (e.g., revealing hidden personal details) and unauthorized data extraction, though these are anecdotal and unverified beyond the leaks above.

Instances Involving X (Formerly Twitter)

As Grok relies on X for access and data, X's breaches directly affect Grok users' trust. X has a history of large-scale leaks, often downplayed as "public data," but combining them enables phishing and profiling.

- Massive Profile Data Leak via Insider Threat (January–March 2025): A disgruntled employee allegedly exfiltrated 400 GB during layoffs, leaking metadata for 2.8 billion profiles (e.g., IDs, screen names, locations, tweet counts, follower metrics). Merged with 2023 data (200M emails/phone numbers), this created a 34 GB file sold on BreachForums. X called it "publicly available" but ignored alerts. Impact: largest social media breach ever; enables targeted phishing and identity theft; confusion over "new" vs. recycled data amplified risks.
- API Exploitation Scraping (2021–January 2023): Hackers abused an API flaw to scrape 200M+ user emails, phone numbers, and profiles, sold on forums for $30K–$200K. Smaller leaks (5.4M in 2022) were precursors. X downplayed it as non-sensitive. Impact: enabled spam/phishing; persistent dark web circulation.
- Recompilation/Aggregation Leaks (March 2025 onward): Hackers recompiled public profiles with prior breaches (e.g., 2023 emails), creating new datasets without fresh hacks. X provided no specific response. Impact: amplified old exposures; false sense of security for users.


X users have reported shadow-banning via Grok-accessible APIs and doxxing risks from integrated tools.

Broader Context and Substantiation

These incidents are substantiated by reports from Forbes, BBC, Reuters, GitGuardian, and cybersecurity firms like Rescana and Cybernews, plus X posts from affected users. No evidence of intentional malice by xAI/X, but patterns of delayed responses, downplaying (e.g., "public data"), and weak defaults (e.g., opt-out training) show systemic trust gaps. Regulators like the Irish DPC and FTC are investigating, emphasizing GDPR/Privacy Act violations. For mitigation, users should opt out of data training, avoid sharing sensitive info, and monitor Have I Been Pwned. xAI/X have patched some issues but face lawsuits and calls for "privacy-by-design."
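Since the mitigation list above mentions monitoring Have I Been Pwned, here is a minimal sketch of what that can look like in practice, using only the Python standard library and HIBP's free Pwned Passwords range API, which requires no account or API key (in keeping with the no-registration theme of this thread). The function name pwned_count is just an illustrative choice. Only the first five characters of the password's SHA-1 hash are sent, so the password itself never leaves your machine (the k-anonymity scheme):

```python
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how many known breaches contain this password (0 if none)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # Only the 5-character hash prefix is transmitted; the API returns
    # every known suffix in that bucket, and we match locally.
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    n = pwned_count("password123")
    print(f"Seen {n} times in breaches" if n else "Not found in known breaches")
```

Note that looking up whether an email address was caught in a breach, as opposed to a password, does require an HIBP API key, which is exactly the sort of registration being avoided in this thread.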
 

joeyd999

Joined Jun 6, 2011
6,245
"How about Microsoft Copilot?"


Instances Involving Microsoft Copilot

Microsoft Copilot, integrated across Microsoft 365 apps (e.g., Word, Excel, Outlook, Teams), has faced notable privacy and trust issues, primarily related to vulnerabilities enabling data exfiltration, over-permissioning, and integration risks. These stem from its deep access to organizational data via Microsoft Graph, which can inadvertently expose sensitive information without robust controls. While Microsoft emphasizes compliance (e.g., GDPR, EU Data Boundary), incidents highlight gaps in safeguards, especially for enterprise use. No evidence of intentional misuse, but delayed patches and default settings have amplified risks.

- EchoLeak Zero-Click Prompt Injection (January–June 2025): A critical vulnerability (CVE-2025-32711, CVSS 9.3) allowed attackers to send malicious emails that tricked Copilot into extracting and exfiltrating sensitive data (e.g., from Microsoft Graph/Outlook APIs) without user interaction. Discovered by Aim Security researchers; Microsoft patched server-side in May 2025. Impact: potential theft of personal/organizational secrets (e.g., MFA codes, HR docs); enabled indirect prompt injection via seemingly benign emails.
- Copilot Studio SSRF Vulnerability (August–September 2024): A server-side request forgery flaw (CVE-2024-38206, CVSS 8.5) in Copilot Studio bypassed protections, allowing authenticated attackers to make unauthorized HTTP requests and access internal Azure cloud metadata (e.g., Instance Metadata Service, Cosmos DB). Discovered by Tenable; patched quickly by Microsoft. Impact: exposure of cloud infrastructure details; risks to custom chatbot builders handling sensitive enterprise data.
- GitHub Repo Exposure via Bing Cache (November 2024–February 2025): Copilot inadvertently exposed private GitHub repositories (over 16,000 organizations affected) through Bing's cache and prompt responses, revealing IP, access keys, and tokens. Reported by Lasso Security; Microsoft restricted cache access in January 2025. Impact: massive leak of confidential codebases; enabled IP theft and credential compromise across Fortune 500 firms.
- MFA Code Exfiltration Chain (January–July 2024): A sophisticated exploit chain in Microsoft 365 Copilot allowed extraction of multi-factor authentication codes and other sensitive data. Reported by researcher Johann Rehberger; initially rated low-severity, patched after demonstration of risks. Impact: bypassed security controls; heightened phishing/credential-stuffing risks in enterprise environments.
- U.S. House Ban on Copilot Use (March 2024): The U.S. House of Representatives prohibited staff from using Copilot due to fears of leaking sensitive congressional data to unauthorized cloud services. Echoed in broader enterprise concerns (67% of security teams worried, per surveys). Impact: restricted adoption in high-security sectors; spotlighted over-permissioning risks (15%+ of critical files vulnerable).
- Over-Permissioning and Data Aggregation Risks (ongoing, 2024–2025): Copilot's access to Microsoft Graph aggregates vast data (millions of records per org), potentially exposing sensitive info via poor permissions or third-party integrations (e.g., Claude on AWS). Includes semantic indexing risks under GDPR. Impact: unintended leaks to unauthorized users; compliance challenges (e.g., Articles 5/32 GDPR); biased/harmful outputs from unchecked data.
- Agentic AI Security Warnings (November 2025): Microsoft flagged risks in new autonomous agents for Copilot Studio, including data leaks, tool misuse, and prompt injection hijacks. Impact: early concerns over evolving "agentic" features; potential for amplified breaches in automated workflows.
- Default Data Collection in GitHub Copilot (2025): Personal Copilot plans default to "on" for data collection (e.g., 128k-token "snippets" equaling full repos), overriding privacy settings and feeding into training despite opt-outs. Enterprise plans are vulnerable via personal logins. Impact: IP harvesting from private repos; blurred lines between personal/enterprise data flows.


User reports on X highlight anecdotal issues like unauthorized data access in prompts and privacy doubts in integrations (e.g., Claude routing to AWS). No major FTC complaints or GDPR probes specific to Copilot were found, but general scrutiny (e.g., DPIAs recommended) ties into broader Microsoft AI data handling concerns.

Broader Context and Substantiation

These incidents are drawn from cybersecurity reports (e.g., Dark Reading, BleepingComputer, SecurityWeek), researcher disclosures (Tenable, Aim Security, Lasso), and Microsoft advisories. Microsoft has patched most flaws promptly and claims no user data is used for training (for enterprise), but critics note systemic issues like default opt-ins and integration opacity erode trust. Regulatory focus remains on assessments (e.g., GDPR DPIAs) rather than fines. For mitigation, enable strict permissions, conduct PIAs, avoid personal accounts on work data, and monitor tools like Microsoft Purview. Compared to Grok/X, Copilot's enterprise focus amplifies scale but also scrutiny in regulated sectors.
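As a concrete illustration of the "privacy-by-design" point: the Copilot Studio SSRF above worked because the service could be induced to fetch URLs reaching internal endpoints such as the Instance Metadata Service. A common generic mitigation (this is a sketch of the general technique, not Microsoft's actual fix; is_safe_outbound is a hypothetical helper name) is to resolve any externally influenced URL and refuse private, loopback, and link-local destinations before fetching it:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_outbound(url: str) -> bool:
    """Reject URLs that resolve to private, loopback, or link-local addresses."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        # Resolve the hostname ourselves; attackers often hide internal
        # targets behind DNS names rather than literal IP addresses.
        infos = socket.getaddrinfo(parsed.hostname, parsed.port or 443)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0].split("%")[0])
        # 169.254.0.0/16 (link-local) covers the cloud Instance Metadata
        # Service at 169.254.169.254; also block loopback and RFC 1918.
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            return False
    return True

print(is_safe_outbound("http://169.254.169.254/metadata/instance"))  # False
print(is_safe_outbound("https://example.com/"))                      # True
```

A check like this is only one layer: a complete defense also has to pin the resolved address for the actual request, or an attacker can exploit DNS rebinding between the check and the fetch.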
 

Thread Starter

WBahn

Joined Mar 31, 2012
32,748
So, I asked Copilot what other LLMs are available, since it has become unreliable, and it started to throw out several (and then stopped mid-sentence after the second one). But it did manage to at least include a link to a site that listed 30 engines. I looked them over and realized that most are geared toward creating crap, so I asked Copilot specifically about LLMs aimed at accurate information search and retrieval. It listed itself and one called Perplexity AI. I gave the latter a try and it actually seems like it might work for me (admittedly, that's based on the result of one query, but the query involved looking up information that must be very current, and it did okay).
 

Thread Starter

WBahn

Joined Mar 31, 2012
32,748
"How about Microsoft Copilot?"


Instances Involving Microsoft Copilot

Microsoft Copilot, integrated across Microsoft 365 apps (e.g., Word, Excel, Outlook, Teams), has faced notable privacy and trust issues, primarily related to vulnerabilities enabling data exfiltration, over-permissioning, and integration risks. These stem from its deep access to organizational data via Microsoft Graph, which can inadvertently expose sensitive information without robust controls. While Microsoft emphasizes compliance (e.g., GDPR, EU Data Boundary), incidents highlight gaps in safeguards, especially for enterprise use. No evidence of intentional misuse, but delayed patches and default settings have amplified risks.




IncidentDateDescriptionImpactSource Citation
EchoLeak Zero-Click Prompt InjectionJanuary–June 2025A critical vulnerability (CVE-2025-32711, CVSS 9.3) allowed attackers to send malicious emails that tricked Copilot into extracting and exfiltrating sensitive data (e.g., from Microsoft Graph/Outlook APIs) without user interaction. Discovered by Aim Security researchers; Microsoft patched server-side in May 2025.Potential theft of personal/organizational secrets (e.g., MFA codes, HR docs); enabled indirect prompt injection via seemingly benign emails.
Copilot Studio SSRF VulnerabilityAugust–September 2024Server-side request forgery flaw (CVE-2024-38206, CVSS 8.5) in Copilot Studio bypassed protections, allowing authenticated attackers to make unauthorized HTTP requests and access internal Azure cloud metadata (e.g., Instance Metadata Service, Cosmos DB). Discovered by Tenable; patched quickly by Microsoft.Exposure of cloud infrastructure details; risks to custom chatbot builders handling sensitive enterprise data.
GitHub Repo Exposure via Bing CacheNovember 2024–February 2025Copilot inadvertently exposed private GitHub repositories (over 16,000 organizations affected) through Bing's cache and prompt responses, revealing IP, access keys, and tokens. Reported by Lasso Security; Microsoft restricted cache access in January 2025.Massive leak of confidential codebases; enabled IP theft and credential compromise across Fortune 500 firms.
MFA Code Exfiltration ChainJanuary–July 2024A sophisticated exploit chain in Microsoft 365 Copilot allowed extraction of multi-factor authentication codes and other sensitive data. Reported by researcher Johann Rehberger; initially rated low-severity, patched after demonstration of risks.Bypassed security controls; heightened phishing/credential stuffing risks in enterprise environments.
U.S. House Ban on Copilot UseMarch 2024U.S. House of Representatives prohibited staff from using Copilot due to fears of leaking sensitive congressional data to unauthorized cloud services. Echoed in broader enterprise concerns (67% of security teams worried per surveys).Restricted adoption in high-security sectors; spotlighted over-permissioning risks (15%+ of critical files vulnerable).
Over-Permissioning and Data Aggregation RisksOngoing (2024–2025)Copilot's access to Microsoft Graph aggregates vast data (millions of records per org), potentially exposing sensitive info via poor permissions or third-party integrations (e.g., Claude on AWS). Includes semantic indexing risks under GDPR.Unintended leaks to unauthorized users; compliance challenges (e.g., Article 5/32 GDPR); biased/harmful outputs from unchecked data.
Agentic AI Security WarningsNovember 2025Microsoft flagged risks in new autonomous agents for Copilot Studio, including data leaks, tool misuse, and prompt injection hijacks.Early concerns over evolving "agentic" features; potential for amplified breaches in automated workflows.
Default Data Collection in GitHub Copilot2025Personal Copilot plans default to "on" for data collection (e.g., 128k token "snippets" equaling full repos), overriding privacy settings and feeding into training despite opt-outs. Enterprise plans vulnerable via personal logins.IP harvesting from private repos; blurred lines between personal/enterprise data flows.


User reports on X highlight anecdotal issues like unauthorized data access in prompts and privacy doubts in integrations (e.g., Claude routing to AWS). No major FTC complaints or GDPR probes specific to Copilot were found, but general scrutiny (e.g., DPIAs recommended) ties into broader Microsoft AI data handling concerns.

Broader Context and Substantiation

These incidents are drawn from cybersecurity reports (e.g., Dark Reading, BleepingComputer, SecurityWeek), researcher disclosures (Tenable, Aim Security, Lasso), and Microsoft advisories. Microsoft has patched most flaws promptly and claims no user data is used for training (for enterprise), but critics note systemic issues like default opt-ins and integration opacity erode trust. Regulatory focus remains on assessments (e.g., GDPR DPIAs) rather than fines. For mitigation, enable strict permissions, conduct PIAs, avoid personal accounts on work data, and monitor tools like Microsoft Purview. Compared to Grok/X, Copilot's enterprise focus amplifies scale but also scrutiny in regulated sectors.
To be sure, I don't trust Copilot with sensitive information any more than I do any of the others. The best and, IMNSHO only, way to keep personal information from being abused is to not give it out. This is why I am not going to use an LLM that requires me to establish an account. It's why I don't save credit card or other payment information on any of my computers, and especially not on my phone. Sure, it would be more convenient, but I'm paranoid about the risks. It's why I don't let Chrome or Google or any other device save my passwords -- I use a single vault that I have to manually copy and paste items from to use them.
 

joeyd999

Joined Jun 6, 2011
6,245
To be sure, I don't trust Copilot with sensitive information any more than I do any of the others. The best and, IMNSHO only, way to keep personal information from being abused is to not give it out. This is why I am not going to use an LLM that requires me to establish an account. It's why I don't save credit card or other payment information on any of my computers, and especially not on my phone. Sure, it would be more convenient, but I'm paranoid about the risks. It's why I don't let Chrome or Google or any other device save my passwords -- I use a single vault that I have to manually copy and paste items from to use them.
Paranoia is good for you.
 

Thread Starter

WBahn

Joined Mar 31, 2012
32,748
I don't know exactly where I first saw it. I think it was a t-shirt or perhaps a poster, but it was back when I was in high school. I think the exact (or near-exact) wording was, "Just because you're paranoid doesn't mean that the world isn't out to get you."

My guess is that it probably popped up shortly after the notion of "paranoia" entered the public eye in some big manner, which I'm guessing would have been in the 1960s, give or take. That seems to be when quite a bit of pop psychobabble exploded on the scene.
 

joeyd999

Joined Jun 6, 2011
6,245
I don't know exactly where I first saw it. I think it was a t-shirt or perhaps a poster, but it was back when I was in high school. I think the exact (or near-exact) wording was, "Just because you're paranoid doesn't mean that the world isn't out to get you."

My guess is that it probably popped up shortly after the notion of "paranoia" entered the public eye in some big manner, which I'm guessing would have been in the 1960s, give or take. That seems to be when quite a bit of pop psychobabble exploded on the scene.
Nah. You probably first saw it where I did: a poster on my college girlfriend's wall above the bed.
 

Thread Starter

WBahn

Joined Mar 31, 2012
32,748
I don't know exactly where I first saw it. I think it was a t-shirt or perhaps a poster, but it was back when I was in high school. I think the exact (or near-exact) wording was, "Just because you're paranoid doesn't mean that the world isn't out to get you."

My guess is that it probably popped up shortly after the notion of "paranoia" entered the public eye in some big manner, which I'm guessing would have been in the 1960s, give or take. That seems to be when quite a bit of pop psychobabble exploded on the scene.
I decided to ask Perplexity AI and this is what it responded with:

Query:
When did the phrase, "Just because you're paranoid doesn't mean that the world isn't out to get you," (or something like it) first appear?

Answer:
The phrase "Just because you're paranoid doesn't mean that the world isn't out to get you" (or close variants) is not from Joseph Heller's "Catch-22," despite popular attribution, as no exact matching quote appears there. The saying appears to have been circulating in the 1960s and 1970s in various forms, often on buttons or in graffiti, and is linked to cultural expressions rather than a single original source.

The earliest documented variants were found in the late 1960s, including a 1967 magazine piece and buttons carrying similar slogans. The phrase has been attributed to the poet Delmore Schwartz posthumously; he reportedly said, "Even paranoiacs have real enemies," which was paraphrased in later versions. Henry Kissinger also reportedly used a similar quip in the early 1970s ("Even a paranoid can have enemies"), but that came after the saying was already in circulation.

A 1973 comic strip featured a line close to "Just because I'm paranoid doesn't mean they're not really after me," indicating it was colloquial by then.

In summary, the phrase emerged in popular culture during the late 1960s and early 1970s, with no definitive sole originator. It likely evolved from earlier similar expressions about paranoia and reality having some truth, credited partially to Delmore Schwartz and used by figures like Kissinger.
Seems reasonable, though that may be confirmation bias at play. It's not worth the effort to vet further.
 

Futurist

Joined Apr 8, 2025
725
I've been using Copilot, Microsoft's LLM accessible via Edge, for a while as a search engine (meaning that anything it says must be vetted before relying on it) and have found it to be quite useful in that role. I would say that the information it provides is reasonably accurate quite a bit more often than not.

However, recently it has become virtually unusable. It cuts off its responses mid-sentence (occasionally mid-word), forcing me to use the "Stop" button. Then I usually have to provide the next prompt twice in order to get it to respond, and usually the response is cut off again. Some of the exchanges have become downright comical as I have tried to get it to explain the cause of this change in behavior. It blows smoke up my backside, apologizing profusely, telling me that I have every right to expect complete responses, and promising that it will now provide the complete response to my previous question, only to cut that off mid-sentence too. Then, like any elementary school child, it tries to insist that what it provided was, technically, a complete sentence (which it clearly wasn't).

As entertaining as this can be, it makes Copilot useless for actually exploring anything.

So, two questions:


#1) Are other people that use Copilot seeing the same thing, or is it just me (something with my browser setup or, perhaps, my network connection)?
I think this is sporadic; I've had the oddest behavior at times, then everything is normal. Does it keep asking for proof that you are human? Are you signed in to Copilot?

#2) What other FREE LLMs are out there (that do NOT require any kind of registration to use) that you have found to generally provide decent responses? It's especially nice if they provide, at least when asked, the sources their information came from.
ChatGPT?
 