tobr 4 hours ago [-]
Interesting to note how similar this seems to what happened with Benj Edwards at Ars Technica. AI was used to extract or summarize information, and quotes found in the summary were then used as source material for the final writing and never double checked against the actual source.
I've run into a similar problem myself: working with a big transcript, I asked an AI to pull out passages related to a certain topic, and only because of oddities in the extracted timestamps did I realize that most of the quotes did not exist in the source at all.
raw_anon_1111 3 hours ago [-]
This seems like a solved problem. Any RAG interface I design has links to the original source and passage. Even NotebookLM does this.
e.g.: https://docs.cloud.google.com/vertex-ai/generative-ai/docs/g...
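The pattern is simple enough that a minimal sketch fits in a comment (the type and field names here are illustrative, not any particular framework's API):

    from dataclasses import dataclass

    @dataclass
    class Citation:
        source_url: str   # link back to the original document
        passage: str      # the exact retrieved span the claim rests on
        start: int        # character offsets into the source, so the UI
        end: int          # can highlight and deep-link the passage

    @dataclass
    class GroundedAnswer:
        text: str
        citations: list[Citation]

    def render(answer: GroundedAnswer) -> str:
        """Show the answer with numbered links to its supporting passages."""
        lines = [answer.text, ""]
        for i, c in enumerate(answer.citations, 1):
            lines.append(f'[{i}] {c.source_url} (chars {c.start}-{c.end}): "{c.passage[:80]}"')
        return "\n".join(lines)

The point is that every generated claim carries a pointer a human can click and check, instead of a bare string.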
It might be a solved problem in the sense that it has a possible solution, but not in the sense that it doesn’t happen with the tools most people would expect to be able to handle the task.
Peritract 3 hours ago [-]
It was already a solved problem with cmd/ctrl + f.
skygazer 3 hours ago [-]
Out of curiosity, if you asked for the same text extraction multiple times, each inside fresh contexts, is it likely to fabricate unique quotes each time? And if so, a) might that be a procedure we train humans to do to better understand LLM unreliability, and b) might we instrumentalize the behavior to measure answer overlap with non-LLM statistical tools?
Also, quote-presence testing/linking against source would seem to be a trivial layer to build on a chat interface, no LLM required. Just highlight and link the longest common strings.
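Something like this, say (a minimal Python sketch; the file name and the 90% coverage threshold are placeholders):

    import difflib

    def suspect_quotes(quotes, source, min_cover=0.9):
        """Return quotes whose longest common substring with the source
        doesn't cover (nearly) the whole quote, i.e. likely fabrications."""
        flagged = []
        for q in quotes:
            m = difflib.SequenceMatcher(None, q, source, autojunk=False)
            longest = m.find_longest_match(0, len(q), 0, len(source)).size
            if longest < min_cover * len(q):
                flagged.append(q)
        return flagged

    source = open("transcript.txt").read()   # the original transcript
    quotes = ["a quote the model returned"]  # whatever the LLM extracted
    for q in suspect_quotes(quotes, source):
        print("not found in source:", q)

An exact substring test already catches verbatim quotes; the fuzzy version above also tolerates minor whitespace and punctuation drift.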
HN is full of people saying ABCD should know better, and honestly I thought the same, but when I look at almost all of my friends working in critical domains such as judges, engineers, lawyers, or even doctors, they seem to trust ChatGPT more or less blindly. People get defensive when I point out to them that ChatGPT will make things up and that this is widely known; some even tell me it is the fault of "tech people" for not fixing it, and that they can't be expected to double-check every ChatGPT conversation. So I am very sure this problem is more prevalent than what we see, and that it is going to keep increasing.
WarmWash 4 hours ago [-]
Every single person, every one of them, that I have watched google something since AI Overviews launched will instantly reference the AI overview. And that model is some bottom-rung, high-volume model, not even Gemini.
jacquesm 2 hours ago [-]
The best way to deal with that is to block the AI overview in your browser.
jacquesm 2 hours ago [-]
Yes, this is the problem. If you give people something that has an oracular interface, they will treat it like an oracle.
andrewflnr 5 hours ago [-]
Your friends should know better. That their behavior is prevalent does not contradict that.
coffeefirst 2 hours ago [-]
This answer really isn't good enough. The providers can't aim to replace search and claim PhD-level intelligence that will do all the jobs, and then hide behind "it makes mistakes" in small print.
andrewflnr 2 hours ago [-]
I'm not making excuses for the providers either. But seeing through the inflated claims of commercial service providers is not a new skill.
crop_rotation 4 hours ago [-]
Yes, and the world should be a utopia, and everyone should be happy, and we all wish for world peace, yada yada yada. What you are saying is a vision of the ideal world as it should be, but it doesn't help anyone understand real-world problems.
andrewflnr 4 hours ago [-]
You can't seriously compare the problem of world peace with the problem of exercising the most basic level of critical thinking w.r.t. LLM output after it has already proven itself unreliable. That's not a utopian dream, it's a level of prudence on par with not sticking a fork in an electrical socket.
ffsm8 4 hours ago [-]
You're seriously overestimating the average person's ability to understand what LLMs are.
Look at all the influencers, streamers, and podcasters constantly asking them things and taking the answers as fact, live.
Isn't the Joe Rogan Experience the most watched podcast or something? Every episode I've stumbled upon, he "fact checks" multiple things via their sponsor, which is just an LLM provider specialized in news.
People aren't good at statistics. If something is close enough to the truth enough times, and talks authoritatively on everything in good English... guess what, they're gonna trust it.
andrewflnr 1 hour ago [-]
You don't need to know how an LLM works to realize "sometimes the magic ChatGPT box tells me wrong things". Even if you fully fall for the anthropomorphism, this only requires the same level of awareness as realizing that after the third or fourth thing your weird uncle tells you that turns out not to be true, maybe you shouldn't take him at his word.
ben_w 36 minutes ago [-]
If human psychology worked like that, lotteries wouldn't be a thing. Nor prayer. There wouldn't be horoscopes in newspapers, nor homeopathy.
One of the various oddities going on with LLMs in particular is them being trained with feedback from users having a chance to upvote or downvote responses, or A/B test which of two is "better". This naturally leads to things which are more convincing, though this only loosely correlates to "more correct".
jacquesm 2 hours ago [-]
I would happily bet that you too have fallen for this at least once. Unless you cut AI out of your life completely and do not interact with others.
AI output is like that COVID video about how contamination spreads: you almost can't avoid it unless you scrupulously check each and every thing presented as fact that you're exposed to. And absolutely nobody does that.
andrewflnr 1 hour ago [-]
> Unless you cut AI out of your life completely
Pretty close. I only touched ChatGPT a couple of times a few years ago and haven't used the others (on purpose, at least; Google forces its Gemini summaries on me, but I mostly avoid them because, umm, see above).
> and do not interact with others.
Most people I interact with are on the same page about AI. But I try to keep my critical thinking online anyway, like I always have. If someone tried to feed me AI slop, I would consider that person to have betrayed my trust and would, to put it gently, try to interact with them less.
philipov 3 hours ago [-]
You may demand that of yourself, but for others we must design around the fact that they are stupid. You do not have the power to change their stupidity, only your response to it.
andrewflnr 1 hour ago [-]
Indeed. I'm not sure why you think that's responsive to my post. I'm mostly pointing out just how deeply stupid they are.
Though if you have a useful response besides "weather the storm while everyone else learns the hard way", I'm listening.
bryanrasmussen 4 hours ago [-]
Yes, but the electrical socket in question is a fairly new-fangled one; who doesn't want to fork-test it a bit?
friedtofu 4 hours ago [-]
I think this is an issue with anyone who relies on any LLM. But yeah, I agree, and I have had similar issues where someone will get defensive because they just don't want to admit that they (the LLM's response) were wrong. It's hard to tell someone in a "nice/nonchalant" way:
"It's fine, the LLM just lied to you, but hallucinations and making claims based off of assumptions is just something they do and always have done!"
People don't like to feel dumb, and they don't want to feel betrayed by the same tool that gave them incredibly factually correct results that one time, only to give them complete and utter bullshit (that sounded legitimate) another time.
Also, yeah, it feels like it's everywhere these days and isn't showing any signs of slowing down (visited my parents, and my dad's using Siri to ask ChatGPT stuff now - URGHHHH), and I really hope we're both wrong.
joe_mamba 5 hours ago [-]
>but when I look at almost all of my friends working in critical domains such as judges, engineers, lawyers, or even doctors, they seem to trust ChatGPT more or less blindly
That's why I lost trust and faith in people who end up in positions like doctor, lawyer, or judge. When I was young I used to think they must be the smartest, highest-IQ people in society, having read the most books and having the highest levels of critical thinking and debate skills ever. When in fact they were only good at memorizing and regurgitating the information the school required to pass the exam that gave them that prestigious title, and that's it.
Now, in my mid-30s, when I talk to people from these professions over a beer, at a barbecue, or at any other casual gathering, I realize they're really not that sharp or well read or immune to propaganda and misinformation, and anyone could be in their place if they had put in the grind work at the right time. It's a miracle our society functions at all.
pessimizer 3 hours ago [-]
> almost all of my friends working in critical domains such as judges, engineers, lawyers, or even doctors, they seem to trust ChatGPT more or less blindly.
We do not live in a meritocracy, because society has no means to judge merit. We live in a society ruled by people who crammed before the tests, and who wrote the papers to agree with and flatter the teacher. Now they are the teachers (and bosses), and
1) expect to be flattered (and LLMs have been built as the ultimate flatterers),
2) feel that a good, ambitious student (or subordinate) will not question them and their work, but instead learn to conform to it, and
3) are not particularly interested in the quality of their work as such, but rather in the acceptance of their work. In certain professions (judges, doctors, high-level lawyers and engineers, politicians), they feel (with good reason) that they can demand acceptance of their work, and punish those who don't accept it.
This position is what they worked so hard as young people for. They were not working to become the best at their jobs. They were working to get the most secure jobs. The most secure jobs are the ones that bad or lazy work doesn't endanger.
doctorpangloss 5 hours ago [-]
On the flip side, so much ChatGPT usage, full of flaws, doesn't seem to really matter in various "critical domains". You can't generalize "critical".
ashwinnair99 5 hours ago [-]
The tool didn't fail here; the person did. An experienced journalist should know better. Editorial review exists for exactly this reason; if you skip it, this is what happens.
microtonal 5 hours ago [-]
But the article said he published it in his own Substack newsletter; I assume it is not under editorial control, since it is personal?
Hendrikto 4 hours ago [-]
> The tool didn't fail here, the person did
Both failed.
intended 6 hours ago [-]
Looking at the media ecosystem at large, gives me a case of gallows humor.
In some sections of the ecosystem, firms still penalize journalists for errors. In other sections, checking reduces the velocity of attention-grabbing headlines. The difference in treatment is… farcical.
We need more good journalists, and more good journalism - but we no longer have ways to subsidize such work. Ads / classifieds are dead, and revenue accrues to only a few.
I have no idea how we square this circle.
PeterStuer 6 hours ago [-]
We can't square this circle. It's why they're all A/B testing headlines (resulting in the most deranged partisan clickbait), have killed off their (too expensive) newsrooms (especially for international news), rely solely on (barely) rewriting AP, Reuters, and PRNewswire, and fill their sites with opinion rather than factual reporting, in support of gov handouts to the sector.
Chinjut 7 hours ago [-]
Good lord, even the apology is AI generated: "That was not just careless—it was wrong."
His non-apology apology even follows a familiar pattern: I wrote it myself but just used AI for some help, and it inserted false quotes! Bad tech! But I have now learned my lesson!
https://pressanddemocracy.substack.com/p/i-am-admitting-my-m...
Very similar to what a rector recently wrote when she got busted delivering an AI-generated inaugural speech at her new university job.
None of it is true, of course. These people are just sorry they got caught.
hvb2 5 hours ago [-]
I think his apology was actually written in Dutch, so this might be an automated translation?
It is a faithful translation of the original Dutch. Dutch is structurally very similar to English, so this type of nuance carries over pretty much intact.
Source: https://www.linkedin.com/posts/peter-vandermeersch-a4381b30_...
Dutch: “Dat was niet enkel onzorgvuldig, het was fout.”
English: “That was not just careless—it was wrong.”
I’d say the only difference is the em dash.
Whether you consider it proof of AI is up to y’all.
hvb2 4 hours ago [-]
I'm not disagreeing it's a bad translation. Just saying that it's not the source
rsynnott 6 hours ago [-]
Particularly given that the dreaded em-dash is not commonly used in Irish or UK English; it’s mostly a US English thing.
microtonal 4 hours ago [-]
The original (?) apology in Dutch does not use em-dashes:
https://steady.page/en/journalistiekondervuur/posts/dd6e066f...
I’m tempted to agree, but this is a case where I think there’s more human than AI. Maybe he used LLMs for a bit, and changed parts of it. Maybe he is patient zero for LLM speak?
shahbaby 5 hours ago [-]
> That was not just careless – it was wrong
lol
mmooss 6 hours ago [-]
They said earlier that they didn't verify the quotes. I understand them to mean that the LLM outputted text that included quotes. They assumed the output was accurate and found it so appealing, on an emotional level, that they just went with it without checking.
The most valuable lesson here, by far, is not about other people but about ourselves. This person is trained, takes it seriously, and advocates for making sure the AI is supervised, and got caught in the emotional manipulation of LLM design [0].
We all are at risk. If we look at the other person and mock them, and think we are better than them, we are only exposing ourselves to more risk. If we think - oh my goodness, look what happened, this is perilous - then we gain from what happened and can protect ourselves.
(We might also ask why this valuable tool includes such a manipulative interface. Don't take it for granted; it's not at all necessary for LLMs to work, and they could just as easily sound like a-holes.)
[0] I mean that obviously they are carefully designed to sound appealing
camillomiller 6 hours ago [-]
I have witnessed in person what LLMs have done to the mind of seemingly intelligent people. It’s a disaster.
cinntaile 6 hours ago [-]
Don't leave us hanging. What happened?
camillomiller 5 hours ago [-]
A CTO sent me a message that opened with:
“Here’s a friendly message that will perfectly convey what you want to say”.
A double-PhD friend says she has to talk to ChatGPT for all sorts of advice and can't feel safe not doing it, "because you know I'm single and don't have a companion to spitball my ideas". She let ChatGPT decide which route to take to get to a certain island, and she got stranded because the suggested service didn't exist.
I have more examples. It’s a fucking mind virus.
sigseg1v 5 hours ago [-]
How is the getting-stranded example different from asking on a travel forum how to get somewhere, and an active, well-intentioned user who isn't familiar with your area of travel answers, gives you wrong instructions, and you get lost?
andrewflnr 4 hours ago [-]
The key missing step is where the traveler exercises critical thinking and checks the advice they get. Some people seem to turn that off for LLMs.
array_key_first 58 minutes ago [-]
It's because we spent the last 50 years training people that computers are algorithmic, cold, and don't make human mistakes. Your calculator can't tell you the meaning of life, but it will never get 2 + 2 wrong.
Well, now the calculator can tell you a meaning of life, but it'll get 2 + 2 wrong 10% of the time.
shahbaby 5 hours ago [-]
Because the people answering aren't probabilistic parrots? If they get it wrong, there's usually an understandable reason behind it.
dijksterhuis 52 minutes ago [-]
Cunningham's law [0] [1] increases the likelihood that at least one other person will point out the error and correct it. Chances are you'll get more than one person posting.
LLMs don't do this. They give confident language output, not correct answers.
[0]: https://meta.wikimedia.org/wiki/Cunningham%27s_Law
[1]: https://xkcd.com/386/
Because the vast and overwhelming majority of the time, if you ask a question into the ether that nobody has a good answer to, most people will gloss over it and not bother answering, as attested by decades of relatable memes (https://xkcd.com/979/). In contrast, the chatbot is trained to always attempt an answer, and is seemingly disincentivized via its training set to just shrug and say "I don't know, good luck fam".
dude250711 6 hours ago [-]
They stop thinking and they stop verifying output too.
maxrmk 5 hours ago [-]
Ironic coming from the Guardian. One of their journalists consistently publishes AI slop and the paper is in denial about it.
https://x.com/maxwelltani/status/2023089526445371777?s=46
It doesn't seem AI generated to me. Are we at the point where you have to write in a particularly outrageous style in order to not be accused of using AI?
gruez 5 hours ago [-]
>Are we at the point where you have to write in a particularly outrageous style in order to not be accused of using AI?
I don't think we've gotten to the point where all popular writing styles (e.g. hamburger paragraphs) are considered suspect, but the "it's not just X, it's Y" construction [1] attracts particular scrutiny.
[1] https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing#...
I was giving this the benefit of the doubt as well and was just looking at his older writings that have a little "This article is more than 5 years old" banner above it. Looks totally different indeed.
maxrmk 5 hours ago [-]
Fair enough. It reads as extremely AI generated to me. But that isn’t completely reliable.
PeterStuer 6 hours ago [-]
"Journalism" over here seems to have died a long time ago. Most if not all of the former "quality newspapers" unfortunately seem to have devolved into what could be more accurately described as "pro regime activist blogs".
hvb2 4 hours ago [-]
If by "over here" you mean the US, that sounds about right. Can be summed up succinctly into "don't bite the hand that feeds you".
PeterStuer 2 hours ago [-]
The EU is actually just as bad.
phreack 7 hours ago [-]
> “It is particularly painful that I made precisely the mistake I have repeatedly warned colleagues about: these language models are so good that they produce irresistible quotes you are tempted to use as an author. Of course, I should have verified them. The necessary ‘human oversight’, which I consistently advocate, fell short.”
What? Irresistible quotes? This betrays a terrible way of thinking as a journalist. Basically an admission of wanting to fake news that'd sound good. At that point just write fiction.
Obscurity4340 6 hours ago [-]
Can't you, like, ask or instruct it to create a bibliography with citations, or at least put the source of any quote next to it for reviewing purposes?
sofixa 6 hours ago [-]
> Basically an admission of wanting to fake news that'd sound good
How did you read that? Something sounding good and making sense and you wanting it to be true doesn't mean you'd fake it.
abaieorro 6 hours ago [-]
> I wrongly put words into people’s mouths, when I should have presented them as paraphrases
Journalists have been doing this for decades: stitching and editing words out of context to put words into people's mouths! I will take AI hallucinations over journalist hallucinations any time; at least the machine has no hostile intent and is making a genuine error!
garciansmith 6 hours ago [-]
The idea that somehow AI is magically unbiased and not influenced by those making it is incorrect.
hulitu 6 hours ago [-]
> I will take AI hallucinations over journalist hallucinations any time; at least the machine has no hostile intent
Famous last words. What do you think is the main application for AI ? Spreading propaganda.