...yes?
Is 120K people enough for witch hunting? It happens in groups of hundreds. It happens in groups of tens.
Three or more? Yes, we have three or more members.
Impostors win when a crewmate and the impostor vote off the other crewmate.
Don't try to play Mafia in real life.
I get the feeling that we have different definitions of witch hunt, but that I also don't really have a strong implicit understanding of what mine IS, so I'll take your word for it.
It doesn't help that "AI detectors" are total bullshit with no conclusive evidence to back them up. They're just pattern checkers that look for commonly used AI elements at the best of times, and LLMs are TRAINED on people's stories, so the shit they copy is patterns they learned from real people. Your average coin flip has a better chance of detecting AI than most of the detection tools people use, if they even bother.
Yeah, the AI detectors are complete dogshit at it hahahaha. False positives, false negatives; I haven't found a single one with a success rate better than, like, 15% lmao.
Like if I hear one more person call something AI because someone used an em-dash I'm gonna scream. Speaking as a member of a thriving author community who have been espousing the use of em-dashes as a super punctuation for like ten years, it's just such a stupid thing to base an assumption on. To clarify, I HATE em-dashes, they try to do too much and they annoy me, but they're ABSOLUTELY a thing that many professional authors love to use, which is where the AI picked up the habit to begin with lol.
Aren't people pretty bad at telling AI writing from just plain bad writing if it's even slightly edited anyway? Like people see a single em dash and assume it must be AI.
Yep.
Most people who tell you they can spot AI at a glance can't. The biggest sign you can have that someone isn't as good at spotting it as they think is when they tell you that they are absolutely, completely sure.
Not that you can't spot it - I do it sometimes when I am reading. But if someone thinks they can absolutely, always, one hundred percent of the time tell? It usually means that they're just assuming anyone doing something they don't like is AI writing and drowning in a sea of false positives and confirmation bias.
I strongly disagree that humans can't detect AI writing, though. Sure, if your heuristic is simply "em dash = AI," then yeah, you'll be no better than a coinflip. Probably worse, actually.
But that's not how I can tell. Fiction is still a little harder than non-fiction, but even in fiction, AI has a WAAAAAAY different tone than humans do. It just doesn't write like any person.
The trick is you have to throw away "bad writing" as a marker for AI. AI makes fantastic, impeccable sentences, better than 99% of amateur writers; but as soon as it tries to string them together into paragraphs, and string those paragraphs into a chapter or a page, it completely fails to make anything with a coherent tone or emotional throughline. Crucially, it fails in a completely different way than "bad writers" do, because all human writers, even the bad ones, have something LLMs can't replicate: a brain, which has been "trained" its entire life to communicate with spoken language.
Someone with no grasp of grammar, or spelling, or rhetoric, or conciseness, or variation, or any other pillar of good writing, will STILL make a more compelling "argument" than AI. Unless they have a condition like schizophrenia, but in cases like that, it's equally obvious it can't be AI, because the spelling and grammar and organization will be "substandard," compared to AI. "AI would never write this," basically.
LLMs work by a statistical, probabilistic model: essentially, predicting the most likely next word given everything that came before it, over and over. As soon as you try to GENERATE something genuinely new with that probabilistic model, it falls apart, except for recreating or modifying the exact material it was trained on. Just by the way it works, it's simply incapable of doing it. It can ape it for a while, but eventually it completely cocks it up.
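To make the "predict the next word" idea concrete, here's a toy sketch. This is NOT a real LLM (real models condition on the whole context with a neural network over a huge vocabulary); it's a hypothetical hand-written bigram table standing in for the learned probability distribution, just to show the sample-a-word, append, repeat loop:

```python
import random

# Hypothetical toy distribution: P(next word | previous word).
# A real LLM learns something like this (but vastly bigger,
# conditioned on the entire preceding text) from training data.
bigram_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "<end>": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"ran": 0.7, "sat": 0.3},
    "sat": {"the": 0.2, "<end>": 0.8},
    "ran": {"the": 0.3, "<end>": 0.7},
}

def generate(start, max_tokens=10, seed=0):
    """Repeatedly sample the next word from the distribution
    conditioned on the previous word, until <end> or the limit."""
    rng = random.Random(seed)
    tokens = [start]
    for _ in range(max_tokens):
        dist = bigram_probs.get(tokens[-1])
        if dist is None:
            break
        words = list(dist)
        weights = [dist[w] for w in words]
        nxt = rng.choices(words, weights=weights)[0]
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("the"))
```

Every word is locally plausible given the word before it, which is exactly why each individual sentence can look fine while the larger structure drifts: nothing in the loop is tracking a point being made.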
If you're ever unsure whether something was written by AI, ask yourself if it sounds like it was written by the senior management at a company that makes office supplies. Technically perfect, but emotionally hollow.
I find all variations of 'if they're angry about being accused of [bad thing], they probably did it' deeply suspect. If you put a lot of care and effort into something, and someone responds to that with 'did a machine write this for you', or the more confident 'a machine wrote this for you', it's not unreasonable to be a bit offended and defensive.
You're allowed to be a little rude if you pair it with constructive criticism, though. By all means, if you notice some flaws or distasteful quirks in the writing, feel free to mention them in-thread, and you can even guess at it being caused by AI. Without the substantive criticism, though, calling something AI is just saying 'this is shit' and 'you didn't write this', and we'd prefer you didn't.
I've written things that people have asked if were AI, and I just say "no, lol." I don't get offended by it, though I guess I can see how someone else would; so, fair enough.
Anyways, I actually think that even IF you're giving constructive criticism, you should NEVER be rude about it. If you point out an actual flaw in someone's writing, and tell them how to fix it, but do it in an asshole-ish way, then what you did was "be an asshole," not "give concrit."