
Clarification regarding AI policy

I suppose broadly my two cents is that I'm concerned that if there are witch hunts, they're going to run in the opposite direction that everyone seems afraid that they will, and that QQ is going to wind up a place extremely hostile to any discussion of AI writing and its pitfalls that doesn't take place in a 'safe space' sequestered off from any actual stories and the authors writing them. In an environment like that, I'm worried that the writing quality of any and all stories using AI assistance would plummet because nobody felt like it was worth it to bring it up and lazy, LLM dependent authors would be able to get away with just about any level of slop that isn't outright illegible.

It's The Emperor's New Clothes, but instead of non-existent threads it's a fucking wasteland of emdashes as far as the eye can see.
I mean, it's pretty disingenuous to believe that every case is like the one you mention. You can see in this thread itself that that's not really true, or you just need to look at the rest of the internet (reddit, youtube and so on). Posts like the one before yours, the one you liked, are much more common in general.

If the author replies to your first post that they aren't using AI, even if you are 99% sure they are doing so, I'm not sure why you would start a discussion for several pages trying to defend your position. At that point just disengage and consider it a lost cause or a very special case. To be clear, an author that lies about it does not really come out smelling like roses either, but in this case insisting on it is being an ass.

You can criticize the issues on their own at that point if you really want. So unless your main issue was the AI itself, I don't see how that's any less valuable. "Hey, you're overusing the emdash to death" is just as valid.

People end up trying to hide AI usage because some idiots will just consider any amount of it "ai slop not worth reading." I particularly do not care for those people, but I see how many others will prefer to avoid that sort of thing. Comments do get a lot more aggressive outside of QQ.
 
In an environment like that, I'm worried that the writing quality of any and all stories using AI assistance would plummet because nobody felt like it was worth it to bring it up and lazy, LLM dependent authors would be able to get away with just about any level of slop that isn't outright illegible.
Frankly, while it might be helpful to know and refer to them using AI in some specific contexts, such as specific kinds of "AI slop" - i.e. over-represented lexical patterns - the truth is there's no such thing as poor quality *because* of AI, because it's still the author's responsibility to proofread before posting and to only do so if it meets their standards. If they are someone willing to post bad writing, then they, *personally*, must lack the ability to see it or the willingness to care. So if it's bad, it's because they looked at their story (or worse, did not) and found it good when it was not.

So rather than worrying about telling them how to use AI if they're using it, you should be telling them, directly, what is wrong with the narrative regardless of how that narrative was written. Cure that blindness, if they are willing to listen (probably not).

For example, you might say: "You overuse the sentence pattern 'Not <word>. [Not <word>.] <Word>.', it shows up like once every two or three paragraphs, sometimes many times in a paragraph, this is bad flow."

Don't worry about whether they're using AI or not. Either they care about their quality and will use your advice, will explain why they can't, or they don't really care and so neither should you. If they want to 'get away' with something, they will no matter what anyone says.
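(As a purely illustrative aside, not part of the original post: a concrete critique like the one above could even be backed by a quick script. The regex and the per-paragraph threshold here are my own invented approximations of that "Not X. Not Y. Z." cadence, not an established detector.)

```python
import re

# Rough heuristic for the "Not <word>. [Not <word>.] <Word>." cadence
# described above. Pattern and threshold are illustrative assumptions.
PATTERN = re.compile(r"\bNot\s+\w+[.!]\s+(?:Not\s+\w+[.!]\s+)*\w+[.!]")

def flag_overuse(text: str, per_paragraph_limit: int = 1) -> list[int]:
    """Return indices of paragraphs where the cadence appears more often
    than per_paragraph_limit."""
    flagged = []
    for i, para in enumerate(text.split("\n\n")):
        if len(PATTERN.findall(para)) > per_paragraph_limit:
            flagged.append(i)
    return flagged

sample = ("Not fear. Not doubt. Resolve.\n\n"
          "He walked on. Not quickly. Not slowly. Steadily. "
          "Not ever. Never.")
print(flag_overuse(sample))  # [1]: the second paragraph exceeds the limit
```

Pointing an author at concrete, countable repetition like this keeps the critique about the prose itself, which is the point being made above.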
 
Disappointing stance, for sure. "Generative" AI only serves to illustrate what kind of slop the lowest common denominator is willing to consume. Unchecked, this is even worse.

I've seen what happened to pixiv, which introduced AI tagging but didn't bother to enforce it. My pixiv options should prevent AI slop from appearing in my search results, but it still does, much to my frustration. Given the moderation's indifferent stance, I fail to see how QQ would fare better.

In the end, if admins and mods don't care about keeping things separate or even creating subforums for AI, I don't see why any creative here should care about keeping their stories on QQ. I definitely don't care to see results of prompts and regurgitation of stolen content coexisting with what I wrote or would have been willing to write. Some people might be going over the produced slop, but the vast majority will be just dumping prompt results as is.
 
I've been working with Novelcrafter to write. You still need to edit it and write your own stuff, but a 1k-word scene/chapter I write can be turned into a 3-5k one. Its codex entries for characters, locations, powers, and scene summaries are great for keeping AI generation from going off the rails.

Reasons I use it:
1.) I'm a slow writer
2.) I have issues with spelling and grammar
3.) My schedule keeps me too busy to sit down and write for long periods of time. (12-hour shifts)
4.) I like experimenting with AI

Runner up is sudowrite.

But I do agree pure AI-generated stories are sloppy and have plenty of issues. Some are so horrible I can't even finish reading a chapter.
 
Don't worry about whether they're using AI or not. Either they care about their quality and will use your advice, will explain why they can't, or they don't really care and so neither should you. If they want to 'get away' with something, they will no matter what anyone says.
Yep, exactly what I said too. The thing is, more people are focused on how it came to be rather than the writing itself.

The human is always the final filter on what gets posted. I have the same feeling when some people excuse posting mangled fingers because it's the AI's fault. You could just have fixed that yourself with inpainting (or owned up to it being more work than you were willing to put into that image).

Disappointing stance, for sure. "Generative" AI only serves to illustrate what kind of slop the lowest common denominator is willing to consume. Unchecked, this is even worse.

I've seen what happened to pixiv, which introduced AI tagging but didn't bother to enforce it. My pixiv options should prevent AI slop from appearing in my search results, but it still does, much to my frustration. Given the moderation's indifferent stance, I fail to see how QQ would fare better.

In the end, if admins and mods don't care about keeping things separate or even creating subforums for AI, I don't see why any creative here should care about keeping their stories on QQ. I definitely don't care to see results of prompts and regurgitation of stolen content coexisting with what I wrote or would have been willing to write. Some people might be going over the produced slop, but the vast majority will be just dumping prompt results as is.
Good riddance, then. You conveniently avoid the problem of not being able to tell what's AI-assisted or generated and what isn't (not with any degree of certainty, at least). So there would basically be a rule based on feels and vibes rather than any concrete action mods could take beyond the duties they already have.

By the way, regurgitating the 'stolen content' line is pretty ironic considering the topic.

I've been working with Novelcrafter to write. You still need to edit it and write your own stuff, but a 1k-word scene/chapter I write can be turned into a 3-5k one. Its codex entries for characters, locations, powers, and scene summaries are great for keeping AI generation from going off the rails.

Reasons I use it:
1.) I'm a slow writer
2.) I have issues with spelling and grammar
3.) My schedule keeps me too busy to sit down and write for long periods of time. (12-hour shifts)
4.) I like experimenting with AI

Runner up is sudowrite.

But I do agree pure AI-generated stories are sloppy and have plenty of issues. Some are so horrible I can't even finish reading a chapter.
Novelcrafter seems pretty good but it still needs a lot of work and editing as usual. It's honestly a good alternative to Scrivener even without AI.
 
What do you mean by this?
That plenty of people keep repeating that line because they hear it somewhere (usually twitter, reddit or some artist communities) without really thinking about it. Most can't even argue in its favor because they don't understand the tech well enough. Not saying it's the case for them, but repeating that line constantly doesn't really make it any more true... and it's really common. It's ironic given the other criticisms leveled at AI and the people who use it.

Yes, raw AI outputs are probably not copyrightable and such. That's very different from actually saying that the outputs are just stolen content.
 
You conveniently avoid the problem of not being able to tell what's AI-assisted or generated and what isn't (not with any degree of certainty, at least). So there would basically be a rule based on feels and vibes rather than any concrete action mods could take beyond the duties they already have.
It's not my job or duty to provide solutions when the moderation team is content doing absolutely nothing. Save your venom for elsewhere or don't bother replying.

By the way, regurgitating the 'stolen content' line is pretty ironic considering the topic.
Stolen content is stolen content. You can pretend prompt results are generated ex nihilo, but they are not. The data that models are trained on is stolen, plain and simple.
 
It's not my job or duty to provide solutions when the moderation team is content doing absolutely nothing. Save your venom for elsewhere or don't bother replying.
Venom? I've heard it was the AI people who were delicate in this very same thread. Pointing out an issue with your complaints is very far from anything close to venomous.
Stolen content is stolen content. You can pretend prompt results are generated ex nihilo, but they are not. The data that models are trained on is stolen, plain and simple.
And stolen content is not the same as the outputs. For example, it's been legal to train on publicly accessible information for many other kinds of models for years; image generation models do exactly that. This is something that still needs to be settled by the law rather than established as fact by people who have something to gain from it, or who just have an issue with tech they don't understand.

Or worse, people who actually earn money from the properties of others. Should we just forget that part too?
None of what you just said lines up with any definition of irony I've ever heard.
They are guilty of one of the main things AI itself is criticized for. How is that not following the definition?

AI is unoriginal and steals content created by others without any understanding or thought = People repeat an argument against AI first said by others without any understanding of the topic or real thought about it.
 
The data that models are trained on is stolen, plain and simple.
Depends on the model, honestly? For text generation, a lot of source material is stuff like public domain works (owned by no one), old published works (especially the type owned by large corporations that will happily resell otherwise mostly-defunct IP rights for real cash money), or social media posts (legally the property of the social media site, which then turned around and sold them to the AI company). Unlike with image generation, having good source material matters a fair bit more than having abundant source material, so there's been more focus on curating that source material, which (at many companies) has included better curation of the legal rights to the materials used. That's at least a step above the image generation scene, which was a lot more like eagerly devouring every publicly accessible dataset that includes a catalog of tagged images.

Whether or not you approve of it, the datamining of social media posts is arguably not theft. After all, few people bother to read things like the terms of service even for the sites they use regularly, especially when it comes to a change in those terms. Maybe they didn't think through the implications of the service's host company owning their posts, or didn't care when "use that data in training AI models" was added to the "things we're allowed to do with the data we collected from you" section. Sure, there are doubtless authors who didn't realize that they were giving away their IP rights at the time, but that's a wider problem with the current structure of the Internet, not a problem with AI in particular.
 
Venom? I've heard it was the AI people who were delicate in this very same thread. Pointing out an issue with your complains is very far from anything close to venomous.
I've been courteous enough to not quote your opening statement, but there's no point in it if all you are in here for is trying to own someone who takes an opposite stance.

And stolen content is not the same as the outputs. For example, it's been legal to train in publicly accessible information for many other models for years. Image generation models do this last one. This is something that still needs to be defined by the law rather than established as a fact by people who have something to gain from it or just have an issue with the tech (that they also don't understand).
You're missing the point as expected. This hyperfocus on output is doing you no favours, except showing your clown behaviour.

So, is all content in the database public data? No copyrighted stuff at all, eh? Everything obtained with explicit consent of creators? No, no, and no. Definitely no reason why every single creative industry in the US was on strike because of AI, eh?

If all you can provide is being blind to faults of AI, then this talking past one another is over.
 
They are guilty of one of the main things AI itself is criticized for. How is that not following the definition?

AI is unoriginal and steals content created by others without any understanding or thought = People repeat an argument against AI first said by others without any understanding of the topic or real thought about it.
What the hell are you talking about? Get thee to a dictionary.
 
So, is all content in the database public data? No copyrighted stuff at all, eh? Everything obtained with explicit consent of creators? No, no, and no. Definitely no reason why every single creative industry in the US was on strike because of AI, eh?
Some AI models were built that way, yeah. It takes more attention to detail than "scraper goes brrrr", but that attention to detail means that things like random gibberish produced by scrapers accidentally grabbing part of the site framework instead of just the content, or OCR failing to digest a bunch of pictures doesn't wind up tainting the end product.

And honestly, the massive strikes were more about "this is going to replace us" than "this is stealing from us".
 
I've been courteous enough to not quote your opening statement, but there's no point in it if all you are in here for is trying to own someone who takes an opposite stance.
You are right, the "good riddance" was more aggressive than the rest. Still, it matches your attitude, considering you give no actual solutions and demand that the mods do something miraculous about something that isn't even their actual job. There's literally no way to detect AI-assisted text or images.

Bluntly? None of the staff have strong feelings on this, if they care at all. Even if anyone did, we don't have the tools to reliably enforce such a rule - they flat-out don't exist; most 'AI' detectors are less reliable than letting a pitbull babysit your cage of hamsters. Sure, it may scold one that is wrong… or it could just decide that today, everyone gets flagged and eaten.

If you have such strong feelings about it… you're welcome to go start your own splinter-site; expect about $2,500 a year to get features close to comparable to here, assuming you're doing all the work yourself. If you're hiring assistance… hope you've got 5 digits lying around.

On the topic of witch hunts:
The main reason we put this up is that we were seeing witch hunts and harassment starting over whether people were using AI or not. These were reported, and were indeed breaking the rules, so we issued the standard warnings. So, please report this kind of harassment.

You're missing the point as expected. This hyperfocus on output is doing you no favours, except showing your clown behaviour.

So, is all content in the database public data? No copyrighted stuff at all, eh? Everything obtained with explicit consent of creators? No, no, and no. Definitely no reason why every single creative industry in the US was on strike because of AI, eh?

If all you can provide is being blind to faults of AI, then this talking past one another is over.
I'm only seeing one clown here, and it's the one who started with that sort of name-calling.

Training on things that are publicly accessible != training only on uncopyrighted data or with the consent of creators

And even then, you need to prove a lot more than the fact that they were used to have grounds to call the output stolen content. I 'hyperfocus' on the output because it's the actual subject here (the text or images). Trying to argue that sort of stuff as a gotcha is just embarrassing.

I do agree we'll probably gain nothing from discussing things.

Some AI models were built that way, yeah. It takes more attention to detail than "scraper goes brrrr", but that attention to detail means that things like random gibberish produced by scrapers accidentally grabbing part of the site framework instead of just the content, or OCR failing to digest a bunch of pictures doesn't wind up tainting the end product.

And honestly, the massive strikes were more about "this is going to replace us" than "this is stealing from us".
Good luck adding some nuance to this...

What the hell are you talking about? Get thee to a dictionary.
At this point the one who needs a dictionary is you. You can disagree with the principle of people repeating arguments without understanding them, that's not an issue; just don't try to make shit up to say it's not ironic. It can't be explained more simply than the last sentence you quoted.
 
For text generation, a lot of source material
As long as the percentage is not 100% and the scraping is an opt-out instead of an opt-in (and even that opt-out option is not a guarantee in the current climate), that makes no difference to me.

the datamining of social media posts is arguably not theft.
A lot of things are legally not theft, but that doesn't change the spirit of what it is.

Sure, there are doubtless authors who didn't realize that they were giving away their IP rights at the time, but that's a wider problem with the current structure of the Internet, not a problem with AI in particular.
Oh, it's a problem with everything, quite. Doesn't make AI any less of a problem on its own, nor do I condone the victim blaming. Yes, everyone should read ToS, but no, it doesn't make the companies that abuse ToS innocent and free of guilt.

And honestly, the massive strikes were more about "this is going to replace us" than "this is stealing from us".
That's interchangeable in context. Voice actors would have been replaced by a voice data bank built from their stolen previous work. Script writers would have been replaced by a giant database of their own work. Same for artists, same for actors, same for everyone. You can argue the wording, but the meaning remains.
 
No, it's still you. How about you tell me what you think "ironic" means. Here's a hint: it doesn't mean "wrong over and over."

a situation in which something which was intended to have a particular result has the opposite or a very different result:

1) You (general) criticize AI for stealing and being unoriginal without critical thought or originality.
2) You repeat the same argument you heard somewhere else (stealing) without really understanding the nuance or tech beneath it (without critical thought or knowledge).
3) Your argument (2) is an example of the things you're criticizing in (1). Your form of argument contradicts what you're trying to say with that argument. This is "a situation in which something which was intended to have a particular result has the opposite or a very different result."

Clear enough for you?

Basically, how you are forming an argument goes against what you're trying to say with said argument.
 
Eh, in all honesty the argument that writing AIs make use of stolen content is arguably the weakest in my eyes. At that point anyone who takes particular inspiration from a book is stealing (before you say that's different: copying a writer's particular style is both more blatant and more accidental than copying an artist's style, because unless you take proper classes it's a lot of monkey see, monkey do based on what you read). If AI writing has particular tells and vibes, then it is not meaningfully copying a particular author, and in fact with the massive amount of writing available online I find it hard to imagine how any voice created from that could be anything but generic. Also, by this logic I would argue that most fanfiction authors who accept money are stealing, because they are directly aping tones and styles from pre-established IPs.

Anyways, with regard to AI and stealing, I am of the opinion that it is much better to accuse someone of theft if they purposefully coached the AI into aping someone else's work rather than taking the stance of "it's all theft", which I feel is both a misnomer and reductive.

I probably said this earlier in the thread, but if AI-generated content becomes significant I would be up for isolating it into its own subforum. The filters on QQ have always been loose and I'd prefer to keep it that way. This is not meant to be a bespoke fanfiction archive like AO3.
 
Fanfiction itself is already in a legal gray area because the author is making use of characters/settings that fall under someone else's copyright, and it's technically an even more blatant violation if the writer makes money from such work either by writing commissions or collecting donations specifically for the purposes of writing fanfiction (like many prominent authors here do). The only reason copyright holders don't bring down the hammer of the law is simply because it is beneath their notice most of the time. So in that sense the whole "AI steals" argument is kind of pointless in this discussion, at least in my opinion, because we're all already making and consuming works that are derivatives based on someone else's works.

As for the whole "slop" thing, I'd say just judge the work on its own basis rather than where it came from. It's free content in the end, and the author is not under any obligations beyond having to follow the same rules that everyone else has to. At most I'd say that people should be encouraged to use the tag system to tag work that's AI-assisted, just so people who don't want to see that stuff can more easily avoid it and don't end up whining at unknowingly being subjected to "slop."
 
It's not my job or duty to provide solutions when the moderation team is content doing absolutely nothing. Save your venom for elsewhere or don't bother replying.
Reliable detection of AI-generated text, without false positives, is impossible in practical terms.

If it was possible to create a consistent, objective process that said "yes it's AI" reliably, that procedure could, and would, be used to train LLMs to generate better outputs. And by better, I mean outputs that lack whatever indicators "prove" they are AI. Certainly so for narrative or image.

Not because the AI developers have any interest in fooling detectors (frankly they barely care about creative writing at all; it's an incidental capability intrinsic to the way LLMs work), but because any output a human wouldn't produce is, almost by definition, a flaw. And indeed, I have no doubt that the developers (OpenAI, Meta, Google, etc.) did exactly this for anything their massively overpaid people could create a reliable signal for. They're not trying to make bots that sound like bots, they're trying to make bots that sound like people. And yes, it's imperfect, but only in ways they can't generate a signal for; if they could measure it, it'd be gone, except for edge cases rooted in LLM alignment that I doubt most here care to take a deep dive into.

And of course, if it could be done reliably we'd see tools that actually work, not all these scam detectors that say "it's maybe possibly probably..." and falsely flag things left and right.

This matters because - as the mods have said, as others have said - it means a rule against AI-assisted or AI-generated writing literally cannot be enforced with any fairness or reliability. The mods would have to just say "feeling cute, seems AI-ish, banbanban." And (also as others have mentioned) there have been many cases online of people claiming things were AI, only for evidence to surface that they weren't. If you believe at all in "innocent until proven guilty," then you should understand that that would be unjust.

With that established - yes, the mods also said they don't personally care. But if they said they care and feel really bad but it's still impossible to do fairly... well, maybe that'd be better to you, but it wouldn't change anything in practice, you know?
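(A toy illustration of the point above, entirely my own sketch rather than anything real detectors do: a naive "detector" keyed to a single surface cue, em-dash density, happily flags a human poet. The threshold and samples are invented.)

```python
EM = "\u2014"  # em dash, the surface cue this toy "detector" keys on

def emdash_rate(text: str) -> float:
    """Em dashes per 100 whitespace-separated tokens."""
    tokens = max(len(text.split()), 1)
    return text.count(EM) / tokens * 100

def naive_detector(text: str, threshold: float = 1.5) -> bool:
    # Arbitrary cutoff: flags any dash-heavy prose as "AI".
    return emdash_rate(text) > threshold

# Dickinson's dash-heavy style gets flagged; bland prose sails through.
human = f"I dwell in Possibility {EM} a fairer House than Prose {EM} more numerous of Windows {EM}"
bland = "The quarterly report indicates steady growth across all sectors."
print(naive_detector(human), naive_detector(bland))  # True False
```

Any single measurable cue like this is exactly the kind of signal a developer would train away, which is why the surviving "tells" are the unmeasurable ones.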
 
1) You (general) criticize AI for stealing and being unoriginal without critical thought or originality.
2) You repeat the same argument you heard somewhere else (stealing) without really understanding the nuance or tech beneath it (without critical thought or knowledge).
3) Your argument (2) is an example of the things you're criticizing in (1). Your form of argument contradicts what you're trying to say with that argument. This is "a situation in which something which was intended to have a particular result has the opposite or a very different result."

Clear enough for you?

Basically, how you are forming an argument goes against what you're trying to say with said argument.
So, you're trying to discredit an argument by claiming its proponents "stole" it, and do not understand it themselves, which is what they accuse the LLMs and their trainers of doing to text and drawings. That's not an example of irony. That's just you using the fallacious tu quoque discussion technique.
 
So, you're trying to discredit an argument by claiming its proponents "stole" it, and do not understand it themselves, which is what they accuse the LLMs and their trainers of doing to text and drawings. That's not an example of irony. That's just you using the fallacious tu quoque discussion technique.
I'm not trying to discredit anything (please point out where I did). What they think about AI is not wrong because they do the same (that's just funny); people have already given plenty of other arguments, and I myself have done so in other threads too. I only said that he is free to go, like a mod said to people who have such strong feelings about it, and that their magical solution doesn't exist.

After that we actually kept talking and then agreed that the discussion wouldn't be productive anymore.

You asked what was ironic and I replied. I know you weren't doing it in good faith, but I don't mind explaining it.

By the way, regurgitating the 'stolen content' line is pretty ironic considering the topic.
Not saying it's the case for them, but repeating that line constantly doesn't really make it any more true... and it's really common. It's ironic given the other criticisms leveled at AI and the people who use it.

And yes, the case I referred to is still a textbook example of irony. Good talk.
 
I vehemently disagree. You have wasted my time and your own.

👍

And please, if you are going to reach for the ad hominem attack, at least learn how to read what you link.
 
I mean, it's pretty disingenuous to believe that every case is like the one you mention. You can see in this thread itself that that's not really true, or you just need to look at the rest of the internet (reddit, youtube and so on). Posts like the one before yours, the one you liked, are much more common in general.

I agree. I never claimed to believe every case was like the one I mentioned, nor did I claim to believe every case would come to be that way in the scenario I described. And I also agree that on this site and on the internet in general there's more hostility toward people using LLMs than toward people criticizing how it's used.


You can criticize the issues on their own at that point if you really want. So unless your main issue was the AI itself, I don't see how that's any less valuable. "Hey, you're overusing the emdash to death" is just as valid.

Frankly, while it might be helpful to know and refer to them using AI in some specific contexts, such as specific kinds of "AI slop" - i.e. over-represented lexical patterns - the truth is there's no such thing as poor quality *because* of AI, because it's still the author's responsibility to proofread before posting and to only do so if it meets their standards. If they are someone willing to post bad writing, then they, *personally*, must lack the ability to see it or the willingness to care. So if it's bad, it's because they looked at their story (or worse, did not) and found it good when it was not.

So rather than worrying about telling them how to use AI if they're using it, you should be telling them, directly, what is wrong with the narrative regardless of how that narrative was written. Cure that blindness, if they are willing to listen (probably not).

For example, you might say: "You overuse the sentence pattern 'Not <word>. [Not <word>.] <Word>.', it shows up like once every two or three paragraphs, sometimes many times in a paragraph, this is bad flow."

Don't worry about whether they're using AI or not. Either they care about their quality and will use your advice, will explain why they can't, or they don't really care and so neither should you. If they want to 'get away' with something, they will no matter what anyone says.

I think you're both right on this, and that what you're both suggesting is going to be the best way to handle it, yeah. If the writing quality is jarring because of problems with how LLMs do creative writing, then it's still a problem of the writing itself that the author can fix with proofreading, and it should be treated as the author's responsibility without AI being brought up at all. Writing quality taking a hit because of LLM output that wasn't properly proofread is a failure of the author's ability to craft a well-written story. At the end of the day it wouldn't matter from a technical standpoint whether it was made by an LLM or not, so framing it simply as jarring writing on the author's part would be less likely to cause offense while detracting nothing from the constructive criticism. AI wouldn't need to enter the conversation.
 