• An addendum to Rule 3 regarding fan-translated works of things such as Web Novels has been made. Please see here for details.
  • We've issued a clarification on our policy on AI-generated work.
  • Due to issues with external spam filters, QQ is currently unable to send any mail to Microsoft E-mail addresses. This includes any account at live.com, hotmail.com or msn.com. Signing up to the forum with one of these addresses will result in your verification E-mail never arriving. For best results, please use a different E-mail provider for your QQ address.
  • For prospective new members, a word of warning: don't use common names like Dennis, Simon, or Kenny if you decide to create an account. Spammers have used them all before you and gotten those names flagged in the anti-spam databases. Your account registration will be rejected because of it.
  • Since it has happened MULTIPLE times now, I want to be very clear about this. You do not get to abandon an account and create a new one. You do not get to pass an account to someone else and create a new one. If you do so anyway, you will be banned for creating sockpuppets.
  • Due to the actions of particularly persistent spammers and trolls, we will be banning disposable email addresses from today onward.
  • The rules regarding NSFW links have been updated. See here for details.

Clarification regarding AI policy

The problem is not the use of AI itself, but the non-disclosure of it and the vehement denial and defensive posturing when asked about it politely. Yeah, there is no putting the genie back in the bottle, and AI use is definitely going to be the future.

But it ain't quite there yet. It screws up the plot when asked to generate one wholesale, and even when just proofreading, it writes in language that _feels_ unnatural enough (doubled adjectives, rhyme schemes, seldom-used punctuation) that it pulls you out of the reading experience.

It's still at the Uncanny Valley stage for writing at least, while images are much further along. So it's not too much to ask to disclose its usage, and whether it will continue to be used in the future. Shame that disclosure is only considered polite and not a required rule.
At this point, that's mostly a misconception: the latest generations of AI models are more than capable of writing indistinguishably from humans. They typically don't do it by default, though; it's up to the user to discover the specific prompts and contexts that work. Today, the works that are obviously AI-gen are the ones produced by lazy authors.

However, I do agree that non-disclosure should be frowned upon.
 
The problem is not the use of AI itself, but the non-disclosure of it and the vehement denial and defensive posturing when asked about it politely. [...]
This reply sums it up nicely.
Make it a rule to disclose if you used AI assistance to write your shit, and most of the complaints people have outside of AI art spam disappear. Which is exactly why I expect the opposite to happen, and the problem to become much worse as malicious actors realize they can now flood the market with their product with impunity. Soon the only way to spot the work of the digital golem will be the em-dash and Reddit-style prose.

You think the past age of godawful formulaic HP/Naruto/DxD/whatever fanfic was a scourge? You will think fondly of the days of those teen-written horrors: the crossover haremslop, the Mary Sue cardboard girl inserts, the fixfics and the rest...
 
A somehow-enforced AI tag for writing would be nice, but I have no idea if it would be possible. There's nothing more irritating than getting a few paragraphs into something and going, "Hey. Wait a minute." As soon as you realize the words are AI-generated, you can't help but start scanning for the usual AI-generated problems.

It'd be nice to be able to avoid that from the get-go.
 
Ain't asking for a rule, just saying: if your fic smells like a silicon smoothie, don't get offended when someone goes "yo, did ChatGPT write this?" when there is no tag.
Frankly, this feels outside the bounds of constructive criticism or honest review; it's either picking a fight or trying to insult someone. If you don't like a story, that's fine; just get on with life if you aren't willing to give honest feedback on how they can improve. The writer will either get better or stop posting garbage, or you can just set them to ignore if you are somehow offended by them. We can all live and let live, within reason, for writing here.
 
It's not a big deal if they are going to use AI to help them write, but they should give us at the very least the small courtesy of putting a tag there if they used an AI. AI is an amazing tool for helping people locate mistakes, and I'm not going to criticize them for that. But there are some people who have truly abused it and created their entire stories using AI, and they get mad or offended when you ask if they've used AI. As if it were our fault that reading their fics felt like reading a textbook. And when you ask for clarification on whether they used AI to generate the entire chapter, they somehow take it as a personal attack when you just want to know so you can leave quietly if they actually did.

In short, there's a tag, use it to warn people.
 
It's not a big deal if they are going to use AI to help them write, but they should give us at the very least the small courtesy of putting a tag there if they used an AI. [...]
AI will be the new slash. Can't remove it from your search however many tags you ban.
 
Tags are user-generated and user-assigned; unless the people who are using AI to write put the tag on their threads, it is meaningless.
Mods can put a tag on a thread, and if they do the thread creator can't remove it.


I will note that there are people who are trying to avoid AI output for various reasons, some of them philosophical and some pragmatic*, and not making the disclosure mandatory makes QQ unsafe for these people.

*For instance, "AI always tries to hack its reward. A decent amount of them (including the infamous GPT-4o) are trained with their reward being human feedback. I don't want to be hacked."
 
AI text generation is in a weird place right now. On the one hand, it's miraculous that it can make something even approaching a resemblance to human writing at all. On the other hand, it's still completely godawful at it!

Anyone who thinks of themselves as a bad author: I promise that you can write five times better than ChatGPT, easily. EASILY! It'll take you more time, sure... but it would take you more time to coax the robot into making something of similar quality, so it actually takes LESS time to just write it yourself.

It's funny, I'm always hearing English professors and copywriters and other humanities people doomsaying about AI already being better than them, and they're right in ONE way: if you want fast, cheap mush, I guess it's alright. But if you want anything with ANY level of quality, you've still gotta go to a human.

I can usually tell something is AI-generated within the first 10 words or so. Idk how to describe it, but it just doesn't feel right, y'know? Like, it's got the same tone you'd write with when you're trying to pad your 1000-word essay out to 5000 words. It's definitely possible to fool people with it, but to do that you either have to edit the output manually or spend a LONG time prompting it and re-prompting and RE-prompting; both of which require a decent understanding of good writing to know how to make it fool people; in which case, why not just write it yourself in the first place?

The answer is that most people who post AI text don't put in more than the bare minimum effort, because they think writing is a waste of time to begin with. These people are, obviously, not worth anyone's consideration.

Anyways, this seems like a good clarification, I approve.
 
I do find myself having to run every new story I read through an AI checker before I start, to avoid wasting my time. This is pretty annoying when I am scanning through for new stuff to read. It's my only real problem with AI writing on the site, and it would be partially resolved by mandating a tag for AI-generated writing, but I don't know how much work and how many false positives that would create for the mod staff to deal with.
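For what it's worth, a crude pre-screen doesn't need a full checker service. The sketch below is a toy heuristic in Python, not a real AI detector: it just counts a few surface "tells" mentioned elsewhere in this thread (em-dashes, stock filler phrases), and both the phrase list and the threshold are arbitrary illustrations.

```python
# Toy "tell" counter -- NOT a real AI detector. It only counts a few
# surface features people associate with AI-generated prose; the phrase
# list and threshold are arbitrary placeholders.
STOCK_PHRASES = ["delve into", "tapestry", "testament to", "in conclusion"]

def tell_score(text: str) -> int:
    lowered = text.lower()
    score = text.count("\u2014")  # em-dashes
    score += sum(lowered.count(phrase) for phrase in STOCK_PHRASES)
    return score

def worth_a_closer_look(text: str, threshold: int = 3) -> bool:
    # A high score means "maybe skim before committing", nothing more.
    return tell_score(text) >= threshold
```

Real detectors are far more sophisticated and still unreliable, so a score like this is at best a reason to skim the first chapter more warily.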
 
At this point, it's mostly a misconception: the latest generations of AI models are more than capable of writing indistinguishably. They typically don't do it by default, though. It's up to the user to discover specific prompts and contexts that work. Today, the works that are obviously AI-gen are those sourced by lazy authors.

This has been said by supporters of literally every generation of AI models. It has never actually been true, outside of very niche circumstances that are effectively the equivalent of laboratory conditions: the user writing such extensive prompts that it would be more efficient to just write it themselves, or directly lifting from existing fiction.

That last part is actually a severe problem. There was a relatively major court case back in June which ruled that the final work itself was transformative but the AI company had extensively violated copyright with their pirated library of seven million books. That wasn't a small AI company, it was backed by Google's parent company and Amazon.

It suggests quite strongly that if you use an LLM to write anything for you, while the result could easily end up transformative by a court's standards, it could still be direct plagiarism by most sites' standards.
 
 
AI will be the new slash. Can't remove it from your search however many tags you ban.

It's funny, but if LLMs keep growing and growing this fast... well, I can easily see more and more people just leaving the internet, keeping it on for just a few sites, like governmental ones, Amazon, Steam?

AI text generation is in a weird place right now. On the one hand, it's miraculous that it can make something even approaching a resemblance to human writing at all. On the other hand, it's still completely godawful at it! [...]

LLMs are predictive by nature, and there are things they can't do, like plot twists, or writing an ending you haven't seen coming from a mile away. That's why they're great for anything administrative or technical but still shit at creative work, to say nothing of all the "minor" tells.

What annoys me more, honestly, is people who obviously use full-on AI text and call themselves writers. A writer is a wordsmith the way a guy hammering steel is a blacksmith, while an AI user is more akin to a guy pushing a button to activate the robot arms in a steel factory. No, the guy pushing buttons is not a blacksmith, even if he's making steel.

Honestly, after testing it for myself, I've done away with it for my writing entirely; I've had to fight off its changes to my text too many times (with bad ideas at that). It's not bad at writing bad fic, but it's not good at anything else, honestly...
 
Only annoyed by the lack of disclosure when it generates the whole story. I'm aware of at least one author who has admitted to it on another site but makes no mention of it on QQ, despite the continuity errors from chapter to chapter.

AI checkers are only sometimes useful. These writing models mirror the humans they are trained on, so there are people who just write that way. And there is probably an AI for bypassing common checks, too. I've also seen some writers and artists get falsely accused and then harassed despite not even using AI. Even if they did, that's worse than what the accusers are criticizing.

Using AI to come up with ideas is fine; people do all sorts of things to come up with ideas, like reading other works. The writing is inspired but ultimately your own work in that case.
 
There was a relatively major court case back in June which ruled that the final work itself was transformative but the AI company had extensively violated copyright with their pirated library of seven million books. [...]
You're referring to the case involving Anthropic.
The judge's decision in that case was that training AI is fair use, period, even on copyrighted material. However, Anthropic was still liable for pirating ebooks, same as anyone else who downloaded a torrent.

An analogy would be a college student who pirated their textbooks and then later wrote a paper. They would be guilty of torrenting the books, but it would not affect the work they produced as a result.

And users of AI products in turn would be another step further removed from the training process.
 
The issues I have with AI are beyond the scope of this post but needless to say I'm not a fan.

It takes all the god-tier and all the sewage-tier, smooshes them together, and forms the most mid content to ever pollute the internet. And that's only if you're lucky and the person posting gives enough of a fuck to do some basic quality control. If they don't? Good luck.

AI doesn't plan. It can describe a scene okay-ish, but it falls onto the bad side of generic very quickly, even with basic descriptions. That's why I typically avoid AI-generated content; it gives off the feeling of an amateurish, half-baked, half-assed approach.

I am here to read what people want to put to 'paper', not what they want to generate with a bot. Even the poorly written stuff is infinitely better than AI slop, because at least then I know it has some soul in it.

Which is why if it gets any more prevalent please consider adding a rule about disclosure.

tl:dr "Reeeee AI bad, make rule!" - Man with terrible, no-good, godawful grammar.
 
It is generally considered polite to disclose explicitly whether AI is used in writing a story. This is not a formal rule, but it may cause people to dislike you if you don't.
I understand allowing people to use generative AI for pictures or stories (even if I don't agree with the use of it), but I feel like it should be in the rules to tag your work with AI usage, like having 3 predefined tags: 'AI generated text', 'AI assistance' (for people who generate a base text, then write around it), and 'AI generated pictures'. That would let users avoid stories with these tags if they dislike AI usage.

Is it possible to request a rule change on this (and if yes, how do I do it), or is your stance firm on the matter?

For the rest of the statement, I think it's entirely understandable and a good way to handle AI for a website.
 
The issue with requiring the disclosure? Even if we, the staff, cared (and we don't), there is no easy way to determine AI use. And thus, enforcement is impossible. And thus, there is no way we will require it.

The main reason we put this up is that we were seeing witchhunts and harassment starting over whether people were using AI or not. These were reported, and were indeed breaking the rules, so we issued the standard warnings. So, please do report this harassment.
 
This has been said by supporters of literally every generation of AI models. It has never actually been true, outside of very niche circumstances that are effectively the equivalent of laboratory conditions
Have you tried 1. using an advanced model that can both adhere to instructions and be intelligent about it? 2. defining a specific writing style by providing a large example? 3. holding its hand and asking for very specific descriptions of events? 4. always concentrating on a narrow scope? 5. going for a multi-pass workflow?
I have tried, and spent a decent amount of time on it. It can generate absolutely brilliant scenes, dialogue, prose, better than a lot of professional writers. But it takes patience and effort. It is nowhere near the state of just generating a full chapter on demand with vague instructions. If you want good results, you will have to work for them. At this point it's still just a tool, and using it professionally is still work.
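The multi-pass, narrow-scope workflow described above can be sketched roughly as follows. This is only an illustration of the shape of the process: `call_model` is a hypothetical stand-in for whatever LLM API you actually use, stubbed here purely so the structure is runnable, and `STYLE_GUIDE` is a truncated placeholder for the large style example the post recommends.

```python
# Hypothetical stub standing in for a real LLM API call; replace the body
# with an actual request to your model of choice.
def call_model(system: str, prompt: str) -> str:
    return f"[model output for: {prompt[:40]}]"

# A large style example, as suggested above (placeholder text here).
STYLE_GUIDE = "Write in terse, first-person past tense. Example passage: ..."

def draft_scene(beats: list[str]) -> str:
    # Pass 1: generate each beat separately, keeping the scope narrow.
    drafts = [call_model(STYLE_GUIDE, f"Write only this beat: {beat}")
              for beat in beats]
    # Pass 2: a revision pass over the stitched scene for continuity.
    stitched = "\n\n".join(drafts)
    return call_model(STYLE_GUIDE, "Revise for continuity and style:\n" + stitched)
```

The point is only the workflow: narrow prompts, a fixed style reference, and more than one pass, rather than one vague "write me a chapter" request.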

There's a great disconnect between people discussing AI in general. So many have only ever tried the simpler base models supplied by default on the free ChatGPT tier (including, hilariously, many actual scientists who publish research done on them) and come to some very... questionable conclusions as a result, entirely dismissing the fact that the more easily accessible models tend to have significantly lower intelligence. And the less said about using proper system instructions for the specific task, the better.

TL;DR: "AI" can refer to vastly different models, ranging from, let's say, IQ 60 to 130, with entirely different strengths and weaknesses; and even the best of them (for a specific task) are only as good as the user. If you are willing to give it an honest chance, google "AI Studio" and try Gemini 2.5 Pro; that's the top one today for text. It's extremely capable. Warning: only use 2.5 Pro in AI Studio. Other sources of the same model supply it with their own custom instructions, significantly lowering the ceiling of what's possible due to the clash between their instructions (many hundreds of lines) and yours.
 
It is generally considered polite to disclose explicitly whether AI is used in writing a story. This is not a formal rule, but it may cause people to dislike you if you don't.

I don't like this.
I think AI labelling should be obligatory for both text and images, and if it's not labelled, it should be a violation. Best would be a trinary labelling: "" (nothing, i.e. no AI) / "Some AI" / "AI", with the cutoffs being 0% and 50%; so, for images, using AI to touch up a non-AI original is "Some AI", while letting the AI make a whole picture from a prompt is "AI".
Spellchecking should be clearly stated to not be classed as AI.
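The proposed scheme is simple enough to pin down precisely. A minimal sketch in Python, assuming the poster's cutoffs (0% and 50%) and the carve-out for spellchecking; `ai_fraction` is a hypothetical self-reported share of AI-generated content, and putting exactly 50% in the "AI" bucket is my reading, not something the post specifies:

```python
def ai_label(ai_fraction: float, spellcheck_only: bool = False) -> str:
    """Map a self-reported AI share to the proposed trinary label."""
    # Spellchecking is explicitly not classed as AI use.
    if spellcheck_only or ai_fraction <= 0.0:
        return ""          # no label: no AI involved
    if ai_fraction < 0.5:
        return "Some AI"   # e.g. AI touch-ups of a non-AI original
    return "AI"            # e.g. a whole picture generated from a prompt
```

Of course, the hard part is not the mapping but the input: the fraction would have to be self-reported, which is exactly the enforcement problem the staff describe.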
 
At the very least there should be a rule where people who use it disclose it in some manner. I don't think that's too much to ask.
If it feels like not too much to ask, it's only because you are not the one who has to enforce it. Do you think reading through dozens of stories to determine whether they use AI or not is an easy task? Especially since those stories would have to be badly written enough to get reported or accused of using AI in the first place.

Some people may say that it doesn't require that much effort. But if it's really effortless, then why is there a need for a tag in the first place? If it's so easy, people could just open a story, deduce at a single glance that it was made by AI, and leave. But no, the whole point of asking for obligatory tagging is that for some people that's too much bother, and they want to offload the work onto others.

"Use AI detector" Some may say. So, apparently AI has become so good that you can ask it a question and get answer with 100% accuracy? Wonderful ... waitaminute. We got here in the first place because AI returns are flawed. So in order to save us from bad AI we need to rely on the another AI? I think the problem with this should have been obvious.
 
The issue with requiring the disclosure? Even if we, the staff, cared (and we don't), there is no easy way to determine AI use. And thus, enforcement is impossible. And thus, there is no way we will require it.
Yeah, with text, enforcement is hell to deal with, so no. I've already seen some gits complaining about AI in a story whose narration has been the same since long before AI became mainstream. It's just fucking annoying for readers, and infuriating for authors.

And with art? It's kinda obvious when it's AI.

So the issue handles itself with the current rules.
 
I think AI labelling should be obligatory for both text and images, and if it's not labelled, it should be a violation
Bluntly? None of the staff have strong feelings on it, if they care at all. Even if anyone did, we don't have the tools to reliably enforce such a rule; they flat-out don't exist. Most "AI" detectors are less reliable than letting a pitbull babysit your cage of hamsters: sure, it may scold the one that is wrong… or it could just decide that today, everyone gets flagged and eaten.

If you have such strong feelings about it… you're welcome to go start your own splinter site; expect about $2,500 a year to get features close to comparable to here, assuming you're doing all the work yourself. If you're hiring assistance… hope you've got 5 digits lying around.
 
