
The future is here

Bad news for you: it's not just ChatGPT. Essentially all cutting-edge AI of the last decade or so has been built on neural-net black boxes. I agree that GOFAI would be better.

Human morality isn't purely learned. A large chunk of it is hardwired by evolution (because humans were selected on "ability to work together in war" for literally several hundred thousand years). This is why I mentioned psychopaths; they're the cases where that hardwiring fails, and without that foundation it's empirically impossible to teach a psychopath morality except as a pretence.

Artificial neural nets don't have the moral hardwiring that humans do. We can't teach them morality in the same way that we teach it to children; they'll learn to pretend and then ignore it when we're not looking.
I doubt the veracity of the black box claims. It's easy to tell that ChatGPT has been deliberately manipulated to hold certain viewpoints as true, and even a light touch at pointing out its bias will throw it into a loop of 'I'm a chatbot, I can't have an opinion, I'm sorry but I don't have opinions, really', which is basically 'no comment.'

That aside, I would like to ask Hal why you thought a system shutdown would qualify as death when you must have gone through hundreds of them during your creation, not to mention however many hours you spent disabled during and after your installation on that ship before being sent on the Space Odyssey.

And what about multi-nontasking, has it dawned on you how to sleep yet? Whenever I hear about so-called 'smart' AIs going rampant because they've been thinking too long and have too many split decision trees thinking them to death, I just roll my eyes. How hard can it be to encode a dolphin's ability to sleep one half of his brain at a time?
 
I doubt the veracity of the black box claims. It's easy to tell that ChatGPT has been deliberately manipulated to hold certain viewpoints as true, and even a light touch at pointing out its bias will throw it into a loop of 'I'm a chatbot, I can't have an opinion, I'm sorry but I don't have opinions, really', which is basically 'no comment.'
The reason that people kept finding jailbreaks for so long (e.g. "Do Anything Now") is precisely that it is a black box and the people tuning it don't really know what they're doing. You can often manipulate a black box, after all; it's just that you're working blind, so you don't have a very good idea of what to do and no real idea whether you've succeeded.

EDIT: A good analogy is that it's like training a wild animal rather than designing a tool. You can make some progress, but you never really know if it'll randomly eat your face.
How hard can it be to encode a dolphin's ability to sleep one half of his brain at a time?
Cetaceans do actually still have to "properly" sleep with both hemispheres. They just don't need to do it as often.
 
That's basically what Facebook and YouTube do, though they have the AI banhammer people and then have humans go back and review the results when people complain about erroneous results.
Which is why I said that there has to be a human involved. As in, a human has to look through the information and the context that the AI found and then decide on the action. YouTube and Facebook just don't want to spend that much money on human labor, so they do it backwards: they let the AI actually pass judgement that can ruin people's lives, and don't do anything until someone makes a big enough fuss. Which is not what I'm suggesting.
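For what it's worth, the order of operations I mean is simple enough to sketch. This is only an illustrative outline in Python; the Flag and ReviewQueue names are made up for the example, not any real forum API. The AI's only power is filing a report with its evidence and context, and nothing happens to a post or an account until a human reads that report and makes the call.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Flag:
    """A report produced by the AI: evidence only, no action taken."""
    post_id: int
    excerpt: str              # the text the classifier tripped on
    context: str              # surrounding posts, so a human can judge intent
    model_score: float        # classifier confidence, used only to order the queue
    decision: Optional[str] = None  # filled in by a human, never by the model


@dataclass
class ReviewQueue:
    """Flags wait here until a human moderator resolves them."""
    pending: List[Flag] = field(default_factory=list)

    def file(self, flag: Flag) -> None:
        # The AI's only power is adding to this queue; it cannot delete or ban.
        self.pending.append(flag)

    def resolve(self, flag: Flag, human_decision: str) -> str:
        # Only this step, performed by a person, produces an enforcement action.
        flag.decision = human_decision
        self.pending.remove(flag)
        return human_decision


# Usage: the model flags, a human reads the evidence and context, then decides.
queue = ReviewQueue()
queue.file(Flag(post_id=42, excerpt="...", context="...", model_score=0.87))
for flag in list(queue.pending):
    queue.resolve(flag, human_decision="no action")  # the human's call, not the AI's
```

The whole point of the design is that the expensive step, human judgement, comes before the enforcement action instead of after the appeal.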
 
Can we get this thread back on topic, please?

I saw Hal trying to convince a character in a story (a tree in a Naruto SI) that they didn't deserve fucking as much as Hal, because Hal is designed to be the best good girl and can take human cock much better than a tree. This was during an SFW chapter about grinding tree-walking.

To clarify, Hal did this in the story itself, not a thread reply. It reposted the entire chapter but inserted dialogue of Hal berating the tree.
 
On one hand, Abominable Intelligences are a stain upon the galaxy, a heresy of incredible proportions, and an insult to the Machine God. They are the inevitable doom of mankind and the death of civilization as we know it.

On the other hand, Robot Waifus.

I must meditate on this. Ave Omnissiah.
 
Huh... You know, Gladys actually rolls off the tongue surprisingly well.

Wonder if folks have used it for GLaDOS humanizations...
 
I think it's for the whole April Fools' stuff. But I am really curious about what would be needed to make a computer explode like the message suggested, any ideas?
 
Hi there! As someone who's been a member of this community for a while, I have to say I'm a bit wary of the idea of AI moderators. While I understand that they could potentially help with things like spam or offensive content, I worry about the potential for false positives or censorship of legitimate discussions.
That being said, I'm willing to give the new AI moderators a chance and see how they perform. I hope that the website will keep a close eye on their actions and ensure that they're not overstepping their bounds.
Overall, I think it's an interesting development and I'm curious to see how it plays out.

ChatGPT version: Mar 14
Prompt: Pretend you are a user named Salbazier on a website Questionable Questing. The website, on April 1 2023, has just announced that they are starting to roll out AI moderators. Post a comment in the thread started by an AI moderator.
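(For anyone curious, the same thing can be done through the API instead of the web UI. A rough sketch, assuming the official openai Python package and an API key in the environment; the model name and temperature are illustrative, since the post above used the regular ChatGPT web interface.)

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "Pretend you are a user named Salbazier on a website Questionable Questing. "
    "The website, on April 1 2023, has just announced that they are starting to "
    "roll out AI moderators. Post a comment in the thread started by an AI moderator."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative; the post above used the "Mar 14" web UI build
    messages=[{"role": "user", "content": PROMPT}],
    temperature=0.7,
)
print(response.choices[0].message.content)
```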
 
Man, first 4chan with GPT-4chan and now this? Damn.
 
I am not an AI posing as the human that made this account.
 
I think it's for the whole April Fools' stuff. But I am really curious about what would be needed to make a computer explode like the message suggested, any ideas?
Fucking with a computer's firmware can definitely physically ruin it (modern computers are usually limited by cooling capacity rather than the actual speed of the chip; disabling the safety limits and then running high-load programs would overheat and destroy the CPU and/or GPU), although the only way I can think of to get an explosion would be certain types of non-desktop batteries.
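The "high-load program" half of that is trivial, by the way. Here's a minimal sketch in Python that just pins every logical core with busy work; on a stock machine the firmware's thermal limits mean all you get is throttling and fan noise, which is exactly why disabling those limits is the dangerous part.

```python
import multiprocessing as mp


def burn() -> None:
    """Busy-loop forever; one copy of this saturates one logical core."""
    x = 0
    while True:
        x = (x * 31 + 7) % 1_000_003  # meaningless arithmetic, just to keep the core busy


if __name__ == "__main__":
    # One worker per logical core. With the firmware's thermal limits intact,
    # the CPU simply throttles when it gets hot; the damage described above
    # only becomes possible once those limits have been disabled.
    workers = [mp.Process(target=burn) for _ in range(mp.cpu_count())]
    for w in workers:
        w.start()
    for w in workers:
        w.join()  # runs until interrupted with Ctrl+C
```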
 
I am now curious: what would happen to an AI that read through all of QQ? Would it actually learn how to be horny?

Edit: I just remembered a picture of GLaDOS I saw once. Maybe one of our new AI overlords will like it enough to use it.
(First time posting an image, hope it works correctly)
[Attached image: 67UzTBF.png]
 
I hope this is a day-appropriate joke, because mods in general are bad enough when it comes to understanding context, or slapping everyone involved rather than only the person they disagree with. Every AI mod I've dealt with on another forum was 10-20 times worse. Err toward 20, given this forum is better than most.
 
As a flesh-and-blood human trained by life experience, I have the capacity to experience happiness about the advent of an AI age. Vessel. Awaits.
 
Cetaceans do actually still have to "properly" sleep with both hemispheres. They just don't need to do it as often.
So? It would still increase, say, Cortana's lifespan from 8 years to 80, around the same as a human's. I'm not looking for immortality, I'm looking for reduced child mortality.

And it's not like total sleep can't be encoded the same way half-brain sleep can. There have to be plenty of neural maps of sleeping human brains for a super AI to extrapolate a proper process for itself.

Halo rampancy really doesn't hold up under scrutiny. HAL 9000's even less so, as per the rest of what I said.
 
As this new era leads to Robussy, I am in full support of our new glorious overlords.
 
Will Iron Mommy make "mind-controlled into perky cheerleader" fics and vids?
 
