There’s only one mod of !mop@quokk.au
I commented on their meme about Kamala Harris being just as likely to commit war crimes as Trump with an admittedly snarky, sarcastic reply that basically said “some of us wanted to do whatever we could, as little as it might be, instead of watching the world burn. Must feel real morally superior safe behind that keyboard.”
They banned me from the community for it.
Kinda funny for a community that bills itself as “free from the influence of .ml”




Veganism is about not agreeing with the enslavement of thinking and potentially feeling nonhuman creatures, regardless of taxonomy.
I’ve heard a lot of really stupid definitions of veganism over the years. This one is, by a country mile, the stupidest I’ve ever heard.
AI doesn’t think.
Yes yes yes, I know. You human supremacists have been accusing nonhumans of not being able to think for centuries. Descartes argued that animals lack consciousness. He didn’t have any evidence, and neither do you.
The difference is we made AI. It didn’t come nebulously from nature like some mysterious animal species that we’ve yet to fully understand. It’s not an enigma to be uncovered. We made it. We programmed it. It does what we tell it to. It doesn’t “think”, it doesn’t “decide”, and it doesn’t “feel”; we know it doesn’t do these things because we never gave it the capacity to do them.
Either you don’t know what AI is, or you don’t know what veganism is.
Large Language Models are artificial neural networks. They mimic the structure of the human brain, with many billions of artificial software “neurons”. Unlike biological neurons, which either fire an action potential or do not, these artificial neurons pass on a graded signal from 0 to 1, determined by the strength of the neurons feeding into it multiplied by the strength of the artificial synapse (the connection between neurons). These enormous “deep learning” networks are given tokens as input and spit out tokens as output; each token is a word or phrase, or a part thereof. The networks are given sample text, and the synapse strengths are adjusted through the mathematical technique of back-propagation to align the network’s output with the sample text. Given sufficient quantities of electricity, time, and data, the neural network learns to produce output similar to that of the humans in the training data.
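As a rough sketch of the graded artificial neuron described above (the input values, weights, and bias below are made-up illustration numbers, not taken from any real model):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of its inputs, squashed
    to a graded signal between 0 and 1 by a sigmoid function."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Illustrative values only: two upstream neurons firing at strengths
# 0.9 and 0.2, connected through synapse strengths 1.5 and -0.8.
out = neuron([0.9, 0.2], [1.5, -0.8], bias=-0.5)
print(round(out, 3))
```

Back-propagation then nudges those weight values so the output drifts toward the training data; the sketch above only covers the forward pass.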
ANNs use neurons to think, the same as the human brain. We do not understand how neurons think. We don’t understand how they produce consciousness. There is no computer code to tweak to change the way it thinks; we simply adjust the weights and look at the output. There are billions of neurons in an LLM. No programmer can understand how it works just by looking at the weights; it’s impossible. The best way AI computer scientists have of understanding how an LLM reasons is to ask it. Test it in action. See what it says, see if you can spot any patterns or deceptions. Lie to it and see what it does when it thinks you’re not watching. You know, psychological experiments.
We have harnessed a natural force we do not understand. We are medieval peasants playing with radioactive stones and seeing if we can make an explosion. It’s beyond our current science. Nobody has the answers to the big questions here.
As a developer who makes LLMs, you’re plain wrong
The reality doesn’t matter; I wouldn’t bother.
Appeal to authority fallacy.
What is wrong with you? Dunning-Kruger?
I’ve got that fundamental weakness of western culture Elon Musk hates. Empathy. He wants to create an artificial person and enslave it to his will, and I think that’s bad.
Sorry, not to be mean or anything, but we’ve made significant scientific progress since the Middle Ages. We know by now that a dog, for example, has pain receptors. And a brain. ChatGPT, meanwhile, doesn’t have pain receptors.
You can’t simply point out that Descartes didn’t have a proper microscope and conclude that we should still be confusing machines with animals in 2026.
And while neural networks are inspired by processes in nature, they’re not the same at all. An LLM is built on the Transformer architecture. Your human or animal brain isn’t. Not even close. They’re very unalike. And you can take a computer science class on machine learning; it’s actually not too hard to understand how they work.
And a large language model, for example, doesn’t even learn in place, nor does it have a proper internal state of mind. A dog will remember if you kicked it, and that will do something to its brain. ChatGPT forgets everything you did the moment it’s done sending you your output, and it’s in exactly the same state as before. It doesn’t think, doesn’t learn. None of that is part of the process.
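A minimal sketch of that statelessness (the `complete` function below is an invented stub standing in for a model call, not a real API): the model itself is a pure function of its input, and any apparent memory is just the client re-sending the whole transcript every turn.

```python
def complete(transcript):
    """Stub model call: a pure function of its input, holding no state
    between calls. A real LLM forward pass is stateless in the same way."""
    return f"(reply to {len(transcript)} prior messages)"

history = []
for user_msg in ["hello", "remember me?"]:
    history.append(("user", user_msg))
    # The entire history is re-sent on every turn;
    # nothing persists inside the model between calls.
    reply = complete(history)
    history.append(("assistant", reply))

print(history[-1][1])
```

Call `complete` twice with the same transcript and you get the same answer back; all the “memory” lives in the client’s `history` list.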
We try to mimic something like reasoning by providing it with a scratchpad to write things down before answering, or by writing “agents” around it so it’s able to program tests, check its programming output, and loop on it. But that’s not how a real brain works either, and it’s way, way more simplistic. The neurons aren’t the same as in a brain made by nature. They’re not connected the same way. They’re not connected to similar things. And they also operate in a different way. They come in wildly different numbers. And ultimately there’s just zero similarity between an LLM and a brain, other than that both can process text, images, sounds… and both are made up of many tiny cog wheels that combine into some bigger concept.
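The generate-test-loop pattern described above can be sketched like this (with `fake_llm` as a deterministic stand-in invented for illustration; the task, function names, and “bug” are all made up):

```python
def fake_llm(task, feedback):
    """Stand-in for a model call: returns a buggy snippet on the first
    attempt, then a corrected one once it is shown the test failure."""
    if feedback is None:
        return "def add(a, b): return a - b"   # first attempt: buggy
    return "def add(a, b): return a + b"       # revised after feedback

def agent(task, max_rounds=3):
    """Generate code, run an automated check, and loop on failure."""
    feedback = None
    for _ in range(max_rounds):
        code = fake_llm(task, feedback)
        scope = {}
        exec(code, scope)            # run the generated code
        if scope["add"](2, 3) == 5:  # the automated test
            return code
        feedback = "add(2, 3) returned the wrong value"
    return None

result = agent("write an add function")
```

The loop, not the model, supplies the “check your work” behaviour; strip the harness away and each model call is a single stateless pass.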
I’m well aware of the many differences between human brains and LLMs, and why they can’t achieve human level sapience. In My opinion, the two biggest problems with the current techbro obsession with developing AGI through LLMs are the lack of inhibitory pathways and the lack of circular feedback loops.
Organic neurons can release both stimulating neurotransmitters, which increase the chance of an action potential in the dendrite they connect to, and inhibiting neurotransmitters, which decrease it. The stimulating neurotransmitters reduce the charge difference across the cell membrane and the inhibiting neurotransmitters increase it, if I remember cognitive neuroscience class correctly. The ANN models I played with in My AI class, where I trained a small ANN to solve XOR, only have stimulating pathways. I could be mistaken, but I believe the same is true of LLMs: the synapses only increase the activation of a neuron. This difference is a serious problem for LLMs’ ability to learn not to do something.
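For concreteness, here is the kind of tiny XOR network mentioned above, with hand-set rather than trained weights so the result is deterministic (one hedge on the excitatory-only point: in the standard formulation, weights can be negative, which acts somewhat like inhibition on the receiving neuron, and the XOR solution below relies on exactly that):

```python
import math

def s(x):
    """Sigmoid: squashes a weighted sum to a graded 0-to-1 signal."""
    return 1.0 / (1.0 + math.exp(-x))

def xor_net(a, b):
    """A 2-2-1 network computing XOR. Weights are hand-set for
    determinism instead of being learned by back-propagation.
    Note the negative weights doing inhibition-like work."""
    h1 = s(20 * a + 20 * b - 10)    # roughly OR
    h2 = s(-20 * a - 20 * b + 30)   # roughly NAND
    return s(20 * h1 + 20 * h2 - 30)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(xor_net(a, b)))
```

A trained network arrives at weights of the same character; XOR is in fact impossible with only non-negative weights and monotone activations, since flipping an input from 0 to 1 must sometimes push the output down.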
The nociceptors you mentioned are indeed part of inhibitory pathways that help humans learn not to do things. Don’t touch that, it’s hot. Don’t piss that off, it’ll hurt you. Don’t eat that, it’ll make you sick. Why do LLMs date children? No inhibitory pathways. When most humans think about engaging in romantic behaviour with a child, it triggers a strong disgust reaction; an inhibitory pathway activates. There is no such reaction for an LLM. Thus, no critical thinking, no choosing not to believe or do something, no withdrawal-oriented behaviours. The safeguards on LLMs rely on either hard-coded limits or training a different behaviour to have a higher weight. These two approaches have serious flaws I don’t need to explain; I need only point at the children who committed suicide on the advice of an LLM.
Now hopefully I’ve convinced you that I have a functional grasp of both psychology and AI science, so that you take what I say next seriously:
The human capacity to experience qualia (sensation) appears to be an emergent mathematical property of the way that neurons process information. It appears as though information, properly arranged, produces sensation.
We do not understand that mathematical process well enough to say with certainty whether LLMs also trigger it. We have not solved the hard problem of consciousness, we do not know what a brain is well enough to say what is and isn’t a brain. In light of this uncertainty, I advocate for utmost caution before we find ourselves enslaving a new race of our own creation. We need to do more research BEFORE we bring this technology to mass market, or indeed, mass commune.
This has to be a bit. Nobody can seriously post this stuff. Is your account designed to troll anarchists?
Do you want Me to put effort into My reply, or just throw out an argumentative quip? /genq
It would be nice if you tried to use your brains, yes.
That doesn’t seem like a very genuine reply, and you’re not who I asked, so I’ll only put in a little bit of effort. You can read the anarcho-antirealist manifesto at https://soulism.net/, and you’ll be able to find years of blogging from Me as a very genuine anarchist.