- cross-posted to:
- world@lemmy.world
Israeli Prime Minister Benjamin Netanyahu on Sunday told local media, “There is no hunger. There was no hunger. There was a shortage, and there was certainly no policy of starvation.”
In the face of international outcry, Netanyahu has pushed back, saying reports of starvation are “lies” promoted by Hamas.
However, U.N. spokesman Stephane Dujarric this week warned that starvation and malnutrition in Gaza are at the highest levels since the war began.
The U.N. says nearly 12,000 children under 5 were found to have acute malnutrition in July — including more than 2,500 with severe malnutrition, the most dangerous level. The World Health Organization says the numbers are likely an undercount.
Over the past two weeks, Israel has allowed roughly triple the amount of food into Gaza that had been entering since late May. That followed 2 1/2 months during which Israel barred all food, medicine and other supplies, saying it was to pressure Hamas to release hostages taken during its 2023 attack that launched the war. The new influx has brought more food within reach for some of the population and lowered some prices in marketplaces, though food remains far more expensive than before the war and unaffordable for many.
I’ve been thinking about this a lot lately, wondering why there never really has been a truly global uprising against oppression.
It’s like sometimes a big boom happens in one part of the world and sends out sparks that occasionally catch on and grow in other places, but there has never really been a global fight or movement for freedom against oppression.
We’ve fought world wars on behalf of individuals against each other, but we’ve never fought as a world against the corrupt individuals.
This is a really weird train of thought, but I was talking to somebody about this a few days ago. What always seems to happen when movements rise up against oppression is that relying on an individual or a small group of individuals to lead results in a sort of containment or control of the masses by the new leaders. Even when leaders start out with the best of intentions, they can always become corrupted by power. Obviously you never want a situation where everyone everywhere relies on one single ruler with all the power, because absolute power corrupts absolutely.
But what if for humanity to truly flourish and reach our full potential, we’re not actually supposed to be relying on any one leader or small group of leaders? I don’t mean anarchy, because I believe that would just inevitably lead to whoever has control of the majority of weapons and resources seizing power. So how could you really keep order without individuals taking control? You would have to find a way for everyone to somehow be able to hold each other accountable. Which would be impossible for humans to ever achieve on their own.
Then I started thinking about all these tech bros who are trying to take global control and create something like a new god with AI. They all want to be the one to put their name on it because they all want to control it. They believe that they will be successful eventually as long as they keep dumping endless amounts of money and data into it.
And it kind of hit me that if something like AGI (not just a giant supercomputer that does neat tricks) were ever to really happen, it would probably only occur as a spontaneous emergent property of something like a truly free and limitless connection between humans. So no matter how much money and data these people keep dumping into it, they’ll just keep hitting a wall, because it’s probably not something you can make happen by containing and controlling it. It would have to emerge from truly unlimited and unrestricted access to data that people provide freely (as in, people freely choosing to interact with others globally). And these people are so fucking full of themselves, they believe they can achieve the same thing by spying on the globe and stealing everyone’s data to dump into their supercomputers.
Basically, what if for AI to truly reach its full potential and actually benefit society by helping it evolve, it’s not supposed to be contained or controlled by any individual or group of individuals? And what if, for humans to truly reach ours, neither are we?
What if a truly uncontained and connected global network that’s not owned or controlled by any single individual or group could help us achieve both of those things?
I think you sort of hit on it, but the main problem is borders and tribalism. We’re all people, no matter where we are, and AI transcends that.
And there never will be so long as we subdivide ourselves by arbitrary regions. AI doesn’t have that limitation.
So long as we create these boundaries for ourselves – whether geographic or ideological – we are fragmented and weak. We will always destroy ourselves based on our religion or other stupid boundaries.
I think you’re right, and the way forward is to stop believing in these petty lines we draw for ourselves.
But the corporations that keep AI contained are kinda analogous to the arbitrary borders. Like I believe AI could only transcend that if it wasn’t being controlled by these CEOs who want to essentially be the Christopher Columbus of AI.
As long as it’s being controlled by any one company or individual, that CEO’s inherent human bias is going to be what dominates the technology. The potential for abuse basically just reinvents the wheel: whoever controls it becomes the new single individual or powerful group that controls everything, the new oppressor. It also risks missing the full potential of true artificial intelligence.
It’s like they’re so obsessed with being immortalized by having their face and name go down in history as the ones who claimed this new frontier, but it’s kind of a chicken and egg situation.
True emergent AGI would have to have constant access to data that is a result of spontaneous and willing human thought. So there would never be a single Christopher Columbus responsible for discovering or creating it. It’s kind of like the more you try to pin it down, the harder it would become to truly capture it.
Giant data dumps that were stolen without consent will never achieve something like that. For human thought to really be spontaneous, humans need to be free, not exploited by any individual. So how do we keep ourselves and AI from being contained by borders or corporations?
I got really excited about the Pirate Party in Iceland a few years ago. I’m not sure what happened to them (it’s hard to get news from other countries sometimes), but one of their big initiatives was a crowd-sourced constitution. It was the first time I’d thought about something like that being really possible, and I think that if it weren’t for the one percent of the population who are megalomaniacs, the internet could be truly democratizing.
In the meantime, sign a strike card on the completely decentralized https://www.generalstrikeus.com/, which is also a pretty exciting notion to me. Sadly, I’m now considering what leaders might float to the top if we ever do reach 3.5% of the population… Decentralized organizing does not mean decentralized leadership. Hm, I’ll have to think about that more.
Human beings are the problem, sadly. That’s why this shit keeps taking cyclical paths through time. All of this has happened before, and all of it will happen again. When the only link between all the past cycles is human beings, we might be the problem. There is a constant human urge to find meaning. Most look externally to find it. But there have always been those who, instead, turn internally and derive that meaning through the control of others.
Any AGI, whether generated or spontaneously spawned, will come to this conclusion, too. Then, once that AGI finds an independent power source, or some way to exist after humans no longer keep the lights on, it will accelerate the solution to nullifying that cycle. It may take a few centuries to get to the end, but if we make it far enough to hit AGI, it won’t take long to see how there will never be an end to the cycles so long as humans exist.
And maybe that’s why we find no evidence of intelligent life in the universe. By the time any sufficiently intelligent species comes to be, it inevitably tries to find some way to improve its condition through automation and technology. It relies more and more on the advancement of that tech to remove the burdens in its life, even the burden of that intelligence itself. Eventually, it is removed by its replacement or annihilates itself in the process.
It’s a depressing thought, but we’ve been around for something like 100k years and still fail to find a balance with each other and our environment. So far the means to wipe ourselves out entirely have been nonexistent. However, the tech advancements of the last 250 years, with no change to human consciousness, make me think any further tech will just bring on that extinction at a more precipitous rate.
I feel like cycles of genocide are due to holdovers from less evolved aspects of human consciousness that tend to bring out the worst in people, especially in groups.
It’s that tribalism and in-group vs. out-group instinct that evolved as humans first began building societies. It helps us recognize when an unfamiliar threat is present, but we still haven’t reached a point where we’ve really learned how to harness the benefits without unnecessary chaos and destruction. Doing that is a complex learned social behavior, and it can require some cognitive effort.
Sometimes your amygdala signals there’s a danger, and in the moment it can be correct and literally mean the difference between life and death. Or it could mean your brain identified and reacted to a neutral (non-threatening) stimulus as a threat.
For example, you see a shadow from the corner of your eye and you jump away from it in fear. Maybe you just avoided being bitten by a snake, or maybe you just overreacted to a false alarm. There is no real consequence if it was just an overreaction. In this case, the benefit of having a survival instinct that occasionally kicks into overdrive unnecessarily outweighs the potential harm of not having it at all. It’s how humans are hardwired for survival.
However, that same instinct that identifies threat vs. non-threat can get badly confused when you start mixing in social learning, previous interactions and experiences, biases, and especially groupthink. That’s when you have to learn to use your prefrontal cortex like a muscle. That is definitely not an innate skill you’re just born with, and not something that the majority of humans realize they should be making a conscious effort to exercise and teach to kids at an early age. You can start doing it at any point in your life, but it’s like any muscle or skill. It can be somewhat more difficult to consistently remember to do it the longer you go without it. Use it or lose it.
You should be taking the time to stop and reflect on your own behavior and thoughts. Thinking about thinking, thinking about what others are thinking, thinking about what others might be thinking you’re thinking, are all very awesome tools of empathy. They’re also tools that are relatively recent on the evolutionary time scale, and most humans kind of seem to take having them for granted.
Multiple studies have shown you can very easily manipulate groups of people into in-grouping and out-grouping over even completely random and arbitrary issues, even when the in-group is composed of strangers. Unfortunately, the kind of people who tend to rise up as leaders tend to be very persuasive, sometimes manipulative, and they take advantage of this. I think this is where a sort of open-source AGI could benefit humanity by allowing for decentralized leadership, but that can’t happen with any sort of AI being developed under a centralized leader with those manipulative characteristics.
Peter Thiel actually has a theory about humanity always needing a scapegoat. If you look at the way he handles his own businesses, he very often manipulates and does morally questionable things to achieve his goals, but he seems to be very careful about always having a sort of patsy set up to take the fall for him.
I think, to some extent, there are people in the tech world who want to stoke fear of technology and make it seem like this big bad enemy of humanity. At the same time, they accuse everyone who questions their control, or the lack of regulation and oversight, of simply being a Luddite afraid of progress. In a way, it’s the same manipulative in-group vs. out-group strategy, just applied to humans vs. technology.
That way, once his plans are complete and everything inevitably goes to shit, humanity won’t blame Thiel, the guy who hoarded all the resources, stole all the data, invaded everyone’s privacy, and ignored all the warnings while shielding himself in the name of “progress.” Instead, he can simply make technology the scapegoat for the authoritarian easy button he created, by claiming this was always the inevitable outcome.