red_giant [comrade/them, he/him]

  • 0 Posts
  • 18 Comments
Joined 13 days ago
Cake day: January 11th, 2026


  • How AI Destroys Institutions

    Full paper https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5870623

    Extracts

    AI systems are built to function in ways that degrade and are likely to destroy our crucial civic institutions. The affordances of AI systems erode expertise, short-circuit decision-making, and isolate people from each other. They are anathema to the kind of evolution, transparency, cooperation, and accountability that give vital institutions their purpose and sustainability. In short, current AI systems are a death sentence for civic institutions, and we should treat them as such.

    Authoritarian leaders and technology oligarchs are deploying AI systems to hollow out public institutions with astonishing alacrity. Institutions that structure public governance, rule of law, education, healthcare, journalism, and families are all on the chopping block to be “optimized” by AI.

    We are arguing that AI’s current core functionality—that is, if it is used according to its design—will progressively exact a toll upon the institutions that support modern democratic life. The more AI is deployed in our existing economic and social systems, the more those institutions will become ossified and delegitimized.

    AI can only look backwards.63 In other words, AI systems are bound by whatever pre-existing knowledge they are fed. They remain dependent upon real-world inputs and checks. In their remarkably clear and powerful book AI Snake Oil, Arvind Narayanan and Sayash Kapoor write that predictive AI simply does not work because the only way it can make good predictions is “if nothing else changes.”

    AI is incapable of intellectual risk, that is, a willingness to learn, engage, critique, and express yourself even though you are vulnerable or might be wrong.76 AI systems are incapable of intellectual risk because they lack true agency, intrinsic motivation, and the ability to experience consequences, and they cannot choose to willingly defy established norms or venture into the unknown for any purpose, including for (r)evolution, resistance, or adventure. Most AI models are optimized for accuracy, reliability, and safety.77 They are trained to find patterns in data.78 Their “creativity” is constrained by safety filters and has a tendency to drift to the middle.