The AI Age

Have you heard of enshittification? It’s a term coined by writer Cory Doctorow, and it captures the cycle by which software products decline in value in our current age. First, startups use their fat VC checks to undercut the competition and buy the market. Once users are locked in, they start courting suppliers with the same tactic. And once suppliers are locked in too, they begin to pay down their debt to shareholders, which means they start profit maximizing. That’s when the product’s value proposition takes a nosedive.

Off the top of my head, I’ve seen this with Amazon, Google, DoorDash, Uber, Facebook, Airbnb, Netflix, and Reddit. It seems like good ol’ fashioned Silicon Valley “disruption” at this point.

Now we have AI companies, and I see no reason they wouldn’t use the same playbook. If the pattern holds, we should expect OpenAI and its ilk to start by offering a great service at a low price, with the expectation that you will be locked in somehow. Then expect the snare to close: you’ll find yourself paying more and more money for worse and worse results.

AI has a uniquely pernicious form of lock-in: it’s a cognitive technology. It’s actually viable to replace your own thinking with AI. Pundits will have plenty of objections to this statement, but it’s obvious that the average person can accomplish much more with AI than without it. And if you can’t think for yourself, you are a prisoner to those who think for you.

We can’t keep going the way things are in the AI industry. OpenAI, Google, Anthropic, Meta, and Microsoft can’t be permitted to profit off of the forced atrophying of the American mind.

Do I have a solution? Of course not. I’m just some schlub with a blog. Instead, I’d like to share a couple of disjointed ideas about the much funner question preoccupying our collective unconscious: how should we reorganize society in the presence of AI?

The Landscape of AI Power

First, I want to scope out the problem. One crucial aspect of this challenge is that the technology is quite inscrutable for most. What follows is how I understand it:

  • Large language models (LLMs) are trained on a truly massive amount of text, think on the order of a terabyte. That might not seem like much, but that’s pure text, arguably the densest form of information that humans make. One such dataset is freely available: The Pile, which weighs in at roughly 800GB.
  • To train a model, you have to go through several iterations of passing the data through it, evaluating how well it performs, adjusting parameters, and passing the data through it again (see the sketch after this list). It requires a lot of computational resources. Luckily it’s a massively parallel operation, and advances in graphics processing technology have made it possible in a reasonable amount of time.
  • Eliciting a response from a model is called a forward pass, or an inference. An inference is comparatively fast and can run on any computer with a GPU to support it.
  • Your typical gaming GPU won’t cut it. The crucial parameter to look out for is VRAM. That’s specialized memory that is tightly coupled with the GPU for quicker read and write speeds. My GPU has 12GB. The flagship Nvidia gaming GPU has 32GB. The AI-specialized H100 GPUs I mention below have 80GB.
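
To make that training loop concrete, here’s a cartoon version in Python/PyTorch. Everything in it is a toy placeholder (the model, the fake random “text”, the hyperparameters); real LLM training shards this loop across thousands of GPUs, but the shape is the same:

    import torch
    import torch.nn as nn

    # Toy stand-ins: a real LLM has billions of parameters and trains
    # on trillions of tokens; this only shows the shape of the loop.
    vocab_size, dim, seq_len = 1000, 64, 32
    model = nn.Sequential(nn.Embedding(vocab_size, dim), nn.Linear(dim, vocab_size))
    optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(100):
        tokens = torch.randint(0, vocab_size, (8, seq_len))  # fake batch of text
        inputs, targets = tokens[:, :-1], tokens[:, 1:]      # predict the next token
        logits = model(inputs)                               # pass the data through
        loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))  # evaluate
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()                                     # adjust parameters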

First off, it seems out of reach for common citizens to train LLMs for themselves. Training simply requires too much data, compute, and power. Llama 3.1 405B was trained on 15.6 trillion tokens on 16,000 Nvidia H100 GPUs. It took about 70 days, which comes out to roughly $50 million if you’re renting the compute. (link) Now, technology can be weird when it comes to Moore’s Law and economies of scale, but I don’t think this basic fact is going to change: you’re not making it in your garage. Even if you could, the big companies will come around and buy more GPUs than you, use more data than you, and burn through energy you can’t afford.
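
For a quick sanity check on that figure (the ~$2 per GPU-hour rental rate is my assumption, not from the linked source):

    gpus = 16_000               # H100s used for Llama 3.1 405B
    days = 70
    rate = 2.00                 # assumed rental cost in $/GPU-hour

    gpu_hours = gpus * days * 24        # ~26.9 million GPU-hours
    print(f"${gpu_hours * rate:,.0f}")  # ~$54 million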

We might not be able to democratize AI training, but running inference is a different story. A local AI could serve the needs of the common citizen just fine. AMD has even released a GPU with greatly expanded memory, intended for local AI inference. And this seems like a more acceptable ecosystem (see the sketch after this list):

  • Big organizations would curate data and train LLMs that could be downloaded by users. They would become LLM “publishers”.
  • Users pay for their own compute resources and power.
  • Users get private, offline LLM support.
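
Here’s what that last point could look like in practice, sketched with the llama-cpp-python bindings. The model file is hypothetical, standing in for whatever a community “publisher” might distribute:

    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/community-model-8b.gguf",  # hypothetical published model
        n_gpu_layers=-1,  # offload every layer to the GPU, VRAM permitting
        n_ctx=4096,       # context window
    )

    # Runs entirely on your own hardware: private and offline.
    out = llm("Summarize the enshittification cycle in one sentence.",
              max_tokens=128)
    print(out["choices"][0]["text"])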

In a democratic society, these publishers would be bankrolled by communities or governments, and the models would be freely available for the public good. This would go a long way toward recreating the “shared reality” that we enjoyed before the era of algorithm-imposed internet Balkanization.

If you leave it there, conceiving of computation as a “natural right,” AI simply provides a modest quality of life boost to everyone. But AI’s impact on society is obviously going to be more radical than that, so let’s get crazy.

Law 2.0

I’m far from the first person to notice the shared language between the legal “code of laws” and programming “code”. I’m also not the first person to consider that they might be the same thing.

The code of laws conceives of government as a deterministic machine with fixed parameters and outputs: if (stoleSomething) { cutOffHand(); }. In theory, the machine is designed such that it covers every possible case. When a case comes up that isn’t covered by the code of laws, the code is revised, and in doing so we create a more perfect union. The lawyers are programmers, only their programming language is a specialized fork of English called “legalese”.
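
To stretch the metaphor into real code (an illustration, not a proposal): Law 1.0 is a lookup table, and an uncovered case is a crash that forces a revision:

    # Law 1.0 as a deterministic machine: fixed inputs, fixed outputs.
    STATUTES = {
        "theft": "restitution plus a fine",
        "fraud": "restitution plus a fine",
        "arson": "imprisonment",
    }

    def judge(offense: str) -> str:
        try:
            return STATUTES[offense]
        except KeyError:
            # A case the code doesn't cover: the legislature has to patch
            # the statutes, a more perfect union one revision at a time.
            raise NotImplementedError(f"no statute covers {offense!r}; amend the code")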

But the practice of software development has come a long way. AI is heralding Software 2.0, and I think the AI-based society will have a corresponding Law 2.0.

I heard on the Why Theory podcast that AI could be conceived as the authoritative “Other.” It has consumed the greatest compendium of human knowledge that has ever been assembled and synthesized it into a single function. This is not unlike what we ask judges to do.

Judges read and interpret the law to account for the nuances of each case. A code of laws can only asymptotically approach a comprehensive accounting of the ideal behavior of the system. Judges fill in the gaps between the Platonic law and mundane reality.

But judges don’t scale. It takes a lot of work to train a human judge, and they still deliver more lenient sentences after lunch. So let’s let AI do it. Let’s turn human judges and lawyers into a community of moral philosophers and essayists. The AI would keep abreast of the community and incorporate contemporary thought into its judgments. In order to shift the behavior of the AI judges, you would have to sufficiently influence the law review community with compelling argumentation, and participation in this process would be a right of every citizen.
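
One way to imagine the mechanism, assuming the AI judge works something like retrieval-augmented generation over the community’s essays (the retrieval here is naive keyword overlap, a placeholder for real embedding search):

    def retrieve(case: str, essays: list[str], k: int = 3) -> list[str]:
        # Rank essays by how many words they share with the case facts.
        case_words = set(case.lower().split())
        ranked = sorted(essays, key=lambda e: -len(case_words & set(e.lower().split())))
        return ranked[:k]

    def ruling_prompt(case: str, essays: list[str]) -> str:
        # Fold contemporary legal commentary into the judgment context, so
        # influencing the essay corpus is what shifts the AI's rulings.
        context = "\n\n".join(retrieve(case, essays))
        return (f"Contemporary legal commentary:\n{context}\n\n"
                f"Case facts:\n{case}\n\nRuling and reasoning:")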

Last Thoughts

It feels as though the current world order is undergoing a tectonic shift before our eyes. Key technological advances are akin to global catastrophes in the way they reshape the world, and since we happen to live in a rare time of rapid transformation, it’s our duty to imagine the future.

It’s why I think the idea of Utopia is so compelling, even when we’re reminded that it literally translates to “no-place,” implying that it’s impossible. But I don’t think the point is to achieve Utopia; it’s to have a sense of purpose and direction. Our collective imagining of Utopia is the first step to constructing it.

Links
