I have been fascinated with technology since I was a teenager, and probably before. I have been professionally involved with computers, and with their use to collaborate and work with others, for the majority of my career. At my age, it’s common for me to be impressed by humanity’s newest step forward. But it is vanishingly rare that I see something we’ve accomplished that makes me think of Arthur C. Clarke’s quote more than what companies are now selling as “artificial intelligence”.

Any sufficiently advanced technology is indistinguishable from magic.

First, I saw the systems that generate art from text prompts. Then I used ChatGPT for the first time, and I felt that magic. I knew machines were generating this material, but how they do it remains outside my ability to understand. It feels magical.

As a quick sidebar: as someone who also grew up reading science fiction, a genre with a very specific understanding of what it would let its authors get away with by pointing at something and calling it “A.I.”, I’ll say that a large language model system like OpenAI’s ChatGPT is not artificial intelligence. It isn’t even worth contemplating in the context of the Chinese room thought experiment. And before I leave this sidebar: when I hear people refer to these systems as “AI”, I feel the same way I did the year “hover boards” were a thing and did no hovering at all. I watched Back to the Future II growing up, and I know what a hover board is.

What OpenAI is doing right now is working to scare the politicians in Washington into entrenching its business model. If I were to ask you which of the two right-wing political parties that run the United States of America is more against “regulation”, I bet you’d guess the same way I would. And yet, right now, it appears that they’re being successfully lobbied into regulating “AI” based on unrealized threats and fears that aren’t even in the immediate future. There are apparently people who think that ChatGPT is Skynet and that it’ll unleash a mechanical apocalypse on us very soon.

Let me show my work on this.

OpenAI has a blog. On May 22, 2023, they posted an entry titled: Governance of superintelligence. In it there is thoughtful discussion about looking ahead and getting in front of problems that might imaginably arise. It leads with an analogy to nuclear power - just to make sure the argument starts off on a dangerous foot. It preaches international cooperation on the development of these technologies. To OpenAI’s credit, I commend them for taking some initiative on this, as I’d bet no money on members of the United States Congress having the technological foresight to contemplate this and advocate for “cooperation”. Having said that, I think it’s a smoke screen. Here comes the entrenchment:

We think it’s important to allow companies and open-source projects to develop models below a significant capability threshold, without the kind of regulation we describe here (including burdensome mechanisms like licenses or audits).

This is the line OpenAI wants to draw. Anything that might possibly compete with what it is doing will have to clear a bureaucratic hurdle, and that hurdle will come with an economic cost.

A federal law requiring extensive regulatory participation for a technology whose dangers are completely unrealized, and which has not yet delivered on its promises, is exactly what Porter’s five forces analysis would call entrenchment against the threat of new entrants. OpenAI is trying to make sure that no other organization can follow behind it and make that money.

The technology still, right now, feels magical to me, but it’s also marketing snake oil. How about the three men whose lives were ruined because “artificial intelligence” told law enforcement they were criminals? (I’m completely against governments using facial recognition software.) How about the New York lawyer who submitted a legal filing citing nonexistent case precedent that ChatGPT just made up? (That one is just as much on a credentialed lawyer for not double-checking the machine’s work.) Or how about the public school systems that continually allow themselves to be turned into prisons and still can’t keep children safe, including Evolv’s “artificial intelligence” scanners that apparently don’t know what knives are?