Create AI Problems, Profit from the Panic
I’d written a little while ago about how OpenAI is sowing the seeds of panic in the public for the very product that it’s making. I’d pointed out that its own public blog clearly shows that it is trying to entrench itself against new entrants into its market. That’s basic monopoly work and a foul in any rational economic system - which I don’t believe the American economic system to be. In the late 1990s, the federal government successfully sued Microsoft for bundling Internet Explorer with Windows and steering users away from competing browsers, namely Netscape; the case was settled in 2001. Now Microsoft, Apple, and Google all engage in much the same behavior, and no one is suing them for monopolistic business practices. So I’m unsurprised that OpenAI can behave this way without any serious concern that the federal government will take issue with it.
I want to preface this by saying that - just as with the original promises of the Internet and social media - I heard the original promises of cryptocurrencies and thought, “Ah, this is something new and important.” I do believe that cryptocurrency work is trying to solve problems that exist now, but the people who hand-wave all of it away as a pyramid scheme and a scam aren’t wholly wrong. Case in point: thanks to an article written by Molly White (Worldcoin: a solution in search of its problem), I learned of a cryptocurrency built on the scary premise that AI will put people out of work and that it will, somehow, generate money for people from the economic activity of artificial intelligence. And who is the mind behind this cryptocurrency?
Worldcoin was founded by Sam Altman, the “tech visionary” du jour who is behind OpenAI. That’s right, the guy who’s going to sell us all the solution to a worsening AI-powered bot infestation of the Internet and to AI-induced mass unemployment is the same guy who’s making the AI in question.
Her whole article is worth reading. Worldcoin does set out to solve engineering problems that stand in the way of something I want - democracy, and more efficient democracy. Currently, the tiny amount of democracy that happens in the United States happens (typically) twice a year and requires either a bit of paper sent to my mailbox or a trip to a church basement. These are, in practice, currently the best ways we have of addressing voting concerns. From Molly White’s article:
Identity projects aim to answer one or several of the following questions:
- Is this user a human?
- Is this user a unique human? (i.e., do they only control one identity in a given network?)
- Can this user prove they meet some criteria? (e.g., are they over 18? are they a U.S. citizen?)
- Can this user prove they are a specific person? (e.g., does Molly White control this identity?)
Mail-in voting and in-person voting achieve “good enough” answers to all of the above. However, if our society wanted to move to electronic voting and, better still, direct electronic democracy, the solutions currently in use are not going to be “good enough”. All four of those points are technical hurdles that have not been solved even for systems less important than democratic governance. As a reminder, and then a note: I don’t look towards presidents to improve American governance; however, America nearly did pivot towards being a democracy with the Ross Perot campaign of 1992, which had a platform plank of electronic democracy. If that idea had gained traction, I do wonder whether by 2023 the state of the art would have improved enough that those four issues would have been resolved. I’m enough of an optimist to believe the United States would have achieved “good enough” solutions to those four challenges in an electronic democracy - but we’ll never know.
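To make those four questions a little more concrete, here is a minimal, hypothetical sketch in Python - an HMAC stands in for a real digital signature, and none of the names or claims come from Worldcoin or any actual voting system. It shows how a signed credential could answer the “meets criteria” and “specific person” questions, and why the uniqueness question is the one a signature alone cannot answer.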
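```python
# Hypothetical sketch: a registrar signs a set of claims about a person,
# and a verifier checks that the claims really came from that registrar.
# This is a toy (HMAC instead of a real public-key signature), meant only
# to illustrate the identity questions above.

import hashlib
import hmac
import json

ISSUER_SECRET = b"registrar-signing-key"  # stand-in for a real signing key


def issue_credential(claims: dict) -> dict:
    """The registrar (issuer) attests to a set of claims about a person."""
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": signature}


def verify_credential(credential: dict) -> bool:
    """A verifier checks that the claims were signed by the issuer."""
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])


if __name__ == "__main__":
    cred = issue_credential({"over_18": True, "us_citizen": True, "holder": "Molly White"})
    # Answers "does this user meet some criteria?" and "is this a specific person?"
    print(verify_credential(cred))  # True
    # But nothing stops the registrar from issuing a second credential to the
    # same person - "is this user a unique human?" needs more than a signature.
```

The point of the sketch is the gap it leaves: the first, third, and fourth questions reduce to “do you trust the issuer?”, while the second - one person, one identity - is exactly the unsolved part that makes electronic voting so hard.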
I also wanted to share some thoughts, with historical precedent, about the concern that artificial intelligence will cost people their jobs. This will never happen. Capitalism will cost them their jobs, not the artificial intelligence. I recall, late in my teens, having a friend whose parent loved to go on and on about “Mexicans” taking American jobs. I think I heard and believed that for quite a few years, but the working people of Mexico didn’t take anyone’s jobs. The owners of American companies determined that they could cut costs by outsourcing parts of the businesses they control to a labor market that would undercut the cost of having Americans do the same or similar work. If having someone draft emails for a company costs more than having ChatGPT do the same or similar work, the people who profit from that person’s labor may be inclined to profit more by cutting them from the payroll and replacing them with ChatGPT.
Personally? I look towards the rise of artificial intelligence with hope and optimism. How I look towards the ways the ruling class of the United States and the world will use these technologies is not so rose-colored. But those who are so concerned about artificial intelligence plunging even more of the world’s working class into poverty are probably not wrong, and they have historical precedent - the Luddites.
In my line of work, I’ve had decades of people describe themselves as “Luddites” because they believe themselves to be “bad” at technology, but I’m confident that many of them have no historical understanding of who the Luddites were. They were not people who were bad at technology, nor people who specifically feared it - they were people who were angry that their jobs were being threatened by increasingly efficient machinery. One of their primary methods of resisting the loss of their jobs to the machines that would replace them was to damage that equipment, which is probably why their name became a descriptor in the English-speaking West. I do wonder whether anyone who works at OpenAI, or at any of its few competitors, is proud of their important and futuristic work while also not trusting those who govern us to use it correctly - and, from time to time, tosses a monkey wrench into the works.