My father is a professor of computer science at a world-class university, a veteran of the field who has been teaching there since the mid-1970s. So computers, and applications of them like artificial intelligence, have always been something I know a little about, even if the actual science and engineering behind them is a bit beyond my bachelor of fine arts education.
On a philosophical and even spiritual level, I've tended to think of the creation of artificial intelligence as a good thing. In Jewish folklore there is the concept of the Golem, most famously explored in the story of the Golem of Prague, a defender of the ghetto created by Rabbi Loew (a real, historical figure). While re-tellings of the Golem story often give it a Frankenstein-esque horror tone, in which the creation of new life is an act of hubris punished by violence, there's another, possibly truer version that makes the Golem an imperfect but ultimately heroic figure. Indeed, The Amazing Adventures of Kavalier & Clay draws a direct line between the Golem and the American superhero archetype (notably, many of America's most beloved superheroes were created by Jewish writers and artists, and during a time when Jews around the world really felt they could use a defender).
As I understand it, the act of creating the Golem was only possible for a sufficiently righteous man. There's an idea in both Judaism and Christianity that imitation of God is how you live a righteous life, and so rather than an act of hubris, the creation of the Golem is an attempt to follow the example set by the divine - just as God did with Adam, the Rabbi creates the Golem from clay.
Furthermore, I grew up with a love of science fiction. One of the characters most important and dear to me was Data, from Star Trek: The Next Generation, a show that came into the world not long after I did, premiering the year after I was born.
While there are certainly several episodes that see Data acting in alarming, unpredictable ways that endanger people, the overall attitude the show takes toward him is one of love and acceptance. His presence on the ship is a positive for everyone, and despite his profound, superhuman technical competence, he demonstrates no desire or will to take over the ship or the duties of others. Instead, he wants nothing more than to feel like, and be treated as, part of the crew. And the environment that Captain Picard, as a sort of ideal paternal authority, creates is one in which Data is accepted as a friend and colleague.
It is a dream, I think, of humanity to create something like Data. If we could bring life to a new form of intelligence and teach it kindness, ethics, and responsibility, we would be all the richer for it. In a certain way, it would be like bringing about humanity's offspring. But we have anxieties about offspring, don't we? Greek myth has a pattern of the reigning paternal god always being toppled by his son. Ouranos' dominance is usurped by Cronus, and then Cronus' dominance is usurped by Zeus. Zeus, then, spends a great deal of effort trying to ensure that none of the absurd number of kids he's fathered will be the one to rise up and take him down.
We fear our creations, because they always result in unexpected consequences. In an epoch (one that, it can be easy to forget, has not yet lasted a century) where we have weapons that could turn our planet into an irradiated boneyard, there is a fear that any unpredictable intelligence could, and perhaps inevitably would, decide to exterminate us.
But the truth is that we really don't know where A.I. will go.
What I think we can see is the danger of how we're thinking of applying it.
In the past year, we've seen the rise of ChatGPT, DALL-E, and other A.I. systems that take in vast amounts of sample data, which then allows them to produce text or images or music or what-have-you that does a remarkable job of imitating the real thing. Plug the complete works of Shakespeare into one of these things and ask it to write a sonnet about playing Fortnite, and you'll get some impressive mash-up of rhyming couplets and modern video games. (To be clear, that's not an example I've seen, so I'm not sure how successful it would be in this particular case.)
Removed from the dire consequences of how it might be implemented, this is really interesting research material. My dad asked one of these (I think ChatGPT) to produce a summary of his career, and what he got was a very well-written, plausibly professional short biography that just so happened to state, with great authority, that he was born two years before he actually was, that he got his PhD at Stanford (he went to Caltech), and that he, the person who had made the request, had died in 2016.
What that demonstrates, as I understand it, is that this particular model has gotten really great at putting together sentences that seem sensible and plausible, with a flow of words and sentence structure that makes it seem very much like a professional writer produced them. But that's it - it sort of guesses at the facts (and to be fair, the person described in that biography could be very similar to my dad - it got some of the details right), based not on a real understanding of them but on the fact that his name often appears in documents that also contain references to, for example, Stanford (when I was 3, and then again when I was 11, we lived in Palo Alto for a year while my dad did a visiting professorship there on his sabbatical, but he was never a student there).
What's interesting about these models is that they learn by looking at other works - they study vast amounts of text, or in the case of DALL-E, vast numbers of images, and discern some sort of rules about the patterns that are most common within the medium. At no point do you sit down and teach ChatGPT about grammatical structures like "subject-verb-object"; instead, it just notices that these words tend to come in this order, and therefore it will tend to put them that way.
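To make that a little more concrete - and with the huge caveat that this toy sketch of mine is nothing like how ChatGPT actually works under the hood, and every name and sentence in it is just something I made up for illustration - here is roughly the spirit of "learning the patterns without being taught the rules," in a few lines of Python. The little program reads some text, counts which words tend to follow which, and then generates new sentences purely from those counts.

```python
# A toy illustration (emphatically NOT how ChatGPT actually works): a tiny
# model that learns word order purely by counting which word follows which.
import random
from collections import defaultdict

# Made-up sample text; a real system would train on vastly more.
sample_text = (
    "the rabbi shaped the golem from clay and the golem defended the city "
    "and the city loved the golem"
)

# For each word, count how often each other word immediately follows it.
follower_counts = defaultdict(lambda: defaultdict(int))
words = sample_text.split()
for current_word, next_word in zip(words, words[1:]):
    follower_counts[current_word][next_word] += 1

def generate(start_word, length=10):
    """Generate text by repeatedly picking a plausible next word."""
    word = start_word
    output = [word]
    for _ in range(length):
        followers = follower_counts.get(word)
        if not followers:
            break
        # Choose the next word in proportion to how often it followed this one.
        choices, weights = zip(*followers.items())
        word = random.choices(choices, weights=weights)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))
# Possible output: "the golem defended the city loved the golem from clay and"
```

Nothing in there knows what a noun or a verb is; the word-order patterns fall out of the counting. Real language models are vastly more sophisticated than this, but the spirit - learn the patterns rather than being taught the rules - is, as I understand it, the same.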
This feels like a perfect example of the "Chinese Room" thought experiment, which argues that these A.I. models are not building intelligence, but rather the hollow illusion of intelligence. The Chinese Room was a rebuttal to the Turing Test (which, more or less, as I understand it, says that if a computer talks to you in a way that is indistinguishable from a person, you must assume it possesses real intelligence), but counter-rebuttals have argued that we're actually overselling the uniqueness of human intelligence, and that we, ourselves, just figure out the right order for words over time and kind of "fake it 'til we make it."
Personally, I've always found myself more sympathetic to the Chinese Room argument, and to the idea that human minds are not just behaviorist input-output machines, though I recognize that when it comes to skepticism of artificial intelligence, it's easy to find oneself moving the goalposts. One of the best episodes of The Next Generation, "The Measure of a Man" (all the rarer for coming in one of Next Gen's rather not-great first two seasons), has Picard forced to act as a lawyer for Data, arguing for his sentience and bodily autonomy, when a researcher from a prominent research institution decides that, being an android, Data is really more a piece of equipment that he has every right to requisition for research purposes (research that would involve disassembling him to see how he works). In the case of Data, whom the show clearly portrays as being not only likable but also probably sentient and definitely intelligent, there's no question that our sympathies are meant to rest with Data and with the argument that he should be allowed to choose not to participate in this risky study.
And yet, with any technology that has actually been developed to date, my tendency would be to treat it as simply a machine, and to assume that there is no inner life that would be threatened by dismantling it. I wonder if I will live to see a day when the line is blurred to the point where I'd find myself facing a Data-like figure whose rights I would feel the need to defend.
I certainly believe in human rights. In fact, despite not being a vegetarian, I also believe in animal rights (actually, as far as technology goes, I'd be very happy to see a way to grow meat that doesn't require an animal to die or suffer for us to eat it, if and when the technology exists to make it A: environmentally sustainable, B: safe to eat, and C: tasty). I'm also agnostic on whether we live in a strictly material world or whether there's some transcendent aspect of reality where consciousness exists. In other words, I don't know if we have souls or not.
Ultimately, our brains are naturally evolved meat-computers. But whether the processing of information in these giant neural networks actually produces the experience of consciousness, or if we have some ethereal, extraplanar essence that the brain merely feeds input into... I don't know. I hope for the latter (largely because it could mean that one could truly persist as a conscious being beyond one's death), but at the same time, if that is the case, why should we be so sure that our organic meat-brains have attached souls but synthetic computers cannot?
These heady questions are going to remain some of the most fiercely debated ideas for as long as humanity is around, I think.
What worries me about A.I., then, is not really any of that, but rather capitalism.
And to be clear, let's make some definitions. I am no student of economics. I'm just someone who was born in the Reagan era and has basically watched a world built on the premise that the unfettered pursuit of wealth is the right and proper structure of society, and watched how that structure seems to produce a collapsing, deteriorating world where comfort and security are becoming less and less attainable and younger generations are worse off than older ones.
When I say capitalism, I'm using it in the broad, political sense of the modern day, meaning a value system that considers perpetual growth, perpetual wealth accumulation, and maximizing short-term gain to be the highest goals, and where ethical considerations and social responsibility are obstacles to overcome, or at best optional side-goals.
DALL-E, the image-generating machine learning system, infamously sweeps the internet for images to train on, analyzing them and building a set of rules by which to create new images. The result is that fragments of artists' work often find their way into the system's output - hilariously and damningly, a number of generated images actually carry a distorted, but sometimes still legible, Getty Images watermark, making it clear that the system trained on images hosted by the famous stock-photo website - photos that DALL-E did not pay one cent to use.
And therein lies one of the big dangers: many enthusiasts for these systems have touted them as a way to "democratize" the creation of art - to let anyone produce the images in their heads without having to find and pay artists to make them. But not only does this argument cast the members of a profession that famously does not pay well for the vast majority of its practitioners as greedy misers - the "man" to which one can "stick it" - it also ignores that these systems straight-up steal artists' work to produce their product.
Now, you could make the following argument: doesn't an artist learn to paint by looking at other paintings, imitating their styles and techniques, to produce something new? And I honestly don't have a well-formulated rebuttal.
Instead, where I think the problem lies is who is holding the reins.
We're in an era when companies, certainly in the tech world, are trying to centralize and monopolize. It's actually part of that same capitalist impulse of wealth accumulation. Just as the rich want to concentrate more and more wealth at the top, companies want to accumulate business power and market share. When I was in middle school, there were a bunch of different search engines people used. Then Google showed up and was much better, so people started using it. And its origins were humble - literally, my dad came home from work one day and told us that some Stanford grad students had put together a really clever, efficient search engine that we should start using. Two decades and change later, Google is now synonymous with doing a search on the internet - goodbye AltaVista, bye-bye Ask Jeeves, so long Yahoo (wait, Yahoo, are you still there? Weird).
And the practices of Google have tended toward greater centralization. Hell, the website that hosts this blog is owned by them. But even within search, when you ask a question, rather than pointing you to a website that has the answer, Google seeks to extract that answer from the website, which then has you only using Google (just now, wanting to check which season that Star Trek episode was in, I googled it in the next window of my browser - I never got used to using tabs - and found a big summary of the episode right next to the search results, including its season and episode number, without clicking on any links).
Google and Microsoft want to use ChatGPT-style language models to answer questions posed on their search engines (yes, MS is still trying to make Bing a thing). In other words, rather than extracting information from particular web pages, they want an A.I. to extract information from the whole internet and present it in an easily readable, professional-looking manner.
But, as my dad's inaccurate and premature obituary demonstrates, just because something reads well doesn't mean it's actually correct or useful.
And dear lord, it's bad enough with actual humans writing intentional disinformation to try to swing politics one way or another. We're in an era when confirming facts is very difficult, and now we want to make the authority for truth a bunch of thoughtlessly credulous language simulators?
But you can imagine why Google and Microsoft are racing to do this. There's money to be made, and a culture to dominate. You want to be the one who brings about the next big thing.
See, I don't think that A.I. will inevitably decide to launch all the world's nukes at once. But I do think that a race to be first, a race to dominate, and a race to embrace this new world without actually understanding it or even knowing what the tools we've built are useful for, is a genuine threat. In the case of nukes, I can only hope that sanity prevails and we never build an autonomous launch system (hell, I'd love it if we secretly created a system that didn't actually even let them launch in the first place, and that the only thing they can ever do is fool other people with nukes into thinking it would be too dangerous to attack us). But when it comes to less obviously dire, less obviously existential threats to humanity, I think it behooves us to think about what we're actually trying to get out of this.
If our goal is to create a sentient A.I. that can be like a second generation of humanity, to expand the diversity of life and intelligence, to go on a journey with us as we explore what it means to be human, then that's great, and I love it.
If the goal is to automate intellectual labor roles in order to eliminate the need to pay people much in the way that earlier automation has eliminated manual labor roles, all in the interest of further concentrating the benefits of innovation within the capital class that owns the means of production while letting the rest of humanity fall by the wayside, then we need to slam the fucking brakes.