The world is agog right now over generative Artificial Intelligence. There has been a spectacular rise in what these systems appear to do, and an incredible amount of investment is going into them. There is a great deal of discussion about how much Artificial Intelligence will change the world, and in what ways. This runs the gamut from dystopian to utopian: AI will destroy us all, or AI will solve all the world's problems.
I respectfully submit that AI won't do any of those things. Specifically: We will make the world, not AI. The real question is how we position this new technology as we do it.
This question of how to position AI is an important one, because to answer it we need to address the fundamentally deceptive nature of this technology. It's not what anyone says it is. The deception begins with the name, "Artificial Intelligence".
I've written another essay on why I don't treat AI as a social interaction; you can see it here. The upshot is that I don't want AI friends or an AI girlfriend: those aren't real, and they subvert the idea that the computer is just a tool for me to use.
But I want, specifically, to address another issue with AI here: It's not really very intelligent.
I go into this a bit in the other essay, and talk about how there can be different kinds of intelligence - dance, for example, or cooking and enjoying food. But I don't pull this thread very far, and it's critical.
Because one of the uses of generative AI is now 'research'. Generally this means using AI as a tool to comb through mountains of data, either in the form of texts (the vast piles of academic papers generated by the 'publish or perish' system are ripe for this kind of harvest) or raw data from larger data sets. And the results can appear fantastic: it would take a human being a very long time to read thousands of articles and summarize them, ranking them with respect to relevance, and so on. This seems like it would be an awesome research tool.
But I'm extremely leery of this, and my concern has several different aspects.
First: Yes, it would take a human a long time to read that. That's what academic training is for. Doctoral researchers spend time in guided exploration through reams of written works. Each pursues an individual goal (their specific theoretical niche, say). The field relies on the range of these individual goals to move the field forward, train the next generation of scholars, winnow out intellectual dead ends (that is, research directions that seemed promising in the past but are not), and build the collection of knowledge in the form of a society of researchers. Simply claiming that ChatGPT can imitate a PhD-level response to any given topic does not build this collection of socially embedded knowledge.
(And this is kind of the point: The tech bros who are pushing AI as 'the answer' are deliberately trying to undermine this society of researchers. AI is not intended to help researchers; it's intended to show them that the tech bros are really the smart people, and that the rest of the world just isn't seeing the 'big picture'. Naturally most of the tech bros are young; perhaps they never attended college, or they dropped out of their programs to engage in an activity that seems to advance knowledge but, strangely, is also driven by making money. A key part of this subterfuge is to insist that these two different things are actually the same. In fact, building a social architecture that allows people to become researchers and navigate the structure of an intellectual field is a much harder problem than building generative AI.)
But second: There is an assumption here about what researchers do: they read papers and write about them. Yes, that's kind of true. Certainly I'm a researcher, and this is mainly what I do. But my training is in Anthropology, and one of the fundamental assumptions that drives Anthropology as a field is the belief that the vast majority of knowledge in the world - and I will even say the vast majority of valuable knowledge, even though it's sometimes difficult to know what's valuable in advance of a specific challenge - is not contained in the mountain of academic research papers.
The vast majority of knowledge in the world is still only found in the world.
This is the belief that makes Anthropologists dedicate a portion of their lives to going to an unfamiliar community, often one that is very far from the halls of Silicon Valley, and hence not rich, not industrialized, and maybe not even integrated into the modern world system of trade and exchange - what was once called (incorrectly) 'primitive' - and living among the people there.
Human beings can make their world in many, many ways. Most people working in AI have actually experienced only a tiny slice of the world - but this is simply a truism, because most people in their lifetimes can only experience a tiny slice of it. Dedicated Anthropologists who go to live in a remote (for them) part of the planet can spend time there, but while they are there they are not elsewhere, and time is finite. The full bodies of knowledge that are embedded not just in an academic research society, but in the global society of humanity (as well as all of its past states that are now in historical records and archaeological sites), offer vastly more than the data used in the AI chatbots.
At best, the AI bot can answer not "what is the answer to this question?" but "what might an answer to this question look like, given a body of input knowledge?" When we surrender our judgment and believe we are getting responses to the former rather than the latter, and accord the bot the authority of 'knowing' (which it cannot do), we lose the recognition that it can only answer based on a tiny slice of the world.
This is subversive in lots of ways. For example, early on, when asked to test the then-new ChatGPT, I entered the prompt, "Who is the greatest tennis player of all time?" The output was a paragraph comparing Rafael Nadal, Roger Federer, and Novak Djokovic, but it ended with the suggestion that the question was subjective and opinions would vary.
My first reaction was to wonder whether this was more racist or more sexist. My prompt, after all, had not specified whether women's tennis should be considered, but I thought it would have been appropriate to mention Serena Williams.
However, after thinking this, I reconsidered: The response was not racist or sexist. It can't be. Because ChatGPT is not a person.
Of course we need to be aware of the biases in the input data for these models, and it is perfectly fair to evaluate and criticize the responses much as we would if a human had generated them. But notice the slippage: I could look at a human's response and ask whether that human is sexist or racist; that question doesn't fully make sense when the response was generated by a bot. Consequently, we look to the input data for the bot and the way it was trained. If we find fault (and I would consider racism and sexism faults), we find humans to blame.
But it is equally important to realize that there is another opportunity: the racism and sexism can inhere not in the generation of the response, but in its usage. If I actually believe Rafa, the Joker, and Roger are the greatest tennis players of all time and ignore Serena just because the bot told me so, then I, not it, am the sexist and racist one.
But in order to make that judgment, I need the training and experience to know I should. This winds things back to the notion that I would need to be the one conducting guided research: ChatGPT can't be a PhD-level scholar; I can. ChatGPT can only answer, 'what might PhD research look like?' We are responsible for deciding what qualifies and what does not.
When it comes to Silicon Valley's promises that they will use AI to solve the world's problems, we should respond with a collective rebuke. They are trying to use their tools, which do very small things, to take power away from us. We should instead focus our efforts on building our knowledge into our communities, with every child, young adult, and mature researcher endowed with the ability to craft their role in this society and contribute to it. They will be the ones to identify the problems, to crawl through the myriad possible solutions, and to evaluate the opportunities in each direction.
And the community will have to decide on it - we cannot outsource that to AI, nor blindly believe the Silicon Valley tech bros who claim they know best. Perhaps ChatGPT can play a small role in this, but despite the fact that it can generate amazing texts and pictures, I remain generally unimpressed. It can do some interesting things. It cannot replace us individually, and it cannot replace us collectively.
We will need to build our world ourselves, and there are many more possible worlds than ChatGPT can know.