I received some pushback on last week's email, about GPT-3, from people who think that what we're seeing isn't anywhere near as impressive as it looks.
The broad sceptical argument about the AI is twofold. For text, the criticism goes, the AI has achieved the goal of writing as though it were a godawful human. The plus side is that this means it has none of the telltale signs of machine generation – randomly changing subject, forgetting what it was discussing a paragraph earlier, or describing impossible situations.
But the minus side is that the overall content it produces is still unusably bad. Most obviously, it cannot really produce factual content, since it has no apparent ability to distinguish between facts you can assert ("Iran is a country") and facts which have an external requirement before they're true ("Friends is on the TV").
Perhaps a good way of describing it is that GPT-3 understands what truth looks like, but has no compunction about lying. It will merrily describe things that could be true, as well as things that are true, and not really care about the difference. That is… sub-par!
And it's not clear that there's an easy way out: even real human beings make this error occasionally, as evidenced by the endless attempts to algorithmically defend against "fake news" by checking whether the facts in a story have previously been reported elsewhere.
When it comes to material where truth doesn't matter, like fiction, the service is good at producing content which feels undeniably real, but which no human being would ever voluntarily read.
It feels unfair to criticise on these grounds. "Isn't it enough that a robot can write fiction? Why must it win the Booker Prize while it's at it?" But I can see the point: this is only meaningful outside the academic world if it's useful, and it's only useful if the text is more than "not nonsense".
I'm obviously inclined to agree with those criticisms. As a professional writer, I have a rather large interest in emphasising the advantages wordsmiths hold over robots, and I really do think it's going to be a long time until my job is at risk from GPT-3 and its siblings.
But there's a risk of overlooking the transformative nature of this sort of technology by only thinking about the jobs it can replace. Yes, currently, 'horrible writing which no-one would voluntarily read' is a fairly useless thing to have, because why pay to write things that won't be read?
Except that there are obvious places where simply having a lot of text is valuable. If you're shooting a film in high definition and a character's reading a newspaper, for instance, the text on the front page doesn't need to be good, but it does need to make sense. The same is true if you're providing textures for a video game, or even, at the far end of the possible, building prop books for an immersive experience.
Imagine being able to fill a library in a fake Hogwarts with artificially generated spellbooks, for instance. No-one's going to read the contents, but the magic of picking any given book off the shelf and seeing real text that matches the title on the spine would be incredible.
And then there are the uses I can't predict, because if I could… I'd be out there raising capital for them. If you were able to predict Uber two weeks after the iPhone keynote, you're a better futurist than I am, but the seeds of the app's existence were all sown that day.
The other plank of criticism aims at the more technical uses of GPT-3, exemplified by the tools that allow the model to code for you. I linked out last week to some: a version which will provide CSS to build a website you describe, for instance, or one which will automatically produce simple shell scripts to achieve tasks described in natural language.
The argument here is less that the model isn't good, and more that what it's doing isn't actually removing the hard part of coding. That is, you have to specify so precisely what you want from it that you end up, in effect, coding. "Build me a website that can take orders" doesn't work; "build me a website that has a button next to each entry in a list of items, and when you press a button, the item is saved in a shopping cart" does, but requires you to describe the problem in a way a computer understands. To fix any problems, you have to go into yet more detail – maybe defining "next" more precisely, or adding in detail about how prices work – until eventually, you're just writing pseudocode.
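To see how close that second, "working" description already is to code, here's a minimal sketch of the shopping-cart spec in Python – the names, the items, and the plain functions standing in for a web UI are entirely mine, for illustration, not the output of any real GPT-3 tool:

```python
# The spec, almost word for word: a list of items, a "button" next to each,
# and when you press a button, the item is saved in a shopping cart.
# (A plain function stands in for the web UI; names and items are made up.)

items = ["apples", "bread", "milk"]  # the list of items on the page
cart = []                            # the shopping cart

def press_button(item):
    """What the button next to `item` does: save the item in the cart."""
    cart.append(item)

press_button("bread")  # the user presses the button next to "bread"
print(cart)            # → ['bread']
```

The point being: each sentence of the "precise" request maps almost one-to-one onto a line of the program. The hard part – saying exactly what you want – was done before the code was written.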
I have sympathy for this criticism. Friends who are coders have, I know, plenty of experience dealing with people (clients) who don't think like coders, and of having to patiently explain why what's been asked for is impossible, or why translating a "simple request" into a finished site takes so long.
And they're generally used to learning multiple programming languages, and to seeing how most things eventually boil down to a fairly small vocabulary to which you apply the same principles of coding. If they can describe a problem in pseudocode, they're 90% of the way to writing the code itself.
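To illustrate how short that last 10% can be, take a made-up task like counting the error lines in a log. The pseudocode a coder might jot down is, near enough, already the Python (the log contents here are invented for the example):

```python
# Pseudocode: for each line in the log,
#             if the line mentions "ERROR", count it;
#             report the count.
# The real Python below is barely any different.

log = [
    "INFO  service started",
    "ERROR disk full",
    "INFO  retrying",
    "ERROR disk full",
]

error_count = sum(1 for line in log if "ERROR" in line)
print(error_count)  # → 2
```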
But I think there's also a corresponding empathy failure. Professional coders are, almost by definition, people who understand how to think like a coder. And they can, of course, code. But I would wager there's a larger pool of people than they expect for whom the leap from "able to think like a coder" to "able to code" is a leap too far.
After all, I can think like a French person – I know the basics of French grammar. I understand how to phrase an idea in English. And so I'm 90% of the way to saying it in French too, since all I need is the pesky vocabulary and I'm done.
Except, of course, I do not in fact speak French.
I still don't think GPT-3 will take over from coders. But I do think there's a fairly large pool of people for whom being able to spend a day sorting out an automatable task, without needing to spend a year beforehand learning how to code, is going to change their lives.