ChatGPT is coming for your job, and that’s good


Image: The ChatGPT logo on a phone in front of the OpenAI logo. (gguy/Adobe Stock)

According to a new study from researchers at the University of Pennsylvania and OpenAI, if you’re an accountant, translator or writer, your job prospects are bleak. By analyzing which jobs could be done at least 50% faster by generative pre-trained transformers, the report authors suggested that 20% of the U.S. workforce is at risk of being rendered obsolete by large language models like ChatGPT. It’s a scary prospect. It’s also likely wrong.

As we’re learning with developers, yes, large language models can eliminate some repetitive tasks but, no, this doesn’t make software developers obsolete. Done right, it makes them much more productive. The same can be true of other jobs and industries. The trick is to learn how to harness the power of LLMs without being mowed down by them.

Death of the developer?

In some ways, software development should be highly susceptible to GPTs. For any LLM or GPT to produce good results, it needs training data, and the better the training data, the better the output.

For software, training data is vast and easily accessible. Small wonder, then, that products like GitHub’s Copilot have amazed some developers with how quickly they can improve productivity. For others, these results have prompted doomsday declarations of the death of the software developer.

The mixed reactions are understandable. Take, for example, the ability of Copilot to write code for the developer. You can see that as a replacement for the developer, or you can see it as an enhancement. The developers I follow are in the latter camp. For example, some who have tried Copilot find it quite additive and addictive.

“I have grown used to Copilot uncannily inferring what I am trying to do after writing the first couple of words,” wrote developer Manuel Odendahl.

“You get the LLM to draft some code for you that’s 80% complete/correct [and] you tweak the last 20% by hand,” suggested Sourcegraph developer Steve Yegge.

That’s a significant performance boost, and it’s accruing to those developers who figure out how to put LLMs to good use.

But it’s more than that. For Simon Willison, founder of the open-source project Datasette, GPTs let him be dramatically more ambitious with what he codes because they change how he codes.

“ChatGPT (and GitHub Copilot) save me an enormous amount of ‘figuring things out’ time. For everything from writing a for loop in Bash to remembering how to make a cross-domain CORS request in JavaScript — I don’t need to even look things up anymore, I can just prompt it and get the right answer 80% of the time,” noted Willison.
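As a concrete illustration of the kind of boilerplate Willison is describing, here is a minimal sketch of a cross-origin request in TypeScript, assuming a browser context and a hypothetical https://api.example.com/quote endpoint. The snippet itself is trivial, which is exactly the point: an LLM can draft something like it instantly, leaving the developer to verify rather than look it up.

// A minimal sketch of a cross-domain (CORS) request, assuming a browser
// context. The endpoint URL is a hypothetical placeholder; the server must
// still send an appropriate Access-Control-Allow-Origin header for the
// request to succeed.
async function fetchQuote(): Promise<string> {
  const response = await fetch("https://api.example.com/quote", {
    method: "GET",
    mode: "cors", // ask the browser to apply CORS rules to this request
    headers: { Accept: "application/json" },
  });
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  const data: { quote: string } = await response.json();
  return data.quote;
}

fetchQuote().then(console.log).catch(console.error);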

In other words, a technology that could replace developers hasn’t, and won’t. Not for those developers who learn how to make the LLMs work for them, rather than in place of them.

Which brings us back to one of the report’s central arguments: “Most occupations exhibit some degree of exposure to LLMs, with varying exposure levels across different types of work.”

Furthermore, “Roles heavily reliant on science and critical thinking skills show a negative correlation with exposure, while programming and writing skills are positively associated with LLM exposure.”

This may suggest how little the report authors understand programming and writing; both involve heavy doses of critical thinking.

Bloody awful poetry

In the report, among the list of occupations most exposed to replacement by GPTs/LLMs, there are some head-scratchers. Take public relations specialists. If a PR person’s job is writing press releases, I’d agree with this assessment. The average press release sounds like it was written by a computer, and not a particularly advanced computer. But this isn’t what good PR people do. They build relationships with journalists. They try to understand shifting industry narratives and how to incorporate their company’s products or services therein. They are, in summary, thinking about content and its place in a wider context, rather than just mindlessly outputting press releases.

Or take another example: the report authors conclude that poets, lyricists and creative writers are among the groups most at risk of being mowed down by LLMs. Never mind that the training data for the LLMs is human-generated content (in this case, poems, lyrics and prose). That means the machine is always dependent on a person to give it the semblance of smarts.

Going further, it’s superficially impressive to tell ChatGPT to write a talk, short story or poem for you, but in my experience the results sound a bit tinny, a little off or even derivative. I have no doubt that ChatGPT could do some content marketing copy on my behalf because, let’s face it, most content marketing is a bit derivative and dull. It’s designed to speak to machines (SEO, anyone?) and, hence, doesn’t even try to be great writing.

Even great writing is a bit derivative. Steinbeck’s “East of Eden” is a retelling of the biblical Cain and Abel story, for example. But anyone who thinks ChatGPT could come up with that masterpiece of creative writing is way too high on their LLM paint thinners. Great writing emerges from human genius, articulating common themes in uncommon ways. The day I see that come from a prompt I drop into ChatGPT will be the day it’s all over for the human race, but guess what? That day isn’t coming.

Not now. Not soon. Not ever. Machines, as with the development examples above, are good at incorporating human-created input and mimicking it to generate human-acceptable output. But they’re not ever thinking through the all-too-human experience that gives rise to great literature, just as they’re not able to grok and respond to the business problems that great developers resolve with code.

Instead, we have a happy union of people and machines. How happy that union will be for given industries and the people therein depends on how well they use GPTs to remove repetitive tasks or code so that they can focus on the innovative, human side of their jobs.

Disclosure: I work for MongoDB, but the views expressed herein are mine.


