While AI is hardly a new topic to anyone at this point, given the constant discussion of ChatGPT and other large language models in the public sphere, many people seem caught in an uneasy middle ground between fear and acceptance. The fear is justified. Not long ago, we assumed the more ‘intellectual’ professions would be safe from AI.
That assumption has been turned on its head: the latest model from OpenAI, GPT-4, scores in the top 10% on the Bar Exam, with similar results on the SAT. Even these scores don’t fully convey the apparent ‘intelligence’ of these machines. Some of you will surely remember the article published earlier this year that was itself written by ChatGPT.
However, these fears should be weighed with a healthy dose of skepticism. While there is a real worry that these systems will soon take over societal responsibilities, various legal and ethical hurdles have yet to be cleared. The cat is out of the bag in the sense that the technology is now known to exist, but the software’s legality remains murky.
Without rehashing the previous article on the subject, the issue essentially boils down to the rights of the humans whose work was used in the training set. If there is a serious push for human ownership of creative works, I would not be surprised to see these models restricted to private use, with public access tightly controlled, if only to head off potential lawsuits.
That’s not to say these language models won’t become more integrated into society over the long term. They can already do valuable work, saving money and time compared to humans, and on minor intellectual tasks humans will be out-competed. Academic-style research is likely not far from the chopping block either; rather than replacing the academic outright, a model might serve as an aide, spotting connections between subjects that a lone person couldn’t. Time will ultimately tell.
Perhaps the biggest issue that long-term use of these models will bring is this: how do we keep them from rendering large swathes of the population economically useless? The world is not just the economy, and there is intrinsic value in all human life. If these models are integrated, how do we keep purposelessness from becoming the default state for millions of people? It’s worth remembering that the proliferation of these models is ultimately ours to decide, and we could choose to stop it.