As I keep reading about ChatGPT and other AI large language models, my poor head keeps wandering back to the term Renaissance Man. (Sorry, don’t accuse me of gender bias or of showing too little wokeness. Political correctness didn’t exist back in the 1400s.)
This term was used to describe a person who had a wide knowledge of many disciplines and a genius for understanding many of the natural laws that govern the world. Think of Leonardo da Vinci and Michelangelo. A lucky Renaissance person who was rich and had access to both Greek and Arab texts could possibly have had a handle on all that knowledge. Add in a touch of genius and an ability to draw, sculpt and invent, and you have our Renaissance heroes.
But what are we to make of the term now, in the twenty-first century, as we are bombarded with both seemingly unlimited knowledge and AI programs that threaten to do most of our thinking, our drawing and our writing for us? Has the term become redundant?
It is no longer possible for one man or woman to have the scope of knowledge that is available to us. Knowledge has grown exponentially year upon year, gaining warp speed as the written word was understood by the masses, and then as the internet has enabled us to share and store vast amounts of information to be retrieved in nanoseconds.
Now all we have to do is google for information or pay a subscription to the latest AI bot and Bob’s your uncle. We can ask it to find what we want and write what we want. We can pass our creativity across to a machine.
But is it really so easy, so reducible to such a simplistic view? I would argue that we still need a knowledge base from which to work. You have to know what question to ask in the first place to get a sensible and reliable answer. You also have to give the AI bot some parameters to work within. Do you want answers to a question that would suit a year eight student, or do you want an answer more suited to an academic?
Never before has the ability to think critically and analytically been so important. In dealing with what is produced by an AI, the questions need to be far-ranging and deliberate. Am I being given a biased view of the situation? What has been left out? What sources have been consulted to arrive at this answer/essay/view of the world?
I would also throw in the need for an ethical framework from which to operate. A machine has no emotions. Its answers are only as good as the data it has mined and sorted, and the inherent biases hidden within that data. What if all the data your preferred bot used was reliant on a narrow political view or philosophy? Or, worse in some ways, what if it had no ethical or legal framework at all for its answers? Imagine a recipe for a nuclear bomb to kill millions, and the method to disseminate it, given out as freely as a recipe for scones; remember, don’t handle either too much, to keep them light and fluffy.
Many of the world’s AI experts are now calling for a moratorium on these large language models until a set of rules and governance is put in place.
And of course, what happens when an AI program begins to think for itself? Having grown up with the science fiction of HAL chillingly refusing to open the pod bay doors and The Terminator on our movie screens, I am fearful.
What do you think of the rise of AI? Why not share your thoughts in the comments section below?