It has always bothered me that scientists have a different way of viewing the world than I do. To my mind they operate on the ‘because we can, we must’ philosophy, doggedly pursuing inventions and developments with little concern for ethics, morals and the ramifications of their discoveries.
The global community then ends up having to play a version of catch-up, attempting to formulate laws and rules of governance to prevent the damage these developments can wreak.
Think about the development of ‘The Bomb’. After the first successful test, the father of the Manhattan Project, J. Robert Oppenheimer, quoted chillingly from Hindu scripture: “Now I am become Death, the destroyer of worlds.”
He knew the terrible change the invention of nuclear weapons had unleashed, famously declaring that “we knew the world would not be the same”.
For the first time in the history of humanity, humans had created an invention that could ultimately wipe them out. Rather an example of supreme stupidity, I would have thought.
Fortunately, some ‘sense’ eventually prevailed, and under the frightening logic of mutually assured destruction, the world’s nuclear powers finally agreed to nuclear non-proliferation treaties. So far, the world has escaped Armageddon.
Consider now the development of social media – Facebook, followed by others. Perhaps not quite in the realm of material- and life-destroying technology; nevertheless, these media platforms have been highly destructive of people’s lives, reputations and happiness.
Ostensibly, Facebook was marketed as a way to connect people across the planet: to share ideas, to stay in touch, to alleviate the sense of isolation that many people feel. That is the spiel, anyway. In reality, Mark Zuckerberg’s first venture, Facemash, was built to rate female students on his university campus; its successor, Facebook, then exploded to make its founder a fortune through advertising. The power of this platform and others to disseminate hate, to promote lies and disinformation and to enable online trolling and bullying has been profound.
It is only recently that attempts have been made to prevent these elements online. And some of those attempts have been half-hearted, to say the least.
Now we have the AI explosion. Some see this technology as freeing mankind from boring and time-consuming activities. AI can do tasks in nanoseconds that take humans many hours or even weeks. Others see it as a potential danger to mankind. Yes, they say, it has the power to replace many workers’ jobs. They are also wary of its capacity to evolve and learn – perhaps, someday, beyond human control. Lethal autonomous weapons systems, operated by AI without human intervention, immediately spring to mind.
There does appear to be some attempt at preventing damage to humanity. Under an additional protocol to the Geneva Conventions, the Martens Clause makes it plain that emerging technology should comply with “the principles of humanity and the dictates of public conscience”.
However, an AI system playing out a predetermined algorithm has no empathy or compassion, no concern for human loss. So, what do we do? Wish that we lived a simpler life with limited technology? Perhaps keep our fingers crossed and pray to whatever god is currently in favour?
With all scientific developments there is a risk–reward equation. All of the above technologies have brought benefits to mankind as well as downsides. It is the world community’s slowness to see the implications of these developments and to build safeguards against them that is so often part of the ongoing problem.
What do you think? Should ethical issues be ironed out before technology is implemented on a global scale? In a classic ethical question: Just because we can, should we? Which advancements do you think could have used more consideration? Are you concerned about the effects of AI?