The AI Explosion

“I know not with what weapons World War III will be fought, but World War IV will be fought with sticks and stones.” - Albert Einstein

AI development is moving fast. Over 1,000 high-level tech leaders have called for a pause. Others are refusing to stop. AI continues to accelerate.


I find this similar to the development of the atomic bomb by the Manhattan Project. In case you didn’t know, it wasn’t in Manhattan. After college I lived in Washington State, and my wife worked on the Hanford site, one of the government sites where the Manhattan Project took place. The site is now a larger facility focused on the many facets of the nuclear industry. However, the mystique still exists.


Many of the counterpoints to AI profess that it will be the worst thing ever created for humanity. When the atomic bomb was dropped on Nagasaki, similar phrases were used. The atomic age had begun.


The 1950s were filled with artwork depicting the many elements of the nuclear industry. Kitchen wallpapers sported the atom. There was an acceptance that this horribly terrible, city-killing “thing” could be harnessed for good. I see similar arguments on both sides of AI. Everything has an equilibrium.


AI is different, though. The atomic bomb was created in a highly secure program. Aside from the 1,500 leaks, it was not disclosed to the public until after its development and use. Even the people who worked on it were kept in information silos, and very few high-level people knew the entirety of the project. Many of the known dangers had been identified, and while the bomb remained a highly dangerous item, uranium and plutonium were kept safely locked up, away from the public. The public benefited from its existence without ever needing to interact with it.


Nuclear technology continues to be developed in secure locations around the world, and governments work hard to keep the public safe. Accidents do happen: Three Mile Island, Chernobyl, and others. History has proven that even under a high level of scrutiny, major accidents can and have happened.


AI, in contrast, is being developed out in the open. The public interacts with it daily, and the AI feeds off that interaction. There are no safeguards, admittedly because the edges of AI haven’t been discovered yet. All of this creates a growing margin for error, and thereby for disaster.


I am not, and will never be, an advocate for government intervention on this. However, every company and employee working on AI needs to sit back and seriously think about the harmful side effects of their work and how to mitigate the associated risks. Your family, friends, community, region, and world are at stake. Dare I say, humanity is at stake.


The Manhattan Project assessed its risks constantly and worked to mitigate them. The nuclear field continues to do this every day: teams of engineers work on safety processes for reactor prototypes and try to create the safest systems possible.

Unfortunately, as humans, we learn best from failure. The beneficial algorithms created at the onset of the internet age have proven to have harmful side effects, and only a few people are working today to mitigate those problems. The risks associated with AI are arguably higher, and potentially insurmountable. The question is not whether we will learn from a mistake, but whether we will be able to repair it sufficiently, or quickly enough, to survive the fallout from that mistake.

So where do we go from here? I like the benefits of AI, the same way I like the benefits of nuclear power. However, I wouldn’t give control of a nuclear bomb to just anyone. Should we treat AI the same way?
