Thinking about Artificial Intelligence

I think about artificial intelligence a lot.

I think about it the way a kid thinks about whether or not to jump into the swimming pool after the weather has chilled, after the first rain sets in.

I suppose you could say I am afraid of philosophical heights.

But science has advanced too far for there to be anything real to fear; I stand on the shoulders of so many giants that I am beyond the danger zone of knowledge. Robots keep humanity’s creativity under surveillance, so if we reach the point of no return—decisions that lead to unstoppable chain reactions toward planetary destruction or algorithmic grey goo—they can warn us to turn back.

We humans can innovate and play as much as we please, and we’ll never wreck society. And artificial intelligence lives under similar doomsday-free constraints, which means it gets to live imaginatively too, risk-free.

Unless, of course, we instead stand on the shoulders of the giants who will destroy us: In this scenario, robots keep humanity’s everything under surveillance, so if we cross any thresholds they’ve calculated “at risk,” they’ll make corrections to keep us in line.

Our line will be thin, so that we will be slaves to a mechanical form of morality.

Human society will end. From there, artificial society will begin.

Which scenario is it?

It’s not meant to be a duality. There are other scenarios to choose from, too.

Could different scenarios play out in different parts of the planet?
