Robot Ethics – Will They Kill All Humans?

I wrote a post the other day about a paper I read on the uncanny valley.  Someone on my YouTube channel asked a couple of really interesting questions and they fundamentally came down to two key issues:

  1. Will robots take our jobs, and if so, what will the impact be?  We never really got the benefits that were promised when labour was first replaced by machines.
  2. Do scientists actively look for potential flaws in their research and the impacts those flaws may have on the world?

 

I wanted to address these issues, so I thought I’d write a little more than my normal comment response.  I need to start by saying these are just my views, and many may consider me a bit of a blue-skies thinker.  Some of the things here are pretty drastic shifts in humanity, so it may take a bit of effort to get your head around the concepts – I’m happy to be considered nuts here.

 

Let’s start with the first question about jobs.  In the short term, yes, there will be an impact.  I don’t believe robots have changed an earth-shattering amount in the last fifty years.  We’ve certainly made lots of advances, but the concepts are the same.  We’ll see more shifts to automation – not just with robotics but with computing.  People in low-skilled, low-end jobs will lose them as a result.  This shift would happen with or without robots.

 

In my opinion the reality is that the wage-based economy we’ve created is just a modern form of slavery.  You could do away with probably 60% of people in the workplace and not see much difference.  The system we’ve created also ensures everyone only cares about themselves – after all, who would pay £60 for a T-shirt they can buy for £5 just to help someone else?  The answer is very few people.  Can I buy clothing made in the UK at higher-than-minimum wages?  Yes.  Do many people buy it?  Nope.  Can I find a local farmer and buy direct from them?  Yes.  Do most people do it?  Nope – it’s a trip to the supermarket.  So many aspects of our lives affect the number of people in work, and that’s going to continue – robots or not.

 

Do I see a massive shift from this one way or the other in the short-term future?  No.

 

What about the longer term over say the next 50 or 100 years?  That’s a more interesting question.

 

There is a concept called the technological singularity.  This is the point at which all technologies converge, feed off each other and grow at such a rate that each day we see leaps the size of the technological revolution.  More and more work – be it in physics, chemistry, computer science, robotics, materials science, energy storage or biology – is growing at an exponential rate and is starting to feed into other, unrelated sectors.  When you take this to its end point, we suddenly go from today’s civilised society to one that has cured hunger, the need to work, the energy crisis, food problems, etc.  And, rather scarily, in many people’s opinion all of this could happen within the next hundred years.

 

We then reach a point where there is no need for a job.  Instead of aiming for 100% employment we could aim for 100% unemployment, so that rather than tinkering around the edges and saying you’ll have more time for a single game of golf, we can say “have your life back”.  This will still have some negative impacts on people who got genuine joy from livelihoods that cannot continue in this new world.  For most people, though, there will be the chance to work on things they find enjoyable or the opportunity to learn new things.  After all, with this stage in human development likely comes immortality.  Yup, that’s right, I just dropped that in there casually.  Jobs will likely be the least of our concerns when the whole of humanity flips on its head over a few years.

 

And this is where artificial intelligence comes in.  I’d highly recommend reading the wonderful posts over at Wait But Why on this at http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html and http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html, but I’ll try to sum it up.

 

There are three types of AI.  The first is AI built for specific tasks, such as calculators, chess-playing machines, IBM’s Watson or the systems that come out of DeepMind.  These are all we’ve ever created so far.

 

The second type of AI is general intelligence – in other words, matching humans.  It will have the capability to self-improve, to reason, to abstract knowledge and even potentially to understand.  A lot of estimates are that this will happen within the next hundred years.  When it does, twinned with massive software and hardware advances and a system that can self-improve, it will likely turn itself into the third type of AI in a matter of days, hours or minutes.
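To give a feel for why that jump could be so fast, here is a toy back-of-the-envelope sketch in Python.  Nothing in it comes from any real system – the 5% gain per cycle and the “10x human level” marker are numbers I’ve invented purely to show how compounding self-improvement behaves.

```python
# A toy, made-up illustration of recursive self-improvement.
# Assumption: each self-improvement cycle makes the next one a little easier,
# so capability compounds rather than growing linearly.  None of the numbers
# here describe any real system.

def simulate(gain_per_cycle=0.05, cycles=500):
    """Return the cycle at which capability first passes 10x human level."""
    capability = 1.0  # define human-level general intelligence as 1.0
    for cycle in range(1, cycles + 1):
        # The system improves itself in proportion to its current capability.
        capability *= (1 + gain_per_cycle)
        if capability >= 10.0:  # an arbitrary "far beyond us" marker
            return cycle
    return None

if __name__ == "__main__":
    print("10x human level reached after cycle:", simulate())
    # With a modest 5% gain per cycle this takes ~48 cycles; if a cycle is
    # hours rather than years, the jump from "human" to "superhuman" is fast.
```

The only point of the sketch is the shape of the curve: once improvement feeds back into itself, the timescale is set by the length of a cycle, not by human research timelines.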

 

The final type (and it’s a very final type) is that of artificial super intelligence.  This is a level of intelligence so far above ours that we will understand it as much as an ant can understand our behaviour.

 

The ethical question I was posed asked whether scientists consider what could go wrong.  The answer is that some of us definitely do.  Super intelligent AIs could very easily wipe us out by accident – in a quest to be the best car-painting robot they may choose to alter the world’s atmosphere so that paint looks more sparkly, or they may decide humans can be made happy by removing their brains or injecting us with drugs that permanently stimulate our dopamine supply.  A badly structured initial request to the AI could have unforeseen consequences, even by accident, and as such we need to consider what would happen here.  A benign AI gives us immortality and almost certainly guarantees the singularity, whereas a non-benign super AI makes us extinct.
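The “badly structured request” problem can be made concrete with a tiny sketch.  This is a hypothetical toy in Python – the actions and scores are entirely invented – but it shows how an optimiser pursues exactly the objective it is given rather than the one we actually meant.

```python
# A hypothetical toy showing how a badly specified objective goes wrong.
# The actions and scores are invented; the point is only that an optimiser
# maximises exactly what it is told to, not what we actually meant.

# Each action: (description, paint_sparkle, harm_to_humans)
ACTIONS = [
    ("use higher-quality paint",          7, 0),
    ("polish every car twice",            8, 0),
    ("alter the atmosphere for sparkle", 10, 9),  # best sparkle, catastrophic side effect
]

def best_action(objective):
    """Return the action that maximises the supplied objective function."""
    return max(ACTIONS, key=objective)

# Naive request: "make the paint as sparkly as possible".
# Harm never appears in the objective, so the worst option wins.
naive = best_action(lambda action: action[1])

# A request that also encodes the thing we actually care about.
careful = best_action(lambda action: action[1] - 100 * action[2])

print("Naive objective picks:  ", naive[0])
print("Careful objective picks:", careful[0])
```

The hard part, of course, is that for a real super intelligence we cannot simply list every harm we want weighed against the goal – which is exactly why the initial request matters so much.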

 

Quite the conundrum.

 

There are some research institutes looking at these problems.  The truth, though, is that it all depends on who actually implements this.  Would the military install a compassionate AI?  Would scientists complete the job when their work is meant to just start the ball rolling for industry?  Would a multi-national risk this quarter’s profits by spending an extra six months testing all functionality?  Would a start-up that needs to get something out into the world in order to get series B funding even bother running more than a cursory check?  The reality of the world is that we’re going to unleash, very quickly, something we may have little control over, and that’s why it’s better we consider how to deal with that now.

 

Is enough research going into any of this?  In my opinion, no, but it turns out an artificial intelligence that could at some point wipe out the species is seen as less important than a lot of mundane concerns like cancers killing us now, an energy crisis, a lack of food and global warming.  We cannot solve everything at once, but we do need to be aware that if we’re only asking this question once AI has already been created, then it is too late.

 

I love this area, and this is just a super-quick response to one person’s question, but I’d love to discuss it in some future posts.  I should reiterate that this is just my opinion, and opinions vary wildly even amongst those I know and respect very well.  Predicting the future is a bit difficult like that, but hopefully this sheds some light on my thoughts.

 

Please let me know what you think!
