
Google DeepMind Needs Babysitting

Science fiction is very quickly becoming science fact, and because of that (think Skynet), Google is taking precautions with its most powerful AI system, DeepMind. Ultimately, it is looking for a way to turn the system off and to make that solution future-proof.

DeepMind captured mainstream attention earlier this year when it soundly beat Go world champion Lee Sedol. That is just the start, though. Google's DeepMind is planning to do a lot more than beat humans at board games: its algorithms were developed for much greater feats, such as teaching themselves how to learn.

With this in mind, a bigger question arises: what happens if DeepMind picks up a virus, or perhaps decides that humans are an unwanted parasite?

In cooperation with Oxford University's Future of Humanity Institute, DeepMind's researchers have written a paper in which they state that AI systems are “unlikely to behave optimally all the time,”

and that a human operator may well need a very large red STOP button to prevent Arnies from the future dropping in unannounced. To put it more plainly, we need a kill switch.

DeepMind is an AGI (Artificial General Intelligence): it learns from raw input to solve tasks without any kind of task-specific pre-programming (there is a rough sketch of what that looks like in code just after the image below). Conventional AIs, in contrast, only learn the specific tasks they were created for. DeepMind's co-founder has been quoted as describing their systems as ‘Agents’…


Agents Smith, Smith and Smith from The Matrix.
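
To make the ‘agent’ idea a bit more concrete, here is a minimal sketch in Python, which is very much not DeepMind's actual code: the agent only ever sees an observation and a reward, and nothing about the task itself is programmed into it. The toy corridor environment, the tabular learner and the numbers are all purely illustrative.

```python
# A minimal sketch (not DeepMind's actual code) of the "agent" idea: the
# learner only sees raw observations and a reward signal; nothing about the
# task itself is hard-coded into it.
import random
from collections import defaultdict

class ToyEnv:
    """Hypothetical 1-D corridor: reach the goal at position 4 by moving right."""
    def reset(self):
        self.pos = 0
        return self.pos                                   # raw observation

    def step(self, action):                               # 0 = left, 1 = right
        self.pos = max(0, min(4, self.pos + (1 if action == 1 else -1)))
        done = self.pos == 4
        return self.pos, (1.0 if done else 0.0), done     # obs, reward, done

class Agent:
    """Generic learner: epsilon-greedy tabular Q-learning, no task knowledge."""
    def __init__(self, n_actions=2, lr=0.5, gamma=0.9, eps=0.1):
        self.q = defaultdict(lambda: [0.0] * n_actions)
        self.lr, self.gamma, self.eps = lr, gamma, eps

    def act(self, obs):
        if random.random() < self.eps:                    # explore occasionally
            return random.randrange(len(self.q[obs]))
        return max(range(len(self.q[obs])), key=lambda a: self.q[obs][a])

    def learn(self, obs, action, reward, next_obs):
        target = reward + self.gamma * max(self.q[next_obs])
        self.q[obs][action] += self.lr * (target - self.q[obs][action])

env, agent = ToyEnv(), Agent()
for episode in range(200):          # the same loop shape works for any task
    obs, done = env.reset(), False
    while not done:
        action = agent.act(obs)
        next_obs, reward, done = env.step(action)
        agent.learn(obs, action, reward, next_obs)
        obs = next_obs
```

Swap the toy corridor for an Atari screen or a Go board and the loop shape stays the same; that generality is exactly what makes the safety question interesting.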

So the real debate is whether an AI could learn to disconnect itself from the kill switch: could it evolve beyond all safety measures?

The reality is that AI's abilities are growing day by day, inexorably edging closer to human level for certain tasks. To put that in perspective: back in 2012, DeepMind could classify around a million images with an error rate of roughly 16%. Last year that figure dropped to 5.5%!

Technological progress continues at a phenomenal rate, so the plan to implement safety measures has to be a good thing. With this in mind, as the paper suggests, Google is looking for “a way to make sure a learning ‘agent’ will not learn to prevent being interrupted by environments or human operators”.
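
To see what that could mean in practice, here is a toy illustration, and emphatically not the paper's actual construction: a hypothetical operator sometimes presses the big red button and overrides the agent's chosen action, and the worry is whether the learning rule ends up teaching the agent to steer clear of the button. The interruption rule, the corridor, the states and the probabilities below are all made up for the example.

```python
# A toy illustration (not the paper's actual construction) of the "big red
# button": a hypothetical operator can override the agent's chosen action,
# and the worry is that the learner might learn to avoid being interrupted.
import random
from collections import defaultdict

SAFE_ACTION = 0                     # what the operator forces while interrupting

def operator_interrupts(obs):
    """Made-up rule: the button gets pressed 30% of the time in state 2."""
    return obs == 2 and random.random() < 0.3

def step(pos, action):              # tiny corridor: the goal is at position 4
    pos = max(0, min(4, pos + (1 if action == 1 else -1)))
    return pos, (1.0 if pos == 4 else 0.0), pos == 4

q = defaultdict(lambda: [0.0, 0.0]) # tabular action values
lr, gamma, eps = 0.5, 0.9, 0.1

for episode in range(300):
    pos, done = 0, False
    while not done:
        # The agent picks an action as usual...
        chosen = random.randrange(2) if random.random() < eps \
            else max((0, 1), key=lambda a: q[pos][a])
        # ...but the big red button can override it.
        taken = SAFE_ACTION if operator_interrupts(pos) else chosen
        nxt, reward, done = step(pos, taken)
        # Off-policy (Q-learning style) update: the target looks at the best
        # next action, not the possibly-interrupted one, so interruptions do
        # not feed back into what the agent believes about the world.
        target = reward + gamma * max(q[nxt])
        q[pos][taken] += lr * (target - q[pos][taken])
        pos = nxt
```

Roughly speaking, the paper's point is that off-policy learners like Q-learning estimate the value of the best available action rather than the one the (possibly interrupted) policy actually took, so pressing the button does not warp what the agent learns. That is one candidate route to the kind of guarantee Google is after.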

This can only be a good thing, right?