Now that Google are heading into uncharted territory with AI, changes are imminent. First we had keyboard and mouse. That evolved into keyboard and touch, and now vision and voice are becoming viable input methods.
This has all come about through deep learning. We could go into an involved explanation here of how deep learning works, what a neural net is and so on, but we won't. Click the link below for a good, rough explanation.
“Instead of writing programs that solve the problem, we write programs that learn to solve problems.”
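To make that quote concrete, here is a toy sketch in Python (purely illustrative, nothing to do with Google's actual stack). The first function is a program we wrote to solve the problem; the second is a program that is only shown example inputs and outputs, and learns the rule itself by simple gradient descent.

```python
# A hand-written program encodes the rule directly:
def double_plus_one(x):
    return 2 * x + 1

# A learning program is given only examples and must discover the rule.
# Here, plain stochastic gradient descent fits y = w*x + b to the data.
examples = [(x, 2 * x + 1) for x in range(10)]

w, b = 0.0, 0.0           # start knowing nothing about the rule
lr = 0.01                 # learning rate
for _ in range(5000):
    for x, y in examples:
        err = (w * x + b) - y
        w -= lr * err * x  # nudge parameters to shrink the error
        b -= lr * err

print(round(w, 2), round(b, 2))  # ends up close to 2 and 1
```

Nobody told the second program the rule was "double it and add one"; it recovered that from the examples alone. Scale the same idea up by many orders of magnitude and you get the models behind voice and image recognition.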
If you are one of the early adopters and already have a Google Home, it's thanks to deep learning that it can now recognise up to six members of your household. Pretty impressive considering how different people's voices are, especially in the U.K., where the number of dialects is almost uncountable.
With in-home smart cameras, this means you can literally ask Google what it thinks of your outfit. Google will be able to converse with you about the outfit and suggest other items of clothing, perhaps a change of colour or a specific accessory. Working on your car and wondering what a particular part is called so a replacement can be ordered? Just show Google the part and it should be able to at least find similar items, maybe even name it for you.
The above imaginings are not a couple of years away, either. They could literally be a few months from fruition.
Google Assistant SDK (Software Development Kit)
Google have recently released a set of developer tools that give manufacturers the option to install Google Assistant in almost any device. This means anything you could consider a smart item will be able to converse with you. Not only that, but the Assistant can now support transactions, and there are currently more than 70 smart home companies working with Google Assistant today.
Google Assistant is now available on iPhone as well. Watch yourself, Siri!