Perspectives about the Artificial Intelligence growth nowadays
When you think of Artificial Intelligence, are you old enough to remember HAL 9000 from 2001: A Space Odyssey? Or perhaps Arnold Schwarzenegger's and Robert Patrick's characters in the Terminator series? Even worse: Skynet, the computer network that created the Terminators? With the new Star Wars being released, perhaps C-3PO warrants inclusion in the fictional pantheon.
But did you think of the advanced IVR (Interactive Voice Response) systems that assist you when you are on the phone with your financial institution or telecommunications provider? What about advanced chess programs like IBM's Deep Blue, which you may or may not have matched wits against? Or Watson, the IBM computer that beat human champions on the game show Jeopardy!?
AI is here!
We imagine Artificial Intelligence through a Hollywood lens: an impressive, sentient technology, often ominous and threatening, that we feel we should fear or at least laugh at. But if you use Apple's Siri or Amazon's Echo, you are interacting deeply with artificial intelligence; you may not realize it because it seems so mundane, and certainly not threatening. We don't just talk through our phones anymore. Now, we often talk to our phones, telling them to do something and watching them react. The new version of the Dragon Dictate software was just released for computers. Earlier iterations let you dictate correspondence directly to your computer, turning your spoken words into text in your email or word processing document. The latest version lets you interact with your computer in natural language.
Neural networks have long since left the laboratory and are used in machine learning in ways that are often invisible to the end user. When you call your bank, it is very likely that they have anti-fraud software that has “learned” your voice over your last few calls. So even though you may get a different customer service agent every time you call, the system has learned your voice so that if someone else calls and claims to be you, the system knows that “something doesn’t sound right” and notifies the agent to take additional measures.
Amazon tracks your past purchases and determines, based on your history, that "if you purchased that, you may also like this." Similarly, Pandora and other music services analyze the songs you say you like and the ones you skip over. In doing so, they learn your musical tastes, heuristically predicting songs you may enjoy that you have never heard before.
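The "if you purchased that, you may also like this" idea can be sketched in a few lines. This is only a toy illustration of co-purchase counting, with invented shopping baskets; it is not Amazon's actual algorithm, which is far more sophisticated.

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories, invented for illustration.
baskets = [
    {"camera", "tripod", "sd_card"},
    {"camera", "sd_card"},
    {"tripod", "backpack"},
    {"camera", "tripod"},
]

# Count how often each pair of items was bought together.
pair_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        pair_counts[(a, b)] += 1

def also_bought(item):
    """Items most often co-purchased with `item`, best match first."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if a == item:
            scores[b] += n
        elif b == item:
            scores[a] += n
    return [other for other, _ in scores.most_common()]

print(also_bought("camera"))  # sd_card and tripod rank highest
```

Real recommenders weight these counts by popularity and recency, but the core intuition is the same: items that co-occur in many histories are likely related.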
For another example of how deep learning works, look no further than the palm of your hand. Do you have the Shazam app on your smartphone? Shazam "listens" to a song you may be hearing on the car radio, in a nightclub, or at the beach. Even with background noise, low quality, or people talking over it, the app can tell you the name of the song and the artist with amazing accuracy. This is a wickedly difficult task for a computer, even though our brains filter out background noise automatically. Shazam is fed thousands, perhaps millions, of songs of all types on the back end, and it has to match them, practically in real time, against imperfect audio fed through a smartphone microphone.
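The matching step can be sketched at a very high level. Shazam's real system hashes pairs of spectrogram peaks; in this drastically simplified stand-in, each song's "fingerprint" is just a set of hash values, and a noisy clip is identified by which catalog fingerprint it overlaps most. The songs and numbers below are invented.

```python
# Toy catalog: song -> set of fingerprint hashes (invented values).
database = {
    "Song A - Artist 1": {101, 102, 103, 104, 105},
    "Song B - Artist 2": {201, 202, 203, 204, 205},
}

def identify(sample_hashes):
    """Return the catalog song whose fingerprint overlaps the sample most."""
    best, best_score = None, 0
    for title, prints in database.items():
        score = len(prints & sample_hashes)
        if score > best_score:
            best, best_score = title, score
    return best

# A noisy clip: some hashes survive, others are garbage from background noise.
clip = {102, 104, 105, 999, 888}
print(identify(clip))  # -> "Song A - Artist 1", despite the noise
```

The key property is that a match only needs *enough* surviving hashes, not all of them, which is why background chatter and low quality don't defeat it.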
All of these are everyday applications of Artificial Intelligence. AI is already present in our lives, and it will become more and more so. It is not ominous, it is not scary, and it is often so well implemented that it goes unnoticed.
So what is Artificial Intelligence's destiny? One clue lies in what Google is doing. Google has just decided to share its "TensorFlow" deep learning technology freely. It is already available to the world as open source, allowing anyone technically competent enough to employ it in their own innovations, programming in either C++ or Python. TensorFlow can mine through complex big data and learn heuristically.
A technology like TensorFlow can allow a computer to "look at" thousands or millions of photos and learn what a cat looks like, learn what a car looks like, and learn the difference between, say, a motorcycle and a bicycle, so that when another picture is presented, the computer can determine its contents, even though it has never "seen" that particular image before.
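The idea of learning labels from examples can be sketched without any deep learning library at all. Below, a "nearest centroid" classifier stands in for a real TensorFlow model: average the training examples for each label, then label a new image by whichever average it sits closest to. The four-pixel "images" are invented for illustration; real models learn millions of parameters from full-resolution photos.

```python
# Toy "images": four pixel intensities each, invented for illustration.
training = {
    "cat":     [[0.9, 0.8, 0.1, 0.2], [0.8, 0.9, 0.2, 0.1]],
    "bicycle": [[0.1, 0.2, 0.9, 0.8], [0.2, 0.1, 0.8, 0.9]],
}

def centroid(images):
    """Average the training images pixel by pixel."""
    n = len(images)
    return [sum(pixel) / n for pixel in zip(*images)]

def distance(a, b):
    """Squared Euclidean distance between two images."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

centroids = {label: centroid(imgs) for label, imgs in training.items()}

def classify(image):
    """Label a new image by its nearest centroid."""
    return min(centroids, key=lambda label: distance(centroids[label], image))

# An image the classifier has never "seen" before:
print(classify([0.85, 0.75, 0.15, 0.25]))  # -> cat
```

A deep network replaces the hand-averaged centroids with learned feature detectors, but the end-to-end shape is the same: examples in, labels out, generalization to unseen inputs.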
Take a moment to think about the possibilities. “Trained” in radiology, a computer can scan x-rays for cancer. It can even analyze large quantities of blood samples to find previously unnoticed markers that can indicate disease or other health concerns.
This is already happening in applications like MetaMind, whose uses range from the aforementioned medical image processing to food recognition. Companies and developers can build on MetaMind's artificial intelligence and deep learning capabilities to create heretofore impossible solutions to complex problems that in the past may have required human brainpower, applied manually and one case at a time.
For an everyday application that you may already be using, look no further than Pinterest. Its computers use deep learning artificial intelligence to figure out what is in a photograph and what it is about, so that they can serve up similar photographs. This is how Pinterest calculates "Related Pins."
All of these artificial intelligence applications are a lot friendlier, and technically far more sophisticated, than a murderous talking autopilot aboard a spaceship. Artificial Intelligence is already here. However, it's not trying to conquer and dominate the world.
At least, not in the short term.