How Does A.I. Compare to Human Intelligence?
For the sci-fi fans among us, we have this famous Terminator quote (and gesture) down pat. Some of us grew up using it, and the geekier ones might argue that “I need your clothes, your boots and your motorcycle” is a more memorable line from the franchise, but this is the one that really sticks with us, perhaps because it plays into our nightmares about the day the machines take over (April 19th, 2011, amirite?).
But before we really start worrying that the giants Apple and Google are our Cyberdynes, and that things like deep learning will ultimately lead to a neural net-based conscious group mind, an artificial general intelligence that takes over our internet akin to Skynet, let us ask a different question: what if we could sidestep the machines-versus-humans showdown by making our machines behave like humans? Is that even a realistic possibility? And, given how flawed humanity is at times, would we really want our machines to behave like us?
Let us explain our reasoning before we begin to incite panic…
Deep learning
Deep learning is, essentially, machines learning for themselves and by themselves, through a series of intricate and complex mathematical calculations that roughly simulate what happens in our brains when our interconnected layers of neurons fire off messages to one another. In other words, deep learning mimics the way that humans think, without the ‘benefit’ of our alleged common sense.
Deep learning, then, is an artificial neural network (...say it with us… Skynet…) that developers use to anticipate our wants and intentions, in an effort to make our progress more… progressable. Perhaps one day deep learning will be applied during surgery, working alongside surgeons to help with snap decisions that could mean the difference between life and death. For now, though, if you want to witness deep learning in action, look no further than Facebook. Facebook has been using deep learning techniques to customise our feeds for a little while now, which poses two questions: one, why does Facebook’s deep learning keep suggesting we watch videos of baby sloths, and two, is Mark Zuckerberg our very own Miles Bennett Dyson?
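If you’re curious what those interconnected layers actually look like, here is a minimal sketch in Python (using NumPy). It is purely illustrative, nothing like what Facebook or Google actually run: a tiny two-layer network that learns the XOR puzzle by nudging its connection weights every time it gets the answer wrong.

```python
import numpy as np

# A toy two-layer neural network that learns XOR by trial and error.
# Purely illustrative: real deep learning systems have millions of
# 'neurons' and far fancier training tricks than this.

rng = np.random.default_rng(42)

# The four possible inputs and the answers we want the network to learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of neurons: the weights are the strengths of the connections.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0
for step in range(5000):
    # Forward pass: signals flow through the layers, like neurons firing.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: measure the error and nudge every weight to shrink it.
    grad_output = (output - y) * output * (1 - output)
    grad_hidden = (grad_output @ W2.T) * hidden * (1 - hidden)
    W2 -= learning_rate * hidden.T @ grad_output
    b2 -= learning_rate * grad_output.sum(axis=0, keepdims=True)
    W1 -= learning_rate * X.T @ grad_hidden
    b1 -= learning_rate * grad_hidden.sum(axis=0, keepdims=True)

print(np.round(output, 2))  # typically ends up close to [[0], [1], [1], [0]]
```

The important point is that nobody tells the network the rule; it discovers it by adjusting itself, which is exactly why the data we feed these systems matters so much, as we will see below.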
Language processing
Linguists may be pleased or thoroughly incensed by the idea of deep learning processes assisting with the natural order of things in language learning, but there is in fact a field at the intersection of computer science, artificial intelligence and computational linguistics that focuses entirely on the interactions between computers and human language: Natural Language Processing, or NLP.
Google has recently added NLP to its Drive search, which essentially means that searching for your files in Google Drive will become more like using the normal Google search engine. Don’t see the difference? Gone are the days when, to find something, you had to string together a series of keywords and hope for the best. Now you can just type what you are thinking: show me documents shared with >>>, for example, will bring up exactly what you are after. Admittedly this isn’t exactly groundbreaking: many of us already ‘speak’ to Siri to help us through our daily lives, with our devices adapting to our languages rather than all of us learning SQL or JavaScript in the hope of being able to converse with them.
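To get a feel for the difference, here is a toy sketch in Python. Everything in it is invented for illustration, the file names, the person “alice” and the laughably simple pattern matching; real NLP systems rely on statistical models trained on huge amounts of text rather than hand-written rules.

```python
import re

# Toy illustration of keyword search versus a (very naive) natural-language
# query. Hypothetical data and rules, invented purely for this example.
files = [
    {"name": "budget.xlsx", "shared_with": ["alice"]},
    {"name": "holiday photos", "shared_with": ["bob"]},
    {"name": "meeting notes", "shared_with": ["alice", "bob"]},
]

def keyword_search(query):
    # Old-style search: every query word must appear in the file name itself.
    words = query.lower().split()
    return [f["name"] for f in files if all(w in f["name"] for w in words)]

def natural_language_search(query):
    # 'NLP'-style search: pull the *intent* out of the phrasing instead of
    # matching the words literally.
    match = re.search(r"shared with (\w+)", query.lower())
    if match:
        person = match.group(1)
        return [f["name"] for f in files if person in f["shared_with"]]
    return keyword_search(query)

print(keyword_search("shared with alice"))
# [] - no file name actually contains the words "shared with alice"
print(natural_language_search("show me documents shared with alice"))
# ['budget.xlsx', 'meeting notes'] - the query was understood, not matched
```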
There is, of course, a catch, because the internet tends to be a bit of a language black hole. English is well represented, as are many other languages that use the Latin alphabet. But NLP becomes trickier when there are fewer sources for AI to work with: the comparative scarcity of Arabic content online, for example, means there is less opportunity for our favourite technologies to recognise the language and effectively learn it, leading to a great language, and therefore NLP, disparity.
Teaching our machines
Which leads us to the point we were considering earlier. When we teach computers (and by teach, we mean provide them with masses of information and let them do their processing thing), the data we choose to give them can, theoretically, influence those machines and pass on our biases and prejudices.
Think about the way we learn as children. We learn to smile when we see a cute furry animal but draw back in fear when we see a gnarly-looking insect. We learn our positives and negatives from those around us, and in effect, when computers are learning human languages they become those children: learning what is deemed good and bad by what we teach them.
Don’t believe us? A recent study out of Princeton University demonstrated that machine-learning algorithms picked up strong gender biases around professions after extracting around a trillion words from English-language texts on the internet. No biases were programmed in; the algorithms were simply shown a sample of the English in use out there on the web, and this is what they came back with: management means man, family means woman.
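The mechanics behind a finding like that are surprisingly easy to sketch. These algorithms turn each word into a list of numbers (a “word embedding”), and bias shows up as some words simply sitting closer to “man” than to “woman”. Here is a toy illustration in Python; the tiny hand-made vectors below are invented for the example, whereas the real study used embeddings learned from billions of words of web text.

```python
import numpy as np

# Toy word embeddings: each word becomes a point in space. These numbers
# are made up for illustration; real embeddings are learned from huge
# text corpora and have hundreds of dimensions.
vectors = {
    "man":        np.array([0.9, 0.1, 0.3]),
    "woman":      np.array([0.1, 0.9, 0.3]),
    "management": np.array([0.8, 0.2, 0.5]),
    "family":     np.array([0.2, 0.8, 0.5]),
}

def cosine_similarity(a, b):
    # 1.0 means the two words point in the same direction; 0 means unrelated.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

for word in ("management", "family"):
    to_man = cosine_similarity(vectors[word], vectors["man"])
    to_woman = cosine_similarity(vectors[word], vectors["woman"])
    leaning = "man" if to_man > to_woman else "woman"
    print(f"{word!r} sits closer to {leaning!r} "
          f"(similarity to man {to_man:.2f}, to woman {to_woman:.2f})")
```

Nothing in that code says anything about gender roles; the skew is entirely in the numbers the words arrive with, which is exactly how learned embeddings end up reflecting the text they were trained on.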
With that in mind, where would the line be drawn? If we can teach fear and intolerance to each other with carefully worded phrases and both subliminal and in-your-face propaganda, does it not follow that we can do the same disservice to our machines? Are we heading towards a dystopian future that is part Skynet, part 1984, with controlled language and control over anyone we just don’t like or think is somehow beneath us?
A sobering thought, perhaps.
Next up in our look at AI and its impact on language, or perhaps language’s impact on AI, we will introduce you to Eugene, talk about the outcome and significance of the Turing Test, and see if our future really is destined to be crushed beneath the foot of a cybernetic organism. Until then…