Last week I wrote about the looming threat of Artificial Intelligence, citing a story about scientists who announced they had successfully trained a robot by showing it videos from the web. This week we learn that another rapidly advancing area of technology will soon give our computers the ability to read and understand our emotions.
According to an article in the New Yorker, “a small number of researchers have been working to give computers the capacity to read our feelings and react, in ways that have come to seem startlingly human. Experts on the voice have trained computers to identify deep patterns in vocal pitch, rhythm, and intensity; their software can scan a conversation between a woman and a child and determine if the woman is a mother, whether she is looking the child in the eye, whether she is angry or frustrated or joyful. Other machines can measure sentiment by assessing the arrangement of our words, or by reading our gestures. Still others can do so from facial expressions.” Some machines can scan something as innocuous as your Facebook posts to learn what you feel, how you think, and what your social and political preferences are.
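If you are wondering what “measuring sentiment by assessing the arrangement of our words” might look like in practice, here is a deliberately tiny, made-up sketch of the idea: a hand-built word list and a scoring function of my own invention, nothing like the actual research systems, just enough to show how software can guess a mood from text alone.

```python
# A toy, purely hypothetical illustration of scoring "sentiment" from word
# choice alone: a tiny hand-made lexicon and an averaging function.
# Real affect-recognition software is far more sophisticated than this.

SENTIMENT_LEXICON = {
    "love": 2.0, "joyful": 2.0, "fine": 1.0,
    "frustrated": -2.0, "angry": -2.0, "buggy": -1.5, "broken": -2.0,
}

def sentiment_score(text: str) -> float:
    """Average the lexicon weights of the recognized words in the text."""
    words = [w.strip(".,!?\"'").lower() for w in text.split()]
    hits = [SENTIMENT_LEXICON[w] for w in words if w in SENTIMENT_LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

print(sentiment_score("I love this joyful little machine"))    # positive
print(sentiment_score("Safari is buggy and I am frustrated"))  # negative
```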
There is little doubt that computers and robots of the very near future will be able to read our emotions and know what we are thinking.
That is a good thing for the computer. Having that cognitive skill could be a lifesaver, and the life it saves may be its own. My computer of the near future will learn that I am a “type A” personality who does not handle technical glitches with a light touch or a sensitive aura. It will know when I am about to take a baseball bat to it and reduce it to a pile of twisted metal and broken circuit boards. It may even understand why its monitor is being ripped from the CPU and thrown down a flight of stairs. Computers will finally understand the frustration we so often feel when dealing with them.
Nowhere will this be more self-evident than with the ubiquitous “software update.” We all know what the software update is: we take reasonably functioning, reliable software and upgrade it to the point where nothing works anymore. We take perfectly fine systems and turn them into quivering piles of coded pooh on the premise that this is somehow “better” than what we had before. It is a concept most of us can identify with, particularly anyone who has just upgraded their iPad to iOS 8 and is trying to use the buggy Safari browser that came with it. My iPad of the near future will know why it is being stomped to death in the middle of my living room.
And thanks to its forward-facing camera, I will be able to look it in the eye when I do it.
In the workplace, computers that can read and understand the emotions of their users will be able to detect frustration or fatigue and respond appropriately. A computer may advise us to “walk away” for a short while when it is about to experience the blue screen of death. It could suggest that you “go talk to Janet in Accounting for a few minutes while I work this out.” Of course, the fact that Janet was eliminated when the Accounting Department was fully automated might minimize the usefulness of that advice. It could also advise you to get some coffee or take a stretch if it detects that you are tired and your attention is not “fully focused.” In fact, in a very short time we could find ourselves working for computers that know and understand us that well.
Ultimately, that is the looming problem with robotics, automation and Artificial Intelligence. Scientists will be able to create machines that can learn, think independently, and understand the environment around them.
Nowhere, however, have I read about our ability to make them actually care. And you thought the boss you have today is a sociopath…