In the Battle of Employee vs. AI, Everyone Can Be a Winner
For many years now, there has been a degree of fear surrounding artificial intelligence (AI) in the workplace, even as it has been put to practical use in more and more businesses. This fear ties back to the work of Alan Turing, who (among his accomplishments in computing) created what we know as the Turing Test as a means of gauging how intelligent a computer is.
Today, many computer scientists wonder whether that was the wrong question to ask, and whether reframing the relationship between AI and human workers away from competition and toward collaboration is the better path to take.
Has Pursuing Turing’s Standard Created Economic Inequities?
Erik Brynjolfsson, director of Stanford’s Digital Economy Lab, certainly seems to think so. He has argued in Dædalus, a journal produced by the American Academy of Arts and Sciences, that advances in AI have created some serious problems. In his contribution to the Spring 2022 issue, “The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence,” Brynjolfsson states that the goal of AI research swiftly became a mission to overtake the capabilities of the human mind.
That, he says, was the big mistake.
Brynjolfsson’s paper posits that this obsession with creating a human-like machine has done little but exacerbate wage inequality.
According to Brynjolfsson, the AI developed so far has largely served to remove the need for human employees. While this has had positive impacts on productivity, the benefits of that productivity tend to flow up to business owners and leaders. In fact, Brynjolfsson points to this divide as a cause of wage stagnation among workers while millionaires and billionaires only get richer. It’s a phenomenon he has dubbed “the Turing Trap,” as indicated in the title of his article.
Brynjolfsson compares the creation of AI to the desire for apotheosis that humans have repeatedly demonstrated in stories throughout history: creating life in their own image, like the golem of Jewish folklore, the automatons the ancient Greeks described Daedalus building, and the real-life efforts of inventors from the early Islamic world through the European Renaissance. Modern popular culture has continued this pattern, depicting artificial intelligence as human-like and often featuring AI that seeks out a greater level of humanity.
Brynjolfsson feels that this is the wrong approach.
According to Brynjolfsson, AI Would Be Better Used as “Augmentation”
Let’s explain what he means.
Human employees can do certain things very well. AI can do certain things very well. And, crucially, the things your human employees do well aren’t always the same things an AI does well. Therefore, Brynjolfsson posits, it only makes sense for AI to be used to supplement the capabilities of human employees. Not only would this help promote improved productivity, but the benefits of doing so would remain with the workers who are “partnered” with AI, as their work becomes more valuable as well.
Unfortunately, substitution or replacement is considerably more attainable than augmentation, simply because in many cases there is no precedent for augmentation. It is also worth noting that other research has shown there are particular tasks that people do and don’t want automated: “cleaning toilets” was a popular candidate, while “opening gifts” decidedly was not. In professional settings, however, the case for augmentation is much clearer. Many people have presumed that AI-powered automation will be used as an excuse to replace human workers, but others have argued that AI can realistically cover only a small part of the responsibilities that make up most jobs.
So, Artificial Intelligence Should Be Seen as a Tool, Not Competition
JensenIT can help you implement the kind of automation and machine learning that can benefit your own processes as well. Give us a call at (847) 803-0044 to learn more.