Don’t give up on human intelligence while adopting the artificial one

Álvaro Alegría    29 March, 2019

Artificial intelligence is here to stay, that’s for sure. Just look at the salmon-colored (and not so salmon) pages of any newspaper, or take a quick look at your LinkedIn feed, to see that “AI” is the latest craze in the business world.

And the truth is that there are plenty of reasons for this. The capabilities offered by this new technology make it possible not only to overcome barriers that were previously insurmountable for traditional systems, but also to unlock fields that were previously reserved for science fiction films.

However, this craze for artificial intelligence has revealed a fact that is, to say the least, surprising: when it comes to making decisions, companies seem to rely more on machines than on humans.

No one denies that technology has vastly greater computing power than the most intelligent human on earth, but it is also true that not all business decisions require that level of intelligence. And yet, even in those cases, the trend seems to hold: the “machine” decides better.

Taking a categorical position on this question, with a simple true or false, is simply impossible. There are so many conditioning factors and nuances in the context of each decision that the most appropriate answer is probably “it depends”.

However, the purpose of this article is not to answer that question, but to open a debate about the reasons that apparently lead us to trust a machine rather than a human. Because when the decision is made to trust a machine, that machine has been designed and programmed by a human; and the ultimate decision to trust it is also made by a human.

From a strictly personal point of view, attributing greater reliability to machines than to people comes down to several factors, not all of them positive.

The first, as we have already mentioned, is greater computing capacity. Technology can store more information, and more varied information, and cross-reference and contrast it at levels that human beings are incapable of.

The second, although it may be obvious, is the absence of humanity. Human beings have feelings, prejudices, biases and experiences that, even if we don’t want them to, influence our reasoning and therefore condition our decisions. (I don’t judge whether this is negative or positive; I just want to make it clear that, sometimes, this is precisely what people seek to avoid by designating a machine as the decision maker.)

The third, and possibly the most controversial, is a certain dose of cowardice. If decisions are made by a machine, which we have also agreed is less fallible than a human, then we humans are released from responsibility for them. Thus, if the result is not as expected, the fault will never be ours, but the machine’s. At most, it may be the fault of the person who designed and programmed the machine, but never that of the human being who should have made the decision in its absence. In my opinion, this is a dangerous aspect, because the absence of responsibility can lead to irresponsibility.

Be that as it may, the reality, driven by the speed of technological advances, is that many large companies have gone almost directly from making 100% human decisions, based solely on experience and intuition, to ceding that decision-making capacity to artificial intelligence. And in this process, an intermediate step has been overlooked: human decisions based on data.

It is very likely that machines will make more precise, more dispassionate and more coherent decisions than humans but, unfortunately, it seems that we have denied ourselves the opportunity to show how good we could have been compared with machines, had we been able to trust ourselves.

In any case, there is something that should never be lost sight of: no matter how superior a machine’s intelligence is to a human’s, it will always be artificial.
