Originally I had intended to write about what to look out for in products purchased for Christmas that are based, in one form or another, on artificial intelligence (AI). And also about whether the time had finally arrived, as heralded by many works of science fiction, when AI takes over from humanity. For many, the first thing that comes to mind here is a humanoid, weapon-wielding Terminator, followed by the reassurance that this is still a long way down the road. In the meantime, however, artificial intelligence has perhaps already taken control of a significant slice of our lives: not in human form, without a weapon in sight, but stealthily, in the form of massively complex and virtually uncheckable algorithms. And you didn’t even need to go out and buy it at Christmas!
I believe that the above assessment is no exaggeration. Anyone using Google, Facebook, YouTube, Twitter and similar apps is constantly being influenced by artificial intelligence algorithms, whether consciously or not. Our first thought might be that Google, however fantastic it might be, is still ‘merely’ an effective global search engine; Facebook and Twitter are social platforms; YouTube is a video-sharing site. These platforms are, however, business applications whose primary function is to generate profit, even if along the way they have to provide some kind of service we consider useful. Under the business model they employ, the most important objective for their owners is to get users onto these apps ever more frequently and for as much time as possible. This can be promoted, on the one hand, by integrating all sorts of handy functionality. But there is an even more effective way to grab our attention and keep us using, searching, browsing and watching: AI!
In order to learn, AI solutions have to be given a clear goal and have to be fed data. A lot of data! Based on these, the AI begins to configure itself in such a way that its activities promote attainment of the objective. Unlike conventional computer programs, which are designed up front and then translated by hand into code, the fine-tuned algorithm produced by AI learning is on the whole a ‘black box’. Quite often the designated objective is brilliantly achieved, yet it is far from obvious how, or through which internal decision-making processes, this came about. As a consequence, it is not uncommon to encounter ‘side effects’ that nobody had reckoned with, and which cannot simply be switched off without retraining the entire algorithm. In the case of the abovementioned platforms, too, it is obvious that the AI algorithms operate with spectacular efficiency. Or at least in terms of the pre-determined objective: to get users to spend as much time as possible searching, chatting, sharing and watching videos. But there are side effects here, too!
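To make the mechanism concrete, here is a deliberately toy sketch of an ‘engagement maximiser’. Everything in it – the topic names, the click model, the parameters – is invented for illustration; real platform recommenders are vastly more complex. The point is only that an algorithm told to maximise one number (clicks) will, on its own, narrow down to whatever content the user already responds to, with no notion of any side effect:

```python
import random

random.seed(0)  # make the toy run repeatable

TOPICS = ["politics_a", "politics_b", "sports", "science"]

# Hypothetical user: clicks their favourite topic 90% of the time, others 10%.
def user_clicks(topic, favourite="politics_a"):
    p = 0.9 if topic == favourite else 0.1
    return random.random() < p

# The recommender's only objective: maximise the click rate.
# It keeps per-topic counts and (mostly) recommends the best-performing topic.
clicks = {t: 1 for t in TOPICS}  # optimistic start so every topic gets tried
shown = {t: 1 for t in TOPICS}

history = []
for step in range(500):
    if random.random() < 0.05:
        topic = random.choice(TOPICS)  # occasional exploration
    else:
        topic = max(TOPICS, key=lambda t: clicks[t] / shown[t])  # exploit
    shown[topic] += 1
    if user_clicks(topic):
        clicks[topic] += 1
    history.append(topic)

# By the end, recommendations have collapsed onto the favourite topic.
last_100 = history[-100:]
share = last_100.count("politics_a") / len(last_100)
print(f"share of 'politics_a' in the last 100 recommendations: {share:.2f}")
```

The objective – clicks – is brilliantly achieved; the narrowing of what the user sees is nowhere in the code, yet it emerges all the same. That is the ‘side effect’ in miniature.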
Everyone thought that the Internet would open up the world for us. Ten years ago, however, a TED talk by Eli Pariser caused a stir: he was one of the first to demonstrate just what tailored, closed worlds – information bubbles – these algorithms create for us. All of this came about because the AI discovered that the most expedient way to achieve its designated objective is to always supply everyone with what they most want to hear. Diversity of opinion, opposing views? No thanks! Perhaps not even Pariser could have foreseen that all this would result not only in isolation but in polarization as well. Today the outcome is evident in public discourse and politics. AI has not only shut people off in their own bubbles; it has steadily reinforced the walls of those bubbles and made them impermeable.
Those who discern behind all this a global conspiracy or the intrigues of some shadowy power are probably looking in the wrong direction. However much we consider Mark Zuckerberg and his colleagues to be not saviours of the world but coldly calculating business people, it is most probable that even they did not foresee what social problems their algorithms would cause. This is not about people with bad intentions, nor alien lizards, nor artificial intelligence algorithms that want to do humanity down. Rather, it is about the fact that this partly self-learning kind of AI, capable of otherwise fantastic results, interprets its objectives very narrowly.
In 2003, the Swedish philosopher Nick Bostrom posited in a thought experiment that an artificial intelligence designed to manufacture paperclips would, given the appropriate abilities, transform the entire planet into a massive paperclip manufacturing facility if it were not cognisant of a whole mass of other constraints, principles and values. The AI solutions mentioned above are similarly unable to take into account the unintended damage they cause while refining their algorithms to achieve the objective.
Many people are already thinking about, and working on, how to make such flaws in current AI solutions preventable or correctable, and how some kind of ethical considerations might be integrated into the design and operation of AIs. Until then, all we can do is stay alert to the manipulative behaviour of these algorithms, and try to resist it, while using platforms that otherwise provide great value. Do not take this lightly! To a certain extent, artificial intelligence knows our motivations and instincts better than we do ourselves.
Head of CCS Division, INNObyte