
The line that separates the user, the consumer, the customer from the algorithm is getting thinner and thinner. Let us first go to the source. If the world is a machine, life is an algorithm. Algorithms have existed practically since the origin of our civilisation, more than 2,000 years ago. The word itself derives from the Latinised name of the Persian mathematician Al-Khwarizmi (rendered “Algoritmi”), its spelling later influenced by the Greek arithmos, meaning “number”. The best-known algorithm of antiquity comes from the Greek Euclid. More than twenty centuries after Euclid, the mathematical formulas developed by (though not only by) the world’s largest technology companies drive the main digital services that are now an essential part of human existence. To the point that what is called an algorithm is often used to confuse people.
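Euclid’s procedure for finding the greatest common divisor of two numbers is still taught today, and it translates almost directly into a modern programming language. A minimal sketch in Python:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm (c. 300 BC): repeatedly replace the pair
    (a, b) with (b, a mod b) until the remainder is zero; the last
    non-zero value is the greatest common divisor."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # 6
```

A procedure written more than two thousand years ago runs unchanged on today’s machines, which is the essay’s point in miniature.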
We live in the desert of post-modernity, in which the mirages of social networks and apps do not wash your brain but programme it. We are algorithms; we are the grains of sand that make up the post-modern desert. The economy of this era needs us all to consume in order to function; at the same time it needs us all to generate data. It sounds absurd, but it is happening. The post-modern consumer is portrayed by the data he or she generates. The interesting thing about this new scenario is that data is taken at face value, without considering that people lie, and can deceive themselves, generating information that is not necessarily true. We are what we pretend to be.
In this new era, concepts such as person and data are becoming synonymous. We know that people are not data but emotion and circumstance, yet it is not that simple. China’s new personal scoring system makes Black Mirror look like an episode of Bambi. The Social Credit System (SCS) that China began testing in a dozen cities, and expects to reach its 1.386 billion citizens by 2020, sets a score for each person. Depending on whether that score is high or low, and how it fluctuates, it determines intimate aspects of private life, such as access to discounts on public services or the right to enrol a child in a good school, in the form of a reward-and-punishment mechanism.
The combined use of algorithms and machines is what is changing the post-modern desert. But machines do not yet understand concepts such as lying and deception, and until they can filter truth from lies, data will inevitably be tainted, and less useful than expected.
Modern mechanisms of manipulation are increasingly sophisticated. And so complex that they are camouflaged behind the hullabaloo of “likes” that boost your excitement and happiness. Social media has been shown to trigger changes in neurotransmitters and hormones such as serotonin, oxytocin, testosterone, adrenaline and dopamine. Dopamine is released when a “like” is received; this activates the reward centres and increases the feeling of happiness.
When the Pew Research Center set out to examine the ad-preferences data Facebook uses to let advertisers target people likely to click, it found that 74 percent of American users did not even know such a list existed until the survey showed it to them. Nearly 9 in 10 (88 percent) found that Facebook had generated material about them on the ad-preferences page, and 6 in 10 had 10 or more interests listed for them.
Once they had a chance to see this list, a small majority, 51 percent, were not comfortable with Facebook collecting this information about them, according to the report. “We always find that there is a paradox at the heart of widespread privacy research,” said Lee Rainie, director of technology and Internet research at Pew. Nearly every user in the world says privacy is important, but they behave in a way that indicates otherwise.
The reality that emerges is that everyone, absolutely everyone, has their own red line when it comes to privacy. There is a kind of unspoken agreement, a constant and fluid trade-off between privacy (the data the user offers and generates) and the benefit received in return.
While we debate our relationship with new technologies only in very specific circles, everything seems hyper-normalised. There is hardly any questioning of a system that takes for granted the indiscriminate surrender of information, privacy and intimacy. Anything to use the networks for free, to enjoy the comfort of the online world, whatever the cost.
Technology has climbed onto our backs and become the driving force of our society. The problem is that we allow it, we accept it, we assume it is “normal”. As long as technology dominates us, we will live in circumstances of man’s own making, and until man once again takes control of his own present and his own destiny, the future will continue to be a desert of numbers. We created machines to help us think, and now machines are thinking for us, as Ken Liu puts it. In the midst of the exciting digital age, a reality has emerged that escapes our imagination.
In her book “Weapons of Math Destruction”, mathematician and data scientist Cathy O’Neil shows how decisions delegated to algorithms lead to machines granting or denying loans, evaluating employees, monitoring citizens’ health and even influencing potential voters in an election. Should we place all that responsibility on a machine? An algorithm also decides, based on a series of parameters and user activity logs, the recommended content we are supposed to find most interesting on services such as Spotify, Amazon Prime or Netflix. So instead of expanding our opportunities to learn, algorithms will increasingly narrow in on our tastes and affinities, limiting our openness.
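The narrowing effect described above can be illustrated with a toy sketch. The catalogue, tags and scoring rule below are invented for illustration and are not any real platform’s algorithm; the point is only the feedback loop: items are ranked by overlap with what the user has already consumed, so the more you watch of one kind, the more of that kind you are shown.

```python
# Toy taste-based filter (invented example, not a real platform's algorithm).
# Each title carries a set of tags; titles are ranked by how many tags
# overlap with the user's viewing history, ties broken alphabetically.

catalogue = {
    "space documentary": {"science", "space"},
    "true-crime series": {"crime", "drama"},
    "sci-fi thriller": {"space", "drama"},
    "baking show": {"food", "competition"},
}

def recommend(history_tags: set[str], top_n: int = 2) -> list[str]:
    # Higher overlap with past activity ranks first, reinforcing existing tastes.
    ranked = sorted(
        catalogue,
        key=lambda title: (-len(catalogue[title] & history_tags), title),
    )
    return ranked[:top_n]

# A user who has only watched space content keeps getting space content.
print(recommend({"space", "science"}))  # ['space documentary', 'sci-fi thriller']
```

The baking show never surfaces for this user, however good it might be: that is the loop the essay calls a limit on our openness.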
If we live in the “truth a la carte” mode, most platforms do not aim to change the user’s mind and broaden their horizons, but to make them feel comfortable among those who think the same way. Although users believe that they choose what they consume, it is the companies that decide what options we have on the menu.
A large part of this post-modern society lives in a state of almost constant alienation, as if vaccinated against the virus of free will, critical thinking and doubt. We wish for a future that will never be, long for a past that never was, and ignore the present that is. We live programmed for maximum production, maximum consumption, maximum efficiency and minimum awareness of all of the above.
The world is a complex web of interests and man creates systems so intricate that the very thought of changing them discourages the most daring.
Undoubtedly, the time has come to question everything. The few brave voices coming from inside the technological titans denounce what they themselves created: a space of manipulation in which users are hooked as if on a powerful drug. Disengaging is practically impossible.
Questioning and, if necessary, reinterpreting and rethinking what kind of society we aspire to, and establishing a new relationship with technology, is indispensable. Today’s man is more attracted to the mechanical, to what happens inside a device, than to life itself, to what lies outside it. Will we be able to escape the self-deception in which we are trapped? Will we be bold enough to rebel against this tyranny of data?
It is no longer enough to share, we must unite and make our neurons sizzle once again and thus come out of this state of collective anaesthesia.
Social networks, media, influencers, content platforms, apps: all form part of a huge and effective process of mass distraction. The question is: distract us from what?
On social media, the #10YearChallenge #2009vs2019 tsunami has gone so viral that it is rare not to see a celebrity, family member, friend or acquaintance sharing a picture of what they looked like then and what they look like now. Present on Instagram, Facebook and Twitter, the consequence almost no one foresaw is that this “campaign” may end up being used to train artificial intelligence.
This is Kate O’Neill’s theory in Wired, where she explains what the unintended consequences of such a meme could be.
The idea O’Neill elaborates starts from this question: what could a company like Facebook, which is particularly interested in building algorithms for recognising people, do to create a system that accounts for the changes that come with age? That is, an algorithm that, given a photo of a person, could predict what they will look like in ten years’ time.
Society lives in an increasingly relative stretch of time in which old concepts blur: old age is getting younger, adolescence arrives earlier, ages contract and expand and intermingle, and age itself is relativised. We are adopted children of speed, made of instantaneity, and the “right now” in our DNA sets the rhythm of a future that becomes a common noun.
Devices will soon be implanted in our own skin or even in our brains.
The printing press boosted reading literacy; cinema, television and the Internet have educated our sight. The creation of a landscape of images, real and virtual, authentic and manipulated, has expanded our ability to see further.
Within the post-modern system, the human being watches in bewilderment as uncertainty is dressed up as a news headline. The proper functioning of a system requires the proper functioning of each of its parts. That is why a system can only be transformed if changes are made in the whole system and not in one part or another.
Since the crisis, and even before it, we have been searching for new norms and a new ethic while watching the old ones dissolve as if in acid, though not at the speed that post-modernity demands.
The vast majority of citizens choose to avoid the risk of creating something new, of inhabiting a new reality, yet we should embrace that risk as part of the evolutionary process of this new human being. Humanity’s advance, progress “per se”, pushes towards the future with the force of the obvious, not so much through anyone’s particular merit as by something close to natural law. Before the next crisis arrives, it is time to start reflecting on the system we create and believe in, and to test whether it is really the ideal one, or the least bad one, the one that does us good and serves us. Any similarity between the offline system and the new online ecosystem does not seem to be mere coincidence.
We need, as a society, to reflect more deeply.
What values do we seek or want, or need? What priorities do we seek as a society? What future are we building and where are we heading? What kind of progress do we seek?
The unprecedented pace of technological change means that our work, transport, education, health, communication, production, distribution, security and energy systems, to name but a few, will be completely transformed. Managing that change will require not only new frameworks for national and multinational cooperation and public-private partnership, but also a new model of education, complete with specific programmes to teach workers new skills, soft more than hard. With the coming advances in machines, robotics and artificial intelligence, we will have to move from a mindset of production and consumption to one of sharing and caring.
We have created an ecosystem that devours data and transforms it into profit, manipulation and currency for a few. But we all knew that. Facebook is a monster, but it is one monster among many.
But we cannot lay the blame or the responsibility for our post-modern present on Facebook or the rest of the data titans, because while we talk about the future, hardly anyone knows where it is heading, no one explains how we will get there, and there are even fewer clear goals for why we are going in that direction. That is why we need to reawaken our consciousness, to recover a sense of who we are, what we seek, what we intend and where we want to go. Only by becoming aware of this will we be able to make the decisions we believe are best for where, and how, we want to go.
Human capacities (critical thinking, empathy, understanding, etc.) have never been so necessary to cross this post-modern desert. Man is imperfect, as we know, and we tend to place our hope in the perfection of machines, of algorithms. In this new reality in which we place our future, our decisions, our tastes, habits and behaviours in machines, I have been asking myself the same question for some time: What will happen to us as a society if the algorithm gets it wrong?