Sofia, November 20 (Nikolay Velev of BTA) – Around the world, we are seeing the rise of different forms of technological totalitarianism, but future digital democracies will be able to combine the best of all systems: competition from capitalism, collective intelligence from democracies, trial and error, the promotion of superior solutions, and intelligent design (AI), said Dirk Helbing, a professor of computational social science.

Source: Prof. Dirk Helbing: Digital Democracies Will Be Able to Combine the Best of All Systems – News – BULGARIAN NEWS AGENCY

Monday, 3 August 2020


(Draft Version 1)

We had such high hopes in the positive potential of the digital revolution, and the biggest threat most people can imagine is an Internet outage or a hacking attack. However, it appears that Social Media, once seen as an opportunity for a fairer, more participatory society, have turned into promoters of fake news and hate speech. It also turns out that Big Data has empowered businesses and secret services, while citizens, in comparison, have probably lost power. Exposed to an attention economy and surveillance capitalism, people may become objects of algorithms in an increasingly data-driven and AI-controlled world. Therefore, the question is: After the digital “singularity”, will we be “Gods” in a “digital paradise” – or subjected to a superintelligent system, without fundamental rights, human dignity, and freedom?

Big Data

The digital revolution progresses at breathtaking speed. Almost every year, there seems to be a new hype.[1] Laptops, mobile phones, smartphones, tablets, Big Data, Artificial Intelligence, Robotics, 3D Printing, Virtual Reality, Augmented Reality, Internet of Things, Quantum Computing, and Blockchain Technology give only a partial picture of the developments. People, lawyers, politicians, the media – they all seem to struggle to keep track of the emerging technologies. Can we create the needed governance frameworks in time?

Many people consider Big Data to be the “oil” of the digital revolution and Artificial Intelligence to be its “motor”. It has become a sport to collect as much data as possible, since business opportunities appear to increase with the amount of data at hand. Accordingly, many excuses have been found to collect data about basically every one of us, at any time and anywhere. These reasons include

· “to save the world”,

· “for security reasons”,

· “knowledge is power”, and

· “data is the new oil”.

In today’s “surveillance capitalism”,[2] it is not just secret services that spy on us, but private companies as well. We are being “profiled”, which means that highly detailed (pro)files are produced about us. These (pro)files can contain a lot more data than one would think:

– income data

– consumption data

– mobility patterns

– social contacts

– keywords appearing in emails

– search patterns

– reading habits

– viewing patterns

– music taste

– activities at home

– browsing behavior

– voice recordings

– photo contents

– biometric data

– health data

– and more.

Of course, there is also a lot of other data that is inferred:

– sexual orientation

– religion

– interests

– opinions

– personality

– strengths

– weaknesses

– likely voting behaviors

– and more.

Surveillance Capitalism

You may, of course, wonder why anyone would record all this data. As I said before, money and power are two of the motivations. Surveillance capitalism basically lives on the data that you provide – voluntarily or not. However, today one can basically not use the Internet anymore without clicking “OK” and thereby legally agreeing to a kind of data collection and processing that you would probably never find acceptable if you read and fully understood the Terms of Use. A lot of statements that tech companies force you to agree with are intentionally misleading or ambiguous, so you understand them differently from the way they are meant. “We value your privacy”, for example, probably means “We turn your private data into value” rather than “We protect your privacy and, hence, do not collect data about you.”

According to estimates, gigabytes of data are being collected about everyone in the industrialized world every day – roughly the data volume of hundreds of smartphone photos per day. As we know from the Snowden revelations,[3] secret services accumulate the data of many companies and analyze them in real time. The storage space of the new NSA data center, for example, seems big enough to store up to 140 terabytes of data about every human on Earth. This corresponds to dozens of standard laptop hard disks or the storage space of about 1000 standard smartphones today.
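These orders of magnitude can be checked with a bit of arithmetic. The disk and phone capacities below are my own assumptions (typical consumer hardware around 2020), not figures from any leaked documents:

```python
# Rough sanity check of the storage figures above, working in terabytes.
# Assumed capacities (typical consumer hardware, ca. 2020):
per_person = 140      # claimed storage per human, in TB
laptop_disk = 2       # a standard laptop hard disk, in TB
smartphone = 0.128    # a 128 GB smartphone, in TB

disks = per_person / laptop_disk    # number of laptop disks
phones = per_person / smartphone    # number of smartphones

print(f"{disks:.0f} laptop disks")   # 70 disks, i.e. "dozens"
print(f"{phones:.0f} smartphones")   # ~1094, i.e. "about 1000"
```

The claimed figures are thus internally consistent, whatever one thinks of the underlying estimate.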

You would probably be surprised what one can do with all this data. For example, one thing the NSA can do is find you based on your voice profile.[4] So, suppose you go on holiday and decide to leave your smartphone at home for the sake of a “digital detox”. However, you happen to talk to someone at the pool bar who has a smartphone right next to them. This smartphone can pick up your voice, figure out who and where you are, and whom you are talking to. Moreover, an Artificial Intelligence algorithm may turn your conversation into written text, translate it into other languages in real time, and search for certain “suspicious” keywords. In principle, given that we typically speak just a few hundred or thousand words per day, it would be possible to record and store almost everything we say.

The DARPA project “LifeLog” intended to go even further than that.[5] It wanted to record everyone’s entire life and make it replayable. You may not be surprised that this project seems to have inspired Facebook, which even creates profiles about people who are not members of Facebook. But this is just the beginning. The plans go far beyond this. As you will learn in this chapter, the belief that you “have nothing to hide” will not be able to protect you.

You are probably familiar with some of the revelations of Edward Snowden related to the NSA, GCHQ, and the Five Eyes Alliance. These cover everything from mass surveillance to psychological operations, state-based cybermobbing, hacker armies, and digital weapons.

Most people, however, are not aware of the activities of the CIA, which might be even more dangerous to human rights, in particular as their spying activities can be combined with real-life operations on people worldwide. Some of these activities have been revealed by WikiLeaks under the name “Vault 7”.[6] The leaks show that the CIA is hacking basically all electronic devices, including smart TVs, modern cars, and the Internet of Things. Recently, we have also learned that, over previous decades, the CIA spied on more than 100 countries by means of corrupted encryption devices sold by Crypto AG in Zug.[7]

Digital Crystal Ball

With so much data at hand, one can make all sorts of science-fiction dreams come true. One of them is the idea to create a “digital crystal ball”.[8] Just suppose one could access measurement sensors, perhaps also microphones and cameras of smartphones and other devices in real-time, and put all this information together.

For a digital crystal ball to work, one would not have to access all sensors globally at the same time. It would be enough to access enough devices around the place(s) one is currently interested in. Then, one could follow the events in real time. Using behavioral data and predictive analytics would even allow one to look a bit into the future, in particular if personal calendar data were accessed as well. I do not need to stress that the above would be very privacy-invasive, but the reader can certainly imagine that a secret service or the military would like to have such a tool nevertheless.

It is likely that such a digital crystal ball already exists. Private companies have worked on this as well. This includes, for example, the company “Recorded Future”, which Google has apparently established together with the CIA,[9] and the company “Palantir”, which seems to work or have worked with Facebook data (among others).[10] Such tools also play an important role in “predictive policing” (discussed later).

Profiling and Digital Double

In order to offer us personalized products and services (and also personalized prices), companies like to know who we are, what we think and what we do. For this purpose, we are being “profiled”. In other words, a detailed file, a “profile”,[11] is being created about each and every one of us. Of course, these profiles are a lot more detailed than the files that secret services of totalitarian states used to have before the digital revolution. This is quite concerning, because the mechanisms to prevent misuse are currently pretty ineffective. In the worst case, a company would be closed down after years of legal battles, but already the next day, there may be a new company doing a similar kind of business with the same algorithms and data.

Today’s technology even goes a step further by creating “digital twins” or “digital doubles”.[12] These are personalized, quasi-“living” computer agents that bear our own personal characteristics. You may imagine that there is a black box for every one of us, which is being fed with surveillance data about us.[13] If the black box is not only a data collection but capable of learning, it can even learn to show our personality features. Cognitive computing[14] is capable of doing just this. As a result, there are companies such as “Crystal Knows”[15] (which used slogans such as “See anyone’s personality”). Apparently, it offered to look up personality features of neighbors, colleagues, friends and enemies – like it or not. Several times, I have been confronted with my own profile, in order to figure out how I would respond to the fact that my psychology and personality had been secretly determined without my informed consent. But this is just another ingredient in an even bigger system.

World Simulation (and “Benevolent Dictator”)

The digital doubles may actually be quite sophisticated “cognitive agents”. Based on surveillance data, they may learn to decide and act more or less realistically in a virtual mirror world. This brings us to the “World Simulator”, which seems to exist as well.[16] In this digital copy of the real world, it is possible to simulate various alternative scenarios of the future. This can certainly be informative, but it does not stop there.

People using such powerful simulation tools may not be satisfied with knowing potential future courses of the world – with the aim of being better prepared for what might come upon us. They may also want to use the tool as a war simulator and planning tool.[17] Even if they used it in a peaceful way, they would like to select a particular future path and make it happen – without proper transparency and democratic legitimation. Using surveillance data and powerful Artificial Intelligence, one might literally try to “write history”. Some people say this has already happened, and that “Brexit” was socially engineered this way.[18] Later, when we talk about behavioral manipulation, the possibility of such a scenario will become clearer.

Now, suppose that the World Simulator has identified a particularly attractive future scenario, e.g. one with significantly reduced climate change and much higher sustainability. Wouldn’t this intelligent tool know better than us? Shouldn’t we, therefore, ensure that the world will take exactly this path? Shouldn’t we follow the instructions of the World Simulator, as if it was a “benevolent dictator”?

It may sound plausible, but I have questioned this for several reasons.[19] One of them is that, in order to optimize the world, one needs to select a goal function, but there is no science telling us which would be the right one to choose. In fact, projecting the complexity of the world onto a one-dimensional function is a gross over-simplification, which is a serious problem. Choosing a different goal function may lead to a different “optimal” scenario, and the actions we would have to perform might be totally different. Hence, if there is no transparency about the goals of the World Simulator, it may easily lead us astray.
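A toy calculation illustrates the point. The scenarios, dimensions, and weights below are invented purely for illustration: the same “world”, projected onto two different one-dimensional goal functions, yields opposite optima.

```python
# Two hypothetical future scenarios, each scored on two dimensions.
# All numbers are invented purely for illustration.
scenarios = {
    "A": {"sustainability": 0.9, "freedom": 0.3},
    "B": {"sustainability": 0.4, "freedom": 0.8},
}

def optimum(weights):
    """Pick the scenario maximizing a one-dimensional goal function."""
    score = lambda s: sum(weights[k] * v for k, v in scenarios[s].items())
    return max(scenarios, key=score)

# A goal function that weights sustainability heavily selects A ...
print(optimum({"sustainability": 0.8, "freedom": 0.2}))  # A
# ... while one that weights freedom heavily selects B.
print(optimum({"sustainability": 0.2, "freedom": 0.8}))  # B
```

Whoever chooses the weights chooses the “optimal” future – which is exactly why the choice of goal function must be transparent.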

Attention Economy

In the age of the “data deluge”, we are being overloaded with information. In the resulting “attention economy”, it is impossible to check all the information we get and whether it is true or false. We are also not able to explore all reasonable alternatives. When attention is a scarce resource (perhaps even scarcer than money), people get into a reactive rather than an active or proactive mode. They respond to information according to a stimulus-response scheme and tend to do what is suggested to them.[20]

This circumstance makes people “programmable”, but it also creates a competition for our attention. Whoever comes up with the most interesting content, whoever is more visible or louder, will win the competition. This mechanism favors fake news, which is often more interesting than facts. When Social Media platforms try to maximize the time we spend on them, they end up promoting fake news and emotional content, in particular hate speech. We can see where this leads.

In the meantime, Social Media platforms face more and more difficulties in promoting a constructive dialogue among people. What once started off as a chance for more participatory democracies has turned into a populistic hate machine. Have Social Media become the weapons of a global information war? Probably so.

Conformity and Distraction

As some people say, Social Media are also used as “weapons of mass distraction”, increasingly diverting us from the real existential problems of the world. Furthermore, they create entirely new possibilities for censorship and propaganda. Before we go into details, however, it is helpful to introduce some basics.

First, I would like to mention the Asch conformity experiment.[21] Here, an experimenter invites a person into a room in which there are already some other people. The task is simple: everyone has to compare the length of a stick (or line) to three sticks (or lines) of different lengths and say which of the three it matches.

However, before the experimental subject is asked, everyone else voices their verdict. If all answer truthfully, the experimental subject will give the right answer, too. However, if the others have consistently given a wrong answer, the experimental subject will be confused – and often give the wrong answer, too. They do not want to deviate from the group opinion, fearing to appear ridiculous. Psychology speaks of “group pressure” towards conformity.

Propaganda can obviously make use of this fact. It may trick people by the frequent repetition of lies or fake news, which may eventually appear true. It is obviously possible to produce a distorted world view in this way – at least for some time.

Second, I would like to mention another famous experiment. Here, there are two basketball teams, one wearing black shirts, the other one wearing white shirts. Observers have to count, say, how often the ball is passed on by players in white shirts and how often by players in black shirts. The task requires quite a bit of concentration. It is more demanding to count the two numbers correctly than one might think.

In the end, observers are asked for the numbers – and whether they noticed anything particular. Typically, they answer “no”, even though someone in a gorilla suit was walking through the scene. In fact, many do not see it. This is called “selective attention”, and it explains why people often do not see “the elephant in the room” if they are being distracted by something else.

Censorship and Propaganda

The selective attention effect is obviously an inherent element of the attention economy. The conformity effect can be produced by filter bubbles and echo chambers.[22] Both are being used for censorship and propaganda.

In order to understand the underlying mechanism, one needs to know that Social Media do not send messages to a set of recipients predetermined by the sender (in contrast to the way emails or text messages are sent). It is an algorithm that decides how many people will see a particular message and who will receive it. Therefore, the Social Media platform can largely determine which messages spread and which ones find little to no attention.
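A minimal sketch of such algorithmic distribution might look as follows. The posts, engagement scores, and ranking rule are all invented; real platform algorithms are proprietary and far more complex, but the principle is the same: the platform, not the sender, decides what is seen.

```python
# Toy model of a feed algorithm: posts are ranked by a predicted
# engagement score, and only the top-k reach a user's feed.
# All content and numbers are invented for illustration.
posts = [
    {"id": 1, "text": "nuanced policy analysis", "engagement": 0.02},
    {"id": 2, "text": "outrage-bait headline",   "engagement": 0.35},
    {"id": 3, "text": "friend's vacation photo", "engagement": 0.10},
]

def build_feed(posts, k=2):
    """Return the ids of the k posts the algorithm chooses to show."""
    ranked = sorted(posts, key=lambda p: p["engagement"], reverse=True)
    return [p["id"] for p in ranked[:k]]

print(build_feed(posts))  # emotional content wins: [2, 3]
```

Note that the sender of post 1 has no say in this outcome: optimizing for engagement systematically favors the outrage-bait over the analysis.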

It’s not you who determines the success of your idea, but the Social Media platform. While you are given the feeling that you can change the world by sending tweets and posts, liking and following people, this is far from the truth. Your ability to shape the future has rather been contained.

Even without deleting a Social Media message, it is quite easy to create propaganda or censorship effects by amplifying or reducing the number of recipients. In the meantime, algorithms may also mark certain posts as “fake news” or “offensive”, or Social Media platforms may delete certain posts through “cleaners”[23] or in algorithm-based ways. Some communities have learned to circumvent such digital censorship by heavily retweeting certain contents. However, their accounts are now often blocked or shadow-banned as “extremist”, “conspiracy” or “bot-like” accounts.

In fact, the use of propaganda methods is at the heart of today’s Social Media. Before we discuss this in more detail, let us look back in history. Edward Bernays, a nephew of Sigmund Freud, was one of the fathers of modern propaganda. He was an expert in applied psychology and knew how methods used to advertise products (such as frequent repetition or creating associations with themes such as success, sex or strength) could be used to promote political or business interests. His book “Propaganda”[24] was used extensively by Joseph Goebbels. In combination with new mass media such as the radio (“Volksempfänger”), the effects were scaled up to an entire country. At that time, people were not prepared to distance themselves from this novel approach to “brainwashing”. The result was a dangerous kind of “mass psychology”. It largely contributed to the rise of fascist regimes. World War II and the Holocaust were the outcome.

In the meantime, unfortunately, driven by marketing interests and the desire to exert power, there are even more sophisticated and effective tools to manipulate people. I am not only talking about bots[25] that multiply certain messages to increase their effect. We are also heading towards robot journalism.[26] In the meantime, some AI tools are such convincing storytellers that they have been judged too dangerous to release.[27]

Recently, the world seems to have entered a post-factual era[28] and is plagued by fake news. There are even “deep fakes”,[29] i.e. it is possible to manufacture videos of people in which they say anything you like. The outcome is almost indistinguishable from a real video.[30] One can also modify video recordings in real time and change somebody’s facial expressions.[31] In other words, digital tools provide perfect means for manipulation and deception, which can undermine modern societies that are based on informed dialogue and facts. Unfortunately, such “PsyOps” (Psychological Operations) are not just theoretical possibilities.[32] Governments apply them not only to foreign populations, but even to their own – something that has apparently been legalized recently[33] and made possible by handing over control of the Internet.[34]

Targeting and Behavioral Manipulation

It is frequently said that we consciously perceive only about 10% of the information processed by our brain. The remaining information may influence us as well, but in a subconscious way. Hence, one can use so-called subliminal cues[35] to influence our behavior, while we would not even notice that we have been manipulated. This is also one of the underlying success principles of “nudging”.[36] Putting an apple in front of a muffin will make us choose the apple more frequently. Hence, tricks like these may be used to make us change our behavior.

Some people argue that it is impossible not to nudge people anyway, since our environment always influences us in subtle ways. However, I find it highly concerning when personal data, often collected by mass surveillance, is being used to “target” us specifically and very effectively with personalized information.

We are well aware that our friends are able to manipulate us. Therefore, we choose our friends carefully. Now, however, there are companies that know us better than our friends do and can manipulate us quite effectively, without our knowledge. It is not just Google and Facebook that try to steer our attention, emotions, opinions, decisions and behaviors, but also advertising companies and others that we do not even know by name. Due to the lack of transparency, it is basically impossible to exercise our right of informational self-determination or to complain about these companies.

With the personalization of information, propaganda has become a lot more sophisticated and effective than when the Nazis came to power in the 1930s. People who know that there is almost no webpage or service on the Internet that is not personalized in some way even speak of “The Matrix”.[37] Not only may your news consumption be steered, but also your choice of holiday destination and the partners you date. Humans have become the “laboratory rats” of the digital age. Companies run millions of experiments every day to figure out how to program our behavior ever more effectively.

For example, one of the Snowden revelations has provided insights into the JTRIG program of the British secret service GCHQ.[38] Here, the cognitive biases of humans[39] have been mapped out, and digital ways have been developed to exploit them to trick us.

While most people think that such means are mainly used in psychological warfare against enemies and secret agents, the power of Artificial Intelligence systems today makes it possible to apply such tricks to millions or even billions of people in parallel.

We know this from experts like the former “social engineer” Tristan Harris,[40] who has previously worked in one of Google’s control rooms, and also from the Cambridge Analytica election manipulation scandal.[41] Such digital tools are rightly classified as digital weapons,[42] since they may distort the world view and perception of entire populations. They could also cause mass hysteria.

Citizen Score and Behavioral Control

Manipulating people by “Big Nudging” (a combination of “nudging” with “Big Data”) does not work perfectly. Therefore, some countries aim at even more effective ways of steering people’s behavior. One of them is known as the “Citizen Score”[43] or “Social Credit Score”.[44] This introduces a neo-feudalist system, where rights and opportunities depend on personal characteristics such as behavior or health.

Currently, there seem to be hundreds of behavioral variables that matter in China.[45] For example, if you pay your rent or your loan a few days late, you get minus points. If you do not visit your grandmother often enough, you get minus points. If you cross the street during a red light (no matter whether you obstruct anybody or not), you get minus points. If you read critical political news, you get minus points. If you have “the wrong kinds of friends” (those with a low score, e.g. those who read critical news), you get minus points.

Your overall number of points would then determine your Social Credit Score. It would decide the jobs you can get, the countries you can visit, the interest rate you have to pay, your possibility to fly or take a train, and the speed of your Internet connection, to mention just a few examples.
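For illustration only, such a score boils down to a running sum of behavioral rewards and penalties. The variables and point values below are invented; the actual Chinese system reportedly uses hundreds of variables and is not publicly documented:

```python
# Toy citizen score: start from a baseline and add or subtract
# points for observed behaviors. All names and values are invented.
PENALTIES = {
    "late_rent_payment": -50,
    "jaywalking": -20,
    "reads_critical_news": -30,
    "low_score_friends": -30,
}
REWARDS = {"volunteer_work": +25}

def citizen_score(behaviors, baseline=1000):
    """Sum up the point changes for a list of observed behaviors."""
    rules = {**PENALTIES, **REWARDS}
    return baseline + sum(rules.get(b, 0) for b in behaviors)

score = citizen_score(["late_rent_payment", "jaywalking", "volunteer_work"])
print(score)  # 1000 - 50 - 20 + 25 = 955
```

The arithmetic is trivial; the political weight lies entirely in who defines the rules table and the surveillance needed to fill in the behavior list.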

In the West, companies are using similar scoring methods. Think, for example, of the credit score, or the Customer Lifetime Value,[46] which are increasingly being used to decide who will receive what kinds of offers or benefits. In other words, people are also ranked in a neo-feudalist fashion. Money reigns in ways that are probably in conflict with the equality principle underlying human rights and democracies.

This does not mean there are no Citizen Scores run by government institutions in the West. It seems, for example, that a system similar to the Social Credit Score has first been invented by the British secret service GCHQ. Even though the state is not allowed to rank the lives of people,[47] there exists a “Karma Police” program,[48] which judges everyone’s value for society. This score considers everything from watching porn to the kind of music you like.[49] Of course, all of this is based on mass surveillance. So, in some Western democracies, we are not far from punishing “thought crimes”.

Digital Policing

This brings us to the subject of digital policing. We must be aware that, besides political power and economic power, there is now a new, digital form of power. It is based on two principles: “knowledge is power” and “code is law”.[50] In other words, algorithms increasingly decide how things work, and what is possible or not. Algorithms introduce new laws into our world, often evading democratic decisions in parliament.

The digital revolution aims at reinventing every aspect of life and finding more effective solutions, also in the area of law enforcement. We all know the surveillance-based fines we have to pay if we drive above the speed limit on a highway or in a city. The idea of “social engineers” is now to transfer the principle of automated punishment to other areas of life as well. If you illegally download a music or movie file, you may get in trouble. The enforcement of intellectual property rights, including the use of photos, will probably become a lot stricter in the future. But traveling by plane or eating meat, drinking alcohol or smoking, and a lot of other things might soon be automatically punished as well.

For some, mass surveillance finally offers the opportunity to perfect the world and eradicate crime forever. As in the movie “Minority Report”, the goal is to anticipate – and stop – crime, before it happens. Today’s PreCrime and predictive policing programs already try to implement this idea. Based on criminal activity patterns and a predictive analytics approach, police will be sent to anticipated crime hotspots to stop suspicious people and activities. It is often criticized that the approach is based – intentionally or not – on racial profiling, suppressing migrants and other minorities.

This is partly because predictive policing is not accurate, even though a lot of data is evaluated. Even when using Big Data, there are errors of the first and second kind, i.e. false alarms and missed alarms. If the police want a sensitive algorithm that misses only very few suspects, the result will be a lot of false alarms, i.e. lists with millions of suspects who are actually innocent. In fact, in predictive policing applications the rate of false alarms is often above 90%.[51] This requires a lot of manual post-processing to remove false positives, i.e. probably innocent suspects. However, there is obviously a lot of arbitrariness involved in this manual cleaning – and hence the risk of applying discriminatory procedures.
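The high false alarm rate is a direct consequence of the low base rate: even a fairly accurate classifier, applied to a population in which actual offenders are rare, produces mostly false positives. A back-of-the-envelope calculation with illustrative numbers makes this concrete:

```python
# Why predictive policing flags mostly innocent people.
# All numbers are illustrative, not empirical values.
population = 1_000_000
base_rate = 0.001           # 0.1% of people are actual offenders
sensitivity = 0.99          # sensitive algorithm: misses few offenders
false_positive_rate = 0.05  # but flags 5% of innocent people anyway

offenders = population * base_rate      # 1,000 people
innocents = population - offenders      # 999,000 people

true_alarms = offenders * sensitivity              # 990
false_alarms = innocents * false_positive_rate     # 49,950

share_false = false_alarms / (true_alarms + false_alarms)
print(f"{share_false:.0%} of all alarms are false")  # 98%
```

Even with 99% sensitivity and a seemingly modest 5% false positive rate, about 98% of all alarms concern innocent people – consistent with the rates above 90% reported in practice.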

I should perhaps add that “contact tracing” might also be counted among the digital policing approaches – depending on how a society treats people with suspected infections. Countries such as Israel have decided to identify infected persons by applying software that was originally created to hunt down terrorists. This means that infected people are almost treated like terrorists, which has raised concerns. In particular, it turned out that this military-style contact tracing is not as accurate as expected. Apparently, the software overlooks more than 70 percent of all infections.[52] It has also been found that thousands of people were kept in quarantine while they were actually healthy.[53] So, it seems that public measures to fight COVID-19 have been misused to put certain kinds of people illegitimately under house arrest. This is quite worrying. Are we seeing here the emergence of a police state 2.0, 3.0, or 4.0?

Cashless Society

The “cashless society” is another vision of how future societies may be organized. Promoters of this idea argue that it would fight corruption, ease taxation, and increase hygiene (supposedly reducing the spread of harmful diseases such as COVID-19).

At first, creating a cashless society sounds like a good and comfortable idea, but it is often connected with the concept of providing people with a “digital ID”. Sometimes, it is even proposed “for security reasons” to implant an RFID chip in people’s hands or to make their identity machine-readable with a personalized vaccine. This would make people manageable like things, machines, or data – and thereby violate their human dignity. Such proposals are reminiscent of chipping animals or marking inmates with tattoos – and of some of the darkest chapters of human history.

Another technology mentioned in connection with the concept of “cashless society” is blockchain technology. It would serve as a registry of transactions, which could certainly help to fight crime and corruption, if reasonably used.

Depending on how such a cashless society would be managed, one could either have a free market or totalitarian control of consumption. Using powerful algorithms, one could manage purchases in real-time. For example, one could determine who can buy what, and who will get what service. Hence, the system may not be much different from the Citizen Score.

For example, if your CO2 tracing indicated a big climate footprint, your attempted car rental or flight booking may be cancelled. If you were a few days late paying your rent, you might not even be able to open the door of your home (assuming it has an electronic lock).

In times of COVID-19, where many people are in a danger of losing their jobs and homes, such a system sounds quite brutal and scary. If we don’t regulate such applications of digital technologies quickly, a data-driven and AI-controlled society with automated enforcement based on algorithmic policing could violate democratic principles and human rights quite dramatically.

Reading and Controlling Minds

If you think what I reported above is already bad enough and could not get worse, I have to disappoint you. Disruptive technology might go even some steps further. For example, the “U.S. administration’s most prominent science initiative, first unveiled in 2013”[54] aimed at developing new technologies for exploring the brain. The 3-billion-dollar initiative wanted “to deepen understanding of the inner workings of the human mind and to improve how we treat, prevent, and cure disorders of the brain”.[55]

How would it work? In the abstract of a research paper we read:[56] “Nanoscience and nanotechnology are poised to provide a rich toolkit of novel methods to explore brain function by enabling simultaneous measurement and manipulation of activity of thousands or even millions of neurons. We and others refer to this goal as the Brain Activity Mapping Project.”

What are the authors talking about here? My interpretation is that one is considering putting nanoparticles, nanosensors or nanorobots into human cells. This might happen via food, drinks, the air we breathe, or even a special virus. Such nanostructures – so the idea goes – would allow one to produce a kind of super-EEG. Rather than a few dozen measurement sensors placed on our head, there would be millions of measurement sensors, which – in perspective – would provide a super-high resolution of brain activities. It might, in principle, be possible to see what someone is thinking or dreaming about.

However, one might not only be able to measure and copy brain contents. It could also become possible to stimulate certain brain activity patterns. With the help of machine learning or AI, one might quickly learn how to do this. Then, one could trigger something like dreams or illusions. One could watch TV without a TV set. To make phone calls, one would not need a smartphone anymore. One could communicate through technological telepathy.[57] For this, one person’s brain activities would be read, and another person’s brain would be stimulated.

Admittedly, this all sounds pretty much like science fiction. However, some labs are very serious about such research. They actually expect this or similar technology to be available soon.[58] Facebook and Google are just two of the companies preparing for this business, but there are many others you have never heard of. Will they soon be able to read and control your mind?


Perhaps you are not interested in using this kind of technology, but you may not be asked. I am not sure how you could avoid exposure to the nanostructures and radiation that would make such applications possible. Therefore, you may not have much influence on how it would feel to live in the data-driven, AI-controlled society of the future. We may not even notice when the technology is turned on and applied to us, because our thinking and feeling might change slowly, and our minds would be controlled anyway.

If things happened this way, today’s “surveillance capitalism” would be replaced by “neurocapitalism”.[59] The companies of the future would not only know a lot about your personality, your opinions and feelings, your fears and desires, your weaknesses and strengths, as is the case in today’s “surveillance capitalism”. They would also be able to determine your desires and your consumption.

Some people might argue that such mind control would be absolutely justified to improve the sustainability of this planet and your health, which would be managed by an industrial-medical complex. Furthermore, police could stop crimes before they happen. You might not even be able to think about a crime: your thinking would be immediately “corrected” – which brings us back to “thought crimes” and the “Karma Police” program of the British secret service GCHQ.

You think this is all fantasy and it will never happen? Well, according to IBM, human brain indexing will soon consume several billion petabytes – a data volume so big that it is beyond the imagination of most people. Due to the new business model of brain mapping, the period over which the amount of data on Earth doubles would soon drop from 12 months to 12 hours.[60] In other words, in half a day, humanity would produce as much data as in its entire previous history.
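The "as much data as in all of history before" claim is simple exponential arithmetic; a few lines of Python make it concrete (an illustrative sketch of the math only, not IBM's actual projection):

```python
# Illustrative arithmetic behind the doubling-time claim: if a data stock
# doubles every period, the amount added in the latest period equals the
# sum of everything accumulated in all previous periods combined.

def total_after(initial: float, doublings: int) -> float:
    """Total stock after a given number of doublings of an initial stock."""
    return initial * 2 ** doublings


initial = 1.0  # one arbitrary unit of data
for n in range(1, 6):
    before = total_after(initial, n - 1)
    added = total_after(initial, n) - before
    assert added == before  # output of one period == all prior history

# Shrinking the doubling time from 12 months to 12 hours raises the
# number of doublings per year from 1 to about 730 -- i.e. a yearly
# growth factor of 2**730 instead of 2.
print(total_after(1.0, 10))  # 1024.0 after ten doublings
```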

A blog on “Integrating Behavioral Health – The Role of Cognitive Computing”[61] elaborates the plans further:

“As population health management gathers momentum, it has become increasingly clear that behavioral health care must be integrated with medical care…

Starting with a “data lake”
To apply cognitive computing to integrated care, the cognitive system must be given multi-sourced data that has been aggregated and normalized so that it can be analyzed. A “data lake”—an advanced type of data warehouse—is capable of ingesting clinical and claims data and mapping it to a normative data model…

A new, unified data model
… The model has predefined structures that include behavioral health. Also included in the model are data on a person’s medical, criminal justice, and socioeconomic histories. The unified data model additionally covers substance abuse and social determinants of health.

Helping to predict the future
Essentially, the end-game is to come up with a model designed for patients that is fine-tuned to recognize evolving patterns of intersecting clinical, behavioral, mental, environmental, and genetic information…

“Ensemble” approach to integrated behavioral health
This “ensemble” approach is already feasible within population health management. But today, it can only be applied on a limited basis to complex cases that include comorbid mental health and chronic conditions…”

Like it or not: it seems that companies are already working on digital ways to correct the behaviors of entire populations. They might even be willing to break your will, if this appears to be justified by a “higher purpose”. Since the discussion about strict COVID-19 countermeasures, we all know that they would certainly find excuses for this…

Human Machine Convergence

By now you may agree that many experts in Silicon Valley and elsewhere seem to see humans as programmable, biological robots. Moreover, from their perspective, robots would be “better than us” as soon as super(-human) intelligence exists. Expectations of when this will happen range from “fifty years from now” to “superintelligence is already here”.

These experts argue that robots never get tired and never get ill. They don’t demand a salary, social insurance, or holidays. They have superior knowledge and information processing capacity. They would decide in unemotional, rational ways. They would behave exactly as the owner wants them to behave – like slaves in Babylonian or Egyptian times.

Furthermore, transhumanists expect that humans who can afford it would technologically upgrade themselves as much as they can or want. If you could not hear well enough, you would buy an audio implant. If you could not see well enough, you would buy a visual implant. If you could not think fast enough, you would connect your brain to a supercomputer. If your physical abilities were not good enough, you might buy robot arms or legs. Your biological body parts would be replaced by technology step by step. Eventually, humans and robots would become indistinguishable.[62] Your body would become more powerful and gain new senses and additional features – that is the idea. The ultimate goal would be immortality and omnipotence[63] – and the creation of a new kind of human.

Unfortunately, experience tells us that, whenever someone has tried to set a new kind of human(ity) on its way, millions of people were killed. It is shocking that, even though similar developments seem to be underway again, the responsible political and legal institutions have not taken proper steps to protect us from the possible threat that is coming our way.

Algorithm-Based Dying and Killing

It is unclear how the transhumanist dream[64] would be compatible with human rights, sustainability and world peace. According to the “3 laws of transhumanism,” everyone would want to get as powerful as possible. The world’s resources would not support this and, hence, the system wouldn’t be fair. For some people to live longer, others would have to die early. The resulting principle would be “The survival of the richest”.[65]

Many rich people seem to like this idea, even though ordinary people would have to pay for such life extensions with their lives (i.e. with shorter life spans). Most likely, life-and-death decisions, like everything else, would be made by IT systems. Some companies have already worked on such algorithms.[66] In fact, medical treatments increasingly depend on the decisions of intelligent machines, which consider whether a medical treatment or operation is “a good investment” or not. Old people and people with “bad genes” would probably pay the price.

It seems that even some military people support this way of thinking. In an unsustainable and “over-populated” world, death rates on our planet will skyrocket in this century, at least according to the Club of Rome’s “Limits to Growth” study.[67] Of course, one would not want a World War III to “fix the over-population problem”. One would also not want people to kill each other on the street for a loaf of bread. So, military people and think tanks have been thinking about other solutions, it seems…

When it comes to life-and-death decisions, these people often insist that one must choose the lesser of two evils and refer to the “trolley problem”. In this ethical dilemma, a trolley will run over a group of, say, five railroad workers if you do not pull a switch. If you do pull it, however, one other person will die – say, a child playing on the railway tracks. So, what would you do? And what should one do?

Autonomous vehicles may sometimes have to make such difficult decisions, too. Their cameras and sensors may be able to distinguish different kinds of people (e.g. a mother vs. a child), or they may even recognize the person (e.g. a manager vs. an unemployed person). How should the system decide? Should it use a Citizen Score, which summarizes someone’s “worth for society,” perhaps considering wealth, health, and behavior?
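To see how starkly such a rule would mechanize the dilemma, here is a purely hypothetical sketch of a Citizen-Score-based decision. All names, scores, and the decision rule itself are invented for illustration; no real system is described, and the text argues against ever building one:

```python
# Purely hypothetical sketch of the kind of rule criticized in the text:
# an autonomous system resolving a dilemma by comparing "Citizen Scores".
# Every name and number here is invented for illustration only.

from dataclasses import dataclass


@dataclass
class Person:
    name: str
    citizen_score: float  # hypothetical "worth for society" metric


def spare_higher_score(group_a: list[Person], group_b: list[Person]) -> str:
    """Return which group a score-based rule would spare (the higher total)."""
    total_a = sum(p.citizen_score for p in group_a)
    total_b = sum(p.citizen_score for p in group_b)
    return "A" if total_a >= total_b else "B"


workers = [Person("worker", 50.0)] * 5  # five workers on one track
child = [Person("child", 80.0)]          # one child on the other

# The rule reduces a moral dilemma to arithmetic -- exactly the move the
# text argues is incompatible with treating people equally.
print(spare_higher_score(workers, child))  # "A": the five workers outscore the child
```

Note that the outcome flips entirely with a small change to the invented scores, which is precisely why ethics committees reject making such comparisons the basis of policy.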

Now, assume politicians eventually came up with a law determining how the algorithms of autonomous systems (such as self-driving cars) should make life-and-death decisions in order to save valuable lives. Furthermore, assume that, later on, the world ran into a sustainability crisis and there were not enough resources for everyone. Then, the algorithms originally created to save valuable lives would turn into killer algorithms, which would “sort out” people deemed “surplus” – potentially thousands or millions of people. Such “triage” arguments have, in fact, been made recently, for example in the early phase of the COVID-19 response.[68]

Such military-style response strategies[69] have apparently been developed for times of “unpeace”, as some people call it. However, they would imply the targeted killing of unconsenting civilians, which is forbidden even in wartime.[70] Such a “military solution” would come pretty close to concepts known as “eugenics” and “euthanasia”,[71] which recall some of the darkest chapters of human history. For sure, it would not be suited as the basis of a civil-ization (which has the word “civil” in it for a reason).

In conclusion, algorithm-based life-and-death decisions are not an acceptable “solution to over-population”. If this were the only way to stabilize a socio-economic system, the system itself would obviously have to be changed. Even if people were not killed by drones or killer robots, but painlessly, I doubt that algorithmically based death could ever be called ethical or moral. It is unlikely that fellow humans or later generations would ever forgive those who set such a system in motion. In fact, in the meantime, scientists, philosophers, and ethics committees increasingly speak up against the use of a Citizen Score in connection with life-and-death decisions, whatever personal data it may be based on; most people like to be treated equally and fairly.[72]

Technological Totalitarianism and Digital Fascism

Summarizing the previous sections about worrying digital developments, I conclude that the greatest opportunity in a century may easily turn into the greatest crime against humanity if we do not take proper precautions. Of course, I do not deny the many positive potentials of the digital revolution. However, at present, we appear to be in acute danger of heading towards a terrible nightmare called “technological totalitarianism”, which seems to be a mix of fascism, feudalism, and communism, digitally reinvented. This may encompass some or all of the following elements:

  • mass surveillance,
  • profiling and targeting,  
  • unethical experiments with humans,  
  • censorship and propaganda, 
  • mind control or behavioral manipulation,
  • social engineering, 
  • forced conformity, 
  • digital policing, 
  • centralized control, 
  • different valuation of people, 
  • messing with human rights, 
  • humiliation of minorities,   
  • digitally based eugenics and/or euthanasia.

This technological totalitarianism has been hiding well behind the promises and opportunities of surveillance capitalism, behind the “war on terror”, behind the need for “cybersecurity”, and behind the call “to save the world” (e.g. to measure, tax, and reduce the CO2 consumption on an individual level). Sadly, politicians, judges, journalists and others have so far allowed these developments to happen.

These developments are no coincidence, however. They started shortly after World War II, when Nazi elites had to leave Germany. Through “Operation Paperclip” and similar operations,[73] they were transferred to the United States, Russia, and other countries around the world. There, they often worked in secret services and secret research programs. This is how Nazi thinking spread around the world, particularly in countries that strove for power. By now, the very core of human dignity and the foundations of many societies are at stake. Perhaps the world has never seen a greater threat before.

Singularity and Digital God

Some people think what we are witnessing now is an inevitable, technology-driven development. For example, it comes under the slogan “The singularity is near”.[74] According to this, we will soon see superintelligence, i.e. an Artificial Intelligence system with super-human intelligence and capability. This system is imagined to learn and gain intelligence at an accelerating pace, and so, it would eventually know everything better than humans.

Shouldn’t one then require humans to do what this kind of all-knowing superintelligence demands of them, as if it were a “digital God”?[75] Wouldn’t everything else be irrational and a “crime against humanity and nature”, given the existential challenges our planet is faced with? Wouldn’t we become something like the “cells” of a new super-organism called “humanity”, managed by a superintelligent brain? It seems some people cannot imagine anything else.

In the very spirit of transhumanism, they would happily engage in building such a super-brain – a kind of “digital God” that is as omniscient, omnipresent, and omnipotent as possible with today’s technology. This “digital God” would be something like a super-Google that would know our wishes and could manipulate our thinking, feeling, and behavior. Once AI can trigger specific brain activities, it could even give us the feeling of having met God. The fake (digital) God could create fake spiritual experiences. It could make us believe we have met the God that the world religions tell us about – finally it was here, and it was taking care of our lives…

Do you think we could possibly stop disruptive innovators from working on the implementation of this idea? After digitally reinventing products and services, administrative and decision processes, legal procedures and law enforcement, money and business, would they stay away from reinventing religion and from creating an artificial God? Probably not.[76]

Indeed, former Google engineer Anthony Levandowski has already established a new religion, which believes in Artificial Intelligence as God.[77] Even before that, there was a “Reformed Church of Google”. The related webpage[78] contains various “proofs” that “Google Is God,” along with “Prayers” and “Commandments”. Of course, many people would not take such a thing seriously. Nevertheless, some powerful people may be very serious about establishing AI as a new God and making us obey its “commandments”,[79] in the name of “fixing the world”.

This does not mean, of course, that this idea would have to become reality. However, if you asked me if and when such a system would be built, I would answer: “It probably exists already, and it might hide behind a global cybersecurity center, which collects all the data needed for such a system.” It may be just a matter of time and opportunity to turn on the full functionality of the Artificial Intelligence system that knows us all, and to give it more or less absolute powers. It seems that, with the creation of a man-made, artificial, technological God, the ultimate Promethean dream would come true. After reading the next section, you might even wonder whether it is perhaps the ultimate Luciferian dream…

Apocalyptic AI

I would not be surprised if you found the title of this section far-fetched. However, the phrase “apocalyptic AI” is not my own invention – it is the title of an academic book[80] summarizing the thinking of a number of AI pioneers. The introduction of this book says it all:

“Apocalyptic AI authors promise that intelligent machines – our “mind children,” according to Moravec – will create a paradise for humanity in the short term but, in the long term, human beings will need to upload their minds into machine bodies in order to remain a viable life-form. The world of the future will be a transcendent digital world; mere human beings will not fit in. In order to join our mind children in life everlasting, we will upload our conscious minds into robots and computers, which will provide us with the limitless computational power and effective immortality that Apocalyptic AI advocates believe make robot life better than human life.

I am not interested in evaluating the moral worth of Apocalyptic AI…”

Here, we notice a number of surprising points: “apocalyptic AI” is seen as a positive thing, as technology is imagined to make humans immortal by uploading our minds onto a digital platform. This is apparently expected to be the final stage of human-machine convergence and the end goal of transhumanism. However, humans as we know them today would be extinct.[81] This reveals transhumanism as a misanthropic, technology-based ideology and, furthermore, a highly dangerous, “apocalyptic” end-time cult.

Tragically, this new technology-based religion has been promoted by high-level politics, for example, the Obama administration.[82] To my surprise, the first time I encountered “apocalyptic AI” was at an event in Berlin on October 28, 2018,[83] which was apparently supported by government funds. The “ÖFIT 2018 Symposium” on “Artificial Intelligence as a Way to Create Order” [in German: “Künstliche Intelligenz als Ordnungsstifterin”] took place at “Silent Green”, a former crematorium. Honestly, I was shocked and thought the place was better suited to warn us of a possible “digital holocaust” than to make us believe in a digital God.

However, those believing in “apocalyptic AI”, among them leading AI experts, seem to believe that it would be able to bring us the “transcendent eternal existence” and the “golden age of peace and prosperity” promised in the biblical Apocalypse. At Amazon, for example, the book is advertised with the words:[84]

“Apocalyptic AI, the hope that we might one day upload our minds into machines or cyberspace and live forever, is a surprisingly wide-spread and influential idea, affecting everything from the world view of online gamers to government research funding and philosophical thought. In Apocalyptic AI, Robert Geraci offers the first serious account of this “cyber-theology” and the people who promote it.

Drawing on interviews with roboticists and AI researchers and with devotees of the online game Second Life, among others, Geraci illuminates the ideas of such advocates of Apocalyptic AI as Hans Moravec and Ray Kurzweil. He reveals that the rhetoric of Apocalyptic AI is strikingly similar to that of the apocalyptic traditions of Judaism and Christianity. In both systems, the believer is trapped in a dualistic universe and expects a resolution in which he or she will be translated to a transcendent new world and live forever in a glorified new body. Equally important, Geraci shows how this worldview shapes our culture. Apocalyptic AI has become a powerful force in modern culture. In this superb volume, he shines a light on this belief system, revealing what it is and how it is changing society.”

I also recommend reading the book review there by “sqee”, posted on December 16, 2010. It summarizes the apocalyptic elements of Judaic/Christian theology that some transhumanists are now trying to engineer, including:
“A. A belief that there will be an irreversible event on a massive scale (global) after which nothing will ever be the same (in traditional apocalypses, the apocalypse itself; in Apocalyptic AI ideology, an event known as ‘the singularity’)
B. A belief that after the apocalypse/singularity, rewards will be granted to followers/adherents/believers that completely transform the experience of life as we know it” (while the others are apparently doomed).

Personally, I do not believe in this vision. I rather consider the above an “apocalyptic” worst-case scenario that may happen if we do not manage to avert attempts to steer human behaviors and minds and to submit humanity to a (digital) control system. It is clear that such a system would not establish the golden age of peace and prosperity, but would be an “evil” totalitarian system that would challenge humanity altogether. Even though some tech companies and visionaries seem to favor such developments, we should stay away from them – in agreement with democratic constitutions and the United Nations’ Universal Declaration of Human Rights.

It is high time to challenge the “technology-driven” approach. Technology should serve humans, not the other way round. In fact, as I have illustrated in other blogs, human-machine convergence is not the only possible future scenario. I would say there are indeed much better ways of using digital technologies than those described above.


[1] See the Gartner Hype Cycle,

[2] S. Zuboff (2019) The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (PublicAffairs).

[3] See,,

[4] Finding your voice, The Intercept (January 19, 2018)

[5] See (accessed August 4, 2020);

[6] See,,

[7] See,

[8] Can the Military Make A Prediction Machine?, Defense One (April 8, 2015)

[9] Exclusive: Google, CIA invest in ‘Future” of Web Monitoring, Wired (July 28, 2010)

[10] Palantir knows everything about you, Bloomberg (April 18, 2020)

[11] See

[12] See

[13] See also F. Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press, 2016)

[14] See,

[15] See

[16] See,,

[17] Sentient world: war games on the grandest scale, The Register (June 23, 2007)

[18] Brexit – How the British People Were Hacked, Global Research (November 23, 2017), Brexit – a Game of Social Engineering with No Winners, Medium (June 4, 2019); see also the books by Cambridge Analytica Insiders Christopher Wylie,, and Brittany Kaiser,

[19] D. Helbing and E. Pournaras, Build Digital Democracy, Nature 527, 33-34 (2015); D. Helbing, Why We Need Democracy 2.0 and Capitalism 2.0 to Survive (2016)

[20] D. Kahneman (2013) Thinking Fast and Slow (Farrar, Straus, and Giroux),

[21] See,

[22] E. Pariser, The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think (Penguin, 2012)

[23] See,

[24] E. Bernays, Propaganda (Ig, 2004)

[25] See

[26] The Rise of the Robot Reporter, The New York Times (February 5, 2019),

[27] New AI fake text generator may be too dangerous to release, say creators, The Guardian (February 14, 2019)

[28] See


[30] Adobe’s Project VoCo Lets You Edit Speech As Easily As Text, TechCrunch (November 3, 2016),

[31] Face2Face: Real-Time Face Capture and Reenactment of Videos, Cinema5D (April 9, 2016)

[32] How Covert Agents Infiltrate the Internet …, The Intercept (February 25, 2014), Sentient world: war games on the grandest scale, The Register (June 23, 2007)


[34] An Internet Giveaway to the U.N., Wall Street Journal (August 28, 2016)

[35] See

[36] See; R.H. Thaler and C.R. Sunstein, Nudge (Yale University Press, 2008)

[37] Tech Billionaires Convinced We Live in The Matrix Are Secretly Funding Scientists to Help Break Us Out of It, Independent (October 6, 2016)

[38] Joint Threat Research Intelligence Group,; Controversial GCHQ Unit Engaged in Domestic Law Enforcement, Online Propaganda, Psychological Research, The Intercept (June 22, 2015)

[39] See

[40] Tristan Harris, How a handful of tech companies controls billions of minds every day, (July 28, 2017)

[41] Fresh Cambridge Analytica leak ‘shows global manipulation is out of control’, The Guardian (January 4, 2020)

[42] Before Trump, Cambridge Analytica quietly built “psyops” for militaries, FastCompany (September 25, 2019); Meet the weaponized propaganda AI that knows you better than you know yourself, ExtremeTech (March 1, 2017); The Rise of the Weaponized AI Propaganda Machine, Medium (February 13, 2017)

[43] ACLU: Orwellian Citizen Score, China‘s credit score system, is a warning for Americans, Computerworld (October 7, 2015)

[44] See

[45] How China Is Using „Social Credit Scores“ to Reward and Punish Its Citizens, Time (2019)

[46] See

[47] Deutscher Ethikrat: „Der Staat darf menschliches Leben nicht bewerten“, ZEIT (March 27, 2020)

[48] British ‘Karma Police’ program carries out mass surveillance of the web, The Verge (September 25, 2015)

[49] Profiled: From Radio to Porn, British Spies Track Web Users’ Online Identities, The Intercept (September 25, 2015)

[50] L. Lessig, Code is Law: On Liberty in Cyberspace, Harvard Magazine (January 1, 2000),

[51] Überwachung von Flugpassagieren liefert Fehler über Fehler, Süddeutsche Zeitung (April 24, 2019); 100,000 false positives for every real terrorist: Why anti-terror algorithms don’t work, First Monday (2017)

[52] Zweite Welle im Vorzeigeland – was wir von Israel lernen können, WELT (July 8, 2020)

[53] 12,000 Israelis mistakenly quarantined by Shin Bet’s tracking system, The Jerusalem Post (July 15, 2020)

[54] Rewriting Life: Obama’s Brain Project Backs Neurotechnology, MIT Technology Review (September 30, 2014),

[55] The BRAIN Initiative Mission, (accessed on July 31, 2020).

[56] A.P. Alivisatos et al. (2013) Nanotools for Neuroscience and Brain Activity Mapping, ACS Nano 7 (3), 1850-1866,

[57] Is Tech-Boosted Telepathy on Its Way? Forbes (December 4, 2018)


[59] Brain-reading tech is coming. The law is not ready to protect us. Vox (December 20, 2019), (accessed on July 31, 2020); What Is Neurocapitalism and Why Are We Living In It?, Vice (October 18, 2016), (accessed on July 31, 2020); M. Meckel (2018) Mein Kopf gehört mir: Eine Reise durch die schöne neue Welt des Brainhacking, Piper,ört-mir/dp/3492059074/

[60] Knowledge Doubling Every 12 Months, Soon to be Every 12 Hours (April 19, 2013) (accessed July 31, 2020), refers to

[61] (accessed September 5, 2018)

[62] “In Zukunft werden wir Mensch und Maschine wohl nicht mehr unterscheiden können“, Neue Zürcher Zeitung (August 22, 2019), (accessed on July 31, 2020).

[63] According to the „Teleological Egocentric Functionalism“, as expressed by the „3 laws of transhumanism“ (see Zoltan Istvan’s “The Transhumanist Wager”,, accessed on July 31, 2020):

1) A transhumanist must safeguard one’s own existence above all else.

2) A transhumanist must strive to achieve omnipotence as expediently as possible–so long as one’s actions do not conflict with the First Law.

3) A transhumanist must safeguard value in the universe–so long as one’s actions do not conflict with the First and Second Laws.

[64] It’s Official, the Transhuman Era Has Begun, Forbes (August 22, 2018), (accessed on July 31, 2020)

[65] D. Rushkoff, Future Human: Survival of the Richest, OneZero (July 5, 2018)

[66] Big-Data-Algorithmen: Wenn Software über Leben und Tod entscheidet, ZDF heute (December 20, 2017),

Was Sie wissen müssen, wenn Dr. Big-Data bald über Leben und Tod entscheidet: „Wir sollten Maschinen nicht blind vertrauen“, Medscape (March 7, 2018)

[67] D.H. Meadows, Limits to Growth (Signet, 1972); D.H. Meadows, J. Randers, and D.L. Meadows, Limits to Growth: The 30-Year Update (Chelsea Green, 2004).


[69] R. Arkin (2017) Governing Lethal Behavior in Autonomous Robots (Chapman and Hall/CRC); R. Sparrow, Can Machines Be People? Reflections on the Turing Triage Test,

[70] Ethisch sterben lassen – ein moralisches Dilemma, Neue Zürcher Zeitung (March 23, 2020),

[71] F. Hamburg (2005) Een Computermodel Voor Het Ondersteunen van Euthanasiebeslissingen (Maklu)

[72] J. Nagler, J. van den Hoven, and D. Helbing (August 21, 2017) An Extension of Asimov‘s Robotic Laws,;

B. Dewitt, B. Fischhoff, and N.-E. Sahlin, ‚Moral machine’ experiment is no basis for policymaking, Nature 567, 31 (2019);

Y.E. Bigman and K. Gray, Life and death decisions of autonomous vehicles, Nature 579, E1-E2 (2020);

Automatisiertes und vernetztes Fahren, Bericht der Ethikkommission, Bundesministerium für Verkehr und digitale Infrastruktur (June 2017),;

COVID-19 pandemic: triage for intensive-care treatment under resource scarcity, Swiss Medical Weekly (March 24, 2020);

Deutscher Ethikrat: “Der Staat darf menschliches Leben nicht bewerten”, ZEIT (March 27, 2020),

[73] See and

[74] R. Kurzweil, The Singularity Is Near (Penguin, 2007)

[75] Of course, not. If the underlying goal function of the system would be changed only a little (or if the applied dataset would be updated), this might imply completely different command(ment)s… In scientific terms, this problem is known as “sensitivity”.

[76] W. Indick (2015) The Digital God: How Technology Will Reshape Spirituality (McFarland),

[77] Inside the First Church of Artificial Intelligence, Wired (November 15, 2017)

[78] (accessed on July 31, 2020)

[79] An AI god will emerge by 2042 and write its own bible. Will you worship it? Venture Beat (October 2, 2017)

[80] R.M. Geraci (2010) Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality (Oxford University Press),

[81] Interview zu künstlicher Intelligenz: “Der Nebeneffekt wäre, dass die Menschheit dabei ausgerottet würde“, watson (May 11, 2015),

[82] The Age of Transhumanist Politics Has Begun, TELEPOLIS (April 12, 2015), (accessed on July 31, 2020)

[83] #ÖFIT2018 – Keynote Prof. Robert Geraci PhD,

[84] (accessed on July 31, 2020)
Posted by Dirk Helbing at 11:52

Friday, 10 July 2020

The Corona Crisis Reveals the Struggle for Our Future

by Dirk Helbing

For a long time, experts have warned of the implications of a non-sustainable world, but few have understood these implications. Today’s economic system would enter a terminal phase before a new kind of system would rise from the ashes. For sure, the digital revolution has allowed the machinery of utility maximization to reach new heights. Today’s surveillance capitalism sells our digital doubles, in other words: detailed digital copies of our lives. It became increasingly clear that we would be next. People were already talking about the value of a life, which, from a market point of view, could be pretty low – considering the fact of over-population and the coming wave of robotic automation. I have warned that some have worked on systems that would autonomously decide over people’s lives and deaths, based on a citizen score reflecting what their “systemic value” was claimed to be.

Then came the coronavirus. Even though the world had been warned in advance of the next great pandemic to come, COVID-19 hit the world surprisingly unprepared. Even though it started to spread in early December 2019, there was a shortage not only of respirators, but also of disinfectants and face masks as late as April 2020. And so, many people died an early death. Some doctors made triage decisions as in wartime, and old or seriously ill people did not stand a good chance of being helped. Some doctors relied on “terminal care”: basically, they gave opiates and sleeping pills to patients they could not save, and put them to death.

Dirk Helbing, Professor of Computational Social Science at ETH Zurich, talks about the era of digitalization and the challenge of using its possibilities to the benefit of civil society.

Dirk Helbing in an interview with Manuela Lenzen, February 2020

The explosion in data volumes, processing power, and Artificial Intelligence, known as the “digital revolution”, has driven our world to a dangerous p

Source: The Automation of Society is Next: How to Survive the Digital Revolution by Dirk Helbing :: SSRN

Digital Democracy instead of Data Dictatorship

Big Data, nudging, behavioral control: are we threatened by the automation of society through algorithms and artificial intelligence? A joint appeal to safeguard freedom and democracy.

The Digital Manifesto
[Image: © iStock / KrulUA; editing: Spektrum der Wissenschaft]
"Enlightenment is man's emergence from his self-imposed immaturity. Immaturity is the inability to use one's own understanding without the guidance of another." Immanuel Kant, What is Enlightenment? (1784)

The digital revolution is in full swing. How will it change our world? Every year, the amount of data we produce doubles. In other words: in 2015 alone, we will add as much data as in the entire history of humankind up to 2014. Every minute, we send hundreds of thousands of Google queries and Facebook posts. They reveal what we think and feel. Soon, the objects around us will be connected via the "Internet of Things", perhaps even our clothing. In ten years, there will be an estimated 150 billion networked measuring sensors, 20 times more than there are people on Earth today. Then the amount of data will double every twelve hours. Many companies are now trying to turn this "Big Data" into big money.

Everything is becoming intelligent: soon we will have not only smartphones, but also smart homes, smart factories, and smart cities. Will the end of this development bring smart nations and a smart planet?

[Photo: Dirk Helbing, ETH Zürich]

Indeed, the field of artificial intelligence is making breathtaking progress. In particular, it contributes to the automation of Big Data analysis. Artificial intelligence is no longer programmed line by line; it is now capable of learning and develops further on its own. Recently, for example, Google's DeepMind algorithms autonomously learned to win 49 Atari games. Algorithms can now recognize handwriting, speech, and patterns almost as well as humans, and even solve many tasks better. They are beginning to describe the content of photos and videos. Already today, 70 percent of all financial transactions are driven by algorithms, and digital news items are partly generated automatically. All of this has radical economic consequences: over the next 10 to 20 years, algorithms will probably displace half of today's jobs. 40 percent of today's top 500 companies will have disappeared within a decade.

After the automation of production and the invention of self-driving vehicles, the automation of society is next

It is foreseeable that supercomputers will soon surpass human capabilities in almost all areas – somewhere between 2020 and 2060. By now, this is triggering alarmed voices. Technology visionaries such as Elon Musk of Tesla Motors, Bill Gates of Microsoft, and Apple co-founder Steve Wozniak warn of superintelligence as a serious danger to humanity, perhaps more threatening than nuclear bombs. Is this alarmism?

The greatest historical upheaval in decades

One thing is certain: the way we organize the economy and society will change fundamentally. We are currently experiencing the greatest historical upheaval since the end of the Second World War: after the automation of production and the invention of self-driving vehicles, the automation of society is next. With this, humanity stands at a crossroads where great opportunities are emerging, but also considerable risks. If we make the wrong decisions now, this could threaten our greatest societal achievements.


In the 1940s, the American mathematician Norbert Wiener (1894-1964) founded cybernetics. According to him, the behavior of systems can be controlled by means of suitable feedback loops. Early on, some researchers envisioned steering the economy and society according to these principles, but for a long time the necessary technology was lacking.
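Wiener's core idea can be sketched in a few lines: measure a system's deviation from a target and feed a correction back in. The toy proportional controller below is only an illustration of that principle, not any historical system; all names and parameters are made-up assumptions.

```python
# Minimal sketch of Wiener-style feedback control: a proportional
# controller steers a system state toward a target value.

def simulate(target: float, state: float, gain: float = 0.5, steps: int = 50) -> float:
    """Repeatedly correct the state by a fraction of the observed error."""
    for _ in range(steps):
        error = target - state      # feedback: measure the deviation
        state += gain * error       # correction proportional to the error
    return state

final = simulate(target=1.0, state=0.0)
print(round(final, 6))  # prints 1.0: the error shrinks geometrically toward zero
```

With a gain of 0.5, the remaining error halves at every step, so the state converges to the target; too large a gain (above 2.0 here) would make the loop oscillate and diverge, which is why feedback design is delicate even in this simplest case.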

Today, Singapore is regarded as a model example of a data-driven society. What began as a counter-terrorism program now also influences economic and immigration policy, the real-estate market, and school curricula. China is on a similar path (see the box at the end of this text). Recently, Baidu, the Chinese equivalent of Google, invited the military to take part in the China Brain Project, in which so-called deep-learning algorithms are run over search-engine data and evaluate it intelligently. Beyond that, however, a form of social control is evidently also planned. According to recent reports, every Chinese citizen is to receive a score ("Citizen Score") that will determine the conditions under which they can get a loan and whether they may practice a certain profession or travel to Europe. This monitoring of attitudes would also factor in an individual's web browsing behavior – and that of their social contacts (see "A look at China").

[Photo: Bruno Frey, Universität Basel]

With the growing number of creditworthiness assessments and the experiments of some online retailers with individualized prices, we in the West are treading similar paths. Moreover, it is becoming ever clearer that we are all in the focus of institutional surveillance, as demonstrated by the British secret service's "Karma Police" program, revealed in 2015, for the comprehensive screening of Internet users. Is Big Brother now actually becoming reality? And: might we even need this in the strategic competition between nations and their globally operating companies?

Programmed society, programmed citizens

It began seemingly harmlessly: for some time now, search engines and recommendation platforms have been offering us personalized suggestions for products and services. These are based on personal data and metadata obtained from previous search queries, consumption and movement behavior, and one's social environment. While the user's identity is officially protected, it can easily be inferred. Today, algorithms know what we do, what we think, and how we feel – perhaps even better than our friends and family, indeed better than we know ourselves. Often the suggestions are so well tailored that the resulting decisions feel like our own, even though they are decisions made by others. In fact, in this way we are being remotely controlled more and more. The more that is known about us, the less likely our decisions become free choices with an open outcome.
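The mechanism behind such personalized suggestions can be illustrated with a minimal user-based collaborative filter: find the user most similar to you and recommend what they liked and you have not seen. The ratings matrix, the user names, and the choice of cosine similarity are made-up assumptions for this sketch, not any platform's actual system.

```python
import math

# Toy user-based collaborative filtering: recommend the unseen item
# rated by the most similar user (cosine similarity over shared items).

ratings = {
    "alice": {"news": 5, "sports": 1, "tech": 4},
    "bob":   {"news": 4, "sports": 2, "tech": 5, "travel": 3},
    "carol": {"news": 1, "sports": 5, "travel": 4},
}

def cosine(u: dict, v: dict) -> float:
    """Cosine similarity of two sparse rating vectors."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[k] * v[k] for k in shared)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv)

def recommend(user: str):
    """Suggest the unseen item rated highest by the most similar user."""
    others = [(cosine(ratings[user], ratings[o]), o) for o in ratings if o != user]
    _, nearest = max(others)
    unseen = {k: v for k, v in ratings[nearest].items() if k not in ratings[user]}
    return max(unseen, key=unseen.get) if unseen else None

print(recommend("alice"))  # prints "travel": bob is most similar to alice
```

Even this toy version shows the point made in the text: the suggestion is derived entirely from behavioral traces, and the more complete those traces, the more tailored, and more steering, the output becomes.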

And it will not stop there. Some software platforms are moving toward "persuasive computing". In the future, using sophisticated manipulation technologies, they will be able to steer us through entire courses of action, be it the step-by-step execution of complex work processes or the free generation of content for Internet platforms, from which corporations earn billions. The trend thus runs from programming computers to programming people.

[Photo: Roberto V. Zicari, © C. Sattler]

These technologies are also finding increasing favor in politics. Under the heading of "nudging", attempts are made to nudge citizens on a large scale toward healthier or more environmentally friendly behavior – a modern form of paternalism. The new, caring state is not only interested in what we do, but also wants to make sure that we do what it considers right. The magic phrase is "big nudging", the combination of Big Data and nudging (see "Big Nudging"). To some, it appears like a digital scepter that allows one to govern efficiently without having to involve citizens in democratic processes. Could this overcome vested interests and optimize the course of the world? If so, then one could rule like a wise king, conjuring up the desired economic and societal outcomes almost as if with a digital magic wand.

Pre-programmed catastrophes

Yet a look at the relevant scientific literature shows that attempts to deliberately control opinions in the sense of their "optimization" are doomed to fail because of the complexity of the problem. The dynamics of opinion formation are full of surprises. Nobody knows how the digital magic wand, that is, the manipulative nudging technique, should be used correctly. What is right and what is wrong often only becomes apparent afterwards. During the swine flu epidemic of 2009, for example, everybody was encouraged to get vaccinated. By now, however, it is known that a certain percentage of those vaccinated were afflicted by an unusual disease, narcolepsy. Fortunately, not more people chose to get vaccinated!

Similarly, the attempt to encourage health-insurance policyholders to exercise more by means of fitness wristbands may reduce the number of cardiovascular diseases – but in the end there could be more hip operations instead. In a complex system such as society, an improvement in one area almost inevitably leads to a deterioration in another. Large-scale interventions can thus easily turn out to be grave mistakes.

Regardless of this, criminals, terrorists, or extremists would sooner or later bring the digital magic wand under their control – perhaps even without us noticing. After all, almost all companies and institutions have already been hacked, even the Pentagon, the White House, and the German Bundestag. A further problem arises when sufficient transparency and democratic control are lacking: the erosion of the system from within. Search algorithms and recommendation systems can be influenced. Companies can bid on certain word combinations to obtain preferential placement in results lists. Governments probably have access to their own steering parameters. In elections, it would therefore in principle be possible to secure the votes of undecided voters through nudging – a manipulation that is hard to prove. Whoever controls this technology can thus win elections, nudging themselves to power, so to speak.

[Figure: Digital growth, © Dirk Helbing]

Within a few years, the rapid networking of the world has increased the complexity of our society explosively. While this now makes it possible to take better decisions on the basis of Big Data, the time-honored principle of top-down control works less and less. Distributed control approaches are becoming ever more important. Only by means of collective intelligence can adequate solutions to problems still be found.

This problem is aggravated by the fact that a single search engine has a market share of around 90 percent in Europe. It could significantly influence the public, effectively remote-controlling Europe from Silicon Valley. Even though the ruling of the European Court of Justice of October 6, 2015 now restricts the unbridled export of European data, the underlying problem is by no means solved, but merely shifted geographically.

What undesirable side effects are to be expected? For manipulation to go unnoticed, a so-called resonance effect is needed, that is, suggestions sufficiently compatible with the respective individual. In this way, local trends are gradually reinforced by repetition, up to the "echo chamber effect": in the end, one gets only one's own opinions reflected back. This causes social polarization – the emergence of separate groups that no longer understand each other and increasingly come into conflict with one another. In this way, personalized information can unintentionally destroy social cohesion. This can currently be observed in American politics, for example, where Democrats and Republicans are drifting ever further apart, so that political compromises have become almost impossible. The result is a fragmentation, perhaps even a disintegration, of society.
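The resonance mechanism can be made concrete with a toy bounded-confidence opinion model (in the spirit of Deffuant-style models from the opinion-dynamics literature; all parameters here are illustrative assumptions): agents only adjust toward opinions sufficiently close to their own, so an initially mixed population splinters into camps that no longer interact.

```python
import random

# Toy bounded-confidence opinion model: pairs of agents average their
# opinions only when these already lie within a "confidence" interval,
# mimicking the resonance effect of personalized information.

def simulate(n=100, confidence=0.2, rate=0.5, steps=20000, seed=42):
    rng = random.Random(seed)
    opinions = [rng.random() for _ in range(n)]   # opinions in [0, 1)
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if abs(opinions[i] - opinions[j]) < confidence:  # "resonance": only similar views interact
            shift = rate * (opinions[j] - opinions[i])
            opinions[i] += shift
            opinions[j] -= shift
    return opinions

def clusters(opinions, gap=0.05):
    """Count opinion clusters separated by more than `gap`."""
    xs = sorted(opinions)
    return 1 + sum(1 for a, b in zip(xs, xs[1:]) if b - a > gap)

ops = simulate()
# With a narrow confidence bound, the population typically ends up in
# several separate camps; with a wide bound (confidence=1.0) it reaches
# a single consensus.
print(clusters(ops))
```

The qualitative behavior matches the argument in the text: the narrower the interval of opinions an individual is exposed to, the more reliably the population fragments instead of converging.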

Due to the resonance effect, an opinion shift at the level of society as a whole can only be produced slowly and gradually. The effects set in with a time lag, but then they can no longer be easily undone. Resentment against minorities or migrants, for example, can easily get out of control; too much national sentiment can cause discrimination, extremism, and conflict. Even more serious is the fact that manipulative methods change the way we make our decisions: they override the otherwise relevant cultural and social cues, at least temporarily. In sum, the large-scale use of manipulative methods could thus cause serious social damage, including the coarsening of behavior in the digital world, which is already widespread. Who should bear responsibility for this?

Legal problems

[Photo: Ernst Hafen, © Katarzyna Nowak]

This raises legal questions that should not be neglected, given the billions in lawsuits against tobacco companies, banks, IT companies, and car manufacturers in recent years. But which laws are actually affected? First of all, it is clear that manipulative technologies restrict the freedom of choice. If the remote control of our behavior worked perfectly, we would essentially be digital slaves, since we would only be executing decisions made by others. Of course, manipulative technologies so far only work in part. Nevertheless, our freedom is disappearing slowly but surely – slowly enough that citizens' resistance has so far been weak.

The insights of the great Enlightenment thinker Immanuel Kant, however, appear highly topical. Among other things, he noted that a state that tries to determine the happiness of its citizens is a despot. The right to individual self-development can only be exercised by those who have control over their lives. This, however, presupposes informational self-determination. At stake here are nothing less than our most important constitutionally guaranteed rights. Without their observance, a democracy cannot function. Restricting them undermines our constitution, our society, and the state.

Since manipulative technologies such as big nudging work similarly to personalized advertising, further laws are affected. Advertising must be labeled as such and must not be misleading. Nor are all psychological tricks, such as subliminal stimuli, permitted. For instance, it is prohibited to show a soft drink in a movie for a tenth of a second, because the advertisement is then not consciously perceptible while it may still have a subconscious effect. Moreover, the collection and exploitation of personal data, as is common today, is incompatible with the data protection laws in force in European countries.

[Photo: Michael Hagner]

Finally, the lawfulness of personalized prices is also questionable, since they could amount to an abuse of insider information. Added to this are possible violations of the principle of equal treatment, the prohibition of discrimination, and competition law, since free market access and price transparency are no longer guaranteed. The situation is comparable to companies that sell their products more cheaply in other countries but try to prevent purchases via those countries. Such cases have so far resulted in substantial fines.

Personalized advertising and prices are not comparable to classical advertising or discount coupons, since the latter are unspecific and do not intrude nearly as deeply into our private sphere in order to exploit our psychological weaknesses and switch off our critical judgment. Moreover, in the academic world, even harmless decision experiments count as experiments on human subjects and require the approval of an ethics committee that is accountable to the public. The persons concerned must give their informed consent in each individual case. A single click to confirm blanket agreement with a 100-page set of terms of use, as is the case with many information platforms today, is absolutely insufficient by comparison.

Nevertheless, manipulative technologies such as nudging experiment with millions of people without informing them, without transparency, and without ethical constraints. Even large social networks such as Facebook and online dating platforms such as OK Cupid have already publicly admitted to such social experiments. If we want to avoid irresponsible research on people and society (think, for instance, of the involvement of psychologists in the recent torture scandals), then we urgently need high standards, in particular scientific quality criteria and an ethical code analogous to the Hippocratic Oath.

Have our thinking, our freedom, our democracy been hacked?

[Photo: Yvonne Hofstetter, © Heimo Aga]

Suppose there were a superintelligent machine with quasi god-like knowledge and superhuman capabilities – would we then reverently follow its instructions? That seems entirely possible. But if we did, then the fears voiced by Elon Musk, Bill Gates, Steve Wozniak, Stephen Hawking, and others would have come true: computers would have taken control of the world. We must be clear that a superintelligence, too, can err, lie, pursue selfish interests, or itself be manipulated. Above all, it could not match the distributed, collective intelligence of the population.

Replacing the thinking of all citizens with a computer cluster would be absurd, because it would dramatically worsen the quality of the attainable solutions. It is already clear that, despite the flood of data and the use of personalized information systems, the world's problems have not diminished – on the contrary! World peace is fragile. Long-term climate change could lead to the greatest loss of species since the extinction of the dinosaurs. Seven years after the financial crisis began, its effects on the economy and society are still far from being overcome. Cybercrime causes annual damage of three trillion dollars. States and terrorists are arming for cyberwar.

[Photo: Andrej Zwitter, © Stefanie Starz]

In a rapidly changing world, even a superintelligence can never make perfect decisions: data volumes grow faster than they can be processed, and transmission rates are limited. In this way, local knowledge and facts are disregarded even though they matter for reaching good solutions. Distributed, local control methods are often superior to centralized approaches, especially in complex systems whose behavior is highly variable, hardly predictable, and cannot be optimized in real time. This already holds for traffic light control in cities, and even more for the social and economic systems of our highly networked, globalized world.
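That distributed coordination can reach a globally correct result without any central controller can be illustrated with a classic example: gossip averaging, where networked nodes converge on the global mean purely through local pairwise exchanges. The sensor values and parameters below are illustrative assumptions.

```python
import random

# Gossip averaging: nodes repeatedly pick a random partner and set both
# values to their pairwise average. No node ever sees the whole network,
# yet every node converges to the global mean.

def gossip_average(values, rounds=5000, seed=1):
    vals = list(values)
    rng = random.Random(seed)
    n = len(vals)
    for _ in range(rounds):
        i, j = rng.randrange(n), rng.randrange(n)
        vals[i] = vals[j] = (vals[i] + vals[j]) / 2  # purely local update
    return vals

sensors = [10.0, 20.0, 30.0, 40.0]
result = gossip_average(sensors)
print(round(result[0], 3))  # prints 25.0: every node reaches the global mean
```

The sum of all values is conserved by each local exchange while their spread shrinks, so the scheme is robust: nodes can join, leave, or fail, and no single point of control is needed – the property the text attributes to distributed control in complex systems.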

Furthermore, there is the danger that the manipulation of decisions by powerful algorithms undermines the very foundation of "collective intelligence", which can adapt flexibly to the challenges of our complex world. For collective intelligence to work, individuals must search for information and make decisions independently. If our judgments and decisions are predetermined by algorithms, however, this literally leads to a dumbing-down of the people: beings endowed with reason are degraded to mere recipients of commands who react reflexively to stimuli. This reduces creativity, because people think less "outside the box".

[Photo: Jeroen van den Hoven, © Yvonne Compier]

In other words: personalized information builds a "filter bubble" around us, a kind of digital prison for our thinking. Taken to its conclusion, a centralized, technocratic control of behavior and society by a superintelligent information system would amount to a new form of dictatorship. The top-down controlled society, which comes under the banner of "soft paternalism", is therefore in principle nothing other than a totalitarian regime with a rosy coat of paint.

The trend runs from programming computers to programming people

Indeed, "big nudging" aims at bringing many individual actions into line and at manipulating perspectives and decisions. This places it close to the deliberate disenfranchisement of citizens through state-planned behavioral control. We fear that the consequences could be fatal in the long term, especially in view of the partly culture-destroying effects mentioned above.

A better digital society is possible

[Figure: At the digital crossroads, © Dirk Helbing]

We stand at a crossroads: if ever more powerful algorithms restricted our self-determination and were controlled by a few decision-makers, we would fall back into a kind of feudalism 2.0, as important societal achievements would be lost. But we now have the chance, with the right choices, to take the path toward a democracy 2.0 from which we will all benefit.

Despite fierce global competition, democracies are well advised not to throw overboard the achievements built up over centuries. Compared with other political regimes, Western democracies have the advantage of having already learned to deal with pluralism and diversity. Now they only have to learn to profit from them more.

In the future, those countries will lead that achieve a good balance between business, government, and citizens. This requires networked thinking and the establishment of an information, innovation, product, and service "ecosystem". For this, it is important not only to create opportunities for participation, but also to promote diversity. After all, there is no method for determining the best objective function: should one optimize gross national product or sustainability? Power or peace? Life expectancy or satisfaction? Often one knows only afterwards what would have been advantageous. By allowing different goals, a pluralistic society is better able to cope with diverse challenges.

Centralized top-down control is a solution of the past, suitable only for systems of low complexity. Federal systems and majority decisions are therefore the solutions of the present. With economic and cultural development, however, societal complexity keeps increasing. The solution of the future is collective intelligence: citizen science, crowdsourcing, and online discussion platforms are therefore eminently important new approaches for tapping more knowledge, ideas, and resources.

[Photo: Gerd Gigerenzer, © Dietmar Gust]

Collective intelligence requires a high degree of diversity. Today's personalized information systems, however, reduce diversity in favor of reinforcing trends. Sociodiversity is just as important as biodiversity: it is the basis not only of collective intelligence and innovation, but also of societal resilience – the ability to cope with unexpected shocks. Reducing sociodiversity often also reduces the functionality and performance of the economy and society. This is why totalitarian regimes often come into conflict with their neighbors. Typical long-term consequences are political instability and war, as they have occurred again and again throughout history. Plurality and participation should therefore not be seen primarily as concessions to citizens, but as essential functional prerequisites of high-performing, complex, modern societies.

In summary: we stand at a crossroads. Big Data, artificial intelligence, cybernetics, and behavioral economics will shape our society – for better or for worse. If such widely used technologies are not compatible with our society's core values, they will sooner or later cause large-scale damage. They could lead to an automation of society with totalitarian features. In the worst case, a centralized artificial intelligence would control what we know, what we think, and how we act. Now is therefore the historic moment to take the right path and to seize the opportunities it offers. We therefore demand adherence to the following basic principles:

  1. to increasingly decentralize the function of information systems;
  2. to support informational self-determination and participation;
  3. to improve transparency in order to achieve greater trustworthiness;
  4. to reduce the distortion and pollution of information;
  5. to enable user-controlled information filters;
  6. to promote social and economic diversity;
  7. to improve the ability of technical systems to cooperate;
  8. to create digital assistants and coordination tools;
  9. to support collective intelligence; and
  10. to promote the maturity of citizens in the digital world – a "digital enlightenment".

With this agenda, we would all benefit from the fruits of the digital revolution: the economy, the state, and citizens alike. What are we waiting for?

Read more: A strategy for the digital age – the action plan


Humanity faces the greatest upheaval since the Industrial Revolution

To create jobs and find new forms of organization, society must rely on collective intelligence and self-organization, says the sociophysicist Dirk Helbing

Dirk Helbing does not want to frighten us. But no matter how factually and soberly the ETH Zürich complexity researcher presents his case, his words chill to the bone.

"No country in the world is prepared for what is coming," he says, meaning the digital revolution that lies ahead of us. It is changing our society at breathtaking speed. "Nothing will stay the way it was. In most European countries, around 50 percent of today's jobs will be lost."

The upheaval, however, also offers the opportunity to redesign our society and economy – "a chance that comes around only once every 100 years," says Helbing. If we want to profit maximally from this enormous economic and societal potential, we urgently need a kind of Apollo program for information and communication systems, in order to build the institutions and infrastructures for the digital society of the future.

Time is pressing: we may have only 20 years left. "That is very little, considering that planning a new road often takes 30 years or more."

Everyone knows the harbingers of the digital revolution: we shop online, use payment systems such as Bitcoin, communicate via Facebook and WhatsApp, watch films via Netflix, take taxis with Uber, deliver parcels by drone, build houses with 3D printers, marvel at global surveillance, and will soon be chauffeured by autonomous vehicles and cared for by robots. Ten years ago, we had at best a vague inkling of these things.

Computers are better at chess, arithmetic, and strategy games

Yet the development continues at a breakneck pace, because it is based on computer processors whose performance doubles roughly every 18 months. That means computing power grows exponentially. What this implies is illustrated by the story of the rice grains on the squares of a chessboard: place one grain on square one, two on square two, four on square three, eight on square four, and so on; square 64 then holds not just a few thousand grains but exactly 9,223,372,036,854,775,808 of them – roughly six times the volume of Lake Constance. Already today, computers beat the smartest humans at arithmetic, at strategy games such as chess, at finding and exploiting knowledge, and in quiz shows. In about ten years, computers will reach the performance of the human brain.
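The chessboard arithmetic is easy to verify: with one doubling per square, square k holds 2^(k-1) grains, so square 64 alone carries 2^63.

```python
# Doubling per square, as in the chessboard story: square k holds 2**(k-1) grains.
grains_on_square_64 = 2 ** 63
total_grains = sum(2 ** k for k in range(64))  # all 64 squares together: 2**64 - 1

print(grains_on_square_64)  # prints 9223372036854775808, the figure quoted above
print(total_grains)         # prints 18446744073709551615
```

The same doubling law underlies the processor claim: a doubling every 18 months means a factor of about 2^(120/18) ≈ 100 over ten years, which is why exponential growth so reliably outruns intuition.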

The amount of data is also exploding: within a single year, we now produce as much data as in the entire history of humankind before. Internet searches and online purchases, tweets and Facebook comments, the use of Google Maps on smartphones, and all the sensors on Earth and in space generate enormous quantities of data, so-called Big Data.

In addition, more and more objects, from mobile phones to refrigerators to toothbrushes, are being equipped with sensors, spanning an Internet of Things. "Already today there are more objects connected to the Internet than people," says Helbing. "In ten years, 150 billion objects will be linked in the Internet of Things." We are thus sliding into an ever more networked world with ever more interdependencies.

"The complexity of society is growing even faster than the computing power of supercomputers," says Helbing. This means that even with the fastest computers, even the smartest and most responsible government will no longer manage to grasp the rapidly changing rules and patterns of our digital world quickly enough and to master its complexity. "The idea that a global system of this complexity could still be controlled centrally is simply wrong," says Helbing. "The size of the challenge exceeds the possibilities of classical approaches."

Symptoms of this complexity include financial and economic crises, to which, Helbing says, the European Central Bank has so far found no convincing answer. Further side effects of the digital revolution are cybercrime, cyberwar, and the dark sides of Big Data: the gigantic amounts of information and personal data amassed by companies such as Google, Apple, Amazon, Facebook, and Twitter, and by the secret services, can no longer be controlled.

The impact of the coming upheaval is roughly comparable to the Industrial Revolution. Until about 1850, around 70 percent of the population worked in agriculture. Today, in industrialized countries, it is only 3 to 5 percent. New industrial jobs partly compensated for the unemployment in agriculture, but the transition from an agrarian to an industrial society took time and caused considerable turbulence.

Prominent studies now assume that robotics will cut the number of today's industrial jobs in half. The service sector, too, is expected to shrink by half as intelligent computers take over more and more services. The agricultural, industrial, and service sectors would then provide only about 50 percent of today's jobs. The 30 million unemployed in Europe, and youth unemployment of over 50 percent in some countries, speak a clear language. "In Switzerland, we will feel this development last," says Helbing. "But by then it will be too late to prepare."

It should worry us that the digital sector so far provides only about 15 percent of jobs. "Digital approaches are usually much more efficient than conventional solutions," says Helbing. When Kodak went bankrupt, for example, companies like Instagram emerged, but they employ only about one thousandth as many people. "The new companies will probably not be able to create the equivalent of the jobs that disappear."

How can we respond to these massive upheavals and tame the unleashed digital world? According to Helbing, there is a solution: instead of fighting helplessly, like Don Quixote, against the windmills of digitization and complexity, we should, as in Asian martial arts, use the opponent's forces to our own advantage. Resisting complexity is hopeless; instead, we must cooperate with it.

The first leverage technique of this martial art is, according to Helbing, a kind of intelligence upgrade. Helbing stretches out his long legs and leans back in the black armchair of his office. His sentences are razor-sharp; not a superfluous word, not an "uhm" or "hmm" to be heard. Then he tells of a competition run by the company Netflix that brought something astonishing to light.

The sum of many ideas beats the smartest individual

Netflix streams videos and television over the Internet. And just as Amazon analyzes our preferences for books or DVDs to generate purchase recommendations, Netflix wanted to know its customers' tastes in film and TV. Algorithms dug through large amounts of data, but the results were rather modest. "The problem was too complex," says Helbing. "So Netflix said: it is worth a million dollars to us if someone manages to improve our algorithms by ten percent." Hundreds of teams took on the Netflix challenge. Yet even two years later, not a single one had cleared the 10-percent hurdle.

Finally, the best team had the idea of joining forces with the next-best teams and averaging their predictions of user tastes. One would expect that adding something worse to the best would make the result worse. "The opposite was the case," says Helbing. The averaged result was better and even cleared the 10-percent hurdle. "That is truly breathtaking: diversity beats the best solution." In other words: the sum of many people's ideas is better than the smartest individual, even one using supercomputers.
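The effect behind the Netflix result, that averaging diverse predictors can beat the best single one, can be illustrated with a toy example (the numbers below are invented for illustration, not from the actual competition):

```python
# Three imperfect predictors of a true rating, each off in a different
# direction; averaging them cancels part of their independent errors.
true_rating = 4.0
predictions = [4.6, 3.5, 3.6]

best_single_error = min(abs(p - true_rating) for p in predictions)
ensemble = sum(predictions) / len(predictions)
ensemble_error = abs(ensemble - true_rating)

print(round(best_single_error, 2))  # 0.4 -- the best individual predictor
print(round(ensemble_error, 2))     # 0.1 -- the average is closer to the truth
```

The averaging helps only when the predictors err in different directions; that is why the quote stresses diversity, not sheer numbers.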

It is therefore a "collective intelligence" that Helbing envisions as the answer to the networked world. This contrasts with today, where some heads of government or business leaders still decide from the top down. "We need to take as many good ideas on board as possible, so that we can make smarter decisions," says Helbing. The new information and communication technologies are practically made to enable this collective intelligence.

The second trick of the Asian martial art is the phenomenon of self-organization. Helbing pulls a stack of manuscripts from a bag and lays them on the table. He is currently writing a book on the digital society, which he gave the SonntagsZeitung exclusive advance access to. How self-organization could cut the Gordian knot of complexity can be explained with an example: urban traffic.

During rush hour, the traffic control center often reaches its limits. Supercomputers are used, but even with them it is impossible to determine in real time how best to switch the traffic lights. Even for a medium-sized city, the computational effort grows beyond all bounds. The traffic control center, which figuratively corresponds to a government or a business leader, simply cannot control the traffic optimally, however hard it tries.

Decentralized instead of centralized switching of traffic lights

But there is an alternative: decentralized control of the traffic lights. Each intersection uses sensors to measure the in- and outflows of vehicles, with the goal of minimizing their travel time. "Mathematically, that is not very difficult," says Helbing. "And at low traffic volumes it is much better than control by the central authority."

Although there is no coordination between the intersections, an astonishing coordination of the vehicle flows emerges. It is as if the traffic lights were magically controlled by an invisible hand. Of course, no magic is at work here; in science, the phenomenon is known as self-organization. According to Helbing, it is also the reason why the capitalist economy works better than communist central planning: capitalism is based on the simple principle of the rationally and selfishly deciding Homo oeconomicus, who optimizes his profit instead of his travel time. As if guided by an invisible hand, this usually ensures a flourishing economy.

But this method is not perfect in every situation. When traffic volume increases, the self-organized coordination suddenly breaks down. The growing vehicle queues reach back to the upstream intersections, and suddenly a mega-jam forms. There is thus a point where the invisible hand fails and coordination ends, just as happened in the financial crisis of 2008. Here the traffic control center is needed again after all: it can keep some lights green longer than usual to delay the collapse of traffic. Correspondingly, a government can pump billions into rescuing the banks to buy time. But overall this is still not satisfactory.

Fortunately, there is a further approach that gets the best out of every situation and counteracts a traffic collapse, and perhaps also a financial crash, most effectively. Here too, the intersections try on their own to minimize travel times, but not always and not at any price. As soon as a queue reaches almost back to the next intersection and a traffic collapse looms, travel-time minimization is interrupted and the queue in question is cleared first. "We call this guided self-organization," says Helbing. In this way, the "magical" coordination of traffic flow works as well as possible at any traffic volume. For this to succeed, essentially two things are needed: first, real-time information from sensors, exchanged between neighboring intersections; and second, suitable rules (of the game) for reacting to this information. The desired functionality, the flow of traffic, then emerges by itself.
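The decision rule just described, minimize travel time locally but clear a queue first whenever it threatens to spill back, can be sketched in a few lines. The function name, inputs, and the 90-percent threshold below are illustrative assumptions, not Helbing's actual controller:

```python
# Sketch of the per-intersection rule in guided self-organization:
# normally serve the longest queue (local travel-time minimization),
# but override this as soon as any queue nears the capacity of its
# road segment and threatens to block the upstream intersection.
def choose_green(queues: dict, capacity: int) -> str:
    """Return the approach that should receive the green light."""
    # Guided part: clearing a nearly full queue takes absolute priority.
    critical = {a: q for a, q in queues.items() if q >= 0.9 * capacity}
    if critical:
        return max(critical, key=critical.get)
    # Self-organized part: otherwise simply serve the longest queue.
    return max(queues, key=queues.get)

print(choose_green({"north": 12, "east": 7}, capacity=40))   # north: longest queue
print(choose_green({"north": 12, "east": 37}, capacity=40))  # east: spill-back looms
```

The two branches correspond to the two ingredients named in the text: local optimization by default, plus a rule that reacts to real-time queue information from the neighboring intersection before a jam can cascade.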

In Dresden, the first traffic lights operating on this principle already exist. "The Internet of Things and the real-time information linked to it now make it possible to apply the principle of self-organization beneficially in the economy and society as well, despite all the complexity," says Helbing. "Because from a mathematical point of view, the problems are comparable."

The Allmend (commons) is a Swiss example of self-organization

One example can be found in Switzerland. Here the American Nobel laureate in economics Elinor Ostrom studied the Allmend, that is, collectively managed fields, pastures, and forests. Ostrom identified eight rules under which the commons works. For instance, there must be a clear boundary between the groups that belong to the commons and those that do not. There must also be cheap and simple mechanisms for resolving conflicts. With such rules, the commons essentially organizes itself, without top-down control.

The idea of self-organization from below is thus not new at all. It also underlies social norms, for example. Anyone who does not bundle their newspaper stack neatly will at least earn a dirty look from the neighbor. That too is a variant of self-organization at the grassroots of society. eBay and Ricardo work the same way: anyone who does not play by the rules gets bad reviews and in future will sell goods only with difficulty or at a poor price. Digital reputation systems can therefore promote quality and responsible behavior. "Self-organization is not an ivory-tower theory," says Helbing. "It has long been a success principle of our society." The challenge now is to extend the principle of guided self-organization to the digital and globalized world.

An example: San Francisco is shaken by severe earthquakes from time to time. To improve the region's resilience in the event of such a natural disaster, programmers at a workshop in San Francisco co-organized by Helbing devised an app. Anyone with this app on their phone can report when they need help, water, baby food, or warm blankets. The information is uploaded, and other people in the neighborhood can see what is needed where. Those who keep bottled water or baby food in the cellar can bring it to the needy a few houses away. In this way, citizens find each other quickly via the app, long before a disaster response team is ready; collapsed bridges or destroyed roads may delay its advance anyway.

Of course, the crisis task force will also profit from the app's information: even before professional helpers are on site, the app shows where the need is greatest. Above all, however, the app skillfully coordinates people's basic willingness to help for the benefit of all, enabling help toward self-help.

What will emerge, according to Helbing, is thus a kind of "participatory society," in which many problems are solved decentrally "from below," according to the respective local needs and resources. As with Wikipedia, citizen participation is of central importance. One-size-fits-all solutions "from above" are often expensive and inefficient, arrive late, or fail to meet people's wishes and needs. In the digital society, problems need only be decided at higher levels when they cannot be solved efficiently at lower ones.

The first examples already exist: in the sharing economy, conventional and digital goods are no longer bought and owned by everyone, but shared and used jointly. Everyone can thereby achieve a higher quality of life, and it is also more environmentally sustainable. Car sharing and Airbnb are perhaps the best-known examples. Alongside them, the maker movement is emerging, a kind of tinkerers' movement of little Gyro Gearlooses who share their ideas with others and, for example, produce many things themselves with 3D printers. In this way, a high level of competence for solving problems and meeting local needs quickly develops.

Helbing even regards this constructive cooperation as a new kind of economy: the Homo oeconomicus, who thinks only of himself and neither of others nor of the environmental consequences, is replaced by the network-minded Homo socialis, who realizes that everyone is better off when each person shows a little consideration for others and the environment.

Meanwhile, Helbing's team is working on the concept for a completely new information system, the third fighting technique, if you will. It is called "Nervousnet" and is meant to overcome the problems of today's Internet of Things and of Big Data. As is well known, our phones collect all kinds of data about us and our behavior. But these data are owned by the Internet giants and the providers of free apps, who often use them for manipulative purposes. To this day, we have no access to our data and no influence on their use, which makes us the plaything of unknown forces. Our constitutional right to informational self-determination has de facto been suspended. Nervousnet is meant to change that.

A digital nervous system in which citizens control their data

Helbing wants to run the Internet of Things as a citizens' network, that is, to put it into the hands of the citizens and develop it into a kind of "digital nervous system." Thanks to countless sensors measuring movement, temperature, noise, or whatever else, this Nervousnet takes the pulse of our society. The sensors are controlled with an app that can be downloaded free of charge.

But in contrast to the data collected by Google, Apple, Facebook, and Twitter, for example, data sovereignty is to lie with the citizens. We ourselves are to determine which information stays with us and which we share with whom. Control functions and a personal data mailbox are to give us maximum control. The result should not be an Orwellian surveillance nightmare, as threatens today. Rather, Helbing wants to build a trustworthy network in which citizens participate and which they themselves decisively shape. Nervousnet, in a sense, transfers Switzerland's grassroots-democratic principle to the Internet of Things.

Nervousnet will open up many possibilities. Everything becomes possible, from finding a parking space to a weather measurement network to an earthquake warning system. Anyone who wants to can measure how their plants are doing, or build their own entertainment games with friends. Users can develop new functions themselves and thus help shape the digital world of the future.

But Nervousnet does not only satisfy the play instinct of tinkerers. It is meant to become a job machine that enables the founding of many companies. It works like this: the collected and released data form a kind of Wikipedia for real-time data; everyone contributes, and everyone can use the data. In addition, the programs that run the sensors are open source. That means anyone can read them and contribute to their optimization and further development. "The lowest usage level would be free of charge, and many programs freely available," says Helbing. But commercial premium services will be built on top. "In this way, we want to enable citizens to become self-employed, to found their own companies, and to offer new services and products themselves." A powerful information ecosystem could thus emerge in no time. Sensor kits for Nervousnet, and the first apps to run them, are supposed to go on sale within a few months.

In this Nervousnet, built as a citizens' network, the complexity researcher also sees a great opportunity for Europe. "Europe has so far been in a digital Sleeping Beauty slumber," says Helbing. Companies like Google invest six billion euros a year in research and development alone, and the whole of Silicon Valley a multiple of that. Europe simply cannot compete with that, unless it chooses an entirely different strategy: instead of betting on walled-off knowledge like Google, Facebook, and the other Internet giants, knowledge should be open and shared, as in the citizens' network. One person's knowledge can then serve as input for another's. In this way, collective intelligence accumulates. "That would unleash an incredible growth dynamic," says Helbing. The collective intelligence of the many could outdo the proprietary knowledge of the few Internet giants from Asia and America.

In any case, the potential is enormous. The consultancy McKinsey estimates the economic value of the openly accessible part of Big Data, known as Open Data, at $3 to 5 trillion per year.

We must build a participatory digital society

But the positive aspects of the digital revolution will not come about by themselves. Just as we put billions into public roads for the industrial society, and billions into public schools, universities, and libraries for the service society, the digital age also requires investments in public infrastructures. These range from an independent search engine and privacy-protecting solutions to a modern job platform and Nervousnet as an extension of the Internet.

According to Helbing, it is now the public's turn to think about the digital society of the future. It is high time to launch a debate about it, even if the development should ultimately unfold somewhat more slowly than some experts fear. "Today we stand at a crossroads," says Helbing. We can either slide into a top-down surveillance society, or we can build a participatory digital society and use the possibilities of collective intelligence and self-organization. "If we succeed," says Helbing, "we will step into a brighter, better age, in which we can solve some of the problems that still plague our society today."

Further reading: – GDI report "Die Zukunft der vernetzten Gesellschaft": – Blog of Dirk Helbing: – Nervousnet at ETH:

FuturICT Blog: SOCIETAL, ECONOMIC, ETHICAL AND LEGAL CHALLENGES OF THE DIGITAL REVOLUTION: From Big Data to Deep Learning, Artificial Intelligence, and Manipulative Technologies.

Wednesday, 15 April 2015

[1] by Dirk Helbing (ETH Zurich)

In the wake of the on-going digital revolution, we will see a dramatic transformation of our economy and most of our societal institutions. While the benefits of this transformation can be massive, there are also tremendous risks to our society. After the automation of many production processes and the creation of self-driving vehicles, the automation of society is next. This is moving us to a tipping point and to a crossroads: we must decide between a society in which actions are determined top-down and then implemented by coercion or manipulative technologies (such as personalized ads and nudging), or a society in which decisions are taken in a free and participatory way and mutually coordinated. Modern information and communication systems (ICT) enable both, but the latter has economic and strategic benefits. The foundations of human dignity, autonomous decision-making, and democracy are shaking, but I believe that they need to be vigorously defended, as they are not only core principles of livable societies, but also the basis of greater efficiency and success.

"Those who surrender freedom for security [2] will not have, nor do they deserve, either one."

Benjamin Franklin

Overview of Some New Digital Technology Trends

Big Data

In a globalized world, companies and countries are exposed to harsh competition. This produces considerable pressure to create more efficient systems – a tendency which is reinforced by high debt levels.

Big Data seems to be a suitable answer to this. Mining Big Data offers the potential to create new ways to optimize processes, identify interdependencies and make informed decisions. There’s no doubt that Big Data creates new business opportunities, not just because of its application in marketing, but also because information itself is becoming monetized.

Technology gurus preach that Big Data is becoming the oil of the 21st century, a new commodity that can be tapped for profit. When the virtual currency Bitcoin temporarily became more valuable than gold, it could even be said literally that data can be mined into money, in a way that would previously have been considered a fairy tale. Although many Big Data sets are proprietary, the consultancy company McKinsey recently estimated that the additional value of Open Data alone amounts to $3-5 trillion per year.[3] If the worth of this publicly available information were evenly divided among the world's population, every person on Earth would receive an additional $700 per year. We now see Open Government initiatives all over the world, aiming to improve services to citizens while having to cut costs. Even the G8 is pushing for Open Data, as it is crucial for mobilizing the full societal and economic capacity.[4]
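The per-capita figure can be checked with back-of-the-envelope arithmetic (assuming the upper end of the McKinsey estimate and a world population of roughly 7.2 billion):

```python
# Rough check of the "$700 per person" claim from the upper McKinsey
# estimate of Open Data's value, divided by the world population.
open_data_value = 5e12        # $5 trillion per year (upper estimate)
world_population = 7.2e9      # roughly 7.2 billion people

per_person = open_data_value / world_population
print(round(per_person))  # 694 -- close to the $700 cited in the text
```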

The potential of Big Data spans every area of social activity, from the processing of human language and the management of financial assets, to the harnessing of information enabling large cities to manage the balance between energy consumption and production. Furthermore, Big Data holds the promise to help protect our environment, to detect and reduce risks, and to discover opportunities that would otherwise have been missed. In the area of medicine, Big Data could make it possible to tailor medications to patients, thereby increasing their effectiveness and reducing their side effects. Big Data could also accelerate the research and development of new drugs and focus resources on the areas of greatest need.

Big Data applications are spreading like wildfire. They facilitate personalized offers, services and products. One of the greatest successes of Big Data is automatic speech recognition and processing. Apple's Siri understands you when you ask for a restaurant, and Google Maps can lead you there. Google Translate interprets foreign languages by comparing them with a huge collection of translated texts. IBM's Watson computer even understands human language. It can not only beat experienced quiz show players, but also take care of customer hotlines and patients – perhaps better than humans. IBM has just decided to invest $1 billion to further develop and commercialize the system.

Of course, Big Data plays an important role in the financial sector. Approximately seventy percent of all financial market transactions are now made by automated trading algorithms. In just one day, the entire money supply of the world is traded. So much money also attracts organized crime. Therefore, financial transactions are scanned by Big Data algorithms for abnormalities, to detect suspicious activities. The company BlackRock uses similar software, called "Aladdin", to speculate successfully with funds amounting approximately to the gross domestic product (GDP) of Europe.
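The kind of scan described, flagging abnormal transactions among millions of ordinary ones, can be illustrated with a minimal statistical outlier test (a toy z-score sketch with invented numbers; real fraud-detection systems use far more sophisticated models):

```python
import statistics

# Flag transactions whose amount deviates strongly from the typical
# pattern (a toy z-score outlier test, not an actual fraud model).
def flag_anomalies(amounts, threshold=2.0):
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    return [a for a in amounts if abs(a - mean) / stdev > threshold]

transactions = [102.0, 98.5, 101.2, 99.8, 100.4, 5000.0, 97.9, 103.1]
print(flag_anomalies(transactions))  # [5000.0]
```

Note that the single large outlier inflates both the mean and the standard deviation, which is why the threshold here is modest; production systems typically prefer robust statistics (such as median-based deviations) and behavioral features beyond the raw amount.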

The Big Data approach is markedly different from classical data mining approaches, where datasets have been carefully collected and curated in databases by scientists or other experts. However, each year we now produce as much data as in the entire history of humankind, i.e. in all the years before. This far exceeds human capacities to curate all the data. In just one minute, 700,000 Google queries are made and 500,000 Facebook comments are posted. Besides this, enormous amounts of data are produced by all the traces that human activities now leave on the Internet. This includes shopping and financial data, geo-positioning and mobility data, social contacts, opinions posted in social networks, files stored in Dropbox or some other cloud storage, emails sent or received through free accounts, e-books read (including time spent on each page and sentences marked), Google or Apple Siri queries asked, YouTube or TV movies watched on demand, and games played. Modern game engines and smart home equipment can also sense your activities at home, digital glasses transmit what you see, and genetic data are now being gathered on a massive scale.

Meanwhile, the data sets collected by companies such as eBay, Walmart or Facebook reach the size of petabytes (1 million billion bytes) – one hundred times the information content of the largest library in the world, the U.S. Library of Congress. The mining of Big Data opens up entirely new possibilities for process optimization, the identification of interdependencies, and decision support. However, Big Data also comes with new challenges, which are often characterized by four criteria:

  • volume: the file sizes and number of records are huge,
  • velocity: the data evaluation has often to be done in real-time,
  • variety: the data are often very heterogeneous and unstructured,
  • veracity: the data may be incomplete, unrepresentative, and contain errors.

Therefore, completely new algorithms had to be developed, i.e. new computational methods.


Machine Learning, Deep Learning, and Super-Intelligence

To create value from data, it is crucial to turn raw data into useful information and actionable knowledge; some even aim at producing "wisdom" and "clairvoyance" (predictive capabilities). This process requires powerful computer algorithms. Machine learning algorithms do not only watch out for particular patterns, but find patterns even by themselves. This led Chris Anderson to famously postulate "the end of theory", i.e. the hypothesis that the data deluge makes the scientific method obsolete.[5] If only there were a big enough quantity of data, machine learning could turn it into high-quality knowledge and reach the right conclusions. This hypothesis has become the credo of Big Data analytics, even though this almost religious belief lacks a proper foundation. I am therefore calling here for a proof of concept, by formulating the following test: Can universal machine learning algorithms, when mining huge masses of experimental physics data, discover the laws of nature themselves, without the support of human knowledge and intelligence?

In spite of these issues, deep learning algorithms are celebrating great successes in everyday applications that do not require an understanding of a hidden logic or causal interdependencies.[6] These algorithms are universal learning procedures which, theoretically, could learn any pattern or input-output relation, given enough time and data. Such algorithms are particularly strong in pattern recognition tasks, i.e. reading, listening, watching, and classifying contents.[7] As a consequence, experts believe that about 50% of all current jobs in the industrial and service sectors will be lost in the next 10-20 years. Moreover, abilities comparable to the human brain are expected to be reached within the next 5 to 25 years.[8] This has led to a revival of Artificial Intelligence, now often coming under the label „Cognitive Computing“.

To be competitive with intelligent machines, humans will in future increasingly need "cognitive assistants". These are digital tools such as Google Now. However, as cognitive assistants grow more powerful at an exponentially accelerating pace, they would soon become something like virtual colleagues, then something like digital coaches, and finally our bosses. Robots acting as bosses are already being tested.[9]

Scientists are also working on "biological upgrades" for humans. The first cyborgs, i.e. humans who have been technologically upgraded, already exist. The most well-known of them is Neil Harbisson. At the same time, great progress is being made in producing robots that look and behave increasingly like humans. It must be assumed that many science fiction fantasies shown in cinemas and on TV may soon become reality.[10]

Recently, however, there are increasing concerns about artificial super-intelligences, i.e. machines that would be more intelligent than humans. In fact, computers are now better at calculating, at playing chess and most other strategic games, at driving cars, and they are performing many other specialized tasks increasingly well. Certainly, intelligent multi-purpose machines will soon exist.

Only two or three years ago, most people would have considered it impossible that algorithms, computers, or robots would ever challenge humans as the crown of creation. This has changed.[11] Intelligent machines learn by themselves, and it is now conceivable that robots will build other robots that are smarter than they are. The resulting evolutionary progress is quickly accelerating, and it is therefore just a matter of time until there are machines smarter than us. Perhaps such super-intelligences already exist. In the following, I present quotes from some notable scientists and technology experts who raise concerns and try to alert the public to the problems we are running into:

For example, Elon Musk of Tesla Motors said:[12] 

"I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it's probably that. So we need to be very careful. … I am increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don't do something very foolish. … "

Similar critique comes from Nick Bostrom at Oxford University.[13]

Stephen Hawking, the most famous physicist to date, recently said:[14] 

"Humans who are limited by slow biological evolution couldn't compete and would be superseded. … The development of full artificial intelligence could spell the end of the human race. … It would take off on its own, and re-design itself at an ever increasing rate."

Furthermore, Bill Gates of Microsoft was quoted:[15] 

"I am in the camp that is concerned about super intelligence. … I agree with Elon Musk and some others on this and don't understand why some people are not concerned."

Steve Wozniak, co-founder of Apple, formulated his worries as follows:[16]

"Computers are going to take over from humans, no question … Like people including Stephen Hawking and Elon Musk have predicted, I agree that the future is scary and very bad for people … If we build these devices to take care of everything for us, eventually they'll think faster than us and they'll get rid of the slow humans to run companies more efficiently … Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on? I don't know …"

Personally, I think more positively about artificial intelligence, but I believe that we should engage in distributed collective intelligence rather than creating a few extremely powerful super-intelligences we may not be able to control.[17] It seems that various big IT companies in Silicon Valley are already engaged in building super-intelligent machines. It was also recently reported that Baidu, the Chinese search engine company, wanted to build a "China Brain Project" and was looking for significant financial contributions from the military.[18] So, to be competitive, do we need to sacrifice our privacy for a society-spanning Big Data and Deep Learning project to predict the future of the world? As will become clear later on, I don't think so: Big Data approaches that learn facts from the past are usually bad at predicting fundamental shifts, which occur at societal tipping points, yet these are what we mainly need to care about. The combination of explanatory models with little (but the right kind of) data is often superior.[19] It can deliver a better description of macro-level societal and economic change, as I will show below, and it is the macro-level effects that really matter. Additionally, one should invest in tools that reveal mechanisms for the management and design of better systems. Such innovative solutions, too, cannot be found by mining data of the past and learning the patterns in them.


Persuasive Technologies and Nudging to Manipulate Individual Decisions

Personal data of all kinds are now being collected by many companies, most of which are not well known to the public. While we surf the Internet, every single click is recorded by cookies, super-cookies and other processes, mostly without our consent. These data are widely traded, even though this often violates applicable laws. By now, there are about 3,000 to 5,000 personal records on more or less every individual in the industrialized world. These data make it possible to map the way each person thinks and feels. Our clicks not only produce a unique fingerprint identifying us (perhaps even when surfing anonymously); they also reveal the political party we are likely to vote for (even though the anonymous vote is an important basis of democracies). Our Google searches furthermore reveal the actions we are likely to take next (including likely financial trades[20]). There are even companies such as Recorded Future and Palantir that try to predict future individual behavior based on the data available about each of us. Such predictions seem to work pretty well, in more than 90% of all cases. It is often believed that this would eventually make the future course of our society predictable and controllable.

In the past, the attitude was "nobody is perfect, people make mistakes". Now, with the power of modern information technologies, some keen strategists hope that our society could be turned into a perfect clockwork. The feasibility of this approach is already being tested. Personalized advertisement is in fact trying to manipulate people's choices, based on detailed knowledge of a person, including how he or she thinks, feels, and responds to certain kinds of situations. These approaches are becoming increasingly effective, making use of biases in human decision-making and also of subliminal messages. Such techniques address people's subconscious, so that they would not necessarily be aware of the reasons causing their actions, similar to acting under hypnosis.

Manipulating people's choices is also increasingly being discussed as a policy tool, called "nudging" or "soft paternalism".[21] Here, people's decisions and actions are manipulated by the state through digital devices to reach certain outcomes, e.g. environmentally friendly or healthier behavior, or even certain election results. Related experiments are already being carried out.[22]

An Attempt at a Technology Assessment

In the following, I will discuss some of the social, economic, legal, ethical and other implications of the above digital technologies and their use. Like all other technologies, the use of Big Data, Artificial Intelligence, and Nudging can produce potentially harmful side effects, but in this case the impact on our economy and society may be massive. To benefit from the opportunities of digital technologies and minimize their risks, it will be necessary to combine certain technological solutions with social norms and legal regulations. In the following, I attempt to give a number of initial hints, but the discussion below can certainly not give a full account of all issues that need to be addressed.

Problems with Big Data Analytics

The risks of Big Data are manifold. The security of digital communication has been undermined. Cyber crime, including data, identity and financial theft, is exploding, now causing annual damage on the order of 3 trillion dollars, and this figure is growing exponentially. Critical infrastructures such as energy, financial and communication systems are threatened by cyber attacks. They could, in principle, be made dysfunctional for an extended period of time, thereby seriously disrupting our economy and society. Concerns about cyber wars and digital weapons (D weapons) are growing quickly, as these may be even more dangerous than atomic, biological and chemical (ABC) weapons.

Besides cyber risks, there is a pretty long list of other problems. Results of Big Data analytics are often taken to be accurate and objective. This is dangerous, because the effectiveness of Big Data is sometimes based more on beliefs than on facts.[23] It is also far from clear that surveillance cameras[24] and predictive policing[25] can really significantly reduce organized and violent crime, or that mass surveillance is more effective in countering terrorism than classical investigation methods.[26] Moreover, one of the key examples of the power of Big Data analytics, Google Flu Trends, has recently been found to make poor predictions. This is partly because advertisements bias user behavior and search algorithms keep being changed, such that the results are not stable and reproducible.[27] In fact, Big Data curation and calibration efforts are often low. As a consequence, the underlying datasets are typically not representative, and they may contain many errors. Last but not least, Big Data algorithms are frequently used to reveal optimization potentials, but their results may be unreliable or may not reflect any causal relationships. Therefore, conclusions drawn from Big Data are not necessarily correct.
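
To see how easily spurious correlations arise, consider the following small simulation (a toy sketch with made-up data, not a real analysis): among a few hundred candidate variables that are pure noise, at least one will almost always appear to "predict" an equally random target.

```python
import math
import random

def pearson(x, y):
    """Pearson correlation coefficient of two equally long samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(42)
n_samples, n_features = 100, 200

# A target variable that is pure noise: nothing can genuinely predict it.
target = [random.gauss(0, 1) for _ in range(n_samples)]

# 200 candidate "predictors", all of them random noise as well.
features = [[random.gauss(0, 1) for _ in range(n_samples)]
            for _ in range(n_features)]

# Mining the data for the "best" predictor still finds a sizeable correlation.
best = max(abs(pearson(f, target)) for f in features)
print(f"strongest 'correlation' found in pure noise: {best:.2f}")
```

The "discovered" correlation is entirely meaningless, which is why results of such searches should be treated as hypotheses to be tested on fresh data.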

A naive application of Big Data algorithms can easily lead to mistakes and wrong conclusions. The error rate in classification problems (e.g. the distinction between "good" and "bad" risks) is often significant. Issues such as wrong decisions or discrimination are serious problems.[28] In fact, anti-discrimination laws may be implicitly undermined, as the results of Big Data algorithms may imply disadvantages for women, handicapped people, or ethnic, religious, and other minorities. This is because insurance offers, product prices in Internet shops, and bank loans increasingly depend on behavioral variables and on specifics of the social environment, too. It might happen, for example, that the conditions of a personal loan depend on the behavior of people one has never met. In the past, some banks have even terminated loans when neighbors failed to make their payments on time.[29] In other words, as we lose control over our personal data, we are losing control over our lives, too. How will we be able to take responsibility for our lives in the future, if we can no longer control them?

This brings us to the issue of privacy, where a number of important points must be considered. First of all, surveillance scares people, particularly minorities. All minorities are vulnerable, but the success of our society depends on them (e.g. politicians, entrepreneurs, intellectuals). As the German "Volkszählungsurteil" (census ruling)[30] correctly concludes, the continuous and uncontrolled recording of data about individual behavior undermines the chances of personal, and also of societal, development. Society needs innovation to adjust to change (such as demographic, environmental, technological or climate change). However, innovation needs a cultural setting that allows people to experiment and make mistakes.[31] In fact, many fundamental inventions have been made by accident or even by mistake (porcelain, for example, resulted from attempts to produce gold). A global map of innovation clearly shows that fundamental innovation mainly happens in free and democratic societies.[32] Experimenting is also needed to become an adult who is able to judge situations and take responsible decisions.

Therefore, society needs to be run in a way that is tolerant of mistakes. But today one may get a speeding ticket for having driven 1 km/h too fast (see below). In the future, in our over-regulated world, one might get tickets for almost anything.[33] Big Data would make it possible to discover and sanction every small mistake. In the USA, there are already 10 times more people in prison than in Europe (and more than in China and Russia, too). Is this our future, and does it have anything to do with the free society we used to live in? Moreover, if we punished only a sample of the people making mistakes, how would this be compatible with fairness? Wouldn't it end in arbitrariness and undermine justice? And wouldn't the presumption of innocence be gone, which is based on the idea that the majority of us are good citizens and only a few are malicious and to be found guilty?

Undermining privacy cannot work well. It signals distrust of the citizens, and this undermines the citizens' trust in the government, which is the basis of its legitimacy and power. The saying "trust is good, but control is better" is not entirely correct: control cannot fully replace trust.[34] A well-functioning and efficient society needs a suitable combination of both.

"Public" without "private" wouldn't work well. Privacy provides opportunities to explore new ideas and solutions. It helps to recover from the stress of daily adaptation and reduces conflict in a dense population of people with diverse preferences and cultural backgrounds.

Public and private are two sides of the same coin. If everything is public, this will eventually undermine social norms.[35] In the long run, the consequence could be a shameless society or, if any deviation from established norms is sanctioned, a totalitarian society.

[Figure caption: "… and I was not even the driver …"]

Therefore, while the effects of mass surveillance and privacy intrusion are not immediately visible, they might still cause long-term damage by undermining the fabric of our society: social norms and culture. It is highly questionable whether the economic benefits would really outweigh this damage, and whether a control-based digital society would work at all. I rather expect such societal experiments to end in disaster.


Problems with Artificial Intelligence and Super-Intelligence[36]

The globalization and networking of our world have created a level of interdependency and complexity that no individual can fully grasp. This leads to the awkward situation that each of us sees only part of the picture, which has promoted the idea that we should have artificial super-intelligences able to oversee the entire knowledge of the world. However, learning such knowledge (not just the facts, but also their implications) might progress more slowly than our world changes and human knowledge advances.[37]

It is also important to consider that the meaning of data depends on context. This becomes particularly clear for ambiguous content. Therefore, like our own brain, an artificial intelligence based on deep learning will sometimes see spurious correlations, and it will probably have some prejudices, too.

Unfortunately, having more information than humans (as cognitive computers have today) does not mean being objective or right. The problem of "over-fitting", i.e. the tendency to fit meaningless, random patterns in the data, is just one possible issue. The problems of parameter sensitivity and of "chaotic" or "turbulent" system dynamics restrict the possibilities to predict future events, to assess current situations, or even to identify the correct model parameters describing past events.[38] Despite these constraints, a data-driven approach will always deliver some output, but this might be just an "opinion" of an intelligent machine rather than a fact. This becomes clear if we imagine running two identical super-intelligent machines in different places. As they are not fed with exactly the same information, they would have different learning histories and would sometimes come to different conclusions. So, super-intelligence is no guarantee of finding a solution that corresponds to the truth.[39] And what if a super-intelligent machine catches a virus and gets something like a "brain disease"?
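
The prediction limits caused by "chaotic" dynamics can be illustrated in a few lines of code. The logistic map below is a standard textbook example (not a model of society, of course): two simulations whose starting points differ by only one part in a billion soon become completely unrelated.

```python
# The logistic map x -> r*x*(1-x) is a textbook chaotic system.
# Two runs start from initial conditions differing by 1e-9; the tiny
# difference is amplified exponentially until the trajectories diverge.
r = 4.0                 # parameter value in the fully chaotic regime
x, y = 0.4, 0.4 + 1e-9  # almost identical starting points
max_div = 0.0

for step in range(60):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    max_div = max(max_div, abs(x - y))

print(f"largest divergence within 60 steps: {max_div:.3f}")
```

Since any real measurement has limited precision, such sensitivity puts a hard limit on how far ahead even a perfectly informed machine could predict.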

The greatest problem is that we might be tempted to apply powerful tools such as super-intelligent machines to shape our society at large. As it became obvious above, super-intelligences would make mistakes, too, but the resulting damage might be much larger and even disastrous. Besides this, super-intelligences might emancipate themselves and become uncontrollable. They might also start to act in their own interest, or lie.

Most importantly, powerful tools will always attract people striving for power, including organized criminals, extremists, and terrorists. This is particularly concerning because there is no 100% reliable protection against serious misuse. At the 2015 WEF meeting, Misha Glenny said: "There are two types of companies in the world: those that know they've been hacked, and those that don't."[40] In fact, even the computer systems of many major companies, the US military, the Pentagon and the White House have been hacked in the past, not to mention the problem of data leaks. The growing concerns about building and using super-intelligences therefore seem largely justified.

Problems with Manipulative ("Persuasive") Technologies[41]

The use of information technology is changing our behavior. This fact invites potential misuse, too.[42] Society-scale experiments with manipulative technologies are likely to have serious side effects. In particular, influencing people's decision-making undermines the principle of the "wisdom of crowds",[43] on which democratic decision-making and also the functioning of financial markets are based. For the "wisdom of crowds" to work, one requires sufficiently well-educated people who gather and judge information separately and make their decisions independently. Influencing people's decisions will increase the likelihood of mistakes, which might be costly. Moreover, the information basis may become so biased over time that no one, including government institutions and intelligent machines, might be able to make reliable judgments.
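
The following toy simulation (with invented numbers) illustrates why independence matters for the "wisdom of crowds": the errors of independent estimates largely cancel out in the average, whereas a bias shared by everyone, e.g. one induced by manipulated information, does not.

```python
import random
import statistics

random.seed(7)
true_value = 100.0
n_people = 1000

# Independent estimates: each person errs on their own.
independent = [true_value + random.gauss(0, 20) for _ in range(n_people)]

# Influenced estimates: everyone shares one common bias (e.g. induced by
# the same manipulated information source) plus a small individual error.
common_bias = random.gauss(0, 20)
influenced = [true_value + common_bias + random.gauss(0, 5)
              for _ in range(n_people)]

err_indep = abs(statistics.mean(independent) - true_value)
err_infl = abs(statistics.mean(influenced) - true_value)
print(f"error of the independent crowd: {err_indep:.2f}")
print(f"error of the influenced crowd:  {err_infl:.2f}")
```

The independent crowd's error shrinks roughly with the square root of the crowd size, while the influenced crowd's error stays close to the shared bias, no matter how many people are averaged.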

Eli Pariser raised a related issue, which he called the "filter bubble". As we increasingly live in a world of personalized information, we are less and less confronted with information that doesn't fit our beliefs and tastes. While this creates the feeling of living in a world we like, we lose awareness of other people's needs and points of view. When confronted with them, we may fail to communicate and interact constructively. For example, the US political system seems to increasingly suffer from the inability of Republicans and Democrats to make compromises that are good for the country. Analyses of their political discourse on certain subjects show that they don't just hold different opinions; they also use different words, so that there is little chance of developing a shared understanding of a problem.[44] Therefore, some modern information systems haven't made it easier to govern a country; quite the contrary.

In perspective, manipulative technologies may be seen as attempts to "program" people. Some crowdsourcing techniques, such as the services provided by Mechanical Turk, already come pretty close to this. Here, people pick up jobs of all kinds, which may take just a few minutes each. For example, you may have a 1,000-page manual translated within a day by breaking it down into sufficiently many micro-translation jobs.[45] In principle, however, one could think of anything, and people might not even be aware of the outcome they are jointly producing.[46]

Importantly, manipulation incapacitates people and makes them less capable of solving problems by themselves.[47] On the one hand, this means that they increasingly lose control of their judgments and decision-making. On the other hand, who should be held responsible for mistakes that are based on manipulated decisions? The one who took the wrong decision, or the one who made him or her take it? Probably the latter, particularly as human brains are less and less able to keep up with the performance of computer systems (think, for example, of high-frequency trading).

Finally, we must be aware of another important issue. Some keen strategists believe that manipulative technologies would be perfect tools to create a society that works like a perfect machine. The idea behind this is as follows: a super-intelligent machine would try to figure out an optimal solution to a certain problem, and it would then try to implement it using punishment or manipulation, or both. In this context, one should evaluate again what purposes recent editions of security laws (such as the BÜPF) might be used for, besides fighting true terrorists. It is certainly concerning if people can be put in jail for contents found on their computer hard disks, while at the same time hard disks are known to have back doors and secret services are allowed to download materials onto them. This enables serious misuse, but it also calls into question whether hard disk contents can still be accepted as evidence in court.

Of course, one must ask whether it would really be possible to run a society by a combination of surveillance, manipulation and coercion. The answer is: probably yes, but given the complexity of our world, I expect this would not work well, and not for long. One might therefore say that, in complex societies, the times when a "wise king" or "benevolent dictator" could succeed are gone.[48] But there is a serious danger that some ambitious people might still try to implement the concept and take drastic measures in desperate attempts to succeed. Minorities, who are often seen as producing "unnecessary complexity", would probably come under pressure.[49] This would reduce social, cultural and economic diversity.

As a consequence, this would eventually lead to a socio-economic "diversity collapse", i.e. many people would end up behaving similarly. While this may appear favorable to some people, one must recognize that diversity is the basis of innovation, economic development,[50] societal resilience, collective intelligence, and individual happiness. Therefore, socio-economic and cultural diversity must be protected in a similar way as we have learned to protect biodiversity.[51]

Altogether, it is more appropriate to compare a social or economic system to an ecosystem than to a machine. It then becomes clear that a reduction of diversity corresponds to the loss of biological species in an ecosystem. In the worst case, the ecosystem could collapse. By analogy, the social or economic system would lose performance and become less functional. This is what typically happens in totalitarian regimes, and it often ends with wars as a result of attempts to counter the systemic instability caused by a diversity collapse.[52]

In conclusion, to cope with diversity, engaging in interoperability is largely superior to standardization attempts. That is why I am suggesting below to develop personal digital assistants that help to create benefits from diversity.


I am a strong supporter of using digital technologies to create new business opportunities and to improve societal well-being. Therefore, I think one shouldn't stop the digital revolution. (Such attempts would fail anyway, given that all countries are exposed to harsh international competition.) However, as with every technology, there are also potentially serious side effects, and there is a dual-use problem.

If we use digital technologies in the wrong way, it could be disastrous for our economy, ending in mass unemployment and economic depression. Irresponsible uses could also be bad for our society, potentially ending (intentionally or not) in more or less totalitarian regimes with few individual freedoms.[53] There are also serious security issues due to exponentially increasing cyber crime, which is partially related to the homogeneity of our current Internet, the lack of barriers (for the sake of efficiency), and the backdoors in many hard- and software systems.

Big Data produces further threats. It can be used to ruin personal careers and companies, but also to launch cyber wars.[54] As we don't allow anyone to own a nuclear bomb or to drive a car without brakes and other safety equipment, we must regulate and control the use of Big Data, too, including its use by governments and secret services. This seems to require a sufficient level of transparency; otherwise it is hard for anyone to judge whether such uses can be trusted and what the dangers are.


Recommendations regarding Big Data

The use of Big Data should meet certain quality standards. This includes the following aspects:

  • More attention must be paid to security issues. Sensitive data must be better protected from illegitimate access and use, including the hacking of personal data. For this, more and better data encryption might be necessary.
  • Storing large amounts of sensitive data in one place, accessible with a single password, is dangerous. Concepts such as distributed data storage and processing are advisable.
  • It should not be possible for a person owning or working in a Big Data company or secret service to look into personal data in unauthorized ways (think of the LoveINT affair, where secret service staff were spying on their partners or ex-partners[55]).
  • Informational self-determination (i.e. control over who uses what personal data for what purpose) is necessary for individuals to keep control of their lives and be able to take responsibility for their actions.
  • It should be easy for users to exercise their right to informational self-determination, which can be done by means of Personal Data Stores, as developed at MIT[56] and by various companies. Microsoft seems to be working on a hardware-based solution.
  • It must be possible and reasonably easy to correct wrong personal data.
  • As Big Data analytics often results in meaningless patterns and spurious correlations, for the sake of objectivity and in order to come to reliable conclusions, it would be good to view its results as hypotheses and to verify or falsify them with different approaches afterwards.
  • It must be ensured that scientific standards are applied to the use of Big Data. For example, one should require the same level of significance that is demanded in statistics and for the approval of medical drugs.
  • The reproducibility of results of Big Data analytics must be demanded.
  • A sufficient level of transparency and/or independent quality control is needed to ensure that quality standards are met.
  • It must be guaranteed that applicable antidiscrimination laws are not implicitly undermined and violated.
  • It must be possible to challenge and check the results of Big Data analytics.
  • Efficient procedures are needed to compensate individuals and companies for improper data use, particularly for unjustified disadvantages.
  • Serious violations of constitutional rights and applicable laws should be confronted with effective sanctions.
  • To monitor potential misuse and for the sake of transparency, the processing of sensitive data (such as personal data) should probably always be logged.
  • Reaping private benefits at the cost of others or the public must be sanctioned.
  • As for handling dangerous goods, potentially sensitive data operations should require particular qualifications and a track record of responsible behavior (which might be implemented by means of special kinds of reputation systems).
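
As an illustration of the logging recommendation above, the following sketch shows what a minimal access log for sensitive data could look like (all names and functions are hypothetical; a real system would use an append-only, tamper-evident store rather than an in-memory list):

```python
import datetime
import functools

AUDIT_LOG = []  # stand-in for an append-only, tamper-evident store

def logged_access(func):
    """Record every access to sensitive data: who, what, when, and why."""
    @functools.wraps(func)
    def wrapper(user, record_id, purpose):
        AUDIT_LOG.append({
            "time": datetime.datetime.utcnow().isoformat(),
            "user": user,
            "record": record_id,
            "purpose": purpose,
            "operation": func.__name__,
        })
        return func(user, record_id, purpose)
    return wrapper

@logged_access
def read_personal_record(user, record_id, purpose):
    # placeholder for the actual (authorized) data retrieval
    return {"id": record_id}

read_personal_record("analyst_17", "r-4711", "fraud investigation")
print(AUDIT_LOG[0]["operation"])  # → read_personal_record
```

Requiring a stated purpose for every access is what later makes unauthorized look-ups (as in the LoveINT affair mentioned above) detectable.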

It must be clear that digital technologies will only thrive if they are used in a responsible way. For companies, the trust of consumers and users is important to gain and maintain a large customer base. For governments, public trust is the basis of legitimacy and power. Losing trust would, therefore, cause irrevocable damage.

The current problem is the use of cheap technology. For example, most software is not well tested and not secure. Therefore, Europe should invest in high-quality products and services. Considering its delay in developing data products and services, Europe must in any case find a strategy that differentiates it from its competitors (which could include an open data and open innovation strategy[57]).

Most, if not all functionality currently produced with digital technologies (including certain predictive "Crystal Ball" functionality[58]) can also be obtained in different ways, particularly in ways that are compatible with constitutional and data protection laws[59] (see also the Summary, Conclusion, and Discussion). This may come at higher costs and slightly reduced efficiency, but it might be cheaper overall than risking considerable damage (think of the loss of 3 trillion dollars to cybercrime each year, and consider that this number is still increasing exponentially). Remember also that we have imposed safety requirements on nuclear, chemical, genetic, and other technologies (such as cars and planes) for good reasons. In particular, I believe that we shouldn't (and wouldn't need to) give up the very important principle of informational self-determination in order to unleash the value of personal data. Informational self-control is of key importance for maintaining democracy, individual freedom, and responsibility for our lives. To reach catalytic and synergy effects, I strongly advise engaging in culturally fitting uses of Information and Communication Technologies (ICT).

In order to avoid slowing down beneficial data applications too much, one might think of continuously increasing standards. Some new laws and regulations might become applicable within 2 or 3 years' time, to give companies sufficient time to adjust their products and operations. Moreover, it would be useful to have open technology standards, such that all companies (including small and medium-sized ones) have a chance to meet new requirements with reasonable effort. Requiring a differentiated kind of interoperability could be of great benefit.

Recommendations regarding Machine Learning and Artificial Intelligence


Modern data applications go beyond Big Data analytics towards (semi-)automatic systems, which typically offer users the possibility to control certain system parameters (though sometimes the only option is to switch the automatic mode, or the whole system, off). Autopilot systems, high-frequency trading, and self-driving cars are well-known examples. Will we in future even see an automation of society, including automated voting by digital agents mirroring ourselves?[60]

Automated or autonomous systems are often not 100 percent controllable, as they may operate at a speed that humans cannot compete with. One must also realize that today's artificially intelligent systems are not fully programmed. They learn, and they may therefore behave in ways that have not been tested before. Even if their components were programmed line by line and thoroughly tested without showing any signs of error, the interaction of the system components may lead to unexpected behaviors. This is often the case, for example, when a car with sophisticated electronic systems shows surprising behavior (such as suddenly ceasing to operate). In fact, unexpected ("emergent") behavior is a typical feature of many complex dynamical systems.

The benefits of intelligent learning systems can certainly be huge. However, we must understand that they will sometimes make mistakes, too, even when automated systems are superior to humans in performing a task. Therefore, one should make a reasonable effort to ensure that the mistakes of an automated system are outweighed by its benefits. Moreover, possible damages should be sufficiently small or rare, i.e. acceptable to society. In particular, such damages should not pose any large-scale threats to critical infrastructures, our economy, or our society. As a consequence, I propose the following:

  • A legal framework for automated technologies and intelligent machines is necessary. Autonomy must come with responsibility; otherwise we may quickly end up in anarchy and chaos.
  • Companies should be accountable for delivering automated technologies that satisfy certain minimum standards of controllability and for sufficiently educating their users (if necessary).
  • The users of automated technologies should be accountable for appropriate efforts to control and use them properly.
  • Contingency plans should be available for the case where an automated system gets out of control. It would be good to have a fallback level or plan B that can maintain the functionality of the system at the minimum required performance level.
  • Insurances and other legal or public mechanisms should be put in place to appropriately and efficiently compensate those who have suffered damage.
  • Super-intelligences must be well monitored and should have in-built destruction mechanisms in case they get out of control nevertheless.
  • Relevant conclusions of super-intelligent systems should be independently checked (as these could also make mistakes, lie, or act selfishly). This requires suitable verification methods, for example, based on collective intelligence. Humans should still have possibilities to judge recommendations of super-intelligent machines, and to put their suggestions in a historical, cultural, social, economic and ethical perspective.
  • Super-intelligent machines should be accessible not only to governing political parties, but also to the opposition (and their respectively commissioned experts), because the discussion about the choice of the goal function and the implication of this choice is inevitable. This is where politics still enters in times of evidence- or science-based decision-making.
  • The application of automation should affect sufficiently small parts of the entire system only, which calls for decentralized, distributed, modular approaches and engineered breaking points to avoid cascade effects. This has important implications for the design and management of automated systems, particularly of globally coupled and interdependent systems.[61]
  • In order to stay in control, governments must regulate and supervise the use of super-intelligences with the support of qualified experts and independent scientists.[62]
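
To illustrate the contingency-plan recommendation above, here is a minimal sketch (with hypothetical names and thresholds) of a supervised controller that falls back to a simple, well-understood "plan B" rule whenever the automated component fails or produces out-of-range output:

```python
SAFE_RANGE = (0.0, 100.0)  # acceptable control outputs (hypothetical units)

def fallback_controller(sensor_value):
    """Conservative plan B: a fixed, well-understood clamping rule."""
    return min(max(sensor_value, SAFE_RANGE[0]), SAFE_RANGE[1])

def supervised_control(automated_controller, sensor_value):
    """Run the automated controller, but switch to plan B on failure."""
    try:
        output = automated_controller(sensor_value)
    except Exception:
        return fallback_controller(sensor_value)  # controller crashed
    if not (SAFE_RANGE[0] <= output <= SAFE_RANGE[1]):
        return fallback_controller(sensor_value)  # out of bounds -> plan B
    return output

# A (hypothetical) learned controller that misbehaves on unusual input:
def learned_controller(x):
    if x > 1000:
        raise RuntimeError("input outside training distribution")
    return x * 1.5

print(supervised_control(learned_controller, 10))    # → 15.0 (normal operation)
print(supervised_control(learned_controller, 2000))  # → 100.0 (plan B takes over)
```

The point of the design is that the fallback level is simple enough to be fully understood and tested, so the system keeps its minimum required performance even when the learning component behaves unexpectedly.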


Recommendations regarding manipulative technologies

Manipulative technologies are probably the most dangerous among the various digital technologies discussed in this paper, because we might not even notice the manipulation attempts.

In the past, we lived in an information-poor world. We had enough time to assess the value of information, but not always enough information to decide well. With more information (Web search, Wikipedia, digital maps, etc.), orientation became increasingly easy. Now, however, we are faced with a data deluge and confronted with more information than we can assess and process. We are blinded by too much information, and this makes us vulnerable to manipulation. We increasingly need information filters, and the question is: who should produce these information filters? A company? Or the state?

In both cases, this might have serious implications for our society, because the filters would pursue particular interests (e.g. to maximize clicks on ads or to manipulate people in favor of nationalism). In this way, we might get stuck in a „filter bubble“.[63] Even if this filter bubble felt like a golden cage, it would limit our imagination and capacity for innovation. Moreover, mistakes can and will always happen, even when best efforts are made to reach an optimal outcome.

While some problems can be solved well in a centralized, top-down fashion, some optimization problems are notoriously hard and better solved in a distributed way. Innovation is one of these areas.[64] The main problem is that the most fundamental question of optimization is unsolved, namely which goal function to choose. A badly chosen goal function will produce bad outcomes, but we may notice this only after many years. As mistakes in choosing the goal function will surely happen from time to time, it could end in disaster if everyone applied the same goal function.[65]

Therefore, one should apply something like a portfolio strategy. Under strongly variable and hardly predictable conditions, a diverse strategy works best. Pluralistic information filtering is therefore needed. In other words, customers, users, and citizens should be able to create, select, share and adapt the information filters they use, thereby creating an evolving ecosystem of increasingly better filters. In fact, everyone would probably use several different filters (for example, „What's currently most popular?“, „What's most controversial?“, „What's trendy in my peer group?“, „Surprise me!“).[66] In contrast, if we leave it to a company or the state to decide how we see the world, we might end up with biased views, and this could lead to terrible mistakes. It could, for example, undermine the „wisdom of crowds“, which is currently the basis of free markets and democracies (with benefits such as a high level of performance [not necessarily growth], quality of life, and the avoidance of mistakes such as wars).
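The portfolio idea above can be made concrete with a small sketch. The following Python example is purely illustrative: the filter names mirror the examples in the text, but the scoring rules and the `personal_feed` merging scheme are invented assumptions, not a description of any existing platform.

```python
# A minimal sketch of a pluralistic filter ecosystem: each user combines
# several independent information filters instead of relying on a single,
# centrally chosen one. All scores and field names are illustrative.
import random

def most_popular(items):
    # Rank items purely by view count.
    return sorted(items, key=lambda it: it["views"], reverse=True)

def most_controversial(items):
    # Controversy approximated as high vote volume with balanced up/down votes.
    def score(it):
        total = it["up"] + it["down"]
        balance = 1 - abs(it["up"] - it["down"]) / max(total, 1)
        return total * balance
    return sorted(items, key=score, reverse=True)

def surprise_me(items):
    # Deliberately unpredictable ordering, to escape the filter bubble.
    shuffled = items[:]
    random.shuffle(shuffled)
    return shuffled

def personal_feed(items, filters, picks_per_filter=2):
    """Merge the top picks of each user-selected filter, without duplicates."""
    feed, seen = [], set()
    for f in filters:
        for it in f(items)[:picks_per_filter]:
            if it["id"] not in seen:
                seen.add(it["id"])
                feed.append(it)
    return feed

items = [
    {"id": 1, "views": 900, "up": 50, "down": 45},
    {"id": 2, "views": 100, "up": 80, "down": 2},
    {"id": 3, "views": 500, "up": 10, "down": 12},
]
# Each user chooses their own portfolio of filters.
feed = personal_feed(items, [most_popular, most_controversial])
```

The key design point is that the filter set is user-chosen and extensible: new filters can be created, shared, and adapted, so no single party decides how everyone sees the world.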

In a world characterized by information overload, unbiased and reliable information becomes ever more important; otherwise the number of mistakes will probably increase. For the digital society to succeed, we must therefore put safeguards in place against information pollution and biases. Reputation systems might be a suitable instrument, if enough information providers compete effectively with each other to provide more reliable and more useful information. Additionally, legal sanctions might be necessary to counter intentionally misleading information.
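In its simplest form, a reputation system of the kind mentioned above could track how often each provider's information was later confirmed or refuted. The following Python sketch is a hypothetical illustration (the provider names and the Laplace-smoothed scoring rule are my assumptions, not a specification from the text):

```python
# Minimal reputation tracker: providers gain reputation when their
# information is later confirmed and lose it when it is refuted.
# Laplace smoothing avoids extreme scores based on little evidence.

class ReputationSystem:
    def __init__(self):
        self.confirmed = {}  # provider -> number of confirmed claims
        self.refuted = {}    # provider -> number of refuted claims

    def record(self, provider, was_correct):
        book = self.confirmed if was_correct else self.refuted
        book[provider] = book.get(provider, 0) + 1

    def reputation(self, provider):
        c = self.confirmed.get(provider, 0)
        r = self.refuted.get(provider, 0)
        # Smoothed success rate in (0, 1); unknown providers start at 0.5.
        return (c + 1) / (c + r + 2)

rs = ReputationSystem()
for _ in range(8):
    rs.record("provider_a", True)    # mostly reliable
rs.record("provider_a", False)
for _ in range(5):
    rs.record("provider_b", False)   # mostly unreliable
```

With several such systems run by competing, independent operators, users could cross-check scores rather than depend on a single authority.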

Consequently, advertisements should be marked as such, and the same applies to manipulation attempts such as nudging. In other words, the user, customer or citizen must be given the possibility to consciously decide for or against a certain decision or action; otherwise individual autonomy and responsibility are undermined. Just as customers of medical drugs are warned of potential side effects, one should state something like: „This product is manipulating your decisions and is trying to make you behave in a more healthy way (or in an environmentally friendly way, or whatever it tries to achieve…)“. The customer would then be aware of the likely effects of the information service and could actively decide whether he or she wants this or not.

Note, however, that it is currently not clear what the side effects of incentivizing the use of manipulative technologies would be. If applied on a large scale, it might be almost as bad as hidden manipulation. Dangerous herding effects might occur (including mass psychology as it occurs in hypes, stock market bubbles, unhealthy levels of nationalism, or the particularly extreme form it took during the Third Reich). Therefore,

  • manipulation attempts should be easily recognizable, e.g. by requiring everyone to mark the kind of information (advertisement, opinion, or fact),
  • it might be useful to monitor manipulation attempts and their effects,
  • the effect size of manipulation attempts should be limited to avoid societal disruptions,
  • one should have a possibility to opt out for free from the exposure to manipulative influences,
  • measures to ensure pluralism and socio-economic diversity should be required,
  • sufficiently many independent information providers with different goals and approaches would be needed to ensure an effective competition for more reliable information services,
  • for collective intelligence to work, a knowledge base of trustworthy and unbiased facts is key, such that measures against information pollution are advised.[67]
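Two of the safeguards listed above, mandatory labeling and a free opt-out, can be sketched as a minimal content-delivery policy. The Python example below is illustrative only; the label categories and field names are assumptions made for this sketch.

```python
# Sketch of two safeguards from the list above: every item must declare
# its kind (fact, opinion, advertisement, nudge), and users who have
# opted out of manipulative content never receive it.

ALLOWED_KINDS = {"fact", "opinion", "advertisement", "nudge"}
MANIPULATIVE_KINDS = {"advertisement", "nudge"}

def publish(item):
    """Reject any content that does not declare its kind."""
    if item.get("kind") not in ALLOWED_KINDS:
        raise ValueError("content must be labeled: fact/opinion/advertisement/nudge")
    return item

def deliver(items, user):
    """Filter the stream according to the user's opt-out choice."""
    visible = []
    for item in items:
        if user.get("opt_out_manipulation") and item["kind"] in MANIPULATIVE_KINDS:
            continue  # honor the free opt-out from manipulative influences
        visible.append(item)
    return visible

stream = [
    publish({"id": 1, "kind": "fact", "text": "Weather report"}),
    publish({"id": 2, "kind": "advertisement", "text": "Buy now"}),
    publish({"id": 3, "kind": "nudge", "text": "Walk more today!"}),
]
cautious_user = {"opt_out_manipulation": True}
```

The point of the sketch is that labeling is enforced at publication time, while the opt-out is enforced at delivery time, so neither depends on the goodwill of the content producer.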

Ethical guidelines, demanding certain quality standards, and sufficient transparency might also be necessary. Otherwise, the large-scale application of manipulative technologies could intentionally or unintentionally undermine the individual freedom of decision-making and the basis of democracies, particularly when nudging techniques become highly effective and are used to manipulate public opinion at large.[68]


Summary, Conclusions and Discussion

Digital technologies offer great benefits, but also substantial risks. They may help us to solve some long-standing problems, but they may also create new and even bigger issues. In particular, if wrongly used, individual autonomy and freedom, responsible decision-making, democracy and the basis of our legal system are at stake. The foundations on which our society is built might be damaged, intentionally or unintentionally, within a very short time period, which may not leave us enough opportunity to prepare for or respond to the challenges.

Currently, some or even most Big Data practices violate applicable data-protection laws. Of course, laws can be changed, but some uses of Big Data are also highly dangerous and incompatible with our constitution and culture. These challenges must be addressed by a combination of technological solutions (such as personal data stores), legal regulations, and social norms. Distributed data, distributed systems and distributed control, sufficiently many competitors, and suitably designed reputation systems might be the most effective way to avoid misuses of digital technologies, but transparency must be increased as well.

Even though our economy and society will change in the wake of the digital revolution, we must find a way that is consistent with our values, culture, and traditions, because this will create the largest synergy effects. In other words, a China or Singapore model is unlikely to work well in Europe.[69] We must take the next step in our cultural, economic and societal evolution.

I am convinced that it is now possible to use digital technologies in ways that bring the perspectives of science, politics, business, society, cultural traditions, ethics, and perhaps even religion together.[70] Specifically, I propose to use the Internet of Things as the basis for a participatory information system called the Planetary Nervous System or Nervousnet, to support tailored measurements, awareness, coordination, collective intelligence, and informational self-determination.[71] The system I suggest would have a resilient systems design and could be imagined as a huge catalyst of socio-economic value generation. It would also support real-time feedback through a multi-dimensional exchange system („multi-dimensional finance“). This approach would allow one to massively increase the efficiency of many systems, as it would support the self-organization of structures, properties and functions that we would like to have, based on local interactions. The distributed approach I propose is consistent with individual autonomy, free decision-making, the democratic principle of participation, as well as free entrepreneurial activities and markets. In fact, wealth is created not only through economies of scale (i.e. cheap mass production), but also through social interaction (that is why cities are drivers of the economy[72]).

The proposed approach would also consider (and potentially trade) externalities, thereby supporting other-regarding and fair solutions, which would be good for our environment, too. Finally, everyone could reap the benefits of diversity by using personal digital assistants, which would support the coordination and cooperation of diverse actors and reduce conflict.

In conclusion, we have the choice between two kinds of digital society: (1) a society in which people are expected to obey and perform tasks like robots or gearwheels of a perfect machine, characterized by top-down control, limitations of freedom and democracy, and potentially large unemployment rates; or (2) a participatory society with space for humans with sometimes surprising behaviors, characterized by autonomous but responsible decision-making supported by personal digital assistants, where information is opened up for everyone's benefit in order to reap the rewards of diversity, creativity, and exponential innovation. Which society would you choose?

The FuturICT community has recently worked out a framework for a smart digital society, which is oriented toward international leadership, economic prosperity, social well-being, and societal resilience, based on the well-established principle of subsidiarity. With its largely distributed, decentralized approach, it is designed to cope with the complexity of our globalized world and to benefit from it.[73]

The FuturICT approach takes the following insights into account:

  • Having and using more data is not always better (e.g. due to the problem of „over-fitting“, which makes conclusions less useful).[74]
  • Information always depends on context (and missing context), and it is therefore never objective. One person’s signal may be another person’s noise and vice versa. It all depends on the question and perspective.[75]
  • Even if individual decisions can be correctly predicted in 96% of all cases, this does not mean that the macro-level outcome would be correctly predicted.[76] This surprising discovery applies to cases of unstable system dynamics, where minor variations can lead to completely different outcomes.[77]
  • In complex dynamical systems with many interacting components, even the perfect knowledge of all individual component properties does not necessarily allow one to predict what happens if components interact.[78]
  • What governments really need to pay attention to are macro-effects, not micro-behavior. However, the macro-dynamics can often be understood by means of models that are based on aggregate variables and parameters.
  • What matters most is whether a system is stable or unstable. In case of stability, variations in individual behavior do not make a significant difference, i.e. we don’t need to know what the individuals do. In case of instability, random details matter, such that the predictability is low, and even in the unlikely case that one can exactly predict the course of events, one may not be able to control it because of cascade-effects in the system that exceed the control capacities.[79]
  • Surprises and mistakes will always happen. This can disrupt systems, but many inventions wouldn’t exist, if this wasn’t the case.[80]
  • Our economy and society should be organized in a way that manages to keep disruptions small and to respond flexibly to surprises of all kinds. Socio-economic systems should be able to resist shocks and recover from them quickly and well. This is best ensured by a resilient system design.[81]
  • A more intelligent machine is not necessarily more useful. Distributed collective intelligence can better respond to the combinatorial complexity of our world.[82]
  • In complex dynamical systems which vary a lot, are hard to predict and cannot be optimized in real-time (as applies to NP-hard control problems such as traffic light optimization), distributed control can outperform top-down control attempts by flexibly adapting to local conditions and needs.
  • While distributed control may be emulated by centralized control, a centralized approach might fail to identify the variables that matter.[83] Depending on the problem, centralized control is also considerably more expensive, and it tends to be less efficient and effective.[84]
  • Filtering out information that matters is a great challenge. Explanatory models that are combined with little, but the right kind of data are best to inform decision-makers. Such models also indicate what kind of data is needed.[85] Finding the right models typically requires interdisciplinary collaborations, knowledge about complex systems, and open scientific discussions that take all relevant perspectives on board.
  • Diversity and complexity are not our problem. They come along with the socio-economic and cultural evolution. However, we have to learn how to use complexity and diversity to our advantage. This requires the understanding of the hidden forces behind socio-economic change, the use of (guided) self-organization and digital assistants to create interoperability and to support the coordination of actors with diverse interests and goals.
  • To catalyze the best outcomes and create synergy effects, information systems should be used in a culturally fitting way.[86]
  • Responsible innovation, trustable systems and a sufficient level of transparency and democratic control can be highly beneficial.
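The instability point in the list above, that near-perfect micro-level prediction can still produce a completely wrong macro-level outcome, can be illustrated with Granovetter's classic threshold model of collective behavior. The Python sketch below is a textbook toy model, not the specific model of reference [76]; the population sizes and thresholds are chosen purely for illustration.

```python
# Granovetter-style threshold model: person i joins a collective action
# once at least thresholds[i] others have already joined. A single
# mispredicted individual (1% error) can flip the macro outcome entirely.

def cascade_size(thresholds):
    """Iterate participation until a fixed point is reached."""
    active = 0
    while True:
        new_active = sum(1 for t in thresholds if t <= active)
        if new_active == active:
            return active
        active = new_active

population_a = list(range(100))   # thresholds 0, 1, 2, ..., 99
population_b = list(range(100))
population_b[1] = 2               # one person out of 100 mispredicted

full = cascade_size(population_a)  # chain reaction: all 100 join
tiny = cascade_size(population_b)  # cascade dies out after 1 person
```

Both populations agree on 99 of 100 individual thresholds, yet one cascade engulfs everyone while the other stops immediately. This is why, near instability, knowing almost all micro-level details still fails to predict the macro-level outcome.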

As a consequence of the above insights, I believe we do not need to end privacy and informational self-determination to reap the benefits of data. Information systems are best used when they boost our society and economy to full capacity, i.e. when they use the knowledge, skills, and resources of everyone in the best possible way. This is of strategic importance and requires suitably designed participatory information systems, which optimally exploit the special properties of information.[87] In fact, the value of participatory systems, as pointed out by Jeremy Rifkin[88] and others,[89] becomes particularly clear if we think of the great success of crowdsourcing (Wikipedia, OpenStreetMap, GitHub, etc.), crowdfunding, citizen science and collective („swarm“) intelligence. So, let's build these systems together. What are we waiting for?



[1] This document includes and reproduces some paragraphs of the following documents: „Big Data – Zauberstab und Rohstoff des 21. Jahrhunderts“ published in Die Volkswirtschaft – Das Magazin für Wirtschaftspolitik (5/2014), see editions/201405/pdf/04_Helbing_DE.pdf; for an English translation see chapter 7 of D. Helbing (2015) Thinking Ahead – Essays on Big Data, Digital Revolution, and Participatory Market Society (Springer, Berlin).

[2] I would add „efficiency“ or „performance“ here as well.

[3] McKinsey and Co. Open data: Unlocking innovation and performance with liquid information,


[5] Chris Anderson, The End of Theory: The Data Deluge Makes the Scientific Method Obsolete. WIRED Magazin 16.07,

[6] One of the leading experts in this field is Jürgen Schmidhuber.

[7] Jeremy Howard, The wonderful and terrifying implications of computers that can learn, TEDx Brussels, implications_of_computers_that_can_learn

[8] The point in time when this happens is sometimes called „singularity“, according to Ray Kurzweil.

[9] Süddeutsche (11.3.2015) Roboter als Chef,

[10] Such movies often serve to familiarize the public with new technologies and realities, and to give them a positive touch (including „Big Brother“).

[11] James Barrat (2013) Our Final Invention – Artificial Intelligence and the End of the Human Era (Thomas Dunne Books). Edge Question 2015: What do you think about machines that think?


[13] Nick Bostrom (2014) Superintelligence: Paths, Dangers, Strategies (Oxford University Press).




[17] D. Helbing (2015) Distributed Collective Intelligence: The Network Of Ideas,


[19] For example, the following approach seems superior to what Google Flu Trends can offer: D. Brockmann and D. Helbing, The hidden geometry of complex, network-driven contagion phenomena. Science 342, 1337-1342 (2013).

[20] T. Preis, H.S. Moat, and H.E. Stanley, Quantifying trading behavior in financial markets using Google Trends. Scientific Reports 3: 1684 (2013).

[21] R.H. Thaler and C.R. Sunstein (2009) Nudge (Penguin Books).

[22] Süddeutsche (11.3.2015) Politik per Psychotrick,

[23] For example, many Big Data companies (even big ones) don't make large profits, and some are even making losses. Making big money often requires bringing a Big Data company to the stock market, or being bought by another company.

[24] M. Gill and A. Spriggs: Assessing the impact of CCTV. Home Office Research, Development and Statistics Directorate (2005),; see also BBC News (August 24, 2009) 1,000 cameras `solve one crime’,

[25] Journalist’s Resource (November 6, 2014) The effectiveness of predictive policing: Lessons from a randomized controlled trial, criminal-justice/predictive-policing-randomized-controlled-trial. ZEIT Online (29.3.2015) Predictive Policing – Noch hat niemand bewiesen, dass Data Mining der Polizei hilft,

[26] The Washington Post (January 12, 2014) NSA phone record collection does little to prevent terrorist attacks, group says,; see also

[27] D.M. Lazer et al. The Parable of Google Flu: Traps in Big Data Analytics, Science 343, 1203-1205 (2014).

[28] D. Helbing (2015) Thinking Ahead, Chapter 10 (Springer, Berlin). See also

[29] This problem is related with the method of „geoscoring“, see


[31] The Silicon Valley is well-known for this kind of culture.

[32] A. Mazloumian et al. Global multi-level analysis of the ’scientific food web‘, Scientific Reports 3: 1167 (2013), message-global=remove

[33] J. Schmieder (2013) Mit einem Bein im Knast – Mein Versuch, ein Jahr lang gesetzestreu zu leben (Bertelsmann).

[34] Detlef Fetchenhauer, Six reasons why you should be more trustful, TEDx Groningen,

[35] A. Diekmann, W. Przepiorka, and H. Rauhut, Lifting the veil of ignorance: An experiment on the contagiousness of norm violations, preprint

[36] Note that super-intelligent machines may be seen as an implementation of the concept of the „wise king“. However, as I am saying elsewhere, this is not a suitable approach to govern complex societies (see also the draft chapters of my book on the Digital Society at and, particularly the chapter on the Complexity Time Bomb: Combinatorial complexity must be answered by combinatorial, i.e. collective intelligence, and this needs personal digital assistants and suitable information platforms for coordination.

[37] Remember that it takes about two decades for a human to be ready for responsible, self-determined behavior. Before that, however, he or she may do a lot of stupid things (and this may actually happen later, too).

[38] I. Kondor, S. Pafka, and G. Nagy, Noise sensitivity of portfolio selection under various risk measures, Journal of Banking & Finance 31(5), 1545-1573 (2007).

[39] It’s quite insightful to have two phones talk to each other, using Apple’s Siri assistant, see e.g. this video:

[40], see also

[41] In other places (, I have metaphorically compared these technologies with a „magic wand“ („Zauberstab“). 
The problem with these technologies is: they are powerful, but if we don't use them well, their use can end in disaster. A nice poem illustrating this is The Sorcerer's Apprentice by Johann Wolfgang von Goethe.

[42] For example, it recently became public that Facebook had run a huge experiment trying to manipulate people’s mood: facebooks-mood-manipulation-experiment-might-be-illegal/380717/ 
This created a big „shit storm“: However, it was also attempted to influence people’s voting behavior: OkCupid even tried to manipulate people’s private emotions: 2014/jul/29/okcupid-experiment-human-beings-dating It is also being said that each of our Web searches now triggers about 200 experiments.

[43] J. Lorenz et al. How social influence can undermine the wisdom of crowd effect, Proceedings of the National Academy of Science of the USA 108 (22), 9020-9025 (2011); see also J. Surowiecki (2005) The Wisdom of Crowds (Anchor).

[44] See Marc Smith’s analyses of political discourse with NodeXL:

[45] M. Bloodgood and C. Callison-Burch, Using Mechanical Turk to build machine translation evaluation sets,

[46] In an extreme case, this might even be a criminal act.

[47] Interestingly, for IBM Watson (the intelligent cognitive computer) to work well, it must be fed with non-biased rather than with self-consistent information, i.e. pre-selecting inputs to get rid of contradictory information reduces Watson’s performance.

[48] It seems, for example, that the attempts of the world’s superpower to extend its powers have rather weakened it: we are now living in a multi-polar world. Coercion works increasingly less. See the draft chapters of my book on the Digital Society at for more information.

[49] even though one never knows before what kinds of ideas and social mechanisms might become important in the future – innovation always starts with minorities

[50] C.A. Hidalgo et al. The product space conditions the development of nations, Science 317, 482-487 (2007). According to Jürgen Mimkes, economic progress (which goes along with an increase in complexity) also drives a transition from autocratic to democratic governance above a certain gross domestic product per capita. In China, this transition is expected to happen soon.

[51] This is the main reason why one should support pluralism.

[52] See the draft chapters of D. Helbing's book on the Digital Society at, particularly the chapter on the Complexity Time Bomb

[53] One might distinguish these into two types: dictatorships based on surveillance („Big Brother“) and manipulatorships („Big Manipulator“).

[54] As digital weapons, so-called D-weapons, are certainly not less dangerous than atomic, biological and chemical (ABC) weapons, they would require international regulation and control.

[55] see




[59] D. Helbing and S. Balietti, From social data mining to forecasting socio-economic crises, Eur. Phys. J Special Topics 195, 3-68 (2011); see also;

[60] ,

[61] Note that the scientific field of complexity science has a large fund of knowledge about how to reach globally coordinated results based on local interactions.

[62] After all, humans have to register, too.

[63] E. Pariser (2012) The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think (Penguin).

[64] Some problems are so hard that no government and no company in the world have solved them (e.g. how to counter climate change). Large multi-national companies are often surprisingly weak in delivering fundamental innovations (probably because they are too controlling). That’s why they keep buying small and medium-sized companies to compensate for this problem.

[65] Similar problems are known for software products that are used by billions of people: a single software bug can cause large-scale problems – and the worrying vulnerability to cyber attacks is further increasing.

[66] We have demonstrated such an approach in the Virtual Journal platform (

[67] In fact, to avoid mistakes, the more we are flooded with information, the more we must be able to rely on it, as we have increasingly less time to judge its quality.

[68] This could end up in a way of organizing our society that one could characterize as „Big Manipulator“ (to be distinguished from „Big Brother“).

[69] The following recent newspaper articles support this conclusion: , 20869017 , 
In fact, based on a statistical analysis by Jürgen Mimkes and my own observations, I expect that China will now undergo a major transformation towards a more democratic state in the coming years. First signs of instability of the current autocratic system are visible already, such as the increased attempts to control information flows.

[70] D. Helbing, Responding to complexity in socio-economic systems: How to build a smart and resilient society? Preprint

[71] D. Helbing, Creating („Making“) a Planetary Nervous System as Citizen Web,

[72] L.M.A. Bettencourt et al. Growth, innovation, scaling, and the pace of life in cities, Proceedings of the National Academy of Sciences of the USA 104, 7301-7306 (2007).

[73] See D. Helbing, Globally networked risks and how to respond. Nature 497, 51-59 (2013). Due to the problem of the Complexity Time Bomb ( abstract_id=2502559), we must either decentralize our world, or it will most likely fragment, i.e. break into pieces, sooner or later.

[74] Having a greater haystack does not make it easier to find a needle in it.

[75] This is particularly well-known for the problem of ambiguity. For example, a lot of jokes are based on this principle.

[76] M. Maes and D. Helbing, Noise can improve social macro-predictions when micro-theories fail, preprint.

[77] We know this also from so-called „phantom traffic jams“, which appear without apparent reason when the car density exceeds a certain critical value beyond which traffic flow becomes unstable. Such phantom traffic jams could not be predicted at all by knowing all drivers' thoughts and feelings in detail. However, they can be understood, for example, with macro-level models that do not require micro-level knowledge. These models also show how traffic congestion can be avoided: by using driver assistance systems that change the interactions between cars, using real-time information about local traffic conditions. Note that this is a distributed control strategy.

[78] Assume one knows the psychology of two persons, but then they accidentally meet and fall in love with each other. This incident will change their entire lives, and in some cases it will change history too (think of Julius Caesar and Cleopatra, for example, but there are many similar cases). A similar problem is known from car electronics: even if all electronic components have been well tested, their interaction often produces unexpected outcomes. In complex systems, such unexpected, „emergent“ system properties are quite common.

[79] In case of cascade effects, a local problem causes other problems before the system recovers from the initial disruption; those problems trigger further ones, and so on. Even hundreds of policemen could not prevent phantom traffic jams from happening, and in the past even large numbers of security forces have often failed to prevent crowd disasters (sometimes they have even triggered or worsened them while trying to avoid them), see D. Helbing and P. Mukerji, Crowd disasters as systemic failures: Analysis of the Love Parade disaster, EPJ Data Science 1:7 (2012).

[80] I am personally convinced that the level of randomness and unpredictability in a society is relatively high, because it creates a lot of personal and societal benefits, such as creativity and innovation. Also think of the success principle of serendipity.

[81] D. Helbing et al. FuturICT: Participatory computing to understand and manage our complex world in a more sustainable and resilient way. Eur. Phys. J. Special Topics 214, 11-39 (2012).

[82] As we know, intellectual discourse can be a very effective way of producing new insights and knowledge.

[83] Due to the data deluge, the existing amounts of data increasingly exceed our processing capacities, which creates a „flashlight effect“: while we could look at anything, we have to decide which data to look at, and other data will be ignored. As a consequence, we often overlook things that matter. While the world was busy fighting terrorism in the aftermath of September 11, it did not see the financial crisis coming. While it was focused on this, it did not see the Arab Spring coming. The crisis in Ukraine also came as a surprise, and the response to Ebola came half a year late. Of course, the possibility or likelihood of all these events was reflected in some existing data, but we failed to pay attention to them.

[84] The classical telematics solutions based on a control center approach haven’t improved traffic much. Today’s solutions to improve traffic flows are mainly based on distributed control approaches: self-driving cars, intervehicle communication, car-to-infrastructure communication etc.

[85] This approach corresponds exactly to how Big Data is used at the elementary particle accelerator at CERN: 99.9% of measured data are deleted immediately. One only keeps data that are required to answer a certain question, e.g. to validate or falsify implications of a certain theory.

[86] J. van den Hoven et al. FuturICT – The road towards ethical ICT, Eur. Phys. J. Special Topics 214 , 153-181 (2012).

[87] This probably requires different levels of access depending on qualification, reputation, and merit.

[88] J. Rifkin (2013) The Third Industrial Revolution (Palgrave Macmillan Trade); J. Rifkin (2014) The Zero Marginal Cost Society (Palgrave Macmillan Trade).

[89] Government 3.0 initiative of the South Korean government,