Scientific and technological bases of Digital Transformation and commercial applications explained in a nutshell

From the contents

The history and basic functionality of Artificial Intelligence (AI):

From programming to machine learning

The use of the term ‘Artificial Intelligence’ is often criticized because it is quite unclear what human intelligence exactly is and how it relates to the attributes and functions of the human nervous system. From its beginnings, computer science (or informatics) was divided between different views. For one group of scholars the basis was logic and symbol transformation: computing was derived from conscious human thinking, and the idea of using computers to mimic human intelligence as such was not considered. Instead, computers were seen as an extension of human capabilities with regard to memory and the capacity for long logical sequences of operations; thus, computers were not regarded as intelligent in themselves. But two scientific discoveries enabled another group of scholars to think differently about intelligence and computers. The first was a basic understanding of the general functional principles of complex nervous systems, including the human brain, e.g. learning or neural and synaptic plasticity as described by Donald O. Hebb (1949).

The huge impact of Hebb on the interdisciplinary discussion about artificial intelligence might be based on the fact that his findings are easy to summarize in the (simplified) Hebbian learning rule: “neurons that fire together wire together”. In principle it is a bold simplification to interpret neurons as electro-chemical switches, but this view later triggered the development of networks of electronic switches (transistors) arranged in parallel, so-called ‘perceptrons’ or one-layer neural networks, the predecessors of today’s convolutional neural networks. Perceptrons were able to recognize different patterns, e.g. handwritten numbers, and to transform them into correct input for computer-based data processing. (cf. Sejnowski, T., 2018, The Deep Learning Revolution, San Diego, Cal.)
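The Hebbian rule is simple enough to sketch in a few lines of code. The following Python fragment is a deliberately crude illustration, not taken from Hebb or Sejnowski; the function name, the learning rate and the activity pattern are our own assumptions. The point it makes is only this: a connection weight grows whenever both ‘neurons’ are active together.

```python
# Minimal sketch of the Hebbian learning rule: "neurons that fire
# together wire together". All names and numbers are illustrative.

def hebbian_update(weight, pre_active, post_active, learning_rate=0.1):
    """Strengthen the connection only when both neurons fire together."""
    if pre_active and post_active:
        weight += learning_rate
    return weight

w = 0.0
# (pre, post) firing pairs; the neurons are co-active three times
activity = [(1, 1), (1, 0), (1, 1), (0, 1), (1, 1)]
for pre, post in activity:
    w = hebbian_update(w, pre, post)

print(round(w, 1))  # the weight grew on the three co-active events
```

Nothing is ‘programmed’ about which pattern matters: the weight simply accumulates a trace of correlated activity, which is the seed idea behind perceptron training.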

Another decisive step towards a serious scientific discussion of ‘Artificial Intelligence’ or ‘brain-style computing’ was Alan Turing’s seminal paper ‘Computing Machinery and Intelligence’ (1950), proposing to consider the question: Can machines think? On page 19 Turing noted his view that computers might be able to learn in a similar way to children:

  • “Structure of the Child Machine = Hereditary Material
  • Changes of the Child Machine = Mutation
  • Natural Selection = Judgment of the Experimenter”

(Turing, A., 1950, p. 19)

Indeed, machine learning as the basis of the development of artificial intelligence today follows the evolutionary principles outlined by Turing and Hebb: multilayered complex neural networks mimic structures of the human pre-frontal cortex (initial algorithms), e.g. those responsible for the recognition of patterns and objects. Random variations (mutations) in the weighting of sensor inputs by neurons (transistors), together with the intervention of the experimenter (or teacher) judging right or wrong, are the principal factors in achieving improved results (better trained algorithms) or more intelligent behavior of the neural network. This simple process is called supervised learning and may be compared with training or schooling. The figure below displays a simple four-layer neural network for supervised machine learning. Supervised learning means that the network is trained on a set of training data labeled by humans, e.g. pictures from the publicly available database ImageNet, which contains more than 14 million pictures with labeled objects suitable for training the algorithms of neural networks to classify objects as cars, bicycles, motorcycles, pedestrians, children, traffic lights, fire hydrants etc., which of course is vital for the development of driving assistance systems.
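The supervised learning loop described above, weights adjusted step by step by the judgment of a ‘teacher’ (the labels), can be sketched with a one-layer perceptron in Python. Everything here, the toy data set, the learning rate and the number of epochs, is an illustrative assumption, not a description of any production system:

```python
# Toy supervised learning: a one-layer perceptron whose weights are
# corrected whenever the "teacher" (the label) judges its output wrong.

def predict(weights, bias, x):
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

def train(samples, epochs=10, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, label in samples:
            error = label - predict(weights, bias, x)  # teacher's judgment
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Labeled training data for a logical AND, a linearly separable "pattern"
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(data)
print([predict(weights, bias, x) for x, _ in data])  # → [0, 0, 0, 1]
```

Note that no rule for recognizing the pattern is ever written down explicitly; the ‘training’ consists entirely of the error signal, label minus prediction.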

Screenshot from an open-source platform using cloud computing to distinguish different objects in pictures from each other, e.g. cats vs. dogs in the ImageNet database. Source: courtesy Vijay P. Singh, 2019

Learning algorithms based on complex mathematics regulate, but do not determine, the weights of the ‘neurons’, expressed by the thickness of the lines between them (in fact electronic switches in the state of 0 or 1).

The screenshot of an artificial multi-layer convolutional neural network (CNN) from the open-source platform in figure 5.3 displays a small cognitive machine working as a rough simulation of what might go on in the visual cortex of mammal and human brains when telling different object classes apart, e.g. cats from dogs. However, compared with a two-year-old child, the performance of even more sophisticated CNNs is weak. A nursing child can distinguish dogs from cats, or its mother from other ‘objects’ around it, after having seen only one or a few examples, and it also develops a mysterious sense of ‘object permanence’, a kind of model of the world in which ‘objects’, e.g. the mother, still exist even when they are not in sight. (You can play hide and seek with toddlers; they have fun.) But even the model of such a simple cognitive machine, e.g. a neural network recognizing handwritten numbers, meant enormous progress towards the development of artificial intelligence, as it seems impossible otherwise to mimic or achieve advances in object recognition and in the guidance of machine behavior based on suitable models of its environment. Nobody would know how to write a program that would let a computer with sensors (eyes) see like a human, or even on a superhuman level, as our consciousness doesn’t have any access to the processes in our visual cortex which make us recognize things.

Despite all the shortcomings of multilayered or convolutional neural networks (CNNs) and so-called deep learning algorithms, the range of applications already developed and influencing our daily lives is amazing. All of the four billion smartphones, all computers and tablets, and more and more cars and machines are connected to such systems. Neural networks and the related applications of specialized AI, e.g. natural language recognition and processing, have become the basis for the business models of the world’s most valuable companies, from Amazon to Apple, Microsoft, Google, Facebook, Alibaba, Tencent, Baidu and the Japanese Softbank Group, including countless start-ups in industries from automotive, transport and tourism to health care, diagnostics, education and higher education. ‘AI is the new electricity’ is a famous quote from Andrew Ng, co-founder of the Google Brain deep learning division as well as of the online education platform Coursera, former chief scientist of the Chinese search engine Baidu, and now director of the AI lab at Stanford University (cf. Ford, M., 2018).

Artificial Intelligence (AI) as a research field can be traced back to a conference organized by the computer scientist John McCarthy in the summer of 1956. Among the participants of the two-month event, the famous Dartmouth Conference at Dartmouth College in New Hampshire, were Marvin Minsky and Allen Newell (Crevier 1993; McCorduck 2004: 259-305; Roberts 2016). Over time, from 1956 to 2018, as Terry Sejnowski (2018) reports in his account of the ‘deep learning revolution’, there were several periods of ‘AI winter’, in which high hopes for breakthrough applications of machine learning were disappointed and investments as well as research budgets were cut. But advances in computing hardware and information transmission, as well as in neuroscience and the mathematical modeling of multidimensional learning, helped for instance to solve the problem of ‘over-fitting’ to training data sets (meaning, e.g., that objects not in the training data set would not be recognized because no generalization is possible). In consequence, ‘Artificial Intelligence’ isn’t programmed; it evolves and is trained like a dog, or, more sophisticatedly, it is educated like children or students and can even acquire capabilities its educators don’t have. The state of the art of the technology, a combination of reward learning with backpropagation algorithms, already allows the experimenter (trainer, educator) to be substituted by a machine, too. The tech magazine Wired devoted its June 2016 issue to the question of whether the end of traditional programming and coding is thinkable in the near future, a question which is out of scope for this introductory text, as is the rather speculative debate on superhuman artificial general intelligence (AGI).

However, basic artificial intelligence technology has advanced, and commercial applications of machine learning took off with overwhelming success in the first decade of the 21st century. ‘Brain-style’ computing has not only become complementary to traditional computing on the basis of programmed logical symbol transformation processes but dominant in the business models of the digital tech platform companies, stretching from web search and digital marketing to nearly every kind of industry and corporate function, from HR to finance, from health care to ‘intelligent’ or service robots.

The evolution of commercial applications of machine learning

Recent progress in AI has been achieved by applying machine learning to very large data sets and by the development of deep learning algorithms that detect patterns, enabling them to make predictions and recommendations by processing data in ‘brain-style computing’ rather than by programmed instructions. The algorithms also adapt in response to new data and feedback, improving efficacy over time, and can be used for:

  • descriptive analysis, i.e. identifying patterns, structures and trends (describing what happened),
  • predictive analysis, i.e. anticipating what is likely to happen (mainly practiced and adopted in corporations with systems of data-driven decision making in place), and
  • prescriptive analysis and decision making, where decisions are made autonomously by AI-based systems, e.g. in the operations of Amazon or Alibaba fulfillment centers.
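The distinction between descriptive, predictive and prescriptive analysis can be made concrete with a deliberately tiny Python sketch on made-up demand figures. All numbers and the reorder rule here are hypothetical, chosen only to show the three questions side by side:

```python
# Made-up daily demand figures for five consecutive days
demand = [100, 110, 120, 130, 140]

# Descriptive: what happened?
average = sum(demand) / len(demand)
trend = demand[-1] - demand[0]

# Predictive: what is likely to happen next? (naive linear extrapolation)
daily_growth = trend / (len(demand) - 1)
forecast = demand[-1] + daily_growth

# Prescriptive: what should the system do, autonomously?
# e.g. a hypothetical fulfillment-center rule: reorder when the
# forecast exceeds the stock on hand.
stock = 120
decision = "reorder" if forecast > stock else "hold"

print(average, forecast, decision)
```

Real systems replace the naive extrapolation with trained models, but the division of labor, describe, predict, then decide, is the same.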

Please note that there are many related technologies and that machine learning is not synonymous with artificial intelligence. Left out of consideration, for instance, are the psychological and social frames of the way we think and interact. Humans seem to have biases towards causality and intention in the connection and interpretation of data, which enables them to learn, transfer and generalize knowledge from just one example, and even without direct experience, often with better predictive validity than technical neural networks trained with billions of data points and millions of examples. Simplified constructs about how the world functions or how it should be, imagined realities, seem to play a major role as a new frontier of AI. (cf. Tenenbaum, J., in: Ford, M., 2018, pp. 463)

The major types of machine learning on which the technology platform strategies of the leading transnational corporations are based are supervised learning, unsupervised learning, and reinforcement or reward learning. The figure below explains their basic features and lists some applications widespread in business.

Machine learning types, general functions and applications
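As a contrast to supervised learning with its human-labeled training data, the following Python sketch illustrates the unsupervised case from the figure above: a bare-bones k-means clustering that receives no labels at all and has to find the groups in the data by itself. The data points, the starting centers and the fixed two-cluster setup are illustrative assumptions:

```python
# Unsupervised learning sketch: grouping unlabeled one-dimensional data
# into two clusters with a bare-bones k-means.

def kmeans_1d(points, centers, iterations=10):
    for _ in range(iterations):
        # assignment step: each point joins its nearest center
        clusters = [[], []]
        for p in points:
            nearest = min(range(2), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # update step: move each center to the mean of its cluster
        # (assumes both clusters stay non-empty, true for this toy data)
        centers = [sum(c) / len(c) for c in clusters]
    return centers

data = [1.0, 1.5, 2.0, 8.0, 8.5, 9.0]  # no labels attached
print(kmeans_1d(data, centers=[0.0, 10.0]))
```

No teacher ever says what the two groups ‘mean’; the algorithm only discovers that the data falls into a low cluster and a high cluster, which is typical of unsupervised applications such as customer segmentation.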

Some might still wonder whether the rise of the new tech giants and their dominance in the global economy is really a novel phenomenon; technologies, as pointed out above, were also a decisive factor in the emergence of the big industrial conglomerates of the 19th and 20th centuries. Thus, one of the big questions remaining is what makes the innovation dynamic, and the strategies corresponding with it, so special. Of course, the founders and founding teams of the contemporary tech giants didn’t exactly plan or aim at what they have achieved in recent decades.

None of the new entrants into internet-based businesses could have known about the future technological developments and the breakthrough in machine learning around the turn of the millennium. Analyses of the extraordinary success of Amazon, e.g. Brad Stone’s ‘The Everything Store’ (2013), routinely stress the genius or business acumen of the founder Jeff Bezos as one of the decisive factors. But there is much more.

Personality and a background in investment banking were of course crucial in identifying the potential of the internet with its rapidly growing number of users. Bezos obviously knew well about the positive network externalities of platforms and platform growth. The growth in the number of users and the unique cost structure of software, high cost for the development of the first copy and cost close to zero for every further copy, provided a unique chance. Traditional book retailers like the previous US market leader Barnes & Noble didn’t take the new competition seriously; they hesitated to invest in and boost online sales and thereby cannibalize their stationary business, and continued believing in their established business model. Traditionally, readers needed and appreciated consultations with knowledgeable sales staff, or made their pick by browsing a selection of books. Amazon never employed people knowledgeable about literature; it left the function of giving recommendations about books and their content to the growing community of customers or ‘users’ of its platform. The more users Amazon had, the more perfect the system could become. Introducing e-books and music services, venturing into complementary hardware, and leveraging the value of Amazon’s infrastructure by evolving into a marketplace for every kind of internet retail and service was more a consequent evolution than a revolution, obviously based on a core-competence, resource- and market-based evolution of strategy. But that Amazon became an AI-powered, data-driven corporation was of course only possible through the adoption of machine learning technology. Aside from online retail and logistics, Amazon has become a major player in cloud computing infrastructure and services with its Amazon Web Services (AWS) division.

As doctoral students of computer science, the Google founders Sergey Brin and Larry Page, both with a strong background in mathematics, had no intention of making a business out of their project to improve web search for internet users. In the 1990s, web search was dominated by tag words provided to search engines by the websites themselves, by programs called meta-crawlers searching for marked tag words, by the expert judgments of internet portals, and by paid listings. Internet search was time-consuming and heavily biased by questionable marketing practices, deception (gambling and pornographic sites, fraudulent businesses and information) and paid listings, as well as by the idea of keeping users as long as possible on one’s own websites, web portals and their affiliates. The internet was already at this time the largest knowledge base humankind had ever created, but it was nearly defunct, with only low value for users trying to retrieve information according to their own preferences. Brin and Page wanted to change this situation dramatically by improving the user experience of web search. The project, supported by Stanford University, was centered on the development of a novel search algorithm for page ranking, measuring how dense the connections between websites with similar subjects or profiles are and how often these links and back-links are used. The algorithm would in principle encompass a model of the whole world wide web, with all the links and back-links between web pages, and measure their ‘objective’ relevance independent of tag-word bias, paid listings and advertising purposes, enabling every user to find what he or she is searching for quickly and without harassment by unwanted results.
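The core idea of such a ranking algorithm, relevance measured through links and back-links rather than through tag words, can be sketched as a simple iterative computation in Python. The tiny four-page ‘web’, the damping factor and the iteration count below are illustrative assumptions; the original algorithm of course operated on the entire web graph:

```python
# Sketch of the link-based ranking idea: a page is important if
# important pages link to it.

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {}
        for p in pages:
            # sum the rank flowing in over back-links to p
            incoming = sum(rank[q] / len(links[q])
                           for q in pages if p in links[q])
            new[p] = (1 - damping) / n + damping * incoming
        rank = new
    return rank

# A, B and C all link to D; D links back only to A
web = {"A": ["D"], "B": ["D"], "C": ["D"], "D": ["A"]}
ranks = pagerank(web)
best = max(ranks, key=ranks.get)
print(best)  # D collects the most "votes" via its back-links
```

Note that page D ends up on top without ever describing itself: its relevance is derived purely from the structure of the links pointing at it, which is exactly why tag-word manipulation and paid listings lose their grip.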

Relying on the computing resources of Stanford University, Brin and Page could demonstrate the principal functionality of their algorithm and its improvement with time and scale, but the consumption of computing power became unbearable for the university’s infrastructure. The supervisor of the project, computer science professor Terry Winograd, recommended selling the algorithm to media companies on the verge of venturing into the internet, or to established portals and search engines like Yahoo and AltaVista. But none of the first-generation internet companies was interested in a functional web search bringing users directly to websites related to their interests, because keeping users as long as possible on their own websites, or routing them only to paid listings, was the prevailing, dominant business model, into which the new technology didn’t fit. Even a price tag as low as one million dollars was refused by Yahoo!

The digital transformation has already changed how successful corporations work

Due to the fame the technology-savvy students and their prominent supervisor enjoyed on the Stanford campus, contact with venture capitalists provided a suitable escape from the deadlock Brin and Page were in. Ram Shriram and Andreas von Bechtolsheim, two prominent venture capitalists of the Silicon Valley tech scene, didn’t hesitate to finance Google’s start-up and incorporation in 1998. The early investors simply trusted in a revolutionary idea and the disruptive potential of a new technology, without having a business model or business plan at hand. Google became the most popular search engine through its superior technology for improving the user experience of web search, but it took six years before Google introduced a business model, ironically based on paid listings on top of the search results. But these are at least optimized by intelligent algorithms to match user interests and behavior, and they are clearly marked as paid. The evolution of the Google/Alphabet holding towards becoming a leading tech conglomerate placing large bets on future breakthrough applications of artificial intelligence is well documented by Eric Schmidt and Jonathan Rosenberg’s (2017) insider book ‘How Google Works’.

Google/Alphabet has been from the beginning a data-, science- and technology-driven company. Thinking about and knowledge of business strategies in line with the classical models of corporate strategy and traditional leadership and management practices are not at the center of the specific culture of tech companies and of how the work of their core employees is organized.

Please note that the number of core employees in new tech companies who drive business by developing and monitoring AI applications is quite small. The older hardware and software giants with their sales networks, like Apple and Microsoft, or corporations with major operations in logistics and shipment, tend to have as many employees as traditional industrial conglomerates. Amazon, for instance, has globally more than 500,000 employees, but most of them work not in tech but in logistics and fulfillment center operations, services and maintenance, which are already guided and monitored by AI applications. Workers in Amazon ‘fulfillment centers’ are monitored and navigated, getting their information about what to do via smart watches and scanners. This explains the low median salaries (median meaning that 50% of the employees have a higher and 50% a lower salary than indicated) in comparison with the social network Facebook or the Alphabet/Google holding, which run their businesses with a much smaller number of employees and without major operations in retail and logistics. The figure below displays employee numbers and median salaries of public corporations listed on the US stock markets, according to their 2018 filings with the US Securities and Exchange Commission, in comparison with traditional companies like JPMorgan Chase (the largest US bank), McDonald’s and General Electric.

Median compensation and number of employees in tech companies with AI-powered platform strategies (highlighted blue) in comparison with traditional industries. Data source: US Securities and Exchange Commission (SEC) filings 2018 (Form DEF 14A)

In his foreword to Eric Schmidt and Jonathan Rosenberg’s account of ‘How Google Works’ (2017), Larry Page explains the special role of the core employees of Alphabet/Google:

“Over time I’ve learned, surprisingly, that it’s tremendously hard to get teams to be super ambitious. It turns out most people haven’t been educated in this kind of moonshot thinking. They tend to assume that things are impossible, rather than starting from real-world physics and figuring out what’s actually possible. It’s why we’ve put so much energy into hiring independent thinkers at Google, and setting big goals. Because if you hire the right people and have big enough dreams, you’ll usually get there. And even if you fail, you’ll probably learn something important. It’s also true that many companies get comfortable doing what they have always done, with a few incremental changes. This kind of incrementalism leads to irrelevance over time, especially in technology, because change tends to be revolutionary not evolutionary. So, you need to force yourself to place big bets on the future.” Larry Page, quoted from Schmidt, Eric, How Google Works, 2017 (Kindle Locations 105-112).

But it is not only the American ‘big five’ internet corporations, Alphabet Holding (Google), Apple, Facebook, Amazon, and Microsoft, that are impacting major industries on a global scale and changing the way knowledge is produced, used and organized as well as how we work and live together; others like the Chinese Alibaba Group or the Japanese Softbank conglomerate shouldn’t be overlooked, and the demise of Kodak, Nokia, Philips and IBM from leadership positions in their industries can’t be ignored.

The rise of social media from Facebook to ByteDance’s TikTok and the sharing economy known through Uber or Airbnb is based on machine learning

The impact of social media on the organization of societies is largely unknown and not yet explored, but there is no doubt that Facebook, Twitter, YouTube, Baidu, Renren, Weibo, WeChat and WhatsApp are some of the most important global players. The social media are operated by large transnational corporations with AI-powered platform strategies: not only Facebook Inc. (Facebook, WhatsApp), Twitter Inc. and Alphabet/Google Inc. (e.g. YouTube), but also players less known in the West like Sina Corp (Weibo), Tencent (WeChat), Baidu Inc. (Baidu) and Renren Inc. (Renren), or the Japanese Softbank Group, not only with Yahoo Japan but also financing and investing in dozens of social media and sharing economy start-ups (e.g. Uber, Lyft, Airbnb). The social media companies have become the largest advertising agencies in the world, with access to the personal data of millions or billions of users to customize advertising to specific groups of users with previously unknown efficiency. (Fuchs, 2017)

The huge shift in value creation towards a small group of novel companies will continue. Never in the history of the economy have companies reached dominant market positions in such a short time. After all, these monopolies serve the customers, all of us, says the eccentric investor Peter Thiel, who earned billions with PayPal and early venture capital funding of Facebook. These companies are driving progress and the transformation of the economy and society on a global scale.

Although criticism of the new monopolies, of data misuse and tax avoidance, and of the manipulation of search results is sporadically voiced and has led to penalties by national competition authorities and the EU Commission, neither the destruction nor the strict regulation of the data companies is likely. The opposite is true: politics and society are increasingly aligning themselves with the new social media and tech giants.

The defense of the traditional multinational corporations and of many small and medium-sized companies has so far been weak. Most managers neither understand artificial intelligence technologies nor the platform strategies based on them. They are neither eager nor capable to explore the potential of accelerated technological progress, especially in the fields of machine learning and deep learning and in other fields relevant to digital transformation, e.g. blockchain technology for data security, records management, transaction processing and authentication, which secures data integrity through distributed and publicly accessible ledgers and has the potential to disrupt public administration, legal services and the facilitation of payments.

No wonder many are annoyed by the constant talk about disruption, artificial intelligence, machine learning, platform strategies, big data, open innovation and the like. Is it hype? Just buzzword bingo? Some people who have a lot to lose might wish so. Of course, there is much lip service to digitization, but the talk is seldom combined with fundamental changes. The reason for ignorance and idleness is quite simple: today’s business is based on experience and expertise from the last century.

Successful digital transformation requires new approaches toward management

Where can something different be learnt? Most business schools and training programs are confined to the past and present and are not capable of contributing to a future which is significantly different from them; teaching and training is conceptualized as cramming and rigorous instruction to exert control and power over resources and people within hierarchical social systems. Or, as Sumantra Ghoshal has put it: (bad) management practice is taken as a criterion of truth. For most computer scientists, software engineers and mathematicians, the CEOs, CFOs, HR and marketing directors and the like from established ventures are mostly clueless persons, used to structures and to performing functions in a way which hardly makes sense in a new digital world based on machine learning, platforms and the crowd. Where in corporate hierarchies and meetings the highest paid person’s opinion (‘HiPPO’) counts more than argumentation based on scientifically validated facts, organizational inefficiency rises at high cost.

Google’s former executive chairman Eric Schmidt and Jonathan Rosenberg’s account of how leadership at Alphabet works differently from the traditions of corporate America tells us something about the future of work and management, and not only in Mountain View, California. And indeed, when we talked with ‘smart creatives’ from large established corporations as well as with the leadership of exponentially growing start-ups, the new tech giants and the champions of digital transformation, it didn’t matter where they came from: the US, China, Japan, India, Germany, the UK or Malta. It was agreed that the social organization relevant for digital transformation is different from corporate hierarchies laid out as social machines based on process management. Innovation is an offspring of structures known from excellent research universities, where small groups of extraordinarily talented minds work on subjects they are really interested in.

Of course, this kind of freedom is not for everyone. The still-human work drones in the logistics and fulfillment centers of Amazon and Alibaba wear devices similar to smart watches and are closely monitored and directed by processes and algorithms. Management here is already automated on a large scale. Indeed, the bulk of management as we know it from the 19th- and 20th-century corporations is no longer needed. Disrupted. The last days of 20th-century management and strategies are coming soon.

From our ongoing comparative studies of digital transformation projects, comprising major players of the new type of corporation, start-ups, and established large, medium-sized and small firms, we can delineate that the failure rate is highest in large and medium-sized established firms. This is only partly due to a lack of access to, or a lack of understanding of, technology, big data or artificial intelligence. Large corporations especially do not have too many problems with that. But traditionally managed firms typically get meager results, because they are good at doing what they are doing and at improving it incrementally; disruptive innovations and changing themselves drastically are out of scope for them (but are needed to stay in the game). Thus, it is not technology or data or AI that makes the difference. The difference lies in the inability to delegitimize an existing structure or social order and to develop a new, credible narrative to legitimize different forms of social organization. Instead of being precise by using the language of social systems theory, we express it here in simple words:

The required leadership practice seems to be counterintuitive to the narratives learnt and believed. What is necessary is:

  • to let talent drive strategy,
  • to establish small self-organizing teams equipped with the necessary resources to solve big problems,
  • to slash and de-scale complex systems of decision making,
  • to make extra efforts to get the HiPPOs out of the way: they are stubborn, inflexible, huge and dangerous; in general, managers can’t tell experts and talents what to think and do, or stick to the micromanagement and ‘controlism’ they are used to,
  • to have the potential of an idea assessed in the first place by competent people; the logic of cost cutting and economic value added has to be abandoned,
  • to remember that in the digital economy you make more money by not focusing on money, but on technology, the users, politics and society.

Google/Alphabet is not only dominating the search engine industry; it is also the technology leader in artificial intelligence and in a broad scope of applications like medical diagnosis, self-optimizing software systems and the like. Facebook is dominating social networks; together, the two account for more than 60 percent of global online advertising revenue and 25 percent of total spending on advertising. New competitors hardly have a chance in these markets. Therefore, the data scandal around Cambridge Analytica (see chapter 1), which manipulated the US elections of 2016 with micro-targeting through Facebook feeds, caused only limited damage for Facebook: a competitor who could profit was, and is, not in sight. Users who leave the network lose all their digital contacts and their virtual existence.

The major challenges for the corporations successfully leading digital transformation are of a political nature

Despite widespread critique of the impact of social networks on society, and fines amounting to billions of dollars demanded by EU and US institutional watchdogs, the impact of social media on the public seems to grow. Systematic deception driven by political and commercial interests is a part of reality. Big data and AI-powered platforms are leveraging and reinforcing these practices to influence human cognition. Media and public communication in the 20th century had already drawn on cognitive psychology and behavioral science for the systematic development and advancement of political propaganda and marketing techniques, but the emergence of the so-called social networks and the amount of instantly available data has multiplied their effects. The subsequent figure displays the major features of the contemporary critique by social activists of the commercial character of social networks like Facebook.

Adapted and enhanced from ‘We are all victims of Facebook manipulation’, Dec. 13th, 2018

Social media and machine learning applications enable the control and manipulation of communication and social behavior to an extent which is hard to comprehend. The website compares data-driven surveillance capitalism (Zuboff, 2014, The Age of Surveillance Capitalism) with the major driver of the environmental crisis, global warming.

The philosopher Jeremy Bentham (1791), in “Panopticon; or, The Inspection House”, imagined a prison with a tower in the middle, from which a guard could look into each cell at any time, while the prisoners would not be able to know whether or not they were being observed by the guard. Given this uncertainty, Bentham assumed that, in order to avoid the risk of being caught breaking a rule, they would always behave AS IF they were observed and would display a high level of self-control. The mechanism of the Panopticon can be imagined as leveraged towards society as a whole; Bentham himself had already speculated about its usefulness for other types of organizations, such as companies or schools.

Commercial and political actors exploit a plethora of cognitive biases to influence human cognition and behavior through communication in social networks; the most important of the more than 700 known biases are:

  • the anchoring bias (building expectations based on the environment you are used to),
  • the availability bias (valuing information you have above information you don’t have),
  • the confirmation bias (perceiving only information that confirms what is already believed),
  • the framing bias (the situation or the person matters more than the information itself),
  • the optimism bias and planning fallacy (overestimating favorable outcomes, underestimating difficulty and necessary resources),
  • loss aversion or the sunk cost bias (continuing to ride a dead horse).

Bias, limited rationality and opportunism play a major role in contemporary economics, management and marketing science. Daniel Kahneman, author of the bestselling book ‘Thinking, Fast and Slow’ (2011), was awarded the Nobel Prize in Economics for his groundbreaking empirical and experimental work aimed at making economic theory more realistic and linking it with realistic assumptions about human behavior. In this regard the commercial use of AI is dual-faced: within corporations, data-driven strategies are bound to reduce irrational decision-making practices that depend on ‘groupthink’, traditions, outdated practice, worldviews and theories; in the fields of marketing, lobbying etc., it aims at exploiting human bias. Yuval Noah Harari, a historian of the Hebrew University who has become an influential and visionary popular science author with his publications ‘Sapiens: A Brief History of Humankind’ (2014), ‘Homo Deus: A Brief History of Tomorrow’ (2016) and ‘21 Lessons for the 21st Century’ (2018), assumes that there are not only striking similarities between influencing human behavior and hacking a human brain, but that the twin revolution of infotech and biotech will also enable us to hack the genetic code, meaning that social, psychological and biological processes become available for design and modification according to political and economic interests. These far-reaching assumptions are indeed enjoying the attention of prominent figures in the tech industry. (New York Times, 2018-11-09: ‘Tech C.E.O.s Are in Love With Their Principal Doomsayer’)