Chinese AI will develop Chinese artificial consciousness

Jan Krikke
15 min read · Oct 31, 2018

[This is a lightly revised version of an article written for the Asia Times five years ago. I was prompted to republish it after discovering the work of Chinese technology philosopher Yuk Hui, author of numerous books and editor of the recently published Cybernetics for the 21st Century.

Noema Magazine published an interview with Yuk Hui three years ago entitled Singularity Vs Daoist Robots: Is there another path than accelerated Western modernization? He speaks of cosmotechnics, or how technology is infused with a worldview.

Yuk Hui argues that cultural factors will shape the development of artificial intelligence and related technologies during the 21st century. He explains why the Chinese worldview is a natural fit for modern technologies like cybernetics and AI. My article makes the same point. We also agree that cybernetics is foundational to artificial intelligence.]

If the experts are to be believed, AI will develop its own consciousness. A closer look suggests they got it backwards — human consciousness will be embedded in AI. What kind of consciousness will Chinese AI reveal?

Philosophers have traditionally debated consciousness along two lines: Plato, Descartes, and modern neuroscience claim that the brain produces consciousness and that it is the result of biological evolution. On the other hand, Indian philosophy, Aristotle, and some of those working in quantum physics argue that consciousness is intrinsic to the universe and that it preceded life.

The closest Chinese equivalent to the Western word consciousness is xin, literally “heart-mind.” The proverbial heart distinguishes humans from other forms of biological life. In the Chinese view, xin does not develop naturally but must be cultivated. Xin is rooted in Confucianism, which means it has an ethical connotation. A closer look at xin may shed light on how the Chinese will develop AI, and whether it leads to artificial consciousness.

In 2014, the Slovenian sinologist Tea Sernelj wrote a paper entitled “The Unity of Body and Mind in Xu Fuguan’s Theory,” one of the few recent discussions of the Chinese notion of consciousness. Xu Fuguan (1903–1982) was a prominent modern Confucian who tried to develop a synthesis between Western and Chinese thought to resolve the social and political problems of the modern, globalized world.

Xu’s main interest was the relationship between xin and qi (chi). The word qi is one of the crucial notions in the Chinese worldview. It has been translated as aether, vital force, air, energy flow, and gas, among many others. Sinologist Joseph Needham translated qi as “matter-energy,” a notion derived from quantum physics. No single translation can capture all the nuances of qi, but “all of the above” is a useful approximation.

Electromagnetism

Qi has its origin in the uniquely Chinese concept of Tao, the mysterious “something” that existed before heaven and earth. The universe came into existence when it “separated” into two binary forces the Chinese would later call yin and yang. In the Chinese view of creation, “When the yin and the yang, initially united, separated forever, the mountains poured forth water.” Mountain is primarily yang, water is primarily yin. Tao can be seen as a sophisticated form of animism. In Tao, as in animism, all things are intrinsically related, but ancient Chinese sages determined that nature is based on the interaction of two complementary opposites.

The germinating idea for this binary view of nature may be related to the discovery of magnetism. About 4,000 years ago, the Chinese invented the “south-pointing needle,” the prototype for their later invention, the magnetic compass. Qi is a pre-scientific notion of electromagnetism. The way the Chinese described qi fascinated the pioneers of quantum physics, who saw parallels between the behavior of subatomic particles and the Chinese description of nature. The Chinese character for qi is used in the modern compound word for electricity, and it is the same qi found in Qigong. (In Japan, where qi is called ki, it is used in denki, electricity, and in aikido, a Japanese martial art.)

The ancient Chinese reasoned that if nature is based on a binary principle — plus and minus, active and passive, male and female, life and death, growth and decay, etc — they would do well to study this principle by classifying all conceivable opposites they could identify in nature. This would allow them to “insert” themselves into the binary universe with a minimum amount of friction. It led to the most enduring of Chinese principles: the Middle Way — the path of qi between the binary opposites. The Chinese Dragon symbolizes qi.

Heart-mind

In her paper on Fuguan, Tea Sernelj points out that in Chinese philosophy, key concepts rarely appear alone, but rather in the framework of duili fanchou, meaning “binary categories.” Consequently, the concept qi always has a binary opposite, as in qi-zhi (vital or creative potential and human will), or li-qi (structure and creativeness).

Having its roots in Confucianism, the Chinese notion of consciousness is closely related to ethics. Only humans can develop ethics. Confucianism argued that humans can overcome their instinctive perception “because their bodies are inherently connected with their heart-minds. This unity enables them to follow the ‘significant,’ i.e. the benevolent, justified, ritualized and wise paths of social practice, instead of following the ‘insignificant,’ i.e. the instinctive, egoistic, and egocentric ways of individual benefits.”

China was historically a collectivist society, the result of its ancient agricultural past. Rice cultivation relied on a collective effort, from irrigation and planting to harvesting. This collectivist tradition, given structure by Confucianism, shaped a “collectivist consciousness” that sets China apart from India and Europe. It also shaped the Chinese approach to industrialization and modernization. How will this collectivist culture shape the development of artificial intelligence, and what some predict will be an inevitable artificial consciousness? Assuming AI can develop a mind of its own, Chinese artificial consciousness will have a cultural bias rooted in this Confucian, collectivist consciousness.

The quantum brain

The term “artificial intelligence” was coined by the American computer scientist John McCarthy. In 1956, McCarthy organized the legendary Dartmouth Conference that brought together the most prominent computer scientists of the era. The AI pioneers predicted that a machine as intelligent as a human being would exist in no more than a generation. They were given millions of dollars to make it happen, but it soon became apparent they had grossly underestimated the difficulty of the project.

After the so-called “AI Winter” that followed, interest in AI returned in the first decade of the 21st century. Scientists applied “machine learning” to solve problems facing academia and industry by taking advantage of powerful new computer hardware. With renewed interest came renewed optimism: experts predicted the imminent arrival of artificial general intelligence (AGI) with intellectual capabilities that would ultimately exceed the abilities of all of humanity.

Some experts went a step further, claiming that our understanding of the neurons in the brain, the “quantum” processes within them, and the connections between them would bring artificial consciousness within reach. Lightning-fast computing would be able to duplicate the functions of the brain’s 100+ billion neurons.

Despite our advanced knowledge about the brain’s structure, little progress has been made in understanding consciousness — what it is, how it develops, and how it operates. When consciousness starts to develop in childhood, neurons form connections with other neurons at a rate of 500,000 per second; the brain ultimately makes a minimum of 100 trillion connections.

The neurons not only make mutual connections within the brain but also with the nervous system that runs through the entire body. The nervous system is an inseparable part of the brain. Neurons and glial cells apparently become “repositories” of mental archetypes created by input from sight, hearing, taste, smell, and touch: light-dark, warm-cold, sound-silence, father-mother, etc. Connections between the cells allow the mind to evolve, classify, combine, and retrieve these mental images as needed.

The quantitative aspect of neurons pales in comparison to the quantum level that underlies the brain’s trillions of connections. Neurons play a crucial role in the five chemical processes active in the brain. Each chemical has different atomic properties and operates on different electromagnetic frequencies, ranging from delta (0.5–4 Hz) to beta (14–30 Hz). They control movement, speech, thought, and hearing, and they regulate everything from dopamine levels to the body’s circadian rhythm.

‘Digital ascension’

Despite the complexities of the human brain, the possibility of decoding the brain and storing its contents outside the body to achieve “immortality of consciousness” has been a staple of science fiction for decades. The idea goes back to 1965, when computer scientist Irving John Good predicted an “intelligence explosion” based on “recursive self-improvement of a machine intelligence.” In 1983, computer visionary Vernor Vinge popularized the notion of an intelligence explosion and first referred to the “technological singularity.” Vinge wrote:

“We will soon create intelligence greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible.”

Vinge’s 1993 article, “The Coming Technological Singularity: How to Survive in the Post-Human Era,” helped to popularize the idea of the singularity. “Within thirty years,” Vinge wrote, “we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.” In 2005, computer scientist Ray Kurzweil published a book that echoed Vinge’s idea: “The Singularity is Near: When Humans Transcend Biology.”

Computer scientist and theorist Jaron Lanier subsequently argued that the technology could be used to extend the operational life of the physical body and create a form of immortality that he called “digital ascension.” People would die in the flesh but would be uploaded to a computer, where their consciousness would be preserved.

The cybernetic roots of AI

Artificial intelligence is based on its predecessor cybernetics, the first coherent computer science that was developed during WWII. AI is a self-learning version of cybernetics. They both rely on Boolean algebra and on the same starting point: intention. What is the system supposed to achieve? One of the most familiar products of cybernetics is the autopilot used in airliners.

The autopilot helps to navigate the aircraft from point A to point B without the human pilot in charge. It operates within set parameters to take the shortest and safest route to its destination. Strong air currents may trigger the autopilot to make a course correction and stay within the set parameters. In doing so, it uses Boolean logic (IF/THEN/OR/AND, etc.). IF there are strong side winds, THEN initiate a course correction.
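
To make this IF/THEN logic concrete, here is a minimal sketch in Python of a rule-based course correction. The thresholds, units, and function name are illustrative assumptions, not taken from any real avionics system:

```python
# Minimal sketch (not real avionics code): a rule-based course correction
# in the spirit of the IF/THEN logic described above. The thresholds and
# parameter names are illustrative assumptions.

def course_correction(heading: float, target_heading: float,
                      crosswind_kts: float) -> float:
    """Return an adjusted heading that keeps the aircraft within set parameters."""
    MAX_CROSSWIND = 15.0   # assumed threshold, in knots
    MAX_DEVIATION = 2.0    # assumed tolerance, in degrees

    deviation = heading - target_heading

    # IF there are strong side winds OR the deviation exceeds the tolerance,
    # THEN initiate a course correction; otherwise hold the current heading.
    if crosswind_kts > MAX_CROSSWIND or abs(deviation) > MAX_DEVIATION:
        return target_heading
    return heading


print(course_correction(heading=92.5, target_heading=90.0, crosswind_kts=20.0))  # -> 90.0
```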

AI operates on the basis of the same Boolean logic but can be programmed to learn from its mistakes. Deep Blue, the IBM chess computer, beat world chess champion Garry Kasparov after learning from its mistakes. While impressive, a computer beating a human player is not magic. Using enormous computational power, AI can simply weigh and process more options faster than humans can. Deep Blue made some moves the programmers had not anticipated, but it always operated within its Boolean logic in order to calculate the probabilities of winning moves.
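
This weighing of options can be illustrated with a classic game-tree search. The toy minimax routine below is not Deep Blue’s actual algorithm; it is only a minimal sketch of exhaustively evaluating every branch within fixed rules:

```python
# Minimal sketch of how a game-playing program weighs options: an exhaustive
# minimax search over a toy game tree. Deep Blue's real search was far more
# elaborate; this only illustrates evaluating every branch within fixed logic.

def minimax(node, maximizing: bool) -> int:
    # A leaf is just a numeric evaluation of the position.
    if isinstance(node, int):
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)


# Toy tree: each inner list is a position, each integer a final evaluation.
game_tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(game_tree, maximizing=True))  # -> 3 (best guaranteed outcome)
```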

All AI systems are designed for a specific purpose. A chess computer cannot be used for a self-driving car. The system is only given access to the data it needs to perform a designated task. Unless programmed to do so, an AI system will not make up its own facts, just as an autopilot will not fly to an imaginary airport. Even artificial general intelligence (AGI) is domain-specific. It will reveal the cultural bias that the programmers, consciously or subconsciously, build into the system. American, Indian, and Chinese AGI will have many things in common, but each will carry the cultural imprint of its designers.

Sputnik moment

Experts predict China will play a key role in the future of AI, partly due to the scale of its economy. The rule of thumb in AI is: bigger is better. Bigger in this context means more data. China has nearly 1.4 billion people generating massive amounts of data, from consumer preferences to highly personal and sensitive data, such as medical records that can be used to study health outcomes from the use of medicines. The power that comes from having intricate data on more than a billion people, combined with being the world’s largest manufacturer of electronics, can only be imagined.

In 2016, Google’s AlphaGo beat Korean world champion Lee Sedol in the classic Chinese board game Go, a game far more complex than chess. AlphaGo’s victory reportedly caused a “Sputnik moment” in the mind of Chinese leader Xi Jinping. In East Asia, Go was played by military generals. Its complexity gives it a mystique far beyond a board game.

AI was already a thriving field in China, but AlphaGo prompted action by the Chinese government. Within 12 months, it added AI to its long-term development plan “Science and Technology Innovation 2030 Megaprojects.” Its goal is to “achieve major breakthroughs in brain-inspired intelligence, autonomous intelligence, hybrid intelligence, and swarm intelligence […]. AI should be deepened and greatly expanded into production, livelihood, and social governance.”

The same year, the Chinese government decided to include AI courses in primary and secondary schools nationwide. It published the first AI textbook for high schools, covering the history of AI and how the technology can be used in facial recognition, autonomous vehicles, and public security — part of an effort to take the AI sector out of its “ivory tower.” Using its role as “factory of the world,” China’s electronics makers and Internet giants may offer AI as a service, updated in real time 24 hours a day.

The rise of East Asia has raised questions about what modernity really means. Part of industrialization involves the application of modern science and technology. It does not necessarily involve “Westernization.” China, Japan, and Korea have modernized, but they have not fully Westernized. Their ancient Confucian worldview has not suddenly been replaced by the Greek-inspired worldview that gave rise to the Renaissance and the scientific and industrial revolutions. On the contrary, East Asia modernized using its own cultural heritage.

Classic Chinese culture had several attributes that we consider modern today. In the 20th century, Western industry embraced the value of teamwork, the modern equivalent of the ancient Chinese collectivist approach dating back to rice cultivation. Classic Chinese architecture applied eminently modern notions of efficiency. It relied on standardization, modularization, and prefabrication — all the hallmarks of an industrial process except the modern machine, which was first developed during the Industrial Revolution. More than 2,000 years ago, the Chinese built cities covering 100 square kilometers in a single year, relying on highly complex supply-chain management principles that anticipated the modern industrial processes used around the world today.

AI-human interface

The world awoke to the prowess of East Asian collectivism on an industrial scale during the second half of the 20th century, when the Japanese economic juggernaut all but vanquished the Western consumer electronics industry. “Voluntary export restrictions” by Japanese carmakers gave their Western counterparts time to study the Japanese model and catch up. Images of Japanese workers doing calisthenics before starting work flooded the world, while Western industry began a massive restructuring process.

The story is repeating itself in China, but on a scale that is shaking the global economy to its foundation. In barely 30 years, China became the world’s largest producer and exporter of electronics. Millions of people in the developing world buy Chinese consumer electronics, often before they have basic utilities like sanitation and running water, and their first use of the Internet is through Chinese mobile phones. They use mobile payment systems without having a conventional bank account. Chinese companies build infrastructure where there was none, and the nominally communist country has opened 140 “Confucius Institutes” throughout the world to introduce people to Chinese language, culture, and arts.

China is now targeting AI and a host of related technologies, including the Internet of Things (IoT) — the convergence of real-time analytics, machine learning, commodity sensors, and embedded systems. It will gradually create a massive electronic nervous system for all human activities. The human interface to this largely invisible network will be mobile devices and dedicated home consoles like Alibaba’s Tmall Genie, the Chinese version of digital assistants like Amazon’s Echo and Google Home.

Like its American counterparts, Tmall Genie can remind its owners of appointments, order food, schedule a car repair, call a taxi, or transfer money. The console is activated by calling its name: “Tmall Genie, when is the next solar eclipse? Tmall Genie, send flowers to my mother for her birthday. Tmall Genie, what is a black hole?” Chinese children grow up with Tmall Genie, asking it to tell Chinese fairy tales or sing children’s songs, talking to it as if it were a real companion, and even taking it along when they go outside to play. They grow up feeling they can communicate with machines just as they do with people.

Simulation vs duplication

Children are oblivious to predictions by AI experts that improvement in the capacity of computers will lead to humans being replaced by machines. Using the mathematical logic of exponential growth in computer speed, AI theorists claim that computers will outperform the human brain. We will be able to “scan” the brain, store its content in binary code on an external storage device, and reconstitute the contents for “playback” or transfer to another brain. This scenario, popularized in science-fiction, assumes that the brain, like the computer, is based on binary logic.

The question of whether the brain is analog or digital was first posed during the famous Macy Conferences on Cybernetics in the late 1940s and early 1950s, attended by most of the pioneers of modern digital computing. The issue led to heated debates, but the question was never resolved. The last prominent scientist to address the issue was theoretical physicist Freeman Dyson. In his lecture “Is Life Analog or Digital?” Dyson pointed to processes in the brain that display analog behavior.

Neurological electricity, with neurons flashing on and off, can be interpreted as a binary process, but critical functions of the brain display analog behavior that cannot be digitized without loss of critical information. The human ear, directly linked to the brain, responds to waves. To digitize sound, we “sample” the analog sound waves 44,100 times a second. Each sample is given a binary number and written to an electronic medium. For playback, we reconstitute the wave. “Digital music” doesn’t exist; there is only digitized music. The same applies to the brain. Binary systems can simulate analog brain processes, but cannot duplicate them. Once digitized, the original analog wave is lost forever.
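
The sampling described above can be sketched in a few lines of Python. The 440 Hz test tone and 16-bit quantization are illustrative assumptions; the point is that the continuous wave is reduced to a series of discrete binary values, and everything between the samples is discarded:

```python
# Minimal sketch of digitizing sound: an "analog" wave (here a 440 Hz sine
# tone) is measured 44,100 times per second and each sample is rounded to a
# 16-bit integer. The values between the samples are lost.
import math

SAMPLE_RATE = 44_100          # samples per second (CD audio)
FREQUENCY = 440.0             # an assumed test tone, in Hz
DURATION = 0.001              # one millisecond of sound

samples = []
for n in range(int(SAMPLE_RATE * DURATION)):
    t = n / SAMPLE_RATE                                  # time of the n-th sample
    amplitude = math.sin(2 * math.pi * FREQUENCY * t)    # the continuous value
    quantized = round(amplitude * 32767)                 # 16-bit binary approximation
    samples.append(quantized)

print(samples[:5])  # the first few digitized values
```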

The human ear tolerates the loss of information when listening to digitized music. The high sampling rate of 44,100 times a second makes the lost information nearly imperceptible to all but the most discerning ears. But if we drill down to the subatomic processes at work in the brain, an infinitely more complex organ than the ear, we meet a world as mysterious as quantum mechanics itself. Like subatomic particles, the brain is not an “object,” but rather a field of energy that changes constantly. It is embedded in its biological host.

Even assuming its contents can be scanned and stored in binary form, the brain would lose crucial information, no matter how high the sample rate. Converting it to binary form would result in an impaired brain, an inevitable consequence of (quantum) physics.

The distinction between analog and digital is of recent date. Analog refers to something similar to or resembling something else; the word digital has its roots in the Latin digitus, meaning finger. The dichotomy of analog-digital did not exist prior to the advent of binary computers. It resulted from the technique of storing perceptible data in binary form. In other words, the analog-digital distinction is a human construct. It has no equivalent in nature.

Idealistic and realistic

Theories about machines replacing humans and decoding the brain have obscured other, more immediate implications of AI, among them its cultural dimension. Increased computing power will enable future AI systems to absorb the content of all books ever written and all knowledge accumulated by humanity in recorded history within an hour. Given a self-learning function, such a system would rapidly outperform humans in virtually all domains — physics, medicine, mathematics, political science, and even psychology. Humans would be relieved of all mental work. Who will develop the algorithms needed for such super-intelligence?

In the domain of hard science, like mathematics, cultural factors don’t play a role. But philosophy, political science, and sociology involve cultural values based on different worldviews. Chinese super-intelligence will reflect the Chinese worldview. A Chinese digital assistant, like Tmall Genie, given access to a super-intelligence, would respond differently to philosophical, ethical and ideological queries than its American counterparts.

Ask Tmall Genie to quote its favorite philosopher, and it may cite Confucius: “Learn from the ancients, but avoid their mistakes.” Ask it for the ideal Middle Way, the path between the binary parameters defined by qi, and it may paraphrase China’s old sages: “One should be neither idealistic nor realistic; one should be both. One should be neither traditional nor original; one should be both. One should be neither materialistic nor spiritual; one should be both.” Ask it for the road to Nirvana, and its Confucian heart-mind may answer: “Let’s first learn to live in the world together, and Nirvana will come.”

This article appeared earlier in the Asia Times.


Written by Jan Krikke

Author of Creating a Planetary Culture: European Science, Chinese art, and Indian Transcendence
