Cybernetics explains what AI is — and what it isn’t
--
Artificial Intelligence is one of today's most important and most misunderstood sciences. Much of the misunderstanding comes from a failure to recognize its immediate predecessor, Cybernetics. Both AI and Cybernetics are built on binary logic, and both rely on the same principle for the results they produce: intent. The logic is universal; the intent is culture-specific.
A bit of history. In the 1940s, a team of scientists headed by Norbert Wiener developed the first self-regulating, self-correcting systems. Wiener called the field Cybernetics, from the Greek word for steersman. The US military used the technology to develop the first guided missiles, and in the 1950s it became standard equipment on airliners as the automatic pilot.
Wiener and his team built the required technology with digital (binary) rather than analog circuits. Most computers of the era were analog machines, but Wiener, like most other computer scientists of his generation, realized that digital systems were more precise and easier to program. By the 1960s nearly all computer design had moved to binary systems.
Another reason Wiener turned to digital systems was Boolean algebra. In the 19th century the English mathematician George Boole developed an algebra of classes based on binary choices: yes and no. Boolean logic could be implemented perfectly in digital circuitry, where an open switch means off and a closed switch means on. The idea was first proposed in the 1930s by Claude Shannon, the spiritual father of the Information Age. (Shannon also realized that a binary number, or a string of them, could be used to symbolize anything, from letters and symbols to sound and images.)
A textbook example of Boolean logic is this: if the symbol x represents the generic class of all “white objects” and the symbol y represents the generic class of all “round objects,” then the symbol xy represents the class of objects that are both white and round. In binary computing, Boolean logic is simply a sequence of yes/no, true/false choices. There is no “maybe,” unless “maybe” is itself made conditional on another test: “IF I get a raise, THEN I will buy a new car.”
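To make Boole's class example concrete, here is a minimal Python sketch (the objects and their properties are invented for illustration). The class xy maps directly onto the Boolean AND of two yes/no tests:

```python
# Boole's classes as yes/no predicates:
# x = "white objects", y = "round objects".
objects = [
    {"name": "golf ball",   "white": True,  "round": True},
    {"name": "chalk stick", "white": True,  "round": False},
    {"name": "orange",      "white": False, "round": True},
]

# xy: the class of objects that are both white AND round.
# Each membership test is a pure binary choice -- no "maybe" anywhere.
white_and_round = [o["name"] for o in objects if o["white"] and o["round"]]
print(white_and_round)  # ['golf ball']
```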
IBM’s AI system Deep Blue used Boolean logic to beat world chess champion Garry Kasparov: “IF White opens with d4, THEN I will counter with c5.” The Chinese game of Go allows far more possible moves than chess, yet Google’s AlphaGo program defeated world champion Ke Jie. A computer beating the world’s best Go player is impressive, but it is not magic: AI can simply weigh and process more options, faster, than humans can.
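Neither Deep Blue’s evaluation function nor AlphaGo’s networks are public in runnable form, but the core idea of weighing options can be sketched with a textbook minimax search. The game tree below is a made-up toy, not real chess: each leaf is a scored outcome, and the machine picks the branch that maximizes its score while assuming the opponent minimizes it.

```python
# Minimax over a toy game tree: a node is either a list of child nodes
# or a numeric score for a finished position.
def minimax(node, maximizing):
    if isinstance(node, (int, float)):  # leaf: a scored outcome
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Three candidate moves, each answered by two possible replies.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, maximizing=True))  # 3: best move, given the best reply
```

The whole search is nothing but nested IF/THEN comparisons; scale it up to millions of positions per second and you get the “weigh more options faster” advantage the paragraph describes.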
Consumer preferences, self-driving cars and logistics are still part of what we may call AI 1.0. Things become more complex when AI is applied to social, environmental and ethical domains, where values, judgment and intent come into play. The intent of a chess computer is straightforward: winning. AI applied to domains that affect people's lives involves issues that go beyond winning and losing. But the fundamental binary/Boolean principle remains the same.
Regardless of how AI develops, it will be based on binary/Boolean logic (a point we may have to revisit if scientists succeed in merging analog and digital computing, or when quantum computing arrives). For now, Boolean logic is just that: a sequence of binary choices among multiple options. IBM's Deep Blue made some moves its programmers did not anticipate, but it still acted within the binary/Boolean logic of the system. Unless instructed to ignore the difference between fiction and non-fiction, AI will not make up nonexistent facts, just as an autopilot will not fly an aircraft to a nonexistent airport.
AI can be applied to every imaginable domain of human concern, and it holds up a mirror. Whichever domain we choose (economics, the environment, education), it asks us to define our intention, just as an autopilot asks us to pick a destination.