Most of human learning happens through symbols. We do not remember data in a quantitative fashion. Even when we remember numbers (for example, a phone number or the value of Pi) we store them as a series of symbols rather than as float/int values. Our arithmetic calculations are also symbol based. This symbolic representation gives us the power of abstraction. If we want machines to emulate humans, machines should also understand symbols. To some extent this already happens when a variable is given a name and is referred to by that name in subsequent code. But in this case the machine is not learning the symbol; rather, the programmer is in a way hardcoding it. If a machine can truly learn symbols, learn their associations, combine symbols to form complex symbols, and form abstractions, it will be closer to humans in learning ability. In a way, digital machines already use 0's and 1's as symbols at the very base level and build abstractions around them.
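To make the point about symbol-based arithmetic concrete, here is a minimal sketch (the function name and representation are illustrative, not from the original text): addition performed purely on digit symbols, the way humans add with pencil and paper, rather than on machine-level numeric types.

```python
# Arithmetic over symbols: digits are treated as characters in an
# alphabet, and addition is defined by symbol lookup and carry rules,
# never by converting the whole number to an int or float.

DIGITS = "0123456789"

def add_symbolic(a: str, b: str) -> str:
    """Add two non-negative integers given as strings of digit symbols."""
    result, carry = [], 0
    # Walk both symbol sequences from the rightmost digit, padding the
    # shorter one with the '0' symbol, as in pencil-and-paper addition.
    for da, db in zip(reversed(a.rjust(len(b), "0")),
                      reversed(b.rjust(len(a), "0"))):
        total = DIGITS.index(da) + DIGITS.index(db) + carry
        carry, digit = divmod(total, 10)
        result.append(DIGITS[digit])
    if carry:
        result.append(DIGITS[carry])
    return "".join(reversed(result))

print(add_symbolic("987", "45"))  # 1032
```

The machine here manipulates only symbols and rules over them; the "meaning" of the digits lives entirely in the carry procedure, which is exactly the kind of abstraction the paragraph describes.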

The need for performing operations on symbols led to the development of lambda calculus and the Lisp family of languages. This was the first major step in AI. Although it happened more than 50 years ago, this approach to AI has not been given as much importance. The computational world got lost in other pursuits such as data processing and black-box model fitting (including ANNs). There needs to be a revival of symbolic manipulation and lambda calculus for AI to truly progress beyond function fitting.
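A small sketch of what lambda calculus looks like in practice (expressed in Python for readability; the encoding shown is the standard Church-numeral one, not something from the original text): numbers and addition built from nothing but function abstraction and application.

```python
# Church numerals: natural numbers encoded purely as lambdas.
# A numeral n is a function that applies f to x exactly n times.

ZERO = lambda f: lambda x: x                       # apply f zero times
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))    # one more application of f
ADD = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))  # m applications after n

def to_int(n):
    """Decode a Church numeral by counting how many times it applies f."""
    return n(lambda k: k + 1)(0)

TWO = SUCC(SUCC(ZERO))
THREE = SUCC(TWO)
print(to_int(ADD(TWO)(THREE)))  # 5
```

Everything here is a symbol-manipulating function; there is no built-in arithmetic until the final decoding step. This is the style of computation that Lisp grew out of.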
