When we treat Artificial Intelligence as a tool for predicting the future, which many developers imagine to be the main outcome of deep learning systems, we increase the risk of losing the lessons of the past that, in one way or another, contributed to the quality of current cognitive systems.
I call these lost links. And one of the links that I think is most overlooked, despite its high relevance, is the component that applies logical rules to a knowledge base in order to deduce new information. Or, in other words, the classic inference engines.
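The core idea of such an engine can be sketched in a few lines: a forward-chaining loop that keeps applying rules to known facts until nothing new can be deduced. This is a minimal illustration, not any particular system's implementation, and the rule and fact names are hypothetical.

```python
# Minimal forward-chaining inference engine: each rule is a pair
# (set of premises, conclusion). The engine repeatedly fires rules
# against the set of known facts until no new fact is deduced.

def infer(facts, rules):
    """Return the closure of `facts` under the given if-all-then rules."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire the rule only if every premise is already known
            # and the conclusion is genuinely new information.
            if conclusion not in known and premises <= known:
                known.add(conclusion)
                changed = True
    return known

# Hypothetical knowledge base for illustration.
rules = [
    ({"has_feathers"}, "is_bird"),
    ({"is_bird", "can_fly"}, "can_migrate"),
]
facts = {"has_feathers", "can_fly"}
print(sorted(infer(facts, rules)))
# → ['can_fly', 'can_migrate', 'has_feathers', 'is_bird']
```

Note that `can_migrate` is deduced only because an earlier rule firing produced `is_bird`: new information feeds further deduction, which is exactly the behavior the old inference systems provided.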
Machine learning is strongly credited with enabling the creation of programs without programming, as if this were a new paradigm, but the truth is that many platforms have pursued exactly the same goal for a long time, using the most diverse algorithms and logics, such as evolutionary systems, state machines, and others, beyond, of course, the inference systems mentioned above.
For problems of limited complexity, such as solving games, we can even consider disregarding the lost links, as the AlphaZero platform recently demonstrated when applied to the game of Chess. In practice, however, neither Chess nor Go is actually 'solved', since the mathematical complexity involved still makes these systems merely probabilistic and nondeterministic.
However, if our problems involve unbounded uncertainty and complexity, such as opening or closing a position in the capital markets, we need not only the lost links but also all the other new links that can be imagined by natural and artificial brains and minds.
In this sense, I believe that discovering and deducing new information is, in practice, one of the most valuable mechanisms found by evolution to ensure our survival.
And it is probably a field of research that we must evolve if machines are to truly learn without supervision or labeled data, and if we are to create proper ways for AI systems to solve these problems.
By Rogerio Figurelli on 05/29/2020