One can blame much of the complexity in AI on statistics. The main reason AI is going nowhere is statistics itself: machines have no understanding of what they are learning in a statistical model, and there is no formal method for testing statistical models that operate as black boxes. So why bother working through something that cannot be fixed? To explain such a model, one has to build yet another model for explainability, piling uncertainty on top of uncertainty. One is simply trading one form of complexity for another. As the old adage goes: 'keep it simple, stupid'.

Even the machine needs an interpretation device to work through the statistical model, and that device is typically provided in the form of a codified language. But the moment one codifies a model, one is right back to using logic and an instruction set that a machine can understand and process. So why not reduce the process to a logical set of representations from the start, and let the machine work out the remaining statistical invariances through axiomatization? After all, even when one builds a statistical model, it eventually reduces to logic anyway.
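To make that last claim concrete, here is a toy sketch (not from the original text) in which a one-feature decision stump stands in for "a statistical model": the fitting step searches for a threshold, but the resulting predictor is nothing more than a single logical rule, an if/else a machine can execute directly. The data, names, and functions are all hypothetical and chosen only for illustration.

```python
# Toy data: (feature value, label). Purely illustrative.
data = [(0.9, 0), (1.2, 0), (1.8, 0), (2.4, 1), (3.1, 1), (3.5, 1)]

def fit_stump(samples):
    """Pick the threshold that minimises misclassifications (a crude 'statistical' fit)."""
    best_threshold, best_errors = None, len(samples) + 1
    for threshold, _ in samples:
        errors = sum((x >= threshold) != bool(y) for x, y in samples)
        if errors < best_errors:
            best_threshold, best_errors = threshold, errors
    return best_threshold

threshold = fit_stump(data)

def predict(x):
    # The fitted "model" has collapsed into one logical rule: an instruction set.
    if x >= threshold:
        return 1
    return 0

print(f"learned rule: label = 1 if x >= {threshold} else 0")
print([predict(x) for x, _ in data])  # reproduces the training labels
```

However the parameters are estimated, what the machine ultimately runs is the codified rule at the bottom, which is the point being argued here.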