Machine learning is, by its very nature, built on statistics. If we are to advance AI, we have to think beyond machine learning. Humans rarely use explicit statistics in everyday life, yet they have sophisticated mechanisms for learning through experience. Those experiences are retained in memory and form the basis of new patterns of learning. Every day we form associations and relationships with the things around us as we accumulate new experiences. Machine learning, on the other hand, still requires large amounts of training data, and its models must balance bias against variance. Transfer learning to unseen, untrained data is still a challenge. Deep learning architectures can be built into very complex and sophisticated structures, but this complexity is unsustainable when the prohibitive cost is weighed against the returns it achieves.

AI today is still very narrow and focused. Any general AI will require thinking beyond the standard concepts of probability in statistics. In fact, AI is not just about machine learning; nearly eighty percent of the field is based on computer science concepts. The only realistic way to approach advanced AI is to take inspiration from the human mind and brain and to build models that are highly complex yet cheap to assemble as conceptual building blocks of a hybrid system. Such systems may even be divided into subsystems, much as the human body is organized into organs and the brain into specialized regions. A natural progression for AI is to combine knowledge representation and reasoning with probabilistic methods to provide that kind of adaptability in generalizable learning; a rough sketch of such a combination is given at the end of this section. Probability alone is not the answer to understanding emotions or other generalizable forms of human learning, and relying on it leads to brittle, rigid models, not to mention a significant margin of error.

Machine learning provides no assurances for key AI functions, and in most cases it blurs the line between what a machine genuinely comprehends and a false sense of articulation. For AI to be truly autonomous and live among humans, the learning process not only needs to take ethics into account but must also be able to reduce such margins of error on its own. Increasingly, reinforcement learning methods are being used that do not require huge amounts of training data. Even so, learning can be slow at first, and feedback loops can reinforce incorrect behavior, which can be disastrous in critical environments such as medicine or autonomous driving. An interpretable representation of knowledge is needed to define context, along with some form of logical reasoning constructs. Going further, both long-term and short-term retention of memory across every iteration of learning is necessary in order to learn from mistakes and past experiences; a toy illustration of this, framed as reinforcement learning with a replay memory, also follows below.

It is plausible that, to mimic the nervous system, one could lean more on statistical thinking to replicate the concepts of impulse and the human senses. Advancing AI then becomes a joint effort of advancing hardware as well as software. Hardware may even take the form of naturally inspired computation to enrich how information is encoded. AI still has a long way to go before it can be regarded as a sentient being that can cohabit and live safely among humans, or even surpass them as a superintelligence.
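As a rough, hypothetical sketch of the hybrid direction described above, the following Python snippet combines a small symbolic rule base (the interpretable knowledge representation and reasoning side) with a simple probabilistic treatment of evidence (a noisy-OR combination). The rule names, weights, and acceptance threshold are illustrative assumptions only, not a reference implementation of any particular system.

    # A hypothetical sketch of a hybrid system: symbolic rules supply interpretable
    # context and reasoning, while a probabilistic layer (noisy-OR) weighs evidence.
    # Rule names, weights, and the acceptance threshold are illustrative only.

    from dataclasses import dataclass

    @dataclass
    class Rule:
        premises: frozenset   # facts that must all hold for the rule to fire
        conclusion: str       # fact the rule asserts
        weight: float         # prior confidence in the rule, in [0, 1]

    def infer(facts, rules):
        """Forward-chain over the rules, combining repeated evidence for the same
        conclusion with noisy-OR rather than treating it as hard logical truth."""
        belief = {}
        fired = set()
        changed = True
        while changed:
            changed = False
            for i, rule in enumerate(rules):
                if i in fired or not rule.premises <= facts:
                    continue
                fired.add(i)
                prior = belief.get(rule.conclusion, 0.0)
                belief[rule.conclusion] = 1 - (1 - prior) * (1 - rule.weight)
                if belief[rule.conclusion] > 0.9:     # arbitrary acceptance threshold
                    facts.add(rule.conclusion)        # derived facts can trigger more rules
                changed = True
        return belief

    rules = [
        Rule(frozenset({"wet_road", "low_visibility"}), "reduce_speed", 0.8),
        Rule(frozenset({"pedestrian_near"}), "reduce_speed", 0.95),
        Rule(frozenset({"reduce_speed"}), "increase_following_distance", 0.7),
    ]
    facts = {"wet_road", "low_visibility", "pedestrian_near"}
    print(infer(facts, rules))   # e.g. belief in reduce_speed combined to 0.99

The point of the sketch is only that the symbolic layer stays readable and auditable while the probabilistic layer keeps conclusions graded rather than brittle; any realistic system would need far richer representations on both sides.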
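Similarly, the concerns about reinforcement learning and about short- and long-term memory can be made concrete with a toy example. The sketch below, again an illustrative assumption rather than a prescribed method, runs tabular Q-learning on a hypothetical five-state corridor: a small replay buffer stands in for short-term memory of recent experience, while the Q-table acts as the long-term store that consolidates what has been learned.

    # A toy illustration of the reinforcement-learning concern above: tabular
    # Q-learning on a hypothetical five-state corridor. The replay buffer stands in
    # for short-term memory of recent transitions; the Q-table is the long-term
    # store. States, rewards, and hyperparameters are illustrative assumptions.

    import random
    from collections import deque

    N_STATES, ACTIONS = 5, (-1, +1)           # move left or right along the corridor
    ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.2    # learning rate, discount, exploration

    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    replay = deque(maxlen=100)                # short-term memory of recent experience

    def step(state, action):
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0    # goal at the right end
        return nxt, reward, nxt == N_STATES - 1

    for episode in range(200):
        state = 0
        for _ in range(200):                  # cap episode length
            # epsilon-greedy action, breaking ties between equal Q-values at random
            if random.random() < EPSILON:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: (Q[(state, a)], random.random()))
            nxt, reward, done = step(state, action)
            replay.append((state, action, reward, nxt, done))
            # learn from a transition sampled from short-term memory, then
            # consolidate the update into the long-term Q-table
            s, a, r, s2, d = random.choice(replay)
            target = r if d else r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
            Q[(s, a)] += ALPHA * (target - Q[(s, a)])
            state = nxt
            if done:
                break

    # inspect the greedy action chosen in each state after training
    print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})

Even in this tiny setting, the early episodes wander before the reward signal propagates back through the table, and a poorly shaped reward or a polluted replay memory would be consolidated just as readily as a correct one, which is exactly the feedback-loop risk noted above for safety-critical domains.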