For Artificial General Intelligence to become a reality, Large Language Models will have to be extended with short-term memory, long-term memory, and sensory memory, providing an abstraction over associative memory in both implicit and explicit forms. This will also need to extend into some form of cognitive modeling, as well as some quantum-information representation reaching into the geometry of space and time. Above all, aspects of sapience, self-awareness, and sentience will need to be achieved for plausible AGI.

AGI implies a combined effort between symbolic and sub-symbolic learning, so the natural cognitive architecture is a Hybrid AI. In industry, however, symbolic learning has largely been ignored in favor of sub-symbolic learning, and sub-symbolic learning carries serious deficiencies that stem from its reliance on probabilistic methods: the machine does not understand the probabilities it produces, cannot explain its black-box decisions, and cannot interpret them into new forms of knowledge.

Most so-called AI solutions are far from intelligent. Statistical methods have already been shown to be brittle, rigid, and uninterpretable. Statistics sits a level of abstraction above logic, one that machines simply cannot seem to grasp as part of their programmable circuitry. And researchers should stop muddying the waters with incorrect use of terms merely to project false pretences of progress and secure funding.
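To make the memory claim concrete, here is a minimal sketch of what such a layered memory stack around an LLM might look like. Everything in it is a hypothetical illustration: the class names, the consolidate/recall methods, and the use of plain strings as memory items are assumptions of this sketch rather than an existing API, and the language model itself is stubbed out entirely.

```python
from collections import deque


class SensoryMemory:
    """Short-lived buffer of raw percepts; old entries decay as new ones arrive."""

    def __init__(self, capacity: int = 4) -> None:
        self.buffer: deque[str] = deque(maxlen=capacity)

    def perceive(self, raw: str) -> None:
        self.buffer.append(raw)


class ShortTermMemory:
    """Bounded working context, loosely analogous to an LLM's context window."""

    def __init__(self, capacity: int = 16) -> None:
        self.turns: deque[str] = deque(maxlen=capacity)

    def add(self, item: str) -> None:
        self.turns.append(item)


class LongTermMemory:
    """Persistent associative store: explicit facts plus implicit co-occurrence links."""

    def __init__(self) -> None:
        self.facts: dict[str, str] = {}        # explicit (declarative) memory
        self.links: dict[str, set[str]] = {}   # implicit (associative) memory

    def store(self, key: str, value: str) -> None:
        self.facts[key] = value

    def associate(self, items: list[str]) -> None:
        # Items consolidated together become implicitly linked to one another.
        for a in items:
            for b in items:
                if a != b:
                    self.links.setdefault(a, set()).add(b)

    def recall(self, key: str) -> tuple[str | None, set[str]]:
        # Explicit lookup plus whatever the key is implicitly associated with.
        return self.facts.get(key), self.links.get(key, set())


class MemoryAugmentedAgent:
    """Wires the three stores together; the language model itself is out of scope here."""

    def __init__(self) -> None:
        self.sensory = SensoryMemory()
        self.short_term = ShortTermMemory()
        self.long_term = LongTermMemory()

    def observe(self, raw: str) -> None:
        self.sensory.perceive(raw)
        self.short_term.add(raw)

    def consolidate(self) -> None:
        # Promote the current working context into associative long-term memory.
        items = list(self.short_term.turns)
        for item in items:
            self.long_term.store(item, item)
        self.long_term.associate(items)


agent = MemoryAugmentedAgent()
agent.observe("the stove is hot")
agent.observe("touching the stove causes pain")
agent.consolidate()
print(agent.long_term.recall("the stove is hot"))
```

The point of the design is that explicit recall (a fact lookup) and implicit recall (associations formed at consolidation time) fall out of the same store, which is the implicit/explicit distinction drawn above.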
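The hybrid point can be sketched the same way. The toy example below is again hypothetical: a fixed linear scorer stands in for a learned sub-symbolic model (the weights and feature names are invented for illustration), and a small symbolic rule layer consumes its output so that every conclusion carries an explicit justification, which is the interpretability that purely statistical methods are argued above to lack.

```python
import math


def neural_score(features: dict[str, float]) -> float:
    # Stand-in for a learned sub-symbolic model: a fixed linear scorer
    # squashed through a sigmoid. A real system would learn these weights.
    weights = {"has_fever": 2.0, "has_cough": 1.5, "vaccinated": -1.0}
    z = sum(weights.get(name, 0.0) * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))


# Symbolic layer: each rule is (condition, conclusion, justification).
RULES = [
    (lambda p, f: p > 0.8,
     "refer_to_doctor", "model confidence exceeded 0.8"),
    (lambda p, f: f.get("has_fever", 0.0) > 0 and f.get("has_cough", 0.0) > 0,
     "flag_flu_like", "explicit rule: fever AND cough"),
]


def decide(features: dict[str, float]) -> list[tuple[str, str]]:
    # Every conclusion is returned with the rule that produced it,
    # so the decision is inspectable rather than a bare probability.
    p = neural_score(features)
    return [(conclusion, why) for cond, conclusion, why in RULES if cond(p, features)]


print(decide({"has_fever": 1.0, "has_cough": 1.0, "vaccinated": 0.0}))
# [('refer_to_doctor', 'model confidence exceeded 0.8'),
#  ('flag_flu_like', 'explicit rule: fever AND cough')]
```

Here the sub-symbolic component supplies graded evidence and the symbolic component turns it into knowledge a human can audit; neither layer alone provides both.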