14 July 2023

Application Tracking Systems

Application Tracking Systems are the bane of organizational recruitment. They reject roughly 75% of applications, and the majority of those rejections come down to formatting. Even if you are lucky enough to get past the ATS to the recruiter, the recruiter is also likely to be interested in counting keywords. In most organizations, recruitment processes are antiquated and handled by people who don't even understand the hiring needs. Machine learning models compound the issue by automating the counting of word distributions that the recruiter used to do manually. In essence, they do not solve the real problem: understanding semantics and context, looking beyond the surface of a document and into discourse processing. The following things are typical with an ATS:

  • Submit an application and receive a standard rejection email within minutes. There is no way a recruiter could have reviewed the application that quickly.
  • Because the automated systems look at word distributions, they miss the context and content of the work.
  • It is very common to get rejected by an ATS but get approached by a talent partner for the very same role.
  • Since it looks at word distributions, having too many keywords confuses the automated system.
  • It also seems to look at job titles. What if the job title did not exist in the market for the length of experience required?
  • What if the technical skill required did not exist for the length of experience required? Asking for ten years of experience in LLMs will get a lot of CVs rejected, because candidates are very unlikely to have that many years of experience. Or a CV mentions tons of experience with embedding models but no mention of LLMs.
  • Acronyms and abbreviations are tricky.
  • Skills are not properly evaluated in context.
  • Formatting of the CV seems to be the end game. But if that is the case, it defeats the purpose, because after passing the ATS the CV will be screened by the recruiter and then by the manager, who likely has their own requirements.
  • If the system rejects 75% of CVs then it's proof that the system is not working.
  • If a talent partner later approaches you after the ATS rejection, then that is further proof that the system does not work.
  • If the ATS plus the recruiter rejects you for a role that you closely match, compared to candidates who have very little experience in the area, then it is very likely that there is serious bias in the recruitment process.
  • At times, the entire recruitment process is designed to legally filter out minorities, further drawing the curtain over institutional racism within an organization.
  • It may even ask for information about protected characteristics under the cover of diversity, equity, and inclusion initiatives, transforming them into discriminatory filters.
  • Hiring for keywords and job titles is a useless way to match jobs to candidate profiles, or better yet, to build a candidate profile in the first place.
  • No one has time to customize their CV for every job just to game the ATS.
  • ATS resume parsing is useless if it is based on keywords, job titles, file type, formatting, and special characters.
  • An ATS ignores discourse processing, which is the essence of resume parsing.
  • If you apply to more than one role, the ATS might treat that as spam.
  • It also proves that human resources is the weakest link when an organization claims there is a skills shortage in the market when there really isn't. The issue is with rejecting 75% of applications through an ATS process, some of which are perfectly good matches for the job requirements.
  • Rejecting a CV just because it doesn't meet a particular formatting criterion is not a real reason to decline a candidate's application for a job.
  • This is a case where AI automation is bad: likely unethical, untrustworthy, irresponsible, and lacking in assurance.
  • Recruiters are terrible at reading and understanding CVs, and an ATS makes it even worse.
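The keyword-counting failure mode described above can be sketched in a few lines. This is a hypothetical toy scorer, not any real vendor's algorithm; it only illustrates what matching on raw word distributions looks like and why it misses context:

```python
import re
from collections import Counter

def keyword_score(cv_text, keywords):
    """Score a CV by counting how often the job's keywords appear.

    This is naive word-distribution matching: it counts surface
    tokens and ignores context, synonyms, and acronyms entirely.
    """
    tokens = Counter(re.findall(r"[a-z]+", cv_text.lower()))
    return sum(tokens[k.lower()] for k in keywords)

keywords = ["llm", "transformer", "python"]

# A CV describing years of embedding-model work, but it never says "LLM".
cv_a = "Eight years building embedding models and semantic search in Python."
# A CV that merely repeats the buzzwords.
cv_b = "LLM LLM LLM transformer transformer Python enthusiast."

print(keyword_score(cv_a, keywords))  # low score despite relevant experience
print(keyword_score(cv_b, keywords))  # high score from keyword stuffing alone
```

The irrelevant-but-stuffed CV outscores the genuinely experienced one, which is exactly the failure the list above describes.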

1 July 2023

AGI

For Artificial General Intelligence to see reality, there has to be an extension of Large Language Models that comprises short-term memory, long-term memory, and sensory memory, providing an abstraction of associative memory in implicit and explicit forms. This will also need to extend into some form of representation in cognitive modeling, as well as quantum information to extend into the geometry of space and time. Above all, aspects of sapience, self-awareness, and sentience will need to be achieved for plausible AGI. AGI refers to a combined effort between symbolic and sub-symbolic learning, so a natural cognitive architecture is Hybrid AI in nature. However, in industry, symbolic learning has largely been ignored in favor of sub-symbolic learning, and sub-symbolic learning comes with a lot of deficiencies from its focus on probabilistic methods. The machine neither understands these probabilities, nor can it explain its black-box decisions, nor is it able to interpret them into new forms of knowledge. Most so-called AI solutions are far from intelligent. Statistical methods have already been shown to be brittle, rigid, and uninterpretable. Statistics is a level of abstraction above logic that machines just cannot seem to understand as part of their programmable circuitry. And researchers should really stop muddying the waters with incorrect use of terms only to show false pretences of progress to secure funding.
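The memory stores and the symbolic/sub-symbolic split described above can be caricatured as a skeleton. Every name here is hypothetical; this is a speculative sketch of what a hybrid architecture might look like, not a working design:

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    """Hypothetical memory stores for the hybrid sketch above."""
    sensory: list = field(default_factory=list)     # raw, short-lived percepts
    short_term: list = field(default_factory=list)  # working context
    long_term: dict = field(default_factory=dict)   # consolidated knowledge

class HybridAgent:
    """Toy hybrid agent: sub-symbolic scoring gated by symbolic rules."""
    def __init__(self, rules, scorer):
        self.memory = Memory()
        self.rules = rules    # symbolic: explicit, inspectable if-then knowledge
        self.scorer = scorer  # sub-symbolic: an opaque learned probability

    def decide(self, percept):
        self.memory.sensory.append(percept)
        for rule in self.rules:           # a symbolic rule can veto outright
            if not rule(percept):
                return "reject"
        p = self.scorer(percept)          # statistical confidence estimate
        return "accept" if p > 0.5 else "defer"

agent = HybridAgent(rules=[lambda x: "unsafe" not in x], scorer=lambda x: 0.9)
print(agent.decide("deploy model"))   # "accept": passes the rule, high score
print(agent.decide("unsafe deploy"))  # "reject": the symbolic rule vetoes it
```

The point of the sketch is the division of labor: the rules are readable and explainable, while the scorer, like any sub-symbolic model, is not.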

Generative AI

Generative AI is not really AI. The only generative part is the application of deep learning methods, which is all statistics. The broader field of Machine Learning makes up only thirty percent of AI. There are a lot of incorrect terms floating around in academia trying to confuse people about AI progress. In the last fifty years there have not been any significant groundbreaking advancements in AI, apart from the renaming of fields and the reuse of methods that have been around for decades. For example, Deep Learning basically comes from reusing methods in Neural Networks. Large Language Models are also a trendy topic. However, LLMs are simply an engineering extension of embedding models, which come under the sub-area of distributional semantics, another area that has been around for decades in information retrieval. In most cases of Machine Learning methods, the machine develops no formal context or understanding, apart from the use of an intermediate programming language to translate probabilities into logical form using computational syntax and semantics. If the machine developed any form of understanding, there wouldn't be any need to use a programming language to build a machine learning model. The other significant issue in the field is the wrong types of people hired at organizations, who primarily come from math and statistics backgrounds. The people conducting AI research should really come from computer science backgrounds, where the full spectrum of subject matter is formally taught in both theory and practice. Generative AI should really be called Generative Deep Learning, as that is pretty much the only area covered in application.
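The claim that LLMs extend embedding models rests on distributional semantics: words that occur in similar contexts get similar vectors, and similarity is measured geometrically. A minimal sketch, with co-occurrence numbers invented purely for illustration (not from any real corpus):

```python
import math

# Toy vectors over three context dimensions ("data", "model", "recipe").
# The values are made up; real embedding models learn them from corpora.
vectors = {
    "embedding":   [4.0, 5.0, 0.0],
    "transformer": [3.0, 6.0, 0.0],
    "cake":        [0.0, 1.0, 5.0],
}

def cosine(a, b):
    """Cosine similarity of two word vectors: the core operation of
    distributional semantics, used in information retrieval for decades."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

print(cosine(vectors["embedding"], vectors["transformer"]))  # high: shared contexts
print(cosine(vectors["embedding"], vectors["cake"]))         # low: different contexts
```

Nothing in this computation involves meaning; it is geometry over counts, which is the point being made above about statistics standing in for understanding.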

Conference Index
