Producing a successful business model is challenging. Maintaining a competitive advantage and positive brand awareness is even more challenging. Price optimization, balancing profit against lost margins, is another struggle many businesses face today, and consumers are becoming ever more knowledgeable about what they buy. Products and services follow different business models, the supply chain is critical for ecommerce businesses, and big data investments demand distilled insights about customers. Each type of business also has its own challenges and responsibilities around regulatory compliance and the management of employees.

One useful way to evaluate all of this is to run a simulation of a business product or service as a dry run. Such a simulation could incorporate multiagent interactions between buyers, suppliers, and sellers, as well as other competitors. Probabilistic variables could be used to capture the various uncertainties of the real world before the business goes to market with a product or service. Venture capitalists could likewise use such simulations to better measure their investment opportunities and the risk of funding a start-up. One example: should company X merge with company Y, given a set of N competitor companies, and how will this affect long-term growth and brand identity? A software company could use a simulation to understand how a product will fare against changing trends. Even a failing business could use such a simulation to gain insight into how to turn itself into a profitable venture. There may be aspects of the business that have been overlooked which a simulation could bring to light.

We already have simulations to load test production applications, economic models to understand a nation's economy, and defence enterprises that simulate strategies for warfare. Why not a simulator to dry run a business idea for its potential future profitability, or even to understand the existing market challenges?
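As a rough illustration, here is a minimal Monte Carlo sketch in Python of such a dry run for a single pricing decision against one competitor. Everything in it, the buyer valuation distribution, the competitor price, the parameters, is an invented assumption for the example; a real simulation would add supplier agents, richer buyer behavior, and many more uncertain variables.

# Minimal Monte Carlo sketch of a multiagent market dry run.
# All names, distributions, and parameters are hypothetical assumptions.
import random

def simulate_market(price, unit_cost, n_buyers=1000, competitor_price=9.5, trials=500):
    """Estimate expected profit for one price point against one competitor."""
    profits = []
    for _ in range(trials):
        units_sold = 0
        for _ in range(n_buyers):
            willingness_to_pay = random.gauss(10.0, 2.0)  # uncertain buyer valuation
            if willingness_to_pay >= price and price <= competitor_price:
                units_sold += 1  # buyer can afford it and prefers us over the competitor
        profits.append(units_sold * (price - unit_cost))
    return sum(profits) / len(profits)

if __name__ == "__main__":
    for price in (8.0, 9.0, 9.5, 10.0):
        print(price, round(simulate_market(price, unit_cost=6.0), 2))

Even this toy version exposes the price optimization trade-off: a higher price widens the margin but loses buyers, and pricing above the competitor loses the whole market.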
22 February 2016
17 February 2016
Intelligent Agent for Academics
Adaptive intelligent tutoring using machine learning techniques could improve students' academic performance. Ideally, though, such tutoring should focus on positive reinforcement while avoiding the excessive negative feedback that can arise from collaborative techniques or comparison against higher-achieving students. Although students do learn collaboratively, individualized tutoring may be more useful. For a tutor, subject matter understanding becomes critical if it is to replace a human skill, and the tutor needs to adapt its pace to the learning pattern of each individual (a minimal sketch of this pacing idea appears at the end of this post). Online learning and classrooms are slowly but surely becoming the norm as MOOCs set the trend, and online universities accessible to everyone are very much the future.

However, such practices need to extend to research as well. An intelligent agent researcher could thus work on extending subject matter rather than just teaching it. At the same time, the intelligent agent could publish peer-reviewed papers on its subject of interest and produce a LaTeX-ready publication. The peer review process could itself involve multiagent interactions and a search for the right citations. Other intelligent agent roles could include a plagiarism detection agent, a proctor agent, and an instructor agent. It is still very early days for the intelligent agent as researcher, as even tutoring agents have not quite matched human potential. Moreover, one needs to look towards linked data to connect so many distributed educational institutions into a global, decentralized hub for open access knowledge sharing.
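As a minimal sketch of the pacing idea mentioned above, the Python snippet below adapts question difficulty to a learner's recent success rate while keeping the feedback positive. The class name, thresholds, and window size are all invented assumptions, not a real tutoring system.

# Sketch of pace adaptation: difficulty follows the learner's rolling
# success rate, and feedback stays encouraging either way.
from collections import deque

class AdaptiveTutor:
    def __init__(self, window=5):
        self.recent = deque(maxlen=window)  # rolling record of correct answers
        self.difficulty = 1                 # 1 = easiest, 5 = hardest

    def record_answer(self, correct):
        self.recent.append(correct)
        rate = sum(self.recent) / len(self.recent)
        if rate > 0.8 and self.difficulty < 5:
            self.difficulty += 1            # learner is ready for harder material
        elif rate < 0.4 and self.difficulty > 1:
            self.difficulty -= 1            # slow the pace rather than criticize

    def feedback(self, correct):
        if correct:
            return "Well done, let's try something harder."
        return "Good attempt; here is a similar problem to practice."

tutor = AdaptiveTutor()
for answer in (True, True, True, True, True):
    tutor.record_answer(answer)
print(tutor.difficulty)  # escalates as the learner keeps succeeding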
Brand Ontologies
DBpedia is a massive pool of semantic knowledge derived from Wikipedia. A learning agent can use DBpedia as a knowledge source for understanding the open world, and entity extraction can use entity linking via URIs mapped to DBpedia. However, more context is needed beyond the open world that we as humans understand. GoodRelations is an ecommerce web vocabulary, fairly generic and customizable, with support from schema.org. It therefore seems necessary to extend the open world with brand ontologies that provide a more focused semantic, conceptual understanding of products and services. A GoodBrands ontology or vocabulary could provide this further extension in scope, and could be partly automated by web scraping brand sites to expose customizable brand-related schemas; one obvious approach is to use the site map to identify a brand's products and services. These schemas could be formalized and then reference-linked to DBpedia as a root source for particular contextual concepts, avoiding any ambiguity of scope when disambiguating. The semantic aspects of natural language processing could be handled via the Lemon framework. One could also extend the brand ontology to distinguish durable from non-durable goods, consumable from non-consumable, even whether something is a product or a service, a thing or a concept. With metadata understanding of products and services at this more granular level, an agent could draw more insightful inferences and achieve better extraction, with domain adaptation over brands and their related semantics in the open world.
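A minimal sketch in Python, using rdflib, of what such a description could look like: a scraped product is typed with GoodRelations, named with schema.org, given hypothetical GoodBrands properties, and reference-linked to DBpedia. The gb: namespace, the brand, and the property names are invented for illustration.

# Sketch of a hypothetical GoodBrands product description anchored to DBpedia.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

GR = Namespace("http://purl.org/goodrelations/v1#")
SCHEMA = Namespace("http://schema.org/")
GB = Namespace("http://example.org/goodbrands#")  # hypothetical vocabulary
DBR = Namespace("http://dbpedia.org/resource/")

g = Graph()
g.bind("gr", GR); g.bind("schema", SCHEMA); g.bind("gb", GB)

product = URIRef("http://example.org/acme/widget-1")  # invented brand URI
g.add((product, RDF.type, GR["ProductOrService"]))
g.add((product, SCHEMA.name, Literal("Acme Widget")))
g.add((product, GB.durability, Literal("durable")))    # durable vs non-durable
g.add((product, GB.consumable, Literal("false")))      # consumable vs non-consumable
g.add((product, RDFS.seeAlso, DBR["Widget"]))          # root concept in DBpedia

print(g.serialize(format="turtle"))

Serialized as Turtle, this gives an agent a granular, disambiguated record of the product that stays anchored to an open world concept.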
Labels: big data, dbpedia, ecommerce, intelligent web, linked data, metadata, natural language processing, semantic web, text analytics
16 February 2016
AI for Data Validation and Verification
It is predicted that robots will replace many jobs in the next 30 years. One of the first critical roles they should take over, however, is the verification and validation of data input against human error. This is one of the most common problems in business: human error can enable fraud, not to mention wrongly decline someone's application for a mortgage, loan, security clearance, or job. A business also requires data entry in the supply chain, in ledger accounting, and more, so the role is critical across multiple business sectors. Often, the manual task of transferring a form entry into a system needs to be replaced through automation, with critical verification and validation checks in place to ensure the data is correct, meet compliance, and mitigate risk.

Data validation checks that the data entered is sensible and reasonable; it does not check the accuracy of that data. Types of validation include checks for digits, format, length, lookup against acceptable values, presence of a field entry, range, and spelling. Data verification checks that the data entered matches the source, typically in one of two ways: double entry and proofreading the data (a minimal sketch of both appears at the end of this post). Most of these checks, if not all, can be automated as part of an intelligent agent role that semantically understands the context of the data for validation while also being able to verify the data entry.

These days forms are scanned or copied rather than manually entered, but even such processes require reading the handwriting. The intelligent agent needs to understand different styles of handwriting, deduce the characters of a language, and semantically understand the meaning without diluting the context of the form or the data. In the process, it needs to handle vast quantities at speeds greater than humanly possible, i.e. batch processing. Big data pipelines have made significant inroads towards automating data mining and retrieval, with options for stream processing of information. Web forms are another common avenue of data entry into a backend database, and they surely need more intelligent means of validation and verification.

Even the role of a call center agent can be replaced. Such a knowledgeable intelligent agent will need speech recognition, text-to-speech capability, and an affective understanding of human emotions as part of customer service. At the same time, it will need to apply knowledgeable understanding of the domain context while processing new information as part of the data entry step. Multitasking without error is something computers have long been better at than most humans, but for specialized agents and robots, learning becomes more complex as tasks diversify. Looking to the future, we are likely to place increasing trust in artificial intelligence for everyday things while making our lives more complex in other areas, especially human relationships. In the process, data drifts everywhere around us, and we adapt to ubiquitous technology as part of a new lifestyle.
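Here is a minimal sketch in Python of the two kinds of check described above: rule-based validation (format, length, lookup, presence, range) and double-entry verification. The field names and rules are hypothetical assumptions, not a real compliance rule set.

# Sketch of rule-based validation plus double-entry verification.
import re

RULES = {
    "account_id": {"pattern": r"^\d{8}$", "required": True},        # format and length
    "country":    {"lookup": {"UK", "US", "DE"}, "required": True}, # acceptable values
    "age":        {"range": (18, 120), "required": False},          # sensible range
}

def validate(record):
    """Validation: is each field sensible and reasonable?"""
    errors = []
    for field, rule in RULES.items():
        value = record.get(field)
        if value in (None, ""):
            if rule.get("required"):
                errors.append(f"{field}: missing")  # presence check
            continue
        if "pattern" in rule and not re.match(rule["pattern"], str(value)):
            errors.append(f"{field}: bad format")
        if "lookup" in rule and value not in rule["lookup"]:
            errors.append(f"{field}: not an acceptable value")
        if "range" in rule:
            low, high = rule["range"]
            if not (low <= int(value) <= high):
                errors.append(f"{field}: out of range")
    return errors

def verify_double_entry(first, second):
    """Verification: the same record keyed in twice must agree with the source."""
    return [k for k in first if first.get(k) != second.get(k)]

print(validate({"account_id": "1234", "country": "FR"}))
# -> ['account_id: bad format', 'country: not an acceptable value']

Note the split of responsibilities: validate can reject nonsense but cannot tell whether a well-formed value is true, which is exactly why the separate verification step against the source is still needed.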