Statistical learning models in Artificial intelligence

To understand statistical learning models in Artificial intelligence, consider a simple example. Our favorite Surprise candy comes in two flavors: cherry (yum) and lime (ugh). The manufacturer has a peculiar sense of humor and wraps each piece of candy in the same opaque wrapper, regardless of flavor. The candy is sold in very large bags, … Read more
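
As a rough illustration of how Bayesian learning proceeds on this kind of example, the sketch below updates a posterior over candy-bag hypotheses as candies are unwrapped. The five hypotheses, their priors, and the lime proportions are illustrative assumptions, not values taken from the excerpt above.

```python
# A minimal sketch of Bayesian learning for the candy-bag example.
# The hypotheses and priors below are illustrative assumptions
# (h1: all cherry ... h5: all lime).

# P(lime | h_i) and prior P(h_i) for each hypothesis.
hypotheses = {
    "h1": (0.00, 0.10),  # all cherry
    "h2": (0.25, 0.20),  # 75% cherry, 25% lime
    "h3": (0.50, 0.40),  # 50/50
    "h4": (0.75, 0.20),  # 25% cherry, 75% lime
    "h5": (1.00, 0.10),  # all lime
}

def posteriors(observations):
    """Return P(h_i | data) after unwrapping the given candies."""
    unnormalized = {}
    for h, (p_lime, prior) in hypotheses.items():
        likelihood = 1.0
        for candy in observations:
            likelihood *= p_lime if candy == "lime" else (1.0 - p_lime)
        unnormalized[h] = prior * likelihood
    z = sum(unnormalized.values())
    return {h: v / z for h, v in unnormalized.items()}

print(posteriors(["lime"] * 3))   # posterior mass shifts toward the all-lime bag
```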

Decision trees in Artificial intelligence

A decision tree reaches its decision by performing a sequence of tests. Each internal node in the tree corresponds to a test of the value of one of the input attributes, Ai, and the branches from the node are labeled with the possible values of the attribute, Ai = vik. Each leaf node in the … Read more
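
To make the idea of a sequence of tests concrete, here is a minimal sketch of a hand-built tree and a classify routine that follows the branch matching the example's value at each internal node. The attributes and values (a toy "wait for a table" scenario) are illustrative assumptions.

```python
# Internal nodes: ("attribute", {value: subtree, ...}); leaves are decisions.
tree = ("Patrons", {
    "None": "No",
    "Some": "Yes",
    "Full": ("Hungry", {
        "Yes": ("Raining", {"Yes": "Yes", "No": "No"}),
        "No": "No",
    }),
})

def classify(node, example):
    """Follow the branch labelled with the example's value for each tested attribute."""
    if isinstance(node, str):          # leaf node: the decision
        return node
    attribute, branches = node
    return classify(branches[example[attribute]], example)

example = {"Patrons": "Full", "Hungry": "Yes", "Raining": "No"}
print(classify(tree, example))         # -> "No"
```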

Reinforcement learning in Artificial intelligence

In this technique, although a teacher is available, it does not provide the expected answer; it only indicates whether the computed output is correct or incorrect. A reward is given for a correct answer and a penalty for a wrong one. This information helps the network in its learning process. Note: Supervised and … Read more
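
The sketch below illustrates this reward/penalty style of learning: the learner picks an action, the teacher only says whether it was correct, and the action probabilities are nudged accordingly. The actions, the hidden correct answer, and the learning rate are all illustrative assumptions.

```python
import random

actions = ["A", "B", "C"]
probs = {a: 1.0 / len(actions) for a in actions}   # initial action probabilities
CORRECT = "B"                                      # hidden from the learner
alpha = 0.1                                        # learning rate

def feedback(action):
    """The teacher only signals correct / incorrect, never the expected answer."""
    return action == CORRECT

for _ in range(2000):
    action = random.choices(actions, weights=[probs[a] for a in actions])[0]
    if feedback(action):        # reward: make the chosen action more likely
        for a in actions:
            probs[a] += alpha * ((1.0 if a == action else 0.0) - probs[a])
    else:                       # penalty: make the chosen action less likely
        others = [a for a in actions if a != action]
        for a in actions:
            target = 0.0 if a == action else 1.0 / len(others)
            probs[a] += alpha * (target - probs[a])

print(probs)                    # probability mass concentrates on the correct action
```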

Supervised and unsupervised learning in Artificial intelligence

Supervised learning: in supervised learning the agent observes some example input–output pairs and learns a function that maps from input to output. Key points of supervised learning: every input pattern that is used to train the network is associated with an output pattern. This is called the "training set" of data. Thus, in … Read more
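
A minimal sketch of supervised learning in this sense: given a training set of input–output pairs, fit a function that maps input to output and apply it to an unseen input. The linear model and the toy data are illustrative assumptions.

```python
import numpy as np

# Training set: each input pattern is paired with the desired output.
X = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 7.1, 8.9])      # roughly y = 2x + 1 with noise

# Fit a straight line h(x) = w*x + b by least squares.
w, b = np.polyfit(X, y, deg=1)

def h(x):
    return w * x + b

print(h(5.0))   # prediction for an unseen input, close to 11
```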

Hidden Markov Models (HMM) in Artificial intelligence

A hidden Markov model (HMM) allows us to talk about both observed events (like words that we see in the input) and hidden events (like part-of-speech tags) that we think of as causal factors in our probabilistic model. An HMM is specified by the following components. A first-order hidden Markov model instantiates two simplifying assumptions … Read more
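
As a concrete sketch of these components, the code below specifies a toy HMM (initial, transition, and emission probabilities) and uses the Viterbi algorithm to recover the most likely hidden tag sequence for an observed word sequence. The states, words, and probabilities are illustrative assumptions.

```python
states = ["Noun", "Verb"]
start_p = {"Noun": 0.6, "Verb": 0.4}                        # initial distribution
trans_p = {"Noun": {"Noun": 0.3, "Verb": 0.7},              # P(tag_t | tag_{t-1})
           "Verb": {"Noun": 0.8, "Verb": 0.2}}
emit_p = {"Noun": {"dogs": 0.5, "bark": 0.1, "cats": 0.4},  # P(word | tag)
          "Verb": {"dogs": 0.1, "bark": 0.8, "cats": 0.1}}

def viterbi(words):
    """Return the most probable hidden state sequence for the observed words."""
    V = [{s: (start_p[s] * emit_p[s][words[0]], [s]) for s in states}]
    for word in words[1:]:
        V.append({})
        for s in states:
            prob, path = max(
                (V[-2][prev][0] * trans_p[prev][s] * emit_p[s][word],
                 V[-2][prev][1] + [s])
                for prev in states)
            V[-1][s] = (prob, path)
    return max(V[-1].values())[1]

print(viterbi(["dogs", "bark"]))   # -> ['Noun', 'Verb']
```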

Utility theory in Artificial intelligence

The following points describe utility theory. It defines axioms on preferences that involve uncertainty and ways to manipulate them. Uncertainty is modeled through lotteries. A lottery [p : A; (1 − p) : C] yields outcome A with probability p and outcome C with probability (1 − p). The following six constraints are known as the axioms of utility … Read more
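
A minimal sketch of how such a lottery is evaluated under expected utility; the outcomes and utility values are illustrative assumptions.

```python
utility = {"A": 100.0, "C": 20.0}     # U(A), U(C)

def expected_utility(p, outcome_a, outcome_c):
    """EU([p : A; (1 - p) : C]) = p * U(A) + (1 - p) * U(C)."""
    return p * utility[outcome_a] + (1 - p) * utility[outcome_c]

# An agent prefers the lottery with the higher expected utility.
print(expected_utility(0.7, "A", "C"))   # 0.7 * 100 + 0.3 * 20 = 76.0
```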

Probabilistic reasoning in Artificial intelligence

Probabilistic reasoning is needed in AI when outcomes are unpredictable, predicates are too large to handle, or unknown errors occur. In probabilistic reasoning, there are two methods for dealing with uncertain knowledge: Bayes' rule and Bayesian statistics. Probability can be defined as the chance of occurrence of an uncertain event; it is the numerical measure of the likelihood … Read more
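
The sketch below applies Bayes' rule, P(A | B) = P(B | A) · P(A) / P(B), to a toy diagnosis problem to show how a prior is updated by evidence; the probabilities are illustrative assumptions.

```python
p_disease = 0.01                  # prior P(disease)
p_pos_given_disease = 0.95        # likelihood P(positive test | disease)
p_pos_given_healthy = 0.05        # false positive rate P(positive | no disease)

# Total probability of a positive test.
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Posterior probability of disease given a positive test.
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))   # about 0.161
```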

Resolution in Artificial intelligence

The resolution method is an inference rule used in both propositional and first-order predicate logic. It is basically used for testing the satisfiability of a sentence. Resolution in propositional logic: in propositional logic, the resolution method is the only inference rule, which gives a new clause when two or … Read more
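
A minimal sketch of the propositional resolution step: two clauses containing complementary literals are combined into a new clause (the resolvent). The string-based encoding of literals, with "~" marking negation, is an illustrative assumption.

```python
def negate(literal):
    return literal[1:] if literal.startswith("~") else "~" + literal

def resolve(clause1, clause2):
    """Return all resolvents of two clauses (each clause is a set of literals)."""
    resolvents = []
    for lit in clause1:
        if negate(lit) in clause2:
            resolvents.append((clause1 - {lit}) | (clause2 - {negate(lit)}))
    return resolvents

# (P or Q) and (~Q or R) resolve on Q to give (P or R).
print(resolve({"P", "Q"}, {"~Q", "R"}))   # [{'P', 'R'}]
```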

Forward and Backward Chaining in Artificial intelligence

Forward chaining starts with the available data as initial facts and uses inference rules to extract more data until a goal is reached. An inference engine using forward chaining searches the inference rules until it finds one whose antecedent is known to be true. Whenever such a rule is found, … Read more
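
A minimal sketch of a forward-chaining loop over Horn-clause rules: starting from the known facts, fire any rule whose antecedent is satisfied until the goal is derived or nothing new can be added. The rules, facts, and goal are illustrative assumptions.

```python
rules = [
    ({"A", "B"}, "C"),     # A and B  =>  C
    ({"C"}, "D"),          # C        =>  D
    ({"D", "E"}, "F"),     # D and E  =>  F
]
facts = {"A", "B"}
goal = "D"

changed = True
while changed and goal not in facts:
    changed = False
    for antecedent, consequent in rules:
        # Fire a rule whose antecedent is known to be true.
        if antecedent <= facts and consequent not in facts:
            facts.add(consequent)
            changed = True

print(goal in facts, facts)   # True {'A', 'B', 'C', 'D'}
```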

Inference in First order logic in Artificial intelligence

There are two ideas behind inference in first-order logic in Artificial intelligence: convert the KB to propositional logic and use propositional inference, or use a shortcut that manipulates first-order sentences directly (resolution, which will not be introduced here). Universal Instantiation: infer any sentence by substituting a ground term (a term without variables) for the variable. Examples … Read more
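
A minimal sketch of Universal Instantiation as substitution of a ground term for the variable; the sentence and ground terms follow the standard King/Greedy/Evil textbook example, and the naive string-based encoding is an illustrative assumption (a real system substitutes at the term level).

```python
def universal_instantiation(sentence, variable, ground_term):
    """Substitute a ground term (a term without variables) for the variable."""
    return sentence.replace(variable, ground_term)

# From  forall x: King(x) & Greedy(x) => Evil(x)  we may infer any instance:
sentence = "King(x) & Greedy(x) => Evil(x)"
for term in ["John", "Richard", "Father(John)"]:
    print(universal_instantiation(sentence, "x", term))
```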