Intelligent Systems - Summary

Title: Intelligent Systems - Summary
Author: Max de Froe
Course: Intelligent Systems
Institution: Vrije Universiteit Amsterdam



Description

Final Summary

Intelligent Systems

Search algorithms are judged by:
- Completeness: is the strategy guaranteed to find a solution when there is one?
- Optimality: does the strategy find the highest-quality solution when there are several different solutions?
- Time complexity: how long does it take to find a solution?
- Space complexity: how much memory does it need to perform the search?

Search type            Complete   Optimal
Breadth-first          ✔          ✔
Uniform-cost           ✔          ✔
Depth-first            ✘          ✘
Depth-limited          ✔          ✘
Iterative deepening    ✔          ✔
Bi-directional         ✔          ✔
A*                     ✔          ✔
A                      ✔          ✘
Beam                   ✘          ✘
Breadth-first search: the root node is expanded first, then all the nodes generated by the root node, then their successors, and so on.
- Complete ✔ Optimal ✔

Uniform-cost search: the same as breadth-first search, but always expands the lowest-cost node on the fringe.
- Complete ✔ Optimal ✔

Depth-first search: always expands one of the nodes at the deepest level of the tree. Only when the search hits a dead end (a non-goal node with no expansion) does it go back and expand nodes at shallower levels.
- Very modest memory requirements.
- Can get stuck down the wrong path in very deep or even infinite search trees.
- Not complete, not optimal.

Depth-limited search: the same as depth-first search, but with a limit on the maximum depth of a path, so it cannot descend forever in a deep or infinite tree.
- It can be difficult to set the right limit.
- Complete ✔ (provided the limit is at least the depth of the shallowest goal), not optimal.

Iterative deepening search: runs depth-limited search with a depth limit of 0, then a depth limit of 1, and continues until it reaches a goal (a sketch follows after this list).
- Very modest memory requirements.
- Used when there is a large search space and the depth of the solution is not known.
- Complete ✔ Optimal ✔

Bi-directional search: simultaneously searches forward from the initial state and backward from the goal.
- Complete ✔ Optimal ✔
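A minimal Python sketch of iterative deepening: depth-limited search is retried with limits 0, 1, 2, and so on. The toy graph, start, and goal are hypothetical illustrations (and, as in the tree-search setting above, no cycle handling is done):

```python
def depth_limited(graph, node, goal, limit):
    """Depth-first search that gives up below `limit`; returns a path or None."""
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for child in graph.get(node, []):
        path = depth_limited(graph, child, goal, limit - 1)
        if path is not None:
            return [node] + path
    return None

def iterative_deepening(graph, start, goal, max_depth=50):
    """Run depth-limited search with limits 0, 1, 2, ... until a goal is found.
    max_depth is only a practical guard so the sketch always terminates."""
    for limit in range(max_depth + 1):
        path = depth_limited(graph, start, goal, limit)
        if path is not None:
            return path
    return None

# Hypothetical example graph.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": ["F"], "F": []}
print(iterative_deepening(graph, "A", "F"))  # ['A', 'C', 'E', 'F']
```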

Informed search:
- Best-first search: chooses the most promising option.
- Greedy best-first search: chooses the best option without re-evaluating.
- Hill climbing: greedy best-first with only one element stored.

A search: the best-known form of best-first search.
- Avoids expanding paths that are already expensive.
- Estimated total cost of the path through n to the goal: f(n) = g(n) + h(n), where g(n) is the cost (so far) to reach the node and h(n) is the estimated cost to get from the node to the goal.
- Complete ✔, not optimal.

A* search: A search with an admissible heuristic (one that never overestimates the distance to the goal).
- Complete ✔ Optimal ✔ (see the sketch after this list).

Beam search: the same as best-first search, but with a beam width that limits the number of nodes kept in memory.
- Not complete, not optimal.
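Below is a minimal Python sketch of A* using f(n) = g(n) + h(n) as defined above. The grid world, unit step costs, and the Manhattan-distance heuristic (discussed under heuristic functions below) are illustrative assumptions, not part of the original summary:

```python
import heapq

def manhattan(a, b):
    """Admissible on a grid: never overestimates the true remaining cost."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def a_star(start, goal, neighbours, h):
    """Always expand the fringe node with the lowest f = g + h."""
    fringe = [(h(start, goal), 0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0}
    while fringe:
        f, g, node, path = heapq.heappop(fringe)
        if node == goal:
            return path
        for nxt, step_cost in neighbours(node):
            g2 = g + step_cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(fringe, (g2 + h(nxt, goal), g2, nxt, path + [nxt]))
    return None

# Hypothetical example: 4-connected grid moves, unit cost, no obstacles.
def grid_neighbours(pos):
    x, y = pos
    return [((x + dx, y + dy), 1) for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]]

print(a_star((0, 0), (2, 3), grid_neighbours, manhattan))  # a shortest 6-node path
```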

Heuristics: educated guessing.
- Complete knowledge is often not available, or too expensive to calculate, so heuristics use partial knowledge or human expertise to guess.
- Can be completely wrong.
- Perfect heuristic: h(n) = h*(n)
- Trivial heuristic: h(n) = 0

Requirements for an evaluation function EVAL:
- EVAL should order terminal nodes in the same way as UTILITY.
- Computation must not take too long.
- For non-terminal states, EVAL should be strongly correlated with the actual chance of winning.

A heuristic is admissible if it never overestimates the cost to reach the goal.

Utility: value based on the quality of the state. Heuristic: value based on an estimate of the quality of the state.

Heuristic functions:
- Manhattan distance: total distance to the goal without taking obstacles into account.
- Straight-line distance: direct distance to the goal in a straight line (has to be known; it cannot be calculated from the problem description).

Adversarial search:
- Minimax: finds the contingent strategy for MAX assuming an infallible MIN opponent, i.e. both players play optimally.
  - Complete & optimal.
- Alpha-beta pruning (a sketch follows after this list):
  - Alpha = value of the best choice found so far at any choice point along the MAX path.
  - Beta = value of the best choice found so far at any choice point along the MIN path.

Pruning: identifying the nodes we do not need to explore.
Fringe: a queue sorted in decreasing order of desirability.
Perfect Information Monte Carlo sampling: ?
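A minimal Python sketch of minimax with alpha-beta pruning, matching the alpha/beta definitions above. The hand-built game tree and its leaf utilities are hypothetical:

```python
def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    if isinstance(node, (int, float)):      # terminal node: return its utility
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:               # prune: MIN will never allow this
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:               # prune: MAX already has better
                break
        return value

# Hypothetical tree: MAX root over three MIN nodes; minimax value is 3,
# and the second MIN node is pruned after its first leaf.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree))  # 3
```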

Search direction:
- Data-driven: start with the initial state.
- Goal-driven: start with the goal state.

Rational agent: chooses whichever action maximizes the expected value of the performance measure, given the percept sequence to date and prior environment knowledge.

Task environment: Performance, Environment, Actuators, Sensors (PEAS).

Environment types:
- Observable: fully / partially.
- Deterministic (the next environment state is completely determined by the current state and the executed action) / stochastic.
- Episodic (only the current percept is relevant) / sequential (memory of the previous actions is required to determine the best next action).
- Static / semi-dynamic (the agent's performance score changes over time even when the environment remains the same) / dynamic (the environment can change while the agent is choosing an action).
- Discrete (there is a finite number of actions that can be performed within the environment) / continuous.
- Single-agent / multi-agent (the environment has other agents who are also maximizing some performance measure that depends on the current agent's actions).

Agent types:
- Simple reflex agent: selects an action on the basis of the current percept only.
- Reflex agent with state: keeps track of how the world evolves and how its actions affect the world.
- Goal-based agent: keeps track of how the world evolves and how its actions affect the world, but also considers the future ("what happens if I do A?") and needs to know a desired situation.
- Learning agent: uses a learning element to adapt its actions.

Properties of inference:
- An algorithm is sound if its output is true for every true input.
- An algorithm is complete if it is guaranteed to return a correct answer for any input.
- A calculus terminates if it finds the entailed sentences in finite time.
- A logic is decidable if there is a sound and complete calculus that terminates.

If you want to prove KB ⊨ α, you need to show that KB ∧ ¬α is unsatisfiable (a small check is sketched below).
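A minimal brute-force illustration of that reduction: KB ⊨ α iff KB ∧ ¬α has no model. The knowledge base {P, P → Q} and query Q are hypothetical examples; a real prover would use DPLL (see below) instead of enumerating the full truth table:

```python
from itertools import product

def entails(kb, alpha, symbols):
    """KB entails alpha iff no truth assignment satisfies KB ∧ ¬alpha."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not alpha(model):
            return False        # found a model of KB ∧ ¬alpha: no entailment
    return True                 # KB ∧ ¬alpha is unsatisfiable

kb    = lambda m: m["P"] and ((not m["P"]) or m["Q"])   # P ∧ (P → Q)
alpha = lambda m: m["Q"]
print(entails(kb, alpha, ["P", "Q"]))  # True
```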

Fuzzy set theory: allows an object to be to some degree an element of a (fuzzy) set.

f_tall(x) = y: x is a member of the set of tall people to degree y.

Frequentism: probability is only a property of repeated experiments. Bayesianism: probability is an expression of our uncertainty and beliefs. The difference lies in statements like "the probability that Bob is having an affair": to a frequentist, Bob either is or is not having an affair, and no probability is involved; to a Bayesian, our uncertainty can be expressed as a probability.
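A minimal sketch of such a membership function for the fuzzy set "tall" described above; the 160 cm and 190 cm thresholds are arbitrary assumptions:

```python
def f_tall(height_cm):
    """Degree (0..1) to which this height belongs to the fuzzy set 'tall'."""
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30   # linear ramp between the thresholds

print(f_tall(175))  # 0.5: a member of the set of tall people to degree 0.5
```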

Conditional probability: P(A|B) = P(A ∩ B) / P(B)

Bayes' rule: P(A|B) = P(B|A) P(A) / P(B)
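A small worked example of Bayes' rule; all the numbers are hypothetical. Suppose A is "has the disease" with P(A) = 0.01, and B is "tests positive" with P(B|A) = 0.9 and a false-positive rate P(B|¬A) = 0.05:

```python
p_a = 0.01
p_b_given_a = 0.9
p_b_given_not_a = 0.05

# Total probability: P(B) = P(B|A)P(A) + P(B|¬A)P(¬A)
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Bayes' rule: P(A|B) = P(B|A)P(A) / P(B)
p_a_given_b = p_b_given_a * p_a / p_b
print(round(p_a_given_b, 3))  # 0.154: still unlikely despite a positive test
```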

Feedback types:
- Supervised learning: correct answers are given for each example.
- Unsupervised learning: no correct answers are given.
- Semi-supervised learning: active learning, i.e. learning which questions to ask.
- Reinforcement learning: occasional rewards.

Types of learning problems:
- Classification: from data to discrete classes.
- Regression: predicting a numeric value.
- Ranking: comparing items.
- Collaborative filtering: learning from other people's information (used in recommendation systems).
- Clustering: discovering structure in data; the algorithm determines the classes.

DPLL (to prove entailment from premises; a sketch follows below):
1. Translate the premises into logic, i.e. all the formulas you assume to be true and from which you want to derive something.
2. Transform the logical representation into Clause Normal Form (CNF).
3. Add the negation of the formula you want to derive to the previously produced formulas, and turn it into CNF as well.
4. Extend a partial assignment by guessing (assuming) the truth value of one more variable; continue until you either find a model or a contradiction.
5. If a partial assignment fails to satisfy one of the clauses, backtrack, i.e. assign a different value to the variable you last assigned a value to.
6. If you fail to produce a complete assignment that satisfies all the clauses even after backtracking, conclude that the formula you wanted to prove follows from your premises.
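A minimal Python sketch of the guess-and-backtrack core of DPLL (steps 4-6 above), without unit propagation or pure-literal elimination. Clauses are lists of integers, a negative integer standing for a negated variable; the example clauses are hypothetical and encode the premises P and P → Q plus the negated query ¬Q:

```python
def dpll(clauses, assignment=None):
    assignment = assignment or {}
    # Simplify the clauses under the current partial assignment.
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue                      # clause already satisfied
        rest = [l for l in clause if abs(l) not in assignment]
        if not rest:
            return None                   # clause falsified: backtrack
        simplified.append(rest)
    if not simplified:
        return assignment                 # all clauses satisfied: a model
    var = abs(simplified[0][0])           # guess the next unassigned variable
    for value in (True, False):
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None                           # both guesses failed

# Premises (P) and (¬P ∨ Q), plus ¬Q. P is variable 1, Q is variable 2.
print(dpll([[1], [-1, 2], [-2]]))  # None: KB ∧ ¬Q is unsatisfiable, so KB ⊨ Q
```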

