Grundlagen der künstlichen Intelligenz - Study Script to fill-in with Exercises+Exams PDF

Title Grundlagen der künstlichen Intelligenz - Study Script to fill-in with Exercises+Exams
Course Grundlagen der Künstlichen Intelligenz (IN2062)
Institution Technische Universität München
Pages 94
File Size 3.9 MB
File Type PDF

Summary

I printed this study script and filled in the answers by hand; that is what the free space is for.


Description

Grundlagen der künstlichen Intelligenz (IN2062) Topics
- Task environments and the structure of intelligent agents.
- Solving problems by searching: breadth-first search, uniform-cost search, depth-first search, depth-limited search, iterative deepening search, greedy best-first search, A* search.
- Constraint satisfaction problems: defining constraint satisfaction problems, backtracking search for constraint satisfaction problems, heuristics for backtracking search, interleaving search and inference, the structure of constraint satisfaction problems.
- Logical agents: propositional logic, propositional theorem proving, syntax and semantics of first-order logic, using first-order logic, knowledge engineering in first-order logic, reducing first-order inference to propositional inference, unification and lifting, forward chaining, backward chaining, resolution.
- Bayesian networks: acting under uncertainty, basics of probability theory, Bayesian networks, inference in Bayesian networks, approximate inference in Bayesian networks.
- Hidden Markov models: time and uncertainty, inference in hidden Markov models (filtering, prediction, smoothing, most likely explanation), approximate inference in hidden Markov models.
- Rational decisions: introduction to utility theory, utility functions, decision networks, the value of information, Markov decision processes, value iteration, policy iteration, partially observable Markov decision processes.
- Learning: types of learning, supervised learning, learning decision trees.
- Introduction to robotics: robot hardware, robotic perception, path planning, planning uncertain movements, control of movements, robotic software architectures, application domains.

Literature
• P. Norvig and S. Russell: Artificial Intelligence: A Modern Approach, Prentice Hall, 3rd edition. (English version)
• P. Norvig and S. Russell: Künstliche Intelligenz: Ein moderner Ansatz, Pearson Studium, 3. Auflage. (German version)

Exercise Sheets

Exercise 1 - Intelligent Agents
Problem 1.1: Rational Agents
Problem 1.1.1: Which actions does a rational agent select?

Problem 1.1.2: Which of these games would a rational agent always win or draw, and why? Is it physically possible to build such an agent?
a. Poker

b. Tic-Tac-Toe (Noughts and Crosses)

c. Chess

Problem 1.2: Intelligent Agents
Consider the following intelligent agents:
a. GPS route guidance
b. Kettle (Wasserkocher)
c. Bi-directional escalator on Munich Underground
Problem 1.2.1: Suggest performance measures for each of the above agents, and which type of agent should be used out of the following:
• Simple reflex agent
• Model-based reflex agent (reflex agent with state)
• Goal-based agent
• Utility-based agent
• Learning agent (in combination with any of the above)

Problem 1.2.2: (adapted from Russell & Norvig 3rd ed., q. 2.10) Consider the Vacuum Cleaner environment from the lecture notes (slide 11, lecture 2), in which the agent’s performance measure awards three points for each clean floor at the end of the time of operation and penalises one point for each movement during operation. It can only perceive the room it is in. a. Can a simple reflex agent be rational for this environment?


b. What about a reflex agent with state?
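The trade-off in parts a and b can be made concrete with a small simulation. The sketch below is only an illustrative assumption, not the official solution: it hard-codes the standard two-square world (squares A and B) with a fixed step budget, scores three points per clean square at the end, subtracts one point per movement, and compares a memoryless reflex agent with one that remembers which squares it has already found clean.

```python
import itertools

SQUARES = ["A", "B"]

def run(agent, dirt, start, steps=10):
    """Simulate the two-square vacuum world and return the performance score."""
    dirt = dict(dirt)          # which squares are currently dirty
    loc, score = start, 0
    state = {}                 # private memory for the stateful agent
    for _ in range(steps):
        action = agent((loc, dirt[loc]), state)
        if action == "Suck":
            dirt[loc] = False
        elif action in ("Left", "Right"):
            loc = "A" if action == "Left" else "B"
            score -= 1         # each movement costs one point
    return score + 3 * sum(not d for d in dirt.values())

def simple_reflex(percept, _state):
    loc, dirty = percept
    if dirty:
        return "Suck"
    return "Right" if loc == "A" else "Left"   # keeps moving forever

def stateful_reflex(percept, state):
    loc, dirty = percept
    state[loc] = "clean"       # after this step the current square is clean
    if dirty:
        return "Suck"
    if len(state) == len(SQUARES):
        return "NoOp"          # both squares known clean: stop paying for moves
    return "Right" if loc == "A" else "Left"

if __name__ == "__main__":
    for dirt in itertools.product([True, False], repeat=2):
        cfg = dict(zip(SQUARES, dirt))
        print(cfg, run(simple_reflex, cfg, "A"), run(stateful_reflex, cfg, "A"))
```

Running the comparison over all dirt configurations shows how the memoryless agent keeps paying movement penalties once everything is clean, while the agent with state can stop.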

Problem 1.3: Environments Problem 1.3.1: Andrea says: “a game of billiards is deterministic: a player’s action is determined by the state of the table and where the ball is.” Bernhard says: “A game of billiards is stochastic, as one player doesn’t know what the other player will do.” Catherine says: “A game of billiards is stochastic because it is impossible to know exactly where the ball is and what the shape of the ball and the table are. When the player hits the ball, it might go somewhere else than intended.” Who do you agree with?


2 Additional Problems
Problem 1.4: Intelligent Agents
Consider the following intelligent agents:
a. Bomb disposal agent
b. Weather forecast
c. Chess/strategy game on clock
Problem 1.4.1: Suggest performance measures for each of the above agents, and which type of agent should be used.

Problem 1.4.2: Think of further agents. For each, propose a performance measure and decide which type of agent can be used.


Problem 1.4.3: (from Russell & Norvig 2nd ed., q. 2.2) Both the Performance Measure and the Utility Function measure how well an agent is doing. What is the difference between the two?

Problem 1.5: Environments
Problem 1.5.1: (from Russell & Norvig 3rd ed., q. 2.4) For each of the following activities, give a PEAS description of the task environment and characterize it in terms of the properties listed in slides 19-25 from Lecture 2.
1. Playing football

2. Subsea cable repair


3. Price query for a product on the internet

4. Bidding on an item at an auction.


Exercise 2a: Uninformed Search


Problem 2.2: Application of Search Algorithms: Transport
I have bought 120 bricks at the hardware store, which I need to transport to my house. I have the following actions available:
1. take the bus (b) – I can carry up to 30 bricks, costing 3 €
2. take my car (c) – it can carry up to 100 bricks, costing 5 €
3. take a delivery (d) – it delivers all the bricks, costing 29 €
Assume that I explore actions in the order: bus, car, delivery; and we start at the initial node s. For clarity, we name nodes after the actions taken to get to them (e.g. after taking the bus twice, the node is labeled with bb); and the order of actions reaching a state is not important (e.g. the states bc and cb are considered to be the same).
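As a sketch of how this problem can be encoded for a search algorithm (an illustrative assumption, not part of the exercise sheet: the state is compressed to the number of bricks still to transport, which is equivalent here because the order of actions does not matter), uniform-cost search over this encoding returns the cheapest combination of trips:

```python
import heapq

ACTIONS = [("b", 30, 3), ("c", 100, 5), ("d", 120, 29)]  # (name, capacity, cost in EUR)

def uniform_cost_search(start=120):
    """Return (total cost, sorted action string) for transporting all bricks (graph search)."""
    frontier = [(0, start, "")]          # (path cost, bricks remaining, actions taken)
    explored = set()
    while frontier:
        cost, remaining, path = heapq.heappop(frontier)
        if remaining <= 0:
            return cost, "".join(sorted(path))
        if remaining in explored:
            continue
        explored.add(remaining)
        for name, capacity, price in ACTIONS:
            heapq.heappush(frontier, (cost + price, remaining - capacity, path + name))
    return None

if __name__ == "__main__":
    print(uniform_cost_search())     # cheapest multiset of trips and its total price
```

For the "fewest trips" variant, the same encoding can be searched breadth-first instead, since every action counts as exactly one trip.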

Problem 2.2.1: Assuming I want to make as few trips as possible, which search method should I use and what is the resulting solution?


Problem 2.2.2: Assuming I want to minimize the cost, which search method should I use and what is the resulting solution?

Problem 2.2.3: Perform depth-first search using the recursive implementation.

Problem 2.2.4: Perform depth-first search using the recursive implementation again, but now assume I explore actions in the order: delivery, car, bus.


2 Additional Problems
Problem 2.3: Application of Search Algorithms: Train Journey
From Problem 2.5 (of Exercise 2b), perform Breadth-First, Depth-First, and Uniform-Cost search using graph search for both time and price cost. For each step, write down the node which is currently expanded, the frontier set, and the explored set.


Problem 2.4: General Questions on Uninformed Search
Problem 2.4.1: (from Russell & Norvig 3rd ed., q. 3.18) Describe a state transition graph in which Iterative Deepening Search performs much worse (in time) than Depth-First Search (e.g. O(n²) vs. O(n)).


Problem 2.4.2: (from Russell & Norvig 3rd ed., q. 3.15) Consider a state transition graph where the start state is the number 1 and the successor function for state n returns two states, numbers 2n and 2n + 1.
a. Draw the portion of the state transition graph for states 1 to 15.

b. Suppose the goal state is 11. List the order in which nodes will be generated (not explored) for Breadth-First search, Depth-Limited search with limit 2, and Iterative Deepening search. Would bidirectional search be appropriate for this problem?
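The generation order for Breadth-First search is easy to verify programmatically; the snippet below is only a sanity-check sketch for this particular successor function and goal (the Depth-Limited and Iterative Deepening orders follow analogously by swapping the queue for a depth-bounded stack).

```python
from collections import deque

def bfs_generation_order(goal=11):
    """List the states in the order BFS *generates* them for successors n -> (2n, 2n + 1)."""
    order, frontier = [1], deque([1])
    while frontier:
        n = frontier.popleft()
        for child in (2 * n, 2 * n + 1):
            order.append(child)
            if child == goal:
                return order
            frontier.append(child)
    return order

if __name__ == "__main__":
    print(bfs_generation_order())   # 1, 2, 3, ... up to the goal state 11
```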


c. What is the branching factor in each direction of the bidirectional search?

d. Does the answer to c. suggest a reformulation of the problem that would allow you to solve the problem of getting from state 1 to a given goal state with almost no search?

Problem 2.4.3: (from Russell & Norvig, 2nd ed. q. 3.6) Does a finite state transition graph always lead to a finite search tree?


Exercise 2b: Informed Search

Problem 2.5.1: What are the requirements on a heuristic for A* tree search and graph search to be optimal?
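For reference, the standard definitions (not restated on this sheet) are: a heuristic is admissible if it never overestimates the remaining cost, and consistent if it satisfies a triangle inequality along every transition; admissibility suffices for optimal A* tree search, consistency for optimal A* graph search.

```latex
h(n) \le h^{*}(n) \quad \text{(admissible)}, \qquad
h(n) \le c(n, a, n') + h(n') \quad \text{(consistent, for every successor } n' \text{ of } n\text{)},
```

where h*(n) denotes the true cost of the cheapest path from n to a goal, and consistency implies admissibility whenever h is zero at goal states.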

Problem 2.5.2: Is the given heuristic admissible? Is it consistent? If not, why not?


Problem 2.5.3: Perform A* graph search and tree search to verify your previous answers.


Problem 2.6: Application of Search Algorithms: Train Journey
We want to travel from Aberdeen to Birmingham. The (semi-fictional) map of the British rail system is available in Fig. 1. Routes have the same cost in both directions for both time and price. If there is no clear preference for the next node to explore, we explore first the node which is first alphabetically.

Problem 2.6.1: In the table below, we are given a heuristic for cost in time, which is based on the straight-line distances to Birmingham and the maximum speed of the train being no more than 120km/h. Perform Greedy Best-First and A* Search for time cost using tree search. For each step, write down the node which is currently expanded and the nodes in the frontier set, each with its value of the evaluation function f(n).


Problem 2.6.2: For the problem with cost in price (see Fig. 1 right), would a heuristic based on distance be valid?

Problem 2.6.3: Which search algorithm would we use if we want to minimize train changes (assuming that we change train at every station)?

Problem 2.6.4: Is bidirectional search a good option for the train journey search?


2 Additional Problems
Problem 2.7: Graph Search
Consider the graph of Problem 2.1 (of Exercise 2a). With the heuristic given in the table below, perform Greedy Best-First and A* Search, each with tree search and with graph search. For each step, write down the node which is currently expanded and the nodes in the frontier set, each with its value of the evaluation function f(n) (and the nodes in the explored set in case of graph search).


Problem 2.8: General Questions on Search
Problem 2.8.1: Can Uniform-Cost Search be more time-effective or memory-effective than the informed search algorithms? What about cost-effectiveness?

Problem 2.9: Application of Search Algorithms in Daily Life
Which search algorithm (out of those covered in lectures 3 and 4) do you think your brain uses in the following cases, and what are the advantages of this algorithm? (Note that there are no hard-and-fast solutions for these; it is a matter of personal preference. The purpose of this exercise is to think about situations where you use search algorithms in daily life.)
a. Planning a route from Arad to Bucharest with a map.


b. Finding an Easter egg:
i. in a building you are unfamiliar with, on your own.
ii. in a building you are familiar with, on your own.
iii. in a building you are familiar with, with a team of people who all have mobile phones.
iv. if the Easter egg has a bell attached (which constantly rings).

c. Playing a strategy game, e.g. chess.


Exercise 3 - CSP


Exercise 4 – Logical Agents


Exercise 5 – First Order Logic


2. Some professor admires Mary.

3. Mary admires herself.

4. No student attended every lecture.

5. No lecture was attended by every student.


6. No lecture was attended by any student.

1. Holmes can trap anyone who can trap Moriarty.

2. Holmes can trap anyone whom Moriarty can trap.

3. Holmes can trap anyone who can be trapped by Moriarty.

4. If anyone can trap Moriarty, then Holmes can.

5. If everyone can trap Moriarty, then Holmes can.

6. Anyone who can trap Holmes can trap Moriarty.

7. No one can trap Holmes unless he can trap Moriarty.


8. Everyone can trap someone who cannot trap Moriarty.

9. Anyone who can trap Holmes can trap anyone whom Holmes can trap.


Exercise 6 - Inference in First-Order Logic

Problem 6.1.2: Using the constants Me for the speaker and That for the person depicted in the painting, formalize the sentences regarding the sexes of the people in the puzzle.


Problem 6.1.3: Formalize the sentences “Brothers and sisters have I none” and “That man’s father is my father’s son” in first-order logic.
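As a worked illustration of Problem 6.1.3 (one possible vocabulary, not the official solution: a binary Sibling predicate, a Son(x, y) predicate meaning "x is a son of y", and a father function), the two sentences can be written as:

```latex
\neg \exists x\; \mathit{Sibling}(x, \mathit{Me})
\qquad\text{and}\qquad
\mathit{Son}\bigl(\mathit{father}(\mathit{That}),\; \mathit{father}(\mathit{Me})\bigr).
```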

Problem 6.1.4: Solve this puzzle informally and determine the relation between the man in the painting and the speaker.

Problem 6.1.5: Using the resolution technique for first-order logic, prove your answer.


Exercise 7 - Probability Theory and Bayesian Networks

a. Calculate the value of the missing probability in the table.

b. What is the prior probability distribution of the random variables R and E?

c. What is the probability that the driver is not experienced given that there is an accident and the road is wet?


d. Construct a corresponding Bayesian network with the conditional probability tables (calculate the required table entries).


a. Is it possible to calculate the most likely color for the taxi?

b. What if you know that 9 out of 10 Athenian taxis are green?

a. Suppose you reach into the bag, pick out a coin at random, flip it, and get a head. What is the (conditional) probability that the coin you chose is the fake coin?


b. Suppose you continue flipping the coin for a total of k times after picking it and see k heads. Now what is the conditional probability that you picked the fake coin?

c. Suppose you wanted to decide whether the chosen coin was fake by flipping it k times. The decision procedure returns fake if all k flips come up heads; otherwise it returns normal. What is the (unconditional) probability that this procedure makes an error?
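The full setup of this coin puzzle is not reproduced in the script; assuming the usual version (a bag of n coins of which exactly one is fake and shows heads on both sides, the rest fair), Bayes' rule gives closed forms that can be adapted to whatever numbers the sheet uses:

```latex
P(\text{fake} \mid k \text{ heads})
  = \frac{\tfrac{1}{n}\cdot 1}{\tfrac{1}{n}\cdot 1 + \tfrac{n-1}{n}\cdot 2^{-k}}
  = \frac{2^{k}}{2^{k} + n - 1},
\qquad
P(\text{error}) = \frac{n-1}{n}\cdot 2^{-k},
```

since the decision procedure can only err when a normal coin happens to come up heads on all k flips.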


a. Consider that the robot is in some pose and takes a measurement with positive result. What is the probability that the robot is in a correct pose for collecting a sample?


b. Consider that the robot should take a sample if the probability that it is useful is at least 90%. Can the robot collect a sample if it takes another measurement and the result is again positive? Calculate the corresponding posterior probability.


Exercise 8 - BAYESIAN NETWORKS
Problem 8.1: (Taken from [1] Ex. 14.1) We have a bag of three biased coins a, b, and c with probabilities of coming up heads of 20%, 60%, and 80%, respectively. One coin is drawn randomly from the bag (with equal likelihood of drawing each of the three coins), and then the coin is flipped three times to generate the outcomes X1, X2, and X3.

a. Draw the Bayesian network corresponding to this setup and define the necessary conditional probability tables.


b. Calculate which coin was most likely to have been drawn from the bag if the observed flips come out heads twice and tails once.
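The hand calculation can be checked with a few lines of Python; this is only a sanity-check sketch. Since two heads and one tail can occur in any order, the binomial factor is the same for every coin and cancels in the normalization.

```python
# Posterior over the coins a, b, c after observing two heads and one tail.
priors = {"a": 1 / 3, "b": 1 / 3, "c": 1 / 3}
p_heads = {"a": 0.2, "b": 0.6, "c": 0.8}

# Likelihood of a particular sequence with 2 heads and 1 tail; the binomial
# coefficient is identical for every coin, so it cancels in the normalization.
likelihood = {c: p_heads[c] ** 2 * (1 - p_heads[c]) for c in priors}
unnorm = {c: priors[c] * likelihood[c] for c in priors}
z = sum(unnorm.values())
posterior = {c: v / z for c, v in unnorm.items()}

print(posterior)                          # the coin with the largest value is the MAP estimate
print(max(posterior, key=posterior.get))
```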


Problem 8.3:


References
[1] Russell, S.J. and P. Norvig (2010). Artificial Intelligence: A Modern Approach. Prentice Hall series in artificial intelligence. Prentice Hall.


Exercise 9 - HIDDEN MARKOV MODELS


d. Find the most likely state sequence, given that the student shows up with red eyes and sleeps in class every day.


e. The probability that the student had enough sleep does not go to zero but converges to a fixed point. What does this fixed point represent?

f. Calculate analytically the fixed point.
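For part (f), the general form of the fixed point is worth writing down (the concrete transition and sensor probabilities of this exercise are not restated here). If the same evidence e arrives at every step, the filtering recursion and its fixed point are

```latex
f_{t+1} = \alpha\, O_e\, T^{\top} f_t
\qquad\Longrightarrow\qquad
f^{*} = \alpha\, O_e\, T^{\top} f^{*},
```

where T is the transition matrix, O_e the diagonal matrix of observation probabilities P(e | X), and α a normalizing constant; solving this small system of equations (a normalized eigenvector of O_e Tᵀ) yields the fixed point analytically.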


Problem 9.2: Consider a simplified localization problem with a mobile robot that is randomly placed in an environment with a known map (see Fig. 1). The robot has a sensing system composed of a compass and four proximity sensors. Each sensor aims at detecting the presence of a wall (solid lines in the map) in a different direction: North (N), South (S), West (W) and East (E). The map is discrete and the robot can be in one of the 6 sections at each time step. At each subsequent time step the robot changes section with a probability of 80% and moves randomly to a different free neighboring section. As soon as the robot enters a new section or decides to stay in the current one, it takes a measurement with its proximity sensors. Each sensor has a false positive rate of 10% and a false negative rate of 0%. The sensor values are conditionally independent given the position of the robot. (Hint: A false positive rate of 10% in this context means that a wall is detected in a direction without a wall 1 out of 10 times; a false negative rate of 0% means that given that there is a wall in a direction, the wall is certainly detected.) Suppose that we cannot see the robot and the only information that it transmits are the sensed walls in the respective directions.
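One forward (filtering) step for this kind of localization can be sketched as follows. The wall layout and the transition matrix below are placeholders, since the map in Fig. 1 is not reproduced in the script; the snippet only illustrates the mechanics of prediction followed by evidence weighting, not the solution to the exercise.

```python
import numpy as np

# Hypothetical wall layout per section: which of (N, S, W, E) actually has a wall.
WALLS = [
    (True,  True,  True,  False),   # section 1 (placeholder)
    (True,  True,  False, False),   # section 2 (placeholder)
    (True,  False, False, True),    # section 3 (placeholder)
    (False, True,  True,  False),   # section 4 (placeholder)
    (True,  False, False, False),   # section 5 (placeholder)
    (False, True,  False, True),    # section 6 (placeholder)
]
FALSE_POSITIVE = 0.10   # a wall is reported although none is there
FALSE_NEGATIVE = 0.00   # a real wall is always detected

def observation_likelihood(section, reading):
    """P(sensor reading | robot is in this section); sensors are conditionally independent."""
    p = 1.0
    for wall, detected in zip(WALLS[section], reading):
        if wall:
            p *= 1 - FALSE_NEGATIVE if detected else FALSE_NEGATIVE
        else:
            p *= FALSE_POSITIVE if detected else 1 - FALSE_POSITIVE
    return p

def filter_step(belief, transition, reading):
    """One HMM forward step: predict with the transition model, then weight by the evidence."""
    predicted = transition.T @ belief
    weights = np.array([observation_likelihood(s, reading) for s in range(len(belief))])
    updated = predicted * weights
    return updated / updated.sum()

if __name__ == "__main__":
    belief = np.full(6, 1 / 6)                 # robot placed uniformly at random
    transition = np.full((6, 6), 0.8 / 5)      # placeholder: with the real map, spread the
    np.fill_diagonal(transition, 0.2)          # 80% movement mass over free neighbors only
    print(filter_step(belief, transition, (True, True, False, False)))
```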


Exercise 10 - Making Simple Decisions

Problem 10.1.2: Compute the expected utility of buying the book and of not buying it.
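For reference, the quantity to compute here is the standard expected utility of an action a given the available evidence e:

```latex
EU(a \mid e) = \sum_{s'} P\bigl(\mathrm{Result}(a) = s' \mid a, e\bigr)\, U(s'),
```

evaluated once for buying the book and once for not buying it; the rational choice in Problem 10.1.3 is the action with the larger value.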


Problem 10.1.3: What should Sam do?


Problem 10.2.2: Draw the decision tree that represents this problem.


Problem 10.2.3: Calculate the expected utility of buying c1, given no test.

Problem 10.2.4: Calculate the value of information of the test and derive an optimal conditional plan for the buyer. Start with determining the optimal decisions whether to buy the car given no test, a pass or a fail.
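The computation follows the standard definition of the value of perfect information: the expected utility of acting optimally after seeing the test outcome, minus the expected utility of the best action without the test.

```latex
VPI_{e}(E_j) \;=\; \Bigl(\sum_{k} P(E_j = e_{jk} \mid e)\; EU\bigl(\alpha_{e_{jk}} \mid e, E_j = e_{jk}\bigr)\Bigr) \;-\; EU(\alpha \mid e),
```

where α is the optimal action given only e, and α_{e_{jk}} is the optimal action once the outcome e_{jk} (here: pass or fail) is also known.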


Exercise 11 - MAKING COMPLEX DECISIONS

Problem 11.1.1: Assuming the transition model is deterministic and the discount factor is 1, find the value of all states.
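The update to iterate is the standard Bellman equation of value iteration:

```latex
U_{i+1}(s) \;\leftarrow\; R(s) + \gamma \max_{a} \sum_{s'} P(s' \mid s, a)\, U_i(s'),
```

which, with deterministic transitions and γ = 1 as assumed here, reduces to U(s) = R(s) + max_a U(s'_a), where s'_a is the unique successor of s under action a.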

Problem 11.1.2: Show the corresponding policy.


Problem 11.1.4: Compute the optimal policy for state (3, 1) after convergence. The utilities after convergence are shown in Fig. 1 (c).
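Once the utilities have converged, the optimal action in a state is obtained by a one-step lookahead:

```latex
\pi^{*}(s) \;=\; \operatorname*{arg\,max}_{a} \sum_{s'} P(s' \mid s, a)\, U(s'),
```

so for state (3, 1) one compares the expected converged utilities of the states reachable under each available action (using Fig. 1 (c)) and picks the best.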


References
- Russell, S.J. and P. Norvig (2010). Artificial Intelligence: A Modern Approach. Prentice Hall.
- Andrew W. Moore, Markov Decision Processes, Tutorial Slides. https://www.autonlab.org/_media/tutorials/mdp09.pdf


Exercise 12 - Decision Trees


Exams
General notes:
- Time is tight.
- Only a very small part, roughly 10%, is theory.
- The tasks are very similar to those on the exercise sheets, partly even identical tasks!
- Many questions do not really go into depth.

1) Erstklausur 2018, tasks:
1. Answer theory questions as true/false.
2. Apply various search algorithms to a given tree, i.e. mark their search order.
3. Apply CSP techniques: determine whether the CSP is arc consistent, apply forward checking.
4. Calculate probabilities.
5. Determine a proof tree based on a Wumpus world, roughly like in the Superman exercise.
6. First Order Logic and determining the SNF.
7. Logical reasoning by applying mathematical formulas, if one does not ve...

