There are five states:

- S1: B is on A; still playing.
- S2: B is on A; quit.
- S3: A and B on the table; still playing.
- S4: A and B on the table; quit.
- S5: A is on B.

S2, S4, and S5 are terminal states. In S1 and S3 there are two actions: play or quit. The utilities and the transition probabilities are as follows (using the notation in the textbook); transition probabilities not specified are 0.

M^{p}_{1,3} = 2/3.

M^{p}_{1,1} = 1/3.

M^{q}_{1,2} = 1.

M^{p}_{3,5} = 2/3.

M^{p}_{3,3} = 1/3.

M^{q}_{3,4} = 1.

R(2) = 0; R(4)=1; R(5)=5.

U([s_{0} ... s_{k}]) = R(s_{k}) - (the number of plays in the sequence).

There are only three different policies: to play in both S1 and S3; to play in S1 but not S3; and to quit in S1. It is easily verified that the first of these is the best. The expected value of S1, adopting the first policy, is given by the equations:

U(S1) = 1/3 U(S1) + 2/3 U(S3) - 1;

U(S3) = 1/3 U(S3) + 2/3 R(S5) - 1.

The solution is U(S1) = 2; U(S3) = 7/2.
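The claim that the all-play policy is best can be checked numerically. A minimal sketch (variable names such as U_S1 are mine; it also assumes that quitting itself costs no play, consistent with the terminal rewards R(2) = 0 and R(4) = 1):

```python
from fractions import Fraction

F = Fraction
R_S5 = F(5)  # reward for S5 (A on B)

# Policy 1: play in both S1 and S3.
U_S3 = (F(2, 3) * R_S5 - 1) / F(2, 3)   # from U(S3) = 1/3 U(S3) + 2/3 R(S5) - 1
U_S1 = (F(2, 3) * U_S3 - 1) / F(2, 3)   # from U(S1) = 1/3 U(S1) + 2/3 U(S3) - 1

# Policy 2: play in S1, quit in S3 (quitting assumed free, so U(S3) = R(4) = 1).
U_S1_p2 = (F(2, 3) * F(1) - 1) / F(2, 3)

# Policy 3: quit immediately in S1, so U(S1) = R(2) = 0.
U_S1_p3 = F(0)

print(U_S1, U_S3, U_S1_p2, U_S1_p3)  # 2 7/2 -1/2 0
```

Policy 1 dominates the other two, as claimed.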

** Part B **

i. This plan gives rise to the following tree of possible outcomes.

- With probability 2/3 * 2/3 = 4/9, both moves succeed, ending with A on B. Value: 5 - 2 = 3.
- With probability 2/3 * 1/3 = 2/9, the first move succeeds and the second fails, leaving A and B on the table. Value: 1 - 2 = -1.
- With probability 1/3, the first move fails, so B is still on A and the second move necessarily fails. Value: 0 - 2 = -2.

The expected value is therefore 3*(4/9) - 1*(2/9) - 2*(1/3) = 4/9.

ii. This plan gives rise to the following DAG of possible outcomes.

- Paths ending with A on B (value 5 - 4 = 1): total probability 4/9 + 4/27 + 4/27 + 4/81 = 64/81.
- Paths ending with A and B on the table (value 1 - 4 = -3): total probability 2/27 + 2/81 = 8/81.
- Paths ending with B on A (value 0 - 4 = -4): probability 1/9 = 9/81.

The expected value is therefore 1*(4/9 + 4/27 + 4/27 + 4/81) - 3*(2/27 + 2/81) - 4*(1/9) = (64 - 24 - 36)/81 = 4/81.
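Both expected values can be verified with exact arithmetic. A check sketch; the probability groupings are taken directly from the outcome trees above:

```python
from fractions import Fraction

F = Fraction
# Plan (i): three outcomes with their probabilities and values.
ev_i = F(4, 9) * 3 + F(2, 9) * (-1) + F(1, 3) * (-2)

# Plan (ii): group the DAG's paths by final value.
p_win = F(4, 9) + F(4, 27) + F(4, 27) + F(4, 81)   # value 1
p_mid = F(2, 27) + F(2, 81)                        # value -3
p_fail = F(1, 9)                                   # value -4
ev_ii = p_win * 1 + p_mid * (-3) + p_fail * (-4)

print(ev_i, ev_ii)  # 4/9 4/81
```

The three path groups in plan (ii) also sum to probability 1, as they must.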

** Part C **
There are three physical states:

- S1: B is on A.
- S2: A and B are on the table.
- S3: A is on B.

and three belief states:

- E1: You believe that B is on A.
- E2: You believe that A and B are on the table.
- E3: You believe that A is on B.

We have the following equations:

V(S1,E3) = 0.

V(S2,E3) = 1.

V(S3,E3) = 5.

V(S1,E2) = 2/3*V(S1,E2) + 1/3*V(S1,E3) - 1.

(If you are in S1,E2, then you will attempt to put A on B, and that will fail. There is a 2/3 probability that the report says it fails, in which case you are still in S1,E2, and a 1/3 probability that the report says it succeeds, in which case you are in S1,E3.)

V(S2,E2) = 2/3*2/3*V(S3,E3) + 2/3*1/3*V(S3,E2) + 1/3*2/3*V(S2,E2) + 1/3*1/3*V(S2,E3) - 1.

(There are four possible outcomes, depending on whether the action succeeds or fails and whether the report is correct or incorrect.)

V(S3,E2) = 2/3*V(S3,E2) + 1/3*V(S3,E3) - 1.

V(S1,E1) = 2/3*2/3*V(S2,E2) + 2/3*1/3*V(S2,E1) + 1/3*2/3*V(S1,E1) + 1/3*1/3*V(S1,E2) - 1.

V(S2,E1) = 2/3*V(S2,E1) + 1/3*V(S2,E2) - 1.

There is no way to attain the state S3,E1.

The solution is:
V(S3,E2) = 2.

V(S2,E2) = 16/7.

V(S2,E1) = -5/7.

V(S1,E2) = -3

V(S1,E1) = -30/49.
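Because each equation above introduces only one new unknown, the system can be solved by back-substitution. A check sketch (variable names such as V_S3E2 are mine):

```python
from fractions import Fraction

F = Fraction
V_S1E3, V_S2E3, V_S3E3 = F(0), F(1), F(5)

# V(S3,E2) = 2/3 V(S3,E2) + 1/3 V(S3,E3) - 1
V_S3E2 = (F(1, 3) * V_S3E3 - 1) / F(1, 3)
# V(S1,E2) = 2/3 V(S1,E2) + 1/3 V(S1,E3) - 1
V_S1E2 = (F(1, 3) * V_S1E3 - 1) / F(1, 3)
# V(S2,E2) = 4/9 V(S3,E3) + 2/9 V(S3,E2) + 2/9 V(S2,E2) + 1/9 V(S2,E3) - 1
V_S2E2 = (F(4, 9) * V_S3E3 + F(2, 9) * V_S3E2 + F(1, 9) * V_S2E3 - 1) / F(7, 9)
# V(S2,E1) = 2/3 V(S2,E1) + 1/3 V(S2,E2) - 1
V_S2E1 = (F(1, 3) * V_S2E2 - 1) / F(1, 3)
# V(S1,E1) = 4/9 V(S2,E2) + 2/9 V(S2,E1) + 2/9 V(S1,E1) + 1/9 V(S1,E2) - 1
V_S1E1 = (F(4, 9) * V_S2E2 + F(2, 9) * V_S2E1 + F(1, 9) * V_S1E2 - 1) / F(7, 9)

print(V_S3E2, V_S2E2, V_S2E1, V_S1E2, V_S1E1)  # 2 16/7 -5/7 -3 -30/49
```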

(i) Consider the following joint distribution:

p(A,B,C) = 0.45

p(A,~B,C) = 0.05

p(~A,B,~C) = 0.05

p(~A,~B,~C) = 0.45

The remaining probabilities are 0.

Then p(B|C) = p(B,C)/p(C) =

(p(A,B,C) + p(~A,B,C)) /
(p(A,B,C) + p(~A,B,C) + p(A,~B,C) + p(~A,~B,C)) =

0.45/(0.45+0.05) = 0.9.

And p(A|B) = p(A,B)/p(B) =

(p(A,B,C) + p(A,B,~C)) /
(p(A,B,C) + p(A,B,~C) + p(A,~B,C) + p(A,~B,~C)) =

0.45/(0.45+0.05) = 0.9.

But p(A|C) = 1.0.

On the other hand, consider the joint distribution:

p(A,B,~C) = 0.81

p(~A,B,C) = 0.09

p(~A,~B,C) = 0.01

p(~A,~B,~C) = 0.09

The remaining probabilities are 0.

Then p(A|B) = 0.81/(0.81+0.09) = 0.9; p(B|C) = 0.09/(0.09+0.01) = 0.9; but p(A|C) = 0. So the possible range is [0,1]. (It is possible to show that any intermediate value is also attainable.)
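Both distributions can be checked mechanically. A verification sketch; the helper cond and the (A, B, C) tuple encoding are mine:

```python
from fractions import Fraction

F = Fraction
# Atoms are (A, B, C) truth-value triples mapping to probabilities (zeros omitted).
d1 = {(1, 1, 1): F(45, 100), (1, 0, 1): F(5, 100),
      (0, 1, 0): F(5, 100), (0, 0, 0): F(45, 100)}   # should give p(A|C) = 1
d2 = {(1, 1, 0): F(81, 100), (0, 1, 1): F(9, 100),
      (0, 0, 1): F(1, 100), (0, 0, 0): F(9, 100)}    # should give p(A|C) = 0

def cond(table, event, given):
    """Conditional probability p(event | given) from a joint table."""
    num = sum(p for atom, p in table.items() if event(*atom) and given(*atom))
    den = sum(p for atom, p in table.items() if given(*atom))
    return num / den

A = lambda a, b, c: a
B = lambda a, b, c: b
C = lambda a, b, c: c

for d in (d1, d2):
    print(cond(d, A, B), cond(d, B, C), cond(d, A, C))  # p(A|B), p(B|C), p(A|C)
```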

(ii) Note: This turned out to be a lot harder than I anticipated. My apologies.

Let e be any small positive value. Consider the following joint distribution:

p(A,B,C) = 0.45e - 0.5e^{2}

p(A,B,~C) = 0.45e + 0.5e^{2}

p(A,~B,C) = 0.05e

p(A,~B,~C) = 1 - 1.95e

p(~A,B,C) = 0.5e^{2}

p(~A,B,~C) = 0.1e - 0.5e^{2}

p(~A,~B,C) = 0

p(~A,~B,~C) = 0.9e

Then p(A,B) = 0.9e; p(A,~B) = 1 - 1.9e; p(A,C) = 0.5e - 0.5e^{2};
p(B,C) = 0.45e; p(~B,C) = 0.05e; p(B,~C) = 0.55e;
p(A) = 1 - e; p(B) = e; p(C) = 0.5e.

Thus p(A|B) = 0.9; p(B|C) = 0.9; p(A,C) = p(A)p(C), so A and C are absolutely independent, and p(A|C) = 1 - e.

Conversely, consider the joint distribution:

p(A,B,C) = 0.

p(A,B,~C) = 0.9e

p(A,~B,C) = 0.1e^{2}

p(A,~B,~C) = 0.1e - 0.1e^{2}

p(~A,B,C) = 0.09e

p(~A,B,~C) = 0.01e

p(~A,~B,C) = 0.01e - 0.1e^{2}

p(~A,~B,~C) = 1 - the sum of the above.

Then p(A,B) = 0.9e; p(A,~B) = 0.1e; p(~A,B) = 0.1e;
p(A,C) = 0.1e^{2}; p(B,C) = 0.09e;
p(~B,C) = 0.01e; p(A) = e; p(B) = e; p(C) = 0.1e.

Thus p(A|B) = 0.9; p(B|C) = 0.9; p(A,C) = p(A)p(C) so A and C are absolutely independent; and p(A|C) = e. Thus, p(A|C) can be as close to 0 or as close to 1 as desired. (It cannot be made equal to 0 or 1; I omit the proof.)
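Both e-parameterized families can be checked for a concrete small e. A verification sketch, substituting e = 1/100 for the symbolic e (any small positive value works):

```python
from fractions import Fraction

F = Fraction
e = F(1, 100)  # any small positive value

# First family (should give p(A|C) = 1 - e); atoms keyed by (A, B, C):
d1 = {
    (1, 1, 1): F(45, 100) * e - e * e / 2,
    (1, 1, 0): F(45, 100) * e + e * e / 2,
    (1, 0, 1): F(5, 100) * e,
    (1, 0, 0): 1 - F(195, 100) * e,
    (0, 1, 1): e * e / 2,
    (0, 1, 0): F(10, 100) * e - e * e / 2,
    (0, 0, 1): F(0),
    (0, 0, 0): F(90, 100) * e,
}

# Second family (should give p(A|C) = e):
d2 = {
    (1, 1, 1): F(0),
    (1, 1, 0): F(90, 100) * e,
    (1, 0, 1): e * e / 10,
    (1, 0, 0): e / 10 - e * e / 10,
    (0, 1, 1): F(9, 100) * e,
    (0, 1, 0): F(1, 100) * e,
    (0, 0, 1): e / 100 - e * e / 10,
}
d2[(0, 0, 0)] = 1 - sum(d2.values())  # "1 - the sum of the above"

def cond(table, i, j):
    """p(X_i | X_j), where index 0 is A, 1 is B, 2 is C."""
    num = sum(p for atom, p in table.items() if atom[i] and atom[j])
    den = sum(p for atom, p in table.items() if atom[j])
    return num / den

for d in (d1, d2):
    assert sum(d.values()) == 1
    print(cond(d, 0, 1), cond(d, 1, 2), cond(d, 0, 2))  # p(A|B), p(B|C), p(A|C)
```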

(iii) p(A|C) = p(A,B|C) + p(A,~B|C) = p(A|B,C) p(B|C) + p(A|~B,C) p(~B|C). Now, using the independence assumption, p(A|B,C) = p(A|B) = 0.9. We are given that p(B|C) = 0.9; therefore p(~B|C) = 0.1. p(A|~B,C) is entirely unconstrained; it can be any value from 0 to 1 inclusive. Therefore we have 0.81 <= p(A|C) <= 0.91.
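The two endpoints follow by plugging in the extreme values of the unconstrained term (a one-line check; x stands for p(A|~B,C)):

```python
from fractions import Fraction

F = Fraction
# p(A|C) = p(A|B,C)*p(B|C) + x*p(~B|C), with p(A|B,C) = p(B|C) = 0.9.
endpoints = [F(9, 10) * F(9, 10) + x * F(1, 10) for x in (F(0), F(1))]
print(endpoints)  # 81/100 and 91/100
```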

** Part B **

We have p(A,B|C) <= p(A|C) = 0.9. On the other hand, p(A,B|C) = 1 - p(~(A&B)|C) = 1 - p(~A V ~B | C). Moreover p(~A V ~B | C) <= p(~A|C) + p(~B|C) = (1 - p(A|C)) + (1 - p(B|C)) = 0.2. Thus p(A,B|C) >= 0.8. So we have shown that p(A,B|C) is between 0.8 and 0.9.

These extreme values can be attained, even if we assume that A and B are absolutely independent, using the following joint distributions:

p(A,B,C) = 0.36

p(A,B,~C) = 0.28

p(A,~B,C) = 0.0

p(A,~B,~C) = 0.16

p(~A,B,C) = 0.0

p(~A,B,~C) = 0.16

p(~A,~B,C) = 0.04

p(~A,~B,~C) = 0.0

and the joint distribution:

p(A,B,C) = 0.32

p(A,B,~C) = 0.32

p(A,~B,C) = 0.04

p(A,~B,~C) = 0.12

p(~A,B,C) = 0.04

p(~A,B,~C) = 0.12

p(~A,~B,C) = 0.0

p(~A,~B,~C) = 0.04

The first above has p(A,B|C)=0.9; the second has p(A,B|C)=0.8.
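Both tables can be verified, including the absolute independence of A and B. A check sketch; the helper prob and the (A, B, C) tuple encoding are mine:

```python
from fractions import Fraction

F = Fraction
# Atoms keyed by (A, B, C); entries are the two tables above (zeros omitted).
d_max = {(1, 1, 1): F(36, 100), (1, 1, 0): F(28, 100), (1, 0, 0): F(16, 100),
         (0, 1, 0): F(16, 100), (0, 0, 1): F(4, 100)}
d_min = {(1, 1, 1): F(32, 100), (1, 1, 0): F(32, 100), (1, 0, 1): F(4, 100),
         (1, 0, 0): F(12, 100), (0, 1, 1): F(4, 100), (0, 1, 0): F(12, 100),
         (0, 0, 0): F(4, 100)}

def prob(table, pred):
    """Probability of the event pred(A, B, C) under a joint table."""
    return sum(q for atom, q in table.items() if pred(*atom))

for d in (d_max, d_min):
    p_ab_given_c = prob(d, lambda a, b, c: a and b and c) / prob(d, lambda a, b, c: c)
    indep = prob(d, lambda a, b, c: a and b) == \
        prob(d, lambda a, b, c: a) * prob(d, lambda a, b, c: b)
    print(p_ab_given_c, indep)  # 9/10 True, then 4/5 True
```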

(iii) Under this independence assumption p(A,B|C) = p(A|C)p(B|C) = 0.81.