/flipv2/20121113-091142-2.5K-ReLST-asneeded_epochs_50_5runs_noalphadecay/stdout-flip-2.5K_0.txt
https://bitbucket.org/evan13579b/soar-ziggurat
- Seeding... 0
- dir: dir isU
- Python-Soar Flip environment.
- To accept commands from an external sml process, you'll need to
- type 'slave <log file> <n decisions>' at the prompt...
- sourcing 'flip_predict.soar'
- ***********
- Total: 11 productions sourced.
- seeding Soar with 0 ...
- soar> Entering slave mode:
- - log file 'rl-slave-2.5K_0.log'....
- - will exit slave mode after 2500 decisions
- waiting for commands from an externally connected sml process...
- -/|sleeping...
- \sleeping...
- -sleeping...
- /sleeping...
- |sleeping...
- \-/|\-/|\-/|sleeping...
- \-/|\-/sleeping...
- |1: O: O1 (predict-yes)
- I see 0 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, False)
- predict error 1
- dir: dir isU
- rule alias: '*'
- rule alias: '*'
- \-/|\-/2: O: O4 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- |\-3: O: O5 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- /|\4: O: O7 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- -/|5: O: O9 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- \-/6: O: O11 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, False)
- predict error 1
- dir: dir isU
- |\7: O: O14 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- -/|8: O: O15 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- \-9: O: O17 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- /|\10: O: O19 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, False)
- predict error 1
- dir: dir isU
- -/|11: O: O22 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- rule alias: '*'
- rule alias: '*'
- rule alias: '*'
- rule alias: '*'
- \12: O: O24 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- -/|13: O: O26 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, False)
- predict error 1
- dir: dir isU
- \-14: O: O28 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- /|15: O: O29 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- \-/16: O: O31 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- |\-17: O: O34 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- /|\18: O: O36 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- -/|19: O: O38 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- \-/20: O: O40 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- |\-21: O: O41 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, False)
- predict error 1
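[Editorial aside: by this point in the log every (state, direction) pair has been observed, so the two-state flip dynamics can be reconstructed from the `(next state, see, prediction correct?)` tuples. The sketch below is an inference from this log, not the environment's actual implementation; the state, direction, and operator names simply mirror the log.]

```python
# Transition table inferred from the logged (next state, see) tuples.
# This is a reconstruction, not the flip_predict.soar source.
TRANSITIONS = {
    # (state, direction) -> (next_state, observation)
    ("State-A", "U"): ("State-A", 0),
    ("State-A", "L"): ("State-A", 0),
    ("State-A", "R"): ("State-B", 1),
    ("State-B", "U"): ("State-B", 0),
    ("State-B", "R"): ("State-B", 0),
    ("State-B", "L"): ("State-A", 1),
}

def step(state, direction, prediction):
    """One decision cycle: returns (next_state, see, correct),
    where 'correct' means the predict-yes/no choice matched the
    observation produced in the next state, as in the ENV lines."""
    next_state, see = TRANSITIONS[(state, direction)]
    correct = (prediction == "predict-yes") == (see == 1)
    return next_state, see, correct
```

Under this reconstruction, `predict-yes` is only correct on the two state-changing moves (A,R) and (B,L), which is consistent with every error the agent logs above.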
- dir: dir isU
- /22: O: O44 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- |\-23: O: O46 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- /|\24: O: O48 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- -/|25: O: O50 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, False)
- predict error 1
- dir: dir isL
- \-/26: O: O51 (predict-yes)
- I see 0 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- |\27: O: O53 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- -/|28: O: O55 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, False)
- predict error 1
- dir: dir isU
- \-/29: O: O57 (predict-yes)
- I see 0 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, False)
- predict error 1
- dir: dir isU
- |\-/30: O: O60 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- |\-31: O: O61 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, False)
- predict error 1
- dir: dir isU
- /32: O: O64 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- |\-33: O: O65 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- /|\34: O: O68 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- -/|35: O: O69 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- \-/36: O: O71 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- |\37: O: O74 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- -/|38: O: O75 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- \-39: O: O77 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, False)
- predict error 1
- dir: dir isU
- /|40: O: O80 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- \-/41: O: O81 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- |42: O: O83 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- \-/43: O: O86 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- |\-44: O: O87 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- /|\45: O: O89 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, False)
- predict error 1
- dir: dir isU
- -/|46: O: O92 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- \-/47: O: O93 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, False)
- predict error 1
- dir: dir isR
- |\-48: O: O96 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, False)
- predict error 1
- dir: dir isL
- /|49: O: O97 (predict-yes)
- I see 0 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- \-/50: O: O100 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- |\-/|\-sleeping...
- /sleeping...
- |51: O: O102 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- \52: O: O104 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, False)
- predict error 1
- dir: dir isL
- -/|53: O: O106 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, False)
- predict error 1
- dir: dir isL
- \-/54: O: O107 (predict-yes)
- I see 0 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, False)
- predict error 1
- dir: dir isR
- |\-55: O: O109 (predict-yes)
- I see 0 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- /|\56: O: O112 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- -/|57: O: O114 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, False)
- predict error 1
- dir: dir isR
- \-/58: O: O115 (predict-yes)
- I see 0 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- |\-59: O: O118 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- /|\60: O: O119 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, False)
- predict error 1
- dir: dir isU
- -/|61: O: O122 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- rule alias: '*'
- rule alias: '*'
- rule alias: '*'
- rule alias: '*'
- rule alias: '*'
- rule alias: '*'
- rule alias: '*'
- rule alias: '*'
- rule alias: '*'
- rule alias: '*'
- rule alias: '*'
- \62: O: O123 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, False)
- predict error 1
- dir: dir isU
- -/63: O: O126 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- |\64: O: O127 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, False)
- predict error 1
- dir: dir isR
- -65: O: O129 (predict-yes)
- I see 0 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, False)
- predict error 1
- dir: dir isR
- /|\66: O: O131 (predict-yes)
- I see 0 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, False)
- predict error 1
- dir: dir isR
- -/67: O: O133 (predict-yes)
- I see 0 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, False)
- predict error 1
- dir: dir isR
- |68: O: O135 (predict-yes)
- I see 0 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, False)
- predict error 1
- dir: dir isR
- \-69: O: O137 (predict-yes)
- I see 0 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, False)
- predict error 1
- dir: dir isL
- /|70: O: O139 (predict-yes)
- I see 0 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- \-/71: O: O141 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, False)
- predict error 1
- dir: dir isL
- rule alias: '*'
- rule alias: '*'
- rule alias: '*'
- rule alias: '*'
- rule alias: '*'
- |72: O: O143 (predict-yes)
- I see 0 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, False)
- predict error 1
- dir: dir isR
- \-/73: O: O146 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, False)
- predict error 1
- dir: dir isR
- |\-74: O: O147 (predict-yes)
- I see 0 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, False)
- predict error 1
- dir: dir isR
- /|\75: O: O150 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- -/76: O: O151 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- |\77: O: O154 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- -/|78: O: O156 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- \-/79: O: O158 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- |\-80: O: O160 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- /|81: O: O162 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- rule alias: '*'
- rule alias: '*'
- rule alias: '*'
- \82: O: O164 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- -/|83: O: O165 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- \-/84: O: O167 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, False)
- predict error 1
- dir: dir isU
- |\-85: O: O169 (predict-yes)
- I see 0 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, False)
- predict error 1
- dir: dir isL
- /|\86: O: O172 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, False)
- predict error 1
- dir: dir isU
- -/|87: O: O174 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- \-/88: O: O176 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- |\-89: O: O178 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- /|\90: O: O179 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- -/|91: O: O182 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- rule alias: '*'
- rule alias: '*'
- rule alias: '*'
- \92: O: O184 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- -/|93: O: O186 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- \-/94: O: O188 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- |\-95: O: O189 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, False)
- predict error 1
- dir: dir isU
- /96: O: O191 (predict-yes)
- I see 0 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, False)
- predict error 1
- dir: dir isU
- |\97: O: O194 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- -/|98: O: O195 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- \-/99: O: O197 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- |\100: O: O199 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, False)
- predict error 1
- dir: dir isR
- -/|101: O: O202 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- rule alias: '*'
- \-/|\-/|\-/|\-/|\-/|\-/|\-/|\-/|\sleeping...
- -102: O: O204 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- /|\103: O: O205 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- -/104: O: O208 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- |\-105: O: O209 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, False)
- predict error 1
- dir: dir isL
- /|\106: O: O211 (predict-yes)
- I see 0 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, False)
- predict error 1
- dir: dir isU
- -/|107: O: O214 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- \-/108: O: O216 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- |\-109: O: O218 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- /|\110: O: O220 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- -/|111: O: O222 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- rule alias: '*'
- rule alias: '*'
- rule alias: '*'
- rule alias: '*'
- \112: O: O224 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- -/|113: O: O226 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- \-/114: O: O227 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- |\115: O: O230 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- -/|116: O: O232 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- \-/117: O: O234 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- |\-118: O: O236 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, False)
- predict error 1
- dir: dir isR
- /|\119: O: O238 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, False)
- predict error 1
- dir: dir isR
- -/|120: O: O240 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- \-/121: O: O241 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, False)
- predict error 1
- dir: dir isR
- rule alias: '*'
- rule alias: '*'
- rule alias: '*'
- rule alias: '*'
- rule alias: '*'
- rule alias: '*'
- rule alias: '*'
- rule alias: '*'
- rule alias: '*'
- |122: O: O244 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- \-/123: O: O246 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- |\124: O: O247 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, False)
- predict error 1
- dir: dir isL
- -/125: O: O250 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, False)
- predict error 1
- dir: dir isL
- |\-126: O: O252 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- /|\127: O: O254 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- -/|128: O: O256 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- \-/129: O: O257 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, False)
- predict error 1
- dir: dir isL
- |\-130: O: O260 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- /|\131: O: O262 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- rule alias: '*'
- -132: O: O264 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- /|\133: O: O266 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- -/134: O: O268 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, False)
- predict error 1
- dir: dir isL
- |\-135: O: O270 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, False)
- predict error 1
- dir: dir isL
- /|136: O: O272 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- \-/137: O: O274 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- |\138: O: O276 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, False)
- predict error 1
- dir: dir isR
- -/|139: O: O278 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- \-/140: O: O280 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, False)
- predict error 1
- dir: dir isR
- |\-141: O: O282 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, False)
- predict error 1
- dir: dir isL
- rule alias: '*'
- rule alias: '*'
- /142: O: O284 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, False)
- predict error 1
- dir: dir isL
- |\143: O: O285 (predict-yes)
- I see 0 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, False)
- predict error 1
- dir: dir isU
- -/|144: O: O288 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- \-145: O: O290 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- /|\146: O: O292 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, False)
- predict error 1
- dir: dir isU
- -/|147: O: O294 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- \-/148: O: O296 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- |\-149: O: O298 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- /|\150: O: O300 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, False)
- predict error 1
- dir: dir isU
- -/|151: O: O302 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- \152: O: O304 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- -/|153: O: O306 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- \-154: O: O308 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- /|\155: O: O310 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- -/156: O: O312 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, False)
- predict error 1
- dir: dir isL
- |\-157: O: O314 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, False)
- predict error 1
- dir: dir isR
- /|158: O: O316 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, False)
- predict error 1
- dir: dir isR
- \-/159: O: O318 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- |\160: O: O319 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- -/|161: O: O322 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, False)
- predict error 1
- dir: dir isR
- \162: O: O324 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- -/|163: O: O326 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- \-/164: O: O328 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- |\165: O: O329 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- -/|166: O: O332 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, False)
- predict error 1
- dir: dir isU
- \-167: O: O334 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- /|\168: O: O335 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- -/|169: O: O338 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, False)
- predict error 1
- dir: dir isL
- \-/170: O: O339 (predict-yes)
- I see 0 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- |\-171: O: O342 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- /172: O: O344 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, False)
- predict error 1
- dir: dir isL
- |\-173: O: O345 (predict-yes)
- I see 0 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- /|\174: O: O348 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- -/|175: O: O350 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- \-/176: O: O352 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- |\-177: O: O354 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, False)
- predict error 1
- dir: dir isL
- /|178: O: O355 (predict-yes)
- I see 0 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- \-179: O: O358 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, False)
- predict error 1
- dir: dir isU
- /|\180: O: O360 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- -/181: O: O362 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- |182: O: O364 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- \-/183: O: O366 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- |\184: O: O368 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- -/185: O: O370 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- |\-186: O: O372 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- /|\187: O: O373 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- -/|188: O: O376 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- \-/189: O: O378 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, False)
- predict error 1
- dir: dir isL
- |\-190: O: O379 (predict-yes)
- I see 0 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- /|191: O: O382 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, False)
- predict error 1
- dir: dir isR
- \192: O: O384 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- -/193: O: O386 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- |194: O: O388 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- \-/195: O: O390 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- |\-196: O: O392 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- /|\197: O: O394 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- -/|198: O: O396 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- \-/199: O: O398 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- |\-200: O: O399 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- /|\201: O: O402 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, False)
- predict error 1
- dir: dir isL
- -/202: O: O403 (predict-yes)
- I see 0 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- |\-203: O: O406 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- /|\204: O: O408 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, False)
- predict error 1
- dir: dir isR
- -/205: O: O410 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- |\206: O: O412 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- -/|207: O: O414 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- \-/208: O: O416 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- |\209: O: O418 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- -/|210: O: O419 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
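By this point in the run the trace shows the agent answering predict-yes almost exclusively on L-moves out of State-B and R-moves out of State-A, and predict-no elsewhere. Assuming the flip dynamics inferred from the trace hold, the ideal predictor is the one sketched below; `ideal_prediction` is an illustrative name, not something defined in the Soar agent.

```python
# Sketch of the predictor implied by the trace: "see" is 1 exactly when
# the move switches states (A --R--> B or B --L--> A), so predict-yes is
# correct only for those two (state, direction) pairs. This is an
# inference from the log, not the agent's actual learned RL rules.
def ideal_prediction(state, direction):
    flips = {("State-A", "R"), ("State-B", "L")}
    return "predict-yes" if (state, direction) in flips else "predict-no"
```

The residual "predict error 1" lines around this region correspond to cycles where the agent's learned preferences still disagree with this rule.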
- \-211: O: O422 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, False)
- predict error 1
- dir: dir isU
- /212: O: O424 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- |\-213: O: O426 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- /|\214: O: O428 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- -/|215: O: O429 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- \-/216: O: O432 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- |\-217: O: O434 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, False)
- predict error 1
- dir: dir isL
- /|218: O: O435 (predict-yes)
- I see 0 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- \-219: O: O437 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, False)
- predict error 1
- dir: dir isU
- /|\220: O: O440 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- -/|221: O: O441 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- \222: O: O444 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- -/|223: O: O445 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- \-/224: O: O448 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- |\-225: O: O450 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- /|226: O: O452 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- \-/227: O: O454 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- |\-228: O: O455 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- /|229: O: O457 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- \-230: O: O460 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- /|\231: O: O461 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- -232: O: O464 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- /|233: O: O466 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, False)
- predict error 1
- dir: dir isU
- \-/234: O: O468 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- |\-235: O: O470 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- /|\236: O: O472 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- -/|237: O: O473 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- \-/238: O: O475 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- |\-239: O: O478 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- /|\240: O: O479 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- -/|241: O: O482 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- \242: O: O483 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, False)
- predict error 1
- dir: dir isL
- -/|243: O: O485 (predict-yes)
- I see 0 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- \-/244: O: O487 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- |\-245: O: O490 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- /|\246: O: O492 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- -/|247: O: O494 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- \-/248: O: O495 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- |\249: O: O498 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- -/|250: O: O500 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- \-251: O: O502 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- /252: O: O503 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- |\253: O: O506 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- -/|254: O: O508 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- \-/255: O: O509 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, False)
- predict error 1
- dir: dir isL
- |\256: O: O511 (predict-yes)
- I see 0 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- -/|257: O: O514 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- \-/258: O: O516 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- |\-259: O: O517 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- /|\260: O: O520 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- -/261: O: O521 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, False)
- predict error 1
- dir: dir isR
- |262: O: O523 (predict-yes)
- I see 0 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, False)
- predict error 1
- dir: dir isR
- \-263: O: O526 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- /|264: O: O528 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- \-/265: O: O529 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- |\-266: O: O531 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- /|267: O: O533 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- \-268: O: O536 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- /|269: O: O538 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, False)
- predict error 1
- dir: dir isU
- \-/270: O: O540 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- |\-271: O: O542 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- /272: O: O544 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- |\-273: O: O546 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- /|274: O: O547 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- \-/275: O: O550 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- |\-276: O: O552 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- /|\277: O: O554 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- -/278: O: O555 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- |\-279: O: O558 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- /|280: O: O559 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- \-/281: O: O561 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- |282: O: O563 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- \-/283: O: O565 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, False)
- predict error 1
- dir: dir isL
- |\-284: O: O568 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- /|\-sleeping...
- /285: O: O569 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- |\-286: O: O572 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, False)
- predict error 1
- dir: dir isR
- /|\287: O: O573 (predict-yes)
- I see 0 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- -/288: O: O575 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- |\289: O: O577 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- -/290: O: O579 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- |291: O: O582 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- \292: O: O583 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- -/293: O: O586 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- |\-294: O: O588 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- /|\295: O: O590 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- -/296: O: O592 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- |\-297: O: O593 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, False)
- predict error 1
- dir: dir isR
- /|298: O: O596 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- \-299: O: O597 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- /|300: O: O600 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- \-/|\-301: O: O602 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- /302: O: O604 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- |\-303: O: O605 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- /|\-304: O: O608 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- /|305: O: O610 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- \-/306: O: O612 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- |307: O: O613 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- \-/308: O: O616 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- |\-309: O: O618 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- /|\310: O: O620 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- -/|311: O: O622 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- \312: O: O623 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- -/|313: O: O626 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- \-/314: O: O628 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- |\-315: O: O630 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- /|\316: O: O632 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- -/317: O: O634 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- |\-318: O: O636 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- /|\319: O: O638 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- -/320: O: O640 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- |\-321: O: O641 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- /322: O: O644 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- |\-323: O: O645 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- /|324: O: O648 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- \-/325: O: O649 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- |\-326: O: O652 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- /|\327: O: O654 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- -/|328: O: O656 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- \-/329: O: O657 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- |\-330: O: O660 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- /|\331: O: O661 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- -332: O: O663 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- /|\333: O: O666 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, False)
- predict error 1
- dir: dir isL
- -/|334: O: O668 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- \-/|335: O: O670 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- \-336: O: O672 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- /|\337: O: O674 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- -/|338: O: O676 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- \-/339: O: O677 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- |\340: O: O680 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- -/|341: O: O681 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- \342: O: O684 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- -/|343: O: O686 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- \-/344: O: O687 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, False)
- predict error 1
- dir: dir isR
- |\-345: O: O689 (predict-yes)
- I see 0 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- /|346: O: O692 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- \347: O: O694 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- -/|348: O: O696 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- \-/349: O: O698 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- |\350: O: O699 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- -/|351: O: O701 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- \352: O: O704 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- -/353: O: O706 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- |\354: O: O707 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- -/|355: O: O709 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- \-/356: O: O711 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- |\-357: O: O713 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- /|\358: O: O716 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- -359: O: O718 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- /|\360: O: O720 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- -/361: O: O722 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, False)
- predict error 1
- dir: dir isL
- |362: O: O724 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- \-/363: O: O726 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- |\-/364: O: O728 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- |\-365: O: O730 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- /|\366: O: O731 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- -/|367: O: O734 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 368: O: O736 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 369: O: O737 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- 370: O: O740 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 371: O: O742 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 372: O: O744 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 373: O: O745 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, False)
- predict error 1
- dir: dir isL
- 374: O: O748 (predict-no)
- I see 0 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 375: O: O750 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 376: O: O752 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 377: O: O754 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 378: O: O756 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 379: O: O758 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 380: O: O759 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- 381: O: O762 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 382: O: O764 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 383: O: O766 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 384: O: O768 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 385: O: O770 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 386: O: O772 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 387: O: O774 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 388: O: O776 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 389: O: O778 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 390: O: O780 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 391: O: O782 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 392: O: O783 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- 393: O: O785 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- 394: O: O788 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 395: O: O790 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 396: O: O792 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 397: O: O794 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 398: O: O796 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 399: O: O798 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 400: O: O800 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 401: O: O802 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 402: O: O804 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 403: O: O805 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- 404: O: O808 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 405: O: O809 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- 406: O: O811 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- 407: O: O814 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 408: O: O816 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 409: O: O818 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 410: O: O820 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 411: O: O821 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- 412: O: O824 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 413: O: O825 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- 414: O: O827 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- 415: O: O829 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- 416: O: O832 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 417: O: O834 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 418: O: O836 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 419: O: O838 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 420: O: O839 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- 421: O: O842 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 422: O: O844 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 423: O: O846 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 424: O: O848 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 425: O: O849 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- 426: O: O852 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 427: O: O853 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- 428: O: O856 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 429: O: O858 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 430: O: O859 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- 431: O: O861 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- 432: O: O863 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- 433: O: O866 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 434: O: O867 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- 435: O: O870 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 436: O: O871 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- 437: O: O873 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- 438: O: O876 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 439: O: O878 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 440: O: O880 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 441: O: O882 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 442: O: O884 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 443: O: O886 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 444: O: O888 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 445: O: O890 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 446: O: O892 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 447: O: O893 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- 448: O: O896 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 449: O: O897 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- 450: O: O900 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 451: O: O901 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- 452: O: O904 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 453: O: O906 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 454: O: O908 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 455: O: O910 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 456: O: O912 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 457: O: O914 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 458: O: O915 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- 459: O: O918 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 460: O: O919 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- 461: O: O922 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 462: O: O923 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- 463: O: O926 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 464: O: O928 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 465: O: O930 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 466: O: O931 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- 467: O: O934 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 468: O: O936 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 469: O: O937 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- 470: O: O940 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 471: O: O942 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 472: O: O944 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 473: O: O946 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 474: O: O947 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- 475: O: O950 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 476: O: O952 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 477: O: O954 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 478: O: O956 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 479: O: O958 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 480: O: O959 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- 481: O: O961 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- 482: O: O964 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 483: O: O965 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- 484: O: O968 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 485: O: O970 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 486: O: O972 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 487: O: O974 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 488: O: O975 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- 489: O: O978 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 490: O: O980 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 491: O: O982 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- /492: O: O983 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- |\-493: O: O986 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- /|\494: O: O987 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- -/|495: O: O990 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- \-/496: O: O992 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- |\497: O: O994 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- -/|498: O: O996 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- \-/499: O: O998 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- |\-500: O: O999 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- /|\-/|501: O: O1001 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- \502: O: O1003 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- -/|503: O: O1005 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- \-/|504: O: O1008 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- \-/505: O: O1010 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- |\-506: O: O1012 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- /|507: O: O1014 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- \-/|508: O: O1016 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- \-/509: O: O1018 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- |\-510: O: O1020 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- /|\511: O: O1022 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- -512: O: O1024 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- /|\513: O: O1026 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- -/|514: O: O1027 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- \-/515: O: O1029 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- |\-516: O: O1031 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- /|517: O: O1034 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- \-/518: O: O1035 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- |\-519: O: O1038 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- /|520: O: O1039 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- \-/521: O: O1042 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- |522: O: O1043 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- \-/523: O: O1046 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- |\-524: O: O1047 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- /|\525: O: O1050 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- -/526: O: O1051 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- |\-527: O: O1054 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- /|\528: O: O1056 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- -/|\529: O: O1057 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- -/|530: O: O1059 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- \-531: O: O1062 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- /532: O: O1064 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- |\533: O: O1065 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- -/|534: O: O1067 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- \535: O: O1070 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- -/536: O: O1072 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- |\-537: O: O1074 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- /538: O: O1076 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- |\-539: O: O1078 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- /|\540: O: O1080 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- -/|541: O: O1082 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- \542: O: O1083 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- -543: O: O1085 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- /|\544: O: O1088 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- -/545: O: O1090 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- |\546: O: O1092 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- -/|547: O: O1094 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- \-548: O: O1095 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- /|\549: O: O1098 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- -/|\sleeping...
- -550: O: O1100 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- /|\551: O: O1101 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- -552: O: O1103 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- /|\553: O: O1106 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- -/|554: O: O1107 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- \-/555: O: O1109 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- |\556: O: O1112 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- -/|557: O: O1114 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- \-/558: O: O1115 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- |\-559: O: O1117 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- /|\560: O: O1120 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- -/|561: O: O1122 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- \562: O: O1123 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- -/|563: O: O1126 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- \-564: O: O1128 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- /|\565: O: O1129 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- -/|566: O: O1132 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- \-/567: O: O1134 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- |\568: O: O1135 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- -/|569: O: O1137 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- \-/570: O: O1140 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- |\-571: O: O1142 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- /572: O: O1144 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- |\-573: O: O1146 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- /|\574: O: O1148 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- -/|575: O: O1150 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- \-/576: O: O1151 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- |\577: O: O1153 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- -/|578: O: O1156 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- \-579: O: O1157 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- /|\580: O: O1159 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- -/581: O: O1162 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- |582: O: O1164 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- \-/583: O: O1165 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- |\-584: O: O1168 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- /|\585: O: O1170 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- -/|586: O: O1171 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- \-/587: O: O1173 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- |\-588: O: O1176 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- /|\589: O: O1178 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- -/|590: O: O1179 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- \-/591: O: O1182 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- |592: O: O1183 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- \-/593: O: O1186 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- |\-594: O: O1187 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- /|\595: O: O1190 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- -/|596: O: O1192 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- \-/597: O: O1193 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- |\-598: O: O1196 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- /|\599: O: O1198 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- -/|600: O: O1200 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- \-/601: O: O1202 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- |602: O: O1204 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- \-/603: O: O1206 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- |\604: O: O1208 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- -/|605: O: O1209 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- \-606: O: O1212 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- /|\607: O: O1214 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- -/608: O: O1215 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- |\-609: O: O1218 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- /|\610: O: O1220 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- -/|611: O: O1222 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- \612: O: O1224 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- -/|613: O: O1225 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- \-614: O: O1227 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- /|615: O: O1230 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 616: O: O1232 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 617: O: O1233 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- 618: O: O1236 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 619: O: O1237 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- 620: O: O1240 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 621: O: O1242 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 622: O: O1243 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- 623: O: O1245 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- 624: O: O1247 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- 625: O: O1250 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 626: O: O1252 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 627: O: O1254 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 628: O: O1256 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 629: O: O1258 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 630: O: O1260 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 631: O: O1262 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 632: O: O1264 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 633: O: O1265 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- 634: O: O1267 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- 635: O: O1270 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 636: O: O1271 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- 637: O: O1274 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 638: O: O1275 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- 639: O: O1278 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 640: O: O1279 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- 641: O: O1282 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 642: O: O1283 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- 643: O: O1286 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 644: O: O1288 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 645: O: O1290 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 646: O: O1292 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 647: O: O1293 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- 648: O: O1295 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- 649: O: O1297 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- 650: O: O1300 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 651: O: O1302 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 652: O: O1304 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 653: O: O1305 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- 654: O: O1307 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- 655: O: O1309 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- 656: O: O1312 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 657: O: O1313 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- 658: O: O1315 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- 659: O: O1317 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- 660: O: O1320 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 661: O: O1322 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 662: O: O1324 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 663: O: O1326 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 664: O: O1328 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 665: O: O1330 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 666: O: O1331 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- 667: O: O1334 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 668: O: O1336 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 669: O: O1338 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 670: O: O1340 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 671: O: O1341 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- 672: O: O1344 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 673: O: O1346 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 674: O: O1348 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 675: O: O1349 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- 676: O: O1351 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- 677: O: O1354 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 678: O: O1355 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- 679: O: O1358 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 680: O: O1360 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 681: O: O1362 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 682: O: O1364 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 683: O: O1366 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 684: O: O1367 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- 685: O: O1370 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 686: O: O1371 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- 687: O: O1374 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 688: O: O1376 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 689: O: O1378 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 690: O: O1380 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 691: O: O1381 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- 692: O: O1384 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 693: O: O1386 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 694: O: O1387 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- 695: O: O1389 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- 696: O: O1391 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- 697: O: O1393 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- 698: O: O1396 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 699: O: O1398 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 700: O: O1400 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 701: O: O1401 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- 702: O: O1403 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- 703: O: O1405 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- 704: O: O1408 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 705: O: O1410 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 706: O: O1412 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 707: O: O1413 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- 708: O: O1416 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 709: O: O1417 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- 710: O: O1420 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 711: O: O1422 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 712: O: O1424 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 713: O: O1426 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 714: O: O1428 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 715: O: O1430 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 716: O: O1432 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 717: O: O1434 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 718: O: O1436 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 719: O: O1438 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 720: O: O1439 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- 721: O: O1442 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 722: O: O1444 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 723: O: O1446 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 724: O: O1448 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 725: O: O1449 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- 726: O: O1451 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- 727: O: O1454 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 728: O: O1456 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 729: O: O1458 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 730: O: O1459 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- 731: O: O1462 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 732: O: O1464 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 733: O: O1466 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 734: O: O1467 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- 735: O: O1470 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 736: O: O1472 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 737: O: O1474 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 738: O: O1475 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- 739: O: O1477 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- 740: O: O1480 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 741: O: O1482 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 742: O: O1483 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- 743: O: O1485 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- 744: O: O1487 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- 745: O: O1489 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- 746: O: O1492 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 747: O: O1494 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 748: O: O1496 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 749: O: O1498 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 750: O: O1500 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 751: O: O1502 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 752: O: O1503 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- 753: O: O1506 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 754: O: O1507 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- 755: O: O1510 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 756: O: O1512 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 757: O: O1513 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- 758: O: O1516 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 759: O: O1517 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- 760: O: O1520 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 761: O: O1522 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 762: O: O1523 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- 763: O: O1525 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- 764: O: O1528 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 765: O: O1530 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 766: O: O1532 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 767: O: O1533 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- 768: O: O1536 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 769: O: O1538 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 770: O: O1539 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- 771: O: O1541 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- 772: O: O1544 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 773: O: O1546 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 774: O: O1547 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- 775: O: O1550 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 776: O: O1551 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- 777: O: O1553 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- 778: O: O1556 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 779: O: O1558 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 780: O: O1560 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 781: O: O1561 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- 782: O: O1564 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 783: O: O1565 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- 784: O: O1567 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- 785: O: O1569 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- 786: O: O1572 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- sleeping...
- 787: O: O1573 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- 788: O: O1576 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 789: O: O1578 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 790: O: O1579 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- 791: O: O1582 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 792: O: O1584 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 793: O: O1586 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 794: O: O1588 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 795: O: O1590 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 796: O: O1592 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 797: O: O1594 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 798: O: O1596 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 799: O: O1597 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- 800: O: O1600 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 801: O: O1602 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 802: O: O1604 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 803: O: O1605 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- 804: O: O1607 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- 805: O: O1609 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- 806: O: O1612 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 807: O: O1613 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- 808: O: O1616 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 809: O: O1618 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 810: O: O1620 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 811: O: O1621 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- 812: O: O1624 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 813: O: O1626 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- sleeping...
- 814: O: O1627 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- 815: O: O1630 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 816: O: O1631 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- 817: O: O1633 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- 818: O: O1635 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- 819: O: O1638 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 820: O: O1640 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 821: O: O1641 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- 822: O: O1643 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- 823: O: O1645 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- 824: O: O1647 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- 825: O: O1650 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 826: O: O1651 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- 827: O: O1654 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 828: O: O1656 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 829: O: O1657 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- 830: O: O1660 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 831: O: O1662 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 832: O: O1664 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 833: O: O1665 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- 834: O: O1668 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 835: O: O1669 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- 836: O: O1672 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 837: O: O1674 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 838: O: O1676 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 839: O: O1677 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- 840: O: O1680 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 841: O: O1682 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 842: O: O1684 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 843: O: O1685 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- 844: O: O1688 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 845: O: O1689 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- 846: O: O1692 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 847: O: O1694 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 848: O: O1695 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- 849: O: O1698 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 850: O: O1699 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- 851: O: O1702 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 852: O: O1704 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 853: O: O1706 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 854: O: O1708 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 855: O: O1709 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- 856: O: O1712 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 857: O: O1714 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 858: O: O1715 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- 859: O: O1718 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 860: O: O1720 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 861: O: O1722 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 862: O: O1724 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 863: O: O1725 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- 864: O: O1728 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 865: O: O1730 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 866: O: O1732 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 867: O: O1733 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- 868: O: O1736 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 869: O: O1738 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 870: O: O1740 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 871: O: O1741 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- 872: O: O1744 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 873: O: O1746 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 874: O: O1747 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- 875: O: O1750 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 876: O: O1752 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 877: O: O1753 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- 878: O: O1756 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 879: O: O1757 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- 880: O: O1759 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- 881: O: O1762 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 882: O: O1764 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 883: O: O1765 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- 884: O: O1767 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- 885: O: O1770 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 886: O: O1772 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 887: O: O1773 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- 888: O: O1776 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 889: O: O1777 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- 890: O: O1779 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- 891: O: O1782 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 892: O: O1783 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- 893: O: O1786 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 894: O: O1788 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 895: O: O1790 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 896: O: O1791 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- 897: O: O1794 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 898: O: O1795 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- 899: O: O1797 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- 900: O: O1800 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 901: O: O1802 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 902: O: O1804 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 903: O: O1805 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- 904: O: O1808 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 905: O: O1809 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- 906: O: O1812 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 907: O: O1813 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- 908: O: O1816 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 909: O: O1818 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 910: O: O1819 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- 911: O: O1822 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 912: O: O1823 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- 913: O: O1826 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 914: O: O1828 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 915: O: O1830 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 916: O: O1832 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 917: O: O1834 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 918: O: O1836 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 919: O: O1838 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 920: O: O1840 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 921: O: O1842 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 922: O: O1844 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 923: O: O1846 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 924: O: O1848 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 925: O: O1849 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- 926: O: O1852 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 927: O: O1854 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 928: O: O1855 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- 929: O: O1857 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- 930: O: O1859 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- 931: O: O1862 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 932: O: O1864 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 933: O: O1866 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- 934: O: O1868 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 935: O: O1870 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 936: O: O1872 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 937: O: O1874 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 938: O: O1876 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- 939: O: O1878 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 940: O: O1879 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- 941: O: O1882 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- 942: O: O1884 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 943: O: O1885 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- 944: O: O1887 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- 945: O: O1890 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- 946: O: O1891 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- 947: O: O1894 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- 948: O: O1895 (predict-yes)
- I see 1 and I'm going to do: predict-yes
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- 949: O: O1898 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- 950: O: O1900 (predict-no)
- I see 1 and I'm going to do: predict-no
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- --- Input Phase ---
- =>WM: (13382: I2 ^dir R)
- =>WM: (13381: I2 ^reward 1)
- =>WM: (13380: I2 ^see 0)
- =>WM: (13379: N950 ^status complete)
- <=WM: (13368: I2 ^dir U)
- <=WM: (13367: I2 ^reward 1)
- <=WM: (13366: I2 ^see 0)
- =>WM: (13383: I2 ^level-1 R1-root)
- <=WM: (13369: I2 ^level-1 R1-root)
- --- END Input Phase ---
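The =>WM/<=WM lines in the input phase above record working-memory changes: each change gets a fresh timetag, =>WM adds an (identifier ^attribute value) element, and <=WM removes one. A minimal sketch of that bookkeeping, with illustrative names rather than Soar's internal API:

```python
# Sketch (assumed structure, not Soar's code) of timetagged WM changes.
class WorkingMemory:
    def __init__(self, timetag=0):
        self.timetag = timetag   # last timetag issued
        self.elements = {}       # timetag -> (id, attr, value)

    def add(self, ident, attr, value):
        self.timetag += 1
        self.elements[self.timetag] = (ident, attr, value)
        print(f"=>WM: ({self.timetag}: {ident} ^{attr} {value})")
        return self.timetag

    def remove(self, timetag):
        ident, attr, value = self.elements.pop(timetag)
        print(f"<=WM: ({timetag}: {ident} ^{attr} {value})")

wm = WorkingMemory(timetag=13381)
t = wm.add("I2", "dir", "R")   # prints: =>WM: (13382: I2 ^dir R)
wm.remove(t)                   # prints: <=WM: (13382: I2 ^dir R)
```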
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O1899 = -0.3011268063455669)
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O1900 = 0.7427516277634807)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R954 ^value 1 +)
- (R1 ^reward R954 +)
- Firing propose*predict-yes
- -->
- (O1901 ^name predict-yes +)
- (S1 ^operator O1901 +)
- Firing propose*predict-no
- -->
- (O1902 ^name predict-no +)
- (S1 ^operator O1902 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1900 = 0.2572472160770417)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1899 = 0.736829027581098)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O1900 ^name predict-no +)
- (S1 ^operator O1900 +)
- Retracting propose*predict-yes
- -->
- (O1899 ^name predict-yes +)
- (S1 ^operator O1899 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R953 ^value 1 +)
- (R1 ^reward R953 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1900 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1899 = 0.)
- =>WM: (13390: S1 ^operator O1902 +)
- =>WM: (13389: S1 ^operator O1901 +)
- =>WM: (13388: I3 ^dir R)
- =>WM: (13387: O1902 ^name predict-no)
- =>WM: (13386: O1901 ^name predict-yes)
- =>WM: (13385: R954 ^value 1)
- =>WM: (13384: R1 ^reward R954)
- <=WM: (13375: S1 ^operator O1899 +)
- <=WM: (13376: S1 ^operator O1900 +)
- <=WM: (13377: S1 ^operator O1900)
- <=WM: (13360: I3 ^dir U)
- <=WM: (13371: R1 ^reward R953)
- <=WM: (13374: O1900 ^name predict-no)
- <=WM: (13373: O1899 ^name predict-yes)
- <=WM: (13372: R953 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O1901 = -0.3011268063455669)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1901 = 0.736829027581098)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O1902 = 0.7427516277634807)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1902 = 0.2572472160770417)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1900 = 0.2572472160770417)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O1900 = 0.7427516277634807)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1899 = 0.736829027581098)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O1899 = -0.3011268063455669)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (13391: S1 ^operator O1902)
- 951: O: O1902 (predict-no)
- --- END Decision Phase ---
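The decision phase above resolves the numeric indifferent preferences contributed by the fired RL rules: values for the same operator are summed, and here O1902 (predict-no, total ≈ 1.0) beats O1901 (predict-yes, total ≈ 0.44). A minimal greedy sketch using the values from this trace; Soar's actual exploration policy (e.g. epsilon-greedy or Boltzmann) may select differently:

```python
# Greedy resolution of numeric indifferent preferences (a sketch, not
# Soar's decision procedure): sum values per operator, pick the max,
# breaking exact ties at random.
from collections import defaultdict
import random

def choose_operator(preferences):
    """preferences: list of (operator, value) pairs from fired RL rules."""
    totals = defaultdict(float)
    for op, value in preferences:
        totals[op] += value
    best = max(totals.values())
    candidates = [op for op, v in totals.items() if v == best]
    return random.choice(candidates)

# Values taken from the proposal phase above.
prefs = [
    ("O1901", -0.3011268063455669),  # rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
    ("O1901", 0.736829027581098),    # rl*prefer*rvt*predict-yes*H0*3
    ("O1902", 0.7427516277634807),   # rl*prefer*rvt*predict-no*H0*4*v1*H1*36
    ("O1902", 0.2572472160770417),   # rl*prefer*rvt*predict-no*H0*4
]
print(choose_operator(prefs))  # → O1902, matching "951: O: O1902 (predict-no)"
```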
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N951 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N950 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13392: I3 ^predict-no N951)
- <=WM: (13379: N950 ^status complete)
- <=WM: (13378: I3 ^predict-no N950)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (13396: I2 ^dir L)
- =>WM: (13395: I2 ^reward 1)
- =>WM: (13394: I2 ^see 0)
- =>WM: (13393: N951 ^status complete)
- <=WM: (13382: I2 ^dir R)
- <=WM: (13381: I2 ^reward 1)
- <=WM: (13380: I2 ^see 0)
- =>WM: (13397: I2 ^level-1 R0-root)
- <=WM: (13383: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O1902 = 0.04178081990804111)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O1901 = 0.5681127864180794)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R955 ^value 1 +)
- (R1 ^reward R955 +)
- Firing propose*predict-yes
- -->
- (O1903 ^name predict-yes +)
- (S1 ^operator O1903 +)
- Firing propose*predict-no
- -->
- (O1904 ^name predict-no +)
- (S1 ^operator O1904 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1902 = 0.3289450941277776)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1901 = 0.43188926143453)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O1902 ^name predict-no +)
- (S1 ^operator O1902 +)
- Retracting propose*predict-yes
- -->
- (O1901 ^name predict-yes +)
- (S1 ^operator O1901 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R954 ^value 1 +)
- (R1 ^reward R954 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1902 = 0.2572472160770417)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O1902 = 0.7427516277634807)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1901 = 0.736829027581098)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O1901 = -0.3011268063455669)
- =>WM: (13404: S1 ^operator O1904 +)
- =>WM: (13403: S1 ^operator O1903 +)
- =>WM: (13402: I3 ^dir L)
- =>WM: (13401: O1904 ^name predict-no)
- =>WM: (13400: O1903 ^name predict-yes)
- =>WM: (13399: R955 ^value 1)
- =>WM: (13398: R1 ^reward R955)
- <=WM: (13389: S1 ^operator O1901 +)
- <=WM: (13390: S1 ^operator O1902 +)
- <=WM: (13391: S1 ^operator O1902)
- <=WM: (13388: I3 ^dir R)
- <=WM: (13384: R1 ^reward R954)
- <=WM: (13387: O1902 ^name predict-no)
- <=WM: (13386: O1901 ^name predict-yes)
- <=WM: (13385: R954 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1903 = 0.43188926143453)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O1903 = 0.5681127864180794)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1904 = 0.3289450941277776)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O1904 = 0.04178081990804111)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1902 = 0.3289450941277776)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O1902 = 0.04178081990804111)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1901 = 0.43188926143453)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O1901 = 0.5681127864180794)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*4 0.586137 -0.32889 0.257247 -> 0.586137 -0.32889 0.257247(R,m,v=1,0.854545,0.125055)
- RL update rl*prefer*rvt*predict-no*H0*4*v1*H1*36 0.413862 0.32889 0.742752 -> 0.413862 0.32889 0.742752(R,m,v=1,1,0)
- =>WM: (13405: S1 ^operator O1903)
- 952: O: O1903 (predict-yes)
- --- END Decision Phase ---
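The two "RL update" lines above adjust the values of the rules that supported the selected operator. A minimal sketch of the underlying idea, a one-step temporal-difference update whose error is shared among the supporting rules; the alpha/gamma values here are illustrative, not the parameters of this run:

```python
# One-step TD update split across supporting RL rules (a sketch under
# assumed alpha/gamma, not the exact arithmetic behind the trace above).
def td_update(rule_values, reward, next_q, alpha=0.1, gamma=0.9):
    q = sum(rule_values)                      # Q(s,a) = sum of rule values
    delta = reward + gamma * next_q - q       # TD error
    share = alpha * delta / len(rule_values)  # split among the fired rules
    return [v + share for v in rule_values]

# When the summed value already matches the return, nothing changes:
print(td_update([0.5, 0.5], reward=1.0, next_q=0.0))  # → [0.5, 0.5]
```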
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N952 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N951 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13406: I3 ^predict-yes N952)
- <=WM: (13393: N951 ^status complete)
- <=WM: (13392: I3 ^predict-no N951)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (13410: I2 ^dir U)
- =>WM: (13409: I2 ^reward 1)
- =>WM: (13408: I2 ^see 1)
- =>WM: (13407: N952 ^status complete)
- <=WM: (13396: I2 ^dir L)
- <=WM: (13395: I2 ^reward 1)
- <=WM: (13394: I2 ^see 0)
- =>WM: (13411: I2 ^level-1 L1-root)
- <=WM: (13397: I2 ^level-1 R0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R956 ^value 1 +)
- (R1 ^reward R956 +)
- Firing propose*predict-yes
- -->
- (O1905 ^name predict-yes +)
- (S1 ^operator O1905 +)
- Firing propose*predict-no
- -->
- (O1906 ^name predict-no +)
- (S1 ^operator O1906 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1904 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1903 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O1904 ^name predict-no +)
- (S1 ^operator O1904 +)
- Retracting propose*predict-yes
- -->
- (O1903 ^name predict-yes +)
- (S1 ^operator O1903 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R955 ^value 1 +)
- (R1 ^reward R955 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O1904 = 0.04178081990804111)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1904 = 0.3289450941277776)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O1903 = 0.5681127864180794)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1903 = 0.43188926143453)
- =>WM: (13419: S1 ^operator O1906 +)
- =>WM: (13418: S1 ^operator O1905 +)
- =>WM: (13417: I3 ^dir U)
- =>WM: (13416: O1906 ^name predict-no)
- =>WM: (13415: O1905 ^name predict-yes)
- =>WM: (13414: R956 ^value 1)
- =>WM: (13413: R1 ^reward R956)
- =>WM: (13412: I3 ^see 1)
- <=WM: (13403: S1 ^operator O1903 +)
- <=WM: (13405: S1 ^operator O1903)
- <=WM: (13404: S1 ^operator O1904 +)
- <=WM: (13402: I3 ^dir L)
- <=WM: (13398: R1 ^reward R955)
- <=WM: (13370: I3 ^see 0)
- <=WM: (13401: O1904 ^name predict-no)
- <=WM: (13400: O1903 ^name predict-yes)
- <=WM: (13399: R955 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1905 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1906 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1904 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1903 = 0.)
- --- END Proposal Phase ---
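During the proposal phase each `rl*prefer*...` rule asserts a numeric-indifferent preference such as `(S1 ^operator O1906 = 0.9999999999999999)`; at decision time the contributions are summed per candidate operator and, absent exploration, the highest total wins (here O1906, predict-no). A greedy sketch of that selection step, assuming pure exploitation (the agent's actual exploration policy is not visible in this trace):

```python
def select_operator(preferences):
    """Sum the numeric-indifferent preferences contributed to each
    candidate operator and return the operator with the highest total.
    preferences: iterable of (operator-id, value) pairs."""
    totals = {}
    for op, value in preferences:
        totals[op] = totals.get(op, 0.0) + value
    return max(totals, key=totals.get)
```

With the values from decision 953 above, `select_operator([("O1905", 0.0), ("O1906", 0.9999999999999999)])` returns `"O1906"`, matching the trace.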
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*5 0.683775 -0.251886 0.431889 -> 0.683775 -0.251886 0.431889(R,m,v=1,0.919753,0.0742658)
- RL update rl*prefer*rvt*predict-yes*H0*5*v1*H1*30 0.316226 0.251886 0.568113 -> 0.316226 0.251886 0.568112(R,m,v=1,1,0)
- =>WM: (13420: S1 ^operator O1906)
- 953: O: O1906 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N953 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N952 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13421: I3 ^predict-no N953)
- <=WM: (13407: N952 ^status complete)
- <=WM: (13406: I3 ^predict-yes N952)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
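The `ENV:` lines expose the flip environment's transition and observation function directly. A sketch reconstructing it as a lookup table, limited to the (state, direction) pairs actually exercised in this stretch of the log:

```python
# (state, direction) -> (next state, observation), read off the
# "ENV: (next state, see, prediction correct?)" lines in this chunk.
TRANSITIONS = {
    ("State-A", "U"): ("State-A", 0),
    ("State-A", "R"): ("State-B", 1),
    ("State-B", "U"): ("State-B", 0),
    ("State-B", "R"): ("State-B", 0),
    ("State-B", "L"): ("State-A", 1),
}

def step(state, direction):
    """Mimic the ENV lines: return (next_state, see)."""
    return TRANSITIONS[(state, direction)]
```

Other (state, direction) pairs may exist in the full environment; only the ones observed here are listed.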
- --- Input Phase ---
- =>WM: (13425: I2 ^dir R)
- =>WM: (13424: I2 ^reward 1)
- =>WM: (13423: I2 ^see 0)
- =>WM: (13422: N953 ^status complete)
- <=WM: (13410: I2 ^dir U)
- <=WM: (13409: I2 ^reward 1)
- <=WM: (13408: I2 ^see 1)
- =>WM: (13426: I2 ^level-1 L1-root)
- <=WM: (13411: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*39
- -->
- (S1 ^operator O1906 = -0.1377248055371832)
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*40
- -->
- (S1 ^operator O1905 = 0.2631666904115852)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R957 ^value 1 +)
- (R1 ^reward R957 +)
- Firing propose*predict-yes
- -->
- (O1907 ^name predict-yes +)
- (S1 ^operator O1907 +)
- Firing propose*predict-no
- -->
- (O1908 ^name predict-no +)
- (S1 ^operator O1908 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1906 = 0.2572473895009633)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1905 = 0.736829027581098)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O1906 ^name predict-no +)
- (S1 ^operator O1906 +)
- Retracting propose*predict-yes
- -->
- (O1905 ^name predict-yes +)
- (S1 ^operator O1905 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R956 ^value 1 +)
- (R1 ^reward R956 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1906 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1905 = 0.)
- =>WM: (13434: S1 ^operator O1908 +)
- =>WM: (13433: S1 ^operator O1907 +)
- =>WM: (13432: I3 ^dir R)
- =>WM: (13431: O1908 ^name predict-no)
- =>WM: (13430: O1907 ^name predict-yes)
- =>WM: (13429: R957 ^value 1)
- =>WM: (13428: R1 ^reward R957)
- =>WM: (13427: I3 ^see 0)
- <=WM: (13418: S1 ^operator O1905 +)
- <=WM: (13419: S1 ^operator O1906 +)
- <=WM: (13420: S1 ^operator O1906)
- <=WM: (13417: I3 ^dir U)
- <=WM: (13413: R1 ^reward R956)
- <=WM: (13412: I3 ^see 1)
- <=WM: (13416: O1906 ^name predict-no)
- <=WM: (13415: O1905 ^name predict-yes)
- <=WM: (13414: R956 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*40
- -->
- (S1 ^operator O1907 = 0.2631666904115852)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1907 = 0.736829027581098)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*39
- -->
- (S1 ^operator O1908 = -0.1377248055371832)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1908 = 0.2572473895009633)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1906 = 0.2572473895009633)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*39
- -->
- (S1 ^operator O1906 = -0.1377248055371832)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1905 = 0.736829027581098)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*40
- -->
- (S1 ^operator O1905 = 0.2631666904115852)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (13435: S1 ^operator O1907)
- 954: O: O1907 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N954 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N953 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13436: I3 ^predict-yes N954)
- <=WM: (13422: N953 ^status complete)
- <=WM: (13421: I3 ^predict-no N953)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (13440: I2 ^dir U)
- =>WM: (13439: I2 ^reward 1)
- =>WM: (13438: I2 ^see 1)
- =>WM: (13437: N954 ^status complete)
- <=WM: (13425: I2 ^dir R)
- <=WM: (13424: I2 ^reward 1)
- <=WM: (13423: I2 ^see 0)
- =>WM: (13441: I2 ^level-1 R1-root)
- <=WM: (13426: I2 ^level-1 L1-root)
- --- END Input Phase ---
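Each input phase is a working-memory delta: `=>WM` lines add timetagged elements and `<=WM` lines remove the stale ones (in the phase above, `^dir R` and `^see 0` are replaced by `^dir U` and `^see 1`). Treating working memory as a set of (identifier, attribute, value) triples, the delta can be sketched as:

```python
def apply_wm_delta(wm, adds, removes):
    """Apply one input phase to working memory, modeled as a set of
    (identifier, attribute, value) triples: drop the <=WM elements,
    then insert the =>WM elements. Timetags are omitted here."""
    return (wm - set(removes)) | set(adds)
```

Mirroring the phase above: removing `("I2", "dir", "R")` and `("I2", "see", 0)` while adding `("I2", "dir", "U")` and `("I2", "see", 1)` leaves the unchanged `("I2", "reward", 1)` element in place.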
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R958 ^value 1 +)
- (R1 ^reward R958 +)
- Firing propose*predict-yes
- -->
- (O1909 ^name predict-yes +)
- (S1 ^operator O1909 +)
- Firing propose*predict-no
- -->
- (O1910 ^name predict-no +)
- (S1 ^operator O1910 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1908 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1907 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O1908 ^name predict-no +)
- (S1 ^operator O1908 +)
- Retracting propose*predict-yes
- -->
- (O1907 ^name predict-yes +)
- (S1 ^operator O1907 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R957 ^value 1 +)
- (R1 ^reward R957 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1908 = 0.2572473895009633)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*39
- -->
- (S1 ^operator O1908 = -0.1377248055371832)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1907 = 0.736829027581098)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*40
- -->
- (S1 ^operator O1907 = 0.2631666904115852)
- =>WM: (13449: S1 ^operator O1910 +)
- =>WM: (13448: S1 ^operator O1909 +)
- =>WM: (13447: I3 ^dir U)
- =>WM: (13446: O1910 ^name predict-no)
- =>WM: (13445: O1909 ^name predict-yes)
- =>WM: (13444: R958 ^value 1)
- =>WM: (13443: R1 ^reward R958)
- =>WM: (13442: I3 ^see 1)
- <=WM: (13433: S1 ^operator O1907 +)
- <=WM: (13435: S1 ^operator O1907)
- <=WM: (13434: S1 ^operator O1908 +)
- <=WM: (13432: I3 ^dir R)
- <=WM: (13428: R1 ^reward R957)
- <=WM: (13427: I3 ^see 0)
- <=WM: (13431: O1908 ^name predict-no)
- <=WM: (13430: O1907 ^name predict-yes)
- <=WM: (13429: R957 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1909 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1910 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1908 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1907 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*3 0.748236 -0.0114073 0.736829 -> 0.748237 -0.0114068 0.73683(R,m,v=1,0.892405,0.0966298)
- RL update rl*prefer*rvt*predict-yes*H0*3*v1*H1*40 0.251763 0.0114042 0.263167 -> 0.251763 0.0114046 0.263167(R,m,v=1,1,0)
- =>WM: (13450: S1 ^operator O1910)
- 955: O: O1910 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N955 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N954 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13451: I3 ^predict-no N955)
- <=WM: (13437: N954 ^status complete)
- <=WM: (13436: I3 ^predict-yes N954)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (13455: I2 ^dir R)
- =>WM: (13454: I2 ^reward 1)
- =>WM: (13453: I2 ^see 0)
- =>WM: (13452: N955 ^status complete)
- <=WM: (13440: I2 ^dir U)
- <=WM: (13439: I2 ^reward 1)
- <=WM: (13438: I2 ^see 1)
- =>WM: (13456: I2 ^level-1 R1-root)
- <=WM: (13441: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O1909 = -0.3011268063455669)
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O1910 = 0.7427518011874024)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R959 ^value 1 +)
- (R1 ^reward R959 +)
- Firing propose*predict-yes
- -->
- (O1911 ^name predict-yes +)
- (S1 ^operator O1911 +)
- Firing propose*predict-no
- -->
- (O1912 ^name predict-no +)
- (S1 ^operator O1912 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1910 = 0.2572473895009633)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1909 = 0.7368296698821956)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O1910 ^name predict-no +)
- (S1 ^operator O1910 +)
- Retracting propose*predict-yes
- -->
- (O1909 ^name predict-yes +)
- (S1 ^operator O1909 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R958 ^value 1 +)
- (R1 ^reward R958 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1910 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1909 = 0.)
- =>WM: (13464: S1 ^operator O1912 +)
- =>WM: (13463: S1 ^operator O1911 +)
- =>WM: (13462: I3 ^dir R)
- =>WM: (13461: O1912 ^name predict-no)
- =>WM: (13460: O1911 ^name predict-yes)
- =>WM: (13459: R959 ^value 1)
- =>WM: (13458: R1 ^reward R959)
- =>WM: (13457: I3 ^see 0)
- <=WM: (13448: S1 ^operator O1909 +)
- <=WM: (13449: S1 ^operator O1910 +)
- <=WM: (13450: S1 ^operator O1910)
- <=WM: (13447: I3 ^dir U)
- <=WM: (13443: R1 ^reward R958)
- <=WM: (13442: I3 ^see 1)
- <=WM: (13446: O1910 ^name predict-no)
- <=WM: (13445: O1909 ^name predict-yes)
- <=WM: (13444: R958 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O1911 = -0.3011268063455669)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1911 = 0.7368296698821956)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O1912 = 0.7427518011874024)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1912 = 0.2572473895009633)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1910 = 0.2572473895009633)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O1910 = 0.7427518011874024)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1909 = 0.7368296698821956)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O1909 = -0.3011268063455669)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (13465: S1 ^operator O1912)
- 956: O: O1912 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N956 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N955 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13466: I3 ^predict-no N956)
- <=WM: (13452: N955 ^status complete)
- <=WM: (13451: I3 ^predict-no N955)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (13470: I2 ^dir R)
- =>WM: (13469: I2 ^reward 1)
- =>WM: (13468: I2 ^see 0)
- =>WM: (13467: N956 ^status complete)
- <=WM: (13455: I2 ^dir R)
- <=WM: (13454: I2 ^reward 1)
- <=WM: (13453: I2 ^see 0)
- =>WM: (13471: I2 ^level-1 R0-root)
- <=WM: (13456: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O1912 = 0.7427606592568701)
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O1911 = -0.1989581826229297)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R960 ^value 1 +)
- (R1 ^reward R960 +)
- Firing propose*predict-yes
- -->
- (O1913 ^name predict-yes +)
- (S1 ^operator O1913 +)
- Firing propose*predict-no
- -->
- (O1914 ^name predict-no +)
- (S1 ^operator O1914 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1912 = 0.2572473895009633)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1911 = 0.7368296698821956)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O1912 ^name predict-no +)
- (S1 ^operator O1912 +)
- Retracting propose*predict-yes
- -->
- (O1911 ^name predict-yes +)
- (S1 ^operator O1911 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R959 ^value 1 +)
- (R1 ^reward R959 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1912 = 0.2572473895009633)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O1912 = 0.7427518011874024)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1911 = 0.7368296698821956)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O1911 = -0.3011268063455669)
- =>WM: (13477: S1 ^operator O1914 +)
- =>WM: (13476: S1 ^operator O1913 +)
- =>WM: (13475: O1914 ^name predict-no)
- =>WM: (13474: O1913 ^name predict-yes)
- =>WM: (13473: R960 ^value 1)
- =>WM: (13472: R1 ^reward R960)
- <=WM: (13463: S1 ^operator O1911 +)
- <=WM: (13464: S1 ^operator O1912 +)
- <=WM: (13465: S1 ^operator O1912)
- <=WM: (13458: R1 ^reward R959)
- <=WM: (13461: O1912 ^name predict-no)
- <=WM: (13460: O1911 ^name predict-yes)
- <=WM: (13459: R959 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1913 = 0.7368296698821956)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O1913 = -0.1989581826229297)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1914 = 0.2572473895009633)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O1914 = 0.7427606592568701)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1912 = 0.2572473895009633)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O1912 = 0.7427606592568701)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1911 = 0.7368296698821956)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O1911 = -0.1989581826229297)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*4 0.586137 -0.32889 0.257247 -> 0.586137 -0.32889 0.257248(R,m,v=1,0.855422,0.124425)
- RL update rl*prefer*rvt*predict-no*H0*4*v1*H1*36 0.413862 0.32889 0.742752 -> 0.413862 0.32889 0.742752(R,m,v=1,1,0)
- =>WM: (13478: S1 ^operator O1914)
- 957: O: O1914 (predict-no)
- --- END Decision Phase ---
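The `RL update` trace lines follow a fixed shape: rule name, three numbers, `->`, three numbers, then a parenthesized tail such as `(R,m,v=1,...)` whose fields are not documented in this log. A sketch of a parser for the numeric part, which also lets one check that each triple's third number equals the sum of the first two:

```python
import re

# rule name, then two (value, delta, total) triples separated by "->";
# the trailing "(R,m,v=...)" tail is left unparsed.
RL_LINE = re.compile(
    r"RL update (\S+) ([-\d.e]+) ([-\d.e]+) ([-\d.e]+)"
    r" -> ([-\d.e]+) ([-\d.e]+) ([-\d.e]+)"
)

def parse_rl_update(line):
    """Split an 'RL update' trace line into (rule, before, after),
    each of before/after being a (value, delta, total) float triple."""
    m = RL_LINE.match(line)
    rule = m.group(1)
    nums = [float(g) for g in m.groups()[1:]]
    return rule, tuple(nums[:3]), tuple(nums[3:])
```

Applied to the update line above, the parser recovers the rule name and both triples, and the before-triple satisfies value + delta = total to within print precision.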
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N957 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N956 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13479: I3 ^predict-no N957)
- <=WM: (13467: N956 ^status complete)
- <=WM: (13466: I3 ^predict-no N956)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (13483: I2 ^dir L)
- =>WM: (13482: I2 ^reward 1)
- =>WM: (13481: I2 ^see 0)
- =>WM: (13480: N957 ^status complete)
- <=WM: (13470: I2 ^dir R)
- <=WM: (13469: I2 ^reward 1)
- <=WM: (13468: I2 ^see 0)
- =>WM: (13484: I2 ^level-1 R0-root)
- <=WM: (13471: I2 ^level-1 R0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O1914 = 0.04178081990804111)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O1913 = 0.5681124792401879)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R961 ^value 1 +)
- (R1 ^reward R961 +)
- Firing propose*predict-yes
- -->
- (O1915 ^name predict-yes +)
- (S1 ^operator O1915 +)
- Firing propose*predict-no
- -->
- (O1916 ^name predict-no +)
- (S1 ^operator O1916 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1914 = 0.3289450941277776)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1913 = 0.4318889542566386)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O1914 ^name predict-no +)
- (S1 ^operator O1914 +)
- Retracting propose*predict-yes
- -->
- (O1913 ^name predict-yes +)
- (S1 ^operator O1913 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R960 ^value 1 +)
- (R1 ^reward R960 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O1914 = 0.7427606592568701)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1914 = 0.2572475108977085)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O1913 = -0.1989581826229297)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1913 = 0.7368296698821956)
- =>WM: (13491: S1 ^operator O1916 +)
- =>WM: (13490: S1 ^operator O1915 +)
- =>WM: (13489: I3 ^dir L)
- =>WM: (13488: O1916 ^name predict-no)
- =>WM: (13487: O1915 ^name predict-yes)
- =>WM: (13486: R961 ^value 1)
- =>WM: (13485: R1 ^reward R961)
- <=WM: (13476: S1 ^operator O1913 +)
- <=WM: (13477: S1 ^operator O1914 +)
- <=WM: (13478: S1 ^operator O1914)
- <=WM: (13462: I3 ^dir R)
- <=WM: (13472: R1 ^reward R960)
- <=WM: (13475: O1914 ^name predict-no)
- <=WM: (13474: O1913 ^name predict-yes)
- <=WM: (13473: R960 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O1915 = 0.5681124792401879)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1915 = 0.4318889542566386)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O1916 = 0.04178081990804111)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1916 = 0.3289450941277776)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1914 = 0.3289450941277776)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O1914 = 0.04178081990804111)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1913 = 0.4318889542566386)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O1913 = 0.5681124792401879)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*4 0.586137 -0.32889 0.257248 -> 0.586136 -0.32889 0.257246(R,m,v=1,0.856287,0.123801)
- RL update rl*prefer*rvt*predict-no*H0*4*v1*H1*41 0.413869 0.328891 0.742761 -> 0.413868 0.328891 0.742759(R,m,v=1,1,0)
- =>WM: (13492: S1 ^operator O1915)
- 958: O: O1915 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N958 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N957 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13493: I3 ^predict-yes N958)
- <=WM: (13480: N957 ^status complete)
- <=WM: (13479: I3 ^predict-no N957)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (13497: I2 ^dir U)
- =>WM: (13496: I2 ^reward 1)
- =>WM: (13495: I2 ^see 1)
- =>WM: (13494: N958 ^status complete)
- <=WM: (13483: I2 ^dir L)
- <=WM: (13482: I2 ^reward 1)
- <=WM: (13481: I2 ^see 0)
- =>WM: (13498: I2 ^level-1 L1-root)
- <=WM: (13484: I2 ^level-1 R0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R962 ^value 1 +)
- (R1 ^reward R962 +)
- Firing propose*predict-yes
- -->
- (O1917 ^name predict-yes +)
- (S1 ^operator O1917 +)
- Firing propose*predict-no
- -->
- (O1918 ^name predict-no +)
- (S1 ^operator O1918 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1916 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1915 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O1916 ^name predict-no +)
- (S1 ^operator O1916 +)
- Retracting propose*predict-yes
- -->
- (O1915 ^name predict-yes +)
- (S1 ^operator O1915 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R961 ^value 1 +)
- (R1 ^reward R961 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1916 = 0.3289450941277776)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O1916 = 0.04178081990804111)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1915 = 0.4318889542566386)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O1915 = 0.5681124792401879)
- =>WM: (13506: S1 ^operator O1918 +)
- =>WM: (13505: S1 ^operator O1917 +)
- =>WM: (13504: I3 ^dir U)
- =>WM: (13503: O1918 ^name predict-no)
- =>WM: (13502: O1917 ^name predict-yes)
- =>WM: (13501: R962 ^value 1)
- =>WM: (13500: R1 ^reward R962)
- =>WM: (13499: I3 ^see 1)
- <=WM: (13490: S1 ^operator O1915 +)
- <=WM: (13492: S1 ^operator O1915)
- <=WM: (13491: S1 ^operator O1916 +)
- <=WM: (13489: I3 ^dir L)
- <=WM: (13485: R1 ^reward R961)
- <=WM: (13457: I3 ^see 0)
- <=WM: (13488: O1916 ^name predict-no)
- <=WM: (13487: O1915 ^name predict-yes)
- <=WM: (13486: R961 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1917 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1918 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1916 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1915 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*5 0.683775 -0.251886 0.431889 -> 0.683775 -0.251886 0.431889(R,m,v=1,0.920245,0.0738469)
- RL update rl*prefer*rvt*predict-yes*H0*5*v1*H1*30 0.316226 0.251886 0.568112 -> 0.316226 0.251886 0.568112(R,m,v=1,1,0)
- =>WM: (13507: S1 ^operator O1918)
- 959: O: O1918 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N959 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N958 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13508: I3 ^predict-no N959)
- <=WM: (13494: N958 ^status complete)
- <=WM: (13493: I3 ^predict-yes N958)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (13512: I2 ^dir L)
- =>WM: (13511: I2 ^reward 1)
- =>WM: (13510: I2 ^see 0)
- =>WM: (13509: N959 ^status complete)
- <=WM: (13497: I2 ^dir U)
- <=WM: (13496: I2 ^reward 1)
- <=WM: (13495: I2 ^see 1)
- =>WM: (13513: I2 ^level-1 L1-root)
- <=WM: (13498: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O1918 = 0.671051122743914)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O1917 = -0.06092862110810815)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R963 ^value 1 +)
- (R1 ^reward R963 +)
- Firing propose*predict-yes
- -->
- (O1919 ^name predict-yes +)
- (S1 ^operator O1919 +)
- Firing propose*predict-no
- -->
- (O1920 ^name predict-no +)
- (S1 ^operator O1920 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1918 = 0.3289450941277776)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1917 = 0.4318887392321146)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O1918 ^name predict-no +)
- (S1 ^operator O1918 +)
- Retracting propose*predict-yes
- -->
- (O1917 ^name predict-yes +)
- (S1 ^operator O1917 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R962 ^value 1 +)
- (R1 ^reward R962 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1918 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1917 = 0.)
- =>WM: (13521: S1 ^operator O1920 +)
- =>WM: (13520: S1 ^operator O1919 +)
- =>WM: (13519: I3 ^dir L)
- =>WM: (13518: O1920 ^name predict-no)
- =>WM: (13517: O1919 ^name predict-yes)
- =>WM: (13516: R963 ^value 1)
- =>WM: (13515: R1 ^reward R963)
- =>WM: (13514: I3 ^see 0)
- <=WM: (13505: S1 ^operator O1917 +)
- <=WM: (13506: S1 ^operator O1918 +)
- <=WM: (13507: S1 ^operator O1918)
- <=WM: (13504: I3 ^dir U)
- <=WM: (13500: R1 ^reward R962)
- <=WM: (13499: I3 ^see 1)
- <=WM: (13503: O1918 ^name predict-no)
- <=WM: (13502: O1917 ^name predict-yes)
- <=WM: (13501: R962 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O1919 = -0.06092862110810815)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1919 = 0.4318887392321146)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O1920 = 0.671051122743914)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1920 = 0.3289450941277776)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1918 = 0.3289450941277776)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O1918 = 0.671051122743914)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1917 = 0.4318887392321146)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O1917 = -0.06092862110810815)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (13522: S1 ^operator O1920)
- 960: O: O1920 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N960 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N959 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13523: I3 ^predict-no N960)
- <=WM: (13509: N959 ^status complete)
- <=WM: (13508: I3 ^predict-no N959)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (13527: I2 ^dir U)
- =>WM: (13526: I2 ^reward 1)
- =>WM: (13525: I2 ^see 0)
- =>WM: (13524: N960 ^status complete)
- <=WM: (13512: I2 ^dir L)
- <=WM: (13511: I2 ^reward 1)
- <=WM: (13510: I2 ^see 0)
- =>WM: (13528: I2 ^level-1 L0-root)
- <=WM: (13513: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R964 ^value 1 +)
- (R1 ^reward R964 +)
- Firing propose*predict-yes
- -->
- (O1921 ^name predict-yes +)
- (S1 ^operator O1921 +)
- Firing propose*predict-no
- -->
- (O1922 ^name predict-no +)
- (S1 ^operator O1922 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1920 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1919 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O1920 ^name predict-no +)
- (S1 ^operator O1920 +)
- Retracting propose*predict-yes
- -->
- (O1919 ^name predict-yes +)
- (S1 ^operator O1919 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R963 ^value 1 +)
- (R1 ^reward R963 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1920 = 0.3289450941277776)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O1920 = 0.671051122743914)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1919 = 0.4318887392321146)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O1919 = -0.06092862110810815)
- =>WM: (13535: S1 ^operator O1922 +)
- =>WM: (13534: S1 ^operator O1921 +)
- =>WM: (13533: I3 ^dir U)
- =>WM: (13532: O1922 ^name predict-no)
- =>WM: (13531: O1921 ^name predict-yes)
- =>WM: (13530: R964 ^value 1)
- =>WM: (13529: R1 ^reward R964)
- <=WM: (13520: S1 ^operator O1919 +)
- <=WM: (13521: S1 ^operator O1920 +)
- <=WM: (13522: S1 ^operator O1920)
- <=WM: (13519: I3 ^dir L)
- <=WM: (13515: R1 ^reward R963)
- <=WM: (13518: O1920 ^name predict-no)
- <=WM: (13517: O1919 ^name predict-yes)
- <=WM: (13516: R963 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1921 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1922 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1920 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1919 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*6 0.565402 -0.236456 0.328945 -> 0.565403 -0.236457 0.328946(R,m,v=1,0.903226,0.0879765)
- RL update rl*prefer*rvt*predict-no*H0*6*v1*H1*43 0.434591 0.23646 0.671051 -> 0.434592 0.23646 0.671052(R,m,v=1,1,0)
- =>WM: (13536: S1 ^operator O1922)
- 961: O: O1922 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N961 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N960 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13537: I3 ^predict-no N961)
- <=WM: (13524: N960 ^status complete)
- <=WM: (13523: I3 ^predict-no N960)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (13541: I2 ^dir R)
- =>WM: (13540: I2 ^reward 1)
- =>WM: (13539: I2 ^see 0)
- =>WM: (13538: N961 ^status complete)
- <=WM: (13527: I2 ^dir U)
- <=WM: (13526: I2 ^reward 1)
- <=WM: (13525: I2 ^see 0)
- =>WM: (13542: I2 ^level-1 L0-root)
- <=WM: (13528: I2 ^level-1 L0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O1922 = -0.07401383653737587)
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O1921 = 0.2631774632268827)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R965 ^value 1 +)
- (R1 ^reward R965 +)
- Firing propose*predict-yes
- -->
- (O1923 ^name predict-yes +)
- (S1 ^operator O1923 +)
- Firing propose*predict-no
- -->
- (O1924 ^name predict-no +)
- (S1 ^operator O1924 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1922 = 0.2572462853745217)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1921 = 0.7368296698821956)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O1922 ^name predict-no +)
- (S1 ^operator O1922 +)
- Retracting propose*predict-yes
- -->
- (O1921 ^name predict-yes +)
- (S1 ^operator O1921 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R964 ^value 1 +)
- (R1 ^reward R964 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1922 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1921 = 0.)
- =>WM: (13549: S1 ^operator O1924 +)
- =>WM: (13548: S1 ^operator O1923 +)
- =>WM: (13547: I3 ^dir R)
- =>WM: (13546: O1924 ^name predict-no)
- =>WM: (13545: O1923 ^name predict-yes)
- =>WM: (13544: R965 ^value 1)
- =>WM: (13543: R1 ^reward R965)
- <=WM: (13534: S1 ^operator O1921 +)
- <=WM: (13535: S1 ^operator O1922 +)
- <=WM: (13536: S1 ^operator O1922)
- <=WM: (13533: I3 ^dir U)
- <=WM: (13529: R1 ^reward R964)
- <=WM: (13532: O1922 ^name predict-no)
- <=WM: (13531: O1921 ^name predict-yes)
- <=WM: (13530: R964 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O1923 = 0.2631774632268827)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1923 = 0.7368296698821956)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O1924 = -0.07401383653737587)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1924 = 0.2572462853745217)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1922 = 0.2572462853745217)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O1922 = -0.07401383653737587)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1921 = 0.7368296698821956)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O1921 = 0.2631774632268827)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (13550: S1 ^operator O1923)
- 962: O: O1923 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N962 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N961 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13551: I3 ^predict-yes N962)
- <=WM: (13538: N961 ^status complete)
- <=WM: (13537: I3 ^predict-no N961)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (13555: I2 ^dir U)
- =>WM: (13554: I2 ^reward 1)
- =>WM: (13553: I2 ^see 1)
- =>WM: (13552: N962 ^status complete)
- <=WM: (13541: I2 ^dir R)
- <=WM: (13540: I2 ^reward 1)
- <=WM: (13539: I2 ^see 0)
- =>WM: (13556: I2 ^level-1 R1-root)
- <=WM: (13542: I2 ^level-1 L0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R966 ^value 1 +)
- (R1 ^reward R966 +)
- Firing propose*predict-yes
- -->
- (O1925 ^name predict-yes +)
- (S1 ^operator O1925 +)
- Firing propose*predict-no
- -->
- (O1926 ^name predict-no +)
- (S1 ^operator O1926 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1924 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1923 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O1924 ^name predict-no +)
- (S1 ^operator O1924 +)
- Retracting propose*predict-yes
- -->
- (O1923 ^name predict-yes +)
- (S1 ^operator O1923 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R965 ^value 1 +)
- (R1 ^reward R965 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1924 = 0.2572462853745217)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O1924 = -0.07401383653737587)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1923 = 0.7368296698821956)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O1923 = 0.2631774632268827)
- =>WM: (13564: S1 ^operator O1926 +)
- =>WM: (13563: S1 ^operator O1925 +)
- =>WM: (13562: I3 ^dir U)
- =>WM: (13561: O1926 ^name predict-no)
- =>WM: (13560: O1925 ^name predict-yes)
- =>WM: (13559: R966 ^value 1)
- =>WM: (13558: R1 ^reward R966)
- =>WM: (13557: I3 ^see 1)
- <=WM: (13548: S1 ^operator O1923 +)
- <=WM: (13550: S1 ^operator O1923)
- <=WM: (13549: S1 ^operator O1924 +)
- <=WM: (13547: I3 ^dir R)
- <=WM: (13543: R1 ^reward R965)
- <=WM: (13514: I3 ^see 0)
- <=WM: (13546: O1924 ^name predict-no)
- <=WM: (13545: O1923 ^name predict-yes)
- <=WM: (13544: R965 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1925 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1926 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1924 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1923 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*3 0.748237 -0.0114068 0.73683 -> 0.748236 -0.0114076 0.736829(R,m,v=1,0.893082,0.0960911)
- RL update rl*prefer*rvt*predict-yes*H0*3*v1*H1*35 0.251765 0.0114121 0.263177 -> 0.251765 0.0114113 0.263176(R,m,v=1,1,0)
- =>WM: (13565: S1 ^operator O1926)
- 963: O: O1926 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N963 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N962 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13566: I3 ^predict-no N963)
- <=WM: (13552: N962 ^status complete)
- <=WM: (13551: I3 ^predict-yes N962)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (13570: I2 ^dir L)
- =>WM: (13569: I2 ^reward 1)
- =>WM: (13568: I2 ^see 0)
- =>WM: (13567: N963 ^status complete)
- <=WM: (13555: I2 ^dir U)
- <=WM: (13554: I2 ^reward 1)
- <=WM: (13553: I2 ^see 1)
- =>WM: (13571: I2 ^level-1 R1-root)
- <=WM: (13556: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*45
- -->
- (S1 ^operator O1925 = 0.5681037396512361)
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*44
- -->
- (S1 ^operator O1926 = -0.1549421060161498)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R967 ^value 1 +)
- (R1 ^reward R967 +)
- Firing propose*predict-yes
- -->
- (O1927 ^name predict-yes +)
- (S1 ^operator O1927 +)
- Firing propose*predict-no
- -->
- (O1928 ^name predict-no +)
- (S1 ^operator O1928 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1926 = 0.3289456615970239)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1925 = 0.4318887392321146)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O1926 ^name predict-no +)
- (S1 ^operator O1926 +)
- Retracting propose*predict-yes
- -->
- (O1925 ^name predict-yes +)
- (S1 ^operator O1925 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R966 ^value 1 +)
- (R1 ^reward R966 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1926 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1925 = 0.)
- =>WM: (13579: S1 ^operator O1928 +)
- =>WM: (13578: S1 ^operator O1927 +)
- =>WM: (13577: I3 ^dir L)
- =>WM: (13576: O1928 ^name predict-no)
- =>WM: (13575: O1927 ^name predict-yes)
- =>WM: (13574: R967 ^value 1)
- =>WM: (13573: R1 ^reward R967)
- =>WM: (13572: I3 ^see 0)
- <=WM: (13563: S1 ^operator O1925 +)
- <=WM: (13564: S1 ^operator O1926 +)
- <=WM: (13565: S1 ^operator O1926)
- <=WM: (13562: I3 ^dir U)
- <=WM: (13558: R1 ^reward R966)
- <=WM: (13557: I3 ^see 1)
- <=WM: (13561: O1926 ^name predict-no)
- <=WM: (13560: O1925 ^name predict-yes)
- <=WM: (13559: R966 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*45
- -->
- (S1 ^operator O1927 = 0.5681037396512361)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1927 = 0.4318887392321146)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*44
- -->
- (S1 ^operator O1928 = -0.1549421060161498)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1928 = 0.3289456615970239)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1926 = 0.3289456615970239)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*44
- -->
- (S1 ^operator O1926 = -0.1549421060161498)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1925 = 0.4318887392321146)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*45
- -->
- (S1 ^operator O1925 = 0.5681037396512361)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (13580: S1 ^operator O1927)
- 964: O: O1927 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N964 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N963 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13581: I3 ^predict-yes N964)
- <=WM: (13567: N963 ^status complete)
- <=WM: (13566: I3 ^predict-no N963)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (13585: I2 ^dir U)
- =>WM: (13584: I2 ^reward 1)
- =>WM: (13583: I2 ^see 1)
- =>WM: (13582: N964 ^status complete)
- <=WM: (13570: I2 ^dir L)
- <=WM: (13569: I2 ^reward 1)
- <=WM: (13568: I2 ^see 0)
- =>WM: (13586: I2 ^level-1 L1-root)
- <=WM: (13571: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R968 ^value 1 +)
- (R1 ^reward R968 +)
- Firing propose*predict-yes
- -->
- (O1929 ^name predict-yes +)
- (S1 ^operator O1929 +)
- Firing propose*predict-no
- -->
- (O1930 ^name predict-no +)
- (S1 ^operator O1930 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1928 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1927 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O1928 ^name predict-no +)
- (S1 ^operator O1928 +)
- Retracting propose*predict-yes
- -->
- (O1927 ^name predict-yes +)
- (S1 ^operator O1927 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R967 ^value 1 +)
- (R1 ^reward R967 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1928 = 0.3289456615970239)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*44
- -->
- (S1 ^operator O1928 = -0.1549421060161498)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1927 = 0.4318887392321146)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*45
- -->
- (S1 ^operator O1927 = 0.5681037396512361)
- =>WM: (13594: S1 ^operator O1930 +)
- =>WM: (13593: S1 ^operator O1929 +)
- =>WM: (13592: I3 ^dir U)
- =>WM: (13591: O1930 ^name predict-no)
- =>WM: (13590: O1929 ^name predict-yes)
- =>WM: (13589: R968 ^value 1)
- =>WM: (13588: R1 ^reward R968)
- =>WM: (13587: I3 ^see 1)
- <=WM: (13578: S1 ^operator O1927 +)
- <=WM: (13580: S1 ^operator O1927)
- <=WM: (13579: S1 ^operator O1928 +)
- <=WM: (13577: I3 ^dir L)
- <=WM: (13573: R1 ^reward R967)
- <=WM: (13572: I3 ^see 0)
- <=WM: (13576: O1928 ^name predict-no)
- <=WM: (13575: O1927 ^name predict-yes)
- <=WM: (13574: R967 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1929 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1930 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1928 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1927 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*5 0.683775 -0.251886 0.431889 -> 0.683776 -0.251886 0.43189(R,m,v=1,0.920732,0.0734326)
- RL update rl*prefer*rvt*predict-yes*H0*5*v1*H1*45 0.316218 0.251886 0.568104 -> 0.316219 0.251886 0.568105(R,m,v=1,1,0)
- =>WM: (13595: S1 ^operator O1930)
- 965: O: O1930 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N965 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N964 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13596: I3 ^predict-no N965)
- <=WM: (13582: N964 ^status complete)
- <=WM: (13581: I3 ^predict-yes N964)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (13600: I2 ^dir L)
- =>WM: (13599: I2 ^reward 1)
- =>WM: (13598: I2 ^see 0)
- =>WM: (13597: N965 ^status complete)
- <=WM: (13585: I2 ^dir U)
- <=WM: (13584: I2 ^reward 1)
- <=WM: (13583: I2 ^see 1)
- =>WM: (13601: I2 ^level-1 L1-root)
- <=WM: (13586: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O1930 = 0.6710516902131602)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O1929 = -0.06092862110810815)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R969 ^value 1 +)
- (R1 ^reward R969 +)
- Firing propose*predict-yes
- -->
- (O1931 ^name predict-yes +)
- (S1 ^operator O1931 +)
- Firing propose*predict-no
- -->
- (O1932 ^name predict-no +)
- (S1 ^operator O1932 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1930 = 0.3289456615970239)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1929 = 0.431889867399612)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O1930 ^name predict-no +)
- (S1 ^operator O1930 +)
- Retracting propose*predict-yes
- -->
- (O1929 ^name predict-yes +)
- (S1 ^operator O1929 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R968 ^value 1 +)
- (R1 ^reward R968 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1930 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1929 = 0.)
- =>WM: (13609: S1 ^operator O1932 +)
- =>WM: (13608: S1 ^operator O1931 +)
- =>WM: (13607: I3 ^dir L)
- =>WM: (13606: O1932 ^name predict-no)
- =>WM: (13605: O1931 ^name predict-yes)
- =>WM: (13604: R969 ^value 1)
- =>WM: (13603: R1 ^reward R969)
- =>WM: (13602: I3 ^see 0)
- <=WM: (13593: S1 ^operator O1929 +)
- <=WM: (13594: S1 ^operator O1930 +)
- <=WM: (13595: S1 ^operator O1930)
- <=WM: (13592: I3 ^dir U)
- <=WM: (13588: R1 ^reward R968)
- <=WM: (13587: I3 ^see 1)
- <=WM: (13591: O1930 ^name predict-no)
- <=WM: (13590: O1929 ^name predict-yes)
- <=WM: (13589: R968 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O1931 = -0.06092862110810815)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1931 = 0.431889867399612)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O1932 = 0.6710516902131602)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1932 = 0.3289456615970239)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1930 = 0.3289456615970239)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O1930 = 0.6710516902131602)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1929 = 0.431889867399612)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O1929 = -0.06092862110810815)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (13610: S1 ^operator O1932)
- 966: O: O1932 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N966 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N965 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13611: I3 ^predict-no N966)
- <=WM: (13597: N965 ^status complete)
- <=WM: (13596: I3 ^predict-no N965)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
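An aside for anyone post-processing this trace: each decision cycle's Output Phase ends with an `ENV:` summary and a `predict error` line (`0` = correct prediction, `1` = miss). A minimal sketch for tallying accuracy over such lines; the helper name and the handling of the viewer's leading `- ` prefix are assumptions about how the log is stored, not part of Soar:

```python
def prediction_accuracy(log_lines):
    """Fraction of correct predictions, read from 'predict error N' lines.

    Assumes each cycle emits exactly one 'predict error 0' (correct)
    or 'predict error 1' (miss) line, as seen in this trace.
    """
    outcomes = []
    for line in log_lines:
        # Strip whitespace and any leading "- " bullet added by the viewer.
        text = line.strip().lstrip("- ").strip()
        if text.startswith("predict error"):
            outcomes.append(int(text.split()[-1]))
    if not outcomes:
        return None  # no prediction lines in this slice of the log
    return 1.0 - sum(outcomes) / len(outcomes)
```

Run over this whole file it gives the agent's running prediction accuracy; run over a window of lines it gives a local estimate.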
- --- Input Phase ---
- =>WM: (13615: I2 ^dir R)
- =>WM: (13614: I2 ^reward 1)
- =>WM: (13613: I2 ^see 0)
- =>WM: (13612: N966 ^status complete)
- <=WM: (13600: I2 ^dir L)
- <=WM: (13599: I2 ^reward 1)
- <=WM: (13598: I2 ^see 0)
- =>WM: (13616: I2 ^level-1 L0-root)
- <=WM: (13601: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O1932 = -0.07401383653737587)
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O1931 = 0.2631763932605209)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R970 ^value 1 +)
- (R1 ^reward R970 +)
- Firing propose*predict-yes
- -->
- (O1933 ^name predict-yes +)
- (S1 ^operator O1933 +)
- Firing propose*predict-no
- -->
- (O1934 ^name predict-no +)
- (S1 ^operator O1934 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1932 = 0.2572462853745217)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1931 = 0.7368285999158338)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O1932 ^name predict-no +)
- (S1 ^operator O1932 +)
- Retracting propose*predict-yes
- -->
- (O1931 ^name predict-yes +)
- (S1 ^operator O1931 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R969 ^value 1 +)
- (R1 ^reward R969 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1932 = 0.3289456615970239)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O1932 = 0.6710516902131602)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1931 = 0.431889867399612)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O1931 = -0.06092862110810815)
- =>WM: (13623: S1 ^operator O1934 +)
- =>WM: (13622: S1 ^operator O1933 +)
- =>WM: (13621: I3 ^dir R)
- =>WM: (13620: O1934 ^name predict-no)
- =>WM: (13619: O1933 ^name predict-yes)
- =>WM: (13618: R970 ^value 1)
- =>WM: (13617: R1 ^reward R970)
- <=WM: (13608: S1 ^operator O1931 +)
- <=WM: (13609: S1 ^operator O1932 +)
- <=WM: (13610: S1 ^operator O1932)
- <=WM: (13607: I3 ^dir L)
- <=WM: (13603: R1 ^reward R969)
- <=WM: (13606: O1932 ^name predict-no)
- <=WM: (13605: O1931 ^name predict-yes)
- <=WM: (13604: R969 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1933 = 0.7368285999158338)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O1933 = 0.2631763932605209)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1934 = 0.2572462853745217)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O1934 = -0.07401383653737587)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1932 = 0.2572462853745217)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O1932 = -0.07401383653737587)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1931 = 0.7368285999158338)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O1931 = 0.2631763932605209)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*6 0.565403 -0.236457 0.328946 -> 0.565403 -0.236457 0.328946(R,m,v=1,0.903846,0.087469)
- RL update rl*prefer*rvt*predict-no*H0*6*v1*H1*43 0.434592 0.23646 0.671052 -> 0.434593 0.236459 0.671052(R,m,v=1,1,0)
- =>WM: (13624: S1 ^operator O1933)
- 967: O: O1933 (predict-yes)
- --- END Decision Phase ---
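The `RL update` lines just above follow a fixed textual shape: rule name, an old numeric triple, `->`, a new triple, then `(R,m,v=...)`. In every instance in this trace the first two numbers of a triple sum to the third (e.g. 0.565403 + -0.236457 = 0.328946), suggesting a value stored as base plus delta, and the `R,m,v` tuple reads like reward, mean, variance. A hedged parser for these lines; the field interpretation is inferred from this log, not from Soar's documented semantics:

```python
import re

# Matches e.g.:
# RL update rl*prefer*rvt*predict-no*H0*6 0.565403 -0.236457 0.328946
#   -> 0.565403 -0.236457 0.328946(R,m,v=1,0.903846,0.087469)
RL_UPDATE = re.compile(
    r"RL update (?P<rule>\S+) "
    r"(?P<old>[-\d.e]+ [-\d.e]+ [-\d.e]+) -> "
    r"(?P<new>[-\d.e]+ [-\d.e]+ [-\d.e]+)"
    r"\(R,m,v=(?P<meta>[^)]*)\)"
)

def parse_rl_update(line):
    """Parse one 'RL update' trace line into numeric fields, or None."""
    m = RL_UPDATE.search(line)
    if not m:
        return None
    old = [float(x) for x in m.group("old").split()]
    new = [float(x) for x in m.group("new").split()]
    meta = [float(x) for x in m.group("meta").split(",")]
    # Key names below are my inference from the trace, not official Soar terms.
    return {"rule": m.group("rule"), "old": old, "new": new,
            "reward": meta[0], "mean": meta[1], "variance": meta[2]}
```

This makes it easy to chart per-rule value drift across the 2500-decision run without touching the agent itself.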
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N967 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N966 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13625: I3 ^predict-yes N967)
- <=WM: (13612: N966 ^status complete)
- <=WM: (13611: I3 ^predict-no N966)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (13629: I2 ^dir R)
- =>WM: (13628: I2 ^reward 1)
- =>WM: (13627: I2 ^see 1)
- =>WM: (13626: N967 ^status complete)
- <=WM: (13615: I2 ^dir R)
- <=WM: (13614: I2 ^reward 1)
- <=WM: (13613: I2 ^see 0)
- =>WM: (13630: I2 ^level-1 R1-root)
- <=WM: (13616: I2 ^level-1 L0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O1933 = -0.3011268063455669)
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O1934 = 0.7427519225841476)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R971 ^value 1 +)
- (R1 ^reward R971 +)
- Firing propose*predict-yes
- -->
- (O1935 ^name predict-yes +)
- (S1 ^operator O1935 +)
- Firing propose*predict-no
- -->
- (O1936 ^name predict-no +)
- (S1 ^operator O1936 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1934 = 0.2572462853745217)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1933 = 0.7368285999158338)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O1934 ^name predict-no +)
- (S1 ^operator O1934 +)
- Retracting propose*predict-yes
- -->
- (O1933 ^name predict-yes +)
- (S1 ^operator O1933 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R970 ^value 1 +)
- (R1 ^reward R970 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O1934 = -0.07401383653737587)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1934 = 0.2572462853745217)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O1933 = 0.2631763932605209)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1933 = 0.7368285999158338)
- =>WM: (13637: S1 ^operator O1936 +)
- =>WM: (13636: S1 ^operator O1935 +)
- =>WM: (13635: O1936 ^name predict-no)
- =>WM: (13634: O1935 ^name predict-yes)
- =>WM: (13633: R971 ^value 1)
- =>WM: (13632: R1 ^reward R971)
- =>WM: (13631: I3 ^see 1)
- <=WM: (13622: S1 ^operator O1933 +)
- <=WM: (13624: S1 ^operator O1933)
- <=WM: (13623: S1 ^operator O1934 +)
- <=WM: (13617: R1 ^reward R970)
- <=WM: (13602: I3 ^see 0)
- <=WM: (13620: O1934 ^name predict-no)
- <=WM: (13619: O1933 ^name predict-yes)
- <=WM: (13618: R970 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1935 = 0.7368285999158338)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O1935 = -0.3011268063455669)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1936 = 0.2572462853745217)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O1936 = 0.7427519225841476)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1934 = 0.2572462853745217)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O1934 = 0.7427519225841476)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1933 = 0.7368285999158338)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O1933 = -0.3011268063455669)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*3 0.748236 -0.0114076 0.736829 -> 0.748236 -0.0114082 0.736828(R,m,v=1,0.89375,0.0955582)
- RL update rl*prefer*rvt*predict-yes*H0*3*v1*H1*35 0.251765 0.0114113 0.263176 -> 0.251765 0.0114107 0.263176(R,m,v=1,1,0)
- =>WM: (13638: S1 ^operator O1936)
- 968: O: O1936 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N968 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N967 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13639: I3 ^predict-no N968)
- <=WM: (13626: N967 ^status complete)
- <=WM: (13625: I3 ^predict-yes N967)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (13643: I2 ^dir U)
- =>WM: (13642: I2 ^reward 1)
- =>WM: (13641: I2 ^see 0)
- =>WM: (13640: N968 ^status complete)
- <=WM: (13629: I2 ^dir R)
- <=WM: (13628: I2 ^reward 1)
- <=WM: (13627: I2 ^see 1)
- =>WM: (13644: I2 ^level-1 R0-root)
- <=WM: (13630: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R972 ^value 1 +)
- (R1 ^reward R972 +)
- Firing propose*predict-yes
- -->
- (O1937 ^name predict-yes +)
- (S1 ^operator O1937 +)
- Firing propose*predict-no
- -->
- (O1938 ^name predict-no +)
- (S1 ^operator O1938 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1936 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1935 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O1936 ^name predict-no +)
- (S1 ^operator O1936 +)
- Retracting propose*predict-yes
- -->
- (O1935 ^name predict-yes +)
- (S1 ^operator O1935 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R971 ^value 1 +)
- (R1 ^reward R971 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O1936 = 0.7427519225841476)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1936 = 0.2572462853745217)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O1935 = -0.3011268063455669)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1935 = 0.7368278509393806)
- =>WM: (13652: S1 ^operator O1938 +)
- =>WM: (13651: S1 ^operator O1937 +)
- =>WM: (13650: I3 ^dir U)
- =>WM: (13649: O1938 ^name predict-no)
- =>WM: (13648: O1937 ^name predict-yes)
- =>WM: (13647: R972 ^value 1)
- =>WM: (13646: R1 ^reward R972)
- =>WM: (13645: I3 ^see 0)
- <=WM: (13636: S1 ^operator O1935 +)
- <=WM: (13637: S1 ^operator O1936 +)
- <=WM: (13638: S1 ^operator O1936)
- <=WM: (13621: I3 ^dir R)
- <=WM: (13632: R1 ^reward R971)
- <=WM: (13631: I3 ^see 1)
- <=WM: (13635: O1936 ^name predict-no)
- <=WM: (13634: O1935 ^name predict-yes)
- <=WM: (13633: R971 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1937 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1938 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1936 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1935 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*4 0.586136 -0.32889 0.257246 -> 0.586136 -0.32889 0.257247(R,m,v=1,0.857143,0.123182)
- RL update rl*prefer*rvt*predict-no*H0*4*v1*H1*36 0.413862 0.32889 0.742752 -> 0.413863 0.32889 0.742752(R,m,v=1,1,0)
- =>WM: (13653: S1 ^operator O1938)
- 969: O: O1938 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N969 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N968 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13654: I3 ^predict-no N969)
- <=WM: (13640: N968 ^status complete)
- <=WM: (13639: I3 ^predict-no N968)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (13658: I2 ^dir U)
- =>WM: (13657: I2 ^reward 1)
- =>WM: (13656: I2 ^see 0)
- =>WM: (13655: N969 ^status complete)
- <=WM: (13643: I2 ^dir U)
- <=WM: (13642: I2 ^reward 1)
- <=WM: (13641: I2 ^see 0)
- =>WM: (13659: I2 ^level-1 R0-root)
- <=WM: (13644: I2 ^level-1 R0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R973 ^value 1 +)
- (R1 ^reward R973 +)
- Firing propose*predict-yes
- -->
- (O1939 ^name predict-yes +)
- (S1 ^operator O1939 +)
- Firing propose*predict-no
- -->
- (O1940 ^name predict-no +)
- (S1 ^operator O1940 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1938 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1937 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O1938 ^name predict-no +)
- (S1 ^operator O1938 +)
- Retracting propose*predict-yes
- -->
- (O1937 ^name predict-yes +)
- (S1 ^operator O1937 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R972 ^value 1 +)
- (R1 ^reward R972 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1938 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1937 = 0.)
- =>WM: (13665: S1 ^operator O1940 +)
- =>WM: (13664: S1 ^operator O1939 +)
- =>WM: (13663: O1940 ^name predict-no)
- =>WM: (13662: O1939 ^name predict-yes)
- =>WM: (13661: R973 ^value 1)
- =>WM: (13660: R1 ^reward R973)
- <=WM: (13651: S1 ^operator O1937 +)
- <=WM: (13652: S1 ^operator O1938 +)
- <=WM: (13653: S1 ^operator O1938)
- <=WM: (13646: R1 ^reward R972)
- <=WM: (13649: O1938 ^name predict-no)
- <=WM: (13648: O1937 ^name predict-yes)
- <=WM: (13647: R972 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1939 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1940 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1938 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1937 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (13666: S1 ^operator O1940)
- 970: O: O1940 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N970 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N969 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13667: I3 ^predict-no N970)
- <=WM: (13655: N969 ^status complete)
- <=WM: (13654: I3 ^predict-no N969)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (13671: I2 ^dir L)
- =>WM: (13670: I2 ^reward 1)
- =>WM: (13669: I2 ^see 0)
- =>WM: (13668: N970 ^status complete)
- <=WM: (13658: I2 ^dir U)
- <=WM: (13657: I2 ^reward 1)
- <=WM: (13656: I2 ^see 0)
- =>WM: (13672: I2 ^level-1 R0-root)
- <=WM: (13659: I2 ^level-1 R0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O1940 = 0.04178081990804111)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O1939 = 0.568112264215664)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R974 ^value 1 +)
- (R1 ^reward R974 +)
- Firing propose*predict-yes
- -->
- (O1941 ^name predict-yes +)
- (S1 ^operator O1941 +)
- Firing propose*predict-no
- -->
- (O1942 ^name predict-no +)
- (S1 ^operator O1942 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1940 = 0.3289460588254962)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1939 = 0.431889867399612)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O1940 ^name predict-no +)
- (S1 ^operator O1940 +)
- Retracting propose*predict-yes
- -->
- (O1939 ^name predict-yes +)
- (S1 ^operator O1939 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R973 ^value 1 +)
- (R1 ^reward R973 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1940 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1939 = 0.)
- =>WM: (13679: S1 ^operator O1942 +)
- =>WM: (13678: S1 ^operator O1941 +)
- =>WM: (13677: I3 ^dir L)
- =>WM: (13676: O1942 ^name predict-no)
- =>WM: (13675: O1941 ^name predict-yes)
- =>WM: (13674: R974 ^value 1)
- =>WM: (13673: R1 ^reward R974)
- <=WM: (13664: S1 ^operator O1939 +)
- <=WM: (13665: S1 ^operator O1940 +)
- <=WM: (13666: S1 ^operator O1940)
- <=WM: (13650: I3 ^dir U)
- <=WM: (13660: R1 ^reward R973)
- <=WM: (13663: O1940 ^name predict-no)
- <=WM: (13662: O1939 ^name predict-yes)
- <=WM: (13661: R973 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O1941 = 0.568112264215664)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1941 = 0.431889867399612)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O1942 = 0.04178081990804111)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1942 = 0.3289460588254962)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1940 = 0.3289460588254962)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O1940 = 0.04178081990804111)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1939 = 0.431889867399612)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O1939 = 0.568112264215664)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (13680: S1 ^operator O1941)
- 971: O: O1941 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N971 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N970 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13681: I3 ^predict-yes N971)
- <=WM: (13668: N970 ^status complete)
- <=WM: (13667: I3 ^predict-no N970)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (13685: I2 ^dir R)
- =>WM: (13684: I2 ^reward 1)
- =>WM: (13683: I2 ^see 1)
- =>WM: (13682: N971 ^status complete)
- <=WM: (13671: I2 ^dir L)
- <=WM: (13670: I2 ^reward 1)
- <=WM: (13669: I2 ^see 0)
- =>WM: (13686: I2 ^level-1 L1-root)
- <=WM: (13672: I2 ^level-1 R0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*39
- -->
- (S1 ^operator O1942 = -0.1377248055371832)
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*40
- -->
- (S1 ^operator O1941 = 0.2631673327126827)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R975 ^value 1 +)
- (R1 ^reward R975 +)
- Firing propose*predict-yes
- -->
- (O1943 ^name predict-yes +)
- (S1 ^operator O1943 +)
- Firing propose*predict-no
- -->
- (O1944 ^name predict-no +)
- (S1 ^operator O1944 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1942 = 0.2572465541807213)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1941 = 0.7368278509393806)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O1942 ^name predict-no +)
- (S1 ^operator O1942 +)
- Retracting propose*predict-yes
- -->
- (O1941 ^name predict-yes +)
- (S1 ^operator O1941 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R974 ^value 1 +)
- (R1 ^reward R974 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1942 = 0.3289460588254962)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O1942 = 0.04178081990804111)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1941 = 0.431889867399612)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O1941 = 0.568112264215664)
- =>WM: (13694: S1 ^operator O1944 +)
- =>WM: (13693: S1 ^operator O1943 +)
- =>WM: (13692: I3 ^dir R)
- =>WM: (13691: O1944 ^name predict-no)
- =>WM: (13690: O1943 ^name predict-yes)
- =>WM: (13689: R975 ^value 1)
- =>WM: (13688: R1 ^reward R975)
- =>WM: (13687: I3 ^see 1)
- <=WM: (13678: S1 ^operator O1941 +)
- <=WM: (13680: S1 ^operator O1941)
- <=WM: (13679: S1 ^operator O1942 +)
- <=WM: (13677: I3 ^dir L)
- <=WM: (13673: R1 ^reward R974)
- <=WM: (13645: I3 ^see 0)
- <=WM: (13676: O1942 ^name predict-no)
- <=WM: (13675: O1941 ^name predict-yes)
- <=WM: (13674: R974 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1943 = 0.7368278509393806)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*40
- -->
- (S1 ^operator O1943 = 0.2631673327126827)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1944 = 0.2572465541807213)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*39
- -->
- (S1 ^operator O1944 = -0.1377248055371832)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1942 = 0.2572465541807213)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*39
- -->
- (S1 ^operator O1942 = -0.1377248055371832)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1941 = 0.7368278509393806)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*40
- -->
- (S1 ^operator O1941 = 0.2631673327126827)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*5 0.683776 -0.251886 0.43189 -> 0.683776 -0.251886 0.43189(R,m,v=1,0.921212,0.0730229)
- RL update rl*prefer*rvt*predict-yes*H0*5*v1*H1*30 0.316226 0.251886 0.568112 -> 0.316226 0.251886 0.568112(R,m,v=1,1,0)
- =>WM: (13695: S1 ^operator O1943)
- 972: O: O1943 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N972 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N971 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13696: I3 ^predict-yes N972)
- <=WM: (13682: N971 ^status complete)
- <=WM: (13681: I3 ^predict-yes N971)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (13700: I2 ^dir L)
- =>WM: (13699: I2 ^reward 1)
- =>WM: (13698: I2 ^see 1)
- =>WM: (13697: N972 ^status complete)
- <=WM: (13685: I2 ^dir R)
- <=WM: (13684: I2 ^reward 1)
- <=WM: (13683: I2 ^see 1)
- =>WM: (13701: I2 ^level-1 R1-root)
- <=WM: (13686: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*45
- -->
- (S1 ^operator O1943 = 0.5681048678187335)
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*44
- -->
- (S1 ^operator O1944 = -0.1549421060161498)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R976 ^value 1 +)
- (R1 ^reward R976 +)
- Firing propose*predict-yes
- -->
- (O1945 ^name predict-yes +)
- (S1 ^operator O1945 +)
- Firing propose*predict-no
- -->
- (O1946 ^name predict-no +)
- (S1 ^operator O1946 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1944 = 0.3289460588254962)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1943 = 0.4318895476573206)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O1944 ^name predict-no +)
- (S1 ^operator O1944 +)
- Retracting propose*predict-yes
- -->
- (O1943 ^name predict-yes +)
- (S1 ^operator O1943 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R975 ^value 1 +)
- (R1 ^reward R975 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*39
- -->
- (S1 ^operator O1944 = -0.1377248055371832)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1944 = 0.2572465541807213)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*40
- -->
- (S1 ^operator O1943 = 0.2631673327126827)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1943 = 0.7368278509393806)
- =>WM: (13708: S1 ^operator O1946 +)
- =>WM: (13707: S1 ^operator O1945 +)
- =>WM: (13706: I3 ^dir L)
- =>WM: (13705: O1946 ^name predict-no)
- =>WM: (13704: O1945 ^name predict-yes)
- =>WM: (13703: R976 ^value 1)
- =>WM: (13702: R1 ^reward R976)
- <=WM: (13693: S1 ^operator O1943 +)
- <=WM: (13695: S1 ^operator O1943)
- <=WM: (13694: S1 ^operator O1944 +)
- <=WM: (13692: I3 ^dir R)
- <=WM: (13688: R1 ^reward R975)
- <=WM: (13691: O1944 ^name predict-no)
- <=WM: (13690: O1943 ^name predict-yes)
- <=WM: (13689: R975 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1945 = 0.4318895476573206)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*45
- -->
- (S1 ^operator O1945 = 0.5681048678187335)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1946 = 0.3289460588254962)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*44
- -->
- (S1 ^operator O1946 = -0.1549421060161498)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1944 = 0.3289460588254962)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*44
- -->
- (S1 ^operator O1944 = -0.1549421060161498)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1943 = 0.4318895476573206)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*45
- -->
- (S1 ^operator O1943 = 0.5681048678187335)
- --- END Proposal Phase ---
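In the Proposal Phase above, each candidate operator collects numeric-indifferent preferences from several rl* rules (O1945 gets one from `rl*prefer*rvt*predict-yes*H0*5` and one from `...*H0*5*v1*H1*45`). Soar sums these to estimate each operator's value before deciding; a minimal sketch with the values from this cycle (the helper name is our own):

```python
# Sketch: summing the numeric preferences each operator received above.
# Values copied from the trace; this ignores exploration policy details.

def operator_value(prefs):
    """Total numeric-indifferent preference for one operator."""
    return sum(prefs)

yes_value = operator_value([0.4318895476573206, 0.5681048678187335])  # O1945
no_value = operator_value([0.3289460588254962, -0.1549421060161498])  # O1946
```

The sums come to roughly 1.0 for predict-yes versus roughly 0.17 for predict-no, consistent with the Decision Phase below selecting O1945.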
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*3 0.748236 -0.0114082 0.736828 -> 0.748236 -0.0114076 0.736829(R,m,v=1,0.89441,0.0950311)
- RL update rl*prefer*rvt*predict-yes*H0*3*v1*H1*40 0.251763 0.0114046 0.263167 -> 0.251763 0.0114052 0.263168(R,m,v=1,1,0)
- =>WM: (13709: S1 ^operator O1945)
- 973: O: O1945 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N973 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N972 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13710: I3 ^predict-yes N973)
- <=WM: (13697: N972 ^status complete)
- <=WM: (13696: I3 ^predict-yes N972)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
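The ENV: lines above follow a fixed pattern: a direction is chosen, the agent predicts whether the next observation will be 1 (predict-yes) or 0 (predict-no), and the environment reports the next state, the symbol seen, and whether the prediction was correct. A minimal model of the transitions, covering only the (state, direction) pairs this log actually exhibits (others may exist in the full Flip environment):

```python
# Transition table inferred from the ENV: lines in this log only.
# (state, direction) -> (next_state, see); U keeps the state, R/L flip it
# where observed.

TRANSITIONS = {
    ("State-A", "U"): ("State-A", 0),
    ("State-A", "R"): ("State-B", 1),
    ("State-B", "L"): ("State-A", 1),
    ("State-B", "U"): ("State-B", 0),
    ("State-B", "R"): ("State-B", 0),
}

def step(state, direction, prediction):
    """One environment step: prediction is 'predict-yes' or 'predict-no'."""
    next_state, see = TRANSITIONS[(state, direction)]
    correct = (prediction == "predict-yes") == (see == 1)
    return next_state, see, correct
```

For example, predict-yes for direction L in State-B yields (State-A, 1, True), matching the output just above.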
- --- Input Phase ---
- =>WM: (13714: I2 ^dir U)
- =>WM: (13713: I2 ^reward 1)
- =>WM: (13712: I2 ^see 1)
- =>WM: (13711: N973 ^status complete)
- <=WM: (13700: I2 ^dir L)
- <=WM: (13699: I2 ^reward 1)
- <=WM: (13698: I2 ^see 1)
- =>WM: (13715: I2 ^level-1 L1-root)
- <=WM: (13701: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R977 ^value 1 +)
- (R1 ^reward R977 +)
- Firing propose*predict-yes
- -->
- (O1947 ^name predict-yes +)
- (S1 ^operator O1947 +)
- Firing propose*predict-no
- -->
- (O1948 ^name predict-no +)
- (S1 ^operator O1948 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1946 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1945 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O1946 ^name predict-no +)
- (S1 ^operator O1946 +)
- Retracting propose*predict-yes
- -->
- (O1945 ^name predict-yes +)
- (S1 ^operator O1945 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R976 ^value 1 +)
- (R1 ^reward R976 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*44
- -->
- (S1 ^operator O1946 = -0.1549421060161498)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1946 = 0.3289460588254962)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*45
- -->
- (S1 ^operator O1945 = 0.5681048678187335)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1945 = 0.4318895476573206)
- =>WM: (13722: S1 ^operator O1948 +)
- =>WM: (13721: S1 ^operator O1947 +)
- =>WM: (13720: I3 ^dir U)
- =>WM: (13719: O1948 ^name predict-no)
- =>WM: (13718: O1947 ^name predict-yes)
- =>WM: (13717: R977 ^value 1)
- =>WM: (13716: R1 ^reward R977)
- <=WM: (13707: S1 ^operator O1945 +)
- <=WM: (13709: S1 ^operator O1945)
- <=WM: (13708: S1 ^operator O1946 +)
- <=WM: (13706: I3 ^dir L)
- <=WM: (13702: R1 ^reward R976)
- <=WM: (13705: O1946 ^name predict-no)
- <=WM: (13704: O1945 ^name predict-yes)
- <=WM: (13703: R976 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1947 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1948 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1946 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1945 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*5 0.683776 -0.251886 0.43189 -> 0.683777 -0.251886 0.43189(R,m,v=1,0.921687,0.0726177)
- RL update rl*prefer*rvt*predict-yes*H0*5*v1*H1*45 0.316219 0.251886 0.568105 -> 0.31622 0.251886 0.568106(R,m,v=1,1,0)
- =>WM: (13723: S1 ^operator O1948)
- 974: O: O1948 (predict-no)
- --- END Decision Phase ---
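In the "RL update" lines above, each rule's triple reads as two components and their sum: e.g. 0.683776 + (-0.251886) = 0.43189, and after the update the sum is the rule's new value. The underlying adjustment is a temporal-difference step; a generic SARSA-style sketch (Soar-RL's exact bookkeeping, learning rate, and discount are not shown in this log, so alpha and gamma here are assumptions):

```python
# Illustrative TD update behind the "RL update ... a b v -> a' b' v'" lines.
# alpha (learning rate) and gamma (discount) are assumed values, not the
# run's actual parameters.

def td_update(q_old, reward, q_next, alpha=0.3, gamma=0.9):
    """One SARSA-style step: move q_old toward reward + gamma * q_next."""
    return q_old + alpha * (reward + gamma * q_next - q_old)
```

With reward 1 on every correct prediction, repeated updates push the selected rule's value toward the discounted return, which is why the deltas in this stretch of the log are tiny.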
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N974 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N973 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13724: I3 ^predict-no N974)
- <=WM: (13711: N973 ^status complete)
- <=WM: (13710: I3 ^predict-yes N973)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (13728: I2 ^dir U)
- =>WM: (13727: I2 ^reward 1)
- =>WM: (13726: I2 ^see 0)
- =>WM: (13725: N974 ^status complete)
- <=WM: (13714: I2 ^dir U)
- <=WM: (13713: I2 ^reward 1)
- <=WM: (13712: I2 ^see 1)
- =>WM: (13729: I2 ^level-1 L1-root)
- <=WM: (13715: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R978 ^value 1 +)
- (R1 ^reward R978 +)
- Firing propose*predict-yes
- -->
- (O1949 ^name predict-yes +)
- (S1 ^operator O1949 +)
- Firing propose*predict-no
- -->
- (O1950 ^name predict-no +)
- (S1 ^operator O1950 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1948 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1947 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O1948 ^name predict-no +)
- (S1 ^operator O1948 +)
- Retracting propose*predict-yes
- -->
- (O1947 ^name predict-yes +)
- (S1 ^operator O1947 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R977 ^value 1 +)
- (R1 ^reward R977 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1948 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1947 = 0.)
- =>WM: (13736: S1 ^operator O1950 +)
- =>WM: (13735: S1 ^operator O1949 +)
- =>WM: (13734: O1950 ^name predict-no)
- =>WM: (13733: O1949 ^name predict-yes)
- =>WM: (13732: R978 ^value 1)
- =>WM: (13731: R1 ^reward R978)
- =>WM: (13730: I3 ^see 0)
- <=WM: (13721: S1 ^operator O1947 +)
- <=WM: (13722: S1 ^operator O1948 +)
- <=WM: (13723: S1 ^operator O1948)
- <=WM: (13716: R1 ^reward R977)
- <=WM: (13687: I3 ^see 1)
- <=WM: (13719: O1948 ^name predict-no)
- <=WM: (13718: O1947 ^name predict-yes)
- <=WM: (13717: R977 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1949 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1950 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1948 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1947 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (13737: S1 ^operator O1950)
- 975: O: O1950 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N975 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N974 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13738: I3 ^predict-no N975)
- <=WM: (13725: N974 ^status complete)
- <=WM: (13724: I3 ^predict-no N974)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (13742: I2 ^dir R)
- =>WM: (13741: I2 ^reward 1)
- =>WM: (13740: I2 ^see 0)
- =>WM: (13739: N975 ^status complete)
- <=WM: (13728: I2 ^dir U)
- <=WM: (13727: I2 ^reward 1)
- <=WM: (13726: I2 ^see 0)
- =>WM: (13743: I2 ^level-1 L1-root)
- <=WM: (13729: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*39
- -->
- (S1 ^operator O1950 = -0.1377248055371832)
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*40
- -->
- (S1 ^operator O1949 = 0.2631680551648732)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R979 ^value 1 +)
- (R1 ^reward R979 +)
- Firing propose*predict-yes
- -->
- (O1951 ^name predict-yes +)
- (S1 ^operator O1951 +)
- Firing propose*predict-no
- -->
- (O1952 ^name predict-no +)
- (S1 ^operator O1952 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1950 = 0.2572465541807213)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1949 = 0.7368285733915712)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O1950 ^name predict-no +)
- (S1 ^operator O1950 +)
- Retracting propose*predict-yes
- -->
- (O1949 ^name predict-yes +)
- (S1 ^operator O1949 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R978 ^value 1 +)
- (R1 ^reward R978 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1950 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1949 = 0.)
- =>WM: (13750: S1 ^operator O1952 +)
- =>WM: (13749: S1 ^operator O1951 +)
- =>WM: (13748: I3 ^dir R)
- =>WM: (13747: O1952 ^name predict-no)
- =>WM: (13746: O1951 ^name predict-yes)
- =>WM: (13745: R979 ^value 1)
- =>WM: (13744: R1 ^reward R979)
- <=WM: (13735: S1 ^operator O1949 +)
- <=WM: (13736: S1 ^operator O1950 +)
- <=WM: (13737: S1 ^operator O1950)
- <=WM: (13720: I3 ^dir U)
- <=WM: (13731: R1 ^reward R978)
- <=WM: (13734: O1950 ^name predict-no)
- <=WM: (13733: O1949 ^name predict-yes)
- <=WM: (13732: R978 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*40
- -->
- (S1 ^operator O1951 = 0.2631680551648732)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1951 = 0.7368285733915712)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*39
- -->
- (S1 ^operator O1952 = -0.1377248055371832)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1952 = 0.2572465541807213)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1950 = 0.2572465541807213)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*39
- -->
- (S1 ^operator O1950 = -0.1377248055371832)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1949 = 0.7368285733915712)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*40
- -->
- (S1 ^operator O1949 = 0.2631680551648732)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (13751: S1 ^operator O1951)
- 976: O: O1951 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N976 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N975 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13752: I3 ^predict-yes N976)
- <=WM: (13739: N975 ^status complete)
- <=WM: (13738: I3 ^predict-no N975)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (13756: I2 ^dir U)
- =>WM: (13755: I2 ^reward 1)
- =>WM: (13754: I2 ^see 1)
- =>WM: (13753: N976 ^status complete)
- <=WM: (13742: I2 ^dir R)
- <=WM: (13741: I2 ^reward 1)
- <=WM: (13740: I2 ^see 0)
- =>WM: (13757: I2 ^level-1 R1-root)
- <=WM: (13743: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R980 ^value 1 +)
- (R1 ^reward R980 +)
- Firing propose*predict-yes
- -->
- (O1953 ^name predict-yes +)
- (S1 ^operator O1953 +)
- Firing propose*predict-no
- -->
- (O1954 ^name predict-no +)
- (S1 ^operator O1954 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1952 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1951 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O1952 ^name predict-no +)
- (S1 ^operator O1952 +)
- Retracting propose*predict-yes
- -->
- (O1951 ^name predict-yes +)
- (S1 ^operator O1951 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R979 ^value 1 +)
- (R1 ^reward R979 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1952 = 0.2572465541807213)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*39
- -->
- (S1 ^operator O1952 = -0.1377248055371832)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1951 = 0.7368285733915712)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*40
- -->
- (S1 ^operator O1951 = 0.2631680551648732)
- =>WM: (13765: S1 ^operator O1954 +)
- =>WM: (13764: S1 ^operator O1953 +)
- =>WM: (13763: I3 ^dir U)
- =>WM: (13762: O1954 ^name predict-no)
- =>WM: (13761: O1953 ^name predict-yes)
- =>WM: (13760: R980 ^value 1)
- =>WM: (13759: R1 ^reward R980)
- =>WM: (13758: I3 ^see 1)
- <=WM: (13749: S1 ^operator O1951 +)
- <=WM: (13751: S1 ^operator O1951)
- <=WM: (13750: S1 ^operator O1952 +)
- <=WM: (13748: I3 ^dir R)
- <=WM: (13744: R1 ^reward R979)
- <=WM: (13730: I3 ^see 0)
- <=WM: (13747: O1952 ^name predict-no)
- <=WM: (13746: O1951 ^name predict-yes)
- <=WM: (13745: R979 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1953 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1954 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1952 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1951 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*3 0.748236 -0.0114076 0.736829 -> 0.748236 -0.0114073 0.736829(R,m,v=1,0.895062,0.0945096)
- RL update rl*prefer*rvt*predict-yes*H0*3*v1*H1*40 0.251763 0.0114052 0.263168 -> 0.251763 0.0114055 0.263169(R,m,v=1,1,0)
- =>WM: (13766: S1 ^operator O1954)
- 977: O: O1954 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N977 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N976 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13767: I3 ^predict-no N977)
- <=WM: (13753: N976 ^status complete)
- <=WM: (13752: I3 ^predict-yes N976)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (13771: I2 ^dir U)
- =>WM: (13770: I2 ^reward 1)
- =>WM: (13769: I2 ^see 0)
- =>WM: (13768: N977 ^status complete)
- <=WM: (13756: I2 ^dir U)
- <=WM: (13755: I2 ^reward 1)
- <=WM: (13754: I2 ^see 1)
- =>WM: (13772: I2 ^level-1 R1-root)
- <=WM: (13757: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R981 ^value 1 +)
- (R1 ^reward R981 +)
- Firing propose*predict-yes
- -->
- (O1955 ^name predict-yes +)
- (S1 ^operator O1955 +)
- Firing propose*predict-no
- -->
- (O1956 ^name predict-no +)
- (S1 ^operator O1956 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1954 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1953 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O1954 ^name predict-no +)
- (S1 ^operator O1954 +)
- Retracting propose*predict-yes
- -->
- (O1953 ^name predict-yes +)
- (S1 ^operator O1953 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R980 ^value 1 +)
- (R1 ^reward R980 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1954 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1953 = 0.)
- =>WM: (13779: S1 ^operator O1956 +)
- =>WM: (13778: S1 ^operator O1955 +)
- =>WM: (13777: O1956 ^name predict-no)
- =>WM: (13776: O1955 ^name predict-yes)
- =>WM: (13775: R981 ^value 1)
- =>WM: (13774: R1 ^reward R981)
- =>WM: (13773: I3 ^see 0)
- <=WM: (13764: S1 ^operator O1953 +)
- <=WM: (13765: S1 ^operator O1954 +)
- <=WM: (13766: S1 ^operator O1954)
- <=WM: (13759: R1 ^reward R980)
- <=WM: (13758: I3 ^see 1)
- <=WM: (13762: O1954 ^name predict-no)
- <=WM: (13761: O1953 ^name predict-yes)
- <=WM: (13760: R980 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1955 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1956 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1954 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1953 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (13780: S1 ^operator O1956)
- 978: O: O1956 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N978 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N977 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13781: I3 ^predict-no N978)
- <=WM: (13768: N977 ^status complete)
- <=WM: (13767: I3 ^predict-no N977)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (13785: I2 ^dir R)
- =>WM: (13784: I2 ^reward 1)
- =>WM: (13783: I2 ^see 0)
- =>WM: (13782: N978 ^status complete)
- <=WM: (13771: I2 ^dir U)
- <=WM: (13770: I2 ^reward 1)
- <=WM: (13769: I2 ^see 0)
- =>WM: (13786: I2 ^level-1 R1-root)
- <=WM: (13772: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O1955 = -0.3011268063455669)
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O1956 = 0.7427521913903472)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R982 ^value 1 +)
- (R1 ^reward R982 +)
- Firing propose*predict-yes
- -->
- (O1957 ^name predict-yes +)
- (S1 ^operator O1957 +)
- Firing propose*predict-no
- -->
- (O1958 ^name predict-no +)
- (S1 ^operator O1958 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1956 = 0.2572465541807213)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1955 = 0.7368290791081045)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O1956 ^name predict-no +)
- (S1 ^operator O1956 +)
- Retracting propose*predict-yes
- -->
- (O1955 ^name predict-yes +)
- (S1 ^operator O1955 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R981 ^value 1 +)
- (R1 ^reward R981 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1956 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1955 = 0.)
- =>WM: (13793: S1 ^operator O1958 +)
- =>WM: (13792: S1 ^operator O1957 +)
- =>WM: (13791: I3 ^dir R)
- =>WM: (13790: O1958 ^name predict-no)
- =>WM: (13789: O1957 ^name predict-yes)
- =>WM: (13788: R982 ^value 1)
- =>WM: (13787: R1 ^reward R982)
- <=WM: (13778: S1 ^operator O1955 +)
- <=WM: (13779: S1 ^operator O1956 +)
- <=WM: (13780: S1 ^operator O1956)
- <=WM: (13763: I3 ^dir U)
- <=WM: (13774: R1 ^reward R981)
- <=WM: (13777: O1956 ^name predict-no)
- <=WM: (13776: O1955 ^name predict-yes)
- <=WM: (13775: R981 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O1957 = -0.3011268063455669)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1957 = 0.7368290791081045)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O1958 = 0.7427521913903472)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1958 = 0.2572465541807213)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1956 = 0.2572465541807213)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O1956 = 0.7427521913903472)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1955 = 0.7368290791081045)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O1955 = -0.3011268063455669)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (13794: S1 ^operator O1958)
- 979: O: O1958 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N979 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N978 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13795: I3 ^predict-no N979)
- <=WM: (13782: N978 ^status complete)
- <=WM: (13781: I3 ^predict-no N978)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (13799: I2 ^dir U)
- =>WM: (13798: I2 ^reward 1)
- =>WM: (13797: I2 ^see 0)
- =>WM: (13796: N979 ^status complete)
- <=WM: (13785: I2 ^dir R)
- <=WM: (13784: I2 ^reward 1)
- <=WM: (13783: I2 ^see 0)
- =>WM: (13800: I2 ^level-1 R0-root)
- <=WM: (13786: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R983 ^value 1 +)
- (R1 ^reward R983 +)
- Firing propose*predict-yes
- -->
- (O1959 ^name predict-yes +)
- (S1 ^operator O1959 +)
- Firing propose*predict-no
- -->
- (O1960 ^name predict-no +)
- (S1 ^operator O1960 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1958 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1957 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O1958 ^name predict-no +)
- (S1 ^operator O1958 +)
- Retracting propose*predict-yes
- -->
- (O1957 ^name predict-yes +)
- (S1 ^operator O1957 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R982 ^value 1 +)
- (R1 ^reward R982 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1958 = 0.2572465541807213)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O1958 = 0.7427521913903472)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1957 = 0.7368290791081045)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O1957 = -0.3011268063455669)
- =>WM: (13807: S1 ^operator O1960 +)
- =>WM: (13806: S1 ^operator O1959 +)
- =>WM: (13805: I3 ^dir U)
- =>WM: (13804: O1960 ^name predict-no)
- =>WM: (13803: O1959 ^name predict-yes)
- =>WM: (13802: R983 ^value 1)
- =>WM: (13801: R1 ^reward R983)
- <=WM: (13792: S1 ^operator O1957 +)
- <=WM: (13793: S1 ^operator O1958 +)
- <=WM: (13794: S1 ^operator O1958)
- <=WM: (13791: I3 ^dir R)
- <=WM: (13787: R1 ^reward R982)
- <=WM: (13790: O1958 ^name predict-no)
- <=WM: (13789: O1957 ^name predict-yes)
- <=WM: (13788: R982 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1959 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1960 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1958 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1957 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*4 0.586136 -0.32889 0.257247 -> 0.586137 -0.32889 0.257247(R,m,v=1,0.857988,0.12257)
- RL update rl*prefer*rvt*predict-no*H0*4*v1*H1*36 0.413863 0.32889 0.742752 -> 0.413863 0.32889 0.742752(R,m,v=1,1,0)
- =>WM: (13808: S1 ^operator O1960)
- 980: O: O1960 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N980 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N979 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13809: I3 ^predict-no N980)
- <=WM: (13796: N979 ^status complete)
- <=WM: (13795: I3 ^predict-no N979)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
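The ENV lines in this trace fully determine the environment's response to each move. A minimal sketch of the flip environment's dynamics, reconstructed only from the transitions observed in this log (unobserved moves, such as R from State-B, are deliberately not modeled; `step` is a hypothetical helper name):

```python
# Transitions reconstructed from the "ENV: (next state, see, prediction
# correct?)" lines of this log; anything not observed here is omitted.
TRANSITIONS = {
    ("State-A", "U"): ("State-A", 0),
    ("State-B", "U"): ("State-B", 0),
    ("State-A", "L"): ("State-A", 0),
    ("State-B", "L"): ("State-A", 1),
    ("State-A", "R"): ("State-B", 1),
}

def step(state, direction, prediction):
    """Advance one decision: return (next_state, see, prediction_correct)."""
    next_state, see = TRANSITIONS[(state, direction)]
    # The log marks a prediction correct exactly when predict-yes coincides
    # with a next observation of 1 (and predict-no with 0).
    correct = (prediction == "predict-yes") == (see == 1)
    return next_state, see, correct
```

For example, `step("State-B", "L", "predict-yes")` reproduces the logged outcome `(State-A, 1, True)`.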
- --- Input Phase ---
- =>WM: (13813: I2 ^dir U)
- =>WM: (13812: I2 ^reward 1)
- =>WM: (13811: I2 ^see 0)
- =>WM: (13810: N980 ^status complete)
- <=WM: (13799: I2 ^dir U)
- <=WM: (13798: I2 ^reward 1)
- <=WM: (13797: I2 ^see 0)
- =>WM: (13814: I2 ^level-1 R0-root)
- <=WM: (13800: I2 ^level-1 R0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R984 ^value 1 +)
- (R1 ^reward R984 +)
- Firing propose*predict-yes
- -->
- (O1961 ^name predict-yes +)
- (S1 ^operator O1961 +)
- Firing propose*predict-no
- -->
- (O1962 ^name predict-no +)
- (S1 ^operator O1962 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1960 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1959 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O1960 ^name predict-no +)
- (S1 ^operator O1960 +)
- Retracting propose*predict-yes
- -->
- (O1959 ^name predict-yes +)
- (S1 ^operator O1959 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R983 ^value 1 +)
- (R1 ^reward R983 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1960 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1959 = 0.)
- =>WM: (13820: S1 ^operator O1962 +)
- =>WM: (13819: S1 ^operator O1961 +)
- =>WM: (13818: O1962 ^name predict-no)
- =>WM: (13817: O1961 ^name predict-yes)
- =>WM: (13816: R984 ^value 1)
- =>WM: (13815: R1 ^reward R984)
- <=WM: (13806: S1 ^operator O1959 +)
- <=WM: (13807: S1 ^operator O1960 +)
- <=WM: (13808: S1 ^operator O1960)
- <=WM: (13801: R1 ^reward R983)
- <=WM: (13804: O1960 ^name predict-no)
- <=WM: (13803: O1959 ^name predict-yes)
- <=WM: (13802: R983 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1961 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1962 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1960 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1959 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (13821: S1 ^operator O1962)
- 981: O: O1962 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N981 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N980 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13822: I3 ^predict-no N981)
- <=WM: (13810: N980 ^status complete)
- <=WM: (13809: I3 ^predict-no N980)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (13826: I2 ^dir L)
- =>WM: (13825: I2 ^reward 1)
- =>WM: (13824: I2 ^see 0)
- =>WM: (13823: N981 ^status complete)
- <=WM: (13813: I2 ^dir U)
- <=WM: (13812: I2 ^reward 1)
- <=WM: (13811: I2 ^see 0)
- =>WM: (13827: I2 ^level-1 R0-root)
- <=WM: (13814: I2 ^level-1 R0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O1962 = 0.04178081990804111)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O1961 = 0.5681119444733725)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R985 ^value 1 +)
- (R1 ^reward R985 +)
- Firing propose*predict-yes
- -->
- (O1963 ^name predict-yes +)
- (S1 ^operator O1963 +)
- Firing propose*predict-no
- -->
- (O1964 ^name predict-no +)
- (S1 ^operator O1964 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1962 = 0.3289460588254962)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1961 = 0.4318903853359125)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O1962 ^name predict-no +)
- (S1 ^operator O1962 +)
- Retracting propose*predict-yes
- -->
- (O1961 ^name predict-yes +)
- (S1 ^operator O1961 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R984 ^value 1 +)
- (R1 ^reward R984 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1962 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1961 = 0.)
- =>WM: (13834: S1 ^operator O1964 +)
- =>WM: (13833: S1 ^operator O1963 +)
- =>WM: (13832: I3 ^dir L)
- =>WM: (13831: O1964 ^name predict-no)
- =>WM: (13830: O1963 ^name predict-yes)
- =>WM: (13829: R985 ^value 1)
- =>WM: (13828: R1 ^reward R985)
- <=WM: (13819: S1 ^operator O1961 +)
- <=WM: (13820: S1 ^operator O1962 +)
- <=WM: (13821: S1 ^operator O1962)
- <=WM: (13805: I3 ^dir U)
- <=WM: (13815: R1 ^reward R984)
- <=WM: (13818: O1962 ^name predict-no)
- <=WM: (13817: O1961 ^name predict-yes)
- <=WM: (13816: R984 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O1963 = 0.5681119444733725)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1963 = 0.4318903853359125)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O1964 = 0.04178081990804111)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1964 = 0.3289460588254962)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1962 = 0.3289460588254962)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O1962 = 0.04178081990804111)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1961 = 0.4318903853359125)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O1961 = 0.5681119444733725)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (13835: S1 ^operator O1963)
- 982: O: O1963 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N982 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N981 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13836: I3 ^predict-yes N982)
- <=WM: (13823: N981 ^status complete)
- <=WM: (13822: I3 ^predict-no N981)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (13840: I2 ^dir U)
- =>WM: (13839: I2 ^reward 1)
- =>WM: (13838: I2 ^see 1)
- =>WM: (13837: N982 ^status complete)
- <=WM: (13826: I2 ^dir L)
- <=WM: (13825: I2 ^reward 1)
- <=WM: (13824: I2 ^see 0)
- =>WM: (13841: I2 ^level-1 L1-root)
- <=WM: (13827: I2 ^level-1 R0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R986 ^value 1 +)
- (R1 ^reward R986 +)
- Firing propose*predict-yes
- -->
- (O1965 ^name predict-yes +)
- (S1 ^operator O1965 +)
- Firing propose*predict-no
- -->
- (O1966 ^name predict-no +)
- (S1 ^operator O1966 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1964 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1963 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O1964 ^name predict-no +)
- (S1 ^operator O1964 +)
- Retracting propose*predict-yes
- -->
- (O1963 ^name predict-yes +)
- (S1 ^operator O1963 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R985 ^value 1 +)
- (R1 ^reward R985 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1964 = 0.3289460588254962)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O1964 = 0.04178081990804111)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1963 = 0.4318903853359125)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O1963 = 0.5681119444733725)
- =>WM: (13849: S1 ^operator O1966 +)
- =>WM: (13848: S1 ^operator O1965 +)
- =>WM: (13847: I3 ^dir U)
- =>WM: (13846: O1966 ^name predict-no)
- =>WM: (13845: O1965 ^name predict-yes)
- =>WM: (13844: R986 ^value 1)
- =>WM: (13843: R1 ^reward R986)
- =>WM: (13842: I3 ^see 1)
- <=WM: (13833: S1 ^operator O1963 +)
- <=WM: (13835: S1 ^operator O1963)
- <=WM: (13834: S1 ^operator O1964 +)
- <=WM: (13832: I3 ^dir L)
- <=WM: (13828: R1 ^reward R985)
- <=WM: (13773: I3 ^see 0)
- <=WM: (13831: O1964 ^name predict-no)
- <=WM: (13830: O1963 ^name predict-yes)
- <=WM: (13829: R985 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1965 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1966 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1964 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1963 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*5 0.683777 -0.251886 0.43189 -> 0.683776 -0.251886 0.43189(R,m,v=1,0.922156,0.072217)
- RL update rl*prefer*rvt*predict-yes*H0*5*v1*H1*30 0.316226 0.251886 0.568112 -> 0.316225 0.251886 0.568112(R,m,v=1,1,0)
- =>WM: (13850: S1 ^operator O1966)
- 983: O: O1966 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N983 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N982 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13851: I3 ^predict-no N983)
- <=WM: (13837: N982 ^status complete)
- <=WM: (13836: I3 ^predict-yes N982)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (13855: I2 ^dir L)
- =>WM: (13854: I2 ^reward 1)
- =>WM: (13853: I2 ^see 0)
- =>WM: (13852: N983 ^status complete)
- <=WM: (13840: I2 ^dir U)
- <=WM: (13839: I2 ^reward 1)
- <=WM: (13838: I2 ^see 1)
- =>WM: (13856: I2 ^level-1 L1-root)
- <=WM: (13841: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O1966 = 0.6710520874416326)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O1965 = -0.06092862110810815)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R987 ^value 1 +)
- (R1 ^reward R987 +)
- Firing propose*predict-yes
- -->
- (O1967 ^name predict-yes +)
- (S1 ^operator O1967 +)
- Firing propose*predict-no
- -->
- (O1968 ^name predict-no +)
- (S1 ^operator O1968 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1966 = 0.3289460588254962)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1965 = 0.4318900358645197)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O1966 ^name predict-no +)
- (S1 ^operator O1966 +)
- Retracting propose*predict-yes
- -->
- (O1965 ^name predict-yes +)
- (S1 ^operator O1965 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R986 ^value 1 +)
- (R1 ^reward R986 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1966 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1965 = 0.)
- =>WM: (13864: S1 ^operator O1968 +)
- =>WM: (13863: S1 ^operator O1967 +)
- =>WM: (13862: I3 ^dir L)
- =>WM: (13861: O1968 ^name predict-no)
- =>WM: (13860: O1967 ^name predict-yes)
- =>WM: (13859: R987 ^value 1)
- =>WM: (13858: R1 ^reward R987)
- =>WM: (13857: I3 ^see 0)
- <=WM: (13848: S1 ^operator O1965 +)
- <=WM: (13849: S1 ^operator O1966 +)
- <=WM: (13850: S1 ^operator O1966)
- <=WM: (13847: I3 ^dir U)
- <=WM: (13843: R1 ^reward R986)
- <=WM: (13842: I3 ^see 1)
- <=WM: (13846: O1966 ^name predict-no)
- <=WM: (13845: O1965 ^name predict-yes)
- <=WM: (13844: R986 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O1967 = -0.06092862110810815)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1967 = 0.4318900358645197)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O1968 = 0.6710520874416326)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1968 = 0.3289460588254962)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1966 = 0.3289460588254962)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O1966 = 0.6710520874416326)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1965 = 0.4318900358645197)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O1965 = -0.06092862110810815)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (13865: S1 ^operator O1968)
- 984: O: O1968 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N984 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N983 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13866: I3 ^predict-no N984)
- <=WM: (13852: N983 ^status complete)
- <=WM: (13851: I3 ^predict-no N983)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (13870: I2 ^dir U)
- =>WM: (13869: I2 ^reward 1)
- =>WM: (13868: I2 ^see 0)
- =>WM: (13867: N984 ^status complete)
- <=WM: (13855: I2 ^dir L)
- <=WM: (13854: I2 ^reward 1)
- <=WM: (13853: I2 ^see 0)
- =>WM: (13871: I2 ^level-1 L0-root)
- <=WM: (13856: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R988 ^value 1 +)
- (R1 ^reward R988 +)
- Firing propose*predict-yes
- -->
- (O1969 ^name predict-yes +)
- (S1 ^operator O1969 +)
- Firing propose*predict-no
- -->
- (O1970 ^name predict-no +)
- (S1 ^operator O1970 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1968 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1967 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O1968 ^name predict-no +)
- (S1 ^operator O1968 +)
- Retracting propose*predict-yes
- -->
- (O1967 ^name predict-yes +)
- (S1 ^operator O1967 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R987 ^value 1 +)
- (R1 ^reward R987 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1968 = 0.3289460588254962)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O1968 = 0.6710520874416326)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1967 = 0.4318900358645197)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O1967 = -0.06092862110810815)
- =>WM: (13878: S1 ^operator O1970 +)
- =>WM: (13877: S1 ^operator O1969 +)
- =>WM: (13876: I3 ^dir U)
- =>WM: (13875: O1970 ^name predict-no)
- =>WM: (13874: O1969 ^name predict-yes)
- =>WM: (13873: R988 ^value 1)
- =>WM: (13872: R1 ^reward R988)
- <=WM: (13863: S1 ^operator O1967 +)
- <=WM: (13864: S1 ^operator O1968 +)
- <=WM: (13865: S1 ^operator O1968)
- <=WM: (13862: I3 ^dir L)
- <=WM: (13858: R1 ^reward R987)
- <=WM: (13861: O1968 ^name predict-no)
- <=WM: (13860: O1967 ^name predict-yes)
- <=WM: (13859: R987 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1969 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1970 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1968 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1967 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*6 0.565403 -0.236457 0.328946 -> 0.565404 -0.236458 0.328946(R,m,v=1,0.904459,0.0869672)
- RL update rl*prefer*rvt*predict-no*H0*6*v1*H1*43 0.434593 0.236459 0.671052 -> 0.434593 0.236459 0.671052(R,m,v=1,1,0)
- =>WM: (13879: S1 ^operator O1970)
- 985: O: O1970 (predict-no)
- --- END Decision Phase ---
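The "RL update" lines in the decision phases above show each RL rule carrying a fraction of the selected operator's value (e.g. 0.328946 + 0.671052 sums to roughly the reward of 1). A hedged sketch of how such per-rule updates might arise, assuming the operator's Q-value is the sum of the numeric preferences of the RL rules that fired for it and that the TD error is split evenly among them; the learning-rate and discount handling here are assumptions, not read from this log:

```python
# Speculative sketch: distribute a one-step TD error evenly across the
# RL rules whose numeric preferences summed to the operator's Q-value.
# alpha/gamma values are illustrative only.
def rl_update(rule_values, reward, next_q, alpha=0.1, gamma=0.9):
    q = sum(rule_values)                    # current Q = sum over firing rules
    td_error = reward + gamma * next_q - q  # one-step TD error
    share = alpha * td_error / len(rule_values)
    return [v + share for v in rule_values] # each rule absorbs an equal share
```

This is consistent with the trace's behavior of leaving converged rules unchanged (updates like `1 0 1 -> 1 0 1`) once the summed value already matches the target.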
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N985 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N984 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13880: I3 ^predict-no N985)
- <=WM: (13867: N984 ^status complete)
- <=WM: (13866: I3 ^predict-no N984)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- sleeping...
- --- Input Phase ---
- =>WM: (13884: I2 ^dir R)
- =>WM: (13883: I2 ^reward 1)
- =>WM: (13882: I2 ^see 0)
- =>WM: (13881: N985 ^status complete)
- <=WM: (13870: I2 ^dir U)
- <=WM: (13869: I2 ^reward 1)
- <=WM: (13868: I2 ^see 0)
- =>WM: (13885: I2 ^level-1 L0-root)
- <=WM: (13871: I2 ^level-1 L0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O1970 = -0.07401383653737587)
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O1969 = 0.2631756442840678)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R989 ^value 1 +)
- (R1 ^reward R989 +)
- Firing propose*predict-yes
- -->
- (O1971 ^name predict-yes +)
- (S1 ^operator O1971 +)
- Firing propose*predict-no
- -->
- (O1972 ^name predict-no +)
- (S1 ^operator O1972 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1970 = 0.257246742345061)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1969 = 0.7368290791081045)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O1970 ^name predict-no +)
- (S1 ^operator O1970 +)
- Retracting propose*predict-yes
- -->
- (O1969 ^name predict-yes +)
- (S1 ^operator O1969 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R988 ^value 1 +)
- (R1 ^reward R988 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1970 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1969 = 0.)
- =>WM: (13892: S1 ^operator O1972 +)
- =>WM: (13891: S1 ^operator O1971 +)
- =>WM: (13890: I3 ^dir R)
- =>WM: (13889: O1972 ^name predict-no)
- =>WM: (13888: O1971 ^name predict-yes)
- =>WM: (13887: R989 ^value 1)
- =>WM: (13886: R1 ^reward R989)
- <=WM: (13877: S1 ^operator O1969 +)
- <=WM: (13878: S1 ^operator O1970 +)
- <=WM: (13879: S1 ^operator O1970)
- <=WM: (13876: I3 ^dir U)
- <=WM: (13872: R1 ^reward R988)
- <=WM: (13875: O1970 ^name predict-no)
- <=WM: (13874: O1969 ^name predict-yes)
- <=WM: (13873: R988 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O1971 = 0.2631756442840678)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1971 = 0.7368290791081045)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O1972 = -0.07401383653737587)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1972 = 0.257246742345061)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1970 = 0.257246742345061)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O1970 = -0.07401383653737587)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1969 = 0.7368290791081045)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O1969 = 0.2631756442840678)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (13893: S1 ^operator O1971)
- 986: O: O1971 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N986 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N985 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13894: I3 ^predict-yes N986)
- <=WM: (13881: N985 ^status complete)
- <=WM: (13880: I3 ^predict-no N985)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (13898: I2 ^dir U)
- =>WM: (13897: I2 ^reward 1)
- =>WM: (13896: I2 ^see 1)
- =>WM: (13895: N986 ^status complete)
- <=WM: (13884: I2 ^dir R)
- <=WM: (13883: I2 ^reward 1)
- <=WM: (13882: I2 ^see 0)
- =>WM: (13899: I2 ^level-1 R1-root)
- <=WM: (13885: I2 ^level-1 L0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R990 ^value 1 +)
- (R1 ^reward R990 +)
- Firing propose*predict-yes
- -->
- (O1973 ^name predict-yes +)
- (S1 ^operator O1973 +)
- Firing propose*predict-no
- -->
- (O1974 ^name predict-no +)
- (S1 ^operator O1974 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1972 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1971 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O1972 ^name predict-no +)
- (S1 ^operator O1972 +)
- Retracting propose*predict-yes
- -->
- (O1971 ^name predict-yes +)
- (S1 ^operator O1971 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R989 ^value 1 +)
- (R1 ^reward R989 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1972 = 0.257246742345061)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O1972 = -0.07401383653737587)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1971 = 0.7368290791081045)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O1971 = 0.2631756442840678)
- =>WM: (13907: S1 ^operator O1974 +)
- =>WM: (13906: S1 ^operator O1973 +)
- =>WM: (13905: I3 ^dir U)
- =>WM: (13904: O1974 ^name predict-no)
- =>WM: (13903: O1973 ^name predict-yes)
- =>WM: (13902: R990 ^value 1)
- =>WM: (13901: R1 ^reward R990)
- =>WM: (13900: I3 ^see 1)
- <=WM: (13891: S1 ^operator O1971 +)
- <=WM: (13893: S1 ^operator O1971)
- <=WM: (13892: S1 ^operator O1972 +)
- <=WM: (13890: I3 ^dir R)
- <=WM: (13886: R1 ^reward R989)
- <=WM: (13857: I3 ^see 0)
- <=WM: (13889: O1972 ^name predict-no)
- <=WM: (13888: O1971 ^name predict-yes)
- <=WM: (13887: R989 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1973 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1974 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1972 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1971 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*3 0.748236 -0.0114073 0.736829 -> 0.748236 -0.0114078 0.736828(R,m,v=1,0.895706,0.0939938)
- RL update rl*prefer*rvt*predict-yes*H0*3*v1*H1*35 0.251765 0.0114107 0.263176 -> 0.251765 0.0114102 0.263175(R,m,v=1,1,0)
- =>WM: (13908: S1 ^operator O1974)
- 987: O: O1974 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N987 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N986 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13909: I3 ^predict-no N987)
- <=WM: (13895: N986 ^status complete)
- <=WM: (13894: I3 ^predict-yes N986)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (13913: I2 ^dir R)
- =>WM: (13912: I2 ^reward 1)
- =>WM: (13911: I2 ^see 0)
- =>WM: (13910: N987 ^status complete)
- <=WM: (13898: I2 ^dir U)
- <=WM: (13897: I2 ^reward 1)
- <=WM: (13896: I2 ^see 1)
- =>WM: (13914: I2 ^level-1 R1-root)
- <=WM: (13899: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O1973 = -0.3011268063455669)
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O1974 = 0.7427523795546869)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R991 ^value 1 +)
- (R1 ^reward R991 +)
- Firing propose*predict-yes
- -->
- (O1975 ^name predict-yes +)
- (S1 ^operator O1975 +)
- Firing propose*predict-no
- -->
- (O1976 ^name predict-no +)
- (S1 ^operator O1976 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1974 = 0.257246742345061)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1973 = 0.7368283705992786)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O1974 ^name predict-no +)
- (S1 ^operator O1974 +)
- Retracting propose*predict-yes
- -->
- (O1973 ^name predict-yes +)
- (S1 ^operator O1973 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R990 ^value 1 +)
- (R1 ^reward R990 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1974 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1973 = 0.)
- =>WM: (13922: S1 ^operator O1976 +)
- =>WM: (13921: S1 ^operator O1975 +)
- =>WM: (13920: I3 ^dir R)
- =>WM: (13919: O1976 ^name predict-no)
- =>WM: (13918: O1975 ^name predict-yes)
- =>WM: (13917: R991 ^value 1)
- =>WM: (13916: R1 ^reward R991)
- =>WM: (13915: I3 ^see 0)
- <=WM: (13906: S1 ^operator O1973 +)
- <=WM: (13907: S1 ^operator O1974 +)
- <=WM: (13908: S1 ^operator O1974)
- <=WM: (13905: I3 ^dir U)
- <=WM: (13901: R1 ^reward R990)
- <=WM: (13900: I3 ^see 1)
- <=WM: (13904: O1974 ^name predict-no)
- <=WM: (13903: O1973 ^name predict-yes)
- <=WM: (13902: R990 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O1975 = -0.3011268063455669)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1975 = 0.7368283705992786)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O1976 = 0.7427523795546869)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1976 = 0.257246742345061)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1974 = 0.257246742345061)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O1974 = 0.7427523795546869)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1973 = 0.7368283705992786)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O1973 = -0.3011268063455669)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (13923: S1 ^operator O1976)
- 988: O: O1976 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N988 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N987 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13924: I3 ^predict-no N988)
- <=WM: (13910: N987 ^status complete)
- <=WM: (13909: I3 ^predict-no N987)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (13928: I2 ^dir R)
- =>WM: (13927: I2 ^reward 1)
- =>WM: (13926: I2 ^see 0)
- =>WM: (13925: N988 ^status complete)
- <=WM: (13913: I2 ^dir R)
- <=WM: (13912: I2 ^reward 1)
- <=WM: (13911: I2 ^see 0)
- =>WM: (13929: I2 ^level-1 R0-root)
- <=WM: (13914: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O1976 = 0.7427594337336832)
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O1975 = -0.1989581826229297)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R992 ^value 1 +)
- (R1 ^reward R992 +)
- Firing propose*predict-yes
- -->
- (O1977 ^name predict-yes +)
- (S1 ^operator O1977 +)
- Firing propose*predict-no
- -->
- (O1978 ^name predict-no +)
- (S1 ^operator O1978 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1976 = 0.257246742345061)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1975 = 0.7368283705992786)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O1976 ^name predict-no +)
- (S1 ^operator O1976 +)
- Retracting propose*predict-yes
- -->
- (O1975 ^name predict-yes +)
- (S1 ^operator O1975 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R991 ^value 1 +)
- (R1 ^reward R991 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1976 = 0.257246742345061)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O1976 = 0.7427523795546869)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1975 = 0.7368283705992786)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O1975 = -0.3011268063455669)
- =>WM: (13935: S1 ^operator O1978 +)
- =>WM: (13934: S1 ^operator O1977 +)
- =>WM: (13933: O1978 ^name predict-no)
- =>WM: (13932: O1977 ^name predict-yes)
- =>WM: (13931: R992 ^value 1)
- =>WM: (13930: R1 ^reward R992)
- <=WM: (13921: S1 ^operator O1975 +)
- <=WM: (13922: S1 ^operator O1976 +)
- <=WM: (13923: S1 ^operator O1976)
- <=WM: (13916: R1 ^reward R991)
- <=WM: (13919: O1976 ^name predict-no)
- <=WM: (13918: O1975 ^name predict-yes)
- <=WM: (13917: R991 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1977 = 0.7368283705992786)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O1977 = -0.1989581826229297)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1978 = 0.257246742345061)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O1978 = 0.7427594337336832)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1976 = 0.257246742345061)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O1976 = 0.7427594337336832)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1975 = 0.7368283705992786)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O1975 = -0.1989581826229297)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*4 0.586137 -0.32889 0.257247 -> 0.586137 -0.32889 0.257247(R,m,v=1,0.858824,0.121963)
- RL update rl*prefer*rvt*predict-no*H0*4*v1*H1*36 0.413863 0.32889 0.742752 -> 0.413863 0.32889 0.742753(R,m,v=1,1,0)
- =>WM: (13936: S1 ^operator O1978)
- 989: O: O1978 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N989 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N988 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13937: I3 ^predict-no N989)
- <=WM: (13925: N988 ^status complete)
- <=WM: (13924: I3 ^predict-no N988)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (13941: I2 ^dir L)
- =>WM: (13940: I2 ^reward 1)
- =>WM: (13939: I2 ^see 0)
- =>WM: (13938: N989 ^status complete)
- <=WM: (13928: I2 ^dir R)
- <=WM: (13927: I2 ^reward 1)
- <=WM: (13926: I2 ^see 0)
- =>WM: (13942: I2 ^level-1 R0-root)
- <=WM: (13929: I2 ^level-1 R0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O1978 = 0.04178081990804111)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O1977 = 0.5681115950019797)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R993 ^value 1 +)
- (R1 ^reward R993 +)
- Firing propose*predict-yes
- -->
- (O1979 ^name predict-yes +)
- (S1 ^operator O1979 +)
- Firing propose*predict-no
- -->
- (O1980 ^name predict-no +)
- (S1 ^operator O1980 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1978 = 0.3289463368854268)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1977 = 0.4318900358645197)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O1978 ^name predict-no +)
- (S1 ^operator O1978 +)
- Retracting propose*predict-yes
- -->
- (O1977 ^name predict-yes +)
- (S1 ^operator O1977 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R992 ^value 1 +)
- (R1 ^reward R992 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O1978 = 0.7427594337336832)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1978 = 0.2572468740600988)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O1977 = -0.1989581826229297)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1977 = 0.7368283705992786)
- =>WM: (13949: S1 ^operator O1980 +)
- =>WM: (13948: S1 ^operator O1979 +)
- =>WM: (13947: I3 ^dir L)
- =>WM: (13946: O1980 ^name predict-no)
- =>WM: (13945: O1979 ^name predict-yes)
- =>WM: (13944: R993 ^value 1)
- =>WM: (13943: R1 ^reward R993)
- <=WM: (13934: S1 ^operator O1977 +)
- <=WM: (13935: S1 ^operator O1978 +)
- <=WM: (13936: S1 ^operator O1978)
- <=WM: (13920: I3 ^dir R)
- <=WM: (13930: R1 ^reward R992)
- <=WM: (13933: O1978 ^name predict-no)
- <=WM: (13932: O1977 ^name predict-yes)
- <=WM: (13931: R992 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O1979 = 0.5681115950019797)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1979 = 0.4318900358645197)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O1980 = 0.04178081990804111)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1980 = 0.3289463368854268)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1978 = 0.3289463368854268)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O1978 = 0.04178081990804111)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1977 = 0.4318900358645197)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O1977 = 0.5681115950019797)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*4 0.586137 -0.32889 0.257247 -> 0.586136 -0.32889 0.257246(R,m,v=1,0.859649,0.121362)
- RL update rl*prefer*rvt*predict-no*H0*4*v1*H1*41 0.413868 0.328891 0.742759 -> 0.413868 0.328891 0.742758(R,m,v=1,1,0)
- =>WM: (13950: S1 ^operator O1979)
- 990: O: O1979 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N990 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N989 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13951: I3 ^predict-yes N990)
- <=WM: (13938: N989 ^status complete)
- <=WM: (13937: I3 ^predict-no N989)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (13955: I2 ^dir U)
- =>WM: (13954: I2 ^reward 1)
- =>WM: (13953: I2 ^see 1)
- =>WM: (13952: N990 ^status complete)
- <=WM: (13941: I2 ^dir L)
- <=WM: (13940: I2 ^reward 1)
- <=WM: (13939: I2 ^see 0)
- =>WM: (13956: I2 ^level-1 L1-root)
- <=WM: (13942: I2 ^level-1 R0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R994 ^value 1 +)
- (R1 ^reward R994 +)
- Firing propose*predict-yes
- -->
- (O1981 ^name predict-yes +)
- (S1 ^operator O1981 +)
- Firing propose*predict-no
- -->
- (O1982 ^name predict-no +)
- (S1 ^operator O1982 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1980 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1979 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O1980 ^name predict-no +)
- (S1 ^operator O1980 +)
- Retracting propose*predict-yes
- -->
- (O1979 ^name predict-yes +)
- (S1 ^operator O1979 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R993 ^value 1 +)
- (R1 ^reward R993 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1980 = 0.3289463368854268)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O1980 = 0.04178081990804111)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1979 = 0.4318900358645197)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O1979 = 0.5681115950019797)
- =>WM: (13964: S1 ^operator O1982 +)
- =>WM: (13963: S1 ^operator O1981 +)
- =>WM: (13962: I3 ^dir U)
- =>WM: (13961: O1982 ^name predict-no)
- =>WM: (13960: O1981 ^name predict-yes)
- =>WM: (13959: R994 ^value 1)
- =>WM: (13958: R1 ^reward R994)
- =>WM: (13957: I3 ^see 1)
- <=WM: (13948: S1 ^operator O1979 +)
- <=WM: (13950: S1 ^operator O1979)
- <=WM: (13949: S1 ^operator O1980 +)
- <=WM: (13947: I3 ^dir L)
- <=WM: (13943: R1 ^reward R993)
- <=WM: (13915: I3 ^see 0)
- <=WM: (13946: O1980 ^name predict-no)
- <=WM: (13945: O1979 ^name predict-yes)
- <=WM: (13944: R993 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1981 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1982 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1980 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1979 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*5 0.683776 -0.251886 0.43189 -> 0.683776 -0.251886 0.43189(R,m,v=1,0.922619,0.0718206)
- RL update rl*prefer*rvt*predict-yes*H0*5*v1*H1*30 0.316225 0.251886 0.568112 -> 0.316225 0.251886 0.568111(R,m,v=1,1,0)
- =>WM: (13965: S1 ^operator O1982)
- 991: O: O1982 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N991 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N990 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13966: I3 ^predict-no N991)
- <=WM: (13952: N990 ^status complete)
- <=WM: (13951: I3 ^predict-yes N990)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (13970: I2 ^dir R)
- =>WM: (13969: I2 ^reward 1)
- =>WM: (13968: I2 ^see 0)
- =>WM: (13967: N991 ^status complete)
- <=WM: (13955: I2 ^dir U)
- <=WM: (13954: I2 ^reward 1)
- <=WM: (13953: I2 ^see 1)
- =>WM: (13971: I2 ^level-1 L1-root)
- <=WM: (13956: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*39
- -->
- (S1 ^operator O1982 = -0.1377248055371832)
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*40
- -->
- (S1 ^operator O1981 = 0.2631685608814066)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R995 ^value 1 +)
- (R1 ^reward R995 +)
- Firing propose*predict-yes
- -->
- (O1983 ^name predict-yes +)
- (S1 ^operator O1983 +)
- Firing propose*predict-no
- -->
- (O1984 ^name predict-no +)
- (S1 ^operator O1984 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1982 = 0.2572459278910315)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1981 = 0.7368283705992786)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O1982 ^name predict-no +)
- (S1 ^operator O1982 +)
- Retracting propose*predict-yes
- -->
- (O1981 ^name predict-yes +)
- (S1 ^operator O1981 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R994 ^value 1 +)
- (R1 ^reward R994 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1982 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1981 = 0.)
- =>WM: (13979: S1 ^operator O1984 +)
- =>WM: (13978: S1 ^operator O1983 +)
- =>WM: (13977: I3 ^dir R)
- =>WM: (13976: O1984 ^name predict-no)
- =>WM: (13975: O1983 ^name predict-yes)
- =>WM: (13974: R995 ^value 1)
- =>WM: (13973: R1 ^reward R995)
- =>WM: (13972: I3 ^see 0)
- <=WM: (13963: S1 ^operator O1981 +)
- <=WM: (13964: S1 ^operator O1982 +)
- <=WM: (13965: S1 ^operator O1982)
- <=WM: (13962: I3 ^dir U)
- <=WM: (13958: R1 ^reward R994)
- <=WM: (13957: I3 ^see 1)
- <=WM: (13961: O1982 ^name predict-no)
- <=WM: (13960: O1981 ^name predict-yes)
- <=WM: (13959: R994 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*40
- -->
- (S1 ^operator O1983 = 0.2631685608814066)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1983 = 0.7368283705992786)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*39
- -->
- (S1 ^operator O1984 = -0.1377248055371832)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1984 = 0.2572459278910315)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1982 = 0.2572459278910315)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*39
- -->
- (S1 ^operator O1982 = -0.1377248055371832)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1981 = 0.7368283705992786)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*40
- -->
- (S1 ^operator O1981 = 0.2631685608814066)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (13980: S1 ^operator O1983)
- 992: O: O1983 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N992 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N991 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13981: I3 ^predict-yes N992)
- <=WM: (13967: N991 ^status complete)
- <=WM: (13966: I3 ^predict-no N991)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (13985: I2 ^dir U)
- =>WM: (13984: I2 ^reward 1)
- =>WM: (13983: I2 ^see 1)
- =>WM: (13982: N992 ^status complete)
- <=WM: (13970: I2 ^dir R)
- <=WM: (13969: I2 ^reward 1)
- <=WM: (13968: I2 ^see 0)
- =>WM: (13986: I2 ^level-1 R1-root)
- <=WM: (13971: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R996 ^value 1 +)
- (R1 ^reward R996 +)
- Firing propose*predict-yes
- -->
- (O1985 ^name predict-yes +)
- (S1 ^operator O1985 +)
- Firing propose*predict-no
- -->
- (O1986 ^name predict-no +)
- (S1 ^operator O1986 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1984 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1983 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O1984 ^name predict-no +)
- (S1 ^operator O1984 +)
- Retracting propose*predict-yes
- -->
- (O1983 ^name predict-yes +)
- (S1 ^operator O1983 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R995 ^value 1 +)
- (R1 ^reward R995 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1984 = 0.2572459278910315)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*39
- -->
- (S1 ^operator O1984 = -0.1377248055371832)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1983 = 0.7368283705992786)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*40
- -->
- (S1 ^operator O1983 = 0.2631685608814066)
- =>WM: (13994: S1 ^operator O1986 +)
- =>WM: (13993: S1 ^operator O1985 +)
- =>WM: (13992: I3 ^dir U)
- =>WM: (13991: O1986 ^name predict-no)
- =>WM: (13990: O1985 ^name predict-yes)
- =>WM: (13989: R996 ^value 1)
- =>WM: (13988: R1 ^reward R996)
- =>WM: (13987: I3 ^see 1)
- <=WM: (13978: S1 ^operator O1983 +)
- <=WM: (13980: S1 ^operator O1983)
- <=WM: (13979: S1 ^operator O1984 +)
- <=WM: (13977: I3 ^dir R)
- <=WM: (13973: R1 ^reward R995)
- <=WM: (13972: I3 ^see 0)
- <=WM: (13976: O1984 ^name predict-no)
- <=WM: (13975: O1983 ^name predict-yes)
- <=WM: (13974: R995 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1985 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1986 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1984 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1983 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*3 0.748236 -0.0114078 0.736828 -> 0.748236 -0.0114074 0.736829(R,m,v=1,0.896341,0.0934835)
- RL update rl*prefer*rvt*predict-yes*H0*3*v1*H1*40 0.251763 0.0114055 0.263169 -> 0.251763 0.0114059 0.263169(R,m,v=1,1,0)
- =>WM: (13995: S1 ^operator O1986)
- 993: O: O1986 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N993 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N992 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (13996: I3 ^predict-no N993)
- <=WM: (13982: N992 ^status complete)
- <=WM: (13981: I3 ^predict-yes N992)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (14000: I2 ^dir L)
- =>WM: (13999: I2 ^reward 1)
- =>WM: (13998: I2 ^see 0)
- =>WM: (13997: N993 ^status complete)
- <=WM: (13985: I2 ^dir U)
- <=WM: (13984: I2 ^reward 1)
- <=WM: (13983: I2 ^see 1)
- =>WM: (14001: I2 ^level-1 R1-root)
- <=WM: (13986: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*45
- -->
- (S1 ^operator O1985 = 0.5681057054973254)
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*44
- -->
- (S1 ^operator O1986 = -0.1549421060161498)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R997 ^value 1 +)
- (R1 ^reward R997 +)
- Firing propose*predict-yes
- -->
- (O1987 ^name predict-yes +)
- (S1 ^operator O1987 +)
- Firing propose*predict-no
- -->
- (O1988 ^name predict-no +)
- (S1 ^operator O1988 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1986 = 0.3289463368854268)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1985 = 0.4318897912345449)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O1986 ^name predict-no +)
- (S1 ^operator O1986 +)
- Retracting propose*predict-yes
- -->
- (O1985 ^name predict-yes +)
- (S1 ^operator O1985 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R996 ^value 1 +)
- (R1 ^reward R996 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1986 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1985 = 0.)
- =>WM: (14009: S1 ^operator O1988 +)
- =>WM: (14008: S1 ^operator O1987 +)
- =>WM: (14007: I3 ^dir L)
- =>WM: (14006: O1988 ^name predict-no)
- =>WM: (14005: O1987 ^name predict-yes)
- =>WM: (14004: R997 ^value 1)
- =>WM: (14003: R1 ^reward R997)
- =>WM: (14002: I3 ^see 0)
- <=WM: (13993: S1 ^operator O1985 +)
- <=WM: (13994: S1 ^operator O1986 +)
- <=WM: (13995: S1 ^operator O1986)
- <=WM: (13992: I3 ^dir U)
- <=WM: (13988: R1 ^reward R996)
- <=WM: (13987: I3 ^see 1)
- <=WM: (13991: O1986 ^name predict-no)
- <=WM: (13990: O1985 ^name predict-yes)
- <=WM: (13989: R996 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*45
- -->
- (S1 ^operator O1987 = 0.5681057054973254)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1987 = 0.4318897912345449)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*44
- -->
- (S1 ^operator O1988 = -0.1549421060161498)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1988 = 0.3289463368854268)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1986 = 0.3289463368854268)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*44
- -->
- (S1 ^operator O1986 = -0.1549421060161498)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1985 = 0.4318897912345449)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*45
- -->
- (S1 ^operator O1985 = 0.5681057054973254)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (14010: S1 ^operator O1987)
- 994: O: O1987 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N994 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N993 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14011: I3 ^predict-yes N994)
- <=WM: (13997: N993 ^status complete)
- <=WM: (13996: I3 ^predict-no N993)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (14015: I2 ^dir L)
- =>WM: (14014: I2 ^reward 1)
- =>WM: (14013: I2 ^see 1)
- =>WM: (14012: N994 ^status complete)
- <=WM: (14000: I2 ^dir L)
- <=WM: (13999: I2 ^reward 1)
- <=WM: (13998: I2 ^see 0)
- =>WM: (14016: I2 ^level-1 L1-root)
- <=WM: (14001: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O1988 = 0.6710523655015633)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O1987 = -0.06092862110810815)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R998 ^value 1 +)
- (R1 ^reward R998 +)
- Firing propose*predict-yes
- -->
- (O1989 ^name predict-yes +)
- (S1 ^operator O1989 +)
- Firing propose*predict-no
- -->
- (O1990 ^name predict-no +)
- (S1 ^operator O1990 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1988 = 0.3289463368854268)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1987 = 0.4318897912345449)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O1988 ^name predict-no +)
- (S1 ^operator O1988 +)
- Retracting propose*predict-yes
- -->
- (O1987 ^name predict-yes +)
- (S1 ^operator O1987 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R997 ^value 1 +)
- (R1 ^reward R997 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1988 = 0.3289463368854268)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*44
- -->
- (S1 ^operator O1988 = -0.1549421060161498)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1987 = 0.4318897912345449)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*45
- -->
- (S1 ^operator O1987 = 0.5681057054973254)
- =>WM: (14023: S1 ^operator O1990 +)
- =>WM: (14022: S1 ^operator O1989 +)
- =>WM: (14021: O1990 ^name predict-no)
- =>WM: (14020: O1989 ^name predict-yes)
- =>WM: (14019: R998 ^value 1)
- =>WM: (14018: R1 ^reward R998)
- =>WM: (14017: I3 ^see 1)
- <=WM: (14008: S1 ^operator O1987 +)
- <=WM: (14010: S1 ^operator O1987)
- <=WM: (14009: S1 ^operator O1988 +)
- <=WM: (14003: R1 ^reward R997)
- <=WM: (14002: I3 ^see 0)
- <=WM: (14006: O1988 ^name predict-no)
- <=WM: (14005: O1987 ^name predict-yes)
- <=WM: (14004: R997 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1989 = 0.4318897912345449)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O1989 = -0.06092862110810815)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1990 = 0.3289463368854268)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O1990 = 0.6710523655015633)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1988 = 0.3289463368854268)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O1988 = 0.6710523655015633)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1987 = 0.4318897912345449)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O1987 = -0.06092862110810815)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*5 0.683776 -0.251886 0.43189 -> 0.683777 -0.251886 0.43189(R,m,v=1,0.923077,0.0714286)
- RL update rl*prefer*rvt*predict-yes*H0*5*v1*H1*45 0.31622 0.251886 0.568106 -> 0.31622 0.251886 0.568106(R,m,v=1,1,0)
- =>WM: (14024: S1 ^operator O1990)
- 995: O: O1990 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N995 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N994 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14025: I3 ^predict-no N995)
- <=WM: (14012: N994 ^status complete)
- <=WM: (14011: I3 ^predict-yes N994)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (14029: I2 ^dir L)
- =>WM: (14028: I2 ^reward 1)
- =>WM: (14027: I2 ^see 0)
- =>WM: (14026: N995 ^status complete)
- <=WM: (14015: I2 ^dir L)
- <=WM: (14014: I2 ^reward 1)
- <=WM: (14013: I2 ^see 1)
- =>WM: (14030: I2 ^level-1 L0-root)
- <=WM: (14016: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*33
- -->
- (S1 ^operator O1990 = 0.6710552574919724)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*32
- -->
- (S1 ^operator O1989 = 0.02602968095631553)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R999 ^value 1 +)
- (R1 ^reward R999 +)
- Firing propose*predict-yes
- -->
- (O1991 ^name predict-yes +)
- (S1 ^operator O1991 +)
- Firing propose*predict-no
- -->
- (O1992 ^name predict-no +)
- (S1 ^operator O1992 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1990 = 0.3289463368854268)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1989 = 0.4318904667247643)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O1990 ^name predict-no +)
- (S1 ^operator O1990 +)
- Retracting propose*predict-yes
- -->
- (O1989 ^name predict-yes +)
- (S1 ^operator O1989 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R998 ^value 1 +)
- (R1 ^reward R998 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O1990 = 0.6710523655015633)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1990 = 0.3289463368854268)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O1989 = -0.06092862110810815)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1989 = 0.4318904667247643)
- =>WM: (14037: S1 ^operator O1992 +)
- =>WM: (14036: S1 ^operator O1991 +)
- =>WM: (14035: O1992 ^name predict-no)
- =>WM: (14034: O1991 ^name predict-yes)
- =>WM: (14033: R999 ^value 1)
- =>WM: (14032: R1 ^reward R999)
- =>WM: (14031: I3 ^see 0)
- <=WM: (14022: S1 ^operator O1989 +)
- <=WM: (14023: S1 ^operator O1990 +)
- <=WM: (14024: S1 ^operator O1990)
- <=WM: (14018: R1 ^reward R998)
- <=WM: (14017: I3 ^see 1)
- <=WM: (14021: O1990 ^name predict-no)
- <=WM: (14020: O1989 ^name predict-yes)
- <=WM: (14019: R998 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1991 = 0.4318904667247643)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*32
- -->
- (S1 ^operator O1991 = 0.02602968095631553)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1992 = 0.3289463368854268)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*33
- -->
- (S1 ^operator O1992 = 0.6710552574919724)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1990 = 0.3289463368854268)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*33
- -->
- (S1 ^operator O1990 = 0.6710552574919724)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1989 = 0.4318904667247643)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*32
- -->
- (S1 ^operator O1989 = 0.02602968095631553)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*6 0.565404 -0.236458 0.328946 -> 0.565404 -0.236458 0.328947(R,m,v=1,0.905063,0.086471)
- RL update rl*prefer*rvt*predict-no*H0*6*v1*H1*43 0.434593 0.236459 0.671052 -> 0.434594 0.236459 0.671053(R,m,v=1,1,0)
- =>WM: (14038: S1 ^operator O1992)
- 996: O: O1992 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N996 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N995 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14039: I3 ^predict-no N996)
- <=WM: (14026: N995 ^status complete)
- <=WM: (14025: I3 ^predict-no N995)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (14043: I2 ^dir L)
- =>WM: (14042: I2 ^reward 1)
- =>WM: (14041: I2 ^see 0)
- =>WM: (14040: N996 ^status complete)
- <=WM: (14029: I2 ^dir L)
- <=WM: (14028: I2 ^reward 1)
- <=WM: (14027: I2 ^see 0)
- =>WM: (14044: I2 ^level-1 L0-root)
- <=WM: (14030: I2 ^level-1 L0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*33
- -->
- (S1 ^operator O1992 = 0.6710552574919724)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*32
- -->
- (S1 ^operator O1991 = 0.02602968095631553)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1000 ^value 1 +)
- (R1 ^reward R1000 +)
- Firing propose*predict-yes
- -->
- (O1993 ^name predict-yes +)
- (S1 ^operator O1993 +)
- Firing propose*predict-no
- -->
- (O1994 ^name predict-no +)
- (S1 ^operator O1994 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1992 = 0.3289465315273784)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1991 = 0.4318904667247643)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O1992 ^name predict-no +)
- (S1 ^operator O1992 +)
- Retracting propose*predict-yes
- -->
- (O1991 ^name predict-yes +)
- (S1 ^operator O1991 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R999 ^value 1 +)
- (R1 ^reward R999 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*33
- -->
- (S1 ^operator O1992 = 0.6710552574919724)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1992 = 0.3289465315273784)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*32
- -->
- (S1 ^operator O1991 = 0.02602968095631553)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1991 = 0.4318904667247643)
- =>WM: (14050: S1 ^operator O1994 +)
- =>WM: (14049: S1 ^operator O1993 +)
- =>WM: (14048: O1994 ^name predict-no)
- =>WM: (14047: O1993 ^name predict-yes)
- =>WM: (14046: R1000 ^value 1)
- =>WM: (14045: R1 ^reward R1000)
- <=WM: (14036: S1 ^operator O1991 +)
- <=WM: (14037: S1 ^operator O1992 +)
- <=WM: (14038: S1 ^operator O1992)
- <=WM: (14032: R1 ^reward R999)
- <=WM: (14035: O1992 ^name predict-no)
- <=WM: (14034: O1991 ^name predict-yes)
- <=WM: (14033: R999 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1993 = 0.4318904667247643)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*32
- -->
- (S1 ^operator O1993 = 0.02602968095631553)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1994 = 0.3289465315273784)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*33
- -->
- (S1 ^operator O1994 = 0.6710552574919724)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1992 = 0.3289465315273784)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*33
- -->
- (S1 ^operator O1992 = 0.6710552574919724)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1991 = 0.4318904667247643)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*32
- -->
- (S1 ^operator O1991 = 0.02602968095631553)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*6 0.565404 -0.236458 0.328947 -> 0.565404 -0.236458 0.328946(R,m,v=1,0.90566,0.0859804)
- RL update rl*prefer*rvt*predict-no*H0*6*v1*H1*33 0.434599 0.236456 0.671055 -> 0.434599 0.236456 0.671055(R,m,v=1,1,0)
- =>WM: (14051: S1 ^operator O1994)
- 997: O: O1994 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N997 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N996 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14052: I3 ^predict-no N997)
- <=WM: (14040: N996 ^status complete)
- <=WM: (14039: I3 ^predict-no N996)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (14056: I2 ^dir U)
- =>WM: (14055: I2 ^reward 1)
- =>WM: (14054: I2 ^see 0)
- =>WM: (14053: N997 ^status complete)
- <=WM: (14043: I2 ^dir L)
- <=WM: (14042: I2 ^reward 1)
- <=WM: (14041: I2 ^see 0)
- =>WM: (14057: I2 ^level-1 L0-root)
- <=WM: (14044: I2 ^level-1 L0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1001 ^value 1 +)
- (R1 ^reward R1001 +)
- Firing propose*predict-yes
- -->
- (O1995 ^name predict-yes +)
- (S1 ^operator O1995 +)
- Firing propose*predict-no
- -->
- (O1996 ^name predict-no +)
- (S1 ^operator O1996 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1994 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1993 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O1994 ^name predict-no +)
- (S1 ^operator O1994 +)
- Retracting propose*predict-yes
- -->
- (O1993 ^name predict-yes +)
- (S1 ^operator O1993 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1000 ^value 1 +)
- (R1 ^reward R1000 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*33
- -->
- (S1 ^operator O1994 = 0.6710549891390698)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O1994 = 0.3289462631744757)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*32
- -->
- (S1 ^operator O1993 = 0.02602968095631553)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O1993 = 0.4318904667247643)
- =>WM: (14064: S1 ^operator O1996 +)
- =>WM: (14063: S1 ^operator O1995 +)
- =>WM: (14062: I3 ^dir U)
- =>WM: (14061: O1996 ^name predict-no)
- =>WM: (14060: O1995 ^name predict-yes)
- =>WM: (14059: R1001 ^value 1)
- =>WM: (14058: R1 ^reward R1001)
- <=WM: (14049: S1 ^operator O1993 +)
- <=WM: (14050: S1 ^operator O1994 +)
- <=WM: (14051: S1 ^operator O1994)
- <=WM: (14007: I3 ^dir L)
- <=WM: (14045: R1 ^reward R1000)
- <=WM: (14048: O1994 ^name predict-no)
- <=WM: (14047: O1993 ^name predict-yes)
- <=WM: (14046: R1000 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1995 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1996 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1994 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1993 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*6 0.565404 -0.236458 0.328946 -> 0.565404 -0.236457 0.328946(R,m,v=1,0.90625,0.0854953)
- RL update rl*prefer*rvt*predict-no*H0*6*v1*H1*33 0.434599 0.236456 0.671055 -> 0.434598 0.236457 0.671055(R,m,v=1,1,0)
- =>WM: (14065: S1 ^operator O1996)
- 998: O: O1996 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N998 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N997 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14066: I3 ^predict-no N998)
- <=WM: (14053: N997 ^status complete)
- <=WM: (14052: I3 ^predict-no N997)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (14070: I2 ^dir U)
- =>WM: (14069: I2 ^reward 1)
- =>WM: (14068: I2 ^see 0)
- =>WM: (14067: N998 ^status complete)
- <=WM: (14056: I2 ^dir U)
- <=WM: (14055: I2 ^reward 1)
- <=WM: (14054: I2 ^see 0)
- =>WM: (14071: I2 ^level-1 L0-root)
- <=WM: (14057: I2 ^level-1 L0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1002 ^value 1 +)
- (R1 ^reward R1002 +)
- Firing propose*predict-yes
- -->
- (O1997 ^name predict-yes +)
- (S1 ^operator O1997 +)
- Firing propose*predict-no
- -->
- (O1998 ^name predict-no +)
- (S1 ^operator O1998 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1996 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1995 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O1996 ^name predict-no +)
- (S1 ^operator O1996 +)
- Retracting propose*predict-yes
- -->
- (O1995 ^name predict-yes +)
- (S1 ^operator O1995 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1001 ^value 1 +)
- (R1 ^reward R1001 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1996 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1995 = 0.)
- =>WM: (14077: S1 ^operator O1998 +)
- =>WM: (14076: S1 ^operator O1997 +)
- =>WM: (14075: O1998 ^name predict-no)
- =>WM: (14074: O1997 ^name predict-yes)
- =>WM: (14073: R1002 ^value 1)
- =>WM: (14072: R1 ^reward R1002)
- <=WM: (14063: S1 ^operator O1995 +)
- <=WM: (14064: S1 ^operator O1996 +)
- <=WM: (14065: S1 ^operator O1996)
- <=WM: (14058: R1 ^reward R1001)
- <=WM: (14061: O1996 ^name predict-no)
- <=WM: (14060: O1995 ^name predict-yes)
- <=WM: (14059: R1001 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1997 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1998 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1996 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1995 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (14078: S1 ^operator O1998)
- 999: O: O1998 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N999 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N998 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14079: I3 ^predict-no N999)
- <=WM: (14067: N998 ^status complete)
- <=WM: (14066: I3 ^predict-no N998)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (14083: I2 ^dir R)
- =>WM: (14082: I2 ^reward 1)
- =>WM: (14081: I2 ^see 0)
- =>WM: (14080: N999 ^status complete)
- <=WM: (14070: I2 ^dir U)
- <=WM: (14069: I2 ^reward 1)
- <=WM: (14068: I2 ^see 0)
- =>WM: (14084: I2 ^level-1 L0-root)
- <=WM: (14071: I2 ^level-1 L0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O1998 = -0.07401383653737587)
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O1997 = 0.263174935775242)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1003 ^value 1 +)
- (R1 ^reward R1003 +)
- Firing propose*predict-yes
- -->
- (O1999 ^name predict-yes +)
- (S1 ^operator O1999 +)
- Firing propose*predict-no
- -->
- (O2000 ^name predict-no +)
- (S1 ^operator O2000 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1998 = 0.2572459278910315)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1997 = 0.7368288308771758)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O1998 ^name predict-no +)
- (S1 ^operator O1998 +)
- Retracting propose*predict-yes
- -->
- (O1997 ^name predict-yes +)
- (S1 ^operator O1997 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1002 ^value 1 +)
- (R1 ^reward R1002 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O1998 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1997 = 0.)
- =>WM: (14091: S1 ^operator O2000 +)
- =>WM: (14090: S1 ^operator O1999 +)
- =>WM: (14089: I3 ^dir R)
- =>WM: (14088: O2000 ^name predict-no)
- =>WM: (14087: O1999 ^name predict-yes)
- =>WM: (14086: R1003 ^value 1)
- =>WM: (14085: R1 ^reward R1003)
- <=WM: (14076: S1 ^operator O1997 +)
- <=WM: (14077: S1 ^operator O1998 +)
- <=WM: (14078: S1 ^operator O1998)
- <=WM: (14062: I3 ^dir U)
- <=WM: (14072: R1 ^reward R1002)
- <=WM: (14075: O1998 ^name predict-no)
- <=WM: (14074: O1997 ^name predict-yes)
- <=WM: (14073: R1002 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O1999 = 0.263174935775242)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1999 = 0.7368288308771758)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2000 = -0.07401383653737587)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2000 = 0.2572459278910315)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O1998 = 0.2572459278910315)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O1998 = -0.07401383653737587)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1997 = 0.7368288308771758)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O1997 = 0.263174935775242)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (14092: S1 ^operator O1999)
- 1000: O: O1999 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N1000 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N999 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14093: I3 ^predict-yes N1000)
- <=WM: (14080: N999 ^status complete)
- <=WM: (14079: I3 ^predict-no N999)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (14097: I2 ^dir U)
- =>WM: (14096: I2 ^reward 1)
- =>WM: (14095: I2 ^see 1)
- =>WM: (14094: N1000 ^status complete)
- <=WM: (14083: I2 ^dir R)
- <=WM: (14082: I2 ^reward 1)
- <=WM: (14081: I2 ^see 0)
- =>WM: (14098: I2 ^level-1 R1-root)
- <=WM: (14084: I2 ^level-1 L0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1004 ^value 1 +)
- (R1 ^reward R1004 +)
- Firing propose*predict-yes
- -->
- (O2001 ^name predict-yes +)
- (S1 ^operator O2001 +)
- Firing propose*predict-no
- -->
- (O2002 ^name predict-no +)
- (S1 ^operator O2002 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2000 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1999 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2000 ^name predict-no +)
- (S1 ^operator O2000 +)
- Retracting propose*predict-yes
- -->
- (O1999 ^name predict-yes +)
- (S1 ^operator O1999 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1003 ^value 1 +)
- (R1 ^reward R1003 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2000 = 0.2572459278910315)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2000 = -0.07401383653737587)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O1999 = 0.7368288308771758)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O1999 = 0.263174935775242)
- =>WM: (14106: S1 ^operator O2002 +)
- =>WM: (14105: S1 ^operator O2001 +)
- =>WM: (14104: I3 ^dir U)
- =>WM: (14103: O2002 ^name predict-no)
- =>WM: (14102: O2001 ^name predict-yes)
- =>WM: (14101: R1004 ^value 1)
- =>WM: (14100: R1 ^reward R1004)
- =>WM: (14099: I3 ^see 1)
- <=WM: (14090: S1 ^operator O1999 +)
- <=WM: (14092: S1 ^operator O1999)
- <=WM: (14091: S1 ^operator O2000 +)
- <=WM: (14089: I3 ^dir R)
- <=WM: (14085: R1 ^reward R1003)
- <=WM: (14031: I3 ^see 0)
- <=WM: (14088: O2000 ^name predict-no)
- <=WM: (14087: O1999 ^name predict-yes)
- <=WM: (14086: R1003 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2001 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2002 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2000 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O1999 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*3 0.748236 -0.0114074 0.736829 -> 0.748236 -0.0114079 0.736828(R,m,v=1,0.89697,0.0929786)
- RL update rl*prefer*rvt*predict-yes*H0*3*v1*H1*35 0.251765 0.0114102 0.263175 -> 0.251765 0.0114098 0.263174(R,m,v=1,1,0)
- =>WM: (14107: S1 ^operator O2002)
- 1001: O: O2002 (predict-no)
- --- END Decision Phase ---
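Editorial note: the `RL update` lines in this trace print three numbers before and after the arrow, and throughout the log the third number equals the sum of the first two (e.g. 0.748236 + (-0.0114074) = 0.736829 just above), i.e. the rule's total value split into two components. A minimal sketch for checking that invariant, assuming only the printed format (the helper name `parse_rl_update` is mine, not part of Soar):

```python
import re

# One "RL update" line copied verbatim from the trace above.
LINE = ("RL update rl*prefer*rvt*predict-yes*H0*3 "
        "0.748236 -0.0114074 0.736829 -> 0.748236 -0.0114079 0.736828"
        "(R,m,v=1,0.89697,0.0929786)")

def parse_rl_update(line):
    """Split a trace line into (rule, before-triple, after-triple)."""
    m = re.search(r"RL update (\S+) ([-\d.e]+) ([-\d.e]+) ([-\d.e]+) -> "
                  r"([-\d.e]+) ([-\d.e]+) ([-\d.e]+)", line)
    rule = m.group(1)
    before = tuple(float(m.group(i)) for i in (2, 3, 4))
    after = tuple(float(m.group(i)) for i in (5, 6, 7))
    return rule, before, after

rule, before, after = parse_rl_update(LINE)
# The printed totals agree with the component sums to print precision.
assert abs(before[0] + before[1] - before[2]) < 1e-5
assert abs(after[0] + after[1] - after[2]) < 1e-5
```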
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1001 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N1000 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14108: I3 ^predict-no N1001)
- <=WM: (14094: N1000 ^status complete)
- <=WM: (14093: I3 ^predict-yes N1000)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (14112: I2 ^dir U)
- =>WM: (14111: I2 ^reward 1)
- =>WM: (14110: I2 ^see 0)
- =>WM: (14109: N1001 ^status complete)
- <=WM: (14097: I2 ^dir U)
- <=WM: (14096: I2 ^reward 1)
- <=WM: (14095: I2 ^see 1)
- =>WM: (14113: I2 ^level-1 R1-root)
- <=WM: (14098: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1005 ^value 1 +)
- (R1 ^reward R1005 +)
- Firing propose*predict-yes
- -->
- (O2003 ^name predict-yes +)
- (S1 ^operator O2003 +)
- Firing propose*predict-no
- -->
- (O2004 ^name predict-no +)
- (S1 ^operator O2004 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2002 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2001 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O2002 ^name predict-no +)
- (S1 ^operator O2002 +)
- Retracting propose*predict-yes
- -->
- (O2001 ^name predict-yes +)
- (S1 ^operator O2001 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1004 ^value 1 +)
- (R1 ^reward R1004 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2002 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2001 = 0.)
- =>WM: (14120: S1 ^operator O2004 +)
- =>WM: (14119: S1 ^operator O2003 +)
- =>WM: (14118: O2004 ^name predict-no)
- =>WM: (14117: O2003 ^name predict-yes)
- =>WM: (14116: R1005 ^value 1)
- =>WM: (14115: R1 ^reward R1005)
- =>WM: (14114: I3 ^see 0)
- <=WM: (14105: S1 ^operator O2001 +)
- <=WM: (14106: S1 ^operator O2002 +)
- <=WM: (14107: S1 ^operator O2002)
- <=WM: (14100: R1 ^reward R1004)
- <=WM: (14099: I3 ^see 1)
- <=WM: (14103: O2002 ^name predict-no)
- <=WM: (14102: O2001 ^name predict-yes)
- <=WM: (14101: R1004 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2003 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2004 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2002 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2001 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (14121: S1 ^operator O2004)
- 1002: O: O2004 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1002 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1001 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14122: I3 ^predict-no N1002)
- <=WM: (14109: N1001 ^status complete)
- <=WM: (14108: I3 ^predict-no N1001)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (14126: I2 ^dir U)
- =>WM: (14125: I2 ^reward 1)
- =>WM: (14124: I2 ^see 0)
- =>WM: (14123: N1002 ^status complete)
- <=WM: (14112: I2 ^dir U)
- <=WM: (14111: I2 ^reward 1)
- <=WM: (14110: I2 ^see 0)
- =>WM: (14127: I2 ^level-1 R1-root)
- <=WM: (14113: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1006 ^value 1 +)
- (R1 ^reward R1006 +)
- Firing propose*predict-yes
- -->
- (O2005 ^name predict-yes +)
- (S1 ^operator O2005 +)
- Firing propose*predict-no
- -->
- (O2006 ^name predict-no +)
- (S1 ^operator O2006 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2004 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2003 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2004 ^name predict-no +)
- (S1 ^operator O2004 +)
- Retracting propose*predict-yes
- -->
- (O2003 ^name predict-yes +)
- (S1 ^operator O2003 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1005 ^value 1 +)
- (R1 ^reward R1005 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2004 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2003 = 0.)
- =>WM: (14133: S1 ^operator O2006 +)
- =>WM: (14132: S1 ^operator O2005 +)
- =>WM: (14131: O2006 ^name predict-no)
- =>WM: (14130: O2005 ^name predict-yes)
- =>WM: (14129: R1006 ^value 1)
- =>WM: (14128: R1 ^reward R1006)
- <=WM: (14119: S1 ^operator O2003 +)
- <=WM: (14120: S1 ^operator O2004 +)
- <=WM: (14121: S1 ^operator O2004)
- <=WM: (14115: R1 ^reward R1005)
- <=WM: (14118: O2004 ^name predict-no)
- <=WM: (14117: O2003 ^name predict-yes)
- <=WM: (14116: R1005 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2005 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2006 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2004 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2003 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (14134: S1 ^operator O2006)
- 1003: O: O2006 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1003 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1002 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14135: I3 ^predict-no N1003)
- <=WM: (14123: N1002 ^status complete)
- <=WM: (14122: I3 ^predict-no N1002)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (14139: I2 ^dir U)
- =>WM: (14138: I2 ^reward 1)
- =>WM: (14137: I2 ^see 0)
- =>WM: (14136: N1003 ^status complete)
- <=WM: (14126: I2 ^dir U)
- <=WM: (14125: I2 ^reward 1)
- <=WM: (14124: I2 ^see 0)
- =>WM: (14140: I2 ^level-1 R1-root)
- <=WM: (14127: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1007 ^value 1 +)
- (R1 ^reward R1007 +)
- Firing propose*predict-yes
- -->
- (O2007 ^name predict-yes +)
- (S1 ^operator O2007 +)
- Firing propose*predict-no
- -->
- (O2008 ^name predict-no +)
- (S1 ^operator O2008 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2006 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2005 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2006 ^name predict-no +)
- (S1 ^operator O2006 +)
- Retracting propose*predict-yes
- -->
- (O2005 ^name predict-yes +)
- (S1 ^operator O2005 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1006 ^value 1 +)
- (R1 ^reward R1006 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2006 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2005 = 0.)
- =>WM: (14146: S1 ^operator O2008 +)
- =>WM: (14145: S1 ^operator O2007 +)
- =>WM: (14144: O2008 ^name predict-no)
- =>WM: (14143: O2007 ^name predict-yes)
- =>WM: (14142: R1007 ^value 1)
- =>WM: (14141: R1 ^reward R1007)
- <=WM: (14132: S1 ^operator O2005 +)
- <=WM: (14133: S1 ^operator O2006 +)
- <=WM: (14134: S1 ^operator O2006)
- <=WM: (14128: R1 ^reward R1006)
- <=WM: (14131: O2006 ^name predict-no)
- <=WM: (14130: O2005 ^name predict-yes)
- <=WM: (14129: R1006 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2007 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2008 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2006 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2005 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (14147: S1 ^operator O2008)
- 1004: O: O2008 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1004 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1003 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14148: I3 ^predict-no N1004)
- <=WM: (14136: N1003 ^status complete)
- <=WM: (14135: I3 ^predict-no N1003)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (14152: I2 ^dir L)
- =>WM: (14151: I2 ^reward 1)
- =>WM: (14150: I2 ^see 0)
- =>WM: (14149: N1004 ^status complete)
- <=WM: (14139: I2 ^dir U)
- <=WM: (14138: I2 ^reward 1)
- <=WM: (14137: I2 ^see 0)
- =>WM: (14153: I2 ^level-1 R1-root)
- <=WM: (14140: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*45
- -->
- (S1 ^operator O2007 = 0.5681063809875448)
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*44
- -->
- (S1 ^operator O2008 = -0.1549421060161498)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1008 ^value 1 +)
- (R1 ^reward R1008 +)
- Firing propose*predict-yes
- -->
- (O2009 ^name predict-yes +)
- (S1 ^operator O2009 +)
- Firing propose*predict-no
- -->
- (O2010 ^name predict-no +)
- (S1 ^operator O2010 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2008 = 0.3289460753274439)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2007 = 0.4318904667247643)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2008 ^name predict-no +)
- (S1 ^operator O2008 +)
- Retracting propose*predict-yes
- -->
- (O2007 ^name predict-yes +)
- (S1 ^operator O2007 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1007 ^value 1 +)
- (R1 ^reward R1007 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2008 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2007 = 0.)
- =>WM: (14160: S1 ^operator O2010 +)
- =>WM: (14159: S1 ^operator O2009 +)
- =>WM: (14158: I3 ^dir L)
- =>WM: (14157: O2010 ^name predict-no)
- =>WM: (14156: O2009 ^name predict-yes)
- =>WM: (14155: R1008 ^value 1)
- =>WM: (14154: R1 ^reward R1008)
- <=WM: (14145: S1 ^operator O2007 +)
- <=WM: (14146: S1 ^operator O2008 +)
- <=WM: (14147: S1 ^operator O2008)
- <=WM: (14104: I3 ^dir U)
- <=WM: (14141: R1 ^reward R1007)
- <=WM: (14144: O2008 ^name predict-no)
- <=WM: (14143: O2007 ^name predict-yes)
- <=WM: (14142: R1007 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*45
- -->
- (S1 ^operator O2009 = 0.5681063809875448)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2009 = 0.4318904667247643)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*44
- -->
- (S1 ^operator O2010 = -0.1549421060161498)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2010 = 0.3289460753274439)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2008 = 0.3289460753274439)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*44
- -->
- (S1 ^operator O2008 = -0.1549421060161498)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2007 = 0.4318904667247643)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*45
- -->
- (S1 ^operator O2007 = 0.5681063809875448)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (14161: S1 ^operator O2009)
- 1005: O: O2009 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N1005 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1004 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14162: I3 ^predict-yes N1005)
- <=WM: (14149: N1004 ^status complete)
- <=WM: (14148: I3 ^predict-no N1004)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (14166: I2 ^dir R)
- =>WM: (14165: I2 ^reward 1)
- =>WM: (14164: I2 ^see 1)
- =>WM: (14163: N1005 ^status complete)
- <=WM: (14152: I2 ^dir L)
- <=WM: (14151: I2 ^reward 1)
- <=WM: (14150: I2 ^see 0)
- =>WM: (14167: I2 ^level-1 L1-root)
- <=WM: (14153: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*39
- -->
- (S1 ^operator O2010 = -0.1377248055371832)
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*40
- -->
- (S1 ^operator O2009 = 0.2631690211593038)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1009 ^value 1 +)
- (R1 ^reward R1009 +)
- Firing propose*predict-yes
- -->
- (O2011 ^name predict-yes +)
- (S1 ^operator O2011 +)
- Firing propose*predict-no
- -->
- (O2012 ^name predict-no +)
- (S1 ^operator O2012 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2010 = 0.2572459278910315)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2009 = 0.7368282658793132)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2010 ^name predict-no +)
- (S1 ^operator O2010 +)
- Retracting propose*predict-yes
- -->
- (O2009 ^name predict-yes +)
- (S1 ^operator O2009 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1008 ^value 1 +)
- (R1 ^reward R1008 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2010 = 0.3289460753274439)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*44
- -->
- (S1 ^operator O2010 = -0.1549421060161498)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2009 = 0.4318904667247643)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*45
- -->
- (S1 ^operator O2009 = 0.5681063809875448)
- =>WM: (14175: S1 ^operator O2012 +)
- =>WM: (14174: S1 ^operator O2011 +)
- =>WM: (14173: I3 ^dir R)
- =>WM: (14172: O2012 ^name predict-no)
- =>WM: (14171: O2011 ^name predict-yes)
- =>WM: (14170: R1009 ^value 1)
- =>WM: (14169: R1 ^reward R1009)
- =>WM: (14168: I3 ^see 1)
- <=WM: (14159: S1 ^operator O2009 +)
- <=WM: (14161: S1 ^operator O2009)
- <=WM: (14160: S1 ^operator O2010 +)
- <=WM: (14158: I3 ^dir L)
- <=WM: (14154: R1 ^reward R1008)
- <=WM: (14114: I3 ^see 0)
- <=WM: (14157: O2010 ^name predict-no)
- <=WM: (14156: O2009 ^name predict-yes)
- <=WM: (14155: R1008 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2011 = 0.7368282658793132)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*40
- -->
- (S1 ^operator O2011 = 0.2631690211593038)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2012 = 0.2572459278910315)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*39
- -->
- (S1 ^operator O2012 = -0.1377248055371832)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2010 = 0.2572459278910315)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*39
- -->
- (S1 ^operator O2010 = -0.1377248055371832)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2009 = 0.7368282658793132)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*40
- -->
- (S1 ^operator O2009 = 0.2631690211593038)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*5 0.683777 -0.251886 0.43189 -> 0.683777 -0.251886 0.431891(R,m,v=1,0.923529,0.0710407)
- RL update rl*prefer*rvt*predict-yes*H0*5*v1*H1*45 0.31622 0.251886 0.568106 -> 0.316221 0.251886 0.568107(R,m,v=1,1,0)
- =>WM: (14176: S1 ^operator O2011)
- 1006: O: O2011 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N1006 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N1005 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14177: I3 ^predict-yes N1006)
- <=WM: (14163: N1005 ^status complete)
- <=WM: (14162: I3 ^predict-yes N1005)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (14181: I2 ^dir R)
- =>WM: (14180: I2 ^reward 1)
- =>WM: (14179: I2 ^see 1)
- =>WM: (14178: N1006 ^status complete)
- <=WM: (14166: I2 ^dir R)
- <=WM: (14165: I2 ^reward 1)
- <=WM: (14164: I2 ^see 1)
- =>WM: (14182: I2 ^level-1 R1-root)
- <=WM: (14167: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2011 = -0.3011268063455669)
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2012 = 0.7427525112697247)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1010 ^value 1 +)
- (R1 ^reward R1010 +)
- Firing propose*predict-yes
- -->
- (O2013 ^name predict-yes +)
- (S1 ^operator O2013 +)
- Firing propose*predict-no
- -->
- (O2014 ^name predict-no +)
- (S1 ^operator O2014 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2012 = 0.2572459278910315)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2011 = 0.7368282658793132)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O2012 ^name predict-no +)
- (S1 ^operator O2012 +)
- Retracting propose*predict-yes
- -->
- (O2011 ^name predict-yes +)
- (S1 ^operator O2011 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1009 ^value 1 +)
- (R1 ^reward R1009 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*39
- -->
- (S1 ^operator O2012 = -0.1377248055371832)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2012 = 0.2572459278910315)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*40
- -->
- (S1 ^operator O2011 = 0.2631690211593038)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2011 = 0.7368282658793132)
- =>WM: (14188: S1 ^operator O2014 +)
- =>WM: (14187: S1 ^operator O2013 +)
- =>WM: (14186: O2014 ^name predict-no)
- =>WM: (14185: O2013 ^name predict-yes)
- =>WM: (14184: R1010 ^value 1)
- =>WM: (14183: R1 ^reward R1010)
- <=WM: (14174: S1 ^operator O2011 +)
- <=WM: (14176: S1 ^operator O2011)
- <=WM: (14175: S1 ^operator O2012 +)
- <=WM: (14169: R1 ^reward R1009)
- <=WM: (14172: O2012 ^name predict-no)
- <=WM: (14171: O2011 ^name predict-yes)
- <=WM: (14170: R1009 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2013 = 0.7368282658793132)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2013 = -0.3011268063455669)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2014 = 0.2572459278910315)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2014 = 0.7427525112697247)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2012 = 0.2572459278910315)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2012 = 0.7427525112697247)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2011 = 0.7368282658793132)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2011 = -0.3011268063455669)
- --- END Proposal Phase ---
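Editorial note: several RL rules can fire for the same operator in one proposal phase, and Soar sums their numeric-indifferent contributions to get the operator's expected value. Using the firing values printed just above (copied verbatim), the sums show why predict-no wins the following decision (1007: O: O2014). A small sketch of that bookkeeping (the variable names are mine):

```python
# Numeric-indifferent contributions from the RL rule firings above,
# as (operator, value) pairs copied from the trace.
contributions = [
    ("O2013", 0.7368282658793132),   # rl*prefer*rvt*predict-yes*H0*3
    ("O2013", -0.3011268063455669),  # rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
    ("O2014", 0.2572459278910315),   # rl*prefer*rvt*predict-no*H0*4
    ("O2014", 0.7427525112697247),   # rl*prefer*rvt*predict-no*H0*4*v1*H1*36
]

# Soar's expected value per operator is the sum of its contributions.
totals = {}
for op, value in contributions:
    totals[op] = totals.get(op, 0.0) + value
# predict-no (O2014) totals ~1.0 vs ~0.44 for predict-yes (O2013),
# so the greedy choice at this step is predict-no.
```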
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*3 0.748236 -0.0114079 0.736828 -> 0.748236 -0.0114076 0.736829(R,m,v=1,0.89759,0.092479)
- RL update rl*prefer*rvt*predict-yes*H0*3*v1*H1*40 0.251763 0.0114059 0.263169 -> 0.251763 0.0114062 0.263169(R,m,v=1,1,0)
- =>WM: (14189: S1 ^operator O2014)
- 1007: O: O2014 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1007 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N1006 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14190: I3 ^predict-no N1007)
- <=WM: (14178: N1006 ^status complete)
- <=WM: (14177: I3 ^predict-yes N1006)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
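Editorial note: taken together, the `ENV:` lines up to this point pin down a partial transition/observation table for the Flip environment, and the `prediction correct?` flag is True exactly when predict-yes coincides with a next observation of 1. A sketch of only what this stretch of the log shows (`OBSERVED` and `prediction_correct` are my names, not from flip_predict.soar):

```python
# (state, direction) -> (next_state, see), as printed by the ENV lines.
# This is just the pairs observed here, not the full environment spec.
OBSERVED = {
    ("State-A", "U"): ("State-A", 0),
    ("State-A", "R"): ("State-B", 1),
    ("State-B", "U"): ("State-B", 0),
    ("State-B", "L"): ("State-A", 1),
    ("State-B", "R"): ("State-B", 0),
}

def prediction_correct(action, state, direction):
    """Reproduce the log's 'prediction correct?' flag: predict-yes is
    right exactly when the next observation ('see') is 1."""
    _, see = OBSERVED[(state, direction)]
    return (action == "predict-yes") == (see == 1)

# Matches e.g. "predict-yes for direction R in state State-A ...
# (State-B, 1, True)" earlier in the trace.
assert prediction_correct("predict-yes", "State-A", "R")
```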
- --- Input Phase ---
- =>WM: (14194: I2 ^dir U)
- =>WM: (14193: I2 ^reward 1)
- =>WM: (14192: I2 ^see 0)
- =>WM: (14191: N1007 ^status complete)
- <=WM: (14181: I2 ^dir R)
- <=WM: (14180: I2 ^reward 1)
- <=WM: (14179: I2 ^see 1)
- =>WM: (14195: I2 ^level-1 R0-root)
- <=WM: (14182: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1011 ^value 1 +)
- (R1 ^reward R1011 +)
- Firing propose*predict-yes
- -->
- (O2015 ^name predict-yes +)
- (S1 ^operator O2015 +)
- Firing propose*predict-no
- -->
- (O2016 ^name predict-no +)
- (S1 ^operator O2016 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2014 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2013 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O2014 ^name predict-no +)
- (S1 ^operator O2014 +)
- Retracting propose*predict-yes
- -->
- (O2013 ^name predict-yes +)
- (S1 ^operator O2013 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1010 ^value 1 +)
- (R1 ^reward R1010 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2014 = 0.7427525112697247)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2014 = 0.2572459278910315)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2013 = -0.3011268063455669)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2013 = 0.7368286728235206)
- =>WM: (14203: S1 ^operator O2016 +)
- =>WM: (14202: S1 ^operator O2015 +)
- =>WM: (14201: I3 ^dir U)
- =>WM: (14200: O2016 ^name predict-no)
- =>WM: (14199: O2015 ^name predict-yes)
- =>WM: (14198: R1011 ^value 1)
- =>WM: (14197: R1 ^reward R1011)
- =>WM: (14196: I3 ^see 0)
- <=WM: (14187: S1 ^operator O2013 +)
- <=WM: (14188: S1 ^operator O2014 +)
- <=WM: (14189: S1 ^operator O2014)
- <=WM: (14173: I3 ^dir R)
- <=WM: (14183: R1 ^reward R1010)
- <=WM: (14168: I3 ^see 1)
- <=WM: (14186: O2014 ^name predict-no)
- <=WM: (14185: O2013 ^name predict-yes)
- <=WM: (14184: R1010 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2015 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2016 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2014 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2013 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*4 0.586136 -0.32889 0.257246 -> 0.586136 -0.32889 0.257246(R,m,v=1,0.860465,0.120767)
- RL update rl*prefer*rvt*predict-no*H0*4*v1*H1*36 0.413863 0.32889 0.742753 -> 0.413863 0.32889 0.742753(R,m,v=1,1,0)
- =>WM: (14204: S1 ^operator O2016)
- 1008: O: O2016 (predict-no)
- --- END Decision Phase ---
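The `RL update` lines in each decision phase show Soar adjusting the values of the reinforcement-learning rules that fired for the chosen operator: the rule name, three numbers before the update, the same three after the arrow, and a parenthesized tail. A minimal parsing sketch — the meanings of the individual fields are an assumption; the regex only captures what the log literally prints:

```python
import re

# Matches trace lines of the form:
#   RL update <rule> v1 v2 v3 -> v1' v2' v3'(R,m,v=...)
RL_LINE = re.compile(
    r"RL update (?P<rule>\S+)\s+"
    r"(?P<before>(?:-?\d+(?:\.\d+)?\s+){2}-?\d+(?:\.\d+)?)\s*->\s*"
    r"(?P<after>(?:-?\d+(?:\.\d+)?\s*){3})"
    r"\(R,m,v=(?P<tail>[^)]*)\)"
)

def parse_rl_update(line):
    """Return the rule name and before/after value triples, or None."""
    m = RL_LINE.search(line)
    if not m:
        return None
    return {
        "rule": m.group("rule"),
        "before": [float(x) for x in m.group("before").split()],
        "after": [float(x) for x in m.group("after").split()],
        "tail": m.group("tail"),
    }

sample = ("RL update rl*prefer*rvt*predict-no*H0*4 0.586136 -0.32889 0.257246 "
          "-> 0.586136 -0.32889 0.257246(R,m,v=1,0.860465,0.120767)")
print(parse_rl_update(sample))
```

Grepping a whole run through this gives a per-rule history of value drift across the 2500 decisions.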
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1008 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1007 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14205: I3 ^predict-no N1008)
- <=WM: (14191: N1007 ^status complete)
- <=WM: (14190: I3 ^predict-no N1007)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (14209: I2 ^dir L)
- =>WM: (14208: I2 ^reward 1)
- =>WM: (14207: I2 ^see 0)
- =>WM: (14206: N1008 ^status complete)
- <=WM: (14194: I2 ^dir U)
- <=WM: (14193: I2 ^reward 1)
- <=WM: (14192: I2 ^see 0)
- =>WM: (14210: I2 ^level-1 R0-root)
- <=WM: (14195: I2 ^level-1 R0-root)
- --- END Input Phase ---
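Throughout the trace, each `=>WM:` line is a working-memory element being added and each `<=WM:` line one being removed, with the leading integer a timetag. A small sketch for counting WME churn in a slice of trace, assuming only that those two markers are stable:

```python
# Count working-memory additions (=>WM:) and removals (<=WM:) in a
# slice of trace lines. Only the two markers shown in the log are assumed.
def wm_churn(lines):
    adds = sum(1 for line in lines if "=>WM:" in line)
    rems = sum(1 for line in lines if "<=WM:" in line)
    return adds, rems

phase = [
    "- =>WM: (14194: I2 ^dir U)",
    "- =>WM: (14193: I2 ^reward 1)",
    "- <=WM: (14181: I2 ^dir R)",
]
print(wm_churn(phase))  # (2, 1)
```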
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2016 = 0.04178081990804111)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2015 = 0.5681113503720048)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1012 ^value 1 +)
- (R1 ^reward R1012 +)
- Firing propose*predict-yes
- -->
- (O2017 ^name predict-yes +)
- (S1 ^operator O2017 +)
- Firing propose*predict-no
- -->
- (O2018 ^name predict-no +)
- (S1 ^operator O2018 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2016 = 0.3289460753274439)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2015 = 0.4318909395679179)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2016 ^name predict-no +)
- (S1 ^operator O2016 +)
- Retracting propose*predict-yes
- -->
- (O2015 ^name predict-yes +)
- (S1 ^operator O2015 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1011 ^value 1 +)
- (R1 ^reward R1011 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2016 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2015 = 0.)
- =>WM: (14217: S1 ^operator O2018 +)
- =>WM: (14216: S1 ^operator O2017 +)
- =>WM: (14215: I3 ^dir L)
- =>WM: (14214: O2018 ^name predict-no)
- =>WM: (14213: O2017 ^name predict-yes)
- =>WM: (14212: R1012 ^value 1)
- =>WM: (14211: R1 ^reward R1012)
- <=WM: (14202: S1 ^operator O2015 +)
- <=WM: (14203: S1 ^operator O2016 +)
- <=WM: (14204: S1 ^operator O2016)
- <=WM: (14201: I3 ^dir U)
- <=WM: (14197: R1 ^reward R1011)
- <=WM: (14200: O2016 ^name predict-no)
- <=WM: (14199: O2015 ^name predict-yes)
- <=WM: (14198: R1011 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2017 = 0.5681113503720048)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2017 = 0.4318909395679179)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2018 = 0.04178081990804111)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2018 = 0.3289460753274439)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2016 = 0.3289460753274439)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2016 = 0.04178081990804111)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2015 = 0.4318909395679179)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2015 = 0.5681113503720048)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (14218: S1 ^operator O2017)
- 1009: O: O2017 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N1009 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1008 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14219: I3 ^predict-yes N1009)
- <=WM: (14206: N1008 ^status complete)
- <=WM: (14205: I3 ^predict-no N1008)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
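The environment prints `predict error 0` after a correct prediction and `predict error 1` after a wrong one, so a run's overall error rate can be recovered from those lines alone. A minimal sketch (the sample lines copy this trace's format):

```python
# Fraction of wrong predictions, from the "predict error <n>" lines the
# environment emits after each output phase (0 = correct, 1 = wrong).
def error_rate(lines):
    errs = [int(line.rsplit(None, 1)[1])
            for line in lines if "predict error" in line]
    return sum(errs) / len(errs) if errs else 0.0

log = [
    "- predict error 1",
    "- predict error 0",
    "- predict error 0",
]
print(error_rate(log))  # 1 wrong out of 3 decisions
```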
- --- Input Phase ---
- =>WM: (14223: I2 ^dir L)
- =>WM: (14222: I2 ^reward 1)
- =>WM: (14221: I2 ^see 1)
- =>WM: (14220: N1009 ^status complete)
- <=WM: (14209: I2 ^dir L)
- <=WM: (14208: I2 ^reward 1)
- <=WM: (14207: I2 ^see 0)
- =>WM: (14224: I2 ^level-1 L1-root)
- <=WM: (14210: I2 ^level-1 R0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2018 = 0.6710525601435148)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2017 = -0.06092862110810815)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1013 ^value 1 +)
- (R1 ^reward R1013 +)
- Firing propose*predict-yes
- -->
- (O2019 ^name predict-yes +)
- (S1 ^operator O2019 +)
- Firing propose*predict-no
- -->
- (O2020 ^name predict-no +)
- (S1 ^operator O2020 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2018 = 0.3289460753274439)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2017 = 0.4318909395679179)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2018 ^name predict-no +)
- (S1 ^operator O2018 +)
- Retracting propose*predict-yes
- -->
- (O2017 ^name predict-yes +)
- (S1 ^operator O2017 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1012 ^value 1 +)
- (R1 ^reward R1012 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2018 = 0.3289460753274439)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2018 = 0.04178081990804111)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2017 = 0.4318909395679179)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2017 = 0.5681113503720048)
- =>WM: (14231: S1 ^operator O2020 +)
- =>WM: (14230: S1 ^operator O2019 +)
- =>WM: (14229: O2020 ^name predict-no)
- =>WM: (14228: O2019 ^name predict-yes)
- =>WM: (14227: R1013 ^value 1)
- =>WM: (14226: R1 ^reward R1013)
- =>WM: (14225: I3 ^see 1)
- <=WM: (14216: S1 ^operator O2017 +)
- <=WM: (14218: S1 ^operator O2017)
- <=WM: (14217: S1 ^operator O2018 +)
- <=WM: (14211: R1 ^reward R1012)
- <=WM: (14196: I3 ^see 0)
- <=WM: (14214: O2018 ^name predict-no)
- <=WM: (14213: O2017 ^name predict-yes)
- <=WM: (14212: R1012 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2019 = 0.4318909395679179)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2019 = -0.06092862110810815)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2020 = 0.3289460753274439)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2020 = 0.6710525601435148)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2018 = 0.3289460753274439)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2018 = 0.6710525601435148)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2017 = 0.4318909395679179)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2017 = -0.06092862110810815)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*5 0.683777 -0.251886 0.431891 -> 0.683777 -0.251886 0.431891(R,m,v=1,0.923977,0.070657)
- RL update rl*prefer*rvt*predict-yes*H0*5*v1*H1*30 0.316225 0.251886 0.568111 -> 0.316225 0.251886 0.568111(R,m,v=1,1,0)
- =>WM: (14232: S1 ^operator O2020)
- 1010: O: O2020 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1010 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N1009 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14233: I3 ^predict-no N1010)
- <=WM: (14220: N1009 ^status complete)
- <=WM: (14219: I3 ^predict-yes N1009)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (14237: I2 ^dir R)
- =>WM: (14236: I2 ^reward 1)
- =>WM: (14235: I2 ^see 0)
- =>WM: (14234: N1010 ^status complete)
- <=WM: (14223: I2 ^dir L)
- <=WM: (14222: I2 ^reward 1)
- <=WM: (14221: I2 ^see 1)
- =>WM: (14238: I2 ^level-1 L0-root)
- <=WM: (14224: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2020 = -0.07401383653737587)
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2019 = 0.2631743707773793)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1014 ^value 1 +)
- (R1 ^reward R1014 +)
- Firing propose*predict-yes
- -->
- (O2021 ^name predict-yes +)
- (S1 ^operator O2021 +)
- Firing propose*predict-no
- -->
- (O2022 ^name predict-no +)
- (S1 ^operator O2022 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2020 = 0.2572461620169181)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2019 = 0.7368286728235206)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O2020 ^name predict-no +)
- (S1 ^operator O2020 +)
- Retracting propose*predict-yes
- -->
- (O2019 ^name predict-yes +)
- (S1 ^operator O2019 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1013 ^value 1 +)
- (R1 ^reward R1013 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2020 = 0.6710525601435148)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2020 = 0.3289460753274439)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2019 = -0.06092862110810815)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2019 = 0.4318905960769295)
- =>WM: (14246: S1 ^operator O2022 +)
- =>WM: (14245: S1 ^operator O2021 +)
- =>WM: (14244: I3 ^dir R)
- =>WM: (14243: O2022 ^name predict-no)
- =>WM: (14242: O2021 ^name predict-yes)
- =>WM: (14241: R1014 ^value 1)
- =>WM: (14240: R1 ^reward R1014)
- =>WM: (14239: I3 ^see 0)
- <=WM: (14230: S1 ^operator O2019 +)
- <=WM: (14231: S1 ^operator O2020 +)
- <=WM: (14232: S1 ^operator O2020)
- <=WM: (14215: I3 ^dir L)
- <=WM: (14226: R1 ^reward R1013)
- <=WM: (14225: I3 ^see 1)
- <=WM: (14229: O2020 ^name predict-no)
- <=WM: (14228: O2019 ^name predict-yes)
- <=WM: (14227: R1013 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2021 = 0.7368286728235206)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2021 = 0.2631743707773793)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2022 = 0.2572461620169181)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2022 = -0.07401383653737587)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2020 = 0.2572461620169181)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2020 = -0.07401383653737587)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2019 = 0.7368286728235206)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2019 = 0.2631743707773793)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*6 0.565404 -0.236457 0.328946 -> 0.565404 -0.236458 0.328946(R,m,v=1,0.906832,0.0850155)
- RL update rl*prefer*rvt*predict-no*H0*6*v1*H1*43 0.434594 0.236459 0.671053 -> 0.434594 0.236459 0.671053(R,m,v=1,1,0)
- =>WM: (14247: S1 ^operator O2021)
- 1011: O: O2021 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N1011 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1010 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14248: I3 ^predict-yes N1011)
- <=WM: (14234: N1010 ^status complete)
- <=WM: (14233: I3 ^predict-no N1010)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (14252: I2 ^dir L)
- =>WM: (14251: I2 ^reward 1)
- =>WM: (14250: I2 ^see 1)
- =>WM: (14249: N1011 ^status complete)
- <=WM: (14237: I2 ^dir R)
- <=WM: (14236: I2 ^reward 1)
- <=WM: (14235: I2 ^see 0)
- =>WM: (14253: I2 ^level-1 R1-root)
- <=WM: (14238: I2 ^level-1 L0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*45
- -->
- (S1 ^operator O2021 = 0.5681068538306986)
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*44
- -->
- (S1 ^operator O2022 = -0.1549421060161498)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1015 ^value 1 +)
- (R1 ^reward R1015 +)
- Firing propose*predict-yes
- -->
- (O2023 ^name predict-yes +)
- (S1 ^operator O2023 +)
- Firing propose*predict-no
- -->
- (O2024 ^name predict-no +)
- (S1 ^operator O2024 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2022 = 0.3289462800068002)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2021 = 0.4318905960769295)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2022 ^name predict-no +)
- (S1 ^operator O2022 +)
- Retracting propose*predict-yes
- -->
- (O2021 ^name predict-yes +)
- (S1 ^operator O2021 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1014 ^value 1 +)
- (R1 ^reward R1014 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2022 = -0.07401383653737587)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2022 = 0.2572461620169181)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2021 = 0.2631743707773793)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2021 = 0.7368286728235206)
- =>WM: (14261: S1 ^operator O2024 +)
- =>WM: (14260: S1 ^operator O2023 +)
- =>WM: (14259: I3 ^dir L)
- =>WM: (14258: O2024 ^name predict-no)
- =>WM: (14257: O2023 ^name predict-yes)
- =>WM: (14256: R1015 ^value 1)
- =>WM: (14255: R1 ^reward R1015)
- =>WM: (14254: I3 ^see 1)
- <=WM: (14245: S1 ^operator O2021 +)
- <=WM: (14247: S1 ^operator O2021)
- <=WM: (14246: S1 ^operator O2022 +)
- <=WM: (14244: I3 ^dir R)
- <=WM: (14240: R1 ^reward R1014)
- <=WM: (14239: I3 ^see 0)
- <=WM: (14243: O2022 ^name predict-no)
- <=WM: (14242: O2021 ^name predict-yes)
- <=WM: (14241: R1014 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2023 = 0.4318905960769295)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*45
- -->
- (S1 ^operator O2023 = 0.5681068538306986)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2024 = 0.3289462800068002)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*44
- -->
- (S1 ^operator O2024 = -0.1549421060161498)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2022 = 0.3289462800068002)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*44
- -->
- (S1 ^operator O2022 = -0.1549421060161498)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2021 = 0.4318905960769295)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*45
- -->
- (S1 ^operator O2021 = 0.5681068538306986)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*3 0.748236 -0.0114076 0.736829 -> 0.748236 -0.0114079 0.736828(R,m,v=1,0.898204,0.0919847)
- RL update rl*prefer*rvt*predict-yes*H0*3*v1*H1*35 0.251765 0.0114098 0.263174 -> 0.251764 0.0114095 0.263174(R,m,v=1,1,0)
- =>WM: (14262: S1 ^operator O2023)
- 1012: O: O2023 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N1012 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N1011 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14263: I3 ^predict-yes N1012)
- <=WM: (14249: N1011 ^status complete)
- <=WM: (14248: I3 ^predict-yes N1011)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (14267: I2 ^dir L)
- =>WM: (14266: I2 ^reward 1)
- =>WM: (14265: I2 ^see 1)
- =>WM: (14264: N1012 ^status complete)
- <=WM: (14252: I2 ^dir L)
- <=WM: (14251: I2 ^reward 1)
- <=WM: (14250: I2 ^see 1)
- =>WM: (14268: I2 ^level-1 L1-root)
- <=WM: (14253: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2024 = 0.671052764822871)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2023 = -0.06092862110810815)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1016 ^value 1 +)
- (R1 ^reward R1016 +)
- Firing propose*predict-yes
- -->
- (O2025 ^name predict-yes +)
- (S1 ^operator O2025 +)
- Firing propose*predict-no
- -->
- (O2026 ^name predict-no +)
- (S1 ^operator O2026 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2024 = 0.3289462800068002)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2023 = 0.4318905960769295)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O2024 ^name predict-no +)
- (S1 ^operator O2024 +)
- Retracting propose*predict-yes
- -->
- (O2023 ^name predict-yes +)
- (S1 ^operator O2023 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1015 ^value 1 +)
- (R1 ^reward R1015 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*44
- -->
- (S1 ^operator O2024 = -0.1549421060161498)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2024 = 0.3289462800068002)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*45
- -->
- (S1 ^operator O2023 = 0.5681068538306986)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2023 = 0.4318905960769295)
- =>WM: (14274: S1 ^operator O2026 +)
- =>WM: (14273: S1 ^operator O2025 +)
- =>WM: (14272: O2026 ^name predict-no)
- =>WM: (14271: O2025 ^name predict-yes)
- =>WM: (14270: R1016 ^value 1)
- =>WM: (14269: R1 ^reward R1016)
- <=WM: (14260: S1 ^operator O2023 +)
- <=WM: (14262: S1 ^operator O2023)
- <=WM: (14261: S1 ^operator O2024 +)
- <=WM: (14255: R1 ^reward R1015)
- <=WM: (14258: O2024 ^name predict-no)
- <=WM: (14257: O2023 ^name predict-yes)
- <=WM: (14256: R1015 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2025 = 0.4318905960769295)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2025 = -0.06092862110810815)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2026 = 0.3289462800068002)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2026 = 0.671052764822871)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2024 = 0.3289462800068002)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2024 = 0.671052764822871)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2023 = 0.4318905960769295)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2023 = -0.06092862110810815)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*5 0.683777 -0.251886 0.431891 -> 0.683777 -0.251886 0.431891(R,m,v=1,0.924419,0.0702774)
- RL update rl*prefer*rvt*predict-yes*H0*5*v1*H1*45 0.316221 0.251886 0.568107 -> 0.316221 0.251886 0.568107(R,m,v=1,1,0)
- =>WM: (14275: S1 ^operator O2026)
- 1013: O: O2026 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1013 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N1012 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14276: I3 ^predict-no N1013)
- <=WM: (14264: N1012 ^status complete)
- <=WM: (14263: I3 ^predict-yes N1012)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (14280: I2 ^dir R)
- =>WM: (14279: I2 ^reward 1)
- =>WM: (14278: I2 ^see 0)
- =>WM: (14277: N1013 ^status complete)
- <=WM: (14267: I2 ^dir L)
- <=WM: (14266: I2 ^reward 1)
- <=WM: (14265: I2 ^see 1)
- =>WM: (14281: I2 ^level-1 L0-root)
- <=WM: (14268: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2026 = -0.07401383653737587)
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2025 = 0.2631739142372443)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1017 ^value 1 +)
- (R1 ^reward R1017 +)
- Firing propose*predict-yes
- -->
- (O2027 ^name predict-yes +)
- (S1 ^operator O2027 +)
- Firing propose*predict-no
- -->
- (O2028 ^name predict-no +)
- (S1 ^operator O2028 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2026 = 0.2572461620169181)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2025 = 0.7368282162833856)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O2026 ^name predict-no +)
- (S1 ^operator O2026 +)
- Retracting propose*predict-yes
- -->
- (O2025 ^name predict-yes +)
- (S1 ^operator O2025 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1016 ^value 1 +)
- (R1 ^reward R1016 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2026 = 0.671052764822871)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2026 = 0.3289462800068002)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2025 = -0.06092862110810815)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2025 = 0.4318909785907853)
- =>WM: (14289: S1 ^operator O2028 +)
- =>WM: (14288: S1 ^operator O2027 +)
- =>WM: (14287: I3 ^dir R)
- =>WM: (14286: O2028 ^name predict-no)
- =>WM: (14285: O2027 ^name predict-yes)
- =>WM: (14284: R1017 ^value 1)
- =>WM: (14283: R1 ^reward R1017)
- =>WM: (14282: I3 ^see 0)
- <=WM: (14273: S1 ^operator O2025 +)
- <=WM: (14274: S1 ^operator O2026 +)
- <=WM: (14275: S1 ^operator O2026)
- <=WM: (14259: I3 ^dir L)
- <=WM: (14269: R1 ^reward R1016)
- <=WM: (14254: I3 ^see 1)
- <=WM: (14272: O2026 ^name predict-no)
- <=WM: (14271: O2025 ^name predict-yes)
- <=WM: (14270: R1016 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2027 = 0.7368282162833856)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2027 = 0.2631739142372443)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2028 = 0.2572461620169181)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2028 = -0.07401383653737587)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2026 = 0.2572461620169181)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2026 = -0.07401383653737587)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2025 = 0.7368282162833856)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2025 = 0.2631739142372443)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*6 0.565404 -0.236458 0.328946 -> 0.565404 -0.236458 0.328946(R,m,v=1,0.907407,0.0845411)
- RL update rl*prefer*rvt*predict-no*H0*6*v1*H1*43 0.434594 0.236459 0.671053 -> 0.434595 0.236458 0.671053(R,m,v=1,1,0)
- =>WM: (14290: S1 ^operator O2027)
- 1014: O: O2027 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N1014 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1013 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14291: I3 ^predict-yes N1014)
- <=WM: (14277: N1013 ^status complete)
- <=WM: (14276: I3 ^predict-no N1013)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
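Annotation: the ENV lines above (and the others in this trace) are consistent with a two-state "flip" machine. The sketch below is reconstructed purely from the printed transitions; it is an assumption about the environment's dynamics, not the actual flip_predict environment code.

```python
# Hypothetical reconstruction of the two-state "flip" environment, using only
# state/action/observation names that appear in the trace.
def step(state, direction):
    # In the trace, R flips State-A -> State-B and L flips State-B -> State-A;
    # every other (state, direction) pair observed is a self-loop.
    if state == "State-A" and direction == "R":
        nxt = "State-B"
    elif state == "State-B" and direction == "L":
        nxt = "State-A"
    else:
        nxt = state
    see = 1 if nxt != state else 0  # the agent sees 1 exactly when the state flips
    return nxt, see

def prediction_correct(prediction, see):
    # predict-yes is scored correct iff the next observation is 1
    return (prediction == "predict-yes") == (see == 1)
```

Every ENV transition in this section matches this rule, e.g. `(State-A, R) -> (State-B, 1)` and `(State-B, U) -> (State-B, 0)`.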
- --- Input Phase ---
- =>WM: (14295: I2 ^dir U)
- =>WM: (14294: I2 ^reward 1)
- =>WM: (14293: I2 ^see 1)
- =>WM: (14292: N1014 ^status complete)
- <=WM: (14280: I2 ^dir R)
- <=WM: (14279: I2 ^reward 1)
- <=WM: (14278: I2 ^see 0)
- =>WM: (14296: I2 ^level-1 R1-root)
- <=WM: (14281: I2 ^level-1 L0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1018 ^value 1 +)
- (R1 ^reward R1018 +)
- Firing propose*predict-yes
- -->
- (O2029 ^name predict-yes +)
- (S1 ^operator O2029 +)
- Firing propose*predict-no
- -->
- (O2030 ^name predict-no +)
- (S1 ^operator O2030 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2028 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2027 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2028 ^name predict-no +)
- (S1 ^operator O2028 +)
- Retracting propose*predict-yes
- -->
- (O2027 ^name predict-yes +)
- (S1 ^operator O2027 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1017 ^value 1 +)
- (R1 ^reward R1017 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2028 = -0.07401383653737587)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2028 = 0.2572461620169181)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2027 = 0.2631739142372443)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2027 = 0.7368282162833856)
- =>WM: (14304: S1 ^operator O2030 +)
- =>WM: (14303: S1 ^operator O2029 +)
- =>WM: (14302: I3 ^dir U)
- =>WM: (14301: O2030 ^name predict-no)
- =>WM: (14300: O2029 ^name predict-yes)
- =>WM: (14299: R1018 ^value 1)
- =>WM: (14298: R1 ^reward R1018)
- =>WM: (14297: I3 ^see 1)
- <=WM: (14288: S1 ^operator O2027 +)
- <=WM: (14290: S1 ^operator O2027)
- <=WM: (14289: S1 ^operator O2028 +)
- <=WM: (14287: I3 ^dir R)
- <=WM: (14283: R1 ^reward R1017)
- <=WM: (14282: I3 ^see 0)
- <=WM: (14286: O2028 ^name predict-no)
- <=WM: (14285: O2027 ^name predict-yes)
- <=WM: (14284: R1017 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2029 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2030 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2028 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2027 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*3 0.748236 -0.0114079 0.736828 -> 0.748236 -0.0114081 0.736828(R,m,v=1,0.89881,0.0914956)
- RL update rl*prefer*rvt*predict-yes*H0*3*v1*H1*35 0.251764 0.0114095 0.263174 -> 0.251764 0.0114092 0.263174(R,m,v=1,1,0)
- =>WM: (14305: S1 ^operator O2030)
- 1015: O: O2030 (predict-no)
- --- END Decision Phase ---
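Annotation: each `RL update` line above prints three numbers before and after the arrow. Throughout this trace the third number is the sum of the first two (e.g. 0.748236 + (-0.0114079) ≈ 0.736828), which suggests the rule's numeric preference is stored as two summed components. A quick check, assuming nothing beyond the values printed in the trace:

```python
# Values copied verbatim from the two "RL update" lines above
# (pre-update triples); the third value should equal the sum of the first two.
updates = [
    (0.748236, -0.0114079, 0.736828),  # rl*prefer*rvt*predict-yes*H0*3
    (0.251764, 0.0114095, 0.263174),   # rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
]
for a, b, total in updates:
    assert abs((a + b) - total) < 1e-5
```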
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1015 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N1014 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14306: I3 ^predict-no N1015)
- <=WM: (14292: N1014 ^status complete)
- <=WM: (14291: I3 ^predict-yes N1014)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (14310: I2 ^dir R)
- =>WM: (14309: I2 ^reward 1)
- =>WM: (14308: I2 ^see 0)
- =>WM: (14307: N1015 ^status complete)
- <=WM: (14295: I2 ^dir U)
- <=WM: (14294: I2 ^reward 1)
- <=WM: (14293: I2 ^see 1)
- =>WM: (14311: I2 ^level-1 R1-root)
- <=WM: (14296: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2029 = -0.3011268063455669)
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2030 = 0.7427527453956113)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1019 ^value 1 +)
- (R1 ^reward R1019 +)
- Firing propose*predict-yes
- -->
- (O2031 ^name predict-yes +)
- (S1 ^operator O2031 +)
- Firing propose*predict-no
- -->
- (O2032 ^name predict-no +)
- (S1 ^operator O2032 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2030 = 0.2572461620169181)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2029 = 0.7368278967052911)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O2030 ^name predict-no +)
- (S1 ^operator O2030 +)
- Retracting propose*predict-yes
- -->
- (O2029 ^name predict-yes +)
- (S1 ^operator O2029 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1018 ^value 1 +)
- (R1 ^reward R1018 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2030 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2029 = 0.)
- =>WM: (14319: S1 ^operator O2032 +)
- =>WM: (14318: S1 ^operator O2031 +)
- =>WM: (14317: I3 ^dir R)
- =>WM: (14316: O2032 ^name predict-no)
- =>WM: (14315: O2031 ^name predict-yes)
- =>WM: (14314: R1019 ^value 1)
- =>WM: (14313: R1 ^reward R1019)
- =>WM: (14312: I3 ^see 0)
- <=WM: (14303: S1 ^operator O2029 +)
- <=WM: (14304: S1 ^operator O2030 +)
- <=WM: (14305: S1 ^operator O2030)
- <=WM: (14302: I3 ^dir U)
- <=WM: (14298: R1 ^reward R1018)
- <=WM: (14297: I3 ^see 1)
- <=WM: (14301: O2030 ^name predict-no)
- <=WM: (14300: O2029 ^name predict-yes)
- <=WM: (14299: R1018 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2031 = -0.3011268063455669)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2031 = 0.7368278967052911)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2032 = 0.7427527453956113)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2032 = 0.2572461620169181)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2030 = 0.2572461620169181)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2030 = 0.7427527453956113)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2029 = 0.7368278967052911)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2029 = -0.3011268063455669)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (14320: S1 ^operator O2032)
- 1016: O: O2032 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1016 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1015 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14321: I3 ^predict-no N1016)
- <=WM: (14307: N1015 ^status complete)
- <=WM: (14306: I3 ^predict-no N1015)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (14325: I2 ^dir U)
- =>WM: (14324: I2 ^reward 1)
- =>WM: (14323: I2 ^see 0)
- =>WM: (14322: N1016 ^status complete)
- <=WM: (14310: I2 ^dir R)
- <=WM: (14309: I2 ^reward 1)
- <=WM: (14308: I2 ^see 0)
- =>WM: (14326: I2 ^level-1 R0-root)
- <=WM: (14311: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1020 ^value 1 +)
- (R1 ^reward R1020 +)
- Firing propose*predict-yes
- -->
- (O2033 ^name predict-yes +)
- (S1 ^operator O2033 +)
- Firing propose*predict-no
- -->
- (O2034 ^name predict-no +)
- (S1 ^operator O2034 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2032 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2031 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2032 ^name predict-no +)
- (S1 ^operator O2032 +)
- Retracting propose*predict-yes
- -->
- (O2031 ^name predict-yes +)
- (S1 ^operator O2031 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1019 ^value 1 +)
- (R1 ^reward R1019 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2032 = 0.2572461620169181)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2032 = 0.7427527453956113)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2031 = 0.7368278967052911)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2031 = -0.3011268063455669)
- =>WM: (14333: S1 ^operator O2034 +)
- =>WM: (14332: S1 ^operator O2033 +)
- =>WM: (14331: I3 ^dir U)
- =>WM: (14330: O2034 ^name predict-no)
- =>WM: (14329: O2033 ^name predict-yes)
- =>WM: (14328: R1020 ^value 1)
- =>WM: (14327: R1 ^reward R1020)
- <=WM: (14318: S1 ^operator O2031 +)
- <=WM: (14319: S1 ^operator O2032 +)
- <=WM: (14320: S1 ^operator O2032)
- <=WM: (14317: I3 ^dir R)
- <=WM: (14313: R1 ^reward R1019)
- <=WM: (14316: O2032 ^name predict-no)
- <=WM: (14315: O2031 ^name predict-yes)
- <=WM: (14314: R1019 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2033 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2034 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2032 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2031 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*4 0.586136 -0.32889 0.257246 -> 0.586136 -0.32889 0.257246(R,m,v=1,0.861272,0.120177)
- RL update rl*prefer*rvt*predict-no*H0*4*v1*H1*36 0.413863 0.32889 0.742753 -> 0.413863 0.32889 0.742753(R,m,v=1,1,0)
- =>WM: (14334: S1 ^operator O2034)
- 1017: O: O2034 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1017 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1016 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14335: I3 ^predict-no N1017)
- <=WM: (14322: N1016 ^status complete)
- <=WM: (14321: I3 ^predict-no N1016)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (14339: I2 ^dir L)
- =>WM: (14338: I2 ^reward 1)
- =>WM: (14337: I2 ^see 0)
- =>WM: (14336: N1017 ^status complete)
- <=WM: (14325: I2 ^dir U)
- <=WM: (14324: I2 ^reward 1)
- <=WM: (14323: I2 ^see 0)
- =>WM: (14340: I2 ^level-1 R0-root)
- <=WM: (14326: I2 ^level-1 R0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2034 = 0.04178081990804111)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2033 = 0.5681110068810165)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1021 ^value 1 +)
- (R1 ^reward R1021 +)
- Firing propose*predict-yes
- -->
- (O2035 ^name predict-yes +)
- (S1 ^operator O2035 +)
- Firing propose*predict-no
- -->
- (O2036 ^name predict-no +)
- (S1 ^operator O2036 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2034 = 0.3289464232823495)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2033 = 0.4318909785907853)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2034 ^name predict-no +)
- (S1 ^operator O2034 +)
- Retracting propose*predict-yes
- -->
- (O2033 ^name predict-yes +)
- (S1 ^operator O2033 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1020 ^value 1 +)
- (R1 ^reward R1020 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2034 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2033 = 0.)
- =>WM: (14347: S1 ^operator O2036 +)
- =>WM: (14346: S1 ^operator O2035 +)
- =>WM: (14345: I3 ^dir L)
- =>WM: (14344: O2036 ^name predict-no)
- =>WM: (14343: O2035 ^name predict-yes)
- =>WM: (14342: R1021 ^value 1)
- =>WM: (14341: R1 ^reward R1021)
- <=WM: (14332: S1 ^operator O2033 +)
- <=WM: (14333: S1 ^operator O2034 +)
- <=WM: (14334: S1 ^operator O2034)
- <=WM: (14331: I3 ^dir U)
- <=WM: (14327: R1 ^reward R1020)
- <=WM: (14330: O2034 ^name predict-no)
- <=WM: (14329: O2033 ^name predict-yes)
- <=WM: (14328: R1020 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2035 = 0.5681110068810165)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2035 = 0.4318909785907853)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2036 = 0.04178081990804111)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2036 = 0.3289464232823495)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2034 = 0.3289464232823495)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2034 = 0.04178081990804111)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2033 = 0.4318909785907853)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2033 = 0.5681110068810165)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (14348: S1 ^operator O2035)
- 1018: O: O2035 (predict-yes)
- --- END Decision Phase ---
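Annotation: in each decision phase the operator selected is the one whose fired numeric preferences sum highest. Below is a sketch of that selection arithmetic using the values fired in this cycle; it covers only the summation step (Soar's actual decision procedure also handles indifference and exploration), and the operator labels are taken from the trace.

```python
# Numeric preferences fired for each proposed operator in this cycle,
# copied from the trace above.
prefs = {
    "O2035 (predict-yes)": [0.5681110068810165, 0.4318909785907853],
    "O2036 (predict-no)":  [0.04178081990804111, 0.3289464232823495],
}
totals = {op: sum(vals) for op, vals in prefs.items()}
winner = max(totals, key=totals.get)
# O2035 totals ~1.000 vs ~0.371 for O2036; the trace indeed selects O2035.
```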
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N1018 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1017 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14349: I3 ^predict-yes N1018)
- <=WM: (14336: N1017 ^status complete)
- <=WM: (14335: I3 ^predict-no N1017)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (14353: I2 ^dir L)
- =>WM: (14352: I2 ^reward 1)
- =>WM: (14351: I2 ^see 1)
- =>WM: (14350: N1018 ^status complete)
- <=WM: (14339: I2 ^dir L)
- <=WM: (14338: I2 ^reward 1)
- <=WM: (14337: I2 ^see 0)
- =>WM: (14354: I2 ^level-1 L1-root)
- <=WM: (14340: I2 ^level-1 R0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2036 = 0.6710529080984203)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2035 = -0.06092862110810815)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1022 ^value 1 +)
- (R1 ^reward R1022 +)
- Firing propose*predict-yes
- -->
- (O2037 ^name predict-yes +)
- (S1 ^operator O2037 +)
- Firing propose*predict-no
- -->
- (O2038 ^name predict-no +)
- (S1 ^operator O2038 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2036 = 0.3289464232823495)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2035 = 0.4318909785907853)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2036 ^name predict-no +)
- (S1 ^operator O2036 +)
- Retracting propose*predict-yes
- -->
- (O2035 ^name predict-yes +)
- (S1 ^operator O2035 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1021 ^value 1 +)
- (R1 ^reward R1021 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2036 = 0.3289464232823495)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2036 = 0.04178081990804111)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2035 = 0.4318909785907853)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2035 = 0.5681110068810165)
- =>WM: (14361: S1 ^operator O2038 +)
- =>WM: (14360: S1 ^operator O2037 +)
- =>WM: (14359: O2038 ^name predict-no)
- =>WM: (14358: O2037 ^name predict-yes)
- =>WM: (14357: R1022 ^value 1)
- =>WM: (14356: R1 ^reward R1022)
- =>WM: (14355: I3 ^see 1)
- <=WM: (14346: S1 ^operator O2035 +)
- <=WM: (14348: S1 ^operator O2035)
- <=WM: (14347: S1 ^operator O2036 +)
- <=WM: (14341: R1 ^reward R1021)
- <=WM: (14312: I3 ^see 0)
- <=WM: (14344: O2036 ^name predict-no)
- <=WM: (14343: O2035 ^name predict-yes)
- <=WM: (14342: R1021 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2037 = 0.4318909785907853)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2037 = -0.06092862110810815)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2038 = 0.3289464232823495)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2038 = 0.6710529080984203)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2036 = 0.3289464232823495)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2036 = 0.6710529080984203)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2035 = 0.4318909785907853)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2035 = -0.06092862110810815)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*5 0.683777 -0.251886 0.431891 -> 0.683777 -0.251886 0.431891(R,m,v=1,0.924855,0.0699019)
- RL update rl*prefer*rvt*predict-yes*H0*5*v1*H1*30 0.316225 0.251886 0.568111 -> 0.316225 0.251886 0.568111(R,m,v=1,1,0)
- =>WM: (14362: S1 ^operator O2038)
- 1019: O: O2038 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1019 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N1018 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14363: I3 ^predict-no N1019)
- <=WM: (14350: N1018 ^status complete)
- <=WM: (14349: I3 ^predict-yes N1018)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (14367: I2 ^dir R)
- =>WM: (14366: I2 ^reward 1)
- =>WM: (14365: I2 ^see 0)
- =>WM: (14364: N1019 ^status complete)
- <=WM: (14353: I2 ^dir L)
- <=WM: (14352: I2 ^reward 1)
- <=WM: (14351: I2 ^see 1)
- =>WM: (14368: I2 ^level-1 L0-root)
- <=WM: (14354: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2038 = -0.07401383653737587)
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2037 = 0.2631735946591498)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1023 ^value 1 +)
- (R1 ^reward R1023 +)
- Firing propose*predict-yes
- -->
- (O2039 ^name predict-yes +)
- (S1 ^operator O2039 +)
- Firing propose*predict-no
- -->
- (O2040 ^name predict-no +)
- (S1 ^operator O2040 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2038 = 0.2572463259050387)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2037 = 0.7368278967052911)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O2038 ^name predict-no +)
- (S1 ^operator O2038 +)
- Retracting propose*predict-yes
- -->
- (O2037 ^name predict-yes +)
- (S1 ^operator O2037 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1022 ^value 1 +)
- (R1 ^reward R1022 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2038 = 0.6710529080984203)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2038 = 0.3289464232823495)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2037 = -0.06092862110810815)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2037 = 0.431890680770015)
- =>WM: (14376: S1 ^operator O2040 +)
- =>WM: (14375: S1 ^operator O2039 +)
- =>WM: (14374: I3 ^dir R)
- =>WM: (14373: O2040 ^name predict-no)
- =>WM: (14372: O2039 ^name predict-yes)
- =>WM: (14371: R1023 ^value 1)
- =>WM: (14370: R1 ^reward R1023)
- =>WM: (14369: I3 ^see 0)
- <=WM: (14360: S1 ^operator O2037 +)
- <=WM: (14361: S1 ^operator O2038 +)
- <=WM: (14362: S1 ^operator O2038)
- <=WM: (14345: I3 ^dir L)
- <=WM: (14356: R1 ^reward R1022)
- <=WM: (14355: I3 ^see 1)
- <=WM: (14359: O2038 ^name predict-no)
- <=WM: (14358: O2037 ^name predict-yes)
- <=WM: (14357: R1022 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2039 = 0.7368278967052911)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2039 = 0.2631735946591498)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2040 = 0.2572463259050387)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2040 = -0.07401383653737587)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2038 = 0.2572463259050387)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2038 = -0.07401383653737587)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2037 = 0.7368278967052911)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2037 = 0.2631735946591498)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*6 0.565404 -0.236458 0.328946 -> 0.565404 -0.236458 0.328947(R,m,v=1,0.907975,0.0840718)
- RL update rl*prefer*rvt*predict-no*H0*6*v1*H1*43 0.434595 0.236458 0.671053 -> 0.434595 0.236458 0.671053(R,m,v=1,1,0)
- =>WM: (14377: S1 ^operator O2039)
- 1020: O: O2039 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N1020 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1019 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14378: I3 ^predict-yes N1020)
- <=WM: (14364: N1019 ^status complete)
- <=WM: (14363: I3 ^predict-no N1019)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (14382: I2 ^dir U)
- =>WM: (14381: I2 ^reward 1)
- =>WM: (14380: I2 ^see 1)
- =>WM: (14379: N1020 ^status complete)
- <=WM: (14367: I2 ^dir R)
- <=WM: (14366: I2 ^reward 1)
- <=WM: (14365: I2 ^see 0)
- =>WM: (14383: I2 ^level-1 R1-root)
- <=WM: (14368: I2 ^level-1 L0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1024 ^value 1 +)
- (R1 ^reward R1024 +)
- Firing propose*predict-yes
- -->
- (O2041 ^name predict-yes +)
- (S1 ^operator O2041 +)
- Firing propose*predict-no
- -->
- (O2042 ^name predict-no +)
- (S1 ^operator O2042 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2040 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2039 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2040 ^name predict-no +)
- (S1 ^operator O2040 +)
- Retracting propose*predict-yes
- -->
- (O2039 ^name predict-yes +)
- (S1 ^operator O2039 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1023 ^value 1 +)
- (R1 ^reward R1023 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2040 = -0.07401383653737587)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2040 = 0.2572463259050387)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2039 = 0.2631735946591498)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2039 = 0.7368278967052911)
- =>WM: (14391: S1 ^operator O2042 +)
- =>WM: (14390: S1 ^operator O2041 +)
- =>WM: (14389: I3 ^dir U)
- =>WM: (14388: O2042 ^name predict-no)
- =>WM: (14387: O2041 ^name predict-yes)
- =>WM: (14386: R1024 ^value 1)
- =>WM: (14385: R1 ^reward R1024)
- =>WM: (14384: I3 ^see 1)
- <=WM: (14375: S1 ^operator O2039 +)
- <=WM: (14377: S1 ^operator O2039)
- <=WM: (14376: S1 ^operator O2040 +)
- <=WM: (14374: I3 ^dir R)
- <=WM: (14370: R1 ^reward R1023)
- <=WM: (14369: I3 ^see 0)
- <=WM: (14373: O2040 ^name predict-no)
- <=WM: (14372: O2039 ^name predict-yes)
- <=WM: (14371: R1023 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2041 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2042 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2040 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2039 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*3 0.748236 -0.0114081 0.736828 -> 0.748236 -0.0114083 0.736828(R,m,v=1,0.899408,0.0910116)
- RL update rl*prefer*rvt*predict-yes*H0*3*v1*H1*35 0.251764 0.0114092 0.263174 -> 0.251764 0.0114091 0.263173(R,m,v=1,1,0)
- =>WM: (14392: S1 ^operator O2042)
- 1021: O: O2042 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1021 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N1020 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14393: I3 ^predict-no N1021)
- <=WM: (14379: N1020 ^status complete)
- <=WM: (14378: I3 ^predict-yes N1020)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- /--- Input Phase ---
- =>WM: (14397: I2 ^dir U)
- =>WM: (14396: I2 ^reward 1)
- =>WM: (14395: I2 ^see 0)
- =>WM: (14394: N1021 ^status complete)
- <=WM: (14382: I2 ^dir U)
- <=WM: (14381: I2 ^reward 1)
- <=WM: (14380: I2 ^see 1)
- =>WM: (14398: I2 ^level-1 R1-root)
- <=WM: (14383: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1025 ^value 1 +)
- (R1 ^reward R1025 +)
- Firing propose*predict-yes
- -->
- (O2043 ^name predict-yes +)
- (S1 ^operator O2043 +)
- Firing propose*predict-no
- -->
- (O2044 ^name predict-no +)
- (S1 ^operator O2044 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2042 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2041 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O2042 ^name predict-no +)
- (S1 ^operator O2042 +)
- Retracting propose*predict-yes
- -->
- (O2041 ^name predict-yes +)
- (S1 ^operator O2041 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1024 ^value 1 +)
- (R1 ^reward R1024 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2042 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2041 = 0.)
- =>WM: (14405: S1 ^operator O2044 +)
- =>WM: (14404: S1 ^operator O2043 +)
- =>WM: (14403: O2044 ^name predict-no)
- =>WM: (14402: O2043 ^name predict-yes)
- =>WM: (14401: R1025 ^value 1)
- =>WM: (14400: R1 ^reward R1025)
- =>WM: (14399: I3 ^see 0)
- <=WM: (14390: S1 ^operator O2041 +)
- <=WM: (14391: S1 ^operator O2042 +)
- <=WM: (14392: S1 ^operator O2042)
- <=WM: (14385: R1 ^reward R1024)
- <=WM: (14384: I3 ^see 1)
- <=WM: (14388: O2042 ^name predict-no)
- <=WM: (14387: O2041 ^name predict-yes)
- <=WM: (14386: R1024 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2043 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2044 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2042 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2041 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (14406: S1 ^operator O2044)
- 1022: O: O2044 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1022 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1021 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14407: I3 ^predict-no N1022)
- <=WM: (14394: N1021 ^status complete)
- <=WM: (14393: I3 ^predict-no N1021)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- |\--- Input Phase ---
- =>WM: (14411: I2 ^dir U)
- =>WM: (14410: I2 ^reward 1)
- =>WM: (14409: I2 ^see 0)
- =>WM: (14408: N1022 ^status complete)
- <=WM: (14397: I2 ^dir U)
- <=WM: (14396: I2 ^reward 1)
- <=WM: (14395: I2 ^see 0)
- =>WM: (14412: I2 ^level-1 R1-root)
- <=WM: (14398: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1026 ^value 1 +)
- (R1 ^reward R1026 +)
- Firing propose*predict-yes
- -->
- (O2045 ^name predict-yes +)
- (S1 ^operator O2045 +)
- Firing propose*predict-no
- -->
- (O2046 ^name predict-no +)
- (S1 ^operator O2046 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2044 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2043 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2044 ^name predict-no +)
- (S1 ^operator O2044 +)
- Retracting propose*predict-yes
- -->
- (O2043 ^name predict-yes +)
- (S1 ^operator O2043 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1025 ^value 1 +)
- (R1 ^reward R1025 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2044 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2043 = 0.)
- =>WM: (14418: S1 ^operator O2046 +)
- =>WM: (14417: S1 ^operator O2045 +)
- =>WM: (14416: O2046 ^name predict-no)
- =>WM: (14415: O2045 ^name predict-yes)
- =>WM: (14414: R1026 ^value 1)
- =>WM: (14413: R1 ^reward R1026)
- <=WM: (14404: S1 ^operator O2043 +)
- <=WM: (14405: S1 ^operator O2044 +)
- <=WM: (14406: S1 ^operator O2044)
- <=WM: (14400: R1 ^reward R1025)
- <=WM: (14403: O2044 ^name predict-no)
- <=WM: (14402: O2043 ^name predict-yes)
- <=WM: (14401: R1025 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2045 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2046 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2044 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2043 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (14419: S1 ^operator O2046)
- 1023: O: O2046 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1023 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1022 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14420: I3 ^predict-no N1023)
- <=WM: (14408: N1022 ^status complete)
- <=WM: (14407: I3 ^predict-no N1022)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- -/|--- Input Phase ---
- =>WM: (14424: I2 ^dir R)
- =>WM: (14423: I2 ^reward 1)
- =>WM: (14422: I2 ^see 0)
- =>WM: (14421: N1023 ^status complete)
- <=WM: (14411: I2 ^dir U)
- <=WM: (14410: I2 ^reward 1)
- <=WM: (14409: I2 ^see 0)
- =>WM: (14425: I2 ^level-1 R1-root)
- <=WM: (14412: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2045 = -0.3011268063455669)
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2046 = 0.7427529092837319)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1027 ^value 1 +)
- (R1 ^reward R1027 +)
- Firing propose*predict-yes
- -->
- (O2047 ^name predict-yes +)
- (S1 ^operator O2047 +)
- Firing propose*predict-no
- -->
- (O2048 ^name predict-no +)
- (S1 ^operator O2048 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2046 = 0.2572463259050387)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2045 = 0.736827673000625)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2046 ^name predict-no +)
- (S1 ^operator O2046 +)
- Retracting propose*predict-yes
- -->
- (O2045 ^name predict-yes +)
- (S1 ^operator O2045 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1026 ^value 1 +)
- (R1 ^reward R1026 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2046 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2045 = 0.)
- =>WM: (14432: S1 ^operator O2048 +)
- =>WM: (14431: S1 ^operator O2047 +)
- =>WM: (14430: I3 ^dir R)
- =>WM: (14429: O2048 ^name predict-no)
- =>WM: (14428: O2047 ^name predict-yes)
- =>WM: (14427: R1027 ^value 1)
- =>WM: (14426: R1 ^reward R1027)
- <=WM: (14417: S1 ^operator O2045 +)
- <=WM: (14418: S1 ^operator O2046 +)
- <=WM: (14419: S1 ^operator O2046)
- <=WM: (14389: I3 ^dir U)
- <=WM: (14413: R1 ^reward R1026)
- <=WM: (14416: O2046 ^name predict-no)
- <=WM: (14415: O2045 ^name predict-yes)
- <=WM: (14414: R1026 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2047 = -0.3011268063455669)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2047 = 0.736827673000625)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2048 = 0.7427529092837319)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2048 = 0.2572463259050387)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2046 = 0.2572463259050387)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2046 = 0.7427529092837319)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2045 = 0.736827673000625)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2045 = -0.3011268063455669)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (14433: S1 ^operator O2048)
- 1024: O: O2048 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1024 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1023 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14434: I3 ^predict-no N1024)
- <=WM: (14421: N1023 ^status complete)
- <=WM: (14420: I3 ^predict-no N1023)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- \-/--- Input Phase ---
- =>WM: (14438: I2 ^dir R)
- =>WM: (14437: I2 ^reward 1)
- =>WM: (14436: I2 ^see 0)
- =>WM: (14435: N1024 ^status complete)
- <=WM: (14424: I2 ^dir R)
- <=WM: (14423: I2 ^reward 1)
- <=WM: (14422: I2 ^see 0)
- =>WM: (14439: I2 ^level-1 R0-root)
- <=WM: (14425: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O2048 = 0.7427584875646159)
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O2047 = -0.1989581826229297)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1028 ^value 1 +)
- (R1 ^reward R1028 +)
- Firing propose*predict-yes
- -->
- (O2049 ^name predict-yes +)
- (S1 ^operator O2049 +)
- Firing propose*predict-no
- -->
- (O2050 ^name predict-no +)
- (S1 ^operator O2050 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2048 = 0.2572463259050387)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2047 = 0.736827673000625)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2048 ^name predict-no +)
- (S1 ^operator O2048 +)
- Retracting propose*predict-yes
- -->
- (O2047 ^name predict-yes +)
- (S1 ^operator O2047 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1027 ^value 1 +)
- (R1 ^reward R1027 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2048 = 0.2572463259050387)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2048 = 0.7427529092837319)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2047 = 0.736827673000625)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2047 = -0.3011268063455669)
- =>WM: (14445: S1 ^operator O2050 +)
- =>WM: (14444: S1 ^operator O2049 +)
- =>WM: (14443: O2050 ^name predict-no)
- =>WM: (14442: O2049 ^name predict-yes)
- =>WM: (14441: R1028 ^value 1)
- =>WM: (14440: R1 ^reward R1028)
- <=WM: (14431: S1 ^operator O2047 +)
- <=WM: (14432: S1 ^operator O2048 +)
- <=WM: (14433: S1 ^operator O2048)
- <=WM: (14426: R1 ^reward R1027)
- <=WM: (14429: O2048 ^name predict-no)
- <=WM: (14428: O2047 ^name predict-yes)
- <=WM: (14427: R1027 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2049 = 0.736827673000625)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O2049 = -0.1989581826229297)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2050 = 0.2572463259050387)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O2050 = 0.7427584875646159)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2048 = 0.2572463259050387)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O2048 = 0.7427584875646159)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2047 = 0.736827673000625)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O2047 = -0.1989581826229297)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*4 0.586136 -0.32889 0.257246 -> 0.586136 -0.32889 0.257246(R,m,v=1,0.862069,0.119593)
- RL update rl*prefer*rvt*predict-no*H0*4*v1*H1*36 0.413863 0.32889 0.742753 -> 0.413863 0.32889 0.742753(R,m,v=1,1,0)
- =>WM: (14446: S1 ^operator O2050)
- 1025: O: O2050 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1025 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1024 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14447: I3 ^predict-no N1025)
- <=WM: (14435: N1024 ^status complete)
- <=WM: (14434: I3 ^predict-no N1024)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- |\---- Input Phase ---
- =>WM: (14451: I2 ^dir U)
- =>WM: (14450: I2 ^reward 1)
- =>WM: (14449: I2 ^see 0)
- =>WM: (14448: N1025 ^status complete)
- <=WM: (14438: I2 ^dir R)
- <=WM: (14437: I2 ^reward 1)
- <=WM: (14436: I2 ^see 0)
- =>WM: (14452: I2 ^level-1 R0-root)
- <=WM: (14439: I2 ^level-1 R0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1029 ^value 1 +)
- (R1 ^reward R1029 +)
- Firing propose*predict-yes
- -->
- (O2051 ^name predict-yes +)
- (S1 ^operator O2051 +)
- Firing propose*predict-no
- -->
- (O2052 ^name predict-no +)
- (S1 ^operator O2052 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2050 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2049 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2050 ^name predict-no +)
- (S1 ^operator O2050 +)
- Retracting propose*predict-yes
- -->
- (O2049 ^name predict-yes +)
- (S1 ^operator O2049 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1028 ^value 1 +)
- (R1 ^reward R1028 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O2050 = 0.7427584875646159)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2050 = 0.2572464406267231)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O2049 = -0.1989581826229297)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2049 = 0.736827673000625)
- =>WM: (14459: S1 ^operator O2052 +)
- =>WM: (14458: S1 ^operator O2051 +)
- =>WM: (14457: I3 ^dir U)
- =>WM: (14456: O2052 ^name predict-no)
- =>WM: (14455: O2051 ^name predict-yes)
- =>WM: (14454: R1029 ^value 1)
- =>WM: (14453: R1 ^reward R1029)
- <=WM: (14444: S1 ^operator O2049 +)
- <=WM: (14445: S1 ^operator O2050 +)
- <=WM: (14446: S1 ^operator O2050)
- <=WM: (14430: I3 ^dir R)
- <=WM: (14440: R1 ^reward R1028)
- <=WM: (14443: O2050 ^name predict-no)
- <=WM: (14442: O2049 ^name predict-yes)
- <=WM: (14441: R1028 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2051 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2052 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2050 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2049 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*4 0.586136 -0.32889 0.257246 -> 0.586136 -0.32889 0.257246(R,m,v=1,0.862857,0.119015)
- RL update rl*prefer*rvt*predict-no*H0*4*v1*H1*41 0.413868 0.328891 0.742758 -> 0.413867 0.328891 0.742758(R,m,v=1,1,0)
- =>WM: (14460: S1 ^operator O2052)
- 1026: O: O2052 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1026 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1025 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14461: I3 ^predict-no N1026)
- <=WM: (14448: N1025 ^status complete)
- <=WM: (14447: I3 ^predict-no N1025)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- /|--- Input Phase ---
- =>WM: (14465: I2 ^dir L)
- =>WM: (14464: I2 ^reward 1)
- =>WM: (14463: I2 ^see 0)
- =>WM: (14462: N1026 ^status complete)
- <=WM: (14451: I2 ^dir U)
- <=WM: (14450: I2 ^reward 1)
- <=WM: (14449: I2 ^see 0)
- =>WM: (14466: I2 ^level-1 R0-root)
- <=WM: (14452: I2 ^level-1 R0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2052 = 0.04178081990804111)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2051 = 0.5681107090602462)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1030 ^value 1 +)
- (R1 ^reward R1030 +)
- Firing propose*predict-yes
- -->
- (O2053 ^name predict-yes +)
- (S1 ^operator O2053 +)
- Firing propose*predict-no
- -->
- (O2054 ^name predict-no +)
- (S1 ^operator O2054 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2052 = 0.3289465235752339)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2051 = 0.431890680770015)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2052 ^name predict-no +)
- (S1 ^operator O2052 +)
- Retracting propose*predict-yes
- -->
- (O2051 ^name predict-yes +)
- (S1 ^operator O2051 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1029 ^value 1 +)
- (R1 ^reward R1029 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2052 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2051 = 0.)
- =>WM: (14473: S1 ^operator O2054 +)
- =>WM: (14472: S1 ^operator O2053 +)
- =>WM: (14471: I3 ^dir L)
- =>WM: (14470: O2054 ^name predict-no)
- =>WM: (14469: O2053 ^name predict-yes)
- =>WM: (14468: R1030 ^value 1)
- =>WM: (14467: R1 ^reward R1030)
- <=WM: (14458: S1 ^operator O2051 +)
- <=WM: (14459: S1 ^operator O2052 +)
- <=WM: (14460: S1 ^operator O2052)
- <=WM: (14457: I3 ^dir U)
- <=WM: (14453: R1 ^reward R1029)
- <=WM: (14456: O2052 ^name predict-no)
- <=WM: (14455: O2051 ^name predict-yes)
- <=WM: (14454: R1029 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2053 = 0.5681107090602462)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2053 = 0.431890680770015)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2054 = 0.04178081990804111)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2054 = 0.3289465235752339)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2052 = 0.3289465235752339)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2052 = 0.04178081990804111)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2051 = 0.431890680770015)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2051 = 0.5681107090602462)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (14474: S1 ^operator O2053)
- 1027: O: O2053 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N1027 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1026 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14475: I3 ^predict-yes N1027)
- <=WM: (14462: N1026 ^status complete)
- <=WM: (14461: I3 ^predict-no N1026)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- \---- Input Phase ---
- =>WM: (14479: I2 ^dir U)
- =>WM: (14478: I2 ^reward 1)
- =>WM: (14477: I2 ^see 1)
- =>WM: (14476: N1027 ^status complete)
- <=WM: (14465: I2 ^dir L)
- <=WM: (14464: I2 ^reward 1)
- <=WM: (14463: I2 ^see 0)
- =>WM: (14480: I2 ^level-1 L1-root)
- <=WM: (14466: I2 ^level-1 R0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1031 ^value 1 +)
- (R1 ^reward R1031 +)
- Firing propose*predict-yes
- -->
- (O2055 ^name predict-yes +)
- (S1 ^operator O2055 +)
- Firing propose*predict-no
- -->
- (O2056 ^name predict-no +)
- (S1 ^operator O2056 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2054 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2053 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2054 ^name predict-no +)
- (S1 ^operator O2054 +)
- Retracting propose*predict-yes
- -->
- (O2053 ^name predict-yes +)
- (S1 ^operator O2053 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1030 ^value 1 +)
- (R1 ^reward R1030 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2054 = 0.3289465235752339)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2054 = 0.04178081990804111)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2053 = 0.431890680770015)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2053 = 0.5681107090602462)
- =>WM: (14488: S1 ^operator O2056 +)
- =>WM: (14487: S1 ^operator O2055 +)
- =>WM: (14486: I3 ^dir U)
- =>WM: (14485: O2056 ^name predict-no)
- =>WM: (14484: O2055 ^name predict-yes)
- =>WM: (14483: R1031 ^value 1)
- =>WM: (14482: R1 ^reward R1031)
- =>WM: (14481: I3 ^see 1)
- <=WM: (14472: S1 ^operator O2053 +)
- <=WM: (14474: S1 ^operator O2053)
- <=WM: (14473: S1 ^operator O2054 +)
- <=WM: (14471: I3 ^dir L)
- <=WM: (14467: R1 ^reward R1030)
- <=WM: (14399: I3 ^see 0)
- <=WM: (14470: O2054 ^name predict-no)
- <=WM: (14469: O2053 ^name predict-yes)
- <=WM: (14468: R1030 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2055 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2056 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2054 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2053 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*5 0.683777 -0.251886 0.431891 -> 0.683777 -0.251886 0.43189(R,m,v=1,0.925287,0.0695303)
- RL update rl*prefer*rvt*predict-yes*H0*5*v1*H1*30 0.316225 0.251886 0.568111 -> 0.316224 0.251886 0.568111(R,m,v=1,1,0)
- =>WM: (14489: S1 ^operator O2056)
- 1028: O: O2056 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1028 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N1027 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14490: I3 ^predict-no N1028)
- <=WM: (14476: N1027 ^status complete)
- <=WM: (14475: I3 ^predict-yes N1027)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (14494: I2 ^dir U)
- =>WM: (14493: I2 ^reward 1)
- =>WM: (14492: I2 ^see 0)
- =>WM: (14491: N1028 ^status complete)
- <=WM: (14479: I2 ^dir U)
- <=WM: (14478: I2 ^reward 1)
- <=WM: (14477: I2 ^see 1)
- =>WM: (14495: I2 ^level-1 L1-root)
- <=WM: (14480: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1032 ^value 1 +)
- (R1 ^reward R1032 +)
- Firing propose*predict-yes
- -->
- (O2057 ^name predict-yes +)
- (S1 ^operator O2057 +)
- Firing propose*predict-no
- -->
- (O2058 ^name predict-no +)
- (S1 ^operator O2058 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2056 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2055 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O2056 ^name predict-no +)
- (S1 ^operator O2056 +)
- Retracting propose*predict-yes
- -->
- (O2055 ^name predict-yes +)
- (S1 ^operator O2055 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1031 ^value 1 +)
- (R1 ^reward R1031 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2056 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2055 = 0.)
- =>WM: (14502: S1 ^operator O2058 +)
- =>WM: (14501: S1 ^operator O2057 +)
- =>WM: (14500: O2058 ^name predict-no)
- =>WM: (14499: O2057 ^name predict-yes)
- =>WM: (14498: R1032 ^value 1)
- =>WM: (14497: R1 ^reward R1032)
- =>WM: (14496: I3 ^see 0)
- <=WM: (14487: S1 ^operator O2055 +)
- <=WM: (14488: S1 ^operator O2056 +)
- <=WM: (14489: S1 ^operator O2056)
- <=WM: (14482: R1 ^reward R1031)
- <=WM: (14481: I3 ^see 1)
- <=WM: (14485: O2056 ^name predict-no)
- <=WM: (14484: O2055 ^name predict-yes)
- <=WM: (14483: R1031 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2057 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2058 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2056 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2055 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (14503: S1 ^operator O2058)
- 1029: O: O2058 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1029 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1028 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14504: I3 ^predict-no N1029)
- <=WM: (14491: N1028 ^status complete)
- <=WM: (14490: I3 ^predict-no N1028)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (14508: I2 ^dir U)
- =>WM: (14507: I2 ^reward 1)
- =>WM: (14506: I2 ^see 0)
- =>WM: (14505: N1029 ^status complete)
- <=WM: (14494: I2 ^dir U)
- <=WM: (14493: I2 ^reward 1)
- <=WM: (14492: I2 ^see 0)
- =>WM: (14509: I2 ^level-1 L1-root)
- <=WM: (14495: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1033 ^value 1 +)
- (R1 ^reward R1033 +)
- Firing propose*predict-yes
- -->
- (O2059 ^name predict-yes +)
- (S1 ^operator O2059 +)
- Firing propose*predict-no
- -->
- (O2060 ^name predict-no +)
- (S1 ^operator O2060 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2058 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2057 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2058 ^name predict-no +)
- (S1 ^operator O2058 +)
- Retracting propose*predict-yes
- -->
- (O2057 ^name predict-yes +)
- (S1 ^operator O2057 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1032 ^value 1 +)
- (R1 ^reward R1032 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2058 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2057 = 0.)
- =>WM: (14515: S1 ^operator O2060 +)
- =>WM: (14514: S1 ^operator O2059 +)
- =>WM: (14513: O2060 ^name predict-no)
- =>WM: (14512: O2059 ^name predict-yes)
- =>WM: (14511: R1033 ^value 1)
- =>WM: (14510: R1 ^reward R1033)
- <=WM: (14501: S1 ^operator O2057 +)
- <=WM: (14502: S1 ^operator O2058 +)
- <=WM: (14503: S1 ^operator O2058)
- <=WM: (14497: R1 ^reward R1032)
- <=WM: (14500: O2058 ^name predict-no)
- <=WM: (14499: O2057 ^name predict-yes)
- <=WM: (14498: R1032 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2059 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2060 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2058 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2057 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (14516: S1 ^operator O2060)
- 1030: O: O2060 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1030 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1029 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14517: I3 ^predict-no N1030)
- <=WM: (14505: N1029 ^status complete)
- <=WM: (14504: I3 ^predict-no N1029)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (14521: I2 ^dir U)
- =>WM: (14520: I2 ^reward 1)
- =>WM: (14519: I2 ^see 0)
- =>WM: (14518: N1030 ^status complete)
- <=WM: (14508: I2 ^dir U)
- <=WM: (14507: I2 ^reward 1)
- <=WM: (14506: I2 ^see 0)
- =>WM: (14522: I2 ^level-1 L1-root)
- <=WM: (14509: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1034 ^value 1 +)
- (R1 ^reward R1034 +)
- Firing propose*predict-yes
- -->
- (O2061 ^name predict-yes +)
- (S1 ^operator O2061 +)
- Firing propose*predict-no
- -->
- (O2062 ^name predict-no +)
- (S1 ^operator O2062 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2060 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2059 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2060 ^name predict-no +)
- (S1 ^operator O2060 +)
- Retracting propose*predict-yes
- -->
- (O2059 ^name predict-yes +)
- (S1 ^operator O2059 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1033 ^value 1 +)
- (R1 ^reward R1033 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2060 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2059 = 0.)
- =>WM: (14528: S1 ^operator O2062 +)
- =>WM: (14527: S1 ^operator O2061 +)
- =>WM: (14526: O2062 ^name predict-no)
- =>WM: (14525: O2061 ^name predict-yes)
- =>WM: (14524: R1034 ^value 1)
- =>WM: (14523: R1 ^reward R1034)
- <=WM: (14514: S1 ^operator O2059 +)
- <=WM: (14515: S1 ^operator O2060 +)
- <=WM: (14516: S1 ^operator O2060)
- <=WM: (14510: R1 ^reward R1033)
- <=WM: (14513: O2060 ^name predict-no)
- <=WM: (14512: O2059 ^name predict-yes)
- <=WM: (14511: R1033 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2061 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2062 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2060 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2059 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (14529: S1 ^operator O2062)
- 1031: O: O2062 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1031 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1030 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14530: I3 ^predict-no N1031)
- <=WM: (14518: N1030 ^status complete)
- <=WM: (14517: I3 ^predict-no N1030)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (14534: I2 ^dir L)
- =>WM: (14533: I2 ^reward 1)
- =>WM: (14532: I2 ^see 0)
- =>WM: (14531: N1031 ^status complete)
- <=WM: (14521: I2 ^dir U)
- <=WM: (14520: I2 ^reward 1)
- <=WM: (14519: I2 ^see 0)
- =>WM: (14535: I2 ^level-1 L1-root)
- <=WM: (14522: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2062 = 0.6710530083913049)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2061 = -0.06092862110810815)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1035 ^value 1 +)
- (R1 ^reward R1035 +)
- Firing propose*predict-yes
- -->
- (O2063 ^name predict-yes +)
- (S1 ^operator O2063 +)
- Firing propose*predict-no
- -->
- (O2064 ^name predict-no +)
- (S1 ^operator O2064 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2062 = 0.3289465235752339)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2061 = 0.4318904722954759)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2062 ^name predict-no +)
- (S1 ^operator O2062 +)
- Retracting propose*predict-yes
- -->
- (O2061 ^name predict-yes +)
- (S1 ^operator O2061 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1034 ^value 1 +)
- (R1 ^reward R1034 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2062 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2061 = 0.)
- =>WM: (14542: S1 ^operator O2064 +)
- =>WM: (14541: S1 ^operator O2063 +)
- =>WM: (14540: I3 ^dir L)
- =>WM: (14539: O2064 ^name predict-no)
- =>WM: (14538: O2063 ^name predict-yes)
- =>WM: (14537: R1035 ^value 1)
- =>WM: (14536: R1 ^reward R1035)
- <=WM: (14527: S1 ^operator O2061 +)
- <=WM: (14528: S1 ^operator O2062 +)
- <=WM: (14529: S1 ^operator O2062)
- <=WM: (14486: I3 ^dir U)
- <=WM: (14523: R1 ^reward R1034)
- <=WM: (14526: O2062 ^name predict-no)
- <=WM: (14525: O2061 ^name predict-yes)
- <=WM: (14524: R1034 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2063 = -0.06092862110810815)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2063 = 0.4318904722954759)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2064 = 0.6710530083913049)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2064 = 0.3289465235752339)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2062 = 0.3289465235752339)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2062 = 0.6710530083913049)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2061 = 0.4318904722954759)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2061 = -0.06092862110810815)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (14543: S1 ^operator O2064)
- 1032: O: O2064 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1032 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1031 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14544: I3 ^predict-no N1032)
- <=WM: (14531: N1031 ^status complete)
- <=WM: (14530: I3 ^predict-no N1031)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (14548: I2 ^dir R)
- =>WM: (14547: I2 ^reward 1)
- =>WM: (14546: I2 ^see 0)
- =>WM: (14545: N1032 ^status complete)
- <=WM: (14534: I2 ^dir L)
- <=WM: (14533: I2 ^reward 1)
- <=WM: (14532: I2 ^see 0)
- =>WM: (14549: I2 ^level-1 L0-root)
- <=WM: (14535: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2064 = -0.07401383653737587)
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2063 = 0.2631733709544837)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1036 ^value 1 +)
- (R1 ^reward R1036 +)
- Firing propose*predict-yes
- -->
- (O2065 ^name predict-yes +)
- (S1 ^operator O2065 +)
- Firing propose*predict-no
- -->
- (O2066 ^name predict-no +)
- (S1 ^operator O2066 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2064 = 0.2572457013980222)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2063 = 0.736827673000625)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2064 ^name predict-no +)
- (S1 ^operator O2064 +)
- Retracting propose*predict-yes
- -->
- (O2063 ^name predict-yes +)
- (S1 ^operator O2063 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1035 ^value 1 +)
- (R1 ^reward R1035 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2064 = 0.3289465235752339)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2064 = 0.6710530083913049)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2063 = 0.4318904722954759)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2063 = -0.06092862110810815)
- =>WM: (14556: S1 ^operator O2066 +)
- =>WM: (14555: S1 ^operator O2065 +)
- =>WM: (14554: I3 ^dir R)
- =>WM: (14553: O2066 ^name predict-no)
- =>WM: (14552: O2065 ^name predict-yes)
- =>WM: (14551: R1036 ^value 1)
- =>WM: (14550: R1 ^reward R1036)
- <=WM: (14541: S1 ^operator O2063 +)
- <=WM: (14542: S1 ^operator O2064 +)
- <=WM: (14543: S1 ^operator O2064)
- <=WM: (14540: I3 ^dir L)
- <=WM: (14536: R1 ^reward R1035)
- <=WM: (14539: O2064 ^name predict-no)
- <=WM: (14538: O2063 ^name predict-yes)
- <=WM: (14537: R1035 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2065 = 0.736827673000625)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2065 = 0.2631733709544837)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2066 = 0.2572457013980222)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2066 = -0.07401383653737587)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2064 = 0.2572457013980222)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2064 = -0.07401383653737587)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2063 = 0.736827673000625)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2063 = 0.2631733709544837)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*6 0.565404 -0.236458 0.328947 -> 0.565405 -0.236458 0.328947(R,m,v=1,0.908537,0.0836077)
- RL update rl*prefer*rvt*predict-no*H0*6*v1*H1*43 0.434595 0.236458 0.671053 -> 0.434595 0.236458 0.671053(R,m,v=1,1,0)
- =>WM: (14557: S1 ^operator O2065)
- 1033: O: O2065 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N1033 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1032 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14558: I3 ^predict-yes N1033)
- <=WM: (14545: N1032 ^status complete)
- <=WM: (14544: I3 ^predict-no N1032)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (14562: I2 ^dir U)
- =>WM: (14561: I2 ^reward 1)
- =>WM: (14560: I2 ^see 1)
- =>WM: (14559: N1033 ^status complete)
- <=WM: (14548: I2 ^dir R)
- <=WM: (14547: I2 ^reward 1)
- <=WM: (14546: I2 ^see 0)
- =>WM: (14563: I2 ^level-1 R1-root)
- <=WM: (14549: I2 ^level-1 L0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1037 ^value 1 +)
- (R1 ^reward R1037 +)
- Firing propose*predict-yes
- -->
- (O2067 ^name predict-yes +)
- (S1 ^operator O2067 +)
- Firing propose*predict-no
- -->
- (O2068 ^name predict-no +)
- (S1 ^operator O2068 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2066 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2065 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2066 ^name predict-no +)
- (S1 ^operator O2066 +)
- Retracting propose*predict-yes
- -->
- (O2065 ^name predict-yes +)
- (S1 ^operator O2065 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1036 ^value 1 +)
- (R1 ^reward R1036 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2066 = -0.07401383653737587)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2066 = 0.2572457013980222)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2065 = 0.2631733709544837)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2065 = 0.736827673000625)
- =>WM: (14571: S1 ^operator O2068 +)
- =>WM: (14570: S1 ^operator O2067 +)
- =>WM: (14569: I3 ^dir U)
- =>WM: (14568: O2068 ^name predict-no)
- =>WM: (14567: O2067 ^name predict-yes)
- =>WM: (14566: R1037 ^value 1)
- =>WM: (14565: R1 ^reward R1037)
- =>WM: (14564: I3 ^see 1)
- <=WM: (14555: S1 ^operator O2065 +)
- <=WM: (14557: S1 ^operator O2065)
- <=WM: (14556: S1 ^operator O2066 +)
- <=WM: (14554: I3 ^dir R)
- <=WM: (14550: R1 ^reward R1036)
- <=WM: (14496: I3 ^see 0)
- <=WM: (14553: O2066 ^name predict-no)
- <=WM: (14552: O2065 ^name predict-yes)
- <=WM: (14551: R1036 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2067 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2068 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2066 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2065 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*3 0.748236 -0.0114083 0.736828 -> 0.748236 -0.0114084 0.736828(R,m,v=1,0.9,0.0905325)
- RL update rl*prefer*rvt*predict-yes*H0*3*v1*H1*35 0.251764 0.0114091 0.263173 -> 0.251764 0.0114089 0.263173(R,m,v=1,1,0)
- =>WM: (14572: S1 ^operator O2068)
- 1034: O: O2068 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1034 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N1033 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14573: I3 ^predict-no N1034)
- <=WM: (14559: N1033 ^status complete)
- <=WM: (14558: I3 ^predict-yes N1033)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (14577: I2 ^dir R)
- =>WM: (14576: I2 ^reward 1)
- =>WM: (14575: I2 ^see 0)
- =>WM: (14574: N1034 ^status complete)
- <=WM: (14562: I2 ^dir U)
- <=WM: (14561: I2 ^reward 1)
- <=WM: (14560: I2 ^see 1)
- =>WM: (14578: I2 ^level-1 R1-root)
- <=WM: (14563: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2067 = -0.3011268063455669)
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2068 = 0.7427530240054163)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1038 ^value 1 +)
- (R1 ^reward R1038 +)
- Firing propose*predict-yes
- -->
- (O2069 ^name predict-yes +)
- (S1 ^operator O2069 +)
- Firing propose*predict-no
- -->
- (O2070 ^name predict-no +)
- (S1 ^operator O2070 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2068 = 0.2572457013980222)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2067 = 0.7368275164073588)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O2068 ^name predict-no +)
- (S1 ^operator O2068 +)
- Retracting propose*predict-yes
- -->
- (O2067 ^name predict-yes +)
- (S1 ^operator O2067 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1037 ^value 1 +)
- (R1 ^reward R1037 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2068 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2067 = 0.)
- =>WM: (14586: S1 ^operator O2070 +)
- =>WM: (14585: S1 ^operator O2069 +)
- =>WM: (14584: I3 ^dir R)
- =>WM: (14583: O2070 ^name predict-no)
- =>WM: (14582: O2069 ^name predict-yes)
- =>WM: (14581: R1038 ^value 1)
- =>WM: (14580: R1 ^reward R1038)
- =>WM: (14579: I3 ^see 0)
- <=WM: (14570: S1 ^operator O2067 +)
- <=WM: (14571: S1 ^operator O2068 +)
- <=WM: (14572: S1 ^operator O2068)
- <=WM: (14569: I3 ^dir U)
- <=WM: (14565: R1 ^reward R1037)
- <=WM: (14564: I3 ^see 1)
- <=WM: (14568: O2068 ^name predict-no)
- <=WM: (14567: O2067 ^name predict-yes)
- <=WM: (14566: R1037 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2069 = -0.3011268063455669)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2069 = 0.7368275164073588)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2070 = 0.7427530240054163)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2070 = 0.2572457013980222)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2068 = 0.2572457013980222)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2068 = 0.7427530240054163)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2067 = 0.7368275164073588)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2067 = -0.3011268063455669)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (14587: S1 ^operator O2070)
- 1035: O: O2070 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1035 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1034 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14588: I3 ^predict-no N1035)
- <=WM: (14574: N1034 ^status complete)
- <=WM: (14573: I3 ^predict-no N1034)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- \-/--- Input Phase ---
- =>WM: (14592: I2 ^dir R)
- =>WM: (14591: I2 ^reward 1)
- =>WM: (14590: I2 ^see 0)
- =>WM: (14589: N1035 ^status complete)
- <=WM: (14577: I2 ^dir R)
- <=WM: (14576: I2 ^reward 1)
- <=WM: (14575: I2 ^see 0)
- =>WM: (14593: I2 ^level-1 R0-root)
- <=WM: (14578: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O2070 = 0.7427577483359151)
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O2069 = -0.1989581826229297)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1039 ^value 1 +)
- (R1 ^reward R1039 +)
- Firing propose*predict-yes
- -->
- (O2071 ^name predict-yes +)
- (S1 ^operator O2071 +)
- Firing propose*predict-no
- -->
- (O2072 ^name predict-no +)
- (S1 ^operator O2072 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2070 = 0.2572457013980222)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2069 = 0.7368275164073588)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2070 ^name predict-no +)
- (S1 ^operator O2070 +)
- Retracting propose*predict-yes
- -->
- (O2069 ^name predict-yes +)
- (S1 ^operator O2069 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1038 ^value 1 +)
- (R1 ^reward R1038 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2070 = 0.2572457013980222)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2070 = 0.7427530240054163)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2069 = 0.7368275164073588)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2069 = -0.3011268063455669)
- =>WM: (14599: S1 ^operator O2072 +)
- =>WM: (14598: S1 ^operator O2071 +)
- =>WM: (14597: O2072 ^name predict-no)
- =>WM: (14596: O2071 ^name predict-yes)
- =>WM: (14595: R1039 ^value 1)
- =>WM: (14594: R1 ^reward R1039)
- <=WM: (14585: S1 ^operator O2069 +)
- <=WM: (14586: S1 ^operator O2070 +)
- <=WM: (14587: S1 ^operator O2070)
- <=WM: (14580: R1 ^reward R1038)
- <=WM: (14583: O2070 ^name predict-no)
- <=WM: (14582: O2069 ^name predict-yes)
- <=WM: (14581: R1038 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2071 = 0.7368275164073588)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O2071 = -0.1989581826229297)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2072 = 0.2572457013980222)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O2072 = 0.7427577483359151)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2070 = 0.2572457013980222)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O2070 = 0.7427577483359151)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2069 = 0.7368275164073588)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O2069 = -0.1989581826229297)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*4 0.586136 -0.32889 0.257246 -> 0.586136 -0.32889 0.257246(R,m,v=1,0.863636,0.118442)
- RL update rl*prefer*rvt*predict-no*H0*4*v1*H1*36 0.413863 0.32889 0.742753 -> 0.413863 0.32889 0.742753(R,m,v=1,1,0)
- =>WM: (14600: S1 ^operator O2072)
- 1036: O: O2072 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1036 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1035 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14601: I3 ^predict-no N1036)
- <=WM: (14589: N1035 ^status complete)
- <=WM: (14588: I3 ^predict-no N1035)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- |\---- Input Phase ---
- =>WM: (14605: I2 ^dir U)
- =>WM: (14604: I2 ^reward 1)
- =>WM: (14603: I2 ^see 0)
- =>WM: (14602: N1036 ^status complete)
- <=WM: (14592: I2 ^dir R)
- <=WM: (14591: I2 ^reward 1)
- <=WM: (14590: I2 ^see 0)
- =>WM: (14606: I2 ^level-1 R0-root)
- <=WM: (14593: I2 ^level-1 R0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1040 ^value 1 +)
- (R1 ^reward R1040 +)
- Firing propose*predict-yes
- -->
- (O2073 ^name predict-yes +)
- (S1 ^operator O2073 +)
- Firing propose*predict-no
- -->
- (O2074 ^name predict-no +)
- (S1 ^operator O2074 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2072 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2071 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2072 ^name predict-no +)
- (S1 ^operator O2072 +)
- Retracting propose*predict-yes
- -->
- (O2071 ^name predict-yes +)
- (S1 ^operator O2071 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1039 ^value 1 +)
- (R1 ^reward R1039 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O2072 = 0.7427577483359151)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2072 = 0.2572458925875065)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O2071 = -0.1989581826229297)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2071 = 0.7368275164073588)
- =>WM: (14613: S1 ^operator O2074 +)
- =>WM: (14612: S1 ^operator O2073 +)
- =>WM: (14611: I3 ^dir U)
- =>WM: (14610: O2074 ^name predict-no)
- =>WM: (14609: O2073 ^name predict-yes)
- =>WM: (14608: R1040 ^value 1)
- =>WM: (14607: R1 ^reward R1040)
- <=WM: (14598: S1 ^operator O2071 +)
- <=WM: (14599: S1 ^operator O2072 +)
- <=WM: (14600: S1 ^operator O2072)
- <=WM: (14584: I3 ^dir R)
- <=WM: (14594: R1 ^reward R1039)
- <=WM: (14597: O2072 ^name predict-no)
- <=WM: (14596: O2071 ^name predict-yes)
- <=WM: (14595: R1039 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2073 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2074 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2072 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2071 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*4 0.586136 -0.32889 0.257246 -> 0.586136 -0.32889 0.257245(R,m,v=1,0.864407,0.117874)
- RL update rl*prefer*rvt*predict-no*H0*4*v1*H1*41 0.413867 0.328891 0.742758 -> 0.413866 0.328891 0.742757(R,m,v=1,1,0)
- =>WM: (14614: S1 ^operator O2074)
- 1037: O: O2074 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1037 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1036 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14615: I3 ^predict-no N1037)
- <=WM: (14602: N1036 ^status complete)
- <=WM: (14601: I3 ^predict-no N1036)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- /|--- Input Phase ---
- =>WM: (14619: I2 ^dir R)
- =>WM: (14618: I2 ^reward 1)
- =>WM: (14617: I2 ^see 0)
- =>WM: (14616: N1037 ^status complete)
- <=WM: (14605: I2 ^dir U)
- <=WM: (14604: I2 ^reward 1)
- <=WM: (14603: I2 ^see 0)
- =>WM: (14620: I2 ^level-1 R0-root)
- <=WM: (14606: I2 ^level-1 R0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O2074 = 0.7427572021974018)
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O2073 = -0.1989581826229297)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1041 ^value 1 +)
- (R1 ^reward R1041 +)
- Firing propose*predict-yes
- -->
- (O2075 ^name predict-yes +)
- (S1 ^operator O2075 +)
- Firing propose*predict-no
- -->
- (O2076 ^name predict-no +)
- (S1 ^operator O2076 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2074 = 0.2572453464489932)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2073 = 0.7368275164073588)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2074 ^name predict-no +)
- (S1 ^operator O2074 +)
- Retracting propose*predict-yes
- -->
- (O2073 ^name predict-yes +)
- (S1 ^operator O2073 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1040 ^value 1 +)
- (R1 ^reward R1040 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2074 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2073 = 0.)
- =>WM: (14627: S1 ^operator O2076 +)
- =>WM: (14626: S1 ^operator O2075 +)
- =>WM: (14625: I3 ^dir R)
- =>WM: (14624: O2076 ^name predict-no)
- =>WM: (14623: O2075 ^name predict-yes)
- =>WM: (14622: R1041 ^value 1)
- =>WM: (14621: R1 ^reward R1041)
- <=WM: (14612: S1 ^operator O2073 +)
- <=WM: (14613: S1 ^operator O2074 +)
- <=WM: (14614: S1 ^operator O2074)
- <=WM: (14611: I3 ^dir U)
- <=WM: (14607: R1 ^reward R1040)
- <=WM: (14610: O2074 ^name predict-no)
- <=WM: (14609: O2073 ^name predict-yes)
- <=WM: (14608: R1040 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O2075 = -0.1989581826229297)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2075 = 0.7368275164073588)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O2076 = 0.7427572021974018)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2076 = 0.2572453464489932)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2074 = 0.2572453464489932)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O2074 = 0.7427572021974018)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2073 = 0.7368275164073588)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O2073 = -0.1989581826229297)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (14628: S1 ^operator O2076)
- 1038: O: O2076 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1038 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1037 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14629: I3 ^predict-no N1038)
- <=WM: (14616: N1037 ^status complete)
- <=WM: (14615: I3 ^predict-no N1037)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- \---- Input Phase ---
- =>WM: (14633: I2 ^dir R)
- =>WM: (14632: I2 ^reward 1)
- =>WM: (14631: I2 ^see 0)
- =>WM: (14630: N1038 ^status complete)
- <=WM: (14619: I2 ^dir R)
- <=WM: (14618: I2 ^reward 1)
- <=WM: (14617: I2 ^see 0)
- =>WM: (14634: I2 ^level-1 R0-root)
- <=WM: (14620: I2 ^level-1 R0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O2076 = 0.7427572021974018)
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O2075 = -0.1989581826229297)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1042 ^value 1 +)
- (R1 ^reward R1042 +)
- Firing propose*predict-yes
- -->
- (O2077 ^name predict-yes +)
- (S1 ^operator O2077 +)
- Firing propose*predict-no
- -->
- (O2078 ^name predict-no +)
- (S1 ^operator O2078 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2076 = 0.2572453464489932)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2075 = 0.7368275164073588)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2076 ^name predict-no +)
- (S1 ^operator O2076 +)
- Retracting propose*predict-yes
- -->
- (O2075 ^name predict-yes +)
- (S1 ^operator O2075 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1041 ^value 1 +)
- (R1 ^reward R1041 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2076 = 0.2572453464489932)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O2076 = 0.7427572021974018)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2075 = 0.7368275164073588)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O2075 = -0.1989581826229297)
- =>WM: (14640: S1 ^operator O2078 +)
- =>WM: (14639: S1 ^operator O2077 +)
- =>WM: (14638: O2078 ^name predict-no)
- =>WM: (14637: O2077 ^name predict-yes)
- =>WM: (14636: R1042 ^value 1)
- =>WM: (14635: R1 ^reward R1042)
- <=WM: (14626: S1 ^operator O2075 +)
- <=WM: (14627: S1 ^operator O2076 +)
- <=WM: (14628: S1 ^operator O2076)
- <=WM: (14621: R1 ^reward R1041)
- <=WM: (14624: O2076 ^name predict-no)
- <=WM: (14623: O2075 ^name predict-yes)
- <=WM: (14622: R1041 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O2077 = -0.1989581826229297)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2077 = 0.7368275164073588)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O2078 = 0.7427572021974018)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2078 = 0.2572453464489932)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2076 = 0.2572453464489932)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O2076 = 0.7427572021974018)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2075 = 0.7368275164073588)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O2075 = -0.1989581826229297)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*4 0.586136 -0.32889 0.257245 -> 0.586135 -0.32889 0.257245(R,m,v=1,0.865169,0.117311)
- RL update rl*prefer*rvt*predict-no*H0*4*v1*H1*41 0.413866 0.328891 0.742757 -> 0.413866 0.328891 0.742757(R,m,v=1,1,0)
- =>WM: (14641: S1 ^operator O2078)
- 1039: O: O2078 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1039 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1038 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14642: I3 ^predict-no N1039)
- <=WM: (14630: N1038 ^status complete)
- <=WM: (14629: I3 ^predict-no N1038)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- /--- Input Phase ---
- =>WM: (14646: I2 ^dir R)
- =>WM: (14645: I2 ^reward 1)
- =>WM: (14644: I2 ^see 0)
- =>WM: (14643: N1039 ^status complete)
- <=WM: (14633: I2 ^dir R)
- <=WM: (14632: I2 ^reward 1)
- <=WM: (14631: I2 ^see 0)
- =>WM: (14647: I2 ^level-1 R0-root)
- <=WM: (14634: I2 ^level-1 R0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O2078 = 0.7427568199004426)
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O2077 = -0.1989581826229297)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1043 ^value 1 +)
- (R1 ^reward R1043 +)
- Firing propose*predict-yes
- -->
- (O2079 ^name predict-yes +)
- (S1 ^operator O2079 +)
- Firing propose*predict-no
- -->
- (O2080 ^name predict-no +)
- (S1 ^operator O2080 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2078 = 0.2572449641520339)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2077 = 0.7368275164073588)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2078 ^name predict-no +)
- (S1 ^operator O2078 +)
- Retracting propose*predict-yes
- -->
- (O2077 ^name predict-yes +)
- (S1 ^operator O2077 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1042 ^value 1 +)
- (R1 ^reward R1042 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2078 = 0.2572449641520339)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O2078 = 0.7427568199004426)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2077 = 0.7368275164073588)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O2077 = -0.1989581826229297)
- =>WM: (14653: S1 ^operator O2080 +)
- =>WM: (14652: S1 ^operator O2079 +)
- =>WM: (14651: O2080 ^name predict-no)
- =>WM: (14650: O2079 ^name predict-yes)
- =>WM: (14649: R1043 ^value 1)
- =>WM: (14648: R1 ^reward R1043)
- <=WM: (14639: S1 ^operator O2077 +)
- <=WM: (14640: S1 ^operator O2078 +)
- <=WM: (14641: S1 ^operator O2078)
- <=WM: (14635: R1 ^reward R1042)
- <=WM: (14638: O2078 ^name predict-no)
- <=WM: (14637: O2077 ^name predict-yes)
- <=WM: (14636: R1042 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O2079 = -0.1989581826229297)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2079 = 0.7368275164073588)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O2080 = 0.7427568199004426)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2080 = 0.2572449641520339)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2078 = 0.2572449641520339)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O2078 = 0.7427568199004426)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2077 = 0.7368275164073588)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O2077 = -0.1989581826229297)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*4 0.586135 -0.32889 0.257245 -> 0.586135 -0.32889 0.257245(R,m,v=1,0.865922,0.116753)
- RL update rl*prefer*rvt*predict-no*H0*4*v1*H1*41 0.413866 0.328891 0.742757 -> 0.413866 0.328891 0.742757(R,m,v=1,1,0)
- =>WM: (14654: S1 ^operator O2080)
- 1040: O: O2080 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1040 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1039 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14655: I3 ^predict-no N1040)
- <=WM: (14643: N1039 ^status complete)
- <=WM: (14642: I3 ^predict-no N1039)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- |\---- Input Phase ---
- =>WM: (14659: I2 ^dir U)
- =>WM: (14658: I2 ^reward 1)
- =>WM: (14657: I2 ^see 0)
- =>WM: (14656: N1040 ^status complete)
- <=WM: (14646: I2 ^dir R)
- <=WM: (14645: I2 ^reward 1)
- <=WM: (14644: I2 ^see 0)
- =>WM: (14660: I2 ^level-1 R0-root)
- <=WM: (14647: I2 ^level-1 R0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1044 ^value 1 +)
- (R1 ^reward R1044 +)
- Firing propose*predict-yes
- -->
- (O2081 ^name predict-yes +)
- (S1 ^operator O2081 +)
- Firing propose*predict-no
- -->
- (O2082 ^name predict-no +)
- (S1 ^operator O2082 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2080 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2079 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2080 ^name predict-no +)
- (S1 ^operator O2080 +)
- Retracting propose*predict-yes
- -->
- (O2079 ^name predict-yes +)
- (S1 ^operator O2079 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1043 ^value 1 +)
- (R1 ^reward R1043 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2080 = 0.2572446965441624)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O2080 = 0.7427565522925711)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2079 = 0.7368275164073588)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O2079 = -0.1989581826229297)
- =>WM: (14667: S1 ^operator O2082 +)
- =>WM: (14666: S1 ^operator O2081 +)
- =>WM: (14665: I3 ^dir U)
- =>WM: (14664: O2082 ^name predict-no)
- =>WM: (14663: O2081 ^name predict-yes)
- =>WM: (14662: R1044 ^value 1)
- =>WM: (14661: R1 ^reward R1044)
- <=WM: (14652: S1 ^operator O2079 +)
- <=WM: (14653: S1 ^operator O2080 +)
- <=WM: (14654: S1 ^operator O2080)
- <=WM: (14625: I3 ^dir R)
- <=WM: (14648: R1 ^reward R1043)
- <=WM: (14651: O2080 ^name predict-no)
- <=WM: (14650: O2079 ^name predict-yes)
- <=WM: (14649: R1043 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2081 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2082 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2080 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2079 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*4 0.586135 -0.32889 0.257245 -> 0.586135 -0.32889 0.257245(R,m,v=1,0.866667,0.116201)
- RL update rl*prefer*rvt*predict-no*H0*4*v1*H1*41 0.413866 0.328891 0.742757 -> 0.413866 0.328891 0.742756(R,m,v=1,1,0)
- =>WM: (14668: S1 ^operator O2082)
- 1041: O: O2082 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1041 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1040 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14669: I3 ^predict-no N1041)
- <=WM: (14656: N1040 ^status complete)
- <=WM: (14655: I3 ^predict-no N1040)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- /--- Input Phase ---
- =>WM: (14673: I2 ^dir L)
- =>WM: (14672: I2 ^reward 1)
- =>WM: (14671: I2 ^see 0)
- =>WM: (14670: N1041 ^status complete)
- <=WM: (14659: I2 ^dir U)
- <=WM: (14658: I2 ^reward 1)
- <=WM: (14657: I2 ^see 0)
- =>WM: (14674: I2 ^level-1 R0-root)
- <=WM: (14660: I2 ^level-1 R0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2082 = 0.04178081990804111)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2081 = 0.568110500585707)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1045 ^value 1 +)
- (R1 ^reward R1045 +)
- Firing propose*predict-yes
- -->
- (O2083 ^name predict-yes +)
- (S1 ^operator O2083 +)
- Firing propose*predict-no
- -->
- (O2084 ^name predict-no +)
- (S1 ^operator O2084 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2082 = 0.328946593780253)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2081 = 0.4318904722954759)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2082 ^name predict-no +)
- (S1 ^operator O2082 +)
- Retracting propose*predict-yes
- -->
- (O2081 ^name predict-yes +)
- (S1 ^operator O2081 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1044 ^value 1 +)
- (R1 ^reward R1044 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2082 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2081 = 0.)
- =>WM: (14681: S1 ^operator O2084 +)
- =>WM: (14680: S1 ^operator O2083 +)
- =>WM: (14679: I3 ^dir L)
- =>WM: (14678: O2084 ^name predict-no)
- =>WM: (14677: O2083 ^name predict-yes)
- =>WM: (14676: R1045 ^value 1)
- =>WM: (14675: R1 ^reward R1045)
- <=WM: (14666: S1 ^operator O2081 +)
- <=WM: (14667: S1 ^operator O2082 +)
- <=WM: (14668: S1 ^operator O2082)
- <=WM: (14665: I3 ^dir U)
- <=WM: (14661: R1 ^reward R1044)
- <=WM: (14664: O2082 ^name predict-no)
- <=WM: (14663: O2081 ^name predict-yes)
- <=WM: (14662: R1044 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2083 = 0.568110500585707)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2083 = 0.4318904722954759)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2084 = 0.04178081990804111)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2084 = 0.328946593780253)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2082 = 0.328946593780253)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2082 = 0.04178081990804111)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2081 = 0.4318904722954759)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2081 = 0.568110500585707)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (14682: S1 ^operator O2083)
- 1042: O: O2083 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N1042 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1041 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14683: I3 ^predict-yes N1042)
- <=WM: (14670: N1041 ^status complete)
- <=WM: (14669: I3 ^predict-no N1041)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
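The ENV lines throughout this trace are consistent with a simple two-state world. A minimal reconstruction, inferred only from the observed transitions (an assumption, not the actual flip environment code), might be:

```python
# Hedged sketch of the two-state "flip" environment implied by the ENV lines:
# moving R out of State-A flips to State-B, L out of State-B flips to State-A,
# and every other (state, direction) pair is a no-op. The emitted observation
# bit is 1 on a flip and 0 otherwise.
def step(state, direction):
    if direction == "R" and state == "State-A":
        return "State-B", 1
    if direction == "L" and state == "State-B":
        return "State-A", 1
    return state, 0
```

Every ENV transition in this section (e.g. "In State-B moving L" yielding (State-A, 1)) matches this table.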
- --- Input Phase ---
- =>WM: (14687: I2 ^dir L)
- =>WM: (14686: I2 ^reward 1)
- =>WM: (14685: I2 ^see 1)
- =>WM: (14684: N1042 ^status complete)
- <=WM: (14673: I2 ^dir L)
- <=WM: (14672: I2 ^reward 1)
- <=WM: (14671: I2 ^see 0)
- =>WM: (14688: I2 ^level-1 L1-root)
- <=WM: (14674: I2 ^level-1 R0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2084 = 0.671053078596324)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2083 = -0.06092862110810815)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1046 ^value 1 +)
- (R1 ^reward R1046 +)
- Firing propose*predict-yes
- -->
- (O2085 ^name predict-yes +)
- (S1 ^operator O2085 +)
- Firing propose*predict-no
- -->
- (O2086 ^name predict-no +)
- (S1 ^operator O2086 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2084 = 0.328946593780253)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2083 = 0.4318904722954759)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2084 ^name predict-no +)
- (S1 ^operator O2084 +)
- Retracting propose*predict-yes
- -->
- (O2083 ^name predict-yes +)
- (S1 ^operator O2083 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1045 ^value 1 +)
- (R1 ^reward R1045 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2084 = 0.328946593780253)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2084 = 0.04178081990804111)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2083 = 0.4318904722954759)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2083 = 0.568110500585707)
- =>WM: (14695: S1 ^operator O2086 +)
- =>WM: (14694: S1 ^operator O2085 +)
- =>WM: (14693: O2086 ^name predict-no)
- =>WM: (14692: O2085 ^name predict-yes)
- =>WM: (14691: R1046 ^value 1)
- =>WM: (14690: R1 ^reward R1046)
- =>WM: (14689: I3 ^see 1)
- <=WM: (14680: S1 ^operator O2083 +)
- <=WM: (14682: S1 ^operator O2083)
- <=WM: (14681: S1 ^operator O2084 +)
- <=WM: (14675: R1 ^reward R1045)
- <=WM: (14579: I3 ^see 0)
- <=WM: (14678: O2084 ^name predict-no)
- <=WM: (14677: O2083 ^name predict-yes)
- <=WM: (14676: R1045 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2085 = 0.4318904722954759)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2085 = -0.06092862110810815)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2086 = 0.328946593780253)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2086 = 0.671053078596324)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2084 = 0.328946593780253)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2084 = 0.671053078596324)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2083 = 0.4318904722954759)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2083 = -0.06092862110810815)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*5 0.683777 -0.251886 0.43189 -> 0.683776 -0.251886 0.43189(R,m,v=1,0.925714,0.0691626)
- RL update rl*prefer*rvt*predict-yes*H0*5*v1*H1*30 0.316224 0.251886 0.568111 -> 0.316224 0.251886 0.56811(R,m,v=1,1,0)
- =>WM: (14696: S1 ^operator O2086)
- 1043: O: O2086 (predict-no)
- --- END Decision Phase ---
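The paired "RL update" lines in the decision phase above adjust two RL rules (an H0 rule and an H0*v1*H1 rule) whose numeric preferences sum to the operator's value (0.43189 + 0.568111 is roughly 1.0). A hedged sketch of that style of update follows; the equal split, alpha, and gamma are illustrative assumptions, not values read from this log.

```python
# Hedged sketch of a summed-value RL update across the rules backing one
# operator: the operator's Q-value is the SUM of its rules' numeric
# preferences, and the TD delta is apportioned evenly among those rules.
# alpha=0.3 and gamma=0.9 are illustrative defaults, not taken from this run.
def soar_rl_update(rule_values, reward, next_value, alpha=0.3, gamma=0.9):
    """Return updated numeric preferences for the rules behind one operator."""
    q = sum(rule_values)                      # summed value, ~1.0 in this log
    delta = reward + gamma * next_value - q   # TD error for the transition
    share = alpha * delta / len(rule_values)  # equal apportionment per rule
    return [v + share for v in rule_values]

# Values from the firing trace above; with reward 1 the delta is tiny, which
# matches the near-unchanged numbers printed in the RL update lines.
updated = soar_rl_update([0.4318904722954759, 0.568110500585707], 1.0, 0.0)
```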
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1043 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N1042 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14697: I3 ^predict-no N1043)
- <=WM: (14684: N1042 ^status complete)
- <=WM: (14683: I3 ^predict-yes N1042)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (14701: I2 ^dir R)
- =>WM: (14700: I2 ^reward 1)
- =>WM: (14699: I2 ^see 0)
- =>WM: (14698: N1043 ^status complete)
- <=WM: (14687: I2 ^dir L)
- <=WM: (14686: I2 ^reward 1)
- <=WM: (14685: I2 ^see 1)
- =>WM: (14702: I2 ^level-1 L0-root)
- <=WM: (14688: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2086 = -0.07401383653737587)
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2085 = 0.2631732143612174)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1047 ^value 1 +)
- (R1 ^reward R1047 +)
- Firing propose*predict-yes
- -->
- (O2087 ^name predict-yes +)
- (S1 ^operator O2087 +)
- Firing propose*predict-no
- -->
- (O2088 ^name predict-no +)
- (S1 ^operator O2088 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2086 = 0.2572445092186524)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2085 = 0.7368275164073588)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O2086 ^name predict-no +)
- (S1 ^operator O2086 +)
- Retracting propose*predict-yes
- -->
- (O2085 ^name predict-yes +)
- (S1 ^operator O2085 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1046 ^value 1 +)
- (R1 ^reward R1046 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2086 = 0.671053078596324)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2086 = 0.328946593780253)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2085 = -0.06092862110810815)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2085 = 0.4318903263632984)
- =>WM: (14710: S1 ^operator O2088 +)
- =>WM: (14709: S1 ^operator O2087 +)
- =>WM: (14708: I3 ^dir R)
- =>WM: (14707: O2088 ^name predict-no)
- =>WM: (14706: O2087 ^name predict-yes)
- =>WM: (14705: R1047 ^value 1)
- =>WM: (14704: R1 ^reward R1047)
- =>WM: (14703: I3 ^see 0)
- <=WM: (14694: S1 ^operator O2085 +)
- <=WM: (14695: S1 ^operator O2086 +)
- <=WM: (14696: S1 ^operator O2086)
- <=WM: (14679: I3 ^dir L)
- <=WM: (14690: R1 ^reward R1046)
- <=WM: (14689: I3 ^see 1)
- <=WM: (14693: O2086 ^name predict-no)
- <=WM: (14692: O2085 ^name predict-yes)
- <=WM: (14691: R1046 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2087 = 0.7368275164073588)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2087 = 0.2631732143612174)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2088 = 0.2572445092186524)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2088 = -0.07401383653737587)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2086 = 0.2572445092186524)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2086 = -0.07401383653737587)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2085 = 0.7368275164073588)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2085 = 0.2631732143612174)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*6 0.565405 -0.236458 0.328947 -> 0.565405 -0.236458 0.328947(R,m,v=1,0.909091,0.0831486)
- RL update rl*prefer*rvt*predict-no*H0*6*v1*H1*43 0.434595 0.236458 0.671053 -> 0.434595 0.236458 0.671053(R,m,v=1,1,0)
- =>WM: (14711: S1 ^operator O2087)
- 1044: O: O2087 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N1044 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1043 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14712: I3 ^predict-yes N1044)
- <=WM: (14698: N1043 ^status complete)
- <=WM: (14697: I3 ^predict-no N1043)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (14716: I2 ^dir U)
- =>WM: (14715: I2 ^reward 1)
- =>WM: (14714: I2 ^see 1)
- =>WM: (14713: N1044 ^status complete)
- <=WM: (14701: I2 ^dir R)
- <=WM: (14700: I2 ^reward 1)
- <=WM: (14699: I2 ^see 0)
- =>WM: (14717: I2 ^level-1 R1-root)
- <=WM: (14702: I2 ^level-1 L0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1048 ^value 1 +)
- (R1 ^reward R1048 +)
- Firing propose*predict-yes
- -->
- (O2089 ^name predict-yes +)
- (S1 ^operator O2089 +)
- Firing propose*predict-no
- -->
- (O2090 ^name predict-no +)
- (S1 ^operator O2090 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2088 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2087 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2088 ^name predict-no +)
- (S1 ^operator O2088 +)
- Retracting propose*predict-yes
- -->
- (O2087 ^name predict-yes +)
- (S1 ^operator O2087 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1047 ^value 1 +)
- (R1 ^reward R1047 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2088 = -0.07401383653737587)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2088 = 0.2572445092186524)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2087 = 0.2631732143612174)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2087 = 0.7368275164073588)
- =>WM: (14725: S1 ^operator O2090 +)
- =>WM: (14724: S1 ^operator O2089 +)
- =>WM: (14723: I3 ^dir U)
- =>WM: (14722: O2090 ^name predict-no)
- =>WM: (14721: O2089 ^name predict-yes)
- =>WM: (14720: R1048 ^value 1)
- =>WM: (14719: R1 ^reward R1048)
- =>WM: (14718: I3 ^see 1)
- <=WM: (14709: S1 ^operator O2087 +)
- <=WM: (14711: S1 ^operator O2087)
- <=WM: (14710: S1 ^operator O2088 +)
- <=WM: (14708: I3 ^dir R)
- <=WM: (14704: R1 ^reward R1047)
- <=WM: (14703: I3 ^see 0)
- <=WM: (14707: O2088 ^name predict-no)
- <=WM: (14706: O2087 ^name predict-yes)
- <=WM: (14705: R1047 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2089 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2090 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2088 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2087 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*3 0.748236 -0.0114084 0.736828 -> 0.748236 -0.0114085 0.736827(R,m,v=1,0.900585,0.0900585)
- RL update rl*prefer*rvt*predict-yes*H0*3*v1*H1*35 0.251764 0.0114089 0.263173 -> 0.251764 0.0114089 0.263173(R,m,v=1,1,0)
- =>WM: (14726: S1 ^operator O2090)
- 1045: O: O2090 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1045 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N1044 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14727: I3 ^predict-no N1045)
- <=WM: (14713: N1044 ^status complete)
- <=WM: (14712: I3 ^predict-yes N1044)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (14731: I2 ^dir U)
- =>WM: (14730: I2 ^reward 1)
- =>WM: (14729: I2 ^see 0)
- =>WM: (14728: N1045 ^status complete)
- <=WM: (14716: I2 ^dir U)
- <=WM: (14715: I2 ^reward 1)
- <=WM: (14714: I2 ^see 1)
- =>WM: (14732: I2 ^level-1 R1-root)
- <=WM: (14717: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1049 ^value 1 +)
- (R1 ^reward R1049 +)
- Firing propose*predict-yes
- -->
- (O2091 ^name predict-yes +)
- (S1 ^operator O2091 +)
- Firing propose*predict-no
- -->
- (O2092 ^name predict-no +)
- (S1 ^operator O2092 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2090 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2089 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O2090 ^name predict-no +)
- (S1 ^operator O2090 +)
- Retracting propose*predict-yes
- -->
- (O2089 ^name predict-yes +)
- (S1 ^operator O2089 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1048 ^value 1 +)
- (R1 ^reward R1048 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2090 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2089 = 0.)
- =>WM: (14739: S1 ^operator O2092 +)
- =>WM: (14738: S1 ^operator O2091 +)
- =>WM: (14737: O2092 ^name predict-no)
- =>WM: (14736: O2091 ^name predict-yes)
- =>WM: (14735: R1049 ^value 1)
- =>WM: (14734: R1 ^reward R1049)
- =>WM: (14733: I3 ^see 0)
- <=WM: (14724: S1 ^operator O2089 +)
- <=WM: (14725: S1 ^operator O2090 +)
- <=WM: (14726: S1 ^operator O2090)
- <=WM: (14719: R1 ^reward R1048)
- <=WM: (14718: I3 ^see 1)
- <=WM: (14722: O2090 ^name predict-no)
- <=WM: (14721: O2089 ^name predict-yes)
- <=WM: (14720: R1048 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2091 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2092 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2090 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2089 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (14740: S1 ^operator O2092)
- 1046: O: O2092 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1046 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1045 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14741: I3 ^predict-no N1046)
- <=WM: (14728: N1045 ^status complete)
- <=WM: (14727: I3 ^predict-no N1045)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (14745: I2 ^dir R)
- =>WM: (14744: I2 ^reward 1)
- =>WM: (14743: I2 ^see 0)
- =>WM: (14742: N1046 ^status complete)
- <=WM: (14731: I2 ^dir U)
- <=WM: (14730: I2 ^reward 1)
- <=WM: (14729: I2 ^see 0)
- =>WM: (14746: I2 ^level-1 R1-root)
- <=WM: (14732: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2091 = -0.3011268063455669)
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2092 = 0.7427532151949006)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1050 ^value 1 +)
- (R1 ^reward R1050 +)
- Firing propose*predict-yes
- -->
- (O2093 ^name predict-yes +)
- (S1 ^operator O2093 +)
- Firing propose*predict-no
- -->
- (O2094 ^name predict-no +)
- (S1 ^operator O2094 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2092 = 0.2572445092186524)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2091 = 0.7368274067920724)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2092 ^name predict-no +)
- (S1 ^operator O2092 +)
- Retracting propose*predict-yes
- -->
- (O2091 ^name predict-yes +)
- (S1 ^operator O2091 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1049 ^value 1 +)
- (R1 ^reward R1049 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2092 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2091 = 0.)
- =>WM: (14753: S1 ^operator O2094 +)
- =>WM: (14752: S1 ^operator O2093 +)
- =>WM: (14751: I3 ^dir R)
- =>WM: (14750: O2094 ^name predict-no)
- =>WM: (14749: O2093 ^name predict-yes)
- =>WM: (14748: R1050 ^value 1)
- =>WM: (14747: R1 ^reward R1050)
- <=WM: (14738: S1 ^operator O2091 +)
- <=WM: (14739: S1 ^operator O2092 +)
- <=WM: (14740: S1 ^operator O2092)
- <=WM: (14723: I3 ^dir U)
- <=WM: (14734: R1 ^reward R1049)
- <=WM: (14737: O2092 ^name predict-no)
- <=WM: (14736: O2091 ^name predict-yes)
- <=WM: (14735: R1049 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2093 = -0.3011268063455669)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2093 = 0.7368274067920724)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2094 = 0.7427532151949006)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2094 = 0.2572445092186524)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2092 = 0.2572445092186524)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2092 = 0.7427532151949006)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2091 = 0.7368274067920724)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2091 = -0.3011268063455669)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (14754: S1 ^operator O2094)
- 1047: O: O2094 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1047 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1046 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14755: I3 ^predict-no N1047)
- <=WM: (14742: N1046 ^status complete)
- <=WM: (14741: I3 ^predict-no N1046)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (14759: I2 ^dir R)
- =>WM: (14758: I2 ^reward 1)
- =>WM: (14757: I2 ^see 0)
- =>WM: (14756: N1047 ^status complete)
- <=WM: (14745: I2 ^dir R)
- <=WM: (14744: I2 ^reward 1)
- <=WM: (14743: I2 ^see 0)
- =>WM: (14760: I2 ^level-1 R0-root)
- <=WM: (14746: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O2094 = 0.7427563649670611)
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O2093 = -0.1989581826229297)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1051 ^value 1 +)
- (R1 ^reward R1051 +)
- Firing propose*predict-yes
- -->
- (O2095 ^name predict-yes +)
- (S1 ^operator O2095 +)
- Firing propose*predict-no
- -->
- (O2096 ^name predict-no +)
- (S1 ^operator O2096 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2094 = 0.2572445092186524)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2093 = 0.7368274067920724)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2094 ^name predict-no +)
- (S1 ^operator O2094 +)
- Retracting propose*predict-yes
- -->
- (O2093 ^name predict-yes +)
- (S1 ^operator O2093 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1050 ^value 1 +)
- (R1 ^reward R1050 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2094 = 0.2572445092186524)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2094 = 0.7427532151949006)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2093 = 0.7368274067920724)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2093 = -0.3011268063455669)
- =>WM: (14766: S1 ^operator O2096 +)
- =>WM: (14765: S1 ^operator O2095 +)
- =>WM: (14764: O2096 ^name predict-no)
- =>WM: (14763: O2095 ^name predict-yes)
- =>WM: (14762: R1051 ^value 1)
- =>WM: (14761: R1 ^reward R1051)
- <=WM: (14752: S1 ^operator O2093 +)
- <=WM: (14753: S1 ^operator O2094 +)
- <=WM: (14754: S1 ^operator O2094)
- <=WM: (14747: R1 ^reward R1050)
- <=WM: (14750: O2094 ^name predict-no)
- <=WM: (14749: O2093 ^name predict-yes)
- <=WM: (14748: R1050 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2095 = 0.7368274067920724)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O2095 = -0.1989581826229297)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2096 = 0.2572445092186524)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O2096 = 0.7427563649670611)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2094 = 0.2572445092186524)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O2094 = 0.7427563649670611)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2093 = 0.7368274067920724)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O2093 = -0.1989581826229297)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*4 0.586135 -0.32889 0.257245 -> 0.586135 -0.32889 0.257245(R,m,v=1,0.867403,0.115654)
- RL update rl*prefer*rvt*predict-no*H0*4*v1*H1*36 0.413863 0.32889 0.742753 -> 0.413864 0.32889 0.742754(R,m,v=1,1,0)
- =>WM: (14767: S1 ^operator O2096)
- 1048: O: O2096 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1048 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1047 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14768: I3 ^predict-no N1048)
- <=WM: (14756: N1047 ^status complete)
- <=WM: (14755: I3 ^predict-no N1047)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (14772: I2 ^dir R)
- =>WM: (14771: I2 ^reward 1)
- =>WM: (14770: I2 ^see 0)
- =>WM: (14769: N1048 ^status complete)
- <=WM: (14759: I2 ^dir R)
- <=WM: (14758: I2 ^reward 1)
- <=WM: (14757: I2 ^see 0)
- =>WM: (14773: I2 ^level-1 R0-root)
- <=WM: (14760: I2 ^level-1 R0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O2096 = 0.7427563649670611)
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O2095 = -0.1989581826229297)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1052 ^value 1 +)
- (R1 ^reward R1052 +)
- Firing propose*predict-yes
- -->
- (O2097 ^name predict-yes +)
- (S1 ^operator O2097 +)
- Firing propose*predict-no
- -->
- (O2098 ^name predict-no +)
- (S1 ^operator O2098 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2096 = 0.2572448505566195)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2095 = 0.7368274067920724)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2096 ^name predict-no +)
- (S1 ^operator O2096 +)
- Retracting propose*predict-yes
- -->
- (O2095 ^name predict-yes +)
- (S1 ^operator O2095 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1051 ^value 1 +)
- (R1 ^reward R1051 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O2096 = 0.7427563649670611)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2096 = 0.2572448505566195)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O2095 = -0.1989581826229297)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2095 = 0.7368274067920724)
- =>WM: (14779: S1 ^operator O2098 +)
- =>WM: (14778: S1 ^operator O2097 +)
- =>WM: (14777: O2098 ^name predict-no)
- =>WM: (14776: O2097 ^name predict-yes)
- =>WM: (14775: R1052 ^value 1)
- =>WM: (14774: R1 ^reward R1052)
- <=WM: (14765: S1 ^operator O2095 +)
- <=WM: (14766: S1 ^operator O2096 +)
- <=WM: (14767: S1 ^operator O2096)
- <=WM: (14761: R1 ^reward R1051)
- <=WM: (14764: O2096 ^name predict-no)
- <=WM: (14763: O2095 ^name predict-yes)
- <=WM: (14762: R1051 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2097 = 0.7368274067920724)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O2097 = -0.1989581826229297)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2098 = 0.2572448505566195)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O2098 = 0.7427563649670611)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2096 = 0.2572448505566195)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O2096 = 0.7427563649670611)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2095 = 0.7368274067920724)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O2095 = -0.1989581826229297)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*4 0.586135 -0.32889 0.257245 -> 0.586135 -0.32889 0.257245(R,m,v=1,0.868132,0.115111)
- RL update rl*prefer*rvt*predict-no*H0*4*v1*H1*41 0.413866 0.328891 0.742756 -> 0.413866 0.32889 0.742756(R,m,v=1,1,0)
- =>WM: (14780: S1 ^operator O2098)
- 1049: O: O2098 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1049 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1048 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14781: I3 ^predict-no N1049)
- <=WM: (14769: N1048 ^status complete)
- <=WM: (14768: I3 ^predict-no N1048)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- |\--- Input Phase ---
- =>WM: (14785: I2 ^dir U)
- =>WM: (14784: I2 ^reward 1)
- =>WM: (14783: I2 ^see 0)
- =>WM: (14782: N1049 ^status complete)
- <=WM: (14772: I2 ^dir R)
- <=WM: (14771: I2 ^reward 1)
- <=WM: (14770: I2 ^see 0)
- =>WM: (14786: I2 ^level-1 R0-root)
- <=WM: (14773: I2 ^level-1 R0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1053 ^value 1 +)
- (R1 ^reward R1053 +)
- Firing propose*predict-yes
- -->
- (O2099 ^name predict-yes +)
- (S1 ^operator O2099 +)
- Firing propose*predict-no
- -->
- (O2100 ^name predict-no +)
- (S1 ^operator O2100 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2098 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2097 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2098 ^name predict-no +)
- (S1 ^operator O2098 +)
- Retracting propose*predict-yes
- -->
- (O2097 ^name predict-yes +)
- (S1 ^operator O2097 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1052 ^value 1 +)
- (R1 ^reward R1052 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O2098 = 0.742756182638509)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2098 = 0.2572446682280674)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O2097 = -0.1989581826229297)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2097 = 0.7368274067920724)
- =>WM: (14793: S1 ^operator O2100 +)
- =>WM: (14792: S1 ^operator O2099 +)
- =>WM: (14791: I3 ^dir U)
- =>WM: (14790: O2100 ^name predict-no)
- =>WM: (14789: O2099 ^name predict-yes)
- =>WM: (14788: R1053 ^value 1)
- =>WM: (14787: R1 ^reward R1053)
- <=WM: (14778: S1 ^operator O2097 +)
- <=WM: (14779: S1 ^operator O2098 +)
- <=WM: (14780: S1 ^operator O2098)
- <=WM: (14751: I3 ^dir R)
- <=WM: (14774: R1 ^reward R1052)
- <=WM: (14777: O2098 ^name predict-no)
- <=WM: (14776: O2097 ^name predict-yes)
- <=WM: (14775: R1052 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2099 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2100 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2098 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2097 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*4 0.586135 -0.32889 0.257245 -> 0.586135 -0.32889 0.257245(R,m,v=1,0.868852,0.114574)
- RL update rl*prefer*rvt*predict-no*H0*4*v1*H1*41 0.413866 0.32889 0.742756 -> 0.413866 0.32889 0.742756(R,m,v=1,1,0)
- =>WM: (14794: S1 ^operator O2100)
- 1050: O: O2100 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1050 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1049 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14795: I3 ^predict-no N1050)
- <=WM: (14782: N1049 ^status complete)
- <=WM: (14781: I3 ^predict-no N1049)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- -/--- Input Phase ---
- =>WM: (14799: I2 ^dir L)
- =>WM: (14798: I2 ^reward 1)
- =>WM: (14797: I2 ^see 0)
- =>WM: (14796: N1050 ^status complete)
- <=WM: (14785: I2 ^dir U)
- <=WM: (14784: I2 ^reward 1)
- <=WM: (14783: I2 ^see 0)
- =>WM: (14800: I2 ^level-1 R0-root)
- <=WM: (14786: I2 ^level-1 R0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2100 = 0.04178081990804111)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2099 = 0.5681103546535295)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1054 ^value 1 +)
- (R1 ^reward R1054 +)
- Firing propose*predict-yes
- -->
- (O2101 ^name predict-yes +)
- (S1 ^operator O2101 +)
- Firing propose*predict-no
- -->
- (O2102 ^name predict-no +)
- (S1 ^operator O2102 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2100 = 0.3289466429237665)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2099 = 0.4318903263632984)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2100 ^name predict-no +)
- (S1 ^operator O2100 +)
- Retracting propose*predict-yes
- -->
- (O2099 ^name predict-yes +)
- (S1 ^operator O2099 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1053 ^value 1 +)
- (R1 ^reward R1053 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2100 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2099 = 0.)
- =>WM: (14807: S1 ^operator O2102 +)
- =>WM: (14806: S1 ^operator O2101 +)
- =>WM: (14805: I3 ^dir L)
- =>WM: (14804: O2102 ^name predict-no)
- =>WM: (14803: O2101 ^name predict-yes)
- =>WM: (14802: R1054 ^value 1)
- =>WM: (14801: R1 ^reward R1054)
- <=WM: (14792: S1 ^operator O2099 +)
- <=WM: (14793: S1 ^operator O2100 +)
- <=WM: (14794: S1 ^operator O2100)
- <=WM: (14791: I3 ^dir U)
- <=WM: (14787: R1 ^reward R1053)
- <=WM: (14790: O2100 ^name predict-no)
- <=WM: (14789: O2099 ^name predict-yes)
- <=WM: (14788: R1053 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2101 = 0.5681103546535295)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2101 = 0.4318903263632984)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2102 = 0.04178081990804111)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2102 = 0.3289466429237665)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2100 = 0.3289466429237665)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2100 = 0.04178081990804111)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2099 = 0.4318903263632984)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2099 = 0.5681103546535295)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (14808: S1 ^operator O2101)
- 1051: O: O2101 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N1051 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1050 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14809: I3 ^predict-yes N1051)
- <=WM: (14796: N1050 ^status complete)
- <=WM: (14795: I3 ^predict-no N1050)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- |--- Input Phase ---
- =>WM: (14813: I2 ^dir U)
- =>WM: (14812: I2 ^reward 1)
- =>WM: (14811: I2 ^see 1)
- =>WM: (14810: N1051 ^status complete)
- <=WM: (14799: I2 ^dir L)
- <=WM: (14798: I2 ^reward 1)
- <=WM: (14797: I2 ^see 0)
- =>WM: (14814: I2 ^level-1 L1-root)
- <=WM: (14800: I2 ^level-1 R0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1055 ^value 1 +)
- (R1 ^reward R1055 +)
- Firing propose*predict-yes
- -->
- (O2103 ^name predict-yes +)
- (S1 ^operator O2103 +)
- Firing propose*predict-no
- -->
- (O2104 ^name predict-no +)
- (S1 ^operator O2104 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2102 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2101 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2102 ^name predict-no +)
- (S1 ^operator O2102 +)
- Retracting propose*predict-yes
- -->
- (O2101 ^name predict-yes +)
- (S1 ^operator O2101 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1054 ^value 1 +)
- (R1 ^reward R1054 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2102 = 0.3289466429237665)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2102 = 0.04178081990804111)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2101 = 0.4318903263632984)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2101 = 0.5681103546535295)
- =>WM: (14822: S1 ^operator O2104 +)
- =>WM: (14821: S1 ^operator O2103 +)
- =>WM: (14820: I3 ^dir U)
- =>WM: (14819: O2104 ^name predict-no)
- =>WM: (14818: O2103 ^name predict-yes)
- =>WM: (14817: R1055 ^value 1)
- =>WM: (14816: R1 ^reward R1055)
- =>WM: (14815: I3 ^see 1)
- <=WM: (14806: S1 ^operator O2101 +)
- <=WM: (14808: S1 ^operator O2101)
- <=WM: (14807: S1 ^operator O2102 +)
- <=WM: (14805: I3 ^dir L)
- <=WM: (14801: R1 ^reward R1054)
- <=WM: (14733: I3 ^see 0)
- <=WM: (14804: O2102 ^name predict-no)
- <=WM: (14803: O2101 ^name predict-yes)
- <=WM: (14802: R1054 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2103 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2104 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2102 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2101 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*5 0.683776 -0.251886 0.43189 -> 0.683776 -0.251886 0.43189(R,m,v=1,0.926136,0.0687987)
- RL update rl*prefer*rvt*predict-yes*H0*5*v1*H1*30 0.316224 0.251886 0.56811 -> 0.316224 0.251886 0.56811(R,m,v=1,1,0)
- =>WM: (14823: S1 ^operator O2104)
- 1052: O: O2104 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1052 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N1051 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14824: I3 ^predict-no N1052)
- <=WM: (14810: N1051 ^status complete)
- <=WM: (14809: I3 ^predict-yes N1051)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- \--- Input Phase ---
- =>WM: (14828: I2 ^dir L)
- =>WM: (14827: I2 ^reward 1)
- =>WM: (14826: I2 ^see 0)
- =>WM: (14825: N1052 ^status complete)
- <=WM: (14813: I2 ^dir U)
- <=WM: (14812: I2 ^reward 1)
- <=WM: (14811: I2 ^see 1)
- =>WM: (14829: I2 ^level-1 L1-root)
- <=WM: (14814: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2104 = 0.6710531277398375)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2103 = -0.06092862110810815)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1056 ^value 1 +)
- (R1 ^reward R1056 +)
- Firing propose*predict-yes
- -->
- (O2105 ^name predict-yes +)
- (S1 ^operator O2105 +)
- Firing propose*predict-no
- -->
- (O2106 ^name predict-no +)
- (S1 ^operator O2106 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2104 = 0.3289466429237665)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2103 = 0.4318902242107743)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O2104 ^name predict-no +)
- (S1 ^operator O2104 +)
- Retracting propose*predict-yes
- -->
- (O2103 ^name predict-yes +)
- (S1 ^operator O2103 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1055 ^value 1 +)
- (R1 ^reward R1055 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2104 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2103 = 0.)
- =>WM: (14837: S1 ^operator O2106 +)
- =>WM: (14836: S1 ^operator O2105 +)
- =>WM: (14835: I3 ^dir L)
- =>WM: (14834: O2106 ^name predict-no)
- =>WM: (14833: O2105 ^name predict-yes)
- =>WM: (14832: R1056 ^value 1)
- =>WM: (14831: R1 ^reward R1056)
- =>WM: (14830: I3 ^see 0)
- <=WM: (14821: S1 ^operator O2103 +)
- <=WM: (14822: S1 ^operator O2104 +)
- <=WM: (14823: S1 ^operator O2104)
- <=WM: (14820: I3 ^dir U)
- <=WM: (14816: R1 ^reward R1055)
- <=WM: (14815: I3 ^see 1)
- <=WM: (14819: O2104 ^name predict-no)
- <=WM: (14818: O2103 ^name predict-yes)
- <=WM: (14817: R1055 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2105 = -0.06092862110810815)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2105 = 0.4318902242107743)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2106 = 0.6710531277398375)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2106 = 0.3289466429237665)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2104 = 0.3289466429237665)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2104 = 0.6710531277398375)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2103 = 0.4318902242107743)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2103 = -0.06092862110810815)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (14838: S1 ^operator O2106)
- 1053: O: O2106 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1053 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1052 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14839: I3 ^predict-no N1053)
- <=WM: (14825: N1052 ^status complete)
- <=WM: (14824: I3 ^predict-no N1052)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- -/--- Input Phase ---
- =>WM: (14843: I2 ^dir U)
- =>WM: (14842: I2 ^reward 1)
- =>WM: (14841: I2 ^see 0)
- =>WM: (14840: N1053 ^status complete)
- <=WM: (14828: I2 ^dir L)
- <=WM: (14827: I2 ^reward 1)
- <=WM: (14826: I2 ^see 0)
- =>WM: (14844: I2 ^level-1 L0-root)
- <=WM: (14829: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1057 ^value 1 +)
- (R1 ^reward R1057 +)
- Firing propose*predict-yes
- -->
- (O2107 ^name predict-yes +)
- (S1 ^operator O2107 +)
- Firing propose*predict-no
- -->
- (O2108 ^name predict-no +)
- (S1 ^operator O2108 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2106 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2105 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2106 ^name predict-no +)
- (S1 ^operator O2106 +)
- Retracting propose*predict-yes
- -->
- (O2105 ^name predict-yes +)
- (S1 ^operator O2105 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1056 ^value 1 +)
- (R1 ^reward R1056 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2106 = 0.3289466429237665)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2106 = 0.6710531277398375)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2105 = 0.4318902242107743)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2105 = -0.06092862110810815)
- =>WM: (14851: S1 ^operator O2108 +)
- =>WM: (14850: S1 ^operator O2107 +)
- =>WM: (14849: I3 ^dir U)
- =>WM: (14848: O2108 ^name predict-no)
- =>WM: (14847: O2107 ^name predict-yes)
- =>WM: (14846: R1057 ^value 1)
- =>WM: (14845: R1 ^reward R1057)
- <=WM: (14836: S1 ^operator O2105 +)
- <=WM: (14837: S1 ^operator O2106 +)
- <=WM: (14838: S1 ^operator O2106)
- <=WM: (14835: I3 ^dir L)
- <=WM: (14831: R1 ^reward R1056)
- <=WM: (14834: O2106 ^name predict-no)
- <=WM: (14833: O2105 ^name predict-yes)
- <=WM: (14832: R1056 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2107 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2108 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2106 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2105 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*6 0.565405 -0.236458 0.328947 -> 0.565405 -0.236458 0.328947(R,m,v=1,0.909639,0.0826944)
- RL update rl*prefer*rvt*predict-no*H0*6*v1*H1*43 0.434595 0.236458 0.671053 -> 0.434595 0.236458 0.671053(R,m,v=1,1,0)
- =>WM: (14852: S1 ^operator O2108)
- 1054: O: O2108 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1054 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1053 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14853: I3 ^predict-no N1054)
- <=WM: (14840: N1053 ^status complete)
- <=WM: (14839: I3 ^predict-no N1053)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- |\--- Input Phase ---
- =>WM: (14857: I2 ^dir U)
- =>WM: (14856: I2 ^reward 1)
- =>WM: (14855: I2 ^see 0)
- =>WM: (14854: N1054 ^status complete)
- <=WM: (14843: I2 ^dir U)
- <=WM: (14842: I2 ^reward 1)
- <=WM: (14841: I2 ^see 0)
- =>WM: (14858: I2 ^level-1 L0-root)
- <=WM: (14844: I2 ^level-1 L0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1058 ^value 1 +)
- (R1 ^reward R1058 +)
- Firing propose*predict-yes
- -->
- (O2109 ^name predict-yes +)
- (S1 ^operator O2109 +)
- Firing propose*predict-no
- -->
- (O2110 ^name predict-no +)
- (S1 ^operator O2110 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2108 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2107 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2108 ^name predict-no +)
- (S1 ^operator O2108 +)
- Retracting propose*predict-yes
- -->
- (O2107 ^name predict-yes +)
- (S1 ^operator O2107 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1057 ^value 1 +)
- (R1 ^reward R1057 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2108 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2107 = 0.)
- =>WM: (14864: S1 ^operator O2110 +)
- =>WM: (14863: S1 ^operator O2109 +)
- =>WM: (14862: O2110 ^name predict-no)
- =>WM: (14861: O2109 ^name predict-yes)
- =>WM: (14860: R1058 ^value 1)
- =>WM: (14859: R1 ^reward R1058)
- <=WM: (14850: S1 ^operator O2107 +)
- <=WM: (14851: S1 ^operator O2108 +)
- <=WM: (14852: S1 ^operator O2108)
- <=WM: (14845: R1 ^reward R1057)
- <=WM: (14848: O2108 ^name predict-no)
- <=WM: (14847: O2107 ^name predict-yes)
- <=WM: (14846: R1057 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2109 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2110 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2108 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2107 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (14865: S1 ^operator O2110)
- 1055: O: O2110 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1055 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1054 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14866: I3 ^predict-no N1055)
- <=WM: (14854: N1054 ^status complete)
- <=WM: (14853: I3 ^predict-no N1054)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- ---- Input Phase ---
- =>WM: (14870: I2 ^dir L)
- =>WM: (14869: I2 ^reward 1)
- =>WM: (14868: I2 ^see 0)
- =>WM: (14867: N1055 ^status complete)
- <=WM: (14857: I2 ^dir U)
- <=WM: (14856: I2 ^reward 1)
- <=WM: (14855: I2 ^see 0)
- =>WM: (14871: I2 ^level-1 L0-root)
- <=WM: (14858: I2 ^level-1 L0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*33
- -->
- (S1 ^operator O2110 = 0.671054801292038)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*32
- -->
- (S1 ^operator O2109 = 0.02602968095631553)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1059 ^value 1 +)
- (R1 ^reward R1059 +)
- Firing propose*predict-yes
- -->
- (O2111 ^name predict-yes +)
- (S1 ^operator O2111 +)
- Firing propose*predict-no
- -->
- (O2112 ^name predict-no +)
- (S1 ^operator O2112 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2110 = 0.3289466773242259)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2109 = 0.4318902242107743)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2110 ^name predict-no +)
- (S1 ^operator O2110 +)
- Retracting propose*predict-yes
- -->
- (O2109 ^name predict-yes +)
- (S1 ^operator O2109 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1058 ^value 1 +)
- (R1 ^reward R1058 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2110 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2109 = 0.)
- =>WM: (14878: S1 ^operator O2112 +)
- =>WM: (14877: S1 ^operator O2111 +)
- =>WM: (14876: I3 ^dir L)
- =>WM: (14875: O2112 ^name predict-no)
- =>WM: (14874: O2111 ^name predict-yes)
- =>WM: (14873: R1059 ^value 1)
- =>WM: (14872: R1 ^reward R1059)
- <=WM: (14863: S1 ^operator O2109 +)
- <=WM: (14864: S1 ^operator O2110 +)
- <=WM: (14865: S1 ^operator O2110)
- <=WM: (14849: I3 ^dir U)
- <=WM: (14859: R1 ^reward R1058)
- <=WM: (14862: O2110 ^name predict-no)
- <=WM: (14861: O2109 ^name predict-yes)
- <=WM: (14860: R1058 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*32
- -->
- (S1 ^operator O2111 = 0.02602968095631553)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2111 = 0.4318902242107743)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*33
- -->
- (S1 ^operator O2112 = 0.671054801292038)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2112 = 0.3289466773242259)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2110 = 0.3289466773242259)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*33
- -->
- (S1 ^operator O2110 = 0.671054801292038)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2109 = 0.4318902242107743)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*32
- -->
- (S1 ^operator O2109 = 0.02602968095631553)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (14879: S1 ^operator O2112)
- 1056: O: O2112 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1056 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1055 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14880: I3 ^predict-no N1056)
- <=WM: (14867: N1055 ^status complete)
- <=WM: (14866: I3 ^predict-no N1055)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- /--- Input Phase ---
- =>WM: (14884: I2 ^dir R)
- =>WM: (14883: I2 ^reward 1)
- =>WM: (14882: I2 ^see 0)
- =>WM: (14881: N1056 ^status complete)
- <=WM: (14870: I2 ^dir L)
- <=WM: (14869: I2 ^reward 1)
- <=WM: (14868: I2 ^see 0)
- =>WM: (14885: I2 ^level-1 L0-root)
- <=WM: (14871: I2 ^level-1 L0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2112 = -0.07401383653737587)
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2111 = 0.2631731047459309)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1060 ^value 1 +)
- (R1 ^reward R1060 +)
- Firing propose*predict-yes
- -->
- (O2113 ^name predict-yes +)
- (S1 ^operator O2113 +)
- Firing propose*predict-no
- -->
- (O2114 ^name predict-no +)
- (S1 ^operator O2114 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2112 = 0.2572445405980809)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2111 = 0.7368274067920724)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2112 ^name predict-no +)
- (S1 ^operator O2112 +)
- Retracting propose*predict-yes
- -->
- (O2111 ^name predict-yes +)
- (S1 ^operator O2111 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1059 ^value 1 +)
- (R1 ^reward R1059 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2112 = 0.3289466773242259)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*33
- -->
- (S1 ^operator O2112 = 0.671054801292038)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2111 = 0.4318902242107743)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*32
- -->
- (S1 ^operator O2111 = 0.02602968095631553)
- =>WM: (14892: S1 ^operator O2114 +)
- =>WM: (14891: S1 ^operator O2113 +)
- =>WM: (14890: I3 ^dir R)
- =>WM: (14889: O2114 ^name predict-no)
- =>WM: (14888: O2113 ^name predict-yes)
- =>WM: (14887: R1060 ^value 1)
- =>WM: (14886: R1 ^reward R1060)
- <=WM: (14877: S1 ^operator O2111 +)
- <=WM: (14878: S1 ^operator O2112 +)
- <=WM: (14879: S1 ^operator O2112)
- <=WM: (14876: I3 ^dir L)
- <=WM: (14872: R1 ^reward R1059)
- <=WM: (14875: O2112 ^name predict-no)
- <=WM: (14874: O2111 ^name predict-yes)
- <=WM: (14873: R1059 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2113 = 0.2631731047459309)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2113 = 0.7368274067920724)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2114 = -0.07401383653737587)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2114 = 0.2572445405980809)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2112 = 0.2572445405980809)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2112 = -0.07401383653737587)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2111 = 0.7368274067920724)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2111 = 0.2631731047459309)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*6 0.565405 -0.236458 0.328947 -> 0.565404 -0.236458 0.328946(R,m,v=1,0.91018,0.0822451)
- RL update rl*prefer*rvt*predict-no*H0*6*v1*H1*33 0.434598 0.236457 0.671055 -> 0.434598 0.236457 0.671055(R,m,v=1,1,0)
- =>WM: (14893: S1 ^operator O2113)
- 1057: O: O2113 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N1057 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1056 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14894: I3 ^predict-yes N1057)
- <=WM: (14881: N1056 ^status complete)
- <=WM: (14880: I3 ^predict-no N1056)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- |\--- Input Phase ---
- =>WM: (14898: I2 ^dir U)
- =>WM: (14897: I2 ^reward 1)
- =>WM: (14896: I2 ^see 1)
- =>WM: (14895: N1057 ^status complete)
- <=WM: (14884: I2 ^dir R)
- <=WM: (14883: I2 ^reward 1)
- <=WM: (14882: I2 ^see 0)
- =>WM: (14899: I2 ^level-1 R1-root)
- <=WM: (14885: I2 ^level-1 L0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1061 ^value 1 +)
- (R1 ^reward R1061 +)
- Firing propose*predict-yes
- -->
- (O2115 ^name predict-yes +)
- (S1 ^operator O2115 +)
- Firing propose*predict-no
- -->
- (O2116 ^name predict-no +)
- (S1 ^operator O2116 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2114 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2113 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2114 ^name predict-no +)
- (S1 ^operator O2114 +)
- Retracting propose*predict-yes
- -->
- (O2113 ^name predict-yes +)
- (S1 ^operator O2113 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1060 ^value 1 +)
- (R1 ^reward R1060 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2114 = 0.2572445405980809)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2114 = -0.07401383653737587)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2113 = 0.7368274067920724)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2113 = 0.2631731047459309)
- =>WM: (14907: S1 ^operator O2116 +)
- =>WM: (14906: S1 ^operator O2115 +)
- =>WM: (14905: I3 ^dir U)
- =>WM: (14904: O2116 ^name predict-no)
- =>WM: (14903: O2115 ^name predict-yes)
- =>WM: (14902: R1061 ^value 1)
- =>WM: (14901: R1 ^reward R1061)
- =>WM: (14900: I3 ^see 1)
- <=WM: (14891: S1 ^operator O2113 +)
- <=WM: (14893: S1 ^operator O2113)
- <=WM: (14892: S1 ^operator O2114 +)
- <=WM: (14890: I3 ^dir R)
- <=WM: (14886: R1 ^reward R1060)
- <=WM: (14830: I3 ^see 0)
- <=WM: (14889: O2114 ^name predict-no)
- <=WM: (14888: O2113 ^name predict-yes)
- <=WM: (14887: R1060 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2115 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2116 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2114 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2113 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*3 0.748236 -0.0114085 0.736827 -> 0.748236 -0.0114085 0.736827(R,m,v=1,0.901163,0.0895893)
- RL update rl*prefer*rvt*predict-yes*H0*3*v1*H1*35 0.251764 0.0114089 0.263173 -> 0.251764 0.0114088 0.263173(R,m,v=1,1,0)
- =>WM: (14908: S1 ^operator O2116)
- 1058: O: O2116 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1058 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N1057 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14909: I3 ^predict-no N1058)
- <=WM: (14895: N1057 ^status complete)
- <=WM: (14894: I3 ^predict-yes N1057)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- ---- Input Phase ---
- =>WM: (14913: I2 ^dir R)
- =>WM: (14912: I2 ^reward 1)
- =>WM: (14911: I2 ^see 0)
- =>WM: (14910: N1058 ^status complete)
- <=WM: (14898: I2 ^dir U)
- <=WM: (14897: I2 ^reward 1)
- <=WM: (14896: I2 ^see 1)
- =>WM: (14914: I2 ^level-1 R1-root)
- <=WM: (14899: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2115 = -0.3011268063455669)
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2116 = 0.7427535565328676)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1062 ^value 1 +)
- (R1 ^reward R1062 +)
- Firing propose*predict-yes
- -->
- (O2117 ^name predict-yes +)
- (S1 ^operator O2117 +)
- Firing propose*predict-no
- -->
- (O2118 ^name predict-no +)
- (S1 ^operator O2118 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2116 = 0.2572445405980809)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2115 = 0.7368273300613719)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O2116 ^name predict-no +)
- (S1 ^operator O2116 +)
- Retracting propose*predict-yes
- -->
- (O2115 ^name predict-yes +)
- (S1 ^operator O2115 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1061 ^value 1 +)
- (R1 ^reward R1061 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2116 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2115 = 0.)
- =>WM: (14922: S1 ^operator O2118 +)
- =>WM: (14921: S1 ^operator O2117 +)
- =>WM: (14920: I3 ^dir R)
- =>WM: (14919: O2118 ^name predict-no)
- =>WM: (14918: O2117 ^name predict-yes)
- =>WM: (14917: R1062 ^value 1)
- =>WM: (14916: R1 ^reward R1062)
- =>WM: (14915: I3 ^see 0)
- <=WM: (14906: S1 ^operator O2115 +)
- <=WM: (14907: S1 ^operator O2116 +)
- <=WM: (14908: S1 ^operator O2116)
- <=WM: (14905: I3 ^dir U)
- <=WM: (14901: R1 ^reward R1061)
- <=WM: (14900: I3 ^see 1)
- <=WM: (14904: O2116 ^name predict-no)
- <=WM: (14903: O2115 ^name predict-yes)
- <=WM: (14902: R1061 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2117 = -0.3011268063455669)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2117 = 0.7368273300613719)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2118 = 0.7427535565328676)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2118 = 0.2572445405980809)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2116 = 0.2572445405980809)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2116 = 0.7427535565328676)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2115 = 0.7368273300613719)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2115 = -0.3011268063455669)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (14923: S1 ^operator O2118)
- 1059: O: O2118 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1059 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1058 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14924: I3 ^predict-no N1059)
- <=WM: (14910: N1058 ^status complete)
- <=WM: (14909: I3 ^predict-no N1058)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- /|\--- Input Phase ---
- =>WM: (14928: I2 ^dir R)
- =>WM: (14927: I2 ^reward 1)
- =>WM: (14926: I2 ^see 0)
- =>WM: (14925: N1059 ^status complete)
- <=WM: (14913: I2 ^dir R)
- <=WM: (14912: I2 ^reward 1)
- <=WM: (14911: I2 ^see 0)
- =>WM: (14929: I2 ^level-1 R0-root)
- <=WM: (14914: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O2118 = 0.7427560550085226)
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O2117 = -0.1989581826229297)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1063 ^value 1 +)
- (R1 ^reward R1063 +)
- Firing propose*predict-yes
- -->
- (O2119 ^name predict-yes +)
- (S1 ^operator O2119 +)
- Firing propose*predict-no
- -->
- (O2120 ^name predict-no +)
- (S1 ^operator O2120 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2118 = 0.2572445405980809)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2117 = 0.7368273300613719)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2118 ^name predict-no +)
- (S1 ^operator O2118 +)
- Retracting propose*predict-yes
- -->
- (O2117 ^name predict-yes +)
- (S1 ^operator O2117 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1062 ^value 1 +)
- (R1 ^reward R1062 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2118 = 0.2572445405980809)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2118 = 0.7427535565328676)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2117 = 0.7368273300613719)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2117 = -0.3011268063455669)
- =>WM: (14935: S1 ^operator O2120 +)
- =>WM: (14934: S1 ^operator O2119 +)
- =>WM: (14933: O2120 ^name predict-no)
- =>WM: (14932: O2119 ^name predict-yes)
- =>WM: (14931: R1063 ^value 1)
- =>WM: (14930: R1 ^reward R1063)
- <=WM: (14921: S1 ^operator O2117 +)
- <=WM: (14922: S1 ^operator O2118 +)
- <=WM: (14923: S1 ^operator O2118)
- <=WM: (14916: R1 ^reward R1062)
- <=WM: (14919: O2118 ^name predict-no)
- <=WM: (14918: O2117 ^name predict-yes)
- <=WM: (14917: R1062 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2119 = 0.7368273300613719)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O2119 = -0.1989581826229297)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2120 = 0.2572445405980809)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O2120 = 0.7427560550085226)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2118 = 0.2572445405980809)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O2118 = 0.7427560550085226)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2117 = 0.7368273300613719)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O2117 = -0.1989581826229297)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*4 0.586135 -0.32889 0.257245 -> 0.586135 -0.32889 0.257245(R,m,v=1,0.869565,0.114041)
- RL update rl*prefer*rvt*predict-no*H0*4*v1*H1*36 0.413864 0.32889 0.742754 -> 0.413864 0.32889 0.742754(R,m,v=1,1,0)
- =>WM: (14936: S1 ^operator O2120)
- 1060: O: O2120 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1060 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1059 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14937: I3 ^predict-no N1060)
- <=WM: (14925: N1059 ^status complete)
- <=WM: (14924: I3 ^predict-no N1059)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- ---- Input Phase ---
- =>WM: (14941: I2 ^dir U)
- =>WM: (14940: I2 ^reward 1)
- =>WM: (14939: I2 ^see 0)
- =>WM: (14938: N1060 ^status complete)
- <=WM: (14928: I2 ^dir R)
- <=WM: (14927: I2 ^reward 1)
- <=WM: (14926: I2 ^see 0)
- =>WM: (14942: I2 ^level-1 R0-root)
- <=WM: (14929: I2 ^level-1 R0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1064 ^value 1 +)
- (R1 ^reward R1064 +)
- Firing propose*predict-yes
- -->
- (O2121 ^name predict-yes +)
- (S1 ^operator O2121 +)
- Firing propose*predict-no
- -->
- (O2122 ^name predict-no +)
- (S1 ^operator O2122 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2120 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2119 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2120 ^name predict-no +)
- (S1 ^operator O2120 +)
- Retracting propose*predict-yes
- -->
- (O2119 ^name predict-yes +)
- (S1 ^operator O2119 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1063 ^value 1 +)
- (R1 ^reward R1063 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O2120 = 0.7427560550085226)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2120 = 0.2572448260284386)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O2119 = -0.1989581826229297)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2119 = 0.7368273300613719)
- =>WM: (14949: S1 ^operator O2122 +)
- =>WM: (14948: S1 ^operator O2121 +)
- =>WM: (14947: I3 ^dir U)
- =>WM: (14946: O2122 ^name predict-no)
- =>WM: (14945: O2121 ^name predict-yes)
- =>WM: (14944: R1064 ^value 1)
- =>WM: (14943: R1 ^reward R1064)
- <=WM: (14934: S1 ^operator O2119 +)
- <=WM: (14935: S1 ^operator O2120 +)
- <=WM: (14936: S1 ^operator O2120)
- <=WM: (14920: I3 ^dir R)
- <=WM: (14930: R1 ^reward R1063)
- <=WM: (14933: O2120 ^name predict-no)
- <=WM: (14932: O2119 ^name predict-yes)
- <=WM: (14931: R1063 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2121 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2122 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2120 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2119 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*4 0.586135 -0.32889 0.257245 -> 0.586135 -0.32889 0.257245(R,m,v=1,0.87027,0.113514)
- RL update rl*prefer*rvt*predict-no*H0*4*v1*H1*41 0.413866 0.32889 0.742756 -> 0.413865 0.32889 0.742756(R,m,v=1,1,0)
- =>WM: (14950: S1 ^operator O2122)
- 1061: O: O2122 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1061 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1060 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14951: I3 ^predict-no N1061)
- <=WM: (14938: N1060 ^status complete)
- <=WM: (14937: I3 ^predict-no N1060)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- /--- Input Phase ---
- =>WM: (14955: I2 ^dir U)
- =>WM: (14954: I2 ^reward 1)
- =>WM: (14953: I2 ^see 0)
- =>WM: (14952: N1061 ^status complete)
- <=WM: (14941: I2 ^dir U)
- <=WM: (14940: I2 ^reward 1)
- <=WM: (14939: I2 ^see 0)
- =>WM: (14956: I2 ^level-1 R0-root)
- <=WM: (14942: I2 ^level-1 R0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1065 ^value 1 +)
- (R1 ^reward R1065 +)
- Firing propose*predict-yes
- -->
- (O2123 ^name predict-yes +)
- (S1 ^operator O2123 +)
- Firing propose*predict-no
- -->
- (O2124 ^name predict-no +)
- (S1 ^operator O2124 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2122 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2121 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2122 ^name predict-no +)
- (S1 ^operator O2122 +)
- Retracting propose*predict-yes
- -->
- (O2121 ^name predict-yes +)
- (S1 ^operator O2121 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1064 ^value 1 +)
- (R1 ^reward R1064 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2122 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2121 = 0.)
- =>WM: (14962: S1 ^operator O2124 +)
- =>WM: (14961: S1 ^operator O2123 +)
- =>WM: (14960: O2124 ^name predict-no)
- =>WM: (14959: O2123 ^name predict-yes)
- =>WM: (14958: R1065 ^value 1)
- =>WM: (14957: R1 ^reward R1065)
- <=WM: (14948: S1 ^operator O2121 +)
- <=WM: (14949: S1 ^operator O2122 +)
- <=WM: (14950: S1 ^operator O2122)
- <=WM: (14943: R1 ^reward R1064)
- <=WM: (14946: O2122 ^name predict-no)
- <=WM: (14945: O2121 ^name predict-yes)
- <=WM: (14944: R1064 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2123 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2124 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2122 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2121 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (14963: S1 ^operator O2124)
- 1062: O: O2124 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1062 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1061 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14964: I3 ^predict-no N1062)
- <=WM: (14952: N1061 ^status complete)
- <=WM: (14951: I3 ^predict-no N1061)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- |--- Input Phase ---
- =>WM: (14968: I2 ^dir U)
- =>WM: (14967: I2 ^reward 1)
- =>WM: (14966: I2 ^see 0)
- =>WM: (14965: N1062 ^status complete)
- <=WM: (14955: I2 ^dir U)
- <=WM: (14954: I2 ^reward 1)
- <=WM: (14953: I2 ^see 0)
- =>WM: (14969: I2 ^level-1 R0-root)
- <=WM: (14956: I2 ^level-1 R0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1066 ^value 1 +)
- (R1 ^reward R1066 +)
- Firing propose*predict-yes
- -->
- (O2125 ^name predict-yes +)
- (S1 ^operator O2125 +)
- Firing propose*predict-no
- -->
- (O2126 ^name predict-no +)
- (S1 ^operator O2126 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2124 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2123 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2124 ^name predict-no +)
- (S1 ^operator O2124 +)
- Retracting propose*predict-yes
- -->
- (O2123 ^name predict-yes +)
- (S1 ^operator O2123 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1065 ^value 1 +)
- (R1 ^reward R1065 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2124 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2123 = 0.)
- =>WM: (14975: S1 ^operator O2126 +)
- =>WM: (14974: S1 ^operator O2125 +)
- =>WM: (14973: O2126 ^name predict-no)
- =>WM: (14972: O2125 ^name predict-yes)
- =>WM: (14971: R1066 ^value 1)
- =>WM: (14970: R1 ^reward R1066)
- <=WM: (14961: S1 ^operator O2123 +)
- <=WM: (14962: S1 ^operator O2124 +)
- <=WM: (14963: S1 ^operator O2124)
- <=WM: (14957: R1 ^reward R1065)
- <=WM: (14960: O2124 ^name predict-no)
- <=WM: (14959: O2123 ^name predict-yes)
- <=WM: (14958: R1065 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2125 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2126 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2124 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2123 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (14976: S1 ^operator O2126)
- 1063: O: O2126 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1063 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1062 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14977: I3 ^predict-no N1063)
- <=WM: (14965: N1062 ^status complete)
- <=WM: (14964: I3 ^predict-no N1062)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- \---- Input Phase ---
- =>WM: (14981: I2 ^dir L)
- =>WM: (14980: I2 ^reward 1)
- =>WM: (14979: I2 ^see 0)
- =>WM: (14978: N1063 ^status complete)
- <=WM: (14968: I2 ^dir U)
- <=WM: (14967: I2 ^reward 1)
- <=WM: (14966: I2 ^see 0)
- =>WM: (14982: I2 ^level-1 R0-root)
- <=WM: (14969: I2 ^level-1 R0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2126 = 0.04178081990804111)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2125 = 0.5681102525010053)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1067 ^value 1 +)
- (R1 ^reward R1067 +)
- Firing propose*predict-yes
- -->
- (O2127 ^name predict-yes +)
- (S1 ^operator O2127 +)
- Firing propose*predict-no
- -->
- (O2128 ^name predict-no +)
- (S1 ^operator O2128 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2126 = 0.3289464555317863)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2125 = 0.4318902242107743)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2126 ^name predict-no +)
- (S1 ^operator O2126 +)
- Retracting propose*predict-yes
- -->
- (O2125 ^name predict-yes +)
- (S1 ^operator O2125 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1066 ^value 1 +)
- (R1 ^reward R1066 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2126 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2125 = 0.)
- =>WM: (14989: S1 ^operator O2128 +)
- =>WM: (14988: S1 ^operator O2127 +)
- =>WM: (14987: I3 ^dir L)
- =>WM: (14986: O2128 ^name predict-no)
- =>WM: (14985: O2127 ^name predict-yes)
- =>WM: (14984: R1067 ^value 1)
- =>WM: (14983: R1 ^reward R1067)
- <=WM: (14974: S1 ^operator O2125 +)
- <=WM: (14975: S1 ^operator O2126 +)
- <=WM: (14976: S1 ^operator O2126)
- <=WM: (14947: I3 ^dir U)
- <=WM: (14970: R1 ^reward R1066)
- <=WM: (14973: O2126 ^name predict-no)
- <=WM: (14972: O2125 ^name predict-yes)
- <=WM: (14971: R1066 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2127 = 0.5681102525010053)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2127 = 0.4318902242107743)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2128 = 0.04178081990804111)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2128 = 0.3289464555317863)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2126 = 0.3289464555317863)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2126 = 0.04178081990804111)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2125 = 0.4318902242107743)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2125 = 0.5681102525010053)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (14990: S1 ^operator O2127)
- 1064: O: O2127 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N1064 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1063 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (14991: I3 ^predict-yes N1064)
- <=WM: (14978: N1063 ^status complete)
- <=WM: (14977: I3 ^predict-no N1063)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- /|--- Input Phase ---
- =>WM: (14995: I2 ^dir U)
- =>WM: (14994: I2 ^reward 1)
- =>WM: (14993: I2 ^see 1)
- =>WM: (14992: N1064 ^status complete)
- <=WM: (14981: I2 ^dir L)
- <=WM: (14980: I2 ^reward 1)
- <=WM: (14979: I2 ^see 0)
- =>WM: (14996: I2 ^level-1 L1-root)
- <=WM: (14982: I2 ^level-1 R0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1068 ^value 1 +)
- (R1 ^reward R1068 +)
- Firing propose*predict-yes
- -->
- (O2129 ^name predict-yes +)
- (S1 ^operator O2129 +)
- Firing propose*predict-no
- -->
- (O2130 ^name predict-no +)
- (S1 ^operator O2130 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2128 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2127 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2128 ^name predict-no +)
- (S1 ^operator O2128 +)
- Retracting propose*predict-yes
- -->
- (O2127 ^name predict-yes +)
- (S1 ^operator O2127 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1067 ^value 1 +)
- (R1 ^reward R1067 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2128 = 0.3289464555317863)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2128 = 0.04178081990804111)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2127 = 0.4318902242107743)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2127 = 0.5681102525010053)
- =>WM: (15004: S1 ^operator O2130 +)
- =>WM: (15003: S1 ^operator O2129 +)
- =>WM: (15002: I3 ^dir U)
- =>WM: (15001: O2130 ^name predict-no)
- =>WM: (15000: O2129 ^name predict-yes)
- =>WM: (14999: R1068 ^value 1)
- =>WM: (14998: R1 ^reward R1068)
- =>WM: (14997: I3 ^see 1)
- <=WM: (14988: S1 ^operator O2127 +)
- <=WM: (14990: S1 ^operator O2127)
- <=WM: (14989: S1 ^operator O2128 +)
- <=WM: (14987: I3 ^dir L)
- <=WM: (14983: R1 ^reward R1067)
- <=WM: (14915: I3 ^see 0)
- <=WM: (14986: O2128 ^name predict-no)
- <=WM: (14985: O2127 ^name predict-yes)
- <=WM: (14984: R1067 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2129 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2130 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2128 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2127 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*5 0.683776 -0.251886 0.43189 -> 0.683776 -0.251886 0.43189(R,m,v=1,0.926554,0.0684386)
- RL update rl*prefer*rvt*predict-yes*H0*5*v1*H1*30 0.316224 0.251886 0.56811 -> 0.316224 0.251886 0.56811(R,m,v=1,1,0)
- =>WM: (15005: S1 ^operator O2130)
- 1065: O: O2130 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1065 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N1064 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15006: I3 ^predict-no N1065)
- <=WM: (14992: N1064 ^status complete)
- <=WM: (14991: I3 ^predict-yes N1064)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- \---- Input Phase ---
- =>WM: (15010: I2 ^dir U)
- =>WM: (15009: I2 ^reward 1)
- =>WM: (15008: I2 ^see 0)
- =>WM: (15007: N1065 ^status complete)
- <=WM: (14995: I2 ^dir U)
- <=WM: (14994: I2 ^reward 1)
- <=WM: (14993: I2 ^see 1)
- =>WM: (15011: I2 ^level-1 L1-root)
- <=WM: (14996: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1069 ^value 1 +)
- (R1 ^reward R1069 +)
- Firing propose*predict-yes
- -->
- (O2131 ^name predict-yes +)
- (S1 ^operator O2131 +)
- Firing propose*predict-no
- -->
- (O2132 ^name predict-no +)
- (S1 ^operator O2132 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2130 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2129 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O2130 ^name predict-no +)
- (S1 ^operator O2130 +)
- Retracting propose*predict-yes
- -->
- (O2129 ^name predict-yes +)
- (S1 ^operator O2129 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1068 ^value 1 +)
- (R1 ^reward R1068 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2130 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2129 = 0.)
- =>WM: (15018: S1 ^operator O2132 +)
- =>WM: (15017: S1 ^operator O2131 +)
- =>WM: (15016: O2132 ^name predict-no)
- =>WM: (15015: O2131 ^name predict-yes)
- =>WM: (15014: R1069 ^value 1)
- =>WM: (15013: R1 ^reward R1069)
- =>WM: (15012: I3 ^see 0)
- <=WM: (15003: S1 ^operator O2129 +)
- <=WM: (15004: S1 ^operator O2130 +)
- <=WM: (15005: S1 ^operator O2130)
- <=WM: (14998: R1 ^reward R1068)
- <=WM: (14997: I3 ^see 1)
- <=WM: (15001: O2130 ^name predict-no)
- <=WM: (15000: O2129 ^name predict-yes)
- <=WM: (14999: R1068 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2131 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2132 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2130 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2129 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (15019: S1 ^operator O2132)
- 1066: O: O2132 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1066 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1065 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15020: I3 ^predict-no N1066)
- <=WM: (15007: N1065 ^status complete)
- <=WM: (15006: I3 ^predict-no N1065)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- /--- Input Phase ---
- =>WM: (15024: I2 ^dir R)
- =>WM: (15023: I2 ^reward 1)
- =>WM: (15022: I2 ^see 0)
- =>WM: (15021: N1066 ^status complete)
- <=WM: (15010: I2 ^dir U)
- <=WM: (15009: I2 ^reward 1)
- <=WM: (15008: I2 ^see 0)
- =>WM: (15025: I2 ^level-1 L1-root)
- <=WM: (15011: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*39
- -->
- (S1 ^operator O2132 = -0.1377248055371832)
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*40
- -->
- (S1 ^operator O2131 = 0.2631694281035112)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1070 ^value 1 +)
- (R1 ^reward R1070 +)
- Firing propose*predict-yes
- -->
- (O2133 ^name predict-yes +)
- (S1 ^operator O2133 +)
- Firing propose*predict-no
- -->
- (O2134 ^name predict-no +)
- (S1 ^operator O2134 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2132 = 0.2572446938728945)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2131 = 0.7368273300613719)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2132 ^name predict-no +)
- (S1 ^operator O2132 +)
- Retracting propose*predict-yes
- -->
- (O2131 ^name predict-yes +)
- (S1 ^operator O2131 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1069 ^value 1 +)
- (R1 ^reward R1069 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2132 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2131 = 0.)
- =>WM: (15032: S1 ^operator O2134 +)
- =>WM: (15031: S1 ^operator O2133 +)
- =>WM: (15030: I3 ^dir R)
- =>WM: (15029: O2134 ^name predict-no)
- =>WM: (15028: O2133 ^name predict-yes)
- =>WM: (15027: R1070 ^value 1)
- =>WM: (15026: R1 ^reward R1070)
- <=WM: (15017: S1 ^operator O2131 +)
- <=WM: (15018: S1 ^operator O2132 +)
- <=WM: (15019: S1 ^operator O2132)
- <=WM: (15002: I3 ^dir U)
- <=WM: (15013: R1 ^reward R1069)
- <=WM: (15016: O2132 ^name predict-no)
- <=WM: (15015: O2131 ^name predict-yes)
- <=WM: (15014: R1069 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*40
- -->
- (S1 ^operator O2133 = 0.2631694281035112)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2133 = 0.7368273300613719)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*39
- -->
- (S1 ^operator O2134 = -0.1377248055371832)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2134 = 0.2572446938728945)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2132 = 0.2572446938728945)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*39
- -->
- (S1 ^operator O2132 = -0.1377248055371832)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2131 = 0.7368273300613719)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*40
- -->
- (S1 ^operator O2131 = 0.2631694281035112)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (15033: S1 ^operator O2133)
- 1067: O: O2133 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N1067 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1066 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15034: I3 ^predict-yes N1067)
- <=WM: (15021: N1066 ^status complete)
- <=WM: (15020: I3 ^predict-no N1066)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- |\--- Input Phase ---
- =>WM: (15038: I2 ^dir U)
- =>WM: (15037: I2 ^reward 1)
- =>WM: (15036: I2 ^see 1)
- =>WM: (15035: N1067 ^status complete)
- <=WM: (15024: I2 ^dir R)
- <=WM: (15023: I2 ^reward 1)
- <=WM: (15022: I2 ^see 0)
- =>WM: (15039: I2 ^level-1 R1-root)
- <=WM: (15025: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1071 ^value 1 +)
- (R1 ^reward R1071 +)
- Firing propose*predict-yes
- -->
- (O2135 ^name predict-yes +)
- (S1 ^operator O2135 +)
- Firing propose*predict-no
- -->
- (O2136 ^name predict-no +)
- (S1 ^operator O2136 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2134 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2133 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2134 ^name predict-no +)
- (S1 ^operator O2134 +)
- Retracting propose*predict-yes
- -->
- (O2133 ^name predict-yes +)
- (S1 ^operator O2133 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1070 ^value 1 +)
- (R1 ^reward R1070 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2134 = 0.2572446938728945)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*39
- -->
- (S1 ^operator O2134 = -0.1377248055371832)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2133 = 0.7368273300613719)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*40
- -->
- (S1 ^operator O2133 = 0.2631694281035112)
- =>WM: (15047: S1 ^operator O2136 +)
- =>WM: (15046: S1 ^operator O2135 +)
- =>WM: (15045: I3 ^dir U)
- =>WM: (15044: O2136 ^name predict-no)
- =>WM: (15043: O2135 ^name predict-yes)
- =>WM: (15042: R1071 ^value 1)
- =>WM: (15041: R1 ^reward R1071)
- =>WM: (15040: I3 ^see 1)
- <=WM: (15031: S1 ^operator O2133 +)
- <=WM: (15033: S1 ^operator O2133)
- <=WM: (15032: S1 ^operator O2134 +)
- <=WM: (15030: I3 ^dir R)
- <=WM: (15026: R1 ^reward R1070)
- <=WM: (15012: I3 ^see 0)
- <=WM: (15029: O2134 ^name predict-no)
- <=WM: (15028: O2133 ^name predict-yes)
- <=WM: (15027: R1070 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2135 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2136 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2134 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2133 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*3 0.748236 -0.0114085 0.736827 -> 0.748236 -0.0114082 0.736828(R,m,v=1,0.901734,0.0891249)
- RL update rl*prefer*rvt*predict-yes*H0*3*v1*H1*40 0.251763 0.0114062 0.263169 -> 0.251763 0.0114065 0.26317(R,m,v=1,1,0)
- =>WM: (15048: S1 ^operator O2136)
- 1068: O: O2136 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1068 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N1067 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15049: I3 ^predict-no N1068)
- <=WM: (15035: N1067 ^status complete)
- <=WM: (15034: I3 ^predict-yes N1067)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- ---- Input Phase ---
- =>WM: (15053: I2 ^dir L)
- =>WM: (15052: I2 ^reward 1)
- =>WM: (15051: I2 ^see 0)
- =>WM: (15050: N1068 ^status complete)
- <=WM: (15038: I2 ^dir U)
- <=WM: (15037: I2 ^reward 1)
- <=WM: (15036: I2 ^see 1)
- =>WM: (15054: I2 ^level-1 R1-root)
- <=WM: (15039: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*45
- -->
- (S1 ^operator O2135 = 0.5681072363445543)
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*44
- -->
- (S1 ^operator O2136 = -0.1549421060161498)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1072 ^value 1 +)
- (R1 ^reward R1072 +)
- Firing propose*predict-yes
- -->
- (O2137 ^name predict-yes +)
- (S1 ^operator O2137 +)
- Firing propose*predict-no
- -->
- (O2138 ^name predict-no +)
- (S1 ^operator O2138 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2136 = 0.3289464555317863)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2135 = 0.4318901527040073)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O2136 ^name predict-no +)
- (S1 ^operator O2136 +)
- Retracting propose*predict-yes
- -->
- (O2135 ^name predict-yes +)
- (S1 ^operator O2135 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1071 ^value 1 +)
- (R1 ^reward R1071 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2136 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2135 = 0.)
- =>WM: (15062: S1 ^operator O2138 +)
- =>WM: (15061: S1 ^operator O2137 +)
- =>WM: (15060: I3 ^dir L)
- =>WM: (15059: O2138 ^name predict-no)
- =>WM: (15058: O2137 ^name predict-yes)
- =>WM: (15057: R1072 ^value 1)
- =>WM: (15056: R1 ^reward R1072)
- =>WM: (15055: I3 ^see 0)
- <=WM: (15046: S1 ^operator O2135 +)
- <=WM: (15047: S1 ^operator O2136 +)
- <=WM: (15048: S1 ^operator O2136)
- <=WM: (15045: I3 ^dir U)
- <=WM: (15041: R1 ^reward R1071)
- <=WM: (15040: I3 ^see 1)
- <=WM: (15044: O2136 ^name predict-no)
- <=WM: (15043: O2135 ^name predict-yes)
- <=WM: (15042: R1071 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*45
- -->
- (S1 ^operator O2137 = 0.5681072363445543)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2137 = 0.4318901527040073)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*44
- -->
- (S1 ^operator O2138 = -0.1549421060161498)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2138 = 0.3289464555317863)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2136 = 0.3289464555317863)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*44
- -->
- (S1 ^operator O2136 = -0.1549421060161498)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2135 = 0.4318901527040073)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*45
- -->
- (S1 ^operator O2135 = 0.5681072363445543)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (15063: S1 ^operator O2137)
- 1069: O: O2137 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N1069 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1068 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15064: I3 ^predict-yes N1069)
- <=WM: (15050: N1068 ^status complete)
- <=WM: (15049: I3 ^predict-no N1068)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- /|--- Input Phase ---
- =>WM: (15068: I2 ^dir U)
- =>WM: (15067: I2 ^reward 1)
- =>WM: (15066: I2 ^see 1)
- =>WM: (15065: N1069 ^status complete)
- <=WM: (15053: I2 ^dir L)
- <=WM: (15052: I2 ^reward 1)
- <=WM: (15051: I2 ^see 0)
- =>WM: (15069: I2 ^level-1 L1-root)
- <=WM: (15054: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1073 ^value 1 +)
- (R1 ^reward R1073 +)
- Firing propose*predict-yes
- -->
- (O2139 ^name predict-yes +)
- (S1 ^operator O2139 +)
- Firing propose*predict-no
- -->
- (O2140 ^name predict-no +)
- (S1 ^operator O2140 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2138 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2137 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2138 ^name predict-no +)
- (S1 ^operator O2138 +)
- Retracting propose*predict-yes
- -->
- (O2137 ^name predict-yes +)
- (S1 ^operator O2137 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1072 ^value 1 +)
- (R1 ^reward R1072 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2138 = 0.3289464555317863)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*44
- -->
- (S1 ^operator O2138 = -0.1549421060161498)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2137 = 0.4318901527040073)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*45
- -->
- (S1 ^operator O2137 = 0.5681072363445543)
- =>WM: (15077: S1 ^operator O2140 +)
- =>WM: (15076: S1 ^operator O2139 +)
- =>WM: (15075: I3 ^dir U)
- =>WM: (15074: O2140 ^name predict-no)
- =>WM: (15073: O2139 ^name predict-yes)
- =>WM: (15072: R1073 ^value 1)
- =>WM: (15071: R1 ^reward R1073)
- =>WM: (15070: I3 ^see 1)
- <=WM: (15061: S1 ^operator O2137 +)
- <=WM: (15063: S1 ^operator O2137)
- <=WM: (15062: S1 ^operator O2138 +)
- <=WM: (15060: I3 ^dir L)
- <=WM: (15056: R1 ^reward R1072)
- <=WM: (15055: I3 ^see 0)
- <=WM: (15059: O2138 ^name predict-no)
- <=WM: (15058: O2137 ^name predict-yes)
- <=WM: (15057: R1072 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2139 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2140 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2138 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2137 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*5 0.683776 -0.251886 0.43189 -> 0.683777 -0.251886 0.431891(R,m,v=1,0.926966,0.0680823)
- RL update rl*prefer*rvt*predict-yes*H0*5*v1*H1*45 0.316221 0.251886 0.568107 -> 0.316222 0.251886 0.568108(R,m,v=1,1,0)
- =>WM: (15078: S1 ^operator O2140)
- 1070: O: O2140 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1070 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N1069 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15079: I3 ^predict-no N1070)
- <=WM: (15065: N1069 ^status complete)
- <=WM: (15064: I3 ^predict-yes N1069)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (15083: I2 ^dir U)
- =>WM: (15082: I2 ^reward 1)
- =>WM: (15081: I2 ^see 0)
- =>WM: (15080: N1070 ^status complete)
- <=WM: (15068: I2 ^dir U)
- <=WM: (15067: I2 ^reward 1)
- <=WM: (15066: I2 ^see 1)
- =>WM: (15084: I2 ^level-1 L1-root)
- <=WM: (15069: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1074 ^value 1 +)
- (R1 ^reward R1074 +)
- Firing propose*predict-yes
- -->
- (O2141 ^name predict-yes +)
- (S1 ^operator O2141 +)
- Firing propose*predict-no
- -->
- (O2142 ^name predict-no +)
- (S1 ^operator O2142 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2140 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2139 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O2140 ^name predict-no +)
- (S1 ^operator O2140 +)
- Retracting propose*predict-yes
- -->
- (O2139 ^name predict-yes +)
- (S1 ^operator O2139 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1073 ^value 1 +)
- (R1 ^reward R1073 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2140 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2139 = 0.)
- =>WM: (15091: S1 ^operator O2142 +)
- =>WM: (15090: S1 ^operator O2141 +)
- =>WM: (15089: O2142 ^name predict-no)
- =>WM: (15088: O2141 ^name predict-yes)
- =>WM: (15087: R1074 ^value 1)
- =>WM: (15086: R1 ^reward R1074)
- =>WM: (15085: I3 ^see 0)
- <=WM: (15076: S1 ^operator O2139 +)
- <=WM: (15077: S1 ^operator O2140 +)
- <=WM: (15078: S1 ^operator O2140)
- <=WM: (15071: R1 ^reward R1073)
- <=WM: (15070: I3 ^see 1)
- <=WM: (15074: O2140 ^name predict-no)
- <=WM: (15073: O2139 ^name predict-yes)
- <=WM: (15072: R1073 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2141 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2142 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2140 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2139 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (15092: S1 ^operator O2142)
- 1071: O: O2142 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1071 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1070 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15093: I3 ^predict-no N1071)
- <=WM: (15080: N1070 ^status complete)
- <=WM: (15079: I3 ^predict-no N1070)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (15097: I2 ^dir R)
- =>WM: (15096: I2 ^reward 1)
- =>WM: (15095: I2 ^see 0)
- =>WM: (15094: N1071 ^status complete)
- <=WM: (15083: I2 ^dir U)
- <=WM: (15082: I2 ^reward 1)
- <=WM: (15081: I2 ^see 0)
- =>WM: (15098: I2 ^level-1 L1-root)
- <=WM: (15084: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*39
- -->
- (S1 ^operator O2142 = -0.1377248055371832)
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*40
- -->
- (S1 ^operator O2141 = 0.2631699143787788)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1075 ^value 1 +)
- (R1 ^reward R1075 +)
- Firing propose*predict-yes
- -->
- (O2143 ^name predict-yes +)
- (S1 ^operator O2143 +)
- Firing propose*predict-no
- -->
- (O2144 ^name predict-no +)
- (S1 ^operator O2144 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2142 = 0.2572446938728945)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2141 = 0.7368278163366394)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2142 ^name predict-no +)
- (S1 ^operator O2142 +)
- Retracting propose*predict-yes
- -->
- (O2141 ^name predict-yes +)
- (S1 ^operator O2141 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1074 ^value 1 +)
- (R1 ^reward R1074 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2142 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2141 = 0.)
- =>WM: (15105: S1 ^operator O2144 +)
- =>WM: (15104: S1 ^operator O2143 +)
- =>WM: (15103: I3 ^dir R)
- =>WM: (15102: O2144 ^name predict-no)
- =>WM: (15101: O2143 ^name predict-yes)
- =>WM: (15100: R1075 ^value 1)
- =>WM: (15099: R1 ^reward R1075)
- <=WM: (15090: S1 ^operator O2141 +)
- <=WM: (15091: S1 ^operator O2142 +)
- <=WM: (15092: S1 ^operator O2142)
- <=WM: (15075: I3 ^dir U)
- <=WM: (15086: R1 ^reward R1074)
- <=WM: (15089: O2142 ^name predict-no)
- <=WM: (15088: O2141 ^name predict-yes)
- <=WM: (15087: R1074 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*40
- -->
- (S1 ^operator O2143 = 0.2631699143787788)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2143 = 0.7368278163366394)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*39
- -->
- (S1 ^operator O2144 = -0.1377248055371832)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2144 = 0.2572446938728945)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2142 = 0.2572446938728945)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*39
- -->
- (S1 ^operator O2142 = -0.1377248055371832)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2141 = 0.7368278163366394)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*40
- -->
- (S1 ^operator O2141 = 0.2631699143787788)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (15106: S1 ^operator O2143)
- 1072: O: O2143 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N1072 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1071 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15107: I3 ^predict-yes N1072)
- <=WM: (15094: N1071 ^status complete)
- <=WM: (15093: I3 ^predict-no N1071)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (15111: I2 ^dir L)
- =>WM: (15110: I2 ^reward 1)
- =>WM: (15109: I2 ^see 1)
- =>WM: (15108: N1072 ^status complete)
- <=WM: (15097: I2 ^dir R)
- <=WM: (15096: I2 ^reward 1)
- <=WM: (15095: I2 ^see 0)
- =>WM: (15112: I2 ^level-1 R1-root)
- <=WM: (15098: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*45
- -->
- (S1 ^operator O2143 = 0.5681076279872701)
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*44
- -->
- (S1 ^operator O2144 = -0.1549421060161498)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1076 ^value 1 +)
- (R1 ^reward R1076 +)
- Firing propose*predict-yes
- -->
- (O2145 ^name predict-yes +)
- (S1 ^operator O2145 +)
- Firing propose*predict-no
- -->
- (O2146 ^name predict-no +)
- (S1 ^operator O2146 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2144 = 0.3289464555317863)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2143 = 0.431890544346723)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2144 ^name predict-no +)
- (S1 ^operator O2144 +)
- Retracting propose*predict-yes
- -->
- (O2143 ^name predict-yes +)
- (S1 ^operator O2143 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1075 ^value 1 +)
- (R1 ^reward R1075 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2144 = 0.2572446938728945)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*39
- -->
- (S1 ^operator O2144 = -0.1377248055371832)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2143 = 0.7368278163366394)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*40
- -->
- (S1 ^operator O2143 = 0.2631699143787788)
- =>WM: (15120: S1 ^operator O2146 +)
- =>WM: (15119: S1 ^operator O2145 +)
- =>WM: (15118: I3 ^dir L)
- =>WM: (15117: O2146 ^name predict-no)
- =>WM: (15116: O2145 ^name predict-yes)
- =>WM: (15115: R1076 ^value 1)
- =>WM: (15114: R1 ^reward R1076)
- =>WM: (15113: I3 ^see 1)
- <=WM: (15104: S1 ^operator O2143 +)
- <=WM: (15106: S1 ^operator O2143)
- <=WM: (15105: S1 ^operator O2144 +)
- <=WM: (15103: I3 ^dir R)
- <=WM: (15099: R1 ^reward R1075)
- <=WM: (15085: I3 ^see 0)
- <=WM: (15102: O2144 ^name predict-no)
- <=WM: (15101: O2143 ^name predict-yes)
- <=WM: (15100: R1075 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2145 = 0.431890544346723)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*45
- -->
- (S1 ^operator O2145 = 0.5681076279872701)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2146 = 0.3289464555317863)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*44
- -->
- (S1 ^operator O2146 = -0.1549421060161498)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2144 = 0.3289464555317863)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*44
- -->
- (S1 ^operator O2144 = -0.1549421060161498)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2143 = 0.431890544346723)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*45
- -->
- (S1 ^operator O2143 = 0.5681076279872701)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*3 0.748236 -0.0114082 0.736828 -> 0.748236 -0.0114079 0.736828(R,m,v=1,0.902299,0.0886652)
- RL update rl*prefer*rvt*predict-yes*H0*3*v1*H1*40 0.251763 0.0114065 0.26317 -> 0.251763 0.0114068 0.26317(R,m,v=1,1,0)
- =>WM: (15121: S1 ^operator O2145)
- 1073: O: O2145 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N1073 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N1072 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15122: I3 ^predict-yes N1073)
- <=WM: (15108: N1072 ^status complete)
- <=WM: (15107: I3 ^predict-yes N1072)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (15126: I2 ^dir L)
- =>WM: (15125: I2 ^reward 1)
- =>WM: (15124: I2 ^see 1)
- =>WM: (15123: N1073 ^status complete)
- <=WM: (15111: I2 ^dir L)
- <=WM: (15110: I2 ^reward 1)
- <=WM: (15109: I2 ^see 1)
- =>WM: (15127: I2 ^level-1 L1-root)
- <=WM: (15112: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2146 = 0.6710531621402969)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2145 = -0.06092862110810815)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1077 ^value 1 +)
- (R1 ^reward R1077 +)
- Firing propose*predict-yes
- -->
- (O2147 ^name predict-yes +)
- (S1 ^operator O2147 +)
- Firing propose*predict-no
- -->
- (O2148 ^name predict-no +)
- (S1 ^operator O2148 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2146 = 0.3289464555317863)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2145 = 0.431890544346723)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O2146 ^name predict-no +)
- (S1 ^operator O2146 +)
- Retracting propose*predict-yes
- -->
- (O2145 ^name predict-yes +)
- (S1 ^operator O2145 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1076 ^value 1 +)
- (R1 ^reward R1076 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*44
- -->
- (S1 ^operator O2146 = -0.1549421060161498)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2146 = 0.3289464555317863)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*45
- -->
- (S1 ^operator O2145 = 0.5681076279872701)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2145 = 0.431890544346723)
- =>WM: (15133: S1 ^operator O2148 +)
- =>WM: (15132: S1 ^operator O2147 +)
- =>WM: (15131: O2148 ^name predict-no)
- =>WM: (15130: O2147 ^name predict-yes)
- =>WM: (15129: R1077 ^value 1)
- =>WM: (15128: R1 ^reward R1077)
- <=WM: (15119: S1 ^operator O2145 +)
- <=WM: (15121: S1 ^operator O2145)
- <=WM: (15120: S1 ^operator O2146 +)
- <=WM: (15114: R1 ^reward R1076)
- <=WM: (15117: O2146 ^name predict-no)
- <=WM: (15116: O2145 ^name predict-yes)
- <=WM: (15115: R1076 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2147 = 0.431890544346723)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2147 = -0.06092862110810815)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2148 = 0.3289464555317863)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2148 = 0.6710531621402969)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2146 = 0.3289464555317863)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2146 = 0.6710531621402969)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2145 = 0.431890544346723)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2145 = -0.06092862110810815)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*5 0.683777 -0.251886 0.431891 -> 0.683777 -0.251886 0.431891(R,m,v=1,0.927374,0.0677296)
- RL update rl*prefer*rvt*predict-yes*H0*5*v1*H1*45 0.316222 0.251886 0.568108 -> 0.316222 0.251886 0.568108(R,m,v=1,1,0)
- =>WM: (15134: S1 ^operator O2148)
- 1074: O: O2148 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1074 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N1073 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15135: I3 ^predict-no N1074)
- <=WM: (15123: N1073 ^status complete)
- <=WM: (15122: I3 ^predict-yes N1073)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (15139: I2 ^dir L)
- =>WM: (15138: I2 ^reward 1)
- =>WM: (15137: I2 ^see 0)
- =>WM: (15136: N1074 ^status complete)
- <=WM: (15126: I2 ^dir L)
- <=WM: (15125: I2 ^reward 1)
- <=WM: (15124: I2 ^see 1)
- =>WM: (15140: I2 ^level-1 L0-root)
- <=WM: (15127: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*33
- -->
- (S1 ^operator O2148 = 0.6710545794995983)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*32
- -->
- (S1 ^operator O2147 = 0.02602968095631553)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1078 ^value 1 +)
- (R1 ^reward R1078 +)
- Firing propose*predict-yes
- -->
- (O2149 ^name predict-yes +)
- (S1 ^operator O2149 +)
- Firing propose*predict-no
- -->
- (O2150 ^name predict-no +)
- (S1 ^operator O2150 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2148 = 0.3289464555317863)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2147 = 0.431890818496624)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O2148 ^name predict-no +)
- (S1 ^operator O2148 +)
- Retracting propose*predict-yes
- -->
- (O2147 ^name predict-yes +)
- (S1 ^operator O2147 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1077 ^value 1 +)
- (R1 ^reward R1077 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2148 = 0.6710531621402969)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2148 = 0.3289464555317863)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2147 = -0.06092862110810815)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2147 = 0.431890818496624)
- =>WM: (15147: S1 ^operator O2150 +)
- =>WM: (15146: S1 ^operator O2149 +)
- =>WM: (15145: O2150 ^name predict-no)
- =>WM: (15144: O2149 ^name predict-yes)
- =>WM: (15143: R1078 ^value 1)
- =>WM: (15142: R1 ^reward R1078)
- =>WM: (15141: I3 ^see 0)
- <=WM: (15132: S1 ^operator O2147 +)
- <=WM: (15133: S1 ^operator O2148 +)
- <=WM: (15134: S1 ^operator O2148)
- <=WM: (15128: R1 ^reward R1077)
- <=WM: (15113: I3 ^see 1)
- <=WM: (15131: O2148 ^name predict-no)
- <=WM: (15130: O2147 ^name predict-yes)
- <=WM: (15129: R1077 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2149 = 0.431890818496624)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*32
- -->
- (S1 ^operator O2149 = 0.02602968095631553)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2150 = 0.3289464555317863)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*33
- -->
- (S1 ^operator O2150 = 0.6710545794995983)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2148 = 0.3289464555317863)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*33
- -->
- (S1 ^operator O2148 = 0.6710545794995983)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2147 = 0.431890818496624)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*32
- -->
- (S1 ^operator O2147 = 0.02602968095631553)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*6 0.565404 -0.236458 0.328946 -> 0.565404 -0.236458 0.328947(R,m,v=1,0.910714,0.0818007)
- RL update rl*prefer*rvt*predict-no*H0*6*v1*H1*43 0.434595 0.236458 0.671053 -> 0.434595 0.236458 0.671053(R,m,v=1,1,0)
- =>WM: (15148: S1 ^operator O2150)
- 1075: O: O2150 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1075 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1074 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15149: I3 ^predict-no N1075)
- <=WM: (15136: N1074 ^status complete)
- <=WM: (15135: I3 ^predict-no N1074)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (15153: I2 ^dir L)
- =>WM: (15152: I2 ^reward 1)
- =>WM: (15151: I2 ^see 0)
- =>WM: (15150: N1075 ^status complete)
- <=WM: (15139: I2 ^dir L)
- <=WM: (15138: I2 ^reward 1)
- <=WM: (15137: I2 ^see 0)
- =>WM: (15154: I2 ^level-1 L0-root)
- <=WM: (15140: I2 ^level-1 L0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*33
- -->
- (S1 ^operator O2150 = 0.6710545794995983)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*32
- -->
- (S1 ^operator O2149 = 0.02602968095631553)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1079 ^value 1 +)
- (R1 ^reward R1079 +)
- Firing propose*predict-yes
- -->
- (O2151 ^name predict-yes +)
- (S1 ^operator O2151 +)
- Firing propose*predict-no
- -->
- (O2152 ^name predict-no +)
- (S1 ^operator O2152 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2150 = 0.3289465128809739)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2149 = 0.431890818496624)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2150 ^name predict-no +)
- (S1 ^operator O2150 +)
- Retracting propose*predict-yes
- -->
- (O2149 ^name predict-yes +)
- (S1 ^operator O2149 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1078 ^value 1 +)
- (R1 ^reward R1078 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*33
- -->
- (S1 ^operator O2150 = 0.6710545794995983)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2150 = 0.3289465128809739)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*32
- -->
- (S1 ^operator O2149 = 0.02602968095631553)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2149 = 0.431890818496624)
- =>WM: (15160: S1 ^operator O2152 +)
- =>WM: (15159: S1 ^operator O2151 +)
- =>WM: (15158: O2152 ^name predict-no)
- =>WM: (15157: O2151 ^name predict-yes)
- =>WM: (15156: R1079 ^value 1)
- =>WM: (15155: R1 ^reward R1079)
- <=WM: (15146: S1 ^operator O2149 +)
- <=WM: (15147: S1 ^operator O2150 +)
- <=WM: (15148: S1 ^operator O2150)
- <=WM: (15142: R1 ^reward R1078)
- <=WM: (15145: O2150 ^name predict-no)
- <=WM: (15144: O2149 ^name predict-yes)
- <=WM: (15143: R1078 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2151 = 0.431890818496624)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*32
- -->
- (S1 ^operator O2151 = 0.02602968095631553)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2152 = 0.3289465128809739)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*33
- -->
- (S1 ^operator O2152 = 0.6710545794995983)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2150 = 0.3289465128809739)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*33
- -->
- (S1 ^operator O2150 = 0.6710545794995983)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2149 = 0.431890818496624)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*32
- -->
- (S1 ^operator O2149 = 0.02602968095631553)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*6 0.565404 -0.236458 0.328947 -> 0.565404 -0.236458 0.328946(R,m,v=1,0.911243,0.0813609)
- RL update rl*prefer*rvt*predict-no*H0*6*v1*H1*33 0.434598 0.236457 0.671055 -> 0.434597 0.236457 0.671054(R,m,v=1,1,0)
- =>WM: (15161: S1 ^operator O2152)
- 1076: O: O2152 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1076 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1075 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15162: I3 ^predict-no N1076)
- <=WM: (15150: N1075 ^status complete)
- <=WM: (15149: I3 ^predict-no N1075)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- -/|--- Input Phase ---
- =>WM: (15166: I2 ^dir U)
- =>WM: (15165: I2 ^reward 1)
- =>WM: (15164: I2 ^see 0)
- =>WM: (15163: N1076 ^status complete)
- <=WM: (15153: I2 ^dir L)
- <=WM: (15152: I2 ^reward 1)
- <=WM: (15151: I2 ^see 0)
- =>WM: (15167: I2 ^level-1 L0-root)
- <=WM: (15154: I2 ^level-1 L0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1080 ^value 1 +)
- (R1 ^reward R1080 +)
- Firing propose*predict-yes
- -->
- (O2153 ^name predict-yes +)
- (S1 ^operator O2153 +)
- Firing propose*predict-no
- -->
- (O2154 ^name predict-no +)
- (S1 ^operator O2154 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2152 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2151 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2152 ^name predict-no +)
- (S1 ^operator O2152 +)
- Retracting propose*predict-yes
- -->
- (O2151 ^name predict-yes +)
- (S1 ^operator O2151 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1079 ^value 1 +)
- (R1 ^reward R1079 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*33
- -->
- (S1 ^operator O2152 = 0.6710544156425126)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2152 = 0.3289463490238881)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*32
- -->
- (S1 ^operator O2151 = 0.02602968095631553)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2151 = 0.431890818496624)
- =>WM: (15174: S1 ^operator O2154 +)
- =>WM: (15173: S1 ^operator O2153 +)
- =>WM: (15172: I3 ^dir U)
- =>WM: (15171: O2154 ^name predict-no)
- =>WM: (15170: O2153 ^name predict-yes)
- =>WM: (15169: R1080 ^value 1)
- =>WM: (15168: R1 ^reward R1080)
- <=WM: (15159: S1 ^operator O2151 +)
- <=WM: (15160: S1 ^operator O2152 +)
- <=WM: (15161: S1 ^operator O2152)
- <=WM: (15118: I3 ^dir L)
- <=WM: (15155: R1 ^reward R1079)
- <=WM: (15158: O2152 ^name predict-no)
- <=WM: (15157: O2151 ^name predict-yes)
- <=WM: (15156: R1079 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2153 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2154 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2152 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2151 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*6 0.565404 -0.236458 0.328946 -> 0.565404 -0.236458 0.328946(R,m,v=1,0.911765,0.0809259)
- RL update rl*prefer*rvt*predict-no*H0*6*v1*H1*33 0.434597 0.236457 0.671054 -> 0.434597 0.236457 0.671054(R,m,v=1,1,0)
- =>WM: (15175: S1 ^operator O2154)
- 1077: O: O2154 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1077 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1076 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15176: I3 ^predict-no N1077)
- <=WM: (15163: N1076 ^status complete)
- <=WM: (15162: I3 ^predict-no N1076)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- \---- Input Phase ---
- =>WM: (15180: I2 ^dir R)
- =>WM: (15179: I2 ^reward 1)
- =>WM: (15178: I2 ^see 0)
- =>WM: (15177: N1077 ^status complete)
- <=WM: (15166: I2 ^dir U)
- <=WM: (15165: I2 ^reward 1)
- <=WM: (15164: I2 ^see 0)
- =>WM: (15181: I2 ^level-1 L0-root)
- <=WM: (15167: I2 ^level-1 L0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2154 = -0.07401383653737587)
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2153 = 0.2631730280152305)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1081 ^value 1 +)
- (R1 ^reward R1081 +)
- Firing propose*predict-yes
- -->
- (O2155 ^name predict-yes +)
- (S1 ^operator O2155 +)
- Firing propose*predict-no
- -->
- (O2156 ^name predict-no +)
- (S1 ^operator O2156 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2154 = 0.2572446938728945)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2153 = 0.7368281567293268)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2154 ^name predict-no +)
- (S1 ^operator O2154 +)
- Retracting propose*predict-yes
- -->
- (O2153 ^name predict-yes +)
- (S1 ^operator O2153 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1080 ^value 1 +)
- (R1 ^reward R1080 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2154 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2153 = 0.)
- =>WM: (15188: S1 ^operator O2156 +)
- =>WM: (15187: S1 ^operator O2155 +)
- =>WM: (15186: I3 ^dir R)
- =>WM: (15185: O2156 ^name predict-no)
- =>WM: (15184: O2155 ^name predict-yes)
- =>WM: (15183: R1081 ^value 1)
- =>WM: (15182: R1 ^reward R1081)
- <=WM: (15173: S1 ^operator O2153 +)
- <=WM: (15174: S1 ^operator O2154 +)
- <=WM: (15175: S1 ^operator O2154)
- <=WM: (15172: I3 ^dir U)
- <=WM: (15168: R1 ^reward R1080)
- <=WM: (15171: O2154 ^name predict-no)
- <=WM: (15170: O2153 ^name predict-yes)
- <=WM: (15169: R1080 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2155 = 0.2631730280152305)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2155 = 0.7368281567293268)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2156 = -0.07401383653737587)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2156 = 0.2572446938728945)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2154 = 0.2572446938728945)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2154 = -0.07401383653737587)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2153 = 0.7368281567293268)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2153 = 0.2631730280152305)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (15189: S1 ^operator O2155)
- 1078: O: O2155 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N1078 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1077 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15190: I3 ^predict-yes N1078)
- <=WM: (15177: N1077 ^status complete)
- <=WM: (15176: I3 ^predict-no N1077)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- /--- Input Phase ---
- =>WM: (15194: I2 ^dir U)
- =>WM: (15193: I2 ^reward 1)
- =>WM: (15192: I2 ^see 1)
- =>WM: (15191: N1078 ^status complete)
- <=WM: (15180: I2 ^dir R)
- <=WM: (15179: I2 ^reward 1)
- <=WM: (15178: I2 ^see 0)
- =>WM: (15195: I2 ^level-1 R1-root)
- <=WM: (15181: I2 ^level-1 L0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1082 ^value 1 +)
- (R1 ^reward R1082 +)
- Firing propose*predict-yes
- -->
- (O2157 ^name predict-yes +)
- (S1 ^operator O2157 +)
- Firing propose*predict-no
- -->
- (O2158 ^name predict-no +)
- (S1 ^operator O2158 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2156 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2155 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2156 ^name predict-no +)
- (S1 ^operator O2156 +)
- Retracting propose*predict-yes
- -->
- (O2155 ^name predict-yes +)
- (S1 ^operator O2155 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1081 ^value 1 +)
- (R1 ^reward R1081 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2156 = 0.2572446938728945)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2156 = -0.07401383653737587)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2155 = 0.7368281567293268)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2155 = 0.2631730280152305)
- =>WM: (15203: S1 ^operator O2158 +)
- =>WM: (15202: S1 ^operator O2157 +)
- =>WM: (15201: I3 ^dir U)
- =>WM: (15200: O2158 ^name predict-no)
- =>WM: (15199: O2157 ^name predict-yes)
- =>WM: (15198: R1082 ^value 1)
- =>WM: (15197: R1 ^reward R1082)
- =>WM: (15196: I3 ^see 1)
- <=WM: (15187: S1 ^operator O2155 +)
- <=WM: (15189: S1 ^operator O2155)
- <=WM: (15188: S1 ^operator O2156 +)
- <=WM: (15186: I3 ^dir R)
- <=WM: (15182: R1 ^reward R1081)
- <=WM: (15141: I3 ^see 0)
- <=WM: (15185: O2156 ^name predict-no)
- <=WM: (15184: O2155 ^name predict-yes)
- <=WM: (15183: R1081 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2157 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2158 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2156 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2155 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*3 0.748236 -0.0114079 0.736828 -> 0.748236 -0.0114081 0.736828(R,m,v=1,0.902857,0.0882102)
- RL update rl*prefer*rvt*predict-yes*H0*3*v1*H1*35 0.251764 0.0114088 0.263173 -> 0.251764 0.0114087 0.263173(R,m,v=1,1,0)
- =>WM: (15204: S1 ^operator O2158)
- 1079: O: O2158 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1079 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N1078 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15205: I3 ^predict-no N1079)
- <=WM: (15191: N1078 ^status complete)
- <=WM: (15190: I3 ^predict-yes N1078)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- |\--- Input Phase ---
- =>WM: (15209: I2 ^dir R)
- =>WM: (15208: I2 ^reward 1)
- =>WM: (15207: I2 ^see 0)
- =>WM: (15206: N1079 ^status complete)
- <=WM: (15194: I2 ^dir U)
- <=WM: (15193: I2 ^reward 1)
- <=WM: (15192: I2 ^see 1)
- =>WM: (15210: I2 ^level-1 R1-root)
- <=WM: (15195: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2157 = -0.3011268063455669)
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2158 = 0.7427538419632254)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1083 ^value 1 +)
- (R1 ^reward R1083 +)
- Firing propose*predict-yes
- -->
- (O2159 ^name predict-yes +)
- (S1 ^operator O2159 +)
- Firing propose*predict-no
- -->
- (O2160 ^name predict-no +)
- (S1 ^operator O2160 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2158 = 0.2572446938728945)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2157 = 0.7368279790176432)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O2158 ^name predict-no +)
- (S1 ^operator O2158 +)
- Retracting propose*predict-yes
- -->
- (O2157 ^name predict-yes +)
- (S1 ^operator O2157 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1082 ^value 1 +)
- (R1 ^reward R1082 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2158 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2157 = 0.)
- =>WM: (15218: S1 ^operator O2160 +)
- =>WM: (15217: S1 ^operator O2159 +)
- =>WM: (15216: I3 ^dir R)
- =>WM: (15215: O2160 ^name predict-no)
- =>WM: (15214: O2159 ^name predict-yes)
- =>WM: (15213: R1083 ^value 1)
- =>WM: (15212: R1 ^reward R1083)
- =>WM: (15211: I3 ^see 0)
- <=WM: (15202: S1 ^operator O2157 +)
- <=WM: (15203: S1 ^operator O2158 +)
- <=WM: (15204: S1 ^operator O2158)
- <=WM: (15201: I3 ^dir U)
- <=WM: (15197: R1 ^reward R1082)
- <=WM: (15196: I3 ^see 1)
- <=WM: (15200: O2158 ^name predict-no)
- <=WM: (15199: O2157 ^name predict-yes)
- <=WM: (15198: R1082 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2159 = -0.3011268063455669)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2159 = 0.7368279790176432)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2160 = 0.7427538419632254)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2160 = 0.2572446938728945)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2158 = 0.2572446938728945)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2158 = 0.7427538419632254)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2157 = 0.7368279790176432)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2157 = -0.3011268063455669)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (15219: S1 ^operator O2160)
- 1080: O: O2160 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1080 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1079 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15220: I3 ^predict-no N1080)
- <=WM: (15206: N1079 ^status complete)
- <=WM: (15205: I3 ^predict-no N1079)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- -/--- Input Phase ---
- =>WM: (15224: I2 ^dir R)
- =>WM: (15223: I2 ^reward 1)
- =>WM: (15222: I2 ^see 0)
- =>WM: (15221: N1080 ^status complete)
- <=WM: (15209: I2 ^dir R)
- <=WM: (15208: I2 ^reward 1)
- <=WM: (15207: I2 ^see 0)
- =>WM: (15225: I2 ^level-1 R0-root)
- <=WM: (15210: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O2160 = 0.7427559228529783)
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O2159 = -0.1989581826229297)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1084 ^value 1 +)
- (R1 ^reward R1084 +)
- Firing propose*predict-yes
- -->
- (O2161 ^name predict-yes +)
- (S1 ^operator O2161 +)
- Firing propose*predict-no
- -->
- (O2162 ^name predict-no +)
- (S1 ^operator O2162 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2160 = 0.2572446938728945)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2159 = 0.7368279790176432)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2160 ^name predict-no +)
- (S1 ^operator O2160 +)
- Retracting propose*predict-yes
- -->
- (O2159 ^name predict-yes +)
- (S1 ^operator O2159 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1083 ^value 1 +)
- (R1 ^reward R1083 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2160 = 0.2572446938728945)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2160 = 0.7427538419632254)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2159 = 0.7368279790176432)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2159 = -0.3011268063455669)
- =>WM: (15231: S1 ^operator O2162 +)
- =>WM: (15230: S1 ^operator O2161 +)
- =>WM: (15229: O2162 ^name predict-no)
- =>WM: (15228: O2161 ^name predict-yes)
- =>WM: (15227: R1084 ^value 1)
- =>WM: (15226: R1 ^reward R1084)
- <=WM: (15217: S1 ^operator O2159 +)
- <=WM: (15218: S1 ^operator O2160 +)
- <=WM: (15219: S1 ^operator O2160)
- <=WM: (15212: R1 ^reward R1083)
- <=WM: (15215: O2160 ^name predict-no)
- <=WM: (15214: O2159 ^name predict-yes)
- <=WM: (15213: R1083 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2161 = 0.7368279790176432)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O2161 = -0.1989581826229297)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2162 = 0.2572446938728945)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O2162 = 0.7427559228529783)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2160 = 0.2572446938728945)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O2160 = 0.7427559228529783)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2159 = 0.7368279790176432)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O2159 = -0.1989581826229297)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*4 0.586135 -0.32889 0.257245 -> 0.586135 -0.32889 0.257245(R,m,v=1,0.870968,0.11299)
- RL update rl*prefer*rvt*predict-no*H0*4*v1*H1*36 0.413864 0.32889 0.742754 -> 0.413864 0.32889 0.742754(R,m,v=1,1,0)
- =>WM: (15232: S1 ^operator O2162)
- 1081: O: O2162 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1081 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1080 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15233: I3 ^predict-no N1081)
- <=WM: (15221: N1080 ^status complete)
- <=WM: (15220: I3 ^predict-no N1080)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- |--- Input Phase ---
- =>WM: (15237: I2 ^dir U)
- =>WM: (15236: I2 ^reward 1)
- =>WM: (15235: I2 ^see 0)
- =>WM: (15234: N1081 ^status complete)
- <=WM: (15224: I2 ^dir R)
- <=WM: (15223: I2 ^reward 1)
- <=WM: (15222: I2 ^see 0)
- =>WM: (15238: I2 ^level-1 R0-root)
- <=WM: (15225: I2 ^level-1 R0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1085 ^value 1 +)
- (R1 ^reward R1085 +)
- Firing propose*predict-yes
- -->
- (O2163 ^name predict-yes +)
- (S1 ^operator O2163 +)
- Firing propose*predict-no
- -->
- (O2164 ^name predict-no +)
- (S1 ^operator O2164 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2162 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2161 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2162 ^name predict-no +)
- (S1 ^operator O2162 +)
- Retracting propose*predict-yes
- -->
- (O2161 ^name predict-yes +)
- (S1 ^operator O2161 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1084 ^value 1 +)
- (R1 ^reward R1084 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*41
- -->
- (S1 ^operator O2162 = 0.7427559228529783)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2162 = 0.2572449134974765)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*42
- -->
- (S1 ^operator O2161 = -0.1989581826229297)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2161 = 0.7368279790176432)
- =>WM: (15245: S1 ^operator O2164 +)
- =>WM: (15244: S1 ^operator O2163 +)
- =>WM: (15243: I3 ^dir U)
- =>WM: (15242: O2164 ^name predict-no)
- =>WM: (15241: O2163 ^name predict-yes)
- =>WM: (15240: R1085 ^value 1)
- =>WM: (15239: R1 ^reward R1085)
- <=WM: (15230: S1 ^operator O2161 +)
- <=WM: (15231: S1 ^operator O2162 +)
- <=WM: (15232: S1 ^operator O2162)
- <=WM: (15216: I3 ^dir R)
- <=WM: (15226: R1 ^reward R1084)
- <=WM: (15229: O2162 ^name predict-no)
- <=WM: (15228: O2161 ^name predict-yes)
- <=WM: (15227: R1084 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2163 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2164 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2162 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2161 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*4 0.586135 -0.32889 0.257245 -> 0.586135 -0.32889 0.257245(R,m,v=1,0.871658,0.112472)
- RL update rl*prefer*rvt*predict-no*H0*4*v1*H1*41 0.413865 0.32889 0.742756 -> 0.413865 0.32889 0.742756(R,m,v=1,1,0)
- =>WM: (15246: S1 ^operator O2164)
- 1082: O: O2164 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1082 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1081 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15247: I3 ^predict-no N1082)
- <=WM: (15234: N1081 ^status complete)
- <=WM: (15233: I3 ^predict-no N1081)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- \-/|--- Input Phase ---
- =>WM: (15251: I2 ^dir L)
- =>WM: (15250: I2 ^reward 1)
- =>WM: (15249: I2 ^see 0)
- =>WM: (15248: N1082 ^status complete)
- <=WM: (15237: I2 ^dir U)
- <=WM: (15236: I2 ^reward 1)
- <=WM: (15235: I2 ^see 0)
- =>WM: (15252: I2 ^level-1 R0-root)
- <=WM: (15238: I2 ^level-1 R0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2164 = 0.04178081990804111)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2163 = 0.5681101809942384)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1086 ^value 1 +)
- (R1 ^reward R1086 +)
- Firing propose*predict-yes
- -->
- (O2165 ^name predict-yes +)
- (S1 ^operator O2165 +)
- Firing propose*predict-no
- -->
- (O2166 ^name predict-no +)
- (S1 ^operator O2166 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2164 = 0.3289462343239279)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2163 = 0.431890818496624)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2164 ^name predict-no +)
- (S1 ^operator O2164 +)
- Retracting propose*predict-yes
- -->
- (O2163 ^name predict-yes +)
- (S1 ^operator O2163 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1085 ^value 1 +)
- (R1 ^reward R1085 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2164 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2163 = 0.)
- =>WM: (15259: S1 ^operator O2166 +)
- =>WM: (15258: S1 ^operator O2165 +)
- =>WM: (15257: I3 ^dir L)
- =>WM: (15256: O2166 ^name predict-no)
- =>WM: (15255: O2165 ^name predict-yes)
- =>WM: (15254: R1086 ^value 1)
- =>WM: (15253: R1 ^reward R1086)
- <=WM: (15244: S1 ^operator O2163 +)
- <=WM: (15245: S1 ^operator O2164 +)
- <=WM: (15246: S1 ^operator O2164)
- <=WM: (15243: I3 ^dir U)
- <=WM: (15239: R1 ^reward R1085)
- <=WM: (15242: O2164 ^name predict-no)
- <=WM: (15241: O2163 ^name predict-yes)
- <=WM: (15240: R1085 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2165 = 0.5681101809942384)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2165 = 0.431890818496624)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2166 = 0.04178081990804111)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2166 = 0.3289462343239279)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2164 = 0.3289462343239279)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2164 = 0.04178081990804111)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2163 = 0.431890818496624)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2163 = 0.5681101809942384)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (15260: S1 ^operator O2165)
- 1083: O: O2165 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N1083 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1082 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15261: I3 ^predict-yes N1083)
- <=WM: (15248: N1082 ^status complete)
- <=WM: (15247: I3 ^predict-no N1082)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (15265: I2 ^dir R)
- =>WM: (15264: I2 ^reward 1)
- =>WM: (15263: I2 ^see 1)
- =>WM: (15262: N1083 ^status complete)
- <=WM: (15251: I2 ^dir L)
- <=WM: (15250: I2 ^reward 1)
- <=WM: (15249: I2 ^see 0)
- =>WM: (15266: I2 ^level-1 L1-root)
- <=WM: (15252: I2 ^level-1 R0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*39
- -->
- (S1 ^operator O2166 = -0.1377248055371832)
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*40
- -->
- (S1 ^operator O2165 = 0.263170254771466)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1087 ^value 1 +)
- (R1 ^reward R1087 +)
- Firing propose*predict-yes
- -->
- (O2167 ^name predict-yes +)
- (S1 ^operator O2167 +)
- Firing propose*predict-no
- -->
- (O2168 ^name predict-no +)
- (S1 ^operator O2168 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2166 = 0.2572447880449083)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2165 = 0.7368279790176432)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2166 ^name predict-no +)
- (S1 ^operator O2166 +)
- Retracting propose*predict-yes
- -->
- (O2165 ^name predict-yes +)
- (S1 ^operator O2165 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1086 ^value 1 +)
- (R1 ^reward R1086 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2166 = 0.3289462343239279)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2166 = 0.04178081990804111)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2165 = 0.431890818496624)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2165 = 0.5681101809942384)
- =>WM: (15274: S1 ^operator O2168 +)
- =>WM: (15273: S1 ^operator O2167 +)
- =>WM: (15272: I3 ^dir R)
- =>WM: (15271: O2168 ^name predict-no)
- =>WM: (15270: O2167 ^name predict-yes)
- =>WM: (15269: R1087 ^value 1)
- =>WM: (15268: R1 ^reward R1087)
- =>WM: (15267: I3 ^see 1)
- <=WM: (15258: S1 ^operator O2165 +)
- <=WM: (15260: S1 ^operator O2165)
- <=WM: (15259: S1 ^operator O2166 +)
- <=WM: (15257: I3 ^dir L)
- <=WM: (15253: R1 ^reward R1086)
- <=WM: (15211: I3 ^see 0)
- <=WM: (15256: O2166 ^name predict-no)
- <=WM: (15255: O2165 ^name predict-yes)
- <=WM: (15254: R1086 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2167 = 0.7368279790176432)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*40
- -->
- (S1 ^operator O2167 = 0.263170254771466)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2168 = 0.2572447880449083)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*39
- -->
- (S1 ^operator O2168 = -0.1377248055371832)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2166 = 0.2572447880449083)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*39
- -->
- (S1 ^operator O2166 = -0.1377248055371832)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2165 = 0.7368279790176432)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*40
- -->
- (S1 ^operator O2165 = 0.263170254771466)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*5 0.683777 -0.251886 0.431891 -> 0.683777 -0.251886 0.431891(R,m,v=1,0.927778,0.0673805)
- RL update rl*prefer*rvt*predict-yes*H0*5*v1*H1*30 0.316224 0.251886 0.56811 -> 0.316224 0.251886 0.56811(R,m,v=1,1,0)
- =>WM: (15275: S1 ^operator O2167)
- 1084: O: O2167 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N1084 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N1083 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15276: I3 ^predict-yes N1084)
- <=WM: (15262: N1083 ^status complete)
- <=WM: (15261: I3 ^predict-yes N1083)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (15280: I2 ^dir U)
- =>WM: (15279: I2 ^reward 1)
- =>WM: (15278: I2 ^see 1)
- =>WM: (15277: N1084 ^status complete)
- <=WM: (15265: I2 ^dir R)
- <=WM: (15264: I2 ^reward 1)
- <=WM: (15263: I2 ^see 1)
- =>WM: (15281: I2 ^level-1 R1-root)
- <=WM: (15266: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1088 ^value 1 +)
- (R1 ^reward R1088 +)
- Firing propose*predict-yes
- -->
- (O2169 ^name predict-yes +)
- (S1 ^operator O2169 +)
- Firing propose*predict-no
- -->
- (O2170 ^name predict-no +)
- (S1 ^operator O2170 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2168 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2167 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O2168 ^name predict-no +)
- (S1 ^operator O2168 +)
- Retracting propose*predict-yes
- -->
- (O2167 ^name predict-yes +)
- (S1 ^operator O2167 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1087 ^value 1 +)
- (R1 ^reward R1087 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*39
- -->
- (S1 ^operator O2168 = -0.1377248055371832)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2168 = 0.2572447880449083)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*40
- -->
- (S1 ^operator O2167 = 0.263170254771466)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2167 = 0.7368279790176432)
- =>WM: (15288: S1 ^operator O2170 +)
- =>WM: (15287: S1 ^operator O2169 +)
- =>WM: (15286: I3 ^dir U)
- =>WM: (15285: O2170 ^name predict-no)
- =>WM: (15284: O2169 ^name predict-yes)
- =>WM: (15283: R1088 ^value 1)
- =>WM: (15282: R1 ^reward R1088)
- <=WM: (15273: S1 ^operator O2167 +)
- <=WM: (15275: S1 ^operator O2167)
- <=WM: (15274: S1 ^operator O2168 +)
- <=WM: (15272: I3 ^dir R)
- <=WM: (15268: R1 ^reward R1087)
- <=WM: (15271: O2168 ^name predict-no)
- <=WM: (15270: O2167 ^name predict-yes)
- <=WM: (15269: R1087 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2169 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2170 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2168 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2167 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*3 0.748236 -0.0114081 0.736828 -> 0.748236 -0.0114079 0.736828(R,m,v=1,0.903409,0.0877597)
- RL update rl*prefer*rvt*predict-yes*H0*3*v1*H1*40 0.251763 0.0114068 0.26317 -> 0.251764 0.011407 0.263171(R,m,v=1,1,0)
- =>WM: (15289: S1 ^operator O2170)
- 1085: O: O2170 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1085 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N1084 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15290: I3 ^predict-no N1085)
- <=WM: (15277: N1084 ^status complete)
- <=WM: (15276: I3 ^predict-yes N1084)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (15294: I2 ^dir L)
- =>WM: (15293: I2 ^reward 1)
- =>WM: (15292: I2 ^see 0)
- =>WM: (15291: N1085 ^status complete)
- <=WM: (15280: I2 ^dir U)
- <=WM: (15279: I2 ^reward 1)
- <=WM: (15278: I2 ^see 1)
- =>WM: (15295: I2 ^level-1 R1-root)
- <=WM: (15281: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*45
- -->
- (S1 ^operator O2169 = 0.5681079021371711)
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*44
- -->
- (S1 ^operator O2170 = -0.1549421060161498)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1089 ^value 1 +)
- (R1 ^reward R1089 +)
- Firing propose*predict-yes
- -->
- (O2171 ^name predict-yes +)
- (S1 ^operator O2171 +)
- Firing propose*predict-no
- -->
- (O2172 ^name predict-no +)
- (S1 ^operator O2172 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2170 = 0.3289462343239279)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2169 = 0.4318906685729947)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O2170 ^name predict-no +)
- (S1 ^operator O2170 +)
- Retracting propose*predict-yes
- -->
- (O2169 ^name predict-yes +)
- (S1 ^operator O2169 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1088 ^value 1 +)
- (R1 ^reward R1088 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2170 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2169 = 0.)
- =>WM: (15303: S1 ^operator O2172 +)
- =>WM: (15302: S1 ^operator O2171 +)
- =>WM: (15301: I3 ^dir L)
- =>WM: (15300: O2172 ^name predict-no)
- =>WM: (15299: O2171 ^name predict-yes)
- =>WM: (15298: R1089 ^value 1)
- =>WM: (15297: R1 ^reward R1089)
- =>WM: (15296: I3 ^see 0)
- <=WM: (15287: S1 ^operator O2169 +)
- <=WM: (15288: S1 ^operator O2170 +)
- <=WM: (15289: S1 ^operator O2170)
- <=WM: (15286: I3 ^dir U)
- <=WM: (15282: R1 ^reward R1088)
- <=WM: (15267: I3 ^see 1)
- <=WM: (15285: O2170 ^name predict-no)
- <=WM: (15284: O2169 ^name predict-yes)
- <=WM: (15283: R1088 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*45
- -->
- (S1 ^operator O2171 = 0.5681079021371711)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2171 = 0.4318906685729947)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*44
- -->
- (S1 ^operator O2172 = -0.1549421060161498)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2172 = 0.3289462343239279)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2170 = 0.3289462343239279)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*44
- -->
- (S1 ^operator O2170 = -0.1549421060161498)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2169 = 0.4318906685729947)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*45
- -->
- (S1 ^operator O2169 = 0.5681079021371711)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (15304: S1 ^operator O2171)
- 1086: O: O2171 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N1086 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1085 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15305: I3 ^predict-yes N1086)
- <=WM: (15291: N1085 ^status complete)
- <=WM: (15290: I3 ^predict-no N1085)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (15309: I2 ^dir U)
- =>WM: (15308: I2 ^reward 1)
- =>WM: (15307: I2 ^see 1)
- =>WM: (15306: N1086 ^status complete)
- <=WM: (15294: I2 ^dir L)
- <=WM: (15293: I2 ^reward 1)
- <=WM: (15292: I2 ^see 0)
- =>WM: (15310: I2 ^level-1 L1-root)
- <=WM: (15295: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1090 ^value 1 +)
- (R1 ^reward R1090 +)
- Firing propose*predict-yes
- -->
- (O2173 ^name predict-yes +)
- (S1 ^operator O2173 +)
- Firing propose*predict-no
- -->
- (O2174 ^name predict-no +)
- (S1 ^operator O2174 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2172 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2171 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2172 ^name predict-no +)
- (S1 ^operator O2172 +)
- Retracting propose*predict-yes
- -->
- (O2171 ^name predict-yes +)
- (S1 ^operator O2171 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1089 ^value 1 +)
- (R1 ^reward R1089 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2172 = 0.3289462343239279)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*44
- -->
- (S1 ^operator O2172 = -0.1549421060161498)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2171 = 0.4318906685729947)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*45
- -->
- (S1 ^operator O2171 = 0.5681079021371711)
- =>WM: (15318: S1 ^operator O2174 +)
- =>WM: (15317: S1 ^operator O2173 +)
- =>WM: (15316: I3 ^dir U)
- =>WM: (15315: O2174 ^name predict-no)
- =>WM: (15314: O2173 ^name predict-yes)
- =>WM: (15313: R1090 ^value 1)
- =>WM: (15312: R1 ^reward R1090)
- =>WM: (15311: I3 ^see 1)
- <=WM: (15302: S1 ^operator O2171 +)
- <=WM: (15304: S1 ^operator O2171)
- <=WM: (15303: S1 ^operator O2172 +)
- <=WM: (15301: I3 ^dir L)
- <=WM: (15297: R1 ^reward R1089)
- <=WM: (15296: I3 ^see 0)
- <=WM: (15300: O2172 ^name predict-no)
- <=WM: (15299: O2171 ^name predict-yes)
- <=WM: (15298: R1089 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2173 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2174 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2172 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2171 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*5 0.683777 -0.251886 0.431891 -> 0.683777 -0.251886 0.431891(R,m,v=1,0.928177,0.067035)
- RL update rl*prefer*rvt*predict-yes*H0*5*v1*H1*45 0.316222 0.251886 0.568108 -> 0.316222 0.251886 0.568108(R,m,v=1,1,0)
- =>WM: (15319: S1 ^operator O2174)
- 1087: O: O2174 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1087 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N1086 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15320: I3 ^predict-no N1087)
- <=WM: (15306: N1086 ^status complete)
- <=WM: (15305: I3 ^predict-yes N1086)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (15324: I2 ^dir L)
- =>WM: (15323: I2 ^reward 1)
- =>WM: (15322: I2 ^see 0)
- =>WM: (15321: N1087 ^status complete)
- <=WM: (15309: I2 ^dir U)
- <=WM: (15308: I2 ^reward 1)
- <=WM: (15307: I2 ^see 1)
- =>WM: (15325: I2 ^level-1 L1-root)
- <=WM: (15310: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2174 = 0.6710532194894845)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2173 = -0.06092862110810815)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1091 ^value 1 +)
- (R1 ^reward R1091 +)
- Firing propose*predict-yes
- -->
- (O2175 ^name predict-yes +)
- (S1 ^operator O2175 +)
- Firing propose*predict-no
- -->
- (O2176 ^name predict-no +)
- (S1 ^operator O2176 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2174 = 0.3289462343239279)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2173 = 0.4318908829664698)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O2174 ^name predict-no +)
- (S1 ^operator O2174 +)
- Retracting propose*predict-yes
- -->
- (O2173 ^name predict-yes +)
- (S1 ^operator O2173 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1090 ^value 1 +)
- (R1 ^reward R1090 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2174 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2173 = 0.)
- =>WM: (15333: S1 ^operator O2176 +)
- =>WM: (15332: S1 ^operator O2175 +)
- =>WM: (15331: I3 ^dir L)
- =>WM: (15330: O2176 ^name predict-no)
- =>WM: (15329: O2175 ^name predict-yes)
- =>WM: (15328: R1091 ^value 1)
- =>WM: (15327: R1 ^reward R1091)
- =>WM: (15326: I3 ^see 0)
- <=WM: (15317: S1 ^operator O2173 +)
- <=WM: (15318: S1 ^operator O2174 +)
- <=WM: (15319: S1 ^operator O2174)
- <=WM: (15316: I3 ^dir U)
- <=WM: (15312: R1 ^reward R1090)
- <=WM: (15311: I3 ^see 1)
- <=WM: (15315: O2174 ^name predict-no)
- <=WM: (15314: O2173 ^name predict-yes)
- <=WM: (15313: R1090 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2175 = -0.06092862110810815)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2175 = 0.4318908829664698)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2176 = 0.6710532194894845)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2176 = 0.3289462343239279)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2174 = 0.3289462343239279)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2174 = 0.6710532194894845)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2173 = 0.4318908829664698)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2173 = -0.06092862110810815)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (15334: S1 ^operator O2176)
- 1088: O: O2176 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1088 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1087 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15335: I3 ^predict-no N1088)
- <=WM: (15321: N1087 ^status complete)
- <=WM: (15320: I3 ^predict-no N1087)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (15339: I2 ^dir L)
- =>WM: (15338: I2 ^reward 1)
- =>WM: (15337: I2 ^see 0)
- =>WM: (15336: N1088 ^status complete)
- <=WM: (15324: I2 ^dir L)
- <=WM: (15323: I2 ^reward 1)
- <=WM: (15322: I2 ^see 0)
- =>WM: (15340: I2 ^level-1 L0-root)
- <=WM: (15325: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*33
- -->
- (S1 ^operator O2176 = 0.6710543009425525)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*32
- -->
- (S1 ^operator O2175 = 0.02602968095631553)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1092 ^value 1 +)
- (R1 ^reward R1092 +)
- Firing propose*predict-yes
- -->
- (O2177 ^name predict-yes +)
- (S1 ^operator O2177 +)
- Firing propose*predict-no
- -->
- (O2178 ^name predict-no +)
- (S1 ^operator O2178 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2176 = 0.3289462343239279)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2175 = 0.4318908829664698)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2176 ^name predict-no +)
- (S1 ^operator O2176 +)
- Retracting propose*predict-yes
- -->
- (O2175 ^name predict-yes +)
- (S1 ^operator O2175 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1091 ^value 1 +)
- (R1 ^reward R1091 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2176 = 0.3289462343239279)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2176 = 0.6710532194894845)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2175 = 0.4318908829664698)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2175 = -0.06092862110810815)
- =>WM: (15346: S1 ^operator O2178 +)
- =>WM: (15345: S1 ^operator O2177 +)
- =>WM: (15344: O2178 ^name predict-no)
- =>WM: (15343: O2177 ^name predict-yes)
- =>WM: (15342: R1092 ^value 1)
- =>WM: (15341: R1 ^reward R1092)
- <=WM: (15332: S1 ^operator O2175 +)
- <=WM: (15333: S1 ^operator O2176 +)
- <=WM: (15334: S1 ^operator O2176)
- <=WM: (15327: R1 ^reward R1091)
- <=WM: (15330: O2176 ^name predict-no)
- <=WM: (15329: O2175 ^name predict-yes)
- <=WM: (15328: R1091 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2177 = 0.4318908829664698)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*32
- -->
- (S1 ^operator O2177 = 0.02602968095631553)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2178 = 0.3289462343239279)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*33
- -->
- (S1 ^operator O2178 = 0.6710543009425525)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2176 = 0.3289462343239279)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*33
- -->
- (S1 ^operator O2176 = 0.6710543009425525)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2175 = 0.4318908829664698)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*32
- -->
- (S1 ^operator O2175 = 0.02602968095631553)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*6 0.565404 -0.236458 0.328946 -> 0.565404 -0.236458 0.328946(R,m,v=1,0.912281,0.0804954)
- RL update rl*prefer*rvt*predict-no*H0*6*v1*H1*43 0.434595 0.236458 0.671053 -> 0.434595 0.236458 0.671053(R,m,v=1,1,0)
- =>WM: (15347: S1 ^operator O2178)
- 1089: O: O2178 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1089 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1088 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15348: I3 ^predict-no N1089)
- <=WM: (15336: N1088 ^status complete)
- <=WM: (15335: I3 ^predict-no N1088)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (15352: I2 ^dir R)
- =>WM: (15351: I2 ^reward 1)
- =>WM: (15350: I2 ^see 0)
- =>WM: (15349: N1089 ^status complete)
- <=WM: (15339: I2 ^dir L)
- <=WM: (15338: I2 ^reward 1)
- <=WM: (15337: I2 ^see 0)
- =>WM: (15353: I2 ^level-1 L0-root)
- <=WM: (15340: I2 ^level-1 L0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2178 = -0.07401383653737587)
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2177 = 0.2631728503035469)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1093 ^value 1 +)
- (R1 ^reward R1093 +)
- Firing propose*predict-yes
- -->
- (O2179 ^name predict-yes +)
- (S1 ^operator O2179 +)
- Firing propose*predict-no
- -->
- (O2180 ^name predict-no +)
- (S1 ^operator O2180 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2178 = 0.2572447880449083)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2177 = 0.7368282439492768)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2178 ^name predict-no +)
- (S1 ^operator O2178 +)
- Retracting propose*predict-yes
- -->
- (O2177 ^name predict-yes +)
- (S1 ^operator O2177 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1092 ^value 1 +)
- (R1 ^reward R1092 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*33
- -->
- (S1 ^operator O2178 = 0.6710543009425525)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2178 = 0.3289463162519161)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*32
- -->
- (S1 ^operator O2177 = 0.02602968095631553)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2177 = 0.4318908829664698)
- =>WM: (15360: S1 ^operator O2180 +)
- =>WM: (15359: S1 ^operator O2179 +)
- =>WM: (15358: I3 ^dir R)
- =>WM: (15357: O2180 ^name predict-no)
- =>WM: (15356: O2179 ^name predict-yes)
- =>WM: (15355: R1093 ^value 1)
- =>WM: (15354: R1 ^reward R1093)
- <=WM: (15345: S1 ^operator O2177 +)
- <=WM: (15346: S1 ^operator O2178 +)
- <=WM: (15347: S1 ^operator O2178)
- <=WM: (15331: I3 ^dir L)
- <=WM: (15341: R1 ^reward R1092)
- <=WM: (15344: O2178 ^name predict-no)
- <=WM: (15343: O2177 ^name predict-yes)
- <=WM: (15342: R1092 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2179 = 0.2631728503035469)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2179 = 0.7368282439492768)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2180 = -0.07401383653737587)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2180 = 0.2572447880449083)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2178 = 0.2572447880449083)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2178 = -0.07401383653737587)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2177 = 0.7368282439492768)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2177 = 0.2631728503035469)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*6 0.565404 -0.236458 0.328946 -> 0.565404 -0.236458 0.328946(R,m,v=1,0.912791,0.0800694)
- RL update rl*prefer*rvt*predict-no*H0*6*v1*H1*33 0.434597 0.236457 0.671054 -> 0.434597 0.236457 0.671054(R,m,v=1,1,0)
- =>WM: (15361: S1 ^operator O2179)
- 1090: O: O2179 (predict-yes)
- --- END Decision Phase ---
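(Editor's note: a minimal sketch for working with this trace. Each Decision Phase above prints "RL update" lines of the form `RL update <rule> a b c -> a' b' c'(tags)`; reading the three numbers on each side as a triple whose last element is the rule's resulting value is an assumption inferred from this log, not documented Soar-RL output. `parse_rl_update` is a hypothetical helper name.)

```python
import re

# Matches one "RL update" line from the trace above, e.g.:
#   RL update rl*prefer*rvt*predict-no*H0*6 0.565404 -0.236458 0.328946
#     -> 0.565404 -0.236458 0.328946(R,m,v=1,0.912791,0.0800694)
RL_LINE = re.compile(
    r"RL update (?P<rule>\S+)\s+"
    r"(?P<before>[-\d.eE]+ [-\d.eE]+ [-\d.eE]+) -> "
    r"(?P<after>[-\d.eE]+ [-\d.eE]+ [-\d.eE]+)"
    r"\((?P<tags>[^)]*)\)"
)

def parse_rl_update(line):
    """Extract rule name, before/after triples, and the change in the
    rule's value (assumed to be the third number of each triple)."""
    m = RL_LINE.search(line)
    if not m:
        return None
    before = [float(x) for x in m.group("before").split()]
    after = [float(x) for x in m.group("after").split()]
    return {
        "rule": m.group("rule"),
        "before": before,
        "after": after,
        "value_delta": after[2] - before[2],
        "tags": m.group("tags"),
    }
```

In this excerpt the deltas are near zero, consistent with the run title's "noalphadecay" but an essentially converged value for these rules.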
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N1090 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1089 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15362: I3 ^predict-yes N1090)
- <=WM: (15349: N1089 ^status complete)
- <=WM: (15348: I3 ^predict-no N1089)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (15366: I2 ^dir R)
- =>WM: (15365: I2 ^reward 1)
- =>WM: (15364: I2 ^see 1)
- =>WM: (15363: N1090 ^status complete)
- <=WM: (15352: I2 ^dir R)
- <=WM: (15351: I2 ^reward 1)
- <=WM: (15350: I2 ^see 0)
- =>WM: (15367: I2 ^level-1 R1-root)
- <=WM: (15353: I2 ^level-1 L0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2179 = -0.3011268063455669)
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2180 = 0.7427540615878073)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1094 ^value 1 +)
- (R1 ^reward R1094 +)
- Firing propose*predict-yes
- -->
- (O2181 ^name predict-yes +)
- (S1 ^operator O2181 +)
- Firing propose*predict-no
- -->
- (O2182 ^name predict-no +)
- (S1 ^operator O2182 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2180 = 0.2572447880449083)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2179 = 0.7368282439492768)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2180 ^name predict-no +)
- (S1 ^operator O2180 +)
- Retracting propose*predict-yes
- -->
- (O2179 ^name predict-yes +)
- (S1 ^operator O2179 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1093 ^value 1 +)
- (R1 ^reward R1093 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2180 = 0.2572447880449083)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2180 = -0.07401383653737587)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2179 = 0.7368282439492768)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2179 = 0.2631728503035469)
- =>WM: (15374: S1 ^operator O2182 +)
- =>WM: (15373: S1 ^operator O2181 +)
- =>WM: (15372: O2182 ^name predict-no)
- =>WM: (15371: O2181 ^name predict-yes)
- =>WM: (15370: R1094 ^value 1)
- =>WM: (15369: R1 ^reward R1094)
- =>WM: (15368: I3 ^see 1)
- <=WM: (15359: S1 ^operator O2179 +)
- <=WM: (15361: S1 ^operator O2179)
- <=WM: (15360: S1 ^operator O2180 +)
- <=WM: (15354: R1 ^reward R1093)
- <=WM: (15326: I3 ^see 0)
- <=WM: (15357: O2180 ^name predict-no)
- <=WM: (15356: O2179 ^name predict-yes)
- <=WM: (15355: R1093 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2181 = 0.7368282439492768)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2181 = -0.3011268063455669)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2182 = 0.2572447880449083)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2182 = 0.7427540615878073)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2180 = 0.2572447880449083)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2180 = 0.7427540615878073)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2179 = 0.7368282439492768)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2179 = -0.3011268063455669)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*3 0.748236 -0.0114079 0.736828 -> 0.748236 -0.011408 0.736828(R,m,v=1,0.903955,0.0873138)
- RL update rl*prefer*rvt*predict-yes*H0*3*v1*H1*35 0.251764 0.0114087 0.263173 -> 0.251764 0.0114086 0.263173(R,m,v=1,1,0)
- =>WM: (15375: S1 ^operator O2182)
- 1091: O: O2182 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1091 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N1090 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15376: I3 ^predict-no N1091)
- <=WM: (15363: N1090 ^status complete)
- <=WM: (15362: I3 ^predict-yes N1090)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
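(Editor's note: each Output Phase above ends with an environment line like `ENV: (next state, see, prediction correct?) = (State-B, 0, True)`. A small sketch, assuming only that line format, for tallying prediction accuracy over a slice of this log; `prediction_accuracy` is a hypothetical helper, not part of the original run.)

```python
import re

# Matches the per-step environment feedback line printed after each
# Output Phase in the trace above.
ENV_LINE = re.compile(
    r"ENV: \(next state, see, prediction correct\?\) = "
    r"\((?P<state>[\w-]+), (?P<see>\d), (?P<correct>True|False)\)"
)

def prediction_accuracy(log_lines):
    """Fraction of environment steps where the agent's prediction was correct."""
    correct = total = 0
    for line in log_lines:
        m = ENV_LINE.search(line)
        if m:
            total += 1
            correct += (m.group("correct") == "True")
    return correct / total if total else 0.0
```

Applied to this section of the run, every step reports `True` (predict error 0), i.e. accuracy 1.0 over the excerpt.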
- --- Input Phase ---
- =>WM: (15380: I2 ^dir L)
- =>WM: (15379: I2 ^reward 1)
- =>WM: (15378: I2 ^see 0)
- =>WM: (15377: N1091 ^status complete)
- <=WM: (15366: I2 ^dir R)
- <=WM: (15365: I2 ^reward 1)
- <=WM: (15364: I2 ^see 1)
- =>WM: (15381: I2 ^level-1 R0-root)
- <=WM: (15367: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2182 = 0.04178081990804111)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2181 = 0.5681100310706091)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1095 ^value 1 +)
- (R1 ^reward R1095 +)
- Firing propose*predict-yes
- -->
- (O2183 ^name predict-yes +)
- (S1 ^operator O2183 +)
- Firing propose*predict-no
- -->
- (O2184 ^name predict-no +)
- (S1 ^operator O2184 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2182 = 0.3289462236727457)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2181 = 0.4318908829664698)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O2182 ^name predict-no +)
- (S1 ^operator O2182 +)
- Retracting propose*predict-yes
- -->
- (O2181 ^name predict-yes +)
- (S1 ^operator O2181 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1094 ^value 1 +)
- (R1 ^reward R1094 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2182 = 0.7427540615878073)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2182 = 0.2572447880449083)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2181 = -0.3011268063455669)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2181 = 0.7368280798113533)
- =>WM: (15389: S1 ^operator O2184 +)
- =>WM: (15388: S1 ^operator O2183 +)
- =>WM: (15387: I3 ^dir L)
- =>WM: (15386: O2184 ^name predict-no)
- =>WM: (15385: O2183 ^name predict-yes)
- =>WM: (15384: R1095 ^value 1)
- =>WM: (15383: R1 ^reward R1095)
- =>WM: (15382: I3 ^see 0)
- <=WM: (15373: S1 ^operator O2181 +)
- <=WM: (15374: S1 ^operator O2182 +)
- <=WM: (15375: S1 ^operator O2182)
- <=WM: (15358: I3 ^dir R)
- <=WM: (15369: R1 ^reward R1094)
- <=WM: (15368: I3 ^see 1)
- <=WM: (15372: O2182 ^name predict-no)
- <=WM: (15371: O2181 ^name predict-yes)
- <=WM: (15370: R1094 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2183 = 0.4318908829664698)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2183 = 0.5681100310706091)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2184 = 0.3289462236727457)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2184 = 0.04178081990804111)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2182 = 0.3289462236727457)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2182 = 0.04178081990804111)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2181 = 0.4318908829664698)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2181 = 0.5681100310706091)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*4 0.586135 -0.32889 0.257245 -> 0.586135 -0.32889 0.257245(R,m,v=1,0.87234,0.111958)
- RL update rl*prefer*rvt*predict-no*H0*4*v1*H1*36 0.413864 0.32889 0.742754 -> 0.413864 0.32889 0.742754(R,m,v=1,1,0)
- =>WM: (15390: S1 ^operator O2183)
- 1092: O: O2183 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N1092 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1091 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15391: I3 ^predict-yes N1092)
- <=WM: (15377: N1091 ^status complete)
- <=WM: (15376: I3 ^predict-no N1091)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (15395: I2 ^dir R)
- =>WM: (15394: I2 ^reward 1)
- =>WM: (15393: I2 ^see 1)
- =>WM: (15392: N1092 ^status complete)
- <=WM: (15380: I2 ^dir L)
- <=WM: (15379: I2 ^reward 1)
- <=WM: (15378: I2 ^see 0)
- =>WM: (15396: I2 ^level-1 L1-root)
- <=WM: (15381: I2 ^level-1 R0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*39
- -->
- (S1 ^operator O2184 = -0.1377248055371832)
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*40
- -->
- (S1 ^operator O2183 = 0.2631705197030996)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1096 ^value 1 +)
- (R1 ^reward R1096 +)
- Firing propose*predict-yes
- -->
- (O2185 ^name predict-yes +)
- (S1 ^operator O2185 +)
- Firing propose*predict-no
- -->
- (O2186 ^name predict-no +)
- (S1 ^operator O2186 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2184 = 0.2572449606000009)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2183 = 0.7368280798113533)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2184 ^name predict-no +)
- (S1 ^operator O2184 +)
- Retracting propose*predict-yes
- -->
- (O2183 ^name predict-yes +)
- (S1 ^operator O2183 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1095 ^value 1 +)
- (R1 ^reward R1095 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2184 = 0.04178081990804111)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2184 = 0.3289462236727457)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2183 = 0.5681100310706091)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2183 = 0.4318908829664698)
- =>WM: (15404: S1 ^operator O2186 +)
- =>WM: (15403: S1 ^operator O2185 +)
- =>WM: (15402: I3 ^dir R)
- =>WM: (15401: O2186 ^name predict-no)
- =>WM: (15400: O2185 ^name predict-yes)
- =>WM: (15399: R1096 ^value 1)
- =>WM: (15398: R1 ^reward R1096)
- =>WM: (15397: I3 ^see 1)
- <=WM: (15388: S1 ^operator O2183 +)
- <=WM: (15390: S1 ^operator O2183)
- <=WM: (15389: S1 ^operator O2184 +)
- <=WM: (15387: I3 ^dir L)
- <=WM: (15383: R1 ^reward R1095)
- <=WM: (15382: I3 ^see 0)
- <=WM: (15386: O2184 ^name predict-no)
- <=WM: (15385: O2183 ^name predict-yes)
- <=WM: (15384: R1095 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2185 = 0.7368280798113533)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*40
- -->
- (S1 ^operator O2185 = 0.2631705197030996)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2186 = 0.2572449606000009)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*39
- -->
- (S1 ^operator O2186 = -0.1377248055371832)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2184 = 0.2572449606000009)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*39
- -->
- (S1 ^operator O2184 = -0.1377248055371832)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2183 = 0.7368280798113533)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*40
- -->
- (S1 ^operator O2183 = 0.2631705197030996)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*5 0.683777 -0.251886 0.431891 -> 0.683777 -0.251886 0.431891(R,m,v=1,0.928571,0.066693)
- RL update rl*prefer*rvt*predict-yes*H0*5*v1*H1*30 0.316224 0.251886 0.56811 -> 0.316224 0.251886 0.56811(R,m,v=1,1,0)
- =>WM: (15405: S1 ^operator O2185)
- 1093: O: O2185 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N1093 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N1092 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15406: I3 ^predict-yes N1093)
- <=WM: (15392: N1092 ^status complete)
- <=WM: (15391: I3 ^predict-yes N1092)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (15410: I2 ^dir R)
- =>WM: (15409: I2 ^reward 1)
- =>WM: (15408: I2 ^see 1)
- =>WM: (15407: N1093 ^status complete)
- <=WM: (15395: I2 ^dir R)
- <=WM: (15394: I2 ^reward 1)
- <=WM: (15393: I2 ^see 1)
- =>WM: (15411: I2 ^level-1 R1-root)
- <=WM: (15396: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2185 = -0.3011268063455669)
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2186 = 0.7427542341429)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1097 ^value 1 +)
- (R1 ^reward R1097 +)
- Firing propose*predict-yes
- -->
- (O2187 ^name predict-yes +)
- (S1 ^operator O2187 +)
- Firing propose*predict-no
- -->
- (O2188 ^name predict-no +)
- (S1 ^operator O2188 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2186 = 0.2572449606000009)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2185 = 0.7368280798113533)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O2186 ^name predict-no +)
- (S1 ^operator O2186 +)
- Retracting propose*predict-yes
- -->
- (O2185 ^name predict-yes +)
- (S1 ^operator O2185 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1096 ^value 1 +)
- (R1 ^reward R1096 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*39
- -->
- (S1 ^operator O2186 = -0.1377248055371832)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2186 = 0.2572449606000009)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*40
- -->
- (S1 ^operator O2185 = 0.2631705197030996)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2185 = 0.7368280798113533)
- =>WM: (15417: S1 ^operator O2188 +)
- =>WM: (15416: S1 ^operator O2187 +)
- =>WM: (15415: O2188 ^name predict-no)
- =>WM: (15414: O2187 ^name predict-yes)
- =>WM: (15413: R1097 ^value 1)
- =>WM: (15412: R1 ^reward R1097)
- <=WM: (15403: S1 ^operator O2185 +)
- <=WM: (15405: S1 ^operator O2185)
- <=WM: (15404: S1 ^operator O2186 +)
- <=WM: (15398: R1 ^reward R1096)
- <=WM: (15401: O2186 ^name predict-no)
- <=WM: (15400: O2185 ^name predict-yes)
- <=WM: (15399: R1096 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2187 = 0.7368280798113533)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2187 = -0.3011268063455669)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2188 = 0.2572449606000009)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2188 = 0.7427542341429)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2186 = 0.2572449606000009)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2186 = 0.7427542341429)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2185 = 0.7368280798113533)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2185 = -0.3011268063455669)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*3 0.748236 -0.011408 0.736828 -> 0.748236 -0.0114078 0.736828(R,m,v=1,0.904494,0.0868723)
- RL update rl*prefer*rvt*predict-yes*H0*3*v1*H1*40 0.251764 0.011407 0.263171 -> 0.251764 0.0114071 0.263171(R,m,v=1,1,0)
- =>WM: (15418: S1 ^operator O2188)
- 1094: O: O2188 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1094 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N1093 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15419: I3 ^predict-no N1094)
- <=WM: (15407: N1093 ^status complete)
- <=WM: (15406: I3 ^predict-yes N1093)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (15423: I2 ^dir L)
- =>WM: (15422: I2 ^reward 1)
- =>WM: (15421: I2 ^see 0)
- =>WM: (15420: N1094 ^status complete)
- <=WM: (15410: I2 ^dir R)
- <=WM: (15409: I2 ^reward 1)
- <=WM: (15408: I2 ^see 1)
- =>WM: (15424: I2 ^level-1 R0-root)
- <=WM: (15411: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2188 = 0.04178081990804111)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2187 = 0.5681098939650473)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1098 ^value 1 +)
- (R1 ^reward R1098 +)
- Firing propose*predict-yes
- -->
- (O2189 ^name predict-yes +)
- (S1 ^operator O2189 +)
- Firing propose*predict-no
- -->
- (O2190 ^name predict-no +)
- (S1 ^operator O2190 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2188 = 0.3289462236727457)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2187 = 0.431890745860908)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O2188 ^name predict-no +)
- (S1 ^operator O2188 +)
- Retracting propose*predict-yes
- -->
- (O2187 ^name predict-yes +)
- (S1 ^operator O2187 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1097 ^value 1 +)
- (R1 ^reward R1097 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2188 = 0.7427542341429)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2188 = 0.2572449606000009)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2187 = -0.3011268063455669)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2187 = 0.7368282898841854)
- =>WM: (15432: S1 ^operator O2190 +)
- =>WM: (15431: S1 ^operator O2189 +)
- =>WM: (15430: I3 ^dir L)
- =>WM: (15429: O2190 ^name predict-no)
- =>WM: (15428: O2189 ^name predict-yes)
- =>WM: (15427: R1098 ^value 1)
- =>WM: (15426: R1 ^reward R1098)
- =>WM: (15425: I3 ^see 0)
- <=WM: (15416: S1 ^operator O2187 +)
- <=WM: (15417: S1 ^operator O2188 +)
- <=WM: (15418: S1 ^operator O2188)
- <=WM: (15402: I3 ^dir R)
- <=WM: (15412: R1 ^reward R1097)
- <=WM: (15397: I3 ^see 1)
- <=WM: (15415: O2188 ^name predict-no)
- <=WM: (15414: O2187 ^name predict-yes)
- <=WM: (15413: R1097 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2189 = 0.431890745860908)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2189 = 0.5681098939650473)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2190 = 0.3289462236727457)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2190 = 0.04178081990804111)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2188 = 0.3289462236727457)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2188 = 0.04178081990804111)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2187 = 0.431890745860908)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2187 = 0.5681098939650473)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*4 0.586135 -0.32889 0.257245 -> 0.586135 -0.32889 0.257245(R,m,v=1,0.873016,0.111449)
- RL update rl*prefer*rvt*predict-no*H0*4*v1*H1*36 0.413864 0.32889 0.742754 -> 0.413864 0.32889 0.742754(R,m,v=1,1,0)
- =>WM: (15433: S1 ^operator O2189)
- 1095: O: O2189 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N1095 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1094 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15434: I3 ^predict-yes N1095)
- <=WM: (15420: N1094 ^status complete)
- <=WM: (15419: I3 ^predict-no N1094)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (15438: I2 ^dir L)
- =>WM: (15437: I2 ^reward 1)
- =>WM: (15436: I2 ^see 1)
- =>WM: (15435: N1095 ^status complete)
- <=WM: (15423: I2 ^dir L)
- <=WM: (15422: I2 ^reward 1)
- <=WM: (15421: I2 ^see 0)
- =>WM: (15439: I2 ^level-1 L1-root)
- <=WM: (15424: I2 ^level-1 R0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2190 = 0.6710533014174725)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2189 = -0.06092862110810815)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1099 ^value 1 +)
- (R1 ^reward R1099 +)
- Firing propose*predict-yes
- -->
- (O2191 ^name predict-yes +)
- (S1 ^operator O2191 +)
- Firing propose*predict-no
- -->
- (O2192 ^name predict-no +)
- (S1 ^operator O2192 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2190 = 0.3289462236727457)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2189 = 0.431890745860908)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2190 ^name predict-no +)
- (S1 ^operator O2190 +)
- Retracting propose*predict-yes
- -->
- (O2189 ^name predict-yes +)
- (S1 ^operator O2189 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1098 ^value 1 +)
- (R1 ^reward R1098 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2190 = 0.04178081990804111)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2190 = 0.3289462236727457)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2189 = 0.5681098939650473)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2189 = 0.431890745860908)
- =>WM: (15446: S1 ^operator O2192 +)
- =>WM: (15445: S1 ^operator O2191 +)
- =>WM: (15444: O2192 ^name predict-no)
- =>WM: (15443: O2191 ^name predict-yes)
- =>WM: (15442: R1099 ^value 1)
- =>WM: (15441: R1 ^reward R1099)
- =>WM: (15440: I3 ^see 1)
- <=WM: (15431: S1 ^operator O2189 +)
- <=WM: (15433: S1 ^operator O2189)
- <=WM: (15432: S1 ^operator O2190 +)
- <=WM: (15426: R1 ^reward R1098)
- <=WM: (15425: I3 ^see 0)
- <=WM: (15429: O2190 ^name predict-no)
- <=WM: (15428: O2189 ^name predict-yes)
- <=WM: (15427: R1098 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2191 = 0.431890745860908)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2191 = -0.06092862110810815)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2192 = 0.3289462236727457)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2192 = 0.6710533014174725)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2190 = 0.3289462236727457)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2190 = 0.6710533014174725)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2189 = 0.431890745860908)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2189 = -0.06092862110810815)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*5 0.683777 -0.251886 0.431891 -> 0.683777 -0.251886 0.431891(R,m,v=1,0.928962,0.0663544)
- RL update rl*prefer*rvt*predict-yes*H0*5*v1*H1*30 0.316224 0.251886 0.56811 -> 0.316224 0.251886 0.56811(R,m,v=1,1,0)
- =>WM: (15447: S1 ^operator O2192)
- 1096: O: O2192 (predict-no)
- --- END Decision Phase ---
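The "RL update" lines above print a rule's parameters before and after the update, e.g. `0.683777 -0.251886 0.431891 -> ...`. As a reading aid only: in this trace the third number is always the sum of the first two, and the trailing parenthesized tuple appears to be bookkeeping. The sketch below parses such a line under that assumption (the field names `ecr`/`efr`-style semantics are not confirmed by this log, so the three numbers are kept as an anonymous triple):

```python
import re

# Hedged sketch: parse one Soar-RL "RL update" trace line as it appears in
# this log. Assumption (from inspection of this trace only): the three
# numbers on each side of "->" satisfy n1 + n2 == n3; the "(R,m,v=...)"
# suffix is treated as an opaque metadata string.
LINE = ("RL update rl*prefer*rvt*predict-yes*H0*5 "
        "0.683777 -0.251886 0.431891 -> 0.683777 -0.251886 0.431891"
        "(R,m,v=1,0.928962,0.0663544)")

PATTERN = re.compile(
    r"RL update (?P<rule>\S+) "
    r"(?P<old>[-\d.]+ [-\d.]+ [-\d.]+) -> "
    r"(?P<new>[-\d.]+ [-\d.]+ [-\d.]+)"
    r"\((?P<meta>[^)]*)\)")

def parse_rl_update(line):
    """Return the rule name, old/new number triples, and metadata string."""
    m = PATTERN.search(line)
    if m is None:
        return None
    old = [float(x) for x in m.group("old").split()]
    new = [float(x) for x in m.group("new").split()]
    return {"rule": m.group("rule"), "old": old, "new": new,
            "meta": m.group("meta")}

upd = parse_rl_update(LINE)
# Consistency check observed throughout this trace: n1 + n2 == n3.
assert abs(upd["old"][0] + upd["old"][1] - upd["old"][2]) < 1e-5
```

Such a parser makes it easy to grep a multi-thousand-line run like this one for rules whose values actually changed across an update.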
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1096 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N1095 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15448: I3 ^predict-no N1096)
- <=WM: (15435: N1095 ^status complete)
- <=WM: (15434: I3 ^predict-yes N1095)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
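Each Output Phase reports an `ENV:` transition of the form `(state, direction) -> (next state, see)`. The table below is reconstructed solely from the transitions visible in this chunk of the trace (State-A with L or U stays in State-A seeing 0; State-A with R reaches State-B seeing 1; State-B with L returns to State-A seeing 1); any transition not printed here is deliberately omitted rather than guessed:

```python
# Hedged sketch: partial Flip-environment dynamics, reconstructed ONLY from
# the ENV: lines printed in this section of the log. Missing (state, dir)
# pairs are intentionally absent.
TRANSITIONS = {
    ("State-A", "L"): ("State-A", 0),
    ("State-A", "U"): ("State-A", 0),
    ("State-A", "R"): ("State-B", 1),
    ("State-B", "L"): ("State-A", 1),
}

def step(state, direction):
    """Return (next_state, see) for transitions observed in this trace."""
    return TRANSITIONS[(state, direction)]
```

Against this table, every `predict error 0` line in this section checks out: the agent's prediction matches the `see` value of the resulting transition.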
- --- Input Phase ---
- =>WM: (15452: I2 ^dir L)
- =>WM: (15451: I2 ^reward 1)
- =>WM: (15450: I2 ^see 0)
- =>WM: (15449: N1096 ^status complete)
- <=WM: (15438: I2 ^dir L)
- <=WM: (15437: I2 ^reward 1)
- <=WM: (15436: I2 ^see 1)
- =>WM: (15453: I2 ^level-1 L0-root)
- <=WM: (15439: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*33
- -->
- (S1 ^operator O2192 = 0.6710542083633821)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*32
- -->
- (S1 ^operator O2191 = 0.02602968095631553)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1100 ^value 1 +)
- (R1 ^reward R1100 +)
- Firing propose*predict-yes
- -->
- (O2193 ^name predict-yes +)
- (S1 ^operator O2193 +)
- Firing propose*predict-no
- -->
- (O2194 ^name predict-no +)
- (S1 ^operator O2194 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2192 = 0.3289462236727457)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2191 = 0.4318906498870147)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O2192 ^name predict-no +)
- (S1 ^operator O2192 +)
- Retracting propose*predict-yes
- -->
- (O2191 ^name predict-yes +)
- (S1 ^operator O2191 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1099 ^value 1 +)
- (R1 ^reward R1099 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2192 = 0.6710533014174725)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2192 = 0.3289462236727457)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2191 = -0.06092862110810815)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2191 = 0.4318906498870147)
- =>WM: (15460: S1 ^operator O2194 +)
- =>WM: (15459: S1 ^operator O2193 +)
- =>WM: (15458: O2194 ^name predict-no)
- =>WM: (15457: O2193 ^name predict-yes)
- =>WM: (15456: R1100 ^value 1)
- =>WM: (15455: R1 ^reward R1100)
- =>WM: (15454: I3 ^see 0)
- <=WM: (15445: S1 ^operator O2191 +)
- <=WM: (15446: S1 ^operator O2192 +)
- <=WM: (15447: S1 ^operator O2192)
- <=WM: (15441: R1 ^reward R1099)
- <=WM: (15440: I3 ^see 1)
- <=WM: (15444: O2192 ^name predict-no)
- <=WM: (15443: O2191 ^name predict-yes)
- <=WM: (15442: R1099 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2193 = 0.4318906498870147)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*32
- -->
- (S1 ^operator O2193 = 0.02602968095631553)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2194 = 0.3289462236727457)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*33
- -->
- (S1 ^operator O2194 = 0.6710542083633821)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2192 = 0.3289462236727457)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*33
- -->
- (S1 ^operator O2192 = 0.6710542083633821)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2191 = 0.4318906498870147)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*32
- -->
- (S1 ^operator O2191 = 0.02602968095631553)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*6 0.565404 -0.236458 0.328946 -> 0.565404 -0.236458 0.328946(R,m,v=1,0.913295,0.0796478)
- RL update rl*prefer*rvt*predict-no*H0*6*v1*H1*43 0.434595 0.236458 0.671053 -> 0.434595 0.236458 0.671053(R,m,v=1,1,0)
- =>WM: (15461: S1 ^operator O2194)
- 1097: O: O2194 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1097 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1096 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15462: I3 ^predict-no N1097)
- <=WM: (15449: N1096 ^status complete)
- <=WM: (15448: I3 ^predict-no N1096)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (15466: I2 ^dir R)
- =>WM: (15465: I2 ^reward 1)
- =>WM: (15464: I2 ^see 0)
- =>WM: (15463: N1097 ^status complete)
- <=WM: (15452: I2 ^dir L)
- <=WM: (15451: I2 ^reward 1)
- <=WM: (15450: I2 ^see 0)
- =>WM: (15467: I2 ^level-1 L0-root)
- <=WM: (15453: I2 ^level-1 L0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2194 = -0.07401383653737587)
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2193 = 0.2631726861656233)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1101 ^value 1 +)
- (R1 ^reward R1101 +)
- Firing propose*predict-yes
- -->
- (O2195 ^name predict-yes +)
- (S1 ^operator O2195 +)
- Firing propose*predict-no
- -->
- (O2196 ^name predict-no +)
- (S1 ^operator O2196 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2194 = 0.2572450813885658)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2193 = 0.7368282898841854)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2194 ^name predict-no +)
- (S1 ^operator O2194 +)
- Retracting propose*predict-yes
- -->
- (O2193 ^name predict-yes +)
- (S1 ^operator O2193 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1100 ^value 1 +)
- (R1 ^reward R1100 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*33
- -->
- (S1 ^operator O2194 = 0.6710542083633821)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2194 = 0.328946294909213)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*32
- -->
- (S1 ^operator O2193 = 0.02602968095631553)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2193 = 0.4318906498870147)
- =>WM: (15474: S1 ^operator O2196 +)
- =>WM: (15473: S1 ^operator O2195 +)
- =>WM: (15472: I3 ^dir R)
- =>WM: (15471: O2196 ^name predict-no)
- =>WM: (15470: O2195 ^name predict-yes)
- =>WM: (15469: R1101 ^value 1)
- =>WM: (15468: R1 ^reward R1101)
- <=WM: (15459: S1 ^operator O2193 +)
- <=WM: (15460: S1 ^operator O2194 +)
- <=WM: (15461: S1 ^operator O2194)
- <=WM: (15430: I3 ^dir L)
- <=WM: (15455: R1 ^reward R1100)
- <=WM: (15458: O2194 ^name predict-no)
- <=WM: (15457: O2193 ^name predict-yes)
- <=WM: (15456: R1100 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2195 = 0.2631726861656233)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2195 = 0.7368282898841854)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2196 = -0.07401383653737587)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2196 = 0.2572450813885658)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2194 = 0.2572450813885658)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2194 = -0.07401383653737587)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2193 = 0.7368282898841854)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2193 = 0.2631726861656233)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*6 0.565404 -0.236458 0.328946 -> 0.565404 -0.236458 0.328946(R,m,v=1,0.913793,0.0792306)
- RL update rl*prefer*rvt*predict-no*H0*6*v1*H1*33 0.434597 0.236457 0.671054 -> 0.434597 0.236457 0.671054(R,m,v=1,1,0)
- =>WM: (15475: S1 ^operator O2195)
- 1098: O: O2195 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N1098 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1097 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15476: I3 ^predict-yes N1098)
- <=WM: (15463: N1097 ^status complete)
- <=WM: (15462: I3 ^predict-no N1097)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (15480: I2 ^dir L)
- =>WM: (15479: I2 ^reward 1)
- =>WM: (15478: I2 ^see 1)
- =>WM: (15477: N1098 ^status complete)
- <=WM: (15466: I2 ^dir R)
- <=WM: (15465: I2 ^reward 1)
- <=WM: (15464: I2 ^see 0)
- =>WM: (15481: I2 ^level-1 R1-root)
- <=WM: (15467: I2 ^level-1 L0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*45
- -->
- (S1 ^operator O2195 = 0.5681081165306463)
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*44
- -->
- (S1 ^operator O2196 = -0.1549421060161498)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1102 ^value 1 +)
- (R1 ^reward R1102 +)
- Firing propose*predict-yes
- -->
- (O2197 ^name predict-yes +)
- (S1 ^operator O2197 +)
- Firing propose*predict-no
- -->
- (O2198 ^name predict-no +)
- (S1 ^operator O2198 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2196 = 0.3289462194183237)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2195 = 0.4318906498870147)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2196 ^name predict-no +)
- (S1 ^operator O2196 +)
- Retracting propose*predict-yes
- -->
- (O2195 ^name predict-yes +)
- (S1 ^operator O2195 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1101 ^value 1 +)
- (R1 ^reward R1101 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2196 = 0.2572450813885658)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2196 = -0.07401383653737587)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2195 = 0.7368282898841854)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2195 = 0.2631726861656233)
- =>WM: (15489: S1 ^operator O2198 +)
- =>WM: (15488: S1 ^operator O2197 +)
- =>WM: (15487: I3 ^dir L)
- =>WM: (15486: O2198 ^name predict-no)
- =>WM: (15485: O2197 ^name predict-yes)
- =>WM: (15484: R1102 ^value 1)
- =>WM: (15483: R1 ^reward R1102)
- =>WM: (15482: I3 ^see 1)
- <=WM: (15473: S1 ^operator O2195 +)
- <=WM: (15475: S1 ^operator O2195)
- <=WM: (15474: S1 ^operator O2196 +)
- <=WM: (15472: I3 ^dir R)
- <=WM: (15468: R1 ^reward R1101)
- <=WM: (15454: I3 ^see 0)
- <=WM: (15471: O2196 ^name predict-no)
- <=WM: (15470: O2195 ^name predict-yes)
- <=WM: (15469: R1101 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2197 = 0.4318906498870147)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*45
- -->
- (S1 ^operator O2197 = 0.5681081165306463)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2198 = 0.3289462194183237)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*44
- -->
- (S1 ^operator O2198 = -0.1549421060161498)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2196 = 0.3289462194183237)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*44
- -->
- (S1 ^operator O2196 = -0.1549421060161498)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2195 = 0.4318906498870147)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*45
- -->
- (S1 ^operator O2195 = 0.5681081165306463)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*3 0.748236 -0.0114078 0.736828 -> 0.748236 -0.0114079 0.736828(R,m,v=1,0.905028,0.0864353)
- RL update rl*prefer*rvt*predict-yes*H0*3*v1*H1*35 0.251764 0.0114086 0.263173 -> 0.251764 0.0114085 0.263173(R,m,v=1,1,0)
- =>WM: (15490: S1 ^operator O2197)
- 1099: O: O2197 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N1099 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N1098 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15491: I3 ^predict-yes N1099)
- <=WM: (15477: N1098 ^status complete)
- <=WM: (15476: I3 ^predict-yes N1098)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (15495: I2 ^dir U)
- =>WM: (15494: I2 ^reward 1)
- =>WM: (15493: I2 ^see 1)
- =>WM: (15492: N1099 ^status complete)
- <=WM: (15480: I2 ^dir L)
- <=WM: (15479: I2 ^reward 1)
- <=WM: (15478: I2 ^see 1)
- =>WM: (15496: I2 ^level-1 L1-root)
- <=WM: (15481: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1103 ^value 1 +)
- (R1 ^reward R1103 +)
- Firing propose*predict-yes
- -->
- (O2199 ^name predict-yes +)
- (S1 ^operator O2199 +)
- Firing propose*predict-no
- -->
- (O2200 ^name predict-no +)
- (S1 ^operator O2200 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2198 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2197 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O2198 ^name predict-no +)
- (S1 ^operator O2198 +)
- Retracting propose*predict-yes
- -->
- (O2197 ^name predict-yes +)
- (S1 ^operator O2197 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1102 ^value 1 +)
- (R1 ^reward R1102 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*44
- -->
- (S1 ^operator O2198 = -0.1549421060161498)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2198 = 0.3289462194183237)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*45
- -->
- (S1 ^operator O2197 = 0.5681081165306463)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2197 = 0.4318906498870147)
- =>WM: (15503: S1 ^operator O2200 +)
- =>WM: (15502: S1 ^operator O2199 +)
- =>WM: (15501: I3 ^dir U)
- =>WM: (15500: O2200 ^name predict-no)
- =>WM: (15499: O2199 ^name predict-yes)
- =>WM: (15498: R1103 ^value 1)
- =>WM: (15497: R1 ^reward R1103)
- <=WM: (15488: S1 ^operator O2197 +)
- <=WM: (15490: S1 ^operator O2197)
- <=WM: (15489: S1 ^operator O2198 +)
- <=WM: (15487: I3 ^dir L)
- <=WM: (15483: R1 ^reward R1102)
- <=WM: (15486: O2198 ^name predict-no)
- <=WM: (15485: O2197 ^name predict-yes)
- <=WM: (15484: R1102 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2199 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2200 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2198 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2197 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*5 0.683777 -0.251886 0.431891 -> 0.683777 -0.251886 0.431891(R,m,v=1,0.929348,0.0660192)
- RL update rl*prefer*rvt*predict-yes*H0*5*v1*H1*45 0.316222 0.251886 0.568108 -> 0.316222 0.251886 0.568108(R,m,v=1,1,0)
- =>WM: (15504: S1 ^operator O2200)
- 1100: O: O2200 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1100 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N1099 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15505: I3 ^predict-no N1100)
- <=WM: (15492: N1099 ^status complete)
- <=WM: (15491: I3 ^predict-yes N1099)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (15509: I2 ^dir U)
- =>WM: (15508: I2 ^reward 1)
- =>WM: (15507: I2 ^see 0)
- =>WM: (15506: N1100 ^status complete)
- <=WM: (15495: I2 ^dir U)
- <=WM: (15494: I2 ^reward 1)
- <=WM: (15493: I2 ^see 1)
- =>WM: (15510: I2 ^level-1 L1-root)
- <=WM: (15496: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1104 ^value 1 +)
- (R1 ^reward R1104 +)
- Firing propose*predict-yes
- -->
- (O2201 ^name predict-yes +)
- (S1 ^operator O2201 +)
- Firing propose*predict-no
- -->
- (O2202 ^name predict-no +)
- (S1 ^operator O2202 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2200 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2199 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O2200 ^name predict-no +)
- (S1 ^operator O2200 +)
- Retracting propose*predict-yes
- -->
- (O2199 ^name predict-yes +)
- (S1 ^operator O2199 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1103 ^value 1 +)
- (R1 ^reward R1103 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2200 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2199 = 0.)
- =>WM: (15517: S1 ^operator O2202 +)
- =>WM: (15516: S1 ^operator O2201 +)
- =>WM: (15515: O2202 ^name predict-no)
- =>WM: (15514: O2201 ^name predict-yes)
- =>WM: (15513: R1104 ^value 1)
- =>WM: (15512: R1 ^reward R1104)
- =>WM: (15511: I3 ^see 0)
- <=WM: (15502: S1 ^operator O2199 +)
- <=WM: (15503: S1 ^operator O2200 +)
- <=WM: (15504: S1 ^operator O2200)
- <=WM: (15497: R1 ^reward R1103)
- <=WM: (15482: I3 ^see 1)
- <=WM: (15500: O2200 ^name predict-no)
- <=WM: (15499: O2199 ^name predict-yes)
- <=WM: (15498: R1103 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2201 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2202 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2200 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2199 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (15518: S1 ^operator O2202)
- 1101: O: O2202 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1101 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1100 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15519: I3 ^predict-no N1101)
- <=WM: (15506: N1100 ^status complete)
- <=WM: (15505: I3 ^predict-no N1100)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (15523: I2 ^dir U)
- =>WM: (15522: I2 ^reward 1)
- =>WM: (15521: I2 ^see 0)
- =>WM: (15520: N1101 ^status complete)
- <=WM: (15509: I2 ^dir U)
- <=WM: (15508: I2 ^reward 1)
- <=WM: (15507: I2 ^see 0)
- =>WM: (15524: I2 ^level-1 L1-root)
- <=WM: (15510: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1105 ^value 1 +)
- (R1 ^reward R1105 +)
- Firing propose*predict-yes
- -->
- (O2203 ^name predict-yes +)
- (S1 ^operator O2203 +)
- Firing propose*predict-no
- -->
- (O2204 ^name predict-no +)
- (S1 ^operator O2204 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2202 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2201 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2202 ^name predict-no +)
- (S1 ^operator O2202 +)
- Retracting propose*predict-yes
- -->
- (O2201 ^name predict-yes +)
- (S1 ^operator O2201 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1104 ^value 1 +)
- (R1 ^reward R1104 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2202 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2201 = 0.)
- =>WM: (15530: S1 ^operator O2204 +)
- =>WM: (15529: S1 ^operator O2203 +)
- =>WM: (15528: O2204 ^name predict-no)
- =>WM: (15527: O2203 ^name predict-yes)
- =>WM: (15526: R1105 ^value 1)
- =>WM: (15525: R1 ^reward R1105)
- <=WM: (15516: S1 ^operator O2201 +)
- <=WM: (15517: S1 ^operator O2202 +)
- <=WM: (15518: S1 ^operator O2202)
- <=WM: (15512: R1 ^reward R1104)
- <=WM: (15515: O2202 ^name predict-no)
- <=WM: (15514: O2201 ^name predict-yes)
- <=WM: (15513: R1104 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2203 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2204 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2202 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2201 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (15531: S1 ^operator O2204)
- 1102: O: O2204 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1102 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1101 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15532: I3 ^predict-no N1102)
- <=WM: (15520: N1101 ^status complete)
- <=WM: (15519: I3 ^predict-no N1101)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (15536: I2 ^dir L)
- =>WM: (15535: I2 ^reward 1)
- =>WM: (15534: I2 ^see 0)
- =>WM: (15533: N1102 ^status complete)
- <=WM: (15523: I2 ^dir U)
- <=WM: (15522: I2 ^reward 1)
- <=WM: (15521: I2 ^see 0)
- =>WM: (15537: I2 ^level-1 L1-root)
- <=WM: (15524: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2204 = 0.6710533726539398)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2203 = -0.06092862110810815)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1106 ^value 1 +)
- (R1 ^reward R1106 +)
- Firing propose*predict-yes
- -->
- (O2205 ^name predict-yes +)
- (S1 ^operator O2205 +)
- Firing propose*predict-no
- -->
- (O2206 ^name predict-no +)
- (S1 ^operator O2206 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2204 = 0.3289462194183237)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2203 = 0.4318908349243655)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2204 ^name predict-no +)
- (S1 ^operator O2204 +)
- Retracting propose*predict-yes
- -->
- (O2203 ^name predict-yes +)
- (S1 ^operator O2203 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1105 ^value 1 +)
- (R1 ^reward R1105 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2204 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2203 = 0.)
- =>WM: (15544: S1 ^operator O2206 +)
- =>WM: (15543: S1 ^operator O2205 +)
- =>WM: (15542: I3 ^dir L)
- =>WM: (15541: O2206 ^name predict-no)
- =>WM: (15540: O2205 ^name predict-yes)
- =>WM: (15539: R1106 ^value 1)
- =>WM: (15538: R1 ^reward R1106)
- <=WM: (15529: S1 ^operator O2203 +)
- <=WM: (15530: S1 ^operator O2204 +)
- <=WM: (15531: S1 ^operator O2204)
- <=WM: (15501: I3 ^dir U)
- <=WM: (15525: R1 ^reward R1105)
- <=WM: (15528: O2204 ^name predict-no)
- <=WM: (15527: O2203 ^name predict-yes)
- <=WM: (15526: R1105 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2205 = -0.06092862110810815)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2205 = 0.4318908349243655)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2206 = 0.6710533726539398)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2206 = 0.3289462194183237)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2204 = 0.3289462194183237)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2204 = 0.6710533726539398)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2203 = 0.4318908349243655)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2203 = -0.06092862110810815)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (15545: S1 ^operator O2206)
- 1103: O: O2206 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1103 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1102 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15546: I3 ^predict-no N1103)
- <=WM: (15533: N1102 ^status complete)
- <=WM: (15532: I3 ^predict-no N1102)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (15550: I2 ^dir U)
- =>WM: (15549: I2 ^reward 1)
- =>WM: (15548: I2 ^see 0)
- =>WM: (15547: N1103 ^status complete)
- <=WM: (15536: I2 ^dir L)
- <=WM: (15535: I2 ^reward 1)
- <=WM: (15534: I2 ^see 0)
- =>WM: (15551: I2 ^level-1 L0-root)
- <=WM: (15537: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1107 ^value 1 +)
- (R1 ^reward R1107 +)
- Firing propose*predict-yes
- -->
- (O2207 ^name predict-yes +)
- (S1 ^operator O2207 +)
- Firing propose*predict-no
- -->
- (O2208 ^name predict-no +)
- (S1 ^operator O2208 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2206 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2205 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2206 ^name predict-no +)
- (S1 ^operator O2206 +)
- Retracting propose*predict-yes
- -->
- (O2205 ^name predict-yes +)
- (S1 ^operator O2205 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1106 ^value 1 +)
- (R1 ^reward R1106 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2206 = 0.3289462194183237)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2206 = 0.6710533726539398)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2205 = 0.4318908349243655)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2205 = -0.06092862110810815)
- =>WM: (15558: S1 ^operator O2208 +)
- =>WM: (15557: S1 ^operator O2207 +)
- =>WM: (15556: I3 ^dir U)
- =>WM: (15555: O2208 ^name predict-no)
- =>WM: (15554: O2207 ^name predict-yes)
- =>WM: (15553: R1107 ^value 1)
- =>WM: (15552: R1 ^reward R1107)
- <=WM: (15543: S1 ^operator O2205 +)
- <=WM: (15544: S1 ^operator O2206 +)
- <=WM: (15545: S1 ^operator O2206)
- <=WM: (15542: I3 ^dir L)
- <=WM: (15538: R1 ^reward R1106)
- <=WM: (15541: O2206 ^name predict-no)
- <=WM: (15540: O2205 ^name predict-yes)
- <=WM: (15539: R1106 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2207 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2208 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2206 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2205 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*6 0.565404 -0.236458 0.328946 -> 0.565404 -0.236458 0.328946(R,m,v=1,0.914286,0.0788177)
- RL update rl*prefer*rvt*predict-no*H0*6*v1*H1*43 0.434595 0.236458 0.671053 -> 0.434596 0.236458 0.671053(R,m,v=1,1,0)
- =>WM: (15559: S1 ^operator O2208)
- 1104: O: O2208 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1104 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1103 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15560: I3 ^predict-no N1104)
- <=WM: (15547: N1103 ^status complete)
- <=WM: (15546: I3 ^predict-no N1103)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (15564: I2 ^dir U)
- =>WM: (15563: I2 ^reward 1)
- =>WM: (15562: I2 ^see 0)
- =>WM: (15561: N1104 ^status complete)
- <=WM: (15550: I2 ^dir U)
- <=WM: (15549: I2 ^reward 1)
- <=WM: (15548: I2 ^see 0)
- =>WM: (15565: I2 ^level-1 L0-root)
- <=WM: (15551: I2 ^level-1 L0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1108 ^value 1 +)
- (R1 ^reward R1108 +)
- Firing propose*predict-yes
- -->
- (O2209 ^name predict-yes +)
- (S1 ^operator O2209 +)
- Firing propose*predict-no
- -->
- (O2210 ^name predict-no +)
- (S1 ^operator O2210 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2208 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2207 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2208 ^name predict-no +)
- (S1 ^operator O2208 +)
- Retracting propose*predict-yes
- -->
- (O2207 ^name predict-yes +)
- (S1 ^operator O2207 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1107 ^value 1 +)
- (R1 ^reward R1107 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2208 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2207 = 0.)
- =>WM: (15571: S1 ^operator O2210 +)
- =>WM: (15570: S1 ^operator O2209 +)
- =>WM: (15569: O2210 ^name predict-no)
- =>WM: (15568: O2209 ^name predict-yes)
- =>WM: (15567: R1108 ^value 1)
- =>WM: (15566: R1 ^reward R1108)
- <=WM: (15557: S1 ^operator O2207 +)
- <=WM: (15558: S1 ^operator O2208 +)
- <=WM: (15559: S1 ^operator O2208)
- <=WM: (15552: R1 ^reward R1107)
- <=WM: (15555: O2208 ^name predict-no)
- <=WM: (15554: O2207 ^name predict-yes)
- <=WM: (15553: R1107 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2209 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2210 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2208 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2207 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (15572: S1 ^operator O2210)
- 1105: O: O2210 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1105 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1104 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15573: I3 ^predict-no N1105)
- <=WM: (15561: N1104 ^status complete)
- <=WM: (15560: I3 ^predict-no N1104)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (15577: I2 ^dir U)
- =>WM: (15576: I2 ^reward 1)
- =>WM: (15575: I2 ^see 0)
- =>WM: (15574: N1105 ^status complete)
- <=WM: (15564: I2 ^dir U)
- <=WM: (15563: I2 ^reward 1)
- <=WM: (15562: I2 ^see 0)
- =>WM: (15578: I2 ^level-1 L0-root)
- <=WM: (15565: I2 ^level-1 L0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1109 ^value 1 +)
- (R1 ^reward R1109 +)
- Firing propose*predict-yes
- -->
- (O2211 ^name predict-yes +)
- (S1 ^operator O2211 +)
- Firing propose*predict-no
- -->
- (O2212 ^name predict-no +)
- (S1 ^operator O2212 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2210 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2209 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2210 ^name predict-no +)
- (S1 ^operator O2210 +)
- Retracting propose*predict-yes
- -->
- (O2209 ^name predict-yes +)
- (S1 ^operator O2209 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1108 ^value 1 +)
- (R1 ^reward R1108 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2210 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2209 = 0.)
- =>WM: (15584: S1 ^operator O2212 +)
- =>WM: (15583: S1 ^operator O2211 +)
- =>WM: (15582: O2212 ^name predict-no)
- =>WM: (15581: O2211 ^name predict-yes)
- =>WM: (15580: R1109 ^value 1)
- =>WM: (15579: R1 ^reward R1109)
- <=WM: (15570: S1 ^operator O2209 +)
- <=WM: (15571: S1 ^operator O2210 +)
- <=WM: (15572: S1 ^operator O2210)
- <=WM: (15566: R1 ^reward R1108)
- <=WM: (15569: O2210 ^name predict-no)
- <=WM: (15568: O2209 ^name predict-yes)
- <=WM: (15567: R1108 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2211 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2212 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2210 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2209 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (15585: S1 ^operator O2212)
- 1106: O: O2212 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1106 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1105 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15586: I3 ^predict-no N1106)
- <=WM: (15574: N1105 ^status complete)
- <=WM: (15573: I3 ^predict-no N1105)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (15590: I2 ^dir L)
- =>WM: (15589: I2 ^reward 1)
- =>WM: (15588: I2 ^see 0)
- =>WM: (15587: N1106 ^status complete)
- <=WM: (15577: I2 ^dir U)
- <=WM: (15576: I2 ^reward 1)
- <=WM: (15575: I2 ^see 0)
- =>WM: (15591: I2 ^level-1 L0-root)
- <=WM: (15578: I2 ^level-1 L0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*33
- -->
- (S1 ^operator O2212 = 0.6710541328724928)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*32
- -->
- (S1 ^operator O2211 = 0.02602968095631553)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1110 ^value 1 +)
- (R1 ^reward R1110 +)
- Firing propose*predict-yes
- -->
- (O2213 ^name predict-yes +)
- (S1 ^operator O2213 +)
- Firing propose*predict-no
- -->
- (O2214 ^name predict-no +)
- (S1 ^operator O2214 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2212 = 0.3289462806074842)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2211 = 0.4318908349243655)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2212 ^name predict-no +)
- (S1 ^operator O2212 +)
- Retracting propose*predict-yes
- -->
- (O2211 ^name predict-yes +)
- (S1 ^operator O2211 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1109 ^value 1 +)
- (R1 ^reward R1109 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2212 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2211 = 0.)
- =>WM: (15598: S1 ^operator O2214 +)
- =>WM: (15597: S1 ^operator O2213 +)
- =>WM: (15596: I3 ^dir L)
- =>WM: (15595: O2214 ^name predict-no)
- =>WM: (15594: O2213 ^name predict-yes)
- =>WM: (15593: R1110 ^value 1)
- =>WM: (15592: R1 ^reward R1110)
- <=WM: (15583: S1 ^operator O2211 +)
- <=WM: (15584: S1 ^operator O2212 +)
- <=WM: (15585: S1 ^operator O2212)
- <=WM: (15556: I3 ^dir U)
- <=WM: (15579: R1 ^reward R1109)
- <=WM: (15582: O2212 ^name predict-no)
- <=WM: (15581: O2211 ^name predict-yes)
- <=WM: (15580: R1109 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*32
- -->
- (S1 ^operator O2213 = 0.02602968095631553)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2213 = 0.4318908349243655)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*33
- -->
- (S1 ^operator O2214 = 0.6710541328724928)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2214 = 0.3289462806074842)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2212 = 0.3289462806074842)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*33
- -->
- (S1 ^operator O2212 = 0.6710541328724928)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2211 = 0.4318908349243655)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*32
- -->
- (S1 ^operator O2211 = 0.02602968095631553)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (15599: S1 ^operator O2214)
- 1107: O: O2214 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1107 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1106 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15600: I3 ^predict-no N1107)
- <=WM: (15587: N1106 ^status complete)
- <=WM: (15586: I3 ^predict-no N1106)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (15604: I2 ^dir U)
- =>WM: (15603: I2 ^reward 1)
- =>WM: (15602: I2 ^see 0)
- =>WM: (15601: N1107 ^status complete)
- <=WM: (15590: I2 ^dir L)
- <=WM: (15589: I2 ^reward 1)
- <=WM: (15588: I2 ^see 0)
- =>WM: (15605: I2 ^level-1 L0-root)
- <=WM: (15591: I2 ^level-1 L0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1111 ^value 1 +)
- (R1 ^reward R1111 +)
- Firing propose*predict-yes
- -->
- (O2215 ^name predict-yes +)
- (S1 ^operator O2215 +)
- Firing propose*predict-no
- -->
- (O2216 ^name predict-no +)
- (S1 ^operator O2216 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2214 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2213 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2214 ^name predict-no +)
- (S1 ^operator O2214 +)
- Retracting propose*predict-yes
- -->
- (O2213 ^name predict-yes +)
- (S1 ^operator O2213 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1110 ^value 1 +)
- (R1 ^reward R1110 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2214 = 0.3289462806074842)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*33
- -->
- (S1 ^operator O2214 = 0.6710541328724928)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2213 = 0.4318908349243655)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*32
- -->
- (S1 ^operator O2213 = 0.02602968095631553)
- =>WM: (15612: S1 ^operator O2216 +)
- =>WM: (15611: S1 ^operator O2215 +)
- =>WM: (15610: I3 ^dir U)
- =>WM: (15609: O2216 ^name predict-no)
- =>WM: (15608: O2215 ^name predict-yes)
- =>WM: (15607: R1111 ^value 1)
- =>WM: (15606: R1 ^reward R1111)
- <=WM: (15597: S1 ^operator O2213 +)
- <=WM: (15598: S1 ^operator O2214 +)
- <=WM: (15599: S1 ^operator O2214)
- <=WM: (15596: I3 ^dir L)
- <=WM: (15592: R1 ^reward R1110)
- <=WM: (15595: O2214 ^name predict-no)
- <=WM: (15594: O2213 ^name predict-yes)
- <=WM: (15593: R1110 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2215 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2216 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2214 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2213 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*6 0.565404 -0.236458 0.328946 -> 0.565404 -0.236458 0.328946(R,m,v=1,0.914773,0.0784091)
- RL update rl*prefer*rvt*predict-no*H0*6*v1*H1*33 0.434597 0.236457 0.671054 -> 0.434597 0.236457 0.671054(R,m,v=1,1,0)
- =>WM: (15613: S1 ^operator O2216)
- 1108: O: O2216 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1108 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1107 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15614: I3 ^predict-no N1108)
- <=WM: (15601: N1107 ^status complete)
- <=WM: (15600: I3 ^predict-no N1107)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (15618: I2 ^dir R)
- =>WM: (15617: I2 ^reward 1)
- =>WM: (15616: I2 ^see 0)
- =>WM: (15615: N1108 ^status complete)
- <=WM: (15604: I2 ^dir U)
- <=WM: (15603: I2 ^reward 1)
- <=WM: (15602: I2 ^see 0)
- =>WM: (15619: I2 ^level-1 L0-root)
- <=WM: (15605: I2 ^level-1 L0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2216 = -0.07401383653737587)
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2215 = 0.2631725397581521)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1112 ^value 1 +)
- (R1 ^reward R1112 +)
- Firing propose*predict-yes
- -->
- (O2217 ^name predict-yes +)
- (S1 ^operator O2217 +)
- Firing propose*predict-no
- -->
- (O2218 ^name predict-no +)
- (S1 ^operator O2218 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2216 = 0.2572450813885658)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2215 = 0.736828143476714)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2216 ^name predict-no +)
- (S1 ^operator O2216 +)
- Retracting propose*predict-yes
- -->
- (O2215 ^name predict-yes +)
- (S1 ^operator O2215 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1111 ^value 1 +)
- (R1 ^reward R1111 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2216 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2215 = 0.)
- =>WM: (15626: S1 ^operator O2218 +)
- =>WM: (15625: S1 ^operator O2217 +)
- =>WM: (15624: I3 ^dir R)
- =>WM: (15623: O2218 ^name predict-no)
- =>WM: (15622: O2217 ^name predict-yes)
- =>WM: (15621: R1112 ^value 1)
- =>WM: (15620: R1 ^reward R1112)
- <=WM: (15611: S1 ^operator O2215 +)
- <=WM: (15612: S1 ^operator O2216 +)
- <=WM: (15613: S1 ^operator O2216)
- <=WM: (15610: I3 ^dir U)
- <=WM: (15606: R1 ^reward R1111)
- <=WM: (15609: O2216 ^name predict-no)
- <=WM: (15608: O2215 ^name predict-yes)
- <=WM: (15607: R1111 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2217 = 0.2631725397581521)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2217 = 0.736828143476714)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2218 = -0.07401383653737587)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2218 = 0.2572450813885658)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2216 = 0.2572450813885658)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2216 = -0.07401383653737587)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2215 = 0.736828143476714)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2215 = 0.2631725397581521)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (15627: S1 ^operator O2217)
- 1109: O: O2217 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N1109 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1108 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15628: I3 ^predict-yes N1109)
- <=WM: (15615: N1108 ^status complete)
- <=WM: (15614: I3 ^predict-no N1108)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (15632: I2 ^dir R)
- =>WM: (15631: I2 ^reward 1)
- =>WM: (15630: I2 ^see 1)
- =>WM: (15629: N1109 ^status complete)
- <=WM: (15618: I2 ^dir R)
- <=WM: (15617: I2 ^reward 1)
- <=WM: (15616: I2 ^see 0)
- =>WM: (15633: I2 ^level-1 R1-root)
- <=WM: (15619: I2 ^level-1 L0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2217 = -0.3011268063455669)
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2218 = 0.7427543549314648)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1113 ^value 1 +)
- (R1 ^reward R1113 +)
- Firing propose*predict-yes
- -->
- (O2219 ^name predict-yes +)
- (S1 ^operator O2219 +)
- Firing propose*predict-no
- -->
- (O2220 ^name predict-no +)
- (S1 ^operator O2220 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2218 = 0.2572450813885658)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2217 = 0.736828143476714)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2218 ^name predict-no +)
- (S1 ^operator O2218 +)
- Retracting propose*predict-yes
- -->
- (O2217 ^name predict-yes +)
- (S1 ^operator O2217 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1112 ^value 1 +)
- (R1 ^reward R1112 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2218 = 0.2572450813885658)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2218 = -0.07401383653737587)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2217 = 0.736828143476714)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2217 = 0.2631725397581521)
- =>WM: (15640: S1 ^operator O2220 +)
- =>WM: (15639: S1 ^operator O2219 +)
- =>WM: (15638: O2220 ^name predict-no)
- =>WM: (15637: O2219 ^name predict-yes)
- =>WM: (15636: R1113 ^value 1)
- =>WM: (15635: R1 ^reward R1113)
- =>WM: (15634: I3 ^see 1)
- <=WM: (15625: S1 ^operator O2217 +)
- <=WM: (15627: S1 ^operator O2217)
- <=WM: (15626: S1 ^operator O2218 +)
- <=WM: (15620: R1 ^reward R1112)
- <=WM: (15511: I3 ^see 0)
- <=WM: (15623: O2218 ^name predict-no)
- <=WM: (15622: O2217 ^name predict-yes)
- <=WM: (15621: R1112 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2219 = 0.736828143476714)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2219 = -0.3011268063455669)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2220 = 0.2572450813885658)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2220 = 0.7427543549314648)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2218 = 0.2572450813885658)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2218 = 0.7427543549314648)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2217 = 0.736828143476714)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2217 = -0.3011268063455669)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*3 0.748236 -0.0114079 0.736828 -> 0.748236 -0.011408 0.736828(R,m,v=1,0.905556,0.0860025)
- RL update rl*prefer*rvt*predict-yes*H0*3*v1*H1*35 0.251764 0.0114085 0.263173 -> 0.251764 0.0114084 0.263172(R,m,v=1,1,0)
- =>WM: (15641: S1 ^operator O2220)
- 1110: O: O2220 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1110 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N1109 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15642: I3 ^predict-no N1110)
- <=WM: (15629: N1109 ^status complete)
- <=WM: (15628: I3 ^predict-yes N1109)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (15646: I2 ^dir L)
- =>WM: (15645: I2 ^reward 1)
- =>WM: (15644: I2 ^see 0)
- =>WM: (15643: N1110 ^status complete)
- <=WM: (15632: I2 ^dir R)
- <=WM: (15631: I2 ^reward 1)
- <=WM: (15630: I2 ^see 1)
- =>WM: (15647: I2 ^level-1 R0-root)
- <=WM: (15633: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2220 = 0.04178081990804111)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2219 = 0.568109797991154)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1114 ^value 1 +)
- (R1 ^reward R1114 +)
- Firing propose*predict-yes
- -->
- (O2221 ^name predict-yes +)
- (S1 ^operator O2221 +)
- Firing propose*predict-no
- -->
- (O2222 ^name predict-no +)
- (S1 ^operator O2222 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2220 = 0.3289462185854877)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2219 = 0.4318908349243655)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O2220 ^name predict-no +)
- (S1 ^operator O2220 +)
- Retracting propose*predict-yes
- -->
- (O2219 ^name predict-yes +)
- (S1 ^operator O2219 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1113 ^value 1 +)
- (R1 ^reward R1113 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2220 = 0.7427543549314648)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2220 = 0.2572450813885658)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2219 = -0.3011268063455669)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2219 = 0.7368280409914841)
- =>WM: (15655: S1 ^operator O2222 +)
- =>WM: (15654: S1 ^operator O2221 +)
- =>WM: (15653: I3 ^dir L)
- =>WM: (15652: O2222 ^name predict-no)
- =>WM: (15651: O2221 ^name predict-yes)
- =>WM: (15650: R1114 ^value 1)
- =>WM: (15649: R1 ^reward R1114)
- =>WM: (15648: I3 ^see 0)
- <=WM: (15639: S1 ^operator O2219 +)
- <=WM: (15640: S1 ^operator O2220 +)
- <=WM: (15641: S1 ^operator O2220)
- <=WM: (15624: I3 ^dir R)
- <=WM: (15635: R1 ^reward R1113)
- <=WM: (15634: I3 ^see 1)
- <=WM: (15638: O2220 ^name predict-no)
- <=WM: (15637: O2219 ^name predict-yes)
- <=WM: (15636: R1113 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2221 = 0.4318908349243655)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2221 = 0.568109797991154)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2222 = 0.3289462185854877)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2222 = 0.04178081990804111)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2220 = 0.3289462185854877)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2220 = 0.04178081990804111)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2219 = 0.4318908349243655)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2219 = 0.568109797991154)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*4 0.586135 -0.32889 0.257245 -> 0.586135 -0.32889 0.257245(R,m,v=1,0.873684,0.110944)
- RL update rl*prefer*rvt*predict-no*H0*4*v1*H1*36 0.413864 0.32889 0.742754 -> 0.413864 0.32889 0.742754(R,m,v=1,1,0)
- =>WM: (15656: S1 ^operator O2221)
- 1111: O: O2221 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N1111 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1110 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15657: I3 ^predict-yes N1111)
- <=WM: (15643: N1110 ^status complete)
- <=WM: (15642: I3 ^predict-no N1110)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (15661: I2 ^dir L)
- =>WM: (15660: I2 ^reward 1)
- =>WM: (15659: I2 ^see 1)
- =>WM: (15658: N1111 ^status complete)
- <=WM: (15646: I2 ^dir L)
- <=WM: (15645: I2 ^reward 1)
- <=WM: (15644: I2 ^see 0)
- =>WM: (15662: I2 ^level-1 L1-root)
- <=WM: (15647: I2 ^level-1 R0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2222 = 0.6710534338431002)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2221 = -0.06092862110810815)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1115 ^value 1 +)
- (R1 ^reward R1115 +)
- Firing propose*predict-yes
- -->
- (O2223 ^name predict-yes +)
- (S1 ^operator O2223 +)
- Firing propose*predict-no
- -->
- (O2224 ^name predict-no +)
- (S1 ^operator O2224 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2222 = 0.3289462185854877)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2221 = 0.4318908349243655)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2222 ^name predict-no +)
- (S1 ^operator O2222 +)
- Retracting propose*predict-yes
- -->
- (O2221 ^name predict-yes +)
- (S1 ^operator O2221 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1114 ^value 1 +)
- (R1 ^reward R1114 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2222 = 0.04178081990804111)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2222 = 0.3289462185854877)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2221 = 0.568109797991154)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2221 = 0.4318908349243655)
- =>WM: (15669: S1 ^operator O2224 +)
- =>WM: (15668: S1 ^operator O2223 +)
- =>WM: (15667: O2224 ^name predict-no)
- =>WM: (15666: O2223 ^name predict-yes)
- =>WM: (15665: R1115 ^value 1)
- =>WM: (15664: R1 ^reward R1115)
- =>WM: (15663: I3 ^see 1)
- <=WM: (15654: S1 ^operator O2221 +)
- <=WM: (15656: S1 ^operator O2221)
- <=WM: (15655: S1 ^operator O2222 +)
- <=WM: (15649: R1 ^reward R1114)
- <=WM: (15648: I3 ^see 0)
- <=WM: (15652: O2222 ^name predict-no)
- <=WM: (15651: O2221 ^name predict-yes)
- <=WM: (15650: R1114 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2223 = 0.4318908349243655)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2223 = -0.06092862110810815)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2224 = 0.3289462185854877)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2224 = 0.6710534338431002)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2222 = 0.3289462185854877)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2222 = 0.6710534338431002)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2221 = 0.4318908349243655)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2221 = -0.06092862110810815)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*5 0.683777 -0.251886 0.431891 -> 0.683777 -0.251886 0.431891(R,m,v=1,0.92973,0.0656874)
- RL update rl*prefer*rvt*predict-yes*H0*5*v1*H1*30 0.316224 0.251886 0.56811 -> 0.316224 0.251886 0.56811(R,m,v=1,1,0)
- =>WM: (15670: S1 ^operator O2224)
- 1112: O: O2224 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1112 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N1111 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15671: I3 ^predict-no N1112)
- <=WM: (15658: N1111 ^status complete)
- <=WM: (15657: I3 ^predict-yes N1111)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (15675: I2 ^dir L)
- =>WM: (15674: I2 ^reward 1)
- =>WM: (15673: I2 ^see 0)
- =>WM: (15672: N1112 ^status complete)
- <=WM: (15661: I2 ^dir L)
- <=WM: (15660: I2 ^reward 1)
- <=WM: (15659: I2 ^see 1)
- =>WM: (15676: I2 ^level-1 L0-root)
- <=WM: (15662: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*33
- -->
- (S1 ^operator O2224 = 0.6710540708504963)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*32
- -->
- (S1 ^operator O2223 = 0.02602968095631553)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1116 ^value 1 +)
- (R1 ^reward R1116 +)
- Firing propose*predict-yes
- -->
- (O2225 ^name predict-yes +)
- (S1 ^operator O2225 +)
- Firing propose*predict-no
- -->
- (O2226 ^name predict-no +)
- (S1 ^operator O2226 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2224 = 0.3289462185854877)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2223 = 0.4318907399870376)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O2224 ^name predict-no +)
- (S1 ^operator O2224 +)
- Retracting propose*predict-yes
- -->
- (O2223 ^name predict-yes +)
- (S1 ^operator O2223 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1115 ^value 1 +)
- (R1 ^reward R1115 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2224 = 0.6710534338431002)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2224 = 0.3289462185854877)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2223 = -0.06092862110810815)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2223 = 0.4318907399870376)
- =>WM: (15683: S1 ^operator O2226 +)
- =>WM: (15682: S1 ^operator O2225 +)
- =>WM: (15681: O2226 ^name predict-no)
- =>WM: (15680: O2225 ^name predict-yes)
- =>WM: (15679: R1116 ^value 1)
- =>WM: (15678: R1 ^reward R1116)
- =>WM: (15677: I3 ^see 0)
- <=WM: (15668: S1 ^operator O2223 +)
- <=WM: (15669: S1 ^operator O2224 +)
- <=WM: (15670: S1 ^operator O2224)
- <=WM: (15664: R1 ^reward R1115)
- <=WM: (15663: I3 ^see 1)
- <=WM: (15667: O2224 ^name predict-no)
- <=WM: (15666: O2223 ^name predict-yes)
- <=WM: (15665: R1115 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2225 = 0.4318907399870376)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*32
- -->
- (S1 ^operator O2225 = 0.02602968095631553)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2226 = 0.3289462185854877)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*33
- -->
- (S1 ^operator O2226 = 0.6710540708504963)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2224 = 0.3289462185854877)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*33
- -->
- (S1 ^operator O2224 = 0.6710540708504963)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2223 = 0.4318907399870376)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*32
- -->
- (S1 ^operator O2223 = 0.02602968095631553)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*6 0.565404 -0.236458 0.328946 -> 0.565404 -0.236458 0.328946(R,m,v=1,0.915254,0.0780046)
- RL update rl*prefer*rvt*predict-no*H0*6*v1*H1*43 0.434596 0.236458 0.671053 -> 0.434596 0.236458 0.671053(R,m,v=1,1,0)
- =>WM: (15684: S1 ^operator O2226)
- 1113: O: O2226 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1113 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1112 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15685: I3 ^predict-no N1113)
- <=WM: (15672: N1112 ^status complete)
- <=WM: (15671: I3 ^predict-no N1112)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (15689: I2 ^dir L)
- =>WM: (15688: I2 ^reward 1)
- =>WM: (15687: I2 ^see 0)
- =>WM: (15686: N1113 ^status complete)
- <=WM: (15675: I2 ^dir L)
- <=WM: (15674: I2 ^reward 1)
- <=WM: (15673: I2 ^see 0)
- =>WM: (15690: I2 ^level-1 L0-root)
- <=WM: (15676: I2 ^level-1 L0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*33
- -->
- (S1 ^operator O2226 = 0.6710540708504963)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*32
- -->
- (S1 ^operator O2225 = 0.02602968095631553)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1117 ^value 1 +)
- (R1 ^reward R1117 +)
- Firing propose*predict-yes
- -->
- (O2227 ^name predict-yes +)
- (S1 ^operator O2227 +)
- Firing propose*predict-no
- -->
- (O2228 ^name predict-no +)
- (S1 ^operator O2228 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2226 = 0.3289462707211995)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2225 = 0.4318907399870376)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2226 ^name predict-no +)
- (S1 ^operator O2226 +)
- Retracting propose*predict-yes
- -->
- (O2225 ^name predict-yes +)
- (S1 ^operator O2225 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1116 ^value 1 +)
- (R1 ^reward R1116 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*33
- -->
- (S1 ^operator O2226 = 0.6710540708504963)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2226 = 0.3289462707211995)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*32
- -->
- (S1 ^operator O2225 = 0.02602968095631553)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2225 = 0.4318907399870376)
- =>WM: (15696: S1 ^operator O2228 +)
- =>WM: (15695: S1 ^operator O2227 +)
- =>WM: (15694: O2228 ^name predict-no)
- =>WM: (15693: O2227 ^name predict-yes)
- =>WM: (15692: R1117 ^value 1)
- =>WM: (15691: R1 ^reward R1117)
- <=WM: (15682: S1 ^operator O2225 +)
- <=WM: (15683: S1 ^operator O2226 +)
- <=WM: (15684: S1 ^operator O2226)
- <=WM: (15678: R1 ^reward R1116)
- <=WM: (15681: O2226 ^name predict-no)
- <=WM: (15680: O2225 ^name predict-yes)
- <=WM: (15679: R1116 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2227 = 0.4318907399870376)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*32
- -->
- (S1 ^operator O2227 = 0.02602968095631553)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2228 = 0.3289462707211995)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*33
- -->
- (S1 ^operator O2228 = 0.6710540708504963)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2226 = 0.3289462707211995)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*33
- -->
- (S1 ^operator O2226 = 0.6710540708504963)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2225 = 0.4318907399870376)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*32
- -->
- (S1 ^operator O2225 = 0.02602968095631553)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*6 0.565404 -0.236458 0.328946 -> 0.565404 -0.236458 0.328946(R,m,v=1,0.91573,0.0776043)
- RL update rl*prefer*rvt*predict-no*H0*6*v1*H1*33 0.434597 0.236457 0.671054 -> 0.434597 0.236457 0.671054(R,m,v=1,1,0)
- =>WM: (15697: S1 ^operator O2228)
- 1114: O: O2228 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1114 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1113 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15698: I3 ^predict-no N1114)
- <=WM: (15686: N1113 ^status complete)
- <=WM: (15685: I3 ^predict-no N1113)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction L in state State-A
- In State-A moving L
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (15702: I2 ^dir R)
- =>WM: (15701: I2 ^reward 1)
- =>WM: (15700: I2 ^see 0)
- =>WM: (15699: N1114 ^status complete)
- <=WM: (15689: I2 ^dir L)
- <=WM: (15688: I2 ^reward 1)
- <=WM: (15687: I2 ^see 0)
- =>WM: (15703: I2 ^level-1 L0-root)
- <=WM: (15690: I2 ^level-1 L0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2228 = -0.07401383653737587)
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2227 = 0.2631724372729221)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1118 ^value 1 +)
- (R1 ^reward R1118 +)
- Firing propose*predict-yes
- -->
- (O2229 ^name predict-yes +)
- (S1 ^operator O2229 +)
- Firing propose*predict-no
- -->
- (O2230 ^name predict-no +)
- (S1 ^operator O2230 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2228 = 0.2572451659405612)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2227 = 0.7368280409914841)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2228 ^name predict-no +)
- (S1 ^operator O2228 +)
- Retracting propose*predict-yes
- -->
- (O2227 ^name predict-yes +)
- (S1 ^operator O2227 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1117 ^value 1 +)
- (R1 ^reward R1117 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*33
- -->
- (S1 ^operator O2228 = 0.6710540196147419)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2228 = 0.3289462194854451)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*32
- -->
- (S1 ^operator O2227 = 0.02602968095631553)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2227 = 0.4318907399870376)
- =>WM: (15710: S1 ^operator O2230 +)
- =>WM: (15709: S1 ^operator O2229 +)
- =>WM: (15708: I3 ^dir R)
- =>WM: (15707: O2230 ^name predict-no)
- =>WM: (15706: O2229 ^name predict-yes)
- =>WM: (15705: R1118 ^value 1)
- =>WM: (15704: R1 ^reward R1118)
- <=WM: (15695: S1 ^operator O2227 +)
- <=WM: (15696: S1 ^operator O2228 +)
- <=WM: (15697: S1 ^operator O2228)
- <=WM: (15653: I3 ^dir L)
- <=WM: (15691: R1 ^reward R1117)
- <=WM: (15694: O2228 ^name predict-no)
- <=WM: (15693: O2227 ^name predict-yes)
- <=WM: (15692: R1117 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2229 = 0.2631724372729221)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2229 = 0.7368280409914841)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2230 = -0.07401383653737587)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2230 = 0.2572451659405612)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2228 = 0.2572451659405612)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2228 = -0.07401383653737587)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2227 = 0.7368280409914841)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2227 = 0.2631724372729221)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*6 0.565404 -0.236458 0.328946 -> 0.565404 -0.236458 0.328946(R,m,v=1,0.916201,0.077208)
- RL update rl*prefer*rvt*predict-no*H0*6*v1*H1*33 0.434597 0.236457 0.671054 -> 0.434597 0.236457 0.671054(R,m,v=1,1,0)
- =>WM: (15711: S1 ^operator O2229)
- 1115: O: O2229 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N1115 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1114 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15712: I3 ^predict-yes N1115)
- <=WM: (15699: N1114 ^status complete)
- <=WM: (15698: I3 ^predict-no N1114)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction R in state State-A
- In State-A moving R
- ENV: (next state, see, prediction correct?) = (State-B, 1, True)
- predict error 0
- dir: dir isR
- --- END Output Phase ---
- --- Input Phase ---
- =>WM: (15716: I2 ^dir R)
- =>WM: (15715: I2 ^reward 1)
- =>WM: (15714: I2 ^see 1)
- =>WM: (15713: N1115 ^status complete)
- <=WM: (15702: I2 ^dir R)
- <=WM: (15701: I2 ^reward 1)
- <=WM: (15700: I2 ^see 0)
- =>WM: (15717: I2 ^level-1 R1-root)
- <=WM: (15703: I2 ^level-1 L0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2229 = -0.3011268063455669)
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2230 = 0.7427544394834602)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1119 ^value 1 +)
- (R1 ^reward R1119 +)
- Firing propose*predict-yes
- -->
- (O2231 ^name predict-yes +)
- (S1 ^operator O2231 +)
- Firing propose*predict-no
- -->
- (O2232 ^name predict-no +)
- (S1 ^operator O2232 +)
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2230 = 0.2572451659405612)
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2229 = 0.7368280409914841)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2230 ^name predict-no +)
- (S1 ^operator O2230 +)
- Retracting propose*predict-yes
- -->
- (O2229 ^name predict-yes +)
- (S1 ^operator O2229 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1118 ^value 1 +)
- (R1 ^reward R1118 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2230 = 0.2572451659405612)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*34
- -->
- (S1 ^operator O2230 = -0.07401383653737587)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2229 = 0.7368280409914841)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*35
- -->
- (S1 ^operator O2229 = 0.2631724372729221)
- =>WM: (15724: S1 ^operator O2232 +)
- =>WM: (15723: S1 ^operator O2231 +)
- =>WM: (15722: O2232 ^name predict-no)
- =>WM: (15721: O2231 ^name predict-yes)
- =>WM: (15720: R1119 ^value 1)
- =>WM: (15719: R1 ^reward R1119)
- =>WM: (15718: I3 ^see 1)
- <=WM: (15709: S1 ^operator O2229 +)
- <=WM: (15711: S1 ^operator O2229)
- <=WM: (15710: S1 ^operator O2230 +)
- <=WM: (15704: R1 ^reward R1118)
- <=WM: (15677: I3 ^see 0)
- <=WM: (15707: O2230 ^name predict-no)
- <=WM: (15706: O2229 ^name predict-yes)
- <=WM: (15705: R1118 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2231 = 0.7368280409914841)
- Firing prefer*rvt*predict-yes*H0*3*v1*H1
- -->
- Firing rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2231 = -0.3011268063455669)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2232 = 0.2572451659405612)
- Firing prefer*rvt*predict-no*H0*4*v1*H1
- -->
- Firing rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2232 = 0.7427544394834602)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2230 = 0.2572451659405612)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2230 = 0.7427544394834602)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2229 = 0.7368280409914841)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2229 = -0.3011268063455669)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*3 0.748236 -0.011408 0.736828 -> 0.748236 -0.0114081 0.736828(R,m,v=1,0.906077,0.085574)
- RL update rl*prefer*rvt*predict-yes*H0*3*v1*H1*35 0.251764 0.0114084 0.263172 -> 0.251764 0.0114083 0.263172(R,m,v=1,1,0)
- =>WM: (15725: S1 ^operator O2232)
- 1116: O: O2232 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1116 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N1115 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15726: I3 ^predict-no N1116)
- <=WM: (15713: N1115 ^status complete)
- <=WM: (15712: I3 ^predict-yes N1115)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction R in state State-B
- In State-B moving R
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- \-/--- Input Phase ---
- =>WM: (15730: I2 ^dir U)
- =>WM: (15729: I2 ^reward 1)
- =>WM: (15728: I2 ^see 0)
- =>WM: (15727: N1116 ^status complete)
- <=WM: (15716: I2 ^dir R)
- <=WM: (15715: I2 ^reward 1)
- <=WM: (15714: I2 ^see 1)
- =>WM: (15731: I2 ^level-1 R0-root)
- <=WM: (15717: I2 ^level-1 R1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1120 ^value 1 +)
- (R1 ^reward R1120 +)
- Firing propose*predict-yes
- -->
- (O2233 ^name predict-yes +)
- (S1 ^operator O2233 +)
- Firing propose*predict-no
- -->
- (O2234 ^name predict-no +)
- (S1 ^operator O2234 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2232 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2231 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O2232 ^name predict-no +)
- (S1 ^operator O2232 +)
- Retracting propose*predict-yes
- -->
- (O2231 ^name predict-yes +)
- (S1 ^operator O2231 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1119 ^value 1 +)
- (R1 ^reward R1119 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir R +)
- Retracting rl*prefer*rvt*predict-no*H0*4*v1*H1*36
- -->
- (S1 ^operator O2232 = 0.7427544394834602)
- Retracting rl*prefer*rvt*predict-no*H0*4
- -->
- (S1 ^operator O2232 = 0.2572451659405612)
- Retracting rl*prefer*rvt*predict-yes*H0*3*v1*H1*37
- -->
- (S1 ^operator O2231 = -0.3011268063455669)
- Retracting rl*prefer*rvt*predict-yes*H0*3
- -->
- (S1 ^operator O2231 = 0.7368279692518231)
- =>WM: (15739: S1 ^operator O2234 +)
- =>WM: (15738: S1 ^operator O2233 +)
- =>WM: (15737: I3 ^dir U)
- =>WM: (15736: O2234 ^name predict-no)
- =>WM: (15735: O2233 ^name predict-yes)
- =>WM: (15734: R1120 ^value 1)
- =>WM: (15733: R1 ^reward R1120)
- =>WM: (15732: I3 ^see 0)
- <=WM: (15723: S1 ^operator O2231 +)
- <=WM: (15724: S1 ^operator O2232 +)
- <=WM: (15725: S1 ^operator O2232)
- <=WM: (15708: I3 ^dir R)
- <=WM: (15719: R1 ^reward R1119)
- <=WM: (15718: I3 ^see 1)
- <=WM: (15722: O2232 ^name predict-no)
- <=WM: (15721: O2231 ^name predict-yes)
- <=WM: (15720: R1119 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2233 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2234 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2232 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2231 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*4 0.586135 -0.32889 0.257245 -> 0.586135 -0.32889 0.257245(R,m,v=1,0.874346,0.110444)
- RL update rl*prefer*rvt*predict-no*H0*4*v1*H1*36 0.413864 0.32889 0.742754 -> 0.413864 0.32889 0.742754(R,m,v=1,1,0)
- =>WM: (15740: S1 ^operator O2234)
- 1117: O: O2234 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1117 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1116 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15741: I3 ^predict-no N1117)
- <=WM: (15727: N1116 ^status complete)
- <=WM: (15726: I3 ^predict-no N1116)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-B
- In State-B moving U
- ENV: (next state, see, prediction correct?) = (State-B, 0, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- |\---- Input Phase ---
- =>WM: (15745: I2 ^dir L)
- =>WM: (15744: I2 ^reward 1)
- =>WM: (15743: I2 ^see 0)
- =>WM: (15742: N1117 ^status complete)
- <=WM: (15730: I2 ^dir U)
- <=WM: (15729: I2 ^reward 1)
- <=WM: (15728: I2 ^see 0)
- =>WM: (15746: I2 ^level-1 R0-root)
- <=WM: (15731: I2 ^level-1 R0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2234 = 0.04178081990804111)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2233 = 0.568109703053826)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1121 ^value 1 +)
- (R1 ^reward R1121 +)
- Firing propose*predict-yes
- -->
- (O2235 ^name predict-yes +)
- (S1 ^operator O2235 +)
- Firing propose*predict-no
- -->
- (O2236 ^name predict-no +)
- (S1 ^operator O2236 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2234 = 0.3289461836204171)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2233 = 0.4318907399870376)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2234 ^name predict-no +)
- (S1 ^operator O2234 +)
- Retracting propose*predict-yes
- -->
- (O2233 ^name predict-yes +)
- (S1 ^operator O2233 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1120 ^value 1 +)
- (R1 ^reward R1120 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2234 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2233 = 0.)
- =>WM: (15753: S1 ^operator O2236 +)
- =>WM: (15752: S1 ^operator O2235 +)
- =>WM: (15751: I3 ^dir L)
- =>WM: (15750: O2236 ^name predict-no)
- =>WM: (15749: O2235 ^name predict-yes)
- =>WM: (15748: R1121 ^value 1)
- =>WM: (15747: R1 ^reward R1121)
- <=WM: (15738: S1 ^operator O2233 +)
- <=WM: (15739: S1 ^operator O2234 +)
- <=WM: (15740: S1 ^operator O2234)
- <=WM: (15737: I3 ^dir U)
- <=WM: (15733: R1 ^reward R1120)
- <=WM: (15736: O2234 ^name predict-no)
- <=WM: (15735: O2233 ^name predict-yes)
- <=WM: (15734: R1120 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2235 = 0.568109703053826)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2235 = 0.4318907399870376)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2236 = 0.04178081990804111)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2236 = 0.3289461836204171)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2234 = 0.3289461836204171)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2234 = 0.04178081990804111)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2233 = 0.4318907399870376)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2233 = 0.568109703053826)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-no*H0*2 1 0 1 -> 1 0 1(R,m,v=1,1,0)
- =>WM: (15754: S1 ^operator O2235)
- 1118: O: O2235 (predict-yes)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-yes N1118 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-no N1117 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15755: I3 ^predict-yes N1118)
- <=WM: (15742: N1117 ^status complete)
- <=WM: (15741: I3 ^predict-no N1117)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-yes
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-yes for direction L in state State-B
- In State-B moving L
- ENV: (next state, see, prediction correct?) = (State-A, 1, True)
- predict error 0
- dir: dir isU
- --- END Output Phase ---
- /|\--- Input Phase ---
- =>WM: (15759: I2 ^dir U)
- =>WM: (15758: I2 ^reward 1)
- =>WM: (15757: I2 ^see 1)
- =>WM: (15756: N1118 ^status complete)
- <=WM: (15745: I2 ^dir L)
- <=WM: (15744: I2 ^reward 1)
- <=WM: (15743: I2 ^see 0)
- =>WM: (15760: I2 ^level-1 L1-root)
- <=WM: (15746: I2 ^level-1 R0-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1122 ^value 1 +)
- (R1 ^reward R1122 +)
- Firing propose*predict-yes
- -->
- (O2237 ^name predict-yes +)
- (S1 ^operator O2237 +)
- Firing propose*predict-no
- -->
- (O2238 ^name predict-no +)
- (S1 ^operator O2238 +)
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2236 = 0.9999999999999999)
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2235 = 0.)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Retracting propose*predict-no
- -->
- (O2236 ^name predict-no +)
- (S1 ^operator O2236 +)
- Retracting propose*predict-yes
- -->
- (O2235 ^name predict-yes +)
- (S1 ^operator O2235 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1121 ^value 1 +)
- (R1 ^reward R1121 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2236 = 0.3289461836204171)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*38
- -->
- (S1 ^operator O2236 = 0.04178081990804111)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2235 = 0.4318907399870376)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*30
- -->
- (S1 ^operator O2235 = 0.568109703053826)
- =>WM: (15768: S1 ^operator O2238 +)
- =>WM: (15767: S1 ^operator O2237 +)
- =>WM: (15766: I3 ^dir U)
- =>WM: (15765: O2238 ^name predict-no)
- =>WM: (15764: O2237 ^name predict-yes)
- =>WM: (15763: R1122 ^value 1)
- =>WM: (15762: R1 ^reward R1122)
- =>WM: (15761: I3 ^see 1)
- <=WM: (15752: S1 ^operator O2235 +)
- <=WM: (15754: S1 ^operator O2235)
- <=WM: (15753: S1 ^operator O2236 +)
- <=WM: (15751: I3 ^dir L)
- <=WM: (15747: R1 ^reward R1121)
- <=WM: (15732: I3 ^see 0)
- <=WM: (15750: O2236 ^name predict-no)
- <=WM: (15749: O2235 ^name predict-yes)
- <=WM: (15748: R1121 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2237 = 0.)
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2238 = 0.9999999999999999)
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2236 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2235 = 0.)
- --- END Proposal Phase ---
- --- Decision Phase ---
- RL update rl*prefer*rvt*predict-yes*H0*5 0.683777 -0.251886 0.431891 -> 0.683777 -0.251886 0.431891(R,m,v=1,0.930108,0.0653589)
- RL update rl*prefer*rvt*predict-yes*H0*5*v1*H1*30 0.316224 0.251886 0.56811 -> 0.316223 0.251886 0.56811(R,m,v=1,1,0)
- =>WM: (15769: S1 ^operator O2238)
- 1119: O: O2238 (predict-no)
- --- END Decision Phase ---
- --- Application Phase ---
- --- Firing Productions (PE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing apply*operator
- -->
- (I3 ^predict-no N1119 + :O )
- Firing apply*operator*complete
- -->
- (I3 ^predict-yes N1118 - :O )
- inner elaboration loop at bottom goal.
- --- Change Working Memory (PE) ---
- =>WM: (15770: I3 ^predict-no N1119)
- <=WM: (15756: N1118 ^status complete)
- <=WM: (15755: I3 ^predict-yes N1118)
- --- Firing Productions (IE) For State At Depth 1 ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing monitor*world
- -->
- I see 1 and I'm going to do: predict-no
- inner elaboration loop at bottom goal.
- --- Change Working Memory (IE) ---
- --- END Application Phase ---
- --- Output Phase ---
- ENV: Agent did: predict-no for direction U in state State-A
- In State-A moving U
- ENV: (next state, see, prediction correct?) = (State-A, 0, True)
- predict error 0
- dir: dir isL
- --- END Output Phase ---
- -/|--- Input Phase ---
- =>WM: (15774: I2 ^dir L)
- =>WM: (15773: I2 ^reward 1)
- =>WM: (15772: I2 ^see 0)
- =>WM: (15771: N1119 ^status complete)
- <=WM: (15759: I2 ^dir U)
- <=WM: (15758: I2 ^reward 1)
- <=WM: (15757: I2 ^see 1)
- =>WM: (15775: I2 ^level-1 L1-root)
- <=WM: (15760: I2 ^level-1 L1-root)
- --- END Input Phase ---
- --- Proposal Phase ---
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2238 = 0.6710534859788121)
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2237 = -0.06092862110810815)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing elaborate*copy-see-to-output-link
- -->
- (I3 ^see 0 +)
- Firing elaborate*reward*based*on*reward
- -->
- (R1123 ^value 1 +)
- (R1 ^reward R1123 +)
- Firing propose*predict-yes
- -->
- (O2239 ^name predict-yes +)
- (S1 ^operator O2239 +)
- Firing propose*predict-no
- -->
- (O2240 ^name predict-no +)
- (S1 ^operator O2240 +)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2238 = 0.3289461836204171)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2237 = 0.431890673530908)
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir L +)
- inner elaboration loop at bottom goal.
- Retracting elaborate*copy-see-to-output-link
- -->
- (I3 ^see 1 +)
- Retracting propose*predict-no
- -->
- (O2238 ^name predict-no +)
- (S1 ^operator O2238 +)
- Retracting propose*predict-yes
- -->
- (O2237 ^name predict-yes +)
- (S1 ^operator O2237 +)
- Retracting elaborate*reward*based*on*reward
- -->
- (R1122 ^value 1 +)
- (R1 ^reward R1122 +)
- Retracting elaborate*copy-dir-to-output-link
- -->
- (I3 ^dir U +)
- Retracting rl*prefer*rvt*predict-no*H0*2
- -->
- (S1 ^operator O2238 = 0.9999999999999999)
- Retracting rl*prefer*rvt*predict-yes*H0*1
- -->
- (S1 ^operator O2237 = 0.)
- =>WM: (15783: S1 ^operator O2240 +)
- =>WM: (15782: S1 ^operator O2239 +)
- =>WM: (15781: I3 ^dir L)
- =>WM: (15780: O2240 ^name predict-no)
- =>WM: (15779: O2239 ^name predict-yes)
- =>WM: (15778: R1123 ^value 1)
- =>WM: (15777: R1 ^reward R1123)
- =>WM: (15776: I3 ^see 0)
- <=WM: (15767: S1 ^operator O2237 +)
- <=WM: (15768: S1 ^operator O2238 +)
- <=WM: (15769: S1 ^operator O2238)
- <=WM: (15766: I3 ^dir U)
- <=WM: (15762: R1 ^reward R1122)
- <=WM: (15761: I3 ^see 1)
- <=WM: (15765: O2238 ^name predict-no)
- <=WM: (15764: O2237 ^name predict-yes)
- <=WM: (15763: R1122 ^value 1)
- --- Inner Elaboration Phase, active level 1 (S1) ---
- Firing prefer*rvt*predict-yes*H0
- -->
- Firing rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2239 = -0.06092862110810815)
- Firing rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2239 = 0.431890673530908)
- Firing prefer*rvt*predict-yes*H0*5*v1*H1
- -->
- Firing prefer*rvt*predict-no*H0
- -->
- Firing rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2240 = 0.6710534859788121)
- Firing rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2240 = 0.3289461836204171)
- Firing prefer*rvt*predict-no*H0*6*v1*H1
- -->
- inner elaboration loop at bottom goal.
- Retracting rl*prefer*rvt*predict-no*H0*6
- -->
- (S1 ^operator O2238 = 0.3289461836204171)
- Retracting rl*prefer*rvt*predict-no*H0*6*v1*H1*43
- -->
- (S1 ^operator O2238 = 0.6710534859788121)
- Retracting rl*prefer*rvt*predict-yes*H0*5
- -->
- (S1 ^operator O2237 = 0.431890673530908)
- Retracting rl*prefer*rvt*predict-yes*H0*5*v1*H1*31
- -->
- (S1 ^operator O2237 = -0.06092862110810815)