mirror of
https://github.com/ArthurDanjou/ArtStudies.git
synced 2026-03-16 05:11:40 +01:00
1645 lines
125 KiB
Plaintext
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "fef45687",
   "metadata": {},
   "source": [
    "# RL Project: Atari Tennis Tournament\n",
    "\n",
    "This notebook implements four Reinforcement Learning algorithms to play Atari Tennis (`ALE/Tennis-v5` via Gymnasium):\n",
    "\n",
    "1. **SARSA** — Semi-gradient SARSA with linear approximation (inspired by Lab 7, on-policy update from Lab 5B)\n",
    "2. **Q-Learning** — Off-policy linear approximation (inspired by Lab 5B)\n",
    "3. **DQN** — Deep Q-Network with PyTorch MLP, experience replay and target network (inspired by Lab 6A + classic DQN), GPU-accelerated via MPS\n",
    "4. **Monte Carlo** — First-visit MC control with linear approximation (inspired by Lab 4)\n",
    "\n",
    "Each agent is **pre-trained independently** against the built-in Atari AI opponent, then evaluated in a comparative tournament."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b50d7174",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "PyTorch device: cuda\n"
     ]
    }
   ],
   "source": [
    "import itertools\n",
    "import pickle\n",
    "from collections import deque\n",
    "from pathlib import Path\n",
    "\n",
    "import ale_py  # noqa: F401 — registers ALE environments\n",
    "import gymnasium as gym\n",
    "import supersuit as ss\n",
    "from gymnasium.wrappers import FrameStackObservation, ResizeObservation\n",
    "from pettingzoo.atari import tennis_v3\n",
    "from tqdm.auto import tqdm\n",
    "\n",
    "import matplotlib.pyplot as plt\n",
    "import numpy as np\n",
    "import seaborn as sns\n",
    "\n",
    "import torch\n",
    "from torch import nn, optim\n",
    "\n",
    "if torch.cuda.is_available():\n",
    "    DEVICE = torch.device(\"cuda\")\n",
    "elif torch.backends.mps.is_available():\n",
    "    DEVICE = torch.device(\"mps\")\n",
    "else:\n",
    "    DEVICE = torch.device(\"cpu\")\n",
    "print(f\"PyTorch device: {DEVICE}\")\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ff3486a4",
   "metadata": {},
   "outputs": [],
   "source": [
    "CHECKPOINT_DIR = Path(\"checkpoints\")\n",
    "CHECKPOINT_DIR.mkdir(parents=True, exist_ok=True)\n",
    "\n",
    "\n",
    "def _ckpt_path(name: str) -> Path:\n",
    "    \"\"\"Return the checkpoint path for an agent (DQN uses .pt, others use .pkl).\"\"\"\n",
    "    base = name.lower().replace(\" \", \"_\").replace(\"-\", \"_\")\n",
    "    ext = \".pt\" if name == \"DQN\" else \".pkl\"\n",
    "    return CHECKPOINT_DIR / (base + ext)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ec691487",
   "metadata": {},
   "source": [
    "# Utility Functions\n",
    "\n",
    "## Observation Normalization\n",
    "\n",
    "The Tennis environment produces image observations of shape `(4, 84, 84)` after preprocessing (grayscale + resize + frame stack).\n",
    "We normalize them into 1D `float64` vectors divided by 255, as in Lab 7 (continuous feature normalization).\n",
    "\n",
    "## ε-greedy Policy\n",
    "\n",
    "Follows the pattern from Lab 5B (`epsilon_greedy`) and Lab 7 (`epsilon_greedy_action`):\n",
    "- With probability ε: random action (exploration)\n",
    "- With probability 1−ε: action maximizing $\\hat{q}(s, a)$ with uniform tie-breaking (`np.flatnonzero`)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "be85c130",
   "metadata": {},
   "outputs": [],
   "source": [
    "def normalize_obs(observation: np.ndarray) -> np.ndarray:\n",
    "    \"\"\"Flatten and normalize an observation to a 1D float64 vector.\n",
    "\n",
    "    Replicates the /255.0 normalization used in all agents from the original project.\n",
    "    For image observations of shape (4, 84, 84), this produces a vector of length 28_224.\n",
    "\n",
    "    Args:\n",
    "        observation: Raw observation array from the environment.\n",
    "\n",
    "    Returns:\n",
    "        1D numpy array of dtype float64, values in [0, 1].\n",
    "\n",
    "    \"\"\"\n",
    "    return observation.flatten().astype(np.float64) / 255.0\n",
    "\n",
    "\n",
    "def epsilon_greedy(\n",
    "    q_values: np.ndarray,\n",
    "    epsilon: float,\n",
    "    rng: np.random.Generator,\n",
    ") -> int:\n",
    "    \"\"\"Select an action using an ε-greedy policy with fair tie-breaking.\n",
    "\n",
    "    Follows the same logic as Lab 5B epsilon_greedy and Lab 7 epsilon_greedy_action:\n",
    "    - With probability epsilon: choose a random action (exploration).\n",
    "    - With probability 1-epsilon: choose the action with highest Q-value (exploitation).\n",
    "    - If multiple actions share the maximum Q-value, break ties uniformly at random.\n",
    "\n",
    "    Handles edge cases: empty q_values, NaN/Inf values.\n",
    "\n",
    "    Args:\n",
    "        q_values: Array of Q-values for each action, shape (n_actions,).\n",
    "        epsilon: Exploration probability in [0, 1].\n",
    "        rng: NumPy random number generator.\n",
    "\n",
    "    Returns:\n",
    "        Selected action index.\n",
    "\n",
    "    \"\"\"\n",
    "    q_values = np.asarray(q_values, dtype=np.float64).reshape(-1)\n",
    "\n",
    "    if q_values.size == 0:\n",
    "        msg = \"q_values is empty.\"\n",
    "        raise ValueError(msg)\n",
    "\n",
    "    if rng.random() < epsilon:\n",
    "        return int(rng.integers(0, q_values.size))\n",
    "\n",
    "    # Handle NaN/Inf values safely\n",
    "    finite_mask = np.isfinite(q_values)\n",
    "    if not np.any(finite_mask):\n",
    "        return int(rng.integers(0, q_values.size))\n",
    "\n",
    "    safe_q = q_values.copy()\n",
    "    safe_q[~finite_mask] = -np.inf\n",
    "    max_val = np.max(safe_q)\n",
    "    best = np.flatnonzero(safe_q == max_val)\n",
    "\n",
    "    if best.size == 0:\n",
    "        return int(rng.integers(0, q_values.size))\n",
    "\n",
    "    return int(rng.choice(best))\n"
   ]
  },
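The tie-breaking behavior is easy to demonstrate in isolation. A condensed, standalone re-implementation of `epsilon_greedy` (the empty-array and NaN/Inf guards are dropped for brevity) shows that with ε = 0 both maximal actions are eventually selected:

```python
import numpy as np


def epsilon_greedy(q_values, epsilon, rng):
    # Condensed version: explore with probability epsilon, otherwise
    # pick uniformly among the actions that attain the maximum Q-value.
    q_values = np.asarray(q_values, dtype=np.float64).reshape(-1)
    if rng.random() < epsilon:
        return int(rng.integers(0, q_values.size))
    best = np.flatnonzero(q_values == np.max(q_values))
    return int(rng.choice(best))


rng = np.random.default_rng(0)
# Actions 1 and 3 tie at Q = 2.0; greedy (epsilon=0) selection should
# still visit both over repeated calls thanks to np.flatnonzero + rng.choice.
picks = {epsilon_greedy([0.0, 2.0, 1.0, 2.0], 0.0, rng) for _ in range(200)}
print(sorted(picks))  # [1, 3]
```

A plain `np.argmax` would always return action 1 here, silently biasing early training when all weights are zero and every Q-value ties.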
  {
   "cell_type": "markdown",
   "id": "bb53da28",
   "metadata": {},
   "source": [
    "# Agent Definitions\n",
    "\n",
    "## Base Class `Agent`\n",
    "\n",
    "Common interface for all agents, same signatures: `get_action`, `update`, `save`, `load`.\n",
    "Serialization uses `pickle` (compatible with numpy arrays)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ded9b1fb",
   "metadata": {},
   "outputs": [],
   "source": [
    "class Agent:\n",
    "    \"\"\"Base class for reinforcement learning agents.\n",
    "\n",
    "    All agents share this interface so they are compatible with the tournament system.\n",
    "    \"\"\"\n",
    "\n",
    "    def __init__(self, seed: int, action_space: int) -> None:\n",
    "        \"\"\"Initialize the agent with its action space and a reproducible RNG.\"\"\"\n",
    "        self.action_space = action_space\n",
    "        self.rng = np.random.default_rng(seed=seed)\n",
    "\n",
    "    def get_action(self, observation: np.ndarray, epsilon: float = 0.0) -> int:\n",
    "        \"\"\"Select an action from the current observation.\"\"\"\n",
    "        raise NotImplementedError\n",
    "\n",
    "    def update(\n",
    "        self,\n",
    "        state: np.ndarray,\n",
    "        action: int,\n",
    "        reward: float,\n",
    "        next_state: np.ndarray,\n",
    "        done: bool,\n",
    "        next_action: int | None = None,\n",
    "    ) -> None:\n",
    "        \"\"\"Update agent parameters from one transition.\"\"\"\n",
    "\n",
    "    def save(self, filename: str) -> None:\n",
    "        \"\"\"Save the agent state to disk using pickle.\"\"\"\n",
    "        with Path(filename).open(\"wb\") as f:\n",
    "            pickle.dump(self.__dict__, f)\n",
    "\n",
    "    def load(self, filename: str) -> None:\n",
    "        \"\"\"Load the agent state from disk.\"\"\"\n",
    "        with Path(filename).open(\"rb\") as f:\n",
    "            self.__dict__.update(pickle.load(f))  # noqa: S301\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8a4eae79",
   "metadata": {},
   "source": [
    "## Random Agent (baseline)\n",
    "\n",
    "Serves as a reference to evaluate the performance of learning agents."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "78bdc9d2",
   "metadata": {},
   "outputs": [],
   "source": [
    "class RandomAgent(Agent):\n",
    "    \"\"\"A simple agent that selects actions uniformly at random (baseline).\"\"\"\n",
    "\n",
    "    def get_action(self, observation: np.ndarray, epsilon: float = 0.0) -> int:\n",
    "        \"\"\"Select a random action, ignoring the observation and epsilon.\"\"\"\n",
    "        _ = observation, epsilon\n",
    "        return int(self.rng.integers(0, self.action_space))\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5f679032",
   "metadata": {},
   "source": [
    "## SARSA Agent — Linear Approximation (Semi-gradient)\n",
    "\n",
    "This agent combines:\n",
    "- **Linear approximation** from Lab 7 (`SarsaAgent`): $\\hat{q}(s, a; \\mathbf{W}) = \\mathbf{W}_a^\\top \\phi(s)$\n",
    "- **On-policy SARSA update** from Lab 5B (`train_sarsa`): $\\delta = r + \\gamma \\hat{q}(s', a') - \\hat{q}(s, a)$\n",
    "\n",
    "The semi-gradient update rule is:\n",
    "$$W_a \\leftarrow W_a + \\alpha \\cdot \\delta \\cdot \\phi(s)$$\n",
    "\n",
    "where $\\phi(s)$ is the normalized observation vector (analogous to tile coding features in Lab 7, but in dense form)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c124ed9a",
   "metadata": {},
   "outputs": [],
   "source": [
    "class SarsaAgent(Agent):\n",
    "    \"\"\"Semi-gradient SARSA agent with linear function approximation.\n",
    "\n",
    "    Inspired by:\n",
    "    - Lab 7 SarsaAgent: linear q(s,a) = W_a . phi(s), semi-gradient update\n",
    "    - Lab 5B train_sarsa: on-policy TD target using Q(s', a')\n",
    "\n",
    "    The weight matrix W has shape (n_actions, n_features).\n",
    "    For a given state s, q(s, a) = W[a] @ phi(s) is the dot product\n",
    "    of the action's weight row with the normalized observation.\n",
    "    \"\"\"\n",
    "\n",
    "    def __init__(\n",
    "        self,\n",
    "        n_features: int,\n",
    "        n_actions: int,\n",
    "        alpha: float = 0.001,\n",
    "        gamma: float = 0.99,\n",
    "        seed: int = 42,\n",
    "    ) -> None:\n",
    "        \"\"\"Initialize SARSA agent with linear weights.\n",
    "\n",
    "        Args:\n",
    "            n_features: Dimension of the feature vector phi(s).\n",
    "            n_actions: Number of discrete actions.\n",
    "            alpha: Learning rate (kept small for high-dim features).\n",
    "            gamma: Discount factor.\n",
    "            seed: RNG seed for reproducibility.\n",
    "\n",
    "        \"\"\"\n",
    "        super().__init__(seed, n_actions)\n",
    "        self.n_features = n_features\n",
    "        self.alpha = alpha\n",
    "        self.gamma = gamma\n",
    "        # Weight matrix: one row per action, analogous to Lab 7's self.w\n",
    "        # but organized as (n_actions, n_features) for dense features.\n",
    "        self.W = np.zeros((n_actions, n_features), dtype=np.float64)\n",
    "\n",
    "    def _q_values(self, phi: np.ndarray) -> np.ndarray:\n",
    "        \"\"\"Compute Q-values for all actions given feature vector phi(s).\n",
    "\n",
    "        Equivalent to Lab 7's self.q(s, a) = self.w[idx].sum()\n",
    "        but using dense linear approximation: q(s, a) = W[a] @ phi.\n",
    "\n",
    "        Args:\n",
    "            phi: Normalized feature vector, shape (n_features,).\n",
    "\n",
    "        Returns:\n",
    "            Array of Q-values, shape (n_actions,).\n",
    "\n",
    "        \"\"\"\n",
    "        return self.W @ phi  # shape (n_actions,)\n",
    "\n",
    "    def get_action(self, observation: np.ndarray, epsilon: float = 0.0) -> int:\n",
    "        \"\"\"Select action using ε-greedy policy over linear Q-values.\n",
    "\n",
    "        Same pattern as Lab 7 SarsaAgent.eps_greedy:\n",
    "        compute q-values for all actions, then apply epsilon_greedy.\n",
    "        \"\"\"\n",
    "        phi = normalize_obs(observation)\n",
    "        q_vals = self._q_values(phi)\n",
    "        return epsilon_greedy(q_vals, epsilon, self.rng)\n",
    "\n",
    "    def update(\n",
    "        self,\n",
    "        state: np.ndarray,\n",
    "        action: int,\n",
    "        reward: float,\n",
    "        next_state: np.ndarray,\n",
    "        done: bool,\n",
    "        next_action: int | None = None,\n",
    "    ) -> None:\n",
    "        \"\"\"Perform one semi-gradient SARSA update.\n",
    "\n",
    "        Follows the SARSA update from Lab 5B train_sarsa:\n",
    "        td_target = r + gamma * Q(s', a') * (0 if done else 1)\n",
    "        Q(s, a) += alpha * (td_target - Q(s, a))\n",
    "\n",
    "        In continuous form with linear approximation (Lab 7 SarsaAgent.update):\n",
    "        delta = target - q(s, a)\n",
    "        W[a] += alpha * delta * phi(s)\n",
    "\n",
    "        Args:\n",
    "            state: Current observation.\n",
    "            action: Action taken.\n",
    "            reward: Reward received.\n",
    "            next_state: Next observation.\n",
    "            done: Whether the episode ended.\n",
    "            next_action: Action chosen in next state (required for SARSA).\n",
    "\n",
    "        \"\"\"\n",
    "        phi = np.nan_to_num(normalize_obs(state), nan=0.0, posinf=0.0, neginf=0.0)\n",
    "        q_sa = float(self.W[action] @ phi)  # current estimate q(s, a)\n",
    "        if not np.isfinite(q_sa):\n",
    "            q_sa = 0.0\n",
    "\n",
    "        if done:\n",
    "            # Terminal: no future value (Lab 5B: gamma * Q[s2, a2] * 0)\n",
    "            target = reward\n",
    "        else:\n",
    "            # On-policy: use q(s', a') where a' is the actual next action\n",
    "            # This is the key SARSA property (Lab 5B)\n",
    "            phi_next = np.nan_to_num(normalize_obs(next_state), nan=0.0, posinf=0.0, neginf=0.0)\n",
    "            if next_action is None:\n",
    "                next_action = 0  # fallback, should not happen in practice\n",
    "            q_sp_ap = float(self.W[next_action] @ phi_next)\n",
    "            if not np.isfinite(q_sp_ap):\n",
    "                q_sp_ap = 0.0\n",
    "            target = float(reward) + self.gamma * q_sp_ap\n",
    "\n",
    "        # Semi-gradient update: W[a] += alpha * delta * phi(s)\n",
    "        # Analogous to Lab 7: self.w[idx] += self.alpha * delta\n",
    "        if not np.isfinite(target):\n",
    "            return\n",
    "\n",
    "        delta = float(target - q_sa)\n",
    "        if not np.isfinite(delta):\n",
    "            return\n",
    "\n",
    "        td_step = float(np.clip(delta, -1_000.0, 1_000.0))\n",
    "        self.W[action] += self.alpha * td_step * phi\n",
    "        self.W[action] = np.nan_to_num(self.W[action], nan=0.0, posinf=1e6, neginf=-1e6)\n"
   ]
  },
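One semi-gradient SARSA step can be traced by hand. The sketch below uses a hypothetical tiny setting (3 features, 2 actions, α = 0.1; the notebook itself uses 28 224 pixel features) and applies the update $W_a \leftarrow W_a + \alpha\,(r + \gamma\,\hat{q}(s',a') - \hat{q}(s,a))\,\phi(s)$ exactly as in the agent's `update`:

```python
import numpy as np

# Hypothetical miniature problem, for illustration only.
alpha, gamma = 0.1, 0.99
W = np.zeros((2, 3))                      # (n_actions, n_features), all zeros as at init
phi = np.array([1.0, 0.0, 0.5])           # phi(s)
phi_next = np.array([0.0, 1.0, 0.5])      # phi(s')
a, a_next, r = 0, 1, 1.0                  # transition (s, a, r, s', a')

# TD error: delta = r + gamma * q(s', a') - q(s, a); both q's are 0 here, so delta = 1.0
q_sa = W[a] @ phi
target = r + gamma * (W[a_next] @ phi_next)
delta = target - q_sa

# Semi-gradient step moves only row W[a], in the direction of phi(s)
W[a] += alpha * delta * phi
print(W[0].tolist())  # [0.1, 0.0, 0.05]
print(W[1].tolist())  # [0.0, 0.0, 0.0] — the other action's row is untouched
```

Note the update touches only the weight row of the action actually taken, scaled by the features that were active; zero features leave their weights unchanged.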
  {
   "cell_type": "markdown",
   "id": "d4e18536",
   "metadata": {},
   "source": [
    "## Q-Learning Agent — Linear Approximation (Off-policy)\n",
    "\n",
    "Same architecture as SARSA but with the **off-policy update** from Lab 5B (`train_q_learning`):\n",
    "\n",
    "$$\\delta = r + \\gamma \\max_{a'} \\hat{q}(s', a') - \\hat{q}(s, a)$$\n",
    "\n",
    "The key difference from SARSA: we use $\\max_{a'} Q(s', a')$ instead of $Q(s', a')$ where $a'$ is the action actually chosen. This allows learning the optimal policy independently of the exploration policy."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f5b5b9ea",
   "metadata": {},
   "outputs": [],
   "source": [
    "class QLearningAgent(Agent):\n",
    "    \"\"\"Q-Learning agent with linear function approximation (off-policy).\n",
    "\n",
    "    Inspired by:\n",
    "    - Lab 5B train_q_learning: off-policy TD target using max_a' Q(s', a')\n",
    "    - Lab 7 SarsaAgent: linear approximation q(s,a) = W[a] @ phi(s)\n",
    "\n",
    "    The only difference from SarsaAgent is the TD target:\n",
    "    SARSA uses Q(s', a') (on-policy), Q-Learning uses max_a' Q(s', a') (off-policy).\n",
    "    \"\"\"\n",
    "\n",
    "    def __init__(\n",
    "        self,\n",
    "        n_features: int,\n",
    "        n_actions: int,\n",
    "        alpha: float = 0.001,\n",
    "        gamma: float = 0.99,\n",
    "        seed: int = 42,\n",
    "    ) -> None:\n",
    "        \"\"\"Initialize Q-Learning agent with linear weights.\n",
    "\n",
    "        Args:\n",
    "            n_features: Dimension of the feature vector phi(s).\n",
    "            n_actions: Number of discrete actions.\n",
    "            alpha: Learning rate.\n",
    "            gamma: Discount factor.\n",
    "            seed: RNG seed.\n",
    "\n",
    "        \"\"\"\n",
    "        super().__init__(seed, n_actions)\n",
    "        self.n_features = n_features\n",
    "        self.alpha = alpha\n",
    "        self.gamma = gamma\n",
    "        self.W = np.zeros((n_actions, n_features), dtype=np.float64)\n",
    "\n",
    "    def _q_values(self, phi: np.ndarray) -> np.ndarray:\n",
    "        \"\"\"Compute Q-values for all actions: q(s, a) = W[a] @ phi for each a.\"\"\"\n",
    "        return self.W @ phi\n",
    "\n",
    "    def get_action(self, observation: np.ndarray, epsilon: float = 0.0) -> int:\n",
    "        \"\"\"Select action using ε-greedy policy over linear Q-values.\"\"\"\n",
    "        phi = normalize_obs(observation)\n",
    "        q_vals = self._q_values(phi)\n",
    "        return epsilon_greedy(q_vals, epsilon, self.rng)\n",
    "\n",
    "    def update(\n",
    "        self,\n",
    "        state: np.ndarray,\n",
    "        action: int,\n",
    "        reward: float,\n",
    "        next_state: np.ndarray,\n",
    "        done: bool,\n",
    "        next_action: int | None = None,\n",
    "    ) -> None:\n",
    "        \"\"\"Perform one Q-learning update.\n",
    "\n",
    "        Follows Lab 5B train_q_learning:\n",
    "        td_target = r + gamma * max(Q[s2]) * (0 if terminated else 1)\n",
    "        Q[s, a] += alpha * (td_target - Q[s, a])\n",
    "\n",
    "        In continuous form with linear approximation:\n",
    "        delta = target - q(s, a)\n",
    "        W[a] += alpha * delta * phi(s)\n",
    "        \"\"\"\n",
    "        _ = next_action  # Q-learning is off-policy: next_action is not used\n",
    "        phi = np.nan_to_num(normalize_obs(state), nan=0.0, posinf=0.0, neginf=0.0)\n",
    "        q_sa = float(self.W[action] @ phi)\n",
    "        if not np.isfinite(q_sa):\n",
    "            q_sa = 0.0\n",
    "\n",
    "        if done:\n",
    "            # Terminal state: no future value\n",
    "            # Lab 5B: gamma * np.max(Q[s2]) * (0 if terminated else 1)\n",
    "            target = reward\n",
    "        else:\n",
    "            # Off-policy: use max over all actions in next state\n",
    "            # This is the key Q-learning property (Lab 5B)\n",
    "            phi_next = np.nan_to_num(normalize_obs(next_state), nan=0.0, posinf=0.0, neginf=0.0)\n",
    "            q_next_all = self._q_values(phi_next)  # q(s', a') for all a'\n",
    "            q_next_max = float(np.max(q_next_all))\n",
    "            if not np.isfinite(q_next_max):\n",
    "                q_next_max = 0.0\n",
    "            target = float(reward) + self.gamma * q_next_max\n",
    "\n",
    "        if not np.isfinite(target):\n",
    "            return\n",
    "\n",
    "        delta = float(target - q_sa)\n",
    "        if not np.isfinite(delta):\n",
    "            return\n",
    "\n",
    "        td_step = float(np.clip(delta, -1_000.0, 1_000.0))\n",
    "        self.W[action] += self.alpha * td_step * phi\n",
    "        self.W[action] = np.nan_to_num(self.W[action], nan=0.0, posinf=1e6, neginf=-1e6)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f644e2ef",
   "metadata": {},
   "source": [
    "## DQN Agent — PyTorch MLP with Experience Replay and Target Network\n",
    "\n",
    "This agent implements the Deep Q-Network (DQN) using **PyTorch** for GPU-accelerated training (MPS on Apple Silicon).\n",
    "\n",
    "**Network architecture** (same structure as before, now as `torch.nn.Module`):\n",
    "$$\\text{Input}(n\\_features) \\to \\text{Linear}(256) \\to \\text{ReLU} \\to \\text{Linear}(256) \\to \\text{ReLU} \\to \\text{Linear}(n\\_actions)$$\n",
    "\n",
    "**Key techniques** (inspired by Lab 6A Dyna-Q + classic DQN):\n",
    "- **Experience Replay**: circular buffer of transitions, sampled as minibatches for off-policy updates\n",
    "- **Target Network**: periodically synchronized copy of the Q-network, stabilizes learning\n",
    "- **Gradient clipping**: prevents exploding gradients in deep networks\n",
    "- **GPU acceleration**: tensors on MPS/CUDA device for fast forward/backward passes"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ec090a9a",
   "metadata": {},
   "outputs": [],
   "source": [
    "class QNetwork(nn.Module):\n",
    "    \"\"\"MLP Q-network: Input -> 256 -> ReLU -> 256 -> ReLU -> n_actions.\"\"\"\n",
    "\n",
    "    def __init__(self, n_features: int, n_actions: int) -> None:\n",
    "        super().__init__()\n",
    "        self.net = nn.Sequential(\n",
    "            nn.Linear(n_features, 256),\n",
    "            nn.ReLU(),\n",
    "            nn.Linear(256, 256),\n",
    "            nn.ReLU(),\n",
    "            nn.Linear(256, n_actions),\n",
    "        )\n",
    "\n",
    "    def forward(self, x: torch.Tensor) -> torch.Tensor:\n",
    "        return self.net(x)\n",
    "\n",
    "\n",
    "class ReplayBuffer:\n",
    "    \"\"\"Fixed-size circular replay buffer storing (s, a, r, s', done) transitions.\"\"\"\n",
    "\n",
    "    def __init__(self, capacity: int) -> None:\n",
    "        self.buffer: deque[tuple[np.ndarray, int, float, np.ndarray, bool]] = deque(maxlen=capacity)\n",
    "\n",
    "    def push(self, state: np.ndarray, action: int, reward: float, next_state: np.ndarray, done: bool) -> None:\n",
    "        self.buffer.append((state, action, reward, next_state, done))\n",
    "\n",
    "    def sample(self, batch_size: int, rng: np.random.Generator) -> tuple[np.ndarray, ...]:\n",
    "        indices = rng.choice(len(self.buffer), size=batch_size, replace=False)\n",
    "        batch = [self.buffer[i] for i in indices]\n",
    "        states = np.array([t[0] for t in batch])\n",
    "        actions = np.array([t[1] for t in batch])\n",
    "        rewards = np.array([t[2] for t in batch])\n",
    "        next_states = np.array([t[3] for t in batch])\n",
    "        dones = np.array([t[4] for t in batch], dtype=np.float32)\n",
    "        return states, actions, rewards, next_states, dones\n",
    "\n",
    "    def __len__(self) -> int:\n",
    "        return len(self.buffer)\n",
    "\n",
    "\n",
    "class DQNAgent(Agent):\n",
    "    \"\"\"Deep Q-Network agent using PyTorch with GPU acceleration (MPS/CUDA).\n",
    "\n",
    "    Inspired by:\n",
    "    - Lab 6A Dyna-Q: experience replay (store transitions, sample for updates)\n",
    "    - Classic DQN (Mnih et al., 2015): target network, minibatch SGD\n",
    "\n",
    "    Uses Adam optimizer and Huber loss (smooth L1) for stable training.\n",
    "    \"\"\"\n",
    "\n",
    "    def __init__(\n",
    "        self,\n",
    "        n_features: int,\n",
    "        n_actions: int,\n",
    "        lr: float = 1e-4,\n",
    "        gamma: float = 0.99,\n",
    "        buffer_size: int = 50_000,\n",
    "        batch_size: int = 128,\n",
    "        target_update_freq: int = 1000,\n",
    "        seed: int = 42,\n",
    "    ) -> None:\n",
    "        \"\"\"Initialize DQN agent.\n",
    "\n",
    "        Args:\n",
    "            n_features: Input feature dimension.\n",
    "            n_actions: Number of discrete actions.\n",
    "            lr: Learning rate for Adam optimizer.\n",
    "            gamma: Discount factor.\n",
    "            buffer_size: Maximum replay buffer capacity.\n",
    "            batch_size: Minibatch size for updates.\n",
    "            target_update_freq: Steps between target network syncs.\n",
    "            seed: RNG seed.\n",
    "\n",
    "        \"\"\"\n",
    "        super().__init__(seed, n_actions)\n",
    "        self.n_features = n_features\n",
    "        self.lr = lr\n",
    "        self.gamma = gamma\n",
    "        self.batch_size = batch_size\n",
    "        self.target_update_freq = target_update_freq\n",
    "        self.update_step = 0\n",
    "\n",
    "        # Q-network and target network on GPU\n",
    "        torch.manual_seed(seed)\n",
    "        self.q_net = QNetwork(n_features, n_actions).to(DEVICE)\n",
    "        self.target_net = QNetwork(n_features, n_actions).to(DEVICE)\n",
    "        self.target_net.load_state_dict(self.q_net.state_dict())\n",
    "        self.target_net.eval()\n",
    "\n",
    "        self.optimizer = optim.Adam(self.q_net.parameters(), lr=lr)\n",
    "        self.loss_fn = nn.SmoothL1Loss()  # Huber loss — more robust than MSE\n",
    "\n",
    "        # Experience replay buffer\n",
    "        self.replay_buffer = ReplayBuffer(buffer_size)\n",
    "\n",
    "    def get_action(self, observation: np.ndarray, epsilon: float = 0.0) -> int:\n",
    "        \"\"\"Select action using ε-greedy policy over Q-network outputs.\"\"\"\n",
    "        if self.rng.random() < epsilon:\n",
    "            return int(self.rng.integers(0, self.action_space))\n",
    "\n",
    "        phi = normalize_obs(observation)\n",
    "        with torch.no_grad():\n",
    "            state_t = torch.from_numpy(phi).float().unsqueeze(0).to(DEVICE)\n",
    "            q_vals = self.q_net(state_t).cpu().numpy().squeeze(0)\n",
    "        return epsilon_greedy(q_vals, 0.0, self.rng)\n",
    "\n",
    "    def update(\n",
    "        self,\n",
    "        state: np.ndarray,\n",
    "        action: int,\n",
    "        reward: float,\n",
    "        next_state: np.ndarray,\n",
    "        done: bool,\n",
    "        next_action: int | None = None,\n",
    "    ) -> None:\n",
    "        \"\"\"Store transition and perform a minibatch DQN update.\n",
    "\n",
    "        Steps:\n",
    "        1. Add transition to replay buffer\n",
    "        2. If buffer has enough samples, sample a minibatch\n",
    "        3. Compute targets using target network (max_a' Q_target(s', a'))\n",
    "        4. Compute Huber loss and backpropagate\n",
    "        5. Clip gradients and update weights with Adam\n",
    "        6. Periodically sync target network\n",
    "\n",
    "        \"\"\"\n",
    "        _ = next_action  # DQN is off-policy\n",
    "\n",
    "        # Store transition\n",
    "        phi_s = normalize_obs(state)\n",
    "        phi_sp = normalize_obs(next_state)\n",
    "        self.replay_buffer.push(phi_s, action, reward, phi_sp, done)\n",
    "\n",
    "        if len(self.replay_buffer) < self.batch_size:\n",
    "            return\n",
    "\n",
    "        # Sample minibatch\n",
    "        states_b, actions_b, rewards_b, next_states_b, dones_b = self.replay_buffer.sample(\n",
    "            self.batch_size, self.rng,\n",
    "        )\n",
    "\n",
    "        # Convert to tensors on device\n",
    "        states_t = torch.from_numpy(states_b).float().to(DEVICE)\n",
    "        actions_t = torch.from_numpy(actions_b).long().to(DEVICE)\n",
    "        rewards_t = torch.from_numpy(rewards_b).float().to(DEVICE)\n",
    "        next_states_t = torch.from_numpy(next_states_b).float().to(DEVICE)\n",
    "        dones_t = torch.from_numpy(dones_b).float().to(DEVICE)\n",
    "\n",
    "        # Current Q-values for taken actions\n",
    "        q_values = self.q_net(states_t)\n",
    "        q_curr = q_values.gather(1, actions_t.unsqueeze(1)).squeeze(1)\n",
    "\n",
    "        # Target Q-values (off-policy: max over actions in next state)\n",
    "        with torch.no_grad():\n",
    "            q_next = self.target_net(next_states_t).max(dim=1).values\n",
    "            targets = rewards_t + (1.0 - dones_t) * self.gamma * q_next\n",
    "\n",
    "        # Compute loss and update\n",
    "        loss = self.loss_fn(q_curr, targets)\n",
    "        self.optimizer.zero_grad()\n",
    "        loss.backward()\n",
    "        nn.utils.clip_grad_norm_(self.q_net.parameters(), max_norm=10.0)\n",
    "        self.optimizer.step()\n",
    "\n",
    "        # Sync target network periodically\n",
    "        self.update_step += 1\n",
    "        if self.update_step % self.target_update_freq == 0:\n",
    "            self.target_net.load_state_dict(self.q_net.state_dict())\n",
    "\n",
    "    def save(self, filename: str) -> None:\n",
    "        \"\"\"Save agent state using torch.save (networks + optimizer + metadata).\"\"\"\n",
    "        torch.save(\n",
    "            {\n",
    "                \"q_net\": self.q_net.state_dict(),\n",
    "                \"target_net\": self.target_net.state_dict(),\n",
    "                \"optimizer\": self.optimizer.state_dict(),\n",
    "                \"update_step\": self.update_step,\n",
    "                \"n_features\": self.n_features,\n",
    "                \"action_space\": self.action_space,\n",
    "            },\n",
    "            filename,\n",
    "        )\n",
    "\n",
    "    def load(self, filename: str) -> None:\n",
    "        \"\"\"Load agent state from a torch checkpoint.\"\"\"\n",
    "        checkpoint = torch.load(filename, map_location=DEVICE, weights_only=False)\n",
    "        self.q_net.load_state_dict(checkpoint[\"q_net\"])\n",
    "        self.target_net.load_state_dict(checkpoint[\"target_net\"])\n",
    "        self.optimizer.load_state_dict(checkpoint[\"optimizer\"])\n",
    "        self.update_step = checkpoint[\"update_step\"]\n",
    "        self.q_net.to(DEVICE)\n",
    "        self.target_net.to(DEVICE)\n",
    "        self.target_net.eval()\n"
   ]
  },
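The "circular" property of the replay buffer comes entirely from `deque(maxlen=capacity)`: once the buffer is full, each `push` silently evicts the oldest transition. A trimmed, NumPy-only version of the buffer's `push`/`__len__` interface (sampling and the network parts omitted) demonstrates this:

```python
from collections import deque

import numpy as np


class ReplayBuffer:
    """Trimmed sketch of the notebook's buffer: push + __len__ only."""

    def __init__(self, capacity: int) -> None:
        # deque with maxlen drops the oldest element once capacity is reached
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done) -> None:
        self.buffer.append((state, action, reward, next_state, done))

    def __len__(self) -> int:
        return len(self.buffer)


buf = ReplayBuffer(capacity=3)
for i in range(5):  # push 5 transitions into a capacity-3 buffer
    buf.push(np.zeros(2), i, float(i), np.zeros(2), False)

actions_kept = [t[1] for t in buf.buffer]
print(len(buf), actions_kept)  # 3 [2, 3, 4] — transitions 0 and 1 were evicted
```

This is why DQN training data stays roughly on-distribution: stale transitions from a much older policy age out of the buffer automatically, with no bookkeeping in `update`.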
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "b7b63455",
|
||
"metadata": {},
|
||
"source": [
|
||
"## Monte Carlo Agent — Linear Approximation (First-visit)\n",
|
||
"\n",
|
||
"This agent is inspired by Lab 4 (`mc_control_epsilon_soft`):\n",
|
||
"- Accumulates transitions in an episode buffer `(state, action, reward)`\n",
|
||
"- At the end of the episode (`done=True`), computes **cumulative returns** by traversing the buffer backward:\n",
|
||
" $$G \\leftarrow \\gamma \\cdot G + r$$\n",
|
||
"- Updates weights with the semi-gradient rule:\n",
|
||
" $$W_a \\leftarrow W_a + \\alpha \\cdot (G - \\hat{q}(s, a)) \\cdot \\phi(s)$$\n",
|
||
"\n",
|
||
"Unlike TD methods (SARSA, Q-Learning), Monte Carlo waits for the complete episode to finish before updating."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "3c9d74be",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"class MonteCarloAgent(Agent):\n",
|
||
" \"\"\"Monte Carlo control agent with linear function approximation.\n",
|
||
"\n",
|
||
" Inspired by Lab 4 mc_control_epsilon_soft:\n",
|
||
" - Accumulates transitions in an episode buffer\n",
|
||
" - At episode end (done=True), computes discounted returns backward:\n",
|
||
" G = gamma * G + r (same as Lab 4's reversed loop)\n",
|
||
" - Updates weights with semi-gradient: W[a] += alpha * (G - q(s,a)) * phi(s)\n",
|
||
"\n",
|
||
" Unlike TD methods (SARSA, Q-Learning), no update occurs until the episode ends.\n",
|
||
" \"\"\"\n",
|
||
"\n",
"    def __init__(\n",
"        self,\n",
"        n_features: int,\n",
"        n_actions: int,\n",
"        alpha: float = 0.001,\n",
"        gamma: float = 0.99,\n",
"        seed: int = 42,\n",
"    ) -> None:\n",
"        \"\"\"Initialize Monte Carlo agent.\n",
"\n",
"        Args:\n",
"            n_features: Dimension of the feature vector phi(s).\n",
"            n_actions: Number of discrete actions.\n",
"            alpha: Learning rate.\n",
"            gamma: Discount factor.\n",
"            seed: RNG seed.\n",
"\n",
"        \"\"\"\n",
"        super().__init__(seed, n_actions)\n",
"        self.n_features = n_features\n",
"        self.alpha = alpha\n",
"        self.gamma = gamma\n",
"        self.W = np.zeros((n_actions, n_features), dtype=np.float64)\n",
"        # Episode buffer: stores (state, action, reward) tuples\n",
"        # Analogous to Lab 4's episode list in generate_episode\n",
"        self.episode_buffer: list[tuple[np.ndarray, int, float]] = []\n",
"\n",
"    def _q_values(self, phi: np.ndarray) -> np.ndarray:\n",
"        \"\"\"Compute Q-values for all actions: q(s, a) = W[a] @ phi for each a.\"\"\"\n",
"        return self.W @ phi\n",
"\n",
"    def get_action(self, observation: np.ndarray, epsilon: float = 0.0) -> int:\n",
"        \"\"\"Select action using ε-greedy policy over linear Q-values.\"\"\"\n",
"        phi = normalize_obs(observation)\n",
"        q_vals = self._q_values(phi)\n",
"        return epsilon_greedy(q_vals, epsilon, self.rng)\n",
"\n",
"    def update(\n",
"        self,\n",
"        state: np.ndarray,\n",
"        action: int,\n",
"        reward: float,\n",
"        next_state: np.ndarray,\n",
"        done: bool,\n",
"        next_action: int | None = None,\n",
"    ) -> None:\n",
"        \"\"\"Accumulate transitions and update at episode end with MC returns.\n",
"\n",
"        Follows Lab 4 mc_control_epsilon_soft / mc_control_exploring_starts:\n",
"        1. Append (state, action, reward) to episode buffer\n",
"        2. If not done: wait (no update yet)\n",
"        3. If done: compute returns backward and update weights\n",
"\n",
"        The backward loop is exactly the Lab 4 pattern:\n",
"            G = 0\n",
"            for s, a, r in reversed(episode_buffer):\n",
"                G = gamma * G + r\n",
"                # update Q(s, a) toward G\n",
"        \"\"\"\n",
"        _ = next_state, next_action  # Not used in MC\n",
"\n",
"        self.episode_buffer.append((state, action, reward))\n",
"\n",
"        if not done:\n",
"            return  # Wait until episode ends\n",
"\n",
"        # Episode finished: compute MC returns and update\n",
"        # Backward pass through episode (Lab 4 pattern)\n",
"        returns = 0.0\n",
"        for s, a, r in reversed(self.episode_buffer):\n",
"            returns = self.gamma * returns + r\n",
"\n",
"            phi = np.nan_to_num(normalize_obs(s), nan=0.0, posinf=0.0, neginf=0.0)\n",
"            q_sa = float(self.W[a] @ phi)\n",
"            if not np.isfinite(q_sa):\n",
"                q_sa = 0.0\n",
"\n",
"            # Semi-gradient update toward the MC return G\n",
"            # Analogous to Lab 4: Q[(s,a)] += (G - Q[(s,a)]) / N[(s,a)]\n",
"            # but with linear approximation and fixed step size\n",
"            if not np.isfinite(returns):\n",
"                continue\n",
"\n",
"            delta = float(returns - q_sa)\n",
"            if not np.isfinite(delta):\n",
"                continue\n",
"\n",
"            td_step = float(np.clip(delta, -1_000.0, 1_000.0))\n",
"            self.W[a] += self.alpha * td_step * phi\n",
"            self.W[a] = np.nan_to_num(self.W[a], nan=0.0, posinf=1e6, neginf=-1e6)\n",
"\n",
"        # Clear episode buffer for next episode\n",
"        self.episode_buffer = []\n"
]
},
{
"cell_type": "markdown",
"id": "91e51dc8",
"metadata": {},
"source": [
"## Tennis Environment\n",
"\n",
"Creation of the Atari Tennis environment via Gymnasium (`ALE/Tennis-v5`) with standard wrappers:\n",
"- **Grayscale**: `obs_type=\"grayscale\"` — single-channel observations\n",
"- **Resize**: `ResizeObservation(84, 84)` — downscale to 84×84\n",
"- **Frame stack**: `FrameStackObservation(4)` — stack 4 consecutive frames\n",
"\n",
"The final observation is an array of shape `(4, 84, 84)`, which flattens to 28,224 features.\n",
"\n",
"The agent plays against the **built-in Atari AI opponent**."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f9a973dd",
"metadata": {},
"outputs": [],
"source": [
"def create_env() -> gym.Env:\n",
"    \"\"\"Create the ALE/Tennis-v5 environment with preprocessing wrappers.\n",
"\n",
"    Applies:\n",
"    - obs_type=\"grayscale\": grayscale observation (210, 160)\n",
"    - ResizeObservation(84, 84): downscale to 84x84\n",
"    - FrameStackObservation(4): stack 4 consecutive frames -> (4, 84, 84)\n",
"\n",
"    Returns:\n",
"        Gymnasium environment ready for training.\n",
"\n",
"    \"\"\"\n",
"    env = gym.make(\"ALE/Tennis-v5\", obs_type=\"grayscale\")\n",
"    env = ResizeObservation(env, shape=(84, 84))\n",
"    return FrameStackObservation(env, stack_size=4)\n"
]
},
{
"cell_type": "markdown",
"id": "18cb28d8",
"metadata": {},
"source": [
"## Training & Evaluation Infrastructure\n",
"\n",
"Functions for training and evaluating agents in the single-agent Gymnasium environment:\n",
"\n",
"1. **`train_agent`** — Pre-trains an agent against the built-in AI for a given number of episodes with ε-greedy exploration\n",
"2. **`evaluate_agent`** — Evaluates a trained agent (no exploration, ε = 0) and returns performance metrics\n",
"3. **`plot_training_curves`** — Plots the training reward history (moving average) for all agents\n",
"4. **`plot_evaluation_comparison`** — Bar chart comparing final evaluation scores across agents\n",
"5. **`evaluate_tournament`** — Evaluates all agents and produces a summary comparison"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "06b91580",
"metadata": {},
"outputs": [],
"source": [
"def train_agent(\n",
"    env: gym.Env,\n",
"    agent: Agent,\n",
"    name: str,\n",
"    *,\n",
"    episodes: int = 5000,\n",
"    epsilon_start: float = 1.0,\n",
"    epsilon_end: float = 0.05,\n",
"    epsilon_decay: float = 0.999,\n",
"    max_steps: int = 5000,\n",
") -> list[float]:\n",
"    \"\"\"Pre-train an agent against the built-in Atari AI opponent.\n",
"\n",
"    Each agent learns independently by playing full episodes. This is the\n",
"    pre-training phase: the agent interacts with the environment's\n",
"    built-in opponent and updates its parameters after each transition.\n",
"\n",
"    Args:\n",
"        env: Gymnasium ALE/Tennis-v5 environment.\n",
"        agent: Agent instance to train.\n",
"        name: Display name for the progress bar.\n",
"        episodes: Number of training episodes.\n",
"        epsilon_start: Initial exploration rate.\n",
"        epsilon_end: Minimum exploration rate.\n",
"        epsilon_decay: Multiplicative decay per episode.\n",
"        max_steps: Maximum steps per episode.\n",
"\n",
"    Returns:\n",
"        List of total rewards per episode.\n",
"\n",
"    \"\"\"\n",
"    rewards_history: list[float] = []\n",
"    epsilon = epsilon_start\n",
"\n",
"    pbar = tqdm(range(episodes), desc=f\"Training {name}\", leave=True)\n",
"\n",
"    for _ep in pbar:\n",
"        obs, _info = env.reset()\n",
"        obs = np.asarray(obs)\n",
"        total_reward = 0.0\n",
"\n",
"        # Select first action\n",
"        action = agent.get_action(obs, epsilon=epsilon)\n",
"\n",
"        for _step in range(max_steps):\n",
"            next_obs, reward, terminated, truncated, _info = env.step(action)\n",
"            next_obs = np.asarray(next_obs)\n",
"            done = terminated or truncated\n",
"            reward = float(reward)\n",
"            total_reward += reward\n",
"\n",
"            # Select next action (needed for SARSA's on-policy update)\n",
"            next_action = agent.get_action(next_obs, epsilon=epsilon) if not done else None\n",
"\n",
"            # Update agent with the transition\n",
"            agent.update(\n",
"                state=obs,\n",
"                action=action,\n",
"                reward=reward,\n",
"                next_state=next_obs,\n",
"                done=done,\n",
"                next_action=next_action,\n",
"            )\n",
"\n",
"            if done:\n",
"                break\n",
"\n",
"            obs = next_obs\n",
"            action = next_action\n",
"\n",
"        rewards_history.append(total_reward)\n",
"        epsilon = max(epsilon_end, epsilon * epsilon_decay)\n",
"\n",
"        # Update progress bar\n",
"        recent_window = 50\n",
"        if len(rewards_history) >= recent_window:\n",
"            recent_avg = np.mean(rewards_history[-recent_window:])\n",
"            pbar.set_postfix(\n",
"                avg50=f\"{recent_avg:.1f}\",\n",
"                eps=f\"{epsilon:.3f}\",\n",
"                rew=f\"{total_reward:.0f}\",\n",
"            )\n",
"\n",
"    return rewards_history\n",
"\n",
"\n",
"def evaluate_agent(\n",
"    env: gym.Env,\n",
"    agent: Agent,\n",
"    name: str,\n",
"    *,\n",
"    episodes: int = 20,\n",
"    max_steps: int = 5000,\n",
") -> dict[str, object]:\n",
"    \"\"\"Evaluate a trained agent with no exploration (ε = 0).\n",
"\n",
"    Args:\n",
"        env: Gymnasium ALE/Tennis-v5 environment.\n",
"        agent: Trained agent to evaluate.\n",
"        name: Display name for the progress bar.\n",
"        episodes: Number of evaluation episodes.\n",
"        max_steps: Maximum steps per episode.\n",
"\n",
"    Returns:\n",
"        Dictionary with rewards list, mean, std, wins, and win rate.\n",
"\n",
"    \"\"\"\n",
"    rewards: list[float] = []\n",
"    wins = 0\n",
"\n",
"    for _ep in tqdm(range(episodes), desc=f\"Evaluating {name}\", leave=False):\n",
"        obs, _info = env.reset()\n",
"        total_reward = 0.0\n",
"\n",
"        for _step in range(max_steps):\n",
"            action = agent.get_action(np.asarray(obs), epsilon=0.0)\n",
"            obs, reward, terminated, truncated, _info = env.step(action)\n",
"            reward = float(reward)\n",
"            total_reward += reward\n",
"            if terminated or truncated:\n",
"                break\n",
"\n",
"        rewards.append(total_reward)\n",
"        if total_reward > 0:\n",
"            wins += 1\n",
"\n",
"    return {\n",
"        \"rewards\": rewards,\n",
"        \"mean_reward\": float(np.mean(rewards)),\n",
"        \"std_reward\": float(np.std(rewards)),\n",
"        \"wins\": wins,\n",
"        \"win_rate\": wins / episodes,\n",
"    }\n",
"\n",
"\n",
"def plot_training_curves(\n",
"    training_histories: dict[str, list[float]],\n",
"    path: str,\n",
"    window: int = 100,\n",
") -> None:\n",
"    \"\"\"Plot training reward curves for all agents on a single figure.\n",
"\n",
"    Uses a moving average to smooth the curves.\n",
"\n",
"    Args:\n",
"        training_histories: Dict mapping agent names to reward lists.\n",
"        path: File path to save the plot image.\n",
"        window: Moving average window size.\n",
"\n",
"    \"\"\"\n",
"    plt.figure(figsize=(12, 6))\n",
"\n",
"    for name, rewards in training_histories.items():\n",
"        if len(rewards) >= window:\n",
"            ma = np.convolve(rewards, np.ones(window) / window, mode=\"valid\")\n",
"            plt.plot(np.arange(window - 1, len(rewards)), ma, label=name)\n",
"        else:\n",
"            plt.plot(rewards, label=f\"{name} (raw)\")\n",
"\n",
"    plt.xlabel(\"Episodes\")\n",
"    plt.ylabel(f\"Average Reward (Window={window})\")\n",
"    plt.title(\"Training Curves (vs built-in AI)\")\n",
"    plt.legend()\n",
"    plt.grid(visible=True)\n",
"    plt.tight_layout()\n",
"    plt.savefig(path)\n",
"    plt.show()\n",
"\n",
"\n",
"def plot_evaluation_comparison(results: dict[str, dict[str, object]]) -> None:\n",
"    \"\"\"Bar chart comparing evaluation performance of all agents.\n",
"\n",
"    Args:\n",
"        results: Dict mapping agent names to evaluation result dicts.\n",
"\n",
"    \"\"\"\n",
"    names = list(results.keys())\n",
"    means = [results[n][\"mean_reward\"] for n in names]\n",
"    stds = [results[n][\"std_reward\"] for n in names]\n",
"    win_rates = [results[n][\"win_rate\"] for n in names]\n",
"\n",
"    _fig, axes = plt.subplots(1, 2, figsize=(14, 5))\n",
"\n",
"    # Mean reward bar chart\n",
"    colors = sns.color_palette(\"husl\", len(names))\n",
"    axes[0].bar(names, means, yerr=stds, capsize=5, color=colors, edgecolor=\"black\")\n",
"    axes[0].set_ylabel(\"Mean Reward\")\n",
"    axes[0].set_title(\"Evaluation: Mean Reward per Agent (vs built-in AI)\")\n",
"    axes[0].axhline(y=0, color=\"gray\", linestyle=\"--\", alpha=0.5)\n",
"    axes[0].grid(axis=\"y\", alpha=0.3)\n",
"\n",
"    # Win rate bar chart\n",
"    axes[1].bar(names, win_rates, color=colors, edgecolor=\"black\")\n",
"    axes[1].set_ylabel(\"Win Rate\")\n",
"    axes[1].set_title(\"Evaluation: Win Rate per Agent (vs built-in AI)\")\n",
"    axes[1].set_ylim(0, 1)\n",
"    axes[1].axhline(y=0.5, color=\"gray\", linestyle=\"--\", alpha=0.5, label=\"50% baseline\")\n",
"    axes[1].legend()\n",
"    axes[1].grid(axis=\"y\", alpha=0.3)\n",
"\n",
"    plt.tight_layout()\n",
"    plt.show()\n",
"\n",
"\n",
"def evaluate_tournament(\n",
"    env: gym.Env,\n",
"    agents: dict[str, Agent],\n",
"    episodes_per_agent: int = 20,\n",
") -> dict[str, dict[str, object]]:\n",
"    \"\"\"Evaluate all agents against the built-in AI and produce a comparison.\n",
"\n",
"    Args:\n",
"        env: Gymnasium ALE/Tennis-v5 environment.\n",
"        agents: Dictionary mapping agent names to Agent instances.\n",
"        episodes_per_agent: Number of evaluation episodes per agent.\n",
"\n",
"    Returns:\n",
"        Dict mapping agent names to their evaluation results.\n",
"\n",
"    \"\"\"\n",
"    results: dict[str, dict[str, object]] = {}\n",
"    n_agents = len(agents)\n",
"\n",
"    for idx, (name, agent) in enumerate(agents.items(), start=1):\n",
"        print(f\"[Evaluation {idx}/{n_agents}] {name}\")\n",
"        results[name] = evaluate_agent(\n",
"            env, agent, name, episodes=episodes_per_agent,\n",
"        )\n",
"        mean_r = results[name][\"mean_reward\"]\n",
"        wr = results[name][\"win_rate\"]\n",
"        print(f\"  -> Mean reward: {mean_r:.2f} | Win rate: {wr:.1%}\\n\")\n",
"\n",
"    return results\n"
]
},
{
"cell_type": "markdown",
"id": "9605e9c4",
"metadata": {},
"source": [
"## Agent Instantiation & Incremental Training (One Agent at a Time)\n",
"\n",
"**Environment**: `ALE/Tennis-v5` (grayscale, 84×84×4 frames → 28,224 features, 18 actions).\n",
"\n",
"**Agents**:\n",
"- **Random** — random baseline (no training needed)\n",
"- **SARSA** — linear approximation, semi-gradient TD(0)\n",
"- **Q-Learning** — linear approximation, off-policy\n",
"- **DQN** — PyTorch MLP (2 hidden layers of 256, experience replay, target network, GPU-accelerated)\n",
"- **Monte Carlo** — linear approximation, first-visit returns\n",
"\n",
"**Workflow**:\n",
"1. Train **one** selected agent (`AGENT_TO_TRAIN`)\n",
"2. Save its weights to `checkpoints/` (`.pkl` for linear agents, `.pt` for DQN)\n",
"3. Repeat later for another agent without retraining previous ones\n",
"4. Load all saved checkpoints before the final evaluation"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6f6ba8df",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Observation shape : (4, 84, 84)\n",
"Feature vector dim: 28224\n",
"Number of actions : 18\n"
]
}
],
"source": [
"# Create environment\n",
"env = create_env()\n",
"obs, _info = env.reset()\n",
"\n",
"n_actions = int(env.action_space.n)\n",
"n_features = int(np.prod(obs.shape))\n",
"\n",
"print(f\"Observation shape : {obs.shape}\")\n",
"print(f\"Feature vector dim: {n_features}\")\n",
"print(f\"Number of actions : {n_actions}\")\n",
"\n",
"# Instantiate agents\n",
"agent_random = RandomAgent(seed=42, action_space=int(n_actions))\n",
"agent_sarsa = SarsaAgent(n_features=n_features, n_actions=n_actions, alpha=1e-5)\n",
"agent_q = QLearningAgent(n_features=n_features, n_actions=n_actions, alpha=1e-5)\n",
"agent_dqn = DQNAgent(n_features=n_features, n_actions=n_actions, lr=1e-4)\n",
"agent_mc = MonteCarloAgent(n_features=n_features, n_actions=n_actions, alpha=1e-5)\n",
"\n",
"agents = {\n",
"    \"Random\": agent_random,\n",
"    \"SARSA\": agent_sarsa,\n",
"    \"Q-Learning\": agent_q,\n",
"    \"DQN\": agent_dqn,\n",
"    \"Monte Carlo\": agent_mc,\n",
"}\n"
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "4d449701",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Selected agent: DQN\n",
"Checkpoint path: checkpoints/dqn.pt\n",
"\n",
"============================================================\n",
"Training: DQN (2500 episodes)\n",
"============================================================\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "47c9af1f459346dea335fb822a091b1b",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Training DQN:   0%|          | 0/2500 [00:00<?, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"ename": "KeyboardInterrupt",
"evalue": "",
"output_type": "error",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mKeyboardInterrupt\u001b[0m                         Traceback (most recent call last)",
"\u001b[0;32m/tmp/ipykernel_730/423872786.py\u001b[0m in \u001b[0;36m<cell line: 0>\u001b[0;34m()\u001b[0m\n\u001b[1;32m 26\u001b[0m \u001b[0mprint\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34mf\"{'='*60}\"\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 27\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 28\u001b[0;31m training_histories[AGENT_TO_TRAIN] = train_agent(\n\u001b[0m\u001b[1;32m 29\u001b[0m \u001b[0menv\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0menv\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 30\u001b[0m \u001b[0magent\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0magent\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/tmp/ipykernel_730/4063369955.py\u001b[0m in \u001b[0;36mtrain_agent\u001b[0;34m(env, agent, name, episodes, epsilon_start, epsilon_end, epsilon_decay, max_steps)\u001b[0m\n\u001b[1;32m 54\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 55\u001b[0m \u001b[0;31m# Update agent with the transition\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 56\u001b[0;31m agent.update(\n\u001b[0m\u001b[1;32m 57\u001b[0m \u001b[0mstate\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mobs\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 58\u001b[0m \u001b[0maction\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0maction\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/tmp/ipykernel_730/4071728682.py\u001b[0m in \u001b[0;36mupdate\u001b[0;34m(self, state, action, reward, next_state, done, next_action)\u001b[0m\n\u001b[1;32m 136\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 137\u001b[0m \u001b[0;31m# Sample minibatch\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 138\u001b[0;31m states_b, actions_b, rewards_b, next_states_b, dones_b = self.replay_buffer.sample(\n\u001b[0m\u001b[1;32m 139\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mbatch_size\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mrng\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 140\u001b[0m )\n",
"\u001b[0;32m/tmp/ipykernel_730/423872786.py\u001b[0m in \u001b[0;36m<cell line: 0>\u001b[0;34m()\u001b[0m\n\u001b[1;32m 26\u001b[0m \u001b[0mprint\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34mf\"{'='*60}\"\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 27\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 28\u001b[0;31m training_histories[AGENT_TO_TRAIN] = train_agent(\n\u001b[0m\u001b[1;32m 29\u001b[0m \u001b[0menv\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0menv\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 30\u001b[0m \u001b[0magent\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0magent\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/tmp/ipykernel_730/4063369955.py\u001b[0m in \u001b[0;36mtrain_agent\u001b[0;34m(env, agent, name, episodes, epsilon_start, epsilon_end, epsilon_decay, max_steps)\u001b[0m\n\u001b[1;32m 54\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 55\u001b[0m \u001b[0;31m# Update agent with the transition\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 56\u001b[0;31m agent.update(\n\u001b[0m\u001b[1;32m 57\u001b[0m \u001b[0mstate\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mobs\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 58\u001b[0m \u001b[0maction\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0maction\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/tmp/ipykernel_730/4071728682.py\u001b[0m in \u001b[0;36mupdate\u001b[0;34m(self, state, action, reward, next_state, done, next_action)\u001b[0m\n\u001b[1;32m 136\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 137\u001b[0m \u001b[0;31m# Sample minibatch\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 138\u001b[0;31m states_b, actions_b, rewards_b, next_states_b, dones_b = self.replay_buffer.sample(\n\u001b[0m\u001b[1;32m 139\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mbatch_size\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mrng\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 140\u001b[0m )\n",
"\u001b[0;32m/tmp/ipykernel_730/4071728682.py\u001b[0m in \u001b[0;36msample\u001b[0;34m(self, batch_size, rng)\u001b[0m\n\u001b[1;32m 28\u001b[0m \u001b[0mindices\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mrng\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mchoice\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mlen\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mbuffer\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0msize\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mbatch_size\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mreplace\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mFalse\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 29\u001b[0m \u001b[0mbatch\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;34m[\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mbuffer\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mi\u001b[0m\u001b[0;34m]\u001b[0m \u001b[0;32mfor\u001b[0m \u001b[0mi\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mindices\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 30\u001b[0;31m \u001b[0mstates\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mnp\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0marray\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mt\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;36m0\u001b[0m\u001b[0;34m]\u001b[0m \u001b[0;32mfor\u001b[0m \u001b[0mt\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mbatch\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 31\u001b[0m \u001b[0mactions\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mnp\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0marray\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mt\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;36m1\u001b[0m\u001b[0;34m]\u001b[0m \u001b[0;32mfor\u001b[0m \u001b[0mt\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mbatch\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 32\u001b[0m \u001b[0mrewards\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mnp\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0marray\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mt\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;36m2\u001b[0m\u001b[0;34m]\u001b[0m \u001b[0;32mfor\u001b[0m \u001b[0mt\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mbatch\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
|
||
"\u001b[0;31mKeyboardInterrupt\u001b[0m: "
]
}
],
"source": [
"AGENT_TO_TRAIN = \"DQN\"  # Options: \"Random\", \"SARSA\", \"Q-Learning\", \"DQN\", \"Monte Carlo\"\n",
"TRAINING_EPISODES = 2500\n",
"FORCE_RETRAIN = False\n",
"\n",
"if AGENT_TO_TRAIN not in agents:\n",
"    msg = f\"Unknown agent '{AGENT_TO_TRAIN}'. Available: {list(agents)}\"\n",
"    raise ValueError(msg)\n",
"\n",
"training_histories: dict[str, list[float]] = {}\n",
"agent = agents[AGENT_TO_TRAIN]\n",
"ckpt_path = _ckpt_path(AGENT_TO_TRAIN)\n",
"\n",
"print(f\"Selected agent: {AGENT_TO_TRAIN}\")\n",
"print(f\"Checkpoint path: {ckpt_path}\")\n",
"\n",
"if AGENT_TO_TRAIN == \"Random\":\n",
"    print(\"Random is a baseline and is not trained.\")\n",
"    training_histories[AGENT_TO_TRAIN] = []\n",
"elif ckpt_path.exists() and not FORCE_RETRAIN:\n",
"    agent.load(str(ckpt_path))\n",
"    print(\"Checkpoint found -> weights loaded, training skipped.\")\n",
"    training_histories[AGENT_TO_TRAIN] = []\n",
"else:\n",
"    print(f\"\\n{'='*60}\")\n",
"    print(f\"Training: {AGENT_TO_TRAIN} ({TRAINING_EPISODES} episodes)\")\n",
"    print(f\"{'='*60}\")\n",
"\n",
"    training_histories[AGENT_TO_TRAIN] = train_agent(\n",
"        env=env,\n",
"        agent=agent,\n",
"        name=AGENT_TO_TRAIN,\n",
"        episodes=TRAINING_EPISODES,\n",
"        epsilon_start=1.0,\n",
"        epsilon_end=0.05,\n",
"        epsilon_decay=0.999,\n",
"    )\n",
"\n",
"    avg_last_100 = np.mean(training_histories[AGENT_TO_TRAIN][-100:])\n",
"    print(f\"-> {AGENT_TO_TRAIN} avg reward (last 100 eps): {avg_last_100:.2f}\")\n",
"\n",
"    agent.save(str(ckpt_path))\n",
"    print(\"Checkpoint saved.\")\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a13a65df",
"metadata": {},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAABKUAAAJOCAYAAABm7rQwAAAAOnRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjEwLjgsIGh0dHBzOi8vbWF0cGxvdGxpYi5vcmcvwVt1zgAAAAlwSFlzAAAPYQAAD2EBqD+naQAAqeNJREFUeJzs3Qd4VNXWxvGV3itphNB7B+mICCqoqNgb6kWvFxtXr71cK3Ys2K6fvRfsXVGKilJERHrvLb33nu9ZezJDJgWSMEkmk//vecY5c86UPWcyQl7WXtutoqKiQgAAAAAAAIBm5N6cLwYAAAAAAAAoQikAAAAAAAA0O0IpAAAAAAAANDtCKQAAAAAAADQ7QikAAAAAAAA0O0IpAAAAAAAANDtCKQAAAAAAADQ7QikAAAAAAAA0O0IpAAAAAAAANDtCKQAAUC+XX365dOnSpVFn64EHHhA3NzfOdAuZMmWKzJgxo9lez/p5p6amOuw53377bfOce/bsse2bMGGCuRwNRzyHI23atEk8PT1lw4YNLT0UAACaHKEUAACtnP6iXp/Lr7/+Km2Zvv9zzjlHYmJixNvbW6KiouSMM86QL774QlzZ0qVLZf78+XLHHXeIq4uPjzeB2Jo1a8QZ/N///Z/57o0aNarO++jxf//737bb/fr1k9NOO03uu+++ZholAAAtx7MFXxsAADjAe++9Z3f73XfflQULFtTY37dv36N6nddee03Ky8sb9dh77rlH7rzzTmkp999/vzz44IPSs2dPufrqq6Vz586SlpYmP/zwg5x77rnywQcfyLRp08QVPfnkk3LiiSdKjx49pDW77LLL5KKLLhIfH5/DhlKzZs0yFX1Dhgyp1/NqYNdU9OdKx/Lnn3/Kjh076v0ZXHPNNaa6befOndK9e/cmGx8AAC2NUAoAgFbu0ksvtbv9xx9/mFCq+v7q8vPzxd/fv96v4+Xl1egx6nQkvbSEzz77zARS5513nnz44Yd27+O2226Tn376SUpKShzyWg09p00tOTlZvv/+e3n55ZeltfPw8DAXR9Oquaawe/duWbZsmanE0yBUAyoNR+vjpJNOkrCwMHnnnXfMzy4AAK6K6XsAALQB2jNnwIABsmrVKhk/frwJTv773/+aY19//bWZLhQbG2uqULQy46GHHpKysrLD9pTS3j469eipp56SV1991TxOHz9ixAhZuXLlEXtKWactffXVV2Zs+tj+/fvLjz/+WOvUu+HDh4uvr695nVdeeaXefaruvfdeCQ8PlzfffLPWYO3kk0+W008/vc6+RdbXrz4Fsq5zqs/VrVu3WscyZswY8z6qev/992XYsGHi5+dnxqnVQPv377e7z/bt201Fl0491HMQFxdn7peVlXXY966BVGlpqQk5rP766y/zXjTwqE4DOj323Xffmds5OTly4403ms9dPx+d8jhp0iT5+++/pT60p9QFF1wgwcHB0q5dO/nPf/4jhYWFNX6G9LxXp/v1M7aq67Ox0s9Gf/bUFVdcYZu2WttzH66nlPWz/uSTT+SRRx4x51rPuVababVTfWkIpcGSfrc0ENXb9aU/pzom/W4CAODKqJQCAKCN0Olqp556qgkztIoqOjra7Ndf2gMDA+Xmm2821z///LPpZ5OdnW2mfh2JVh9peKHVIPrL/BNPPGF6N+3ateuI1VVLliwxlSTXXXedBAUFyfPPP2/Cl3379pkQQ61evVpOOeUUad++vZmapWGZVo9ERkYecWwa5mzZskX++c9/mudvjnOqAdM//vEPE8xZQxK1d+9eU8VW9Zxq6KGhmQY3//rXvyQlJUVeeOEFE3Lp+w4NDZXi4mITnBUVFcn1119vgqmDBw+a4CgzM1NCQkLqHJ9W6uh51OmKVhqKaWimocv06dPt7v/xxx+bIEVfzzqNTCvNNDzUXkf6fvUz27x5sxxzzDFHPD/6vjTQeuyxx8x71883IyPDTDF1NJ2eqj8X+rN71V
VXyXHHHWf2jx07tlHP9/jjj4u7u7vceuutJvzTn+tLLrlEVqxYUa/Hawil3wOtxLr44ovlpZdeqvEzcTj6c6ShlH4PNdQDAMAVEUoBANBGJCYmmmlcGh5VD5W0SsdKgwi9aJPmhx9++LA9fJQGSBr+aJihevfuLWeeeaapurFWINVFww1dbczaN2fixIkyePBgmTt3rq35s0550mlb2rBbq7msYUd9emTp86uBAwdKc51TDRH0nGnAUzWA0BBIQzsduzWk0vem59hataY0yBg6dKg5/7pfz49OBfv0009NxY1VfRphayBX24qJF154oalw04DI+rlp+PXll1+a17eGiVpppav2Pf3007bH3n777fU+P127drVV+8ycOdOEK/q+NOgZNGiQOJIGghoQ6nnRirQjTV89Eq3o0obp1ul9ep600ktXxdMKucPR6jk99xowqnHjxpmKKw2q6htKaXCoPdz0eUaOHHlU7wUAAGfF9D0AANoIDUp0WlN1VQMprXjSKVdaZaL9kfQX4iPRgMMabChrhYpWSh2JTiur2shZgwoNLqyP1aqohQsXyllnnWULpJQ2jNYA4kg0IFJNUSVV1znV8evYNISqqKiw7deQavTo0dKpUydzWyvENHTQkErPufWilVDakP2XX34x97NWQmnIp59JQ2hlU9XPpupnpn20qq48qA2/tfJKj1lppZZWBmkD8cbQIKoqrfRS2mDe2ennWrXfVEN+rjV80pBMQ1alYaSe148++qjGtNi6WD83/ZkAAMBVEUoBANBGdOjQodamzhs3bpSzzz7bhB8aqOi0OGuVyZF6FilryFL9l2mtwmnoY62Ptz5WG3UXFBTUumpZfVYys0570rCtOc+pBhDaF2r58uXmtq6iptUzVQMfrS7T0EoDKD3nVS9a4aXv3VptpFMrX3/9dYmIiDBT61588cV6fTaqajBmpdVoffr0MUGZlW7r859wwgm2fTplTSuDOnbsaKp1tMdTfUIZK31vVWkAqVPi6uoL1VRyc3NNVZv1otMkm+rnWkMnDZ80kNIKN+1DpZdRo0ZJUlKSLFq0qEGfW336pgEA0FoRSgEA0EZUrYiy0sqY448/XtauXWv68Xz77bdm5b7Zs2eb41rJcyR1rYhWWxjiyMfWhwYvav369fW6f10BQF3VLbWdU3XGGWeYxudaLaX0WsOY888/33YfPbf6etrYXc959Ys2c7fS6XPr1q0z0/k0pLvhhhtMU/gDBw4c9v1oP6m6QhQNyLQaSytxtF/VN998Y/p5VV0lUau4NITSaWhaqab9sPR1582bJ41RW7P72tS3mqi+dKqi9iSzXuozha6xP5vaky0hIcEEUxrKWS/WaZv1bXhu/dw0KAQAwFXRUwoAgDZMVxrTKV46jUuba1tphYcz0NXedOWz2lY9q89KaL169TI9rrSv0XPPPWcauR+OtRpGw7qqtP9TQwQEBJh+WtoHas6cOaYKSad/VZ2CqFVDGnBoJZSO80i0L5Ze7rnnHtPA/NhjjzX9rLQn1eFCuc8//7zOUEobx+txnWqmUx21YXt1GuJoI3q9aPWWNjjXBu31mT6p1WD6/qp+ZhrGWftcOep8Hynk0sbz2tfpSGGiI2jopD+3Ws1WnX7PtG+Xfm5HGoN+BzXIrM/PBgAArRWVUgAAtGHWapCq1R/a8FqbUTvL+LTv1FdffWXX10jDjfpW62jwosGbrm5XWlpa47j2UtKV7JS1v9Vvv/1mV7Xz6quvNnjsGvromHXanVaiVZ26p7ShuL4/HV/16hu9rWNWGhZVH7eGUxpYaIXT4WjDb624qW3KnTaK1+fRwEwvGj5VDSb1fVefIqhhiwZrR3pdq+rBjLXxtzXQ0umVWglU9Xyrxv78aRhYW8ilTcP158h60UCvKWgVmwZPGkhqU/rqF23er1NJtSrtSHS6p1alHW51RQAAWjsqpQAAaMPGjh1rqlWmT59upoRppcl7773nsOlzjqB9jDQ40i
Dh2muvNWHJ//73P7MCmq6OdiQaBun0Pa3uWb16tVx88cXSuXNnE/ro1Dnt8aMrECoNAbQZ+V133SXp6ekSHh5upmHVFmYdyZQpU0yDdV1pTsMnnRpXlQZgWuWkr6U9lrSZu95fK2S0muaqq64yj9XpYBpm6NQ/rZrRsehnVNtzVnfaaaeZ6XjaLF6fr7Zzo6vVaTXalVdeaYIuKw1PdMU4DVO0B5VWmenzrFy50m41vsPR9zJ16lQ55ZRTTH+t999/X6ZNm2aez0rDwscff9xcDx8+3ARU27Ztk8bQc6rN2bUSSc+lhlTay6lqtVZT0rBJz5u+59roz5b2DNNqquohZVXahH7x4sWmOg0AAFdGpRQAAG2Y9hzSKiGtktFpYdp7Z9KkSabBtbMYNmyYqYrS8Ozee++VN954w/S/OvHEE02YUh8a/mj4pNVBL730kglo9D1q3yed2qdBlZUGBhrWaVDy6KOPmobVut1QOjYNJzSk0OfQKqPq7rzzTjN9TsMgrZjSEEqDjcmTJ9uCDQ1wtLm59vvShuca0mlApOdEQ47D0Wl5Go5Ze1tVp8GITqfTVf2qhyR6bjQU0eDv/vvvl5tuukm2bt1qqph0HPWhFVi6QqG+z++//96Ea/r5VaWhmAZin332mdx+++0mdGxszyovLy955513TGB3zTXXmM9Vw53moj87+rnrd6g2+jlrUKhhqLUSrjb6s6qhqIbFAAC4MrcKZ/qnUAAAgHrSyiJdOVD7FqFuv//+u0yYMEG2bNlSYzU8OO/PtlYtasUcAACujEopAADg9LRXT1UaRP3www8mbMHhaYN1rbxypuo31G3z5s2mevGhhx7iNAEAXB6VUgAAwOnp9MLLL7/cNKzWldl0Cp4229YeUVT/AAAAtE40OgcAAE5PG2XPnTtXEhMTTY8iXVVO+z0RSAEAALReVEoBAAAAAACg2dFTCgAAAAAAAM2OUAoAAAAAAADNjp5S1ZSXl0t8fLwEBQWZpXgBAAAAAABQfxUVFZKTkyOxsbHi7l53PRShVDUaSHXs2LEBpxoAAAAAAADV7d+/X+Li4qQuhFLVaIWU9cQFBwfXOGElJSUyf/58mTx5snh5edV5YgEcHt8l4OjxPQIcg+8SwPcIcBYlLpI5ZGdnm4Ifa8ZSF0KpaqxT9jSQqiuU8vf3N8da8w8I0NL4LgF8jwBnwZ9JAN8jwFmUuFjmcKS2SDQ6BwAAAAAAQLMjlAIAAAAAAECzI5QCAAAAAABAs6OnVCPnRBYVFUlZWZnjPxE4FZ3D6+Hh0dLDAAAAAADA5RBKNUBFRYUkJSVJ+/btZd++fUds2AXXEBoaKjExMXzeAAAAAAA4EKFUAyQmJpplDTWgCA8Pp4KmDYSQ+fn5kpycbG5rGAkAAAAAAByDUKqedKpeZmamREZGmildfn5+4u5OSy5Xp5+z0mAqKiqKIBIAAAAAAAchVamnkpISc+3v7++oc49WwvqZW38GAAAAAADA0SOUaiD6SLU9fOYAAAAAADgeoRQAAAAAAACaHaEUWrUJEybIjTfe2NLDAAAAAAAADUQo1Ybs379f/vnPf0psbKx4e3tL586d5T//+Y+kpaUd9nEPPPCADBkyRJzRF198IQ899FBLDwMAAAAAADQQoVQbsWvXLhk+fLhs375d5s6dKzt27JCXX35ZFi1aJGPGjJH09HRxJvVtKh4eHi5BQUFNPh4AAAAAAOBYLhlKvfjii9KlSxfx9fWVUaNGyZ9//ilt3cyZM0111Pz58+X444+XTp06yamnnioLFy6UgwcPyt13331UFVgXXHCBhIaGmpDozDPPlD179tiOr1y5UiZNmiQRERESEhJiXv/vv/+u0Uz8pZdekqlTp0pAQIA88sgjtgqt9957z3ye+tiLLrpIcnJy6py+p/d79NFHTUWYhlX6Pl999VW711q2bJl5Xv350KDuq6++Mq+/Zs2aRp8DAAAAAADQxkOpjz/+WG6++Wa5//77Tf
j4mMvw4cPNPj2mtOH5008/3RTjBYAWC6Uig3w4+wAAAADQUtP3tIn5ggULZMuWLabBuerdu7e5VK2mAgBXW3kvMpBQCgAAAAAcpdFdevv06WMuAODqUqmUAgAAAADnCKUOHDgg33zzjezbt0+Ki4vtjs2ZM8dRYwMA56qUYvoeAAAAALRcKLVo0SKZOnWqdOvWzUzhGzBggOzZs8csmX7MMcc4bmQA4CToKQUAAAAATtDo/K677pJbb73VrLDn6+srn3/+uezfv1+OP/54Of/885tgiADgJKEUPaUAAAAAoOVCqc2bN8s//vEPs+3p6SkFBQVmtb0HH3xQZs+e7biRAYCToFIKAAAAAJwglAoICLD1kWrfvr3s3LnTdiw1NdWxowMAJ0BPKQAAAABwgp5So0ePliVLlkjfvn1lypQpcsstt5ipfF988YU5BgCupKy8QtJodA4AAAAALR9K6ep6ubm5ZnvWrFlm++OPP5aePXuy8h4Al5OeVyzlFSLubiLtAnxaejgAAAAA0DZDqbKyMjlw4IAMGjTINpXv5ZdfbqqxAYDT9JMKD/ARD02mAAAAAADN31PKw8NDJk+eLBkZGY55dQBwcgcy8s11+xDflh4KAAAAALTtRucDBgyQXbt2Nc1oAMCJbDiYJesPZpntLhEBLT0cAAAAAGjbPaUefvhhufXWW+Whhx6SYcOGmSl8VQUHBztyfADQIlbuSZfzX15uu921nT+fBAAAAAC0ZCilK+6pqVOnipvbof4qFRUV5rb2nQKA1u6bNfF2twfFhbbYWAAAAADAFTU4lPrll1+aZiQA4ER8vQ7Nbg4P8JZxPSNadDwAAAAAIG09lDr++OObZiQA4ETiswrN9XE9I2T2uYPE18ujpYcEAAAAAG270bn6/fff5dJLL5WxY8fKwYMHzb733ntPlixZ4ujxAUCLSMgsMNfTRnaS2FA/PgUAAAAAaOlQ6vPPP5eTTz5Z/Pz85O+//5aioiKzPysrSx599FFHjw8AWkR8pqVSqj2BFAAAAAA4Ryilq++9/PLL8tprr4mXl5dt/7HHHmtCKgBo7UrLyiUpxxJKxYb4tvRwAAAAAMAlNTiU2rp1q4wfP77G/pCQEMnMzHTUuACgxXywYp9UVIh4ebhJRKAPnwQAAAAAOEMoFRMTIzt27KixX/tJdevWzVHjAoAWs3pfhrl2c3MTd3c3PgkAAAAAcIZQasaMGfKf//xHVqxYYX5hi4+Plw8++EBuvfVWufbaa5tijADQrFJzi8314+cM5MwDAAAAQBPxbOgD7rzzTikvL5cTTzxR8vPzzVQ+Hx8fE0pdf/31TTNKAGhGqbmWBRyYugcAAAAAThRKaXXU3XffLbfddpuZxpebmyv9+vWTwMDAphkhADSzlBxLKBUZRD8pAAAAAHCa6Xvvv/++qZDy9vY2YdTIkSMJpAC4jILiMknPt0zfo1IKAAAAAJwolLrpppskKipKpk2bJj/88IOUlZU1zcgAoAWs2J1mVt6LDfGViEBvPgMAAAAAcJZQKiEhQT766CMzje+CCy6Q9u3by8yZM2XZsmVNM0IAaEabE3LM9Yiu4eb/cwAAAAAAJwmlPD095fTTTzcr7iUnJ8szzzwje/bskYkTJ0r37t2bZpQA0Mz9pGJCfDnnAAAAAOBMjc6r8vf3l5NPPlkyMjJk7969snnzZseNDABacOW9yECanAMAAACAU1VKKW10rpVSU6ZMkQ4dOsizzz4rZ599tmzcuNHxIwSAFqiUosk5AAAAADhZpdRFF10k3333namS0p5S9957r4wZM6ZpRgcAzaCwpEw2xmdLeUWFLN+VZvYRSgEAAACAk4VSHh4e8sknn5hpe7pd1YYNG2TAgAGOHB8ANLmZH/wti7Yk2277eLrLgA7BnHkAAAAAcKZQSqftVZWTkyNz586V119/XVatWiVlZWWOHB8ANKmcwhJZvC3FbHdp5y/u7m5y6ajOEurvzZkHAAAAAGdsdP
7bb7/JG2+8IZ9//rnExsbKOeecIy+++KJjRwcATWBvWp6UlFVIj6hA2RSfLaXlFdIh1E9+vW0i5xsAAAAAnDGUSkxMlLffftuEUdnZ2aanVFFRkXz11VfSr1+/phslADhISVm5HP/kr2Z746yTJaVytb3YUF/OMQAAAAA44+p7Z5xxhvTu3VvWrVtnVtuLj4+XF154oWlHBwAOlpBZaNt+9IfNklq52l5kkA/nGgAAAACcMZSaN2+eXHnllTJr1iw57bTTajQ5B4CWlFtUKl+vOWiuD2dvep5t+4MV+yQ1t9hss9oeAAAAADhpKLVkyRLT1HzYsGEyatQo+d///iepqalNOzoAqKd7v9og//lojfz3i/WSnlcsbyzZLRsOZtW4357UQ6GUOphZYK4JpQAAAADASUOp0aNHy2uvvSYJCQly9dVXy0cffWQanJeXl8uCBQtMYAUALeXL1QfN9Tdr42XOgq3y0Heb5NI3Vtjdp6KiQh79YYvdvs0J2eaaUAoAAAAAnDSUsgoICJB//vOfpnJq/fr1csstt8jjjz8uUVFRMnXq1KYZJQA0wPt/7DPXmfklkpFXbFbYW7s/U35YnygFJWV2992SaAnU6SkFAAAAAE4eSlWljc+feOIJOXDggMydO9dxowIAB1m4OUnO+N8SOfPFpTLzw7/NvgBvD4kNsV9tLyLQm3MOAAAAAK0llLLSpudnnXWWfPPNN454OgBwmNs+Wydl5RV2+56+YIgM6BBit4/pewAAAADQvDyb+fUA4KjEZxbIcU/8Yguanj5/sEzqH92g5zi5f7TpLzV/U5JtH9P3AAAAAKB5EUoBaBVyi0rl+3Xx8uOGRLvKp1s+XSsPFvc329HBPnLdhB7yx640aRfobestVZ2bm5ucMiBG/jdtqGxLzJGukQHi6+XRbO8FAAAAAEAoBaCVePLHLfLO8r21Hrvv643mukOon0wf28VcXv99l+24t4e7FJeV1wimTh8UKzKoiQcOAAAAAGi6nlIA0NTmrtx/xPtcMqpzrdPxnr94qFw+tovZvmz0ofsAAAAAAFoOoRSAVqFjmJ9t29/bQ1b890T5xxj7gGlsj3a27eFdwm3b4QHecuepfeT9K0fJ3af1baYRAwAAAAAOh55SAFqFpOwic/35tWOkd0ywBPp4Sscwf7v7tAs4VB2lU/mum9BdtibmyNBOoeLl4S7jekY0+7gBAAAAALUjlALg9PKLS02jc9UrOsgEUtWn6IX4eYm3p33x5+2n9GnmkQIAAAAA6ovpewCcRnFpudz39QZZtDnJbn9qTrG59vVytwVSKqpKKNU+xLcZRwoAAAAAOFqEUgCcxgcr9sq7y/fKle/8Zbc/JdcydS8i0MesmmdVtVKqR1RgM44UAAAAAHC0CKUAOA3t/1SblJxDoVRV0VWqo3pHBzXx6AAAAAAAjkRPKQBO46OV+23bBcVl4uftIaVl5fLr1uQalVEq2NdL5lwwWP7YlSaXjrZfiQ8AAAAA4NyolALgFPan59vdfvKnrZbr+VttYVX1Sil1zjFx8sR5gyUswLuZRgoAAAAAcARCKQDNoqy8Qq55b5U89N2mWo/vSMm1u/3m0t2yMyVXXlm8y7YvIpDgCQAAAABcBaEUgGaxZn+G/LgxUd5YsltKysprHN+dkldj3wuLttvdLq+oaNIxAgAAAACaDz2lADSLfVWm52nj8g0HMuR/G93ljf1/SEFxueQXl9V4zNKdaXa3vTzI0QEAAADAVRBKAXC4wpIyOevFpZKZXyJzrxotXdr5y00fr7UdT8wulP/9slO2Z7uLZGfX+TzWVff6xwab6X//GNOFTwsAAAAAXAShFACH2xifJVsSc8z2xKd+rXH8lcU7ZXeapXIqOshHkirDJ3XqgBiZtyHR7v6Pnj1QBncM5ZMCAAAAABfCXBgADhefWXjY4z9tTJKcwlKzfdvknrb9d5zSRx4+a4B0iwiwu3/ndv58SgAAAADgYgilADhUeX
mFfLn6oN2+p84fLPef0U++u36c3f6OARUyvEuY7Xav6EBpF+gjP986Qcb3irTtD/Vn1T0AAAAAcDVM3wPgUD9sSJCftyTb7TtvWFyN+53YJ1JODk6QDqF+8uG/RsmqvRlyfJUgqm/7IPltWwqfDgAAAAC4KEIpAA61tbKXlNW0UZ3sbj930RBZsTtd7jq5p/y8IMHsG9sjwlyqmjmxhxzMKJAzh3TgEwIAAAAAF0QoBeCoVVRUiJubm92KeTdP6iWXH9tFgnzs/zejIZNeSkpKDvucwb5e8r9px/DpAAAAAICLoqcUgKPy0Z/7ZNCs+bJyT7pdKBUZ5GOCJWtYBQAAAABAVYRSAI7KnV+sNyvpXf/halm9L0MWVfaTigz04cwCAAAAAOpEKAXAIRKzC+XxeVtst3tFB3FmAQAAAAB1IpQC4DDawFzdPaWvdGrnz5kFAAAAANSJRucAGmXZjlRZcyCz1mOju7XjrAIAAAAADotQCkCjmptrL6m6RAR5c1YBAAAAAIfF9D0A9ZJdWCJvLd0tf+1Jlwe+3Wjbf92E7vLkeYPs7tsugCbnAAAAAIDDo1IKQL289OtOc6lq6Z0nSIdQPykoLpPbPltn9kUEeou3J3k3AAAAAODw+M0RwGGVlpXLXV+srxFIxQT7mkBK+Xl7SGyIr9k+e2gHzigAAAAA4IiolAJwWH/vy5S5f+6rsT8q2H6K3gczRsuGg1lyUt9ozigAAAAA4IgIpQAcVkJWQa37o4LsQ6muEQHmAgAAAABAfRBKAahVRUWFZBeUyoGMOkKpYMt0PQAAAAAAGoNQCkANRaVlcvGrf5ipe1ZXj+8mBzIL5Pt1CbVWSgEAAAAA0BCEUgBq2JqYYxdIqe5RgVJUWm67HRVEpRQAAAAAoPEIpQDUkJJTZK67RQbITSf1Enc3N5ncP9q2X0VXa3QOAAAAAEBDuEsrsGfPHrnyyiula9eu4ufnJ927d5f7779fiouL7e63bt06Oe6448TX11c6duwoTzzxRIuNGWjNkivDp67tAuSMwbFy2qD24uXhLkG+h3Lszu1oag4AAAAAcPFKqS1btkh5ebm88sor0qNHD9mwYYPMmDFD8vLy5KmnnjL3yc7OlsmTJ8tJJ50kL7/8sqxfv17++c9/SmhoqFx11VUt/RaAViO3qFTWHbBM3Yus1jcqzN/btt09klAKAAAAAODiodQpp5xiLlbdunWTrVu3yksvvWQLpT744ANTOfXmm2+Kt7e39O/fX9asWSNz5swhlAIa4F/vrJQ/dqXXGkpNGdhetiXlyJhu7cTNzY3zCgAAAABw7VCqNllZWRIeHm67vXz5chk/frwJpKxOPvlkmT17tmRkZEhYWFitz1NUVGQuVlpxpUpKSsylOuu+2o4BrVlJWblc9uZfsqpKg/PYEJ8aP+s3TOzmkO8A3yXg6PE9AhyD7xLA9whwFiUukjnUd/ytMpTasWOHvPDCC7YqKZWYmGh6TlUVHR1tO1ZXKPXYY4/JrFmzauyfP3+++Pv71zmGBQsWHMU7AJxLfqnITwfcZVWCfZu5hG1r5YfEtU362nyXAL5HgLPgzySA7xHgLBa08swhPz/f+UOpO++801QyHc7mzZulT58+ttsHDx40U/nOP/9801fqaN11111y880321VKaZN07U8VHBxca9qnPxyTJk0SLy+vo359oKXpinonPvO7FJSUm9udwv1kX3qB2Z52xknSLuBQ9aEj8V0C+B4BzoI/kwC+R4CzKHGRzME6C82pQ6lbbrlFLr/88sPeR/tHWcXHx8vEiRNl7Nix8uqrr9rdLyYmRpKSkuz2WW/rsbr4+PiYS3X64R/uB+BIx4HWYltKhi2QOqlvtNx+Sm9JzS2SotJyiQlt+mbmfJcAvkeAs+DPJIDvEeAsvFp55lDfsbdoKBUZGWku9aEVUhpIDRs2TN566y1xd7efZjRmzBi5++67TapoffOaLvbu3bvOqXsALJVSanyvSHl9+n
Cz3Ss6iFMDAAAAAGhS9smOk9JAasKECdKpUyfTRyolJcX0idKL1bRp00yT8yuvvFI2btwoH3/8sTz33HN2U/MA1JRcGUpFVVtpDwAAAACAptQqGp1rxZM2N9dLXFyc3bGKigpzHRISYpqTz5w501RTRUREyH333SdXXXVVC40acH6zf9wiL/2602xHEkoBAAAAAJpRqwiltO/UkXpPqUGDBsnvv//eLGMCXMH7y/fatrtFNH3/KAAAAAAAWlUoBcDxCkvKJKeo1Gz/b9pQOaV/3QsCAAAAAADgaIRSQBulK+wpbw93OW1ge3Fzc2vpIQEAAAAA2pBW0egcgOOtO5BlrsMCvAikAAAAAADNjlAKaKOun7vaXNPgHAAAAADQEgilgDbo0R82S1m5ZeXKs4Z0aOnhAAAAAADaIEIpoI35bVuKvPrbLrM9rkeE/Ou4bi09JAAAAABAG0QoBbQxS3ek2rbfumJEi44FAAAAANB2EUoBbUhhSZm8Ulkl9dCZ/cXLg/8FAAAAAABaBr+RAm3Ig99tsm33jA5q0bEAAAAAANo2QimgDdmWmGOuh3cOk5Fdwlt6OAAAAACANoxQCmhDUnKLzPUdp/YRd3e3lh4OAAAAAKANI5QC2pCUHEsoFRno09JDAQAAAAC0cYRSQBuxYlea5BeXme3IIEIpAAAAAEDLIpQC2ogf1ifYtgN8PFt0LAAAAAAAEEoBbayf1I0n9WzpoQAAAAAAIJRLAC6qpKxc7vhsnSzakiyju4VLam6x2d8zKqilhwYAAAAAAKEU4Kp+3pIsX6w+aLZ/2phk2x8dTD8pAAAAAEDLY/oe4KJ2JOfWuj862LfZxwIAAAAAQHVM3wNaqYqKCskuLDXXwb5e4u7uJr9uTZatiTm2Sil1cv9ou0qp9iGEUgAAAACAlkcoBbQyu1Jy5d8frpbE7EJJz7P0iRocFyKXjOost3++rsb9pwxsbxdKeXpQIAkAAAAAaHmEUkAr8+PGRNmUkG23b+2BLFl74FAgde4xceY6IshbTu4fIyO6hMnKPRly6oCYZh8vAAAAAAC1IZQCWpnErEJzPb5XpLxw0VA5cc5iSc0tsh1fcsdEiQvzt3vMCxcfI0t3pMqE3pHNPl4AAAAAAGpDKAW0EnPmb5XXft8tBSVl5vbkftES4u8lMSE+tlBqUFxIjUBKxYT4yrnDLNVTAAAAAAA4A0IpoBXQZub/9+tOKS2vqNGwPMzf27bvnKEdWmR8AAAAAAA0FB2PgVYgM7/ELpDy9XKXwR1Dzbb2jHJz04bmMXL5sV1bcJQAAAAAANQflVJAK5Ccc6hn1JwLBpsgKsDH8vW9dHRn09hcgyoAAAAAAFoLQinAye1IzpFv18ab7V7RgXJO5cp6Vfl5e7TAyAAAAAAAaDxCKcCJFZaUydkvLpOcolJzOzrY0kcKAAAAAIDWjlAKcGJJ2YUmkPJ0d5NxPSPkqvHdWnpIAAAAAAA4BKEU4CRKysplV0qe9IwKFHd3N7MvNdfSS6p9qK+8fcXIFh4hAAAAAACOQ2dkwAlUVFTIP974U05+9jd57fddtv0plQ3OIwJ9WnB0AAAAAAA4HpVSgBNYdyBLlu9KM9uPzdsiS3emSUlpuW1fJKEUAAAAAMDFEEoBTmDdgUy7279tS7G73SMqsJlHBAAAAABA02L6HuAEUnKLzfWkftHy3EVD5MLhHW3HooJ85N8n9GjB0QEAAAAA4HiEUoATsPaO6h8bLGcO6SBnDom1Hbt2Qnfx96aoEQAAAADgWgilACcKpSKDLA3NB3UMlQ6hfhLg7SHjekS08OgAAAAAAHA8yi8AJ5CSW2TX0DzQx1MW3zZByioqxMfTo4VHBwAAAACA4xFKAU4gtbJSKqKyUkp5erjzBQUAAAAAuCym7wEtrKKiokalFAAAAAAAro5KKaCZe0ftTcuTvOIySc4ulDMGx0pRabkUl5bb9ZQCAAAAAMDVEUoBzSS3qFROePpXySkste17bt
F2KS+vMNtBvp7i60X/KAAAAABA20AoBRylVxbvlL/3ZcjUwR3ktEHt67zf9qQcE0h5ebhJSZkliDqQUWA73j82mM8CAAAAANBmEEoBR+HjlfvksXlbzPayHWlSWFImO1NyzSUjr0SKy8rl5UuHSUyIr+xLzzf3G9opTD6+arRcP3e1fLcuweyLCfaVty4fyWcBAAAAAGgzCKWAo/D0/G227ZyiUrnl07U17vPkT1tlxviuMm99orndKdxf3Nzc5KnzB9tCqWmjOomfN1P3AAAAAABtB6EU0EhZ+SWSnGNZNe/Gk3rKil3pUl5RIR3C/KRLuwBZvC1FVu3NkM//PmAuViO7hJtr7R91Qp8oWXcgSy4e2YnPAQAAAADQphBKAY304q87/r+9OwGOosz7OP6fHJP7JAlJIGBCJEhgua8FEQQh1L4g6FbpgiiuQnG4paLIYhUC69aLu6JlLS+HJ7C1ciy7Bo9VXFBgxeVSCJeQNRIChgQwkIPcyfRbz4MzO5OLIJOZJPP9VDWTnu6Z9MzwT3f/5nmetnW9e2ps93rLZ41Mkqc2Z8hXOVdt96XEBsukvvG2+bceHig1FkPMPl58DgAAAAAAj0IoBTShsKxKPjt1SfKLK+ot23YkV98mxwQ3+FjVEmrt9AFNvr9eXiYxe5n4DAAAAAAAHodQCmjC9LcPyvHcoibfo/+b2o/3EAAAAACAm0QoBTSisqZWTl64HkiN7B4tcaH+9dYZ27OjhAeaeQ8BAAAAALhJhFJAIzLOFYrFEAn285ENjw7SV8wDAAAAAADOwejKQCNW/DNT33YKDyCQAgAAAADAyQilgEYUlVfr27vviOE9AgAAAADAyQilgEZcKqnUt1P6deI9AgAAAADAyQilgAacv1ImhWXXW0rFhPjxHgEAAAAA4GQMdA6IyJrd38nmQ+ekuLxaKqotUl5dq98Xs4+XhAX48h4BAAAAAOBkhFLwKLUWQ9bszpITucWSV1QunSMCJTTAV97PyJWyqutBlFWIn4/c178Tg5wDAAAAANACCKXgUb749rKs+Od/bPNHvy9yWL551lCJC/PXraPCA81u2EIAAAAAADwDoRQ8wpXSKjmeWyTbT+Tb7nvtgb6SW1guFoshJpPIqJQY6dUpzK3bCQAAAACApyCUQruVmV8iOQWlutXTvI2H5YdrVbZlC9N6yGSuqgcAAAAAgNsQSqFdulhcIf+z8guprjUc7u/VKVQiAs0yhUAKAAAAAAC3IpRCu3Ts+6J6gdT/TuktU4d0cds2AQAAAACA/yKUQru0eNsJfXtv33hZOjFVvr10TQbdFuHuzQIAAAAAAD8ilEK7U1RWLfnFFfrnAV0jJCLILIMTI929WQAAAAAAwI6X/QzQ1hmGIcs+PGmbf2hIV7duDwAAAAAAaBgtpdAmg6f3My5I5sUSCfbzkS6RgWIyXV+WfjhXPjt9Sf+clhorXl4/LgAAAAAAAK0KoRRatcqaWskvqvgxeLoeMH1w9II8tSWjycd1CDLLgrQUF20lAAAAAAC4WYRS7VR5Va0EmL1/8uNrLYZYjOtXr1NRkI93/Z6eFdW18p+LJRIRaJbYMH8dHhWUVklpZY3kFJSJr7dJbosK0j+rcMkqyOwjo1NiJNjfR7wbacmkWkPVWAyZ9uYB+SrnqsSE+ElcmL9elltYrm8To4KkW3SQlFTU2B5n9vGSvgnhMndU8i29fgAAAAAA0LIIpdqph94+IN4mk3SOCNADfceHB+j7o4LNusvbmculsu9MgeRevR7w2FNhVM6VMqmqseh5lRulxIZK947BUnCtSn64Vimh/r5yKr/YIRC6WT5eJunaIVB8vOoHXlfKquRySaVt/lJJpZ6sVKOp1dP6yx1xoT/59wMAAAAAAPchlGqHzly+JkfOXRWLIXLw7K0/n3qeU3nFeqpLBVylVTXyY6MqHXqpFkpdI1ULpmrdcioyyGxr5aTWU62rzhaU6ZZQ310uveHv79UpVObf09
3hvriwAAIpAAAAAADaMEKpdigpOlj2LBgtn526KBU1Fsm+XCoVNbU6BFI/V9dadAulmFB/GZrUQaKCzPWeIzTAVxIiAvXPP5RWyuGcq1JYVq275HUM9Zf84gpJiQ2RwbdFSnWtIYXlqtterSTHBN9w+1TXvOKKmuvd/a79t/WTPTVAeafwAMkrqtChVKCZ/6oAAAAAALQnnOm3UwmRgTJjeKJTniss0Fe6RTceNvl4iwSYr3cPbA41YHlYgK+eREJu+DoAAAAAAED7U38wHwAAAAAAAKCFEUoBAAAAAADA5QilAAAAAAAA4HKEUgAAAAAAAHA5QikAAAAAAAC4HKEUAAAAAAAAXI5QCgAAAAAAAC5HKAUAAAAAAACXI5QCAAAAAACAyxFKAQAAAAAAwOUIpQAAAAAAAOByhFIAAAAAAABwOUIpAAAAAAAAuByhFAAAAAAAAFyuTYRSZ8+elccee0wSExMlICBAunXrJkuWLJGqqiqHdUwmU71p//79bt12AAAAAAAA1OcjbcDp06fFYrHI66+/LsnJyXLixAmZOXOmlJaWyooVKxzW3blzp6SmptrmO3To4IYtBgAAAAAAQJsPpdLS0vRklZSUJJmZmbJmzZp6oZQKoWJjY92wlQAAAAAAAGhX3fcaUlRUJJGRkfXunzRpksTExMiIESPkgw8+cMu2AQAAAAAAoB20lKorKytLVq5c6dBKKjg4WF555RUZPny4eHl5yd///neZPHmybNu2TQdVjamsrNSTVXFxsb6trq7WU13W+xpaBqD5qCXg1lFHgHNQSwB1BLQW1e0kc2ju9psMwzDETX7729/KH/7whybXOXXqlPTo0cM2n5ubK3fddZeMGjVK3nrrrSYf+/DDD0t2drZ88cUXja6zdOlSWbZsWb37N27cKIGBgc16HQAAAAAAALiurKxMpk6dqnu5hYaGSqsMpS5fviwFBQVNrqPGjzKbzfrnCxcu6DBq6NChsn79et0iqimrVq2S3//+95KXl9fsllLqDevSpYsOs0JCQhpM+3bt2iWjR48WX1/fZrxKAA2hloBbRx0BzkEtAdQR0FpUt5PMoaSkRBITE6WwsFDCwsJaZ/e96OhoPTWHaiGlPpQBAwbIunXrbhhIKRkZGRIXF9fkOn5+fnqq231PvXkAAAAAAAD46eFUqw2lmksFUqqFVNeuXfU4UqqFlZX1SnsbNmzQLar69eun59977z155513btjFr674+Hg5f/68biVlMpnqLVehVUJCgl6nqSZoAJpGLQG3jjoCnINaAqgjoLUobieZg+qUpwIplbE0pU2EUjt27NCDm6upc+fODsvsex+++OKLkpOTIz4+Pnocqi1btsgvf/nLm/pdqgVW3d/REPWfoy3/BwFaC2oJoI6A1oJ9EkAdAa1FaDvIHJpqIdUqxpRqq6mlemNvNFgXAGoJaGnskwBqCWgt2CcB1NJPceOBmQAAAAAAAAAnI5S6SWpQ9CVLljgMjg7g5lFLwK2jjgDnoJYA6ghoLfw8LHOg+x4AAAAAAABcjpZSAAAAAAAAcDlCKQAAAAAAALgcoRQAAAAAAABcjlDqJq1atUpuu+028ff3lyFDhsjBgwdb5pMB2qClS5eKyWRymHr06GFbXlFRIfPmzZMOHTpIcHCw3H///XLx4kWH5zh37pz84he/kMDAQImJiZEFCxZITU2NG14N4Br/+te/ZOLEiRIfH69rZtu2bQ7LDcOQF154QeLi4iQgIEDGjh0r3377rcM6V65ckWnTpkloaKiEh4fLY489JteuXXNY59ixY3LnnXfq/VdCQoL88Y9/dMnrA1pLLc2YMaPePiotLc1hHWoJnm758uUyaNAgCQkJ0cdhkydPlszMTId1nHU8t3v3bunfv78ezDk5OVnWr1/vktcItIY6GjVqVL190uzZsz2yjgilbsKWLVtk/vz5eiT8w4cPS58+fWT8+PFy6dKllvuEgDYmNTVV8vLybNPevXtty5
5++mn58MMPZevWrbJnzx65cOGC3HfffbbltbW1+g9vVVWV/Pvf/5YNGzboP6zqhBxor0pLS/X+RH3p0RAVHv3pT3+StWvXyoEDByQoKEjve9RJgZUKpE6ePCk7duyQjz76SJ+cz5o1y7a8uLhYxo0bJ127dpWvv/5aXn75ZR0iv/HGGy55jUBrqCVFhVD2+6hNmzY5LKeW4OnU8ZkKnPbv36/3KdXV1Xr/oerLmcdz2dnZep3Ro0dLRkaGPPXUU/L444/Lp59+6vLXDLijjpSZM2c67JPsvzD0qDoy0GyDBw825s2bZ5uvra014uPjjeXLl/MuAoZhLFmyxOjTp0+D70VhYaHh6+trbN261XbfqVOnDPVnaN++fXr+448/Nry8vIz8/HzbOmvWrDFCQ0ONyspK3mO0e6oe0tPTbfMWi8WIjY01Xn75ZYda8vPzMzZt2qTnv/nmG/24Q4cO2db55JNPDJPJZOTm5ur51atXGxEREQ51tHDhQiMlJcVFrwxwby0pjzzyiHHvvfc2+hhqCajv0qVLup727Nnj1OO55557zkhNTXX4XQ888IAxfvx4Pga0+zpS7rrrLuPJJ59s9DGeVEe0lGomlVCqb5dVtwkrLy8vPb9v376WygyBNkd1K1JdJ5KSkvQ3zqrZqaLqR31LYF9Dqmtfly5dbDWkbnv37i0dO3a0raNahKhWHqoVCOBp1Ddg+fn5DnUTFhamu4/b143qsjdw4EDbOmp9tY9SLaus64wcOVLMZrNDbamm5FevXnXpawLcSXVzUF0gUlJSZM6cOVJQUGBbRi0B9RUVFenbyMhIpx7PqXXsn8O6DudV8IQ6snr33XclKipKevXqJYsWLZKysjLbMk+qIx93b0Bb8cMPP+gmdPb/KRQ1f/r0abdtF9CaqBNl1axUHeyrJqjLli3TY9icOHFCn1irE2J18ly3htQyRd02VGPWZYCnsf6/b6gu7OtGnWTb8/Hx0Qc+9uskJibWew7rsoiIiBZ9HUBroLruqS5Gqha+++47ef7552XChAn64N3b25taAuqwWCy6O9Dw4cP1SbPirOO5xtZRJ9zl5eV6DEWgvdaRMnXqVD2sQnx8vB73c+HChfrLwvfee8/j6ohQCoDTqIN7q5/97Gc6pFJ/bP/617+2mT+KAID26cEHH7T9rL59Vvupbt266dZTY8aMceu2Aa2RGhNHfbFoPz4oAOfUkf3Yn71799YXtFH7IvWlido3eRK67zWTalanvkWre2UJNR8bG9sSnw3Q5qlv0bp37y5ZWVm6TlQ32MLCwkZrSN02VGPWZYCnsf6/b2rfo27rXnBDXZlFXUWM2gIap7qZq+M7tY+ilgBHTzzxhL5wxq5du6Rz584O+yVnHM81to66iixfZKK911FDhgwZom/t90meUkeEUs2kmqkOGDBAPvvsM4emeGp+2LBhLfX5AG2auiS9SvtV8q/qx9fX16GGVBNVNeaUtYbU7fHjxx1OsNUVK9Qf1p49e7rlNQDupLoZqQMO+7pRTbLVWFH2daNODtQ4H1aff/653kdZD3DUOuqKfGocEPvaUl1t6boHT/X999/rMaXUPkqhlgB9ESx9Ip2enq73JXW7fjvreE6tY/8c1nU4r4In1FFDMjIy9K39Pslj6sjdI623JZs3b9ZXPFq/fr2+QsusWbOM8PBwhxHxAU/2zDPPGLt37zays7ONL7/80hg7dqwRFRWlrzihzJ492+jSpYvx+eefG1999ZUxbNgwPVnV1NQYvXr1MsaNG2dkZGQY27dvN6Kjo41Fixa58VUBLaukpMQ4cuSIntRu+dVXX9U/5+Tk6OUvvfSS3te8//77xrFjx/TVwxITE43y8nLbc6SlpRn9+vUzDhw4YOzdu9e4/fbbjV/96le25epqSR07djSmT59unDhxQu/PAgMDjddff52PFx5RS2rZs88+q68OpvZRO3fuNPr3769rpaKiwvYc1BI83Zw5c4ywsDB9PJeXl2
ebysrKbOs443juzJkzej+0YMECffW+VatWGd7e3npdoL3XUVZWlvG73/1O1092drY+xktKSjJGjhzpkXVEKHWTVq5cqf8Im81mY/Dgwcb+/ftb5pMB2iB1CdK4uDhdH506ddLz6o+ulTqJnjt3rr40vfoDOmXKFP0H2t7Zs2eNCRMmGAEBATrQUkFXdXW1G14N4Bq7du3SJ9B1J3X5esVisRiLFy/WoZL6YmTMmDFGZmamw3MUFBToECo4OFhfKvjRRx/VJ+H2jh49aowYMUI/h6pPFXYBnlJL6kRAHdirA3p1OfuuXbsaM2fOrPfFIrUET9dQDalp3bp1Tj+eUzXbt29ffdyoTsjtfwfQnuvo3LlzOoCKjIzUx2XJyck6WCoqKvLIOjKpf9zdWgsAAAAAAACehTGlAAAAAAAA4HKEUgAAAAAAAHA5QikAAAAAAAC4HKEUAAAAAAAAXI5QCgAAAAAAAC5HKAUAAAAAAACXI5QCAAAAAACAyxFKAQAAAAAAwOUIpQAAANzo7NmzYjKZJCMjo8V+x4wZM2Ty5Mkt9vwAAAA/BaEUAADALQY+KlSqO6WlpTXr8QkJCZKXlye9evXicwAAAB7Fx90bAAAA0NapAGrdunUO9/n5+TXrsd7e3hIbG9tCWwYAANB60VIKAADgFqkASgVL9lNERIReplpNrVmzRiZMmCABAQGSlJQkf/vb3xrtvnf16lWZNm2aREdH6/Vvv/12h8Dr+PHjcvfdd+tlHTp0kFmzZsm1a9dsy2tra2X+/PkSHh6ulz/33HNiGIbD9losFlm+fLkkJibq5+nTp4/DNt1oGwAAAJyBUAoAAKCFLV68WO6//345evSoDnsefPBBOXXqVKPrfvPNN/LJJ5/odVSgFRUVpZeVlpbK+PHjdeB16NAh2bp1q+zcuVOeeOIJ2+NfeeUVWb9+vbzzzjuyd+9euXLliqSnpzv8DhVI/fnPf5a1a9fKyZMn5emnn5aHHnpI9uzZc8NtAAAAcBaTUferMwAAANzUmFJ/+ctfxN/f3+H+559/Xk+qFdTs2bN1sGM1dOhQ6d+/v6xevVq3lFItlo4cOSJ9+/aVSZMm6QBIhUp1vfnmm7Jw4UI5f/68BAUF6fs+/vhjmThxoly4cEE6duwo8fHxOmRasGCBXl5TU6Off8CAAbJt2zaprKyUyMhIHWYNGzbM9tyPP/64lJWVycaNG5vcBgAAAGdhTCkAAIBbNHr0aIfQSVHBj5V9+GOdb+xqe3PmzNGtqg4fPizjxo3TV837+c9/rpepVkuqq501kFKGDx+uu+NlZmbqYEwNmj5kyBDbch8fHxk4cKCtC19WVpYOn+655x6H31tVVSX9+vW74TYAAAA4C6EUAADALVIhUXJyslPeRzX2VE5Ojm4BtWPHDhkzZozMmzdPVqxY4ZTnt44/9Y9//EM6derU4ODsLb0NAAAACmNKAQAAtLD9+/fXm7/jjjsaXV8NMP7II4/oboGvvfaavPHGG/p+9Rg1LpUaW8rqyy+/FC8vL0lJSZGwsDCJi4uTAwcO2Jar7ntff/21bb5nz546fDp37pwO0uynhISEG24DAACAs9BSCgAA4BapcZry8/MdD7J8fGyDg6sByVUXuhEjRsi7774rBw8elLfffrvB53rhhRf0+E+pqan6eT/66CNbgKUGSV+yZIkOi5YuXSqXL1+W3/zmNzJ9+nQ9npTy5JNPyksvvaSvmNejRw959dVXpbCw0Pb8ISEh8uyzz+pxp1S3P7VNRUVFOtwKDQ3Vz93UNgAAADgLoRQAAMAt2r59u26hZE+1XDp9+rT+edmyZbJ582aZO3euXm/Tpk26xVJDzGazLFq0SA+AHhAQIHfeead+rBIYGCiffvqpDp4GDRqk59XYTyp4snrmmWf0uFIqXFItqH7961/LlClTdPBk9eKLL+qWUOoqfGfOnJHw8HA98LoamP1G2wAAAOAsXH0PAACgBamr76Wnp+vBwgEAAP
BfjCkFAAAAAAAAlyOUAgAAAAAAgMsxphQAAEALMgyD9xcAAKABtJQCAAAAAACAyxFKAQAAAAAAwOUIpQAAAAAAAOByhFIAAAAAAABwOUIpAAAAAAAAuByhFAAAAAAAAFyOUAoAAAAAAAAuRygFAAAAAAAAlyOUAgAAAAAAgLja/wNOi1FyiQ98rwAAAABJRU5ErkJggg==",
      "text/plain": [
       "<Figure size 1200x600 with 1 Axes>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "plot_training_curves(\n",
    "    training_histories, f\"plots/{AGENT_TO_TRAIN}_training_curves.png\", window=100,\n",
    ")\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0fc0c643",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Loaded checkpoints: ['SARSA', 'Q-Learning']\n",
      "Missing checkpoints (untrained/not saved yet): ['DQN', 'Monte Carlo']\n"
     ]
    }
   ],
   "source": [
    "# Load all available checkpoints before final evaluation\n",
    "loaded_agents: list[str] = []\n",
    "missing_agents: list[str] = []\n",
    "\n",
    "for name, agent in agents.items():\n",
    "    if name == \"Random\":\n",
    "        continue\n",
    "\n",
    "    ckpt_path = _ckpt_path(name)\n",
    "\n",
    "    if ckpt_path.exists():\n",
    "        agent.load(str(ckpt_path))\n",
    "        loaded_agents.append(name)\n",
    "    else:\n",
    "        missing_agents.append(name)\n",
    "\n",
    "print(f\"Loaded checkpoints: {loaded_agents}\")\n",
    "if missing_agents:\n",
    "    print(f\"Missing checkpoints (untrained/not saved yet): {missing_agents}\")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0e5f2c49",
   "metadata": {},
   "source": [
    "## Final Evaluation\n",
    "\n",
    "Each agent plays 20 episodes against the built-in AI with no exploration (ε = 0).\n",
    "Performance is compared via mean reward and win rate."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "70f5d5cd",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Build evaluation set: Random + agents with existing checkpoints\n",
    "eval_agents: dict[str, Agent] = {\"Random\": agents[\"Random\"]}\n",
    "missing_agents: list[str] = []\n",
    "\n",
    "for name, agent in agents.items():\n",
    "    if name == \"Random\":\n",
    "        continue\n",
    "\n",
    "    ckpt_path = _ckpt_path(name)\n",
    "\n",
    "    if ckpt_path.exists():\n",
    "        agent.load(str(ckpt_path))\n",
    "        eval_agents[name] = agent\n",
    "    else:\n",
    "        missing_agents.append(name)\n",
    "\n",
    "print(f\"Agents evaluated: {list(eval_agents.keys())}\")\n",
    "if missing_agents:\n",
    "    print(f\"Skipped (no checkpoint yet): {missing_agents}\")\n",
    "\n",
    "if len(eval_agents) < 2:\n",
    "    raise RuntimeError(\"Train at least one non-random agent before final evaluation.\")\n",
    "\n",
    "results = evaluate_tournament(env, eval_agents, episodes_per_agent=20)\n",
    "plot_evaluation_comparison(results)\n",
    "\n",
    "# Print summary table (column widths sum to the 48-char rule below)\n",
    "print(f\"\\n{'Agent':<15} {'Mean Reward':>12} {'Std':>8} {'Win Rate':>10}\")\n",
    "print(\"-\" * 48)\n",
    "for name, res in results.items():\n",
    "    print(f\"{name:<15} {res['mean_reward']:>12.2f} {res['std_reward']:>8.2f} {res['win_rate']:>10.1%}\")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3f8b300d",
   "metadata": {},
   "source": [
    "## PettingZoo Tournament (Agents vs Agents)\n",
    "\n",
    "This tournament uses `from pettingzoo.atari import tennis_v3` to make trained agents play against each other directly.\n",
    "\n",
    "- Checkpoints are loaded from `checkpoints/` (`.pkl` for linear agents, `.pt` for DQN)\n",
    "- `Random` is **excluded** from ranking\n",
    "- Each pair plays in both seat positions (`first_0` and `second_0`) to reduce position bias\n",
    "- A win-rate matrix and final ranking are produced"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d04d37e0",
   "metadata": {},
   "outputs": [],
   "source": [
    "def create_tournament_env():\n",
    "    \"\"\"Create PettingZoo Tennis env with preprocessing compatible with our agents.\"\"\"\n",
    "    env = tennis_v3.env(obs_type=\"rgb_image\")\n",
    "    env = ss.color_reduction_v0(env, mode=\"full\")\n",
    "    env = ss.resize_v1(env, x_size=84, y_size=84)\n",
    "    return ss.frame_stack_v1(env, 4)\n",
    "\n",
    "\n",
    "def run_pz_match(\n",
    "    env,\n",
    "    agent_first: Agent,\n",
    "    agent_second: Agent,\n",
    "    episodes: int = 10,\n",
    "    max_steps: int = 4000,\n",
    ") -> dict[str, int]:\n",
    "    \"\"\"Run multiple PettingZoo episodes between two agents.\n",
    "\n",
    "    Returns wins for global labels {'first': ..., 'second': ..., 'draw': ...}.\n",
    "    \"\"\"\n",
    "    wins = {\"first\": 0, \"second\": 0, \"draw\": 0}\n",
    "\n",
    "    for _ep in range(episodes):\n",
    "        env.reset()\n",
    "        rewards = {\"first_0\": 0.0, \"second_0\": 0.0}\n",
    "\n",
    "        for step_idx, agent_id in enumerate(env.agent_iter()):\n",
    "            obs, reward, termination, truncation, _info = env.last()\n",
    "            done = termination or truncation\n",
    "            rewards[agent_id] += float(reward)\n",
    "\n",
    "            if done or step_idx >= max_steps:\n",
    "                action = None\n",
    "            else:\n",
    "                current_agent = agent_first if agent_id == \"first_0\" else agent_second\n",
    "                action = current_agent.get_action(np.asarray(obs), epsilon=0.0)\n",
    "\n",
    "            env.step(action)\n",
    "\n",
    "            if step_idx + 1 >= max_steps:\n",
    "                break\n",
    "\n",
    "        if rewards[\"first_0\"] > rewards[\"second_0\"]:\n",
    "            wins[\"first\"] += 1\n",
    "        elif rewards[\"second_0\"] > rewards[\"first_0\"]:\n",
    "            wins[\"second\"] += 1\n",
    "        else:\n",
    "            wins[\"draw\"] += 1\n",
    "\n",
    "    return wins\n",
    "\n",
    "\n",
    "def run_pettingzoo_tournament(\n",
    "    agents: dict[str, Agent],\n",
    "    episodes_per_side: int = 10,\n",
    ") -> tuple[np.ndarray, list[str]]:\n",
    "    \"\"\"Round-robin tournament excluding Random, with seat-swap fairness.\"\"\"\n",
    "    candidate_names = [name for name in agents if name != \"Random\"]\n",
    "\n",
    "    # Keep only agents that have a checkpoint\n",
    "    ready_names: list[str] = []\n",
    "    for name in candidate_names:\n",
    "        ckpt_path = _ckpt_path(name)\n",
    "        if ckpt_path.exists():\n",
    "            agents[name].load(str(ckpt_path))\n",
    "            ready_names.append(name)\n",
    "\n",
    "    if len(ready_names) < 2:\n",
    "        msg = \"Need at least 2 trained (checkpointed) non-random agents for PettingZoo tournament.\"\n",
    "        raise RuntimeError(msg)\n",
    "\n",
    "    n = len(ready_names)\n",
    "    win_matrix = np.full((n, n), np.nan)\n",
    "    np.fill_diagonal(win_matrix, 0.5)\n",
    "\n",
    "    for i in range(n):\n",
    "        for j in range(i + 1, n):\n",
    "            name_i = ready_names[i]\n",
    "            name_j = ready_names[j]\n",
    "\n",
    "            print(f\"Matchup: {name_i} vs {name_j}\")\n",
    "            env = create_tournament_env()\n",
    "\n",
    "            # Leg 1: i as first_0, j as second_0\n",
    "            leg1 = run_pz_match(\n",
    "                env,\n",
    "                agent_first=agents[name_i],\n",
    "                agent_second=agents[name_j],\n",
    "                episodes=episodes_per_side,\n",
    "            )\n",
    "\n",
    "            # Leg 2: swap seats\n",
    "            leg2 = run_pz_match(\n",
    "                env,\n",
    "                agent_first=agents[name_j],\n",
    "                agent_second=agents[name_i],\n",
    "                episodes=episodes_per_side,\n",
    "            )\n",
    "\n",
    "            env.close()\n",
    "\n",
    "            wins_i = leg1[\"first\"] + leg2[\"second\"]\n",
    "            wins_j = leg1[\"second\"] + leg2[\"first\"]\n",
    "\n",
    "            decisive = wins_i + wins_j\n",
    "            if decisive == 0:\n",
    "                wr_i = 0.5\n",
    "                wr_j = 0.5\n",
    "            else:\n",
    "                wr_i = wins_i / decisive\n",
    "                wr_j = wins_j / decisive\n",
    "\n",
    "            win_matrix[i, j] = wr_i\n",
    "            win_matrix[j, i] = wr_j\n",
    "\n",
    "            print(f\" -> {name_i}: {wins_i} wins | {name_j}: {wins_j} wins\\n\")\n",
    "\n",
    "    return win_matrix, ready_names\n",
    "\n",
    "\n",
    "# Run tournament (non-random agents only)\n",
    "win_matrix_pz, pz_names = run_pettingzoo_tournament(\n",
    "    agents=agents,\n",
    "    episodes_per_side=10,\n",
    ")\n",
    "\n",
    "# Plot win-rate matrix\n",
    "plt.figure(figsize=(8, 6))\n",
    "sns.heatmap(\n",
    "    win_matrix_pz,\n",
    "    annot=True,\n",
    "    fmt=\".2f\",\n",
    "    cmap=\"Blues\",\n",
    "    vmin=0.0,\n",
    "    vmax=1.0,\n",
    "    xticklabels=pz_names,\n",
    "    yticklabels=pz_names,\n",
    ")\n",
    "plt.xlabel(\"Opponent\")\n",
    "plt.ylabel(\"Agent\")\n",
    "plt.title(\"PettingZoo Tournament Win Rate Matrix (Non-random agents)\")\n",
    "plt.tight_layout()\n",
    "plt.show()\n",
    "\n",
    "# Rank agents by mean win rate vs others (excluding diagonal)\n",
    "scores = {}\n",
    "for idx, name in enumerate(pz_names):\n",
    "    row = np.delete(win_matrix_pz[idx], idx)\n",
    "    scores[name] = float(np.mean(row))\n",
    "\n",
    "ranking = sorted(scores.items(), key=lambda x: x[1], reverse=True)\n",
    "print(\"Final ranking (PettingZoo tournament, non-random):\")\n",
    "for rank_idx, (name, score) in enumerate(ranking, start=1):\n",
    "    print(f\"{rank_idx}. {name:<12} | mean win rate: {score:.3f}\")\n",
    "\n",
    "print(f\"\\nBest agent: {ranking[0][0]}\")\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}