{
|
||
"cells": [
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "fef45687",
|
||
"metadata": {},
|
||
"source": [
|
||
"# RL Project: Atari Tennis Tournament\n",
|
||
"\n",
|
||
"This notebook implements four Reinforcement Learning algorithms to play Atari Tennis (`ALE/Tennis-v5` via Gymnasium):\n",
|
||
"\n",
|
||
"1. **SARSA** — Semi-gradient SARSA with linear approximation (inspired by Lab 7, on-policy update from Lab 5B)\n",
|
||
"2. **Q-Learning** — Off-policy linear approximation (inspired by Lab 5B)\n",
|
||
"\n",
|
||
"Each agent is **pre-trained independently** against the built-in Atari AI opponent, then evaluated in a comparative tournament."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 12,
|
||
"id": "b50d7174",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"Using device: mps\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"import pickle\n",
|
||
"from collections import deque\n",
|
||
"from pathlib import Path\n",
|
||
"\n",
|
||
"import ale_py # noqa: F401 — registers ALE environments\n",
|
||
"import gymnasium as gym\n",
|
||
"import supersuit as ss\n",
|
||
"from gymnasium.wrappers import FrameStackObservation, ResizeObservation\n",
|
||
"from pettingzoo.atari import tennis_v3\n",
|
||
"from tqdm.auto import tqdm\n",
|
||
"\n",
|
||
"import matplotlib.pyplot as plt\n",
|
||
"import numpy as np\n",
|
||
"\n",
|
||
"import torch\n",
|
||
"from torch import nn, optim\n",
|
||
"\n",
|
||
"DEVICE = torch.device(\"mps\" if torch.backends.mps.is_available() else \"cuda\" if torch.cuda.is_available() else \"cpu\")\n",
|
||
"print(f\"Using device: {DEVICE}\")\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "86047166",
|
||
"metadata": {},
|
||
"source": [
|
||
"# Configuration & Checkpoints\n",
|
||
"\n",
|
||
"We use a **checkpoint** system (`pickle` serialization) to save and restore trained agent weights. This enables an incremental workflow:\n",
|
||
"- Train one agent at a time and save its weights\n",
|
||
"- Resume later without retraining previous agents\n",
|
||
"- Load all checkpoints for the final evaluation"
|
||
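"\n",
"A minimal sketch of the intended round-trip, using `get_path` from the next cell and the `save`/`load` methods defined on the agents below (the agent type and hyperparameters here are illustrative):\n",
"\n",
"```python\n",
"# Hypothetical usage sketch, not part of the training workflow itself.\n",
"agent = SarsaAgent(n_features=28_224, n_actions=18)\n",
"path = get_path(\"SARSA\")      # -> checkpoints/sarsa.pkl\n",
"agent.save(str(path))          # pickle the agent's __dict__\n",
"\n",
"restored = SarsaAgent(n_features=28_224, n_actions=18)\n",
"restored.load(str(path))       # restore weights without retraining\n",
"```"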
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 13,
|
||
"id": "ff3486a4",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"CHECKPOINT_DIR = Path(\"checkpoints\")\n",
|
||
"CHECKPOINT_DIR.mkdir(parents=True, exist_ok=True)\n",
|
||
"\n",
|
||
"\n",
|
||
"def get_path(name: str) -> Path:\n",
|
||
" \"\"\"Return the checkpoint path for an agent (.pkl).\"\"\"\n",
|
||
" base = name.lower().replace(\" \", \"_\").replace(\"-\", \"_\")\n",
|
||
" return CHECKPOINT_DIR / (base + \".pkl\")\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "ec691487",
|
||
"metadata": {},
|
||
"source": [
|
||
"# Utility Functions\n",
|
||
"\n",
|
||
"## Observation Normalization\n",
|
||
"\n",
|
||
"The Tennis environment produces image observations of shape `(4, 84, 84)` after preprocessing (grayscale + resize + frame stack).\n",
|
||
"We normalize them into 1D `float64` vectors divided by 255, as in Lab 7 (continuous feature normalization).\n",
|
||
"\n",
|
||
"## ε-greedy Policy\n",
|
||
"\n",
|
||
"Follows the pattern from Lab 5B (`epsilon_greedy`) and Lab 7 (`epsilon_greedy_action`):\n",
|
||
"- With probability ε: random action (exploration)\n",
|
||
"- With probability 1−ε: action maximizing $\\hat{q}(s, a)$ with uniform tie-breaking (`np.flatnonzero`)"
|
||
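"\n",
"A minimal usage sketch of `epsilon_greedy` (defined in the next cell), on a toy 4-action Q-vector with a tie between actions 1 and 3:\n",
"\n",
"```python\n",
"rng = np.random.default_rng(0)\n",
"q = np.array([0.1, 0.7, 0.3, 0.7])\n",
"\n",
"# epsilon=0.0 is fully greedy; the tie between indices 1 and 3 is broken uniformly\n",
"picks = [epsilon_greedy(q, epsilon=0.0, rng=rng) for _ in range(1000)]\n",
"print(set(picks))  # {1, 3} almost surely, in roughly equal proportion\n",
"```"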
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 14,
|
||
"id": "be85c130",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"def normalize_obs(observation: np.ndarray) -> np.ndarray:\n",
|
||
" \"\"\"Flatten and normalize an observation to a 1D float64 vector.\n",
|
||
"\n",
|
||
" Replicates the /255.0 normalization used in all agents from the original project.\n",
|
||
" For image observations of shape (4, 84, 84), this produces a vector of length 28_224.\n",
|
||
"\n",
|
||
" Args:\n",
|
||
" observation: Raw observation array from the environment.\n",
|
||
"\n",
|
||
" Returns:\n",
|
||
" 1D numpy array of dtype float64, values in [0, 1].\n",
|
||
"\n",
|
||
" \"\"\"\n",
|
||
" return observation.flatten().astype(np.float64) / 255.0\n",
|
||
"\n",
|
||
"\n",
|
||
"def epsilon_greedy(\n",
|
||
" q_values: np.ndarray,\n",
|
||
" epsilon: float,\n",
|
||
" rng: np.random.Generator,\n",
|
||
") -> int:\n",
|
||
" \"\"\"Select an action using an ε-greedy policy with fair tie-breaking.\n",
|
||
"\n",
|
||
" Follows the same logic as Lab 5B epsilon_greedy and Lab 7 epsilon_greedy_action:\n",
|
||
" - With probability epsilon: choose a random action (exploration).\n",
|
||
" - With probability 1-epsilon: choose the action with highest Q-value (exploitation).\n",
|
||
" - If multiple actions share the maximum Q-value, break ties uniformly at random.\n",
|
||
"\n",
|
||
" Handles edge cases: empty q_values, NaN/Inf values.\n",
|
||
"\n",
|
||
" Args:\n",
|
||
" q_values: Array of Q-values for each action, shape (n_actions,).\n",
|
||
" epsilon: Exploration probability in [0, 1].\n",
|
||
" rng: NumPy random number generator.\n",
|
||
"\n",
|
||
" Returns:\n",
|
||
" Selected action index.\n",
|
||
"\n",
|
||
" \"\"\"\n",
|
||
" q_values = np.asarray(q_values, dtype=np.float64).reshape(-1)\n",
|
||
"\n",
|
||
" if q_values.size == 0:\n",
|
||
" msg = \"q_values is empty.\"\n",
|
||
" raise ValueError(msg)\n",
|
||
"\n",
|
||
" if rng.random() < epsilon:\n",
|
||
" return int(rng.integers(0, q_values.size))\n",
|
||
"\n",
|
||
" # Handle NaN/Inf values safely\n",
|
||
" finite_mask = np.isfinite(q_values)\n",
|
||
" if not np.any(finite_mask):\n",
|
||
" return int(rng.integers(0, q_values.size))\n",
|
||
"\n",
|
||
" safe_q = q_values.copy()\n",
|
||
" safe_q[~finite_mask] = -np.inf\n",
|
||
" max_val = np.max(safe_q)\n",
|
||
" best = np.flatnonzero(safe_q == max_val)\n",
|
||
"\n",
|
||
" if best.size == 0:\n",
|
||
" return int(rng.integers(0, q_values.size))\n",
|
||
"\n",
|
||
" return int(rng.choice(best))\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "bb53da28",
|
||
"metadata": {},
|
||
"source": [
|
||
"# Agent Definitions\n",
|
||
"\n",
|
||
"## Base Class `Agent`\n",
|
||
"\n",
|
||
"Common interface for all agents, same signatures: `get_action`, `update`, `save`, `load`.\n",
|
||
"Serialization uses `pickle` (compatible with numpy arrays)."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 15,
|
||
"id": "ded9b1fb",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"class Agent:\n",
|
||
" \"\"\"Base class for reinforcement learning agents.\n",
|
||
"\n",
|
||
" All agents share this interface so they are compatible with the tournament system.\n",
|
||
" \"\"\"\n",
|
||
"\n",
|
||
" def __init__(self, seed: int, action_space: int) -> None:\n",
|
||
" \"\"\"Initialize the agent with its action space and a reproducible RNG.\"\"\"\n",
|
||
" self.action_space = action_space\n",
|
||
" self.rng = np.random.default_rng(seed=seed)\n",
|
||
"\n",
|
||
" def get_action(self, observation: np.ndarray, epsilon: float = 0.0) -> int:\n",
|
||
" \"\"\"Select an action from the current observation.\"\"\"\n",
|
||
" raise NotImplementedError\n",
|
||
"\n",
|
||
" def update(\n",
|
||
" self,\n",
|
||
" state: np.ndarray,\n",
|
||
" action: int,\n",
|
||
" reward: float,\n",
|
||
" next_state: np.ndarray,\n",
|
||
" done: bool,\n",
|
||
" next_action: int | None = None,\n",
|
||
" ) -> None:\n",
|
||
" \"\"\"Update agent parameters from one transition.\"\"\"\n",
|
||
"\n",
|
||
" def save(self, filename: str) -> None:\n",
|
||
" \"\"\"Save the agent state to disk using pickle.\"\"\"\n",
|
||
" with Path(filename).open(\"wb\") as f:\n",
|
||
" pickle.dump(self.__dict__, f)\n",
|
||
"\n",
|
||
" def load(self, filename: str) -> None:\n",
|
||
" \"\"\"Load the agent state from disk.\"\"\"\n",
|
||
" with Path(filename).open(\"rb\") as f:\n",
|
||
" self.__dict__.update(pickle.load(f)) # noqa: S301\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "8a4eae79",
|
||
"metadata": {},
|
||
"source": [
|
||
"## Random Agent (baseline)\n",
|
||
"\n",
|
||
"Serves as a reference to evaluate the performance of learning agents."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 16,
|
||
"id": "78bdc9d2",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"class RandomAgent(Agent):\n",
|
||
" \"\"\"A simple agent that selects actions uniformly at random (baseline).\"\"\"\n",
|
||
"\n",
|
||
" def get_action(self, observation: np.ndarray, epsilon: float = 0.0) -> int:\n",
|
||
" \"\"\"Select a random action, ignoring the observation and epsilon.\"\"\"\n",
|
||
" _ = observation, epsilon\n",
|
||
" return int(self.rng.integers(0, self.action_space))\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "5f679032",
|
||
"metadata": {},
|
||
"source": [
|
||
"## SARSA Agent — Linear Approximation (Semi-gradient)\n",
|
||
"\n",
|
||
"This agent combines:\n",
|
||
"- **Linear approximation** from Lab 7 (`SarsaAgent`): $\\hat{q}(s, a; \\mathbf{W}) = \\mathbf{W}_a^\\top \\phi(s)$\n",
|
||
"- **On-policy SARSA update** from Lab 5B (`train_sarsa`): $\\delta = r + \\gamma \\hat{q}(s', a') - \\hat{q}(s, a)$\n",
|
||
"\n",
|
||
"The semi-gradient update rule is:\n",
|
||
"$$W_a \\leftarrow W_a + \\alpha \\cdot \\delta \\cdot \\phi(s)$$\n",
|
||
"\n",
|
||
"where $\\phi(s)$ is the normalized observation vector (analogous to tile coding features in Lab 7, but in dense form)."
|
||
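"\n",
"A toy one-step illustration of this update (3 features, 2 actions; all numbers are made up):\n",
"\n",
"```python\n",
"W = np.zeros((2, 3))                      # (n_actions, n_features)\n",
"phi = np.array([1.0, 0.5, 0.0])           # phi(s)\n",
"phi_next = np.array([0.0, 1.0, 1.0])      # phi(s')\n",
"a, a_next, r, alpha, gamma = 0, 1, 1.0, 0.1, 0.99\n",
"\n",
"delta = r + gamma * (W[a_next] @ phi_next) - (W[a] @ phi)  # TD error = 1.0 here\n",
"W[a] += alpha * delta * phi               # W[0] becomes [0.1, 0.05, 0.0]\n",
"```"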
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 17,
|
||
"id": "c124ed9a",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"class SarsaAgent(Agent):\n",
|
||
" \"\"\"Semi-gradient SARSA agent with linear function approximation.\n",
|
||
"\n",
|
||
" Inspired by:\n",
|
||
" - Lab 7 SarsaAgent: linear q(s,a) = W_a . phi(s), semi-gradient update\n",
|
||
" - Lab 5B train_sarsa: on-policy TD target using Q(s', a')\n",
|
||
"\n",
|
||
" The weight matrix W has shape (n_actions, n_features).\n",
|
||
" For a given state s, q(s, a) = W[a] @ phi(s) is the dot product\n",
|
||
" of the action's weight row with the normalized observation.\n",
|
||
" \"\"\"\n",
|
||
"\n",
|
||
" def __init__(\n",
|
||
" self,\n",
|
||
" n_features: int,\n",
|
||
" n_actions: int,\n",
|
||
" alpha: float = 0.001,\n",
|
||
" gamma: float = 0.99,\n",
|
||
" seed: int = 42,\n",
|
||
" ) -> None:\n",
|
||
" \"\"\"Initialize SARSA agent with linear weights.\n",
|
||
"\n",
|
||
" Args:\n",
|
||
" n_features: Dimension of the feature vector phi(s).\n",
|
||
" n_actions: Number of discrete actions.\n",
|
||
" alpha: Learning rate (kept small for high-dim features).\n",
|
||
" gamma: Discount factor.\n",
|
||
" seed: RNG seed for reproducibility.\n",
|
||
"\n",
|
||
" \"\"\"\n",
|
||
" super().__init__(seed, n_actions)\n",
|
||
" self.n_features = n_features\n",
|
||
" self.alpha = alpha\n",
|
||
" self.gamma = gamma\n",
|
||
" # Weight matrix: one row per action, analogous to Lab 7's self.w\n",
|
||
" # but organized as (n_actions, n_features) for dense features.\n",
|
||
" self.W = np.zeros((n_actions, n_features), dtype=np.float64)\n",
|
||
"\n",
|
||
" def _q_values(self, phi: np.ndarray) -> np.ndarray:\n",
|
||
" \"\"\"Compute Q-values for all actions given feature vector phi(s).\n",
|
||
"\n",
|
||
" Equivalent to Lab 7's self.q(s, a) = self.w[idx].sum()\n",
|
||
" but using dense linear approximation: q(s, a) = W[a] @ phi.\n",
|
||
"\n",
|
||
" Args:\n",
|
||
" phi: Normalized feature vector, shape (n_features,).\n",
|
||
"\n",
|
||
" Returns:\n",
|
||
" Array of Q-values, shape (n_actions,).\n",
|
||
"\n",
|
||
" \"\"\"\n",
|
||
" return self.W @ phi # shape (n_actions,)\n",
|
||
"\n",
|
||
" def get_action(self, observation: np.ndarray, epsilon: float = 0.0) -> int:\n",
|
||
" \"\"\"Select action using ε-greedy policy over linear Q-values.\n",
|
||
"\n",
|
||
" Same pattern as Lab 7 SarsaAgent.eps_greedy:\n",
|
||
" compute q-values for all actions, then apply epsilon_greedy.\n",
|
||
" \"\"\"\n",
|
||
" phi = normalize_obs(observation)\n",
|
||
" q_vals = self._q_values(phi)\n",
|
||
" return epsilon_greedy(q_vals, epsilon, self.rng)\n",
|
||
"\n",
|
||
" def update(\n",
|
||
" self,\n",
|
||
" state: np.ndarray,\n",
|
||
" action: int,\n",
|
||
" reward: float,\n",
|
||
" next_state: np.ndarray,\n",
|
||
" done: bool,\n",
|
||
" next_action: int | None = None,\n",
|
||
" ) -> None:\n",
|
||
" \"\"\"Perform one semi-gradient SARSA update.\n",
|
||
"\n",
|
||
" Follows the SARSA update from Lab 5B train_sarsa:\n",
|
||
" td_target = r + gamma * Q(s', a') * (0 if done else 1)\n",
|
||
" Q(s, a) += alpha * (td_target - Q(s, a))\n",
|
||
"\n",
|
||
" In continuous form with linear approximation (Lab 7 SarsaAgent.update):\n",
|
||
" delta = target - q(s, a)\n",
|
||
" W[a] += alpha * delta * phi(s)\n",
|
||
"\n",
|
||
" Args:\n",
|
||
" state: Current observation.\n",
|
||
" action: Action taken.\n",
|
||
" reward: Reward received.\n",
|
||
" next_state: Next observation.\n",
|
||
" done: Whether the episode ended.\n",
|
||
" next_action: Action chosen in next state (required for SARSA).\n",
|
||
"\n",
|
||
" \"\"\"\n",
|
||
" phi = np.nan_to_num(normalize_obs(state), nan=0.0, posinf=0.0, neginf=0.0)\n",
|
||
" q_sa = float(self.W[action] @ phi) # current estimate q(s, a)\n",
|
||
" if not np.isfinite(q_sa):\n",
|
||
" q_sa = 0.0\n",
|
||
"\n",
|
||
" if done:\n",
|
||
" # Terminal: no future value (Lab 5B: gamma * Q[s2, a2] * 0)\n",
|
||
" target = reward\n",
|
||
" else:\n",
|
||
" # On-policy: use q(s', a') where a' is the actual next action\n",
|
||
" # This is the key SARSA property (Lab 5B)\n",
|
||
" phi_next = np.nan_to_num(normalize_obs(next_state), nan=0.0, posinf=0.0, neginf=0.0)\n",
|
||
" if next_action is None:\n",
|
||
" next_action = 0 # fallback, should not happen in practice\n",
|
||
" q_sp_ap = float(self.W[next_action] @ phi_next)\n",
|
||
" if not np.isfinite(q_sp_ap):\n",
|
||
" q_sp_ap = 0.0\n",
|
||
" target = float(reward) + self.gamma * q_sp_ap\n",
|
||
"\n",
|
||
" # Semi-gradient update: W[a] += alpha * delta * phi(s)\n",
|
||
" # Analogous to Lab 7: self.w[idx] += self.alpha * delta\n",
|
||
" if not np.isfinite(target):\n",
|
||
" return\n",
|
||
"\n",
|
||
" delta = float(target - q_sa)\n",
|
||
" if not np.isfinite(delta):\n",
|
||
" return\n",
|
||
"\n",
|
||
" td_step = float(np.clip(delta, -1_000.0, 1_000.0))\n",
|
||
" self.W[action] += self.alpha * td_step * phi\n",
|
||
" self.W[action] = np.nan_to_num(self.W[action], nan=0.0, posinf=1e6, neginf=-1e6)\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "d4e18536",
|
||
"metadata": {},
|
||
"source": [
|
||
"## Q-Learning Agent — Linear Approximation (Off-policy)\n",
|
||
"\n",
|
||
"Same architecture as SARSA but with the **off-policy update** from Lab 5B (`train_q_learning`):\n",
|
||
"\n",
|
||
"$$\\delta = r + \\gamma \\max_{a'} \\hat{q}(s', a') - \\hat{q}(s, a)$$\n",
|
||
"\n",
|
||
"The key difference from SARSA: we use $\\max_{a'} Q(s', a')$ instead of $Q(s', a')$ where $a'$ is the action actually chosen. This allows learning the optimal policy independently of the exploration policy."
|
||
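"\n",
"The contrast in one line each, on toy next-state Q-values (illustrative numbers):\n",
"\n",
"```python\n",
"q_next = np.array([0.2, 0.8, 0.5])  # q(s', .) for the three actions\n",
"a_next = 2                           # action actually taken by the ε-greedy policy\n",
"r, gamma = 0.0, 0.99\n",
"\n",
"sarsa_target = r + gamma * q_next[a_next]     # 0.495: follows the behavior policy\n",
"q_learning_target = r + gamma * q_next.max()  # 0.792: greedy, off-policy\n",
"```"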
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 18,
|
||
"id": "f5b5b9ea",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"class QLearningAgent(Agent):\n",
|
||
" \"\"\"Q-Learning agent with linear function approximation (off-policy).\n",
|
||
"\n",
|
||
" Inspired by:\n",
|
||
" - Lab 5B train_q_learning: off-policy TD target using max_a' Q(s', a')\n",
|
||
" - Lab 7 SarsaAgent: linear approximation q(s,a) = W[a] @ phi(s)\n",
|
||
"\n",
|
||
" The only difference from SarsaAgent is the TD target:\n",
|
||
" SARSA uses Q(s', a') (on-policy), Q-Learning uses max_a' Q(s', a') (off-policy).\n",
|
||
" \"\"\"\n",
|
||
"\n",
|
||
" def __init__(\n",
|
||
" self,\n",
|
||
" n_features: int,\n",
|
||
" n_actions: int,\n",
|
||
" alpha: float = 0.001,\n",
|
||
" gamma: float = 0.99,\n",
|
||
" seed: int = 42,\n",
|
||
" ) -> None:\n",
|
||
" \"\"\"Initialize Q-Learning agent with linear weights.\n",
|
||
"\n",
|
||
" Args:\n",
|
||
" n_features: Dimension of the feature vector phi(s).\n",
|
||
" n_actions: Number of discrete actions.\n",
|
||
" alpha: Learning rate.\n",
|
||
" gamma: Discount factor.\n",
|
||
" seed: RNG seed.\n",
|
||
"\n",
|
||
" \"\"\"\n",
|
||
" super().__init__(seed, n_actions)\n",
|
||
" self.n_features = n_features\n",
|
||
" self.alpha = alpha\n",
|
||
" self.gamma = gamma\n",
|
||
" self.W = np.zeros((n_actions, n_features), dtype=np.float64)\n",
|
||
"\n",
|
||
" def _q_values(self, phi: np.ndarray) -> np.ndarray:\n",
|
||
" \"\"\"Compute Q-values for all actions: q(s, a) = W[a] @ phi for each a.\"\"\"\n",
|
||
" return self.W @ phi\n",
|
||
"\n",
|
||
" def get_action(self, observation: np.ndarray, epsilon: float = 0.0) -> int:\n",
|
||
" \"\"\"Select action using ε-greedy policy over linear Q-values.\"\"\"\n",
|
||
" phi = normalize_obs(observation)\n",
|
||
" q_vals = self._q_values(phi)\n",
|
||
" return epsilon_greedy(q_vals, epsilon, self.rng)\n",
|
||
"\n",
|
||
" def update(\n",
|
||
" self,\n",
|
||
" state: np.ndarray,\n",
|
||
" action: int,\n",
|
||
" reward: float,\n",
|
||
" next_state: np.ndarray,\n",
|
||
" done: bool,\n",
|
||
" next_action: int | None = None,\n",
|
||
" ) -> None:\n",
|
||
" \"\"\"Perform one Q-learning update.\n",
|
||
"\n",
|
||
" Follows Lab 5B train_q_learning:\n",
|
||
" td_target = r + gamma * max(Q[s2]) * (0 if terminated else 1)\n",
|
||
" Q[s, a] += alpha * (td_target - Q[s, a])\n",
|
||
"\n",
|
||
" In continuous form with linear approximation:\n",
|
||
" delta = target - q(s, a)\n",
|
||
" W[a] += alpha * delta * phi(s)\n",
|
||
" \"\"\"\n",
|
||
" _ = next_action # Q-learning is off-policy: next_action is not used\n",
|
||
" phi = np.nan_to_num(normalize_obs(state), nan=0.0, posinf=0.0, neginf=0.0)\n",
|
||
" q_sa = float(self.W[action] @ phi)\n",
|
||
" if not np.isfinite(q_sa):\n",
|
||
" q_sa = 0.0\n",
|
||
"\n",
|
||
" if done:\n",
|
||
" # Terminal state: no future value\n",
|
||
" # Lab 5B: gamma * np.max(Q[s2]) * (0 if terminated else 1)\n",
|
||
" target = reward\n",
|
||
" else:\n",
|
||
" # Off-policy: use max over all actions in next state\n",
|
||
" # This is the key Q-learning property (Lab 5B)\n",
|
||
" phi_next = np.nan_to_num(normalize_obs(next_state), nan=0.0, posinf=0.0, neginf=0.0)\n",
|
||
" q_next_all = self._q_values(phi_next) # q(s', a') for all a'\n",
|
||
" q_next_max = float(np.max(q_next_all))\n",
|
||
" if not np.isfinite(q_next_max):\n",
|
||
" q_next_max = 0.0\n",
|
||
" target = float(reward) + self.gamma * q_next_max\n",
|
||
"\n",
|
||
" if not np.isfinite(target):\n",
|
||
" return\n",
|
||
"\n",
|
||
" delta = float(target - q_sa)\n",
|
||
" if not np.isfinite(delta):\n",
|
||
" return\n",
|
||
"\n",
|
||
" td_step = float(np.clip(delta, -1_000.0, 1_000.0))\n",
|
||
" self.W[action] += self.alpha * td_step * phi\n",
|
||
" self.W[action] = np.nan_to_num(self.W[action], nan=0.0, posinf=1e6, neginf=-1e6)\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "79e6b39f",
|
||
"metadata": {},
|
||
"source": [
|
||
"## Monte Carlo Agent — Linear Approximation (First-visit)\n",
|
||
"\n",
|
||
"This agent is inspired by Lab 4 (`mc_control_epsilon_soft`):\n",
|
||
"- Accumulates transitions in an episode buffer `(state, action, reward)`\n",
|
||
"- At the end of the episode (`done=True`), computes **cumulative returns** by traversing the buffer backward:\n",
|
||
" $$G \\leftarrow \\gamma \\cdot G + r$$\n",
|
||
"- Updates weights with the semi-gradient rule:\n",
|
||
" $$W_a \\leftarrow W_a + \\alpha \\cdot (G - \\hat{q}(s, a)) \\cdot \\phi(s)$$\n",
|
||
"\n",
|
||
"Unlike TD methods (SARSA, Q-Learning), Monte Carlo waits for the complete episode to finish before updating.\n",
|
||
"\n",
|
||
"> **Note**: This agent currently has **checkpoint loading issues** — the saved weights fail to restore properly, causing the agent to behave as if untrained during evaluation. The training code itself works correctly."
|
||
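"\n",
"A small sketch of the backward return computation on a toy reward sequence (gamma = 0.5 keeps the arithmetic readable):\n",
"\n",
"```python\n",
"rewards, gamma = [0.0, 0.0, 1.0], 0.5\n",
"returns, G = [], 0.0\n",
"for r in reversed(rewards):      # traverse the episode backward\n",
"    G = gamma * G + r\n",
"    returns.append(G)\n",
"returns.reverse()                # [0.25, 0.5, 1.0]\n",
"```"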
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 19,
|
||
"id": "7a3aa454",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"class MonteCarloAgent(Agent):\n",
|
||
" \"\"\"Monte Carlo control agent with linear function approximation.\n",
|
||
"\n",
|
||
" Inspired by Lab 4 mc_control_epsilon_soft:\n",
|
||
" - Accumulates transitions in an episode buffer\n",
|
||
" - At episode end (done=True), computes discounted returns backward:\n",
|
||
" G = gamma * G + r (same as Lab 4's reversed loop)\n",
|
||
" - Updates weights with semi-gradient: W[a] += alpha * (G - q(s,a)) * phi(s)\n",
|
||
"\n",
|
||
" Unlike TD methods (SARSA, Q-Learning), no update occurs until the episode ends.\n",
|
||
"\n",
|
||
" Performance optimizations over naive per-step implementation:\n",
|
||
" - float32 weights & features (halves memory bandwidth, faster SIMD)\n",
|
||
" - Raw observations stored compactly as uint8, batch-normalized at episode end\n",
|
||
" - Vectorized return computation & chunk-based weight updates via einsum\n",
|
||
" - Single weight sanitization per episode instead of per-step\n",
|
||
" \"\"\"\n",
|
||
"\n",
|
||
" def __init__(\n",
|
||
" self,\n",
|
||
" n_features: int,\n",
|
||
" n_actions: int,\n",
|
||
" alpha: float = 0.001,\n",
|
||
" gamma: float = 0.99,\n",
|
||
" seed: int = 42,\n",
|
||
" ) -> None:\n",
|
||
" super().__init__(seed, n_actions)\n",
|
||
" self.n_features = n_features\n",
|
||
" self.alpha = alpha\n",
|
||
" self.gamma = gamma\n",
|
||
" self.W = np.zeros((n_actions, n_features), dtype=np.float32)\n",
|
||
" self._obs_buf: list[np.ndarray] = []\n",
|
||
" self._act_buf: list[int] = []\n",
|
||
" self._rew_buf: list[float] = []\n",
|
||
"\n",
|
||
" def _q_values(self, phi: np.ndarray) -> np.ndarray:\n",
|
||
" return self.W @ phi\n",
|
||
"\n",
|
||
" def get_action(self, observation: np.ndarray, epsilon: float = 0.0) -> int:\n",
|
||
" phi = observation.flatten().astype(np.float32) / np.float32(255.0)\n",
|
||
" q_vals = self._q_values(phi)\n",
|
||
" return epsilon_greedy(q_vals, epsilon, self.rng)\n",
|
||
"\n",
|
||
" def update(\n",
|
||
" self,\n",
|
||
" state: np.ndarray,\n",
|
||
" action: int,\n",
|
||
" reward: float,\n",
|
||
" next_state: np.ndarray,\n",
|
||
" done: bool,\n",
|
||
" next_action: int | None = None,\n",
|
||
" ) -> None:\n",
|
||
" _ = next_state, next_action\n",
|
||
"\n",
|
||
" self._obs_buf.append(state)\n",
|
||
" self._act_buf.append(action)\n",
|
||
" self._rew_buf.append(reward)\n",
|
||
"\n",
|
||
" if not done:\n",
|
||
" return\n",
|
||
"\n",
|
||
" n = len(self._rew_buf)\n",
|
||
" actions = np.array(self._act_buf, dtype=np.intp)\n",
|
||
"\n",
|
||
" returns = np.empty(n, dtype=np.float32)\n",
|
||
" G = np.float32(0.0)\n",
|
||
" gamma32 = np.float32(self.gamma)\n",
|
||
" for i in range(n - 1, -1, -1):\n",
|
||
" G = gamma32 * G + np.float32(self._rew_buf[i])\n",
|
||
" returns[i] = G\n",
|
||
"\n",
|
||
" alpha32 = np.float32(self.alpha)\n",
|
||
" chunk_size = 500\n",
|
||
" for start in range(0, n, chunk_size):\n",
|
||
" end = min(start + chunk_size, n)\n",
|
||
" cs = end - start\n",
|
||
"\n",
|
||
" raw = np.array(self._obs_buf[start:end])\n",
|
||
" phi = raw.reshape(cs, -1).astype(np.float32)\n",
|
||
" phi /= np.float32(255.0)\n",
|
||
"\n",
|
||
" ca = actions[start:end]\n",
|
||
" q_sa = np.einsum(\"ij,ij->i\", self.W[ca], phi)\n",
|
||
"\n",
|
||
" deltas = np.clip(returns[start:end] - q_sa, -1000.0, 1000.0)\n",
|
||
"\n",
|
||
" for a in range(self.action_space):\n",
|
||
" mask = ca == a\n",
|
||
" if not np.any(mask):\n",
|
||
" continue\n",
|
||
" self.W[a] += alpha32 * (deltas[mask] @ phi[mask])\n",
|
||
"\n",
|
||
" self.W = np.nan_to_num(self.W, nan=0.0, posinf=1e6, neginf=-1e6)\n",
|
||
"\n",
|
||
" self._obs_buf.clear()\n",
|
||
" self._act_buf.clear()\n",
|
||
" self._rew_buf.clear()\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "d5766fe9",
|
||
"metadata": {},
|
||
"source": [
|
||
"## DQN Agent — PyTorch MLP with Experience Replay and Target Network\n",
|
||
"\n",
|
||
"This agent implements the Deep Q-Network (DQN) using **PyTorch** for GPU-accelerated training (MPS on Apple Silicon).\n",
|
||
"\n",
|
||
"**Network architecture** (same structure as before, now as `torch.nn.Module`):\n",
|
||
"$$\\text{Input}(n\\_features) \\to \\text{Linear}(256) \\to \\text{ReLU} \\to \\text{Linear}(256) \\to \\text{ReLU} \\to \\text{Linear}(n\\_actions)$$\n",
|
||
"\n",
|
||
"**Key techniques** (inspired by Lab 6A Dyna-Q + classic DQN):\n",
|
||
"- **Experience Replay**: circular buffer of transitions, sampled as minibatches for off-policy updates\n",
|
||
"- **Target Network**: periodically synchronized copy of the Q-network, stabilizes learning\n",
|
||
"- **Gradient clipping**: prevents exploding gradients in deep networks\n",
|
||
"- **GPU acceleration**: tensors on MPS/CUDA device for fast forward/backward passes\n",
|
||
"\n",
|
||
"> **Note**: This agent currently has **checkpoint loading issues** — the saved `.pt` checkpoint fails to restore properly (device mismatch / state dict incompatibility), causing errors during evaluation. The training code itself works correctly."
|
||
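"\n",
"The replay-batch TD target, sketched in NumPy with made-up values (the agent below performs the same computation with torch tensors on the device):\n",
"\n",
"```python\n",
"rewards = np.array([0.0, 1.0])\n",
"q_next_max = np.array([0.8, 0.3])  # max_a' Q_target(s', a') per sample\n",
"dones = np.array([0.0, 1.0])       # 1.0 masks out the bootstrap term\n",
"gamma = 0.99\n",
"\n",
"targets = rewards + (1.0 - dones) * gamma * q_next_max  # [0.792, 1.0]\n",
"```"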
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 20,
|
||
"id": "9c777493",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"class QNetwork(nn.Module):\n",
|
||
" \"\"\"MLP Q-network: Input -> 256 -> ReLU -> 256 -> ReLU -> n_actions.\"\"\"\n",
|
||
"\n",
|
||
" def __init__(self, n_features: int, n_actions: int) -> None:\n",
|
||
" super().__init__()\n",
|
||
" self.net = nn.Sequential(\n",
|
||
" nn.Linear(n_features, 256),\n",
|
||
" nn.ReLU(),\n",
|
||
" nn.Linear(256, 256),\n",
|
||
" nn.ReLU(),\n",
|
||
" nn.Linear(256, n_actions),\n",
|
||
" )\n",
|
||
"\n",
|
||
" def forward(self, x: torch.Tensor) -> torch.Tensor:\n",
|
||
" return self.net(x)\n",
|
||
"\n",
|
||
"\n",
|
||
"class ReplayBuffer:\n",
|
||
" \"\"\"Fixed-size circular replay buffer storing (s, a, r, s', done) transitions.\"\"\"\n",
|
||
"\n",
|
||
" def __init__(self, capacity: int) -> None:\n",
|
||
" self.buffer: deque[tuple[np.ndarray, int, float, np.ndarray, bool]] = deque(maxlen=capacity)\n",
|
||
"\n",
|
||
" def push(self, state: np.ndarray, action: int, reward: float, next_state: np.ndarray, done: bool) -> None:\n",
|
||
" self.buffer.append((state, action, reward, next_state, done))\n",
|
||
"\n",
|
||
" def sample(self, batch_size: int, rng: np.random.Generator) -> tuple[np.ndarray, ...]:\n",
|
||
" indices = rng.choice(len(self.buffer), size=batch_size, replace=False)\n",
|
||
" batch = [self.buffer[i] for i in indices]\n",
|
||
" states = np.array([t[0] for t in batch])\n",
|
||
" actions = np.array([t[1] for t in batch])\n",
|
||
" rewards = np.array([t[2] for t in batch])\n",
|
||
" next_states = np.array([t[3] for t in batch])\n",
|
||
" dones = np.array([t[4] for t in batch], dtype=np.float32)\n",
|
||
" return states, actions, rewards, next_states, dones\n",
|
||
"\n",
|
||
" def __len__(self) -> int:\n",
|
||
" return len(self.buffer)\n",
|
||
"\n",
|
||
"\n",
|
||
"class DQNAgent(Agent):\n",
|
||
" \"\"\"Deep Q-Network agent using PyTorch with GPU acceleration (MPS/CUDA).\n",
|
||
"\n",
|
||
" Inspired by:\n",
|
||
" - Lab 6A Dyna-Q: experience replay (store transitions, sample for updates)\n",
|
||
" - Classic DQN (Mnih et al., 2015): target network, minibatch SGD\n",
|
||
"\n",
|
||
" Uses Adam optimizer and Huber loss (smooth L1) for stable training.\n",
|
||
" \"\"\"\n",
|
||
"\n",
|
||
" def __init__(\n",
|
||
" self,\n",
|
||
" n_features: int,\n",
|
||
" n_actions: int,\n",
|
||
" lr: float = 1e-4,\n",
|
||
" gamma: float = 0.99,\n",
|
||
" buffer_size: int = 50_000,\n",
|
||
" batch_size: int = 128,\n",
|
||
" target_update_freq: int = 1000,\n",
|
||
" seed: int = 42,\n",
|
||
" ) -> None:\n",
|
||
" \"\"\"Initialize DQN agent.\n",
|
||
"\n",
|
||
" Args:\n",
|
||
" n_features: Input feature dimension.\n",
|
||
" n_actions: Number of discrete actions.\n",
|
||
" lr: Learning rate for Adam optimizer.\n",
|
||
" gamma: Discount factor.\n",
|
||
" buffer_size: Maximum replay buffer capacity.\n",
|
||
" batch_size: Minibatch size for updates.\n",
|
||
" target_update_freq: Steps between target network syncs.\n",
|
||
" seed: RNG seed.\n",
|
||
"\n",
|
||
" \"\"\"\n",
|
||
" super().__init__(seed, n_actions)\n",
|
||
" self.n_features = n_features\n",
|
||
" self.lr = lr\n",
|
||
" self.gamma = gamma\n",
|
||
" self.batch_size = batch_size\n",
|
||
" self.target_update_freq = target_update_freq\n",
|
||
" self.update_step = 0\n",
|
||
"\n",
|
||
" # Q-network and target network on GPU\n",
|
||
" torch.manual_seed(seed)\n",
|
||
" self.q_net = QNetwork(n_features, n_actions).to(DEVICE)\n",
|
||
" self.target_net = QNetwork(n_features, n_actions).to(DEVICE)\n",
|
||
" self.target_net.load_state_dict(self.q_net.state_dict())\n",
|
||
" self.target_net.eval()\n",
|
||
"\n",
|
||
" self.optimizer = optim.Adam(self.q_net.parameters(), lr=lr)\n",
|
||
" self.loss_fn = nn.SmoothL1Loss() # Huber loss — more robust than MSE\n",
|
||
"\n",
|
||
" # Experience replay buffer\n",
|
||
" self.replay_buffer = ReplayBuffer(buffer_size)\n",
|
||
"\n",
|
||
" def get_action(self, observation: np.ndarray, epsilon: float = 0.0) -> int:\n",
|
||
" \"\"\"Select action using epsilon-greedy policy over Q-network outputs.\"\"\"\n",
|
||
" if self.rng.random() < epsilon:\n",
|
||
" return int(self.rng.integers(0, self.action_space))\n",
|
||
"\n",
|
||
" phi = normalize_obs(observation)\n",
|
||
" with torch.no_grad():\n",
|
||
" state_t = torch.from_numpy(phi).float().unsqueeze(0).to(DEVICE)\n",
|
||
" q_vals = self.q_net(state_t).cpu().numpy().squeeze(0)\n",
|
||
" return epsilon_greedy(q_vals, 0.0, self.rng)\n",
|
||
"\n",
|
||
" def update(\n",
|
||
" self,\n",
|
||
" state: np.ndarray,\n",
|
||
" action: int,\n",
|
||
" reward: float,\n",
|
||
" next_state: np.ndarray,\n",
|
||
" done: bool,\n",
|
||
" next_action: int | None = None,\n",
|
||
" ) -> None:\n",
|
||
" \"\"\"Store transition and perform a minibatch DQN update.\"\"\"\n",
|
||
" _ = next_action # DQN is off-policy\n",
|
||
"\n",
|
||
" # Store transition\n",
|
||
" phi_s = normalize_obs(state)\n",
|
||
" phi_sp = normalize_obs(next_state)\n",
|
||
" self.replay_buffer.push(phi_s, action, reward, phi_sp, done)\n",
|
||
"\n",
|
||
" if len(self.replay_buffer) < self.batch_size:\n",
|
||
" return\n",
|
||
"\n",
|
||
" # Sample minibatch\n",
|
||
" states_b, actions_b, rewards_b, next_states_b, dones_b = self.replay_buffer.sample(\n",
|
||
" self.batch_size, self.rng,\n",
|
||
" )\n",
|
||
"\n",
|
||
" # Convert to tensors on device\n",
|
||
" states_t = torch.from_numpy(states_b).float().to(DEVICE)\n",
|
||
" actions_t = torch.from_numpy(actions_b).long().to(DEVICE)\n",
|
||
" rewards_t = torch.from_numpy(rewards_b).float().to(DEVICE)\n",
|
||
" next_states_t = torch.from_numpy(next_states_b).float().to(DEVICE)\n",
|
||
" dones_t = torch.from_numpy(dones_b).float().to(DEVICE)\n",
|
||
"\n",
|
||
" # Current Q-values for taken actions\n",
|
||
" q_values = self.q_net(states_t)\n",
|
||
" q_curr = q_values.gather(1, actions_t.unsqueeze(1)).squeeze(1)\n",
|
||
"\n",
|
||
" # Target Q-values (off-policy: max over actions in next state)\n",
|
||
" with torch.no_grad():\n",
|
||
" q_next = self.target_net(next_states_t).max(dim=1).values\n",
|
||
" targets = rewards_t + (1.0 - dones_t) * self.gamma * q_next\n",
|
||
"\n",
|
||
" # Compute loss and update\n",
|
||
" loss = self.loss_fn(q_curr, targets)\n",
|
||
" self.optimizer.zero_grad()\n",
|
||
" loss.backward()\n",
|
||
" nn.utils.clip_grad_norm_(self.q_net.parameters(), max_norm=10.0)\n",
|
||
" self.optimizer.step()\n",
|
||
"\n",
|
||
" # Sync target network periodically\n",
|
||
" self.update_step += 1\n",
|
||
" if self.update_step % self.target_update_freq == 0:\n",
|
||
" self.target_net.load_state_dict(self.q_net.state_dict())\n",
|
||
"\n",
|
||
" def save(self, filename: str) -> None:\n",
|
||
" \"\"\"Save agent state using torch.save (networks + optimizer + metadata).\"\"\"\n",
|
||
" torch.save(\n",
|
||
" {\n",
|
||
" \"q_net\": self.q_net.state_dict(),\n",
|
||
" \"target_net\": self.target_net.state_dict(),\n",
|
||
" \"optimizer\": self.optimizer.state_dict(),\n",
|
||
" \"update_step\": self.update_step,\n",
|
||
" \"n_features\": self.n_features,\n",
|
||
" \"action_space\": self.action_space,\n",
|
||
" },\n",
|
||
" filename,\n",
|
||
" )\n",
|
||
"\n",
|
||
" def load(self, filename: str) -> None:\n",
|
||
" \"\"\"Load agent state from a torch checkpoint.\"\"\"\n",
|
||
" checkpoint = torch.load(filename, map_location=DEVICE, weights_only=False)\n",
|
||
" self.q_net.load_state_dict(checkpoint[\"q_net\"])\n",
|
||
" self.target_net.load_state_dict(checkpoint[\"target_net\"])\n",
|
||
" self.optimizer.load_state_dict(checkpoint[\"optimizer\"])\n",
|
||
" self.update_step = checkpoint[\"update_step\"]\n",
|
||
" self.q_net.to(DEVICE)\n",
|
||
" self.target_net.to(DEVICE)\n",
|
||
" self.target_net.eval()\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "91e51dc8",
|
||
"metadata": {},
|
||
"source": [
|
||
"## Tennis Environment\n",
|
||
"\n",
|
||
"Creation of the Atari Tennis environment via Gymnasium (`ALE/Tennis-v5`) with standard wrappers:\n",
|
||
"- **Grayscale**: `obs_type=\"grayscale\"` — single-channel observations\n",
|
||
"- **Resize**: `ResizeObservation(84, 84)` — downscale to 84×84\n",
|
||
"- **Frame stack**: `FrameStackObservation(4)` — stack 4 consecutive frames\n",
|
||
"\n",
|
||
"The final observation is an array of shape `(4, 84, 84)`, which flattens to 28,224 features.\n",
|
||
"\n",
|
||
"The agent plays against the **built-in Atari AI opponent**."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 21,
|
||
"id": "f9a973dd",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"def create_env() -> gym.Env:\n",
|
||
" \"\"\"Create the ALE/Tennis-v5 environment with preprocessing wrappers.\n",
|
||
"\n",
|
||
" Applies:\n",
|
||
" - obs_type=\"grayscale\": grayscale observation (210, 160)\n",
|
||
" - ResizeObservation(84, 84): downscale to 84x84\n",
|
||
" - FrameStackObservation(4): stack 4 consecutive frames -> (4, 84, 84)\n",
|
||
"\n",
|
||
" Returns:\n",
|
||
" Gymnasium environment ready for training.\n",
|
||
"\n",
|
||
" \"\"\"\n",
|
||
" env = gym.make(\"ALE/Tennis-v5\", obs_type=\"grayscale\")\n",
|
||
" env = ResizeObservation(env, shape=(84, 84))\n",
|
||
" return FrameStackObservation(env, stack_size=4)\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "18cb28d8",
|
||
"metadata": {},
|
||
"source": [
|
||
"## Training & Evaluation Infrastructure\n",
|
||
"\n",
|
||
"Functions for training and evaluating agents in the single-agent Gymnasium environment:\n",
|
||
"\n",
|
||
"1. **`train_agent`** — Pre-trains an agent against the built-in AI for a given number of episodes with ε-greedy exploration\n",
|
||
"2. **`evaluate_agent`** — Evaluates a trained agent (no exploration, ε = 0) and returns performance metrics\n",
|
||
"3. **`plot_training_curves`** — Plots the training reward history (moving average) for all agents\n",
|
||
"4. **`plot_evaluation_comparison`** — Bar chart comparing final evaluation scores across agents\n",
|
||
"5. **`evaluate_tournament`** — Evaluates all agents and produces a summary comparison"
|
||
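"\n",
"A quick sketch of the multiplicative ε schedule used by `train_agent` (start 1.0, decay 0.999 per episode, floor 0.05):\n",
"\n",
"```python\n",
"eps, history = 1.0, []\n",
"for _ in range(5000):\n",
"    history.append(eps)\n",
"    eps = max(0.05, eps * 0.999)\n",
"\n",
"# 0.999 ** 2995 ≈ 0.05, so ε hits its floor shortly before episode 3000\n",
"print(round(history[1000], 3), history[3500])  # 0.368 0.05\n",
"```"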
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 22,
|
||
"id": "06b91580",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"def train_agent(\n",
|
||
" env: gym.Env,\n",
|
||
" agent: Agent,\n",
|
||
" name: str,\n",
|
||
" *,\n",
|
||
" episodes: int = 5000,\n",
|
||
" epsilon_start: float = 1.0,\n",
|
||
" epsilon_end: float = 0.05,\n",
|
||
" epsilon_decay: float = 0.999,\n",
|
||
" max_steps: int = 5000,\n",
|
||
") -> list[float]:\n",
|
||
" \"\"\"Pre-train an agent against the built-in Atari AI opponent.\n",
|
||
"\n",
|
||
" Each agent learns independently by playing full episodes. This is the\n",
|
||
" self-play pre-training phase: the agent interacts with the environment's\n",
|
||
" built-in opponent and updates its parameters after each transition.\n",
|
||
"\n",
|
||
" Args:\n",
|
||
" env: Gymnasium ALE/Tennis-v5 environment.\n",
|
||
" agent: Agent instance to train.\n",
|
||
" name: Display name for the progress bar.\n",
|
||
" episodes: Number of training episodes.\n",
|
||
" epsilon_start: Initial exploration rate.\n",
|
||
" epsilon_end: Minimum exploration rate.\n",
|
||
" epsilon_decay: Multiplicative decay per episode.\n",
|
||
" max_steps: Maximum steps per episode.\n",
|
||
"\n",
|
||
" Returns:\n",
|
||
" List of total rewards per episode.\n",
|
||
"\n",
|
||
" \"\"\"\n",
|
||
" rewards_history: list[float] = []\n",
|
||
" epsilon = epsilon_start\n",
|
||
"\n",
|
||
" pbar = tqdm(range(episodes), desc=f\"Training {name}\", leave=True)\n",
|
||
"\n",
|
||
" for _ep in pbar:\n",
|
||
" obs, _info = env.reset()\n",
|
||
" obs = np.asarray(obs)\n",
|
||
" total_reward = 0.0\n",
|
||
"\n",
|
||
" action = agent.get_action(obs, epsilon=epsilon)\n",
|
||
"\n",
|
||
" for _step in range(max_steps):\n",
|
||
" next_obs, reward, terminated, truncated, _info = env.step(action)\n",
|
||
" next_obs = np.asarray(next_obs)\n",
|
||
" done = terminated or truncated\n",
|
||
" reward = float(reward)\n",
|
||
" total_reward += reward\n",
|
||
"\n",
|
||
" next_action = agent.get_action(next_obs, epsilon=epsilon) if not done else None\n",
|
||
"\n",
|
||
" agent.update(\n",
|
||
" state=obs,\n",
|
||
" action=action,\n",
|
||
" reward=reward,\n",
|
||
" next_state=next_obs,\n",
|
||
" done=done,\n",
|
||
" next_action=next_action,\n",
|
||
" )\n",
|
||
"\n",
|
||
" if done:\n",
|
||
" break\n",
|
||
"\n",
|
||
" obs = next_obs\n",
|
||
" action = next_action\n",
|
||
"\n",
|
||
" rewards_history.append(total_reward)\n",
|
||
" epsilon = max(epsilon_end, epsilon * epsilon_decay)\n",
|
||
"\n",
|
||
" recent_window = 50\n",
|
||
" if len(rewards_history) >= recent_window:\n",
|
||
" recent_avg = np.mean(rewards_history[-recent_window:])\n",
|
||
" pbar.set_postfix(\n",
|
||
" avg50=f\"{recent_avg:.1f}\",\n",
|
||
" eps=f\"{epsilon:.3f}\",\n",
|
||
" rew=f\"{total_reward:.0f}\",\n",
|
||
" )\n",
|
||
"\n",
|
||
" return rewards_history\n",
|
||
"\n",
|
||
"def plot_training_curves(\n",
|
||
" training_histories: dict[str, list[float]],\n",
|
||
" path: str,\n",
|
||
" window: int = 100,\n",
|
||
") -> None:\n",
|
||
" \"\"\"Plot training reward curves for all agents on a single figure.\n",
|
||
"\n",
|
||
" Uses a moving average to smooth the curves.\n",
|
||
"\n",
|
||
" Args:\n",
|
||
" training_histories: Dict mapping agent names to reward lists.\n",
|
||
" path: File path to save the plot image.\n",
|
||
" window: Moving average window size.\n",
|
||
"\n",
|
||
" \"\"\"\n",
|
||
" plt.figure(figsize=(12, 6))\n",
|
||
"\n",
|
||
" for name, rewards in training_histories.items():\n",
|
||
" if len(rewards) >= window:\n",
|
||
" ma = np.convolve(rewards, np.ones(window) / window, mode=\"valid\")\n",
|
||
" plt.plot(np.arange(window - 1, len(rewards)), ma, label=name)\n",
|
||
" else:\n",
|
||
" plt.plot(rewards, label=f\"{name} (raw)\")\n",
|
||
"\n",
|
||
" plt.xlabel(\"Episodes\")\n",
|
||
" plt.ylabel(f\"Average Reward (Window={window})\")\n",
|
||
" plt.title(\"Training Curves (vs built-in AI)\")\n",
|
||
" plt.legend()\n",
|
||
" plt.grid(visible=True)\n",
|
||
" plt.tight_layout()\n",
|
||
" plt.savefig(path)\n",
|
||
" plt.show()\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "9605e9c4",
|
||
"metadata": {},
|
||
"source": [
|
||
"## Agent Instantiation & Incremental Training (One Agent at a Time)\n",
|
||
"\n",
|
||
"**Environment**: `ALE/Tennis-v5` (grayscale, 84×84×4 frames → 28,224 features, 18 actions).\n",
|
||
"\n",
|
||
"**Agents**:\n",
|
||
"- **Random** — random baseline (no training needed)\n",
|
||
"- **SARSA** — linear approximation, semi-gradient TD(0)\n",
|
||
"- **Q-Learning** — linear approximation, off-policy\n",
|
||
"- **Monte Carlo** — first-visit MC with linear weights (⚠️ checkpoint loading issues)\n",
|
||
"- **DQN** — deep Q-network with experience replay and target network (⚠️ `.pt` checkpoint loading issues)\n",
|
||
"\n",
|
||
"**Workflow**:\n",
|
||
"1. Train **one** selected agent (`AGENT_TO_TRAIN`)\n",
|
||
"2. Save its weights to `checkpoints/` (`.pkl` for linear agents, `.pt` for DQN)\n",
|
||
"3. Repeat later for another agent without retraining previous ones\n",
|
||
"4. Load all saved checkpoints before the final evaluation"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 23,
|
||
"id": "6f6ba8df",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"Observation shape : (4, 84, 84)\n",
|
||
"Feature vector dim: 28224\n",
|
||
"Number of actions : 18\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"# Create environment\n",
|
||
"env = create_env()\n",
|
||
"obs, _info = env.reset()\n",
|
||
"\n",
|
||
"n_actions = int(env.action_space.n)\n",
|
||
"n_features = int(np.prod(obs.shape))\n",
|
||
"\n",
|
||
"print(f\"Observation shape : {obs.shape}\")\n",
|
||
"print(f\"Feature vector dim: {n_features}\")\n",
|
||
"print(f\"Number of actions : {n_actions}\")\n",
|
||
"\n",
|
||
"# Instantiate agents\n",
|
||
"agent_random = RandomAgent(seed=42, action_space=int(n_actions))\n",
|
||
"agent_sarsa = SarsaAgent(n_features=n_features, n_actions=n_actions, alpha=1e-5)\n",
|
||
"agent_q = QLearningAgent(n_features=n_features, n_actions=n_actions, alpha=1e-5)\n",
|
||
"agent_mc = MonteCarloAgent(n_features=n_features, n_actions=n_actions, alpha=1e-5)\n",
|
||
"agent_dqn = DQNAgent(n_features=n_features, n_actions=n_actions)\n",
|
||
"\n",
|
||
"agents = {\n",
|
||
" \"Random\": agent_random,\n",
|
||
" \"SARSA\": agent_sarsa,\n",
|
||
" \"Q-Learning\": agent_q,\n",
|
||
" \"MonteCarlo\": agent_mc,\n",
|
||
" \"DQN\": agent_dqn,\n",
|
||
"}\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "4d449701",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"Selected agent: Q-Learning\n",
|
||
"Checkpoint path: checkpoints/q_learning.pkl\n",
|
||
"\n",
|
||
"============================================================\n",
|
||
"Training: Q-Learning (5000 episodes)\n",
|
||
"============================================================\n"
|
||
]
|
||
},
|
||
{
|
||
"data": {
|
||
"application/vnd.jupyter.widget-view+json": {
|
||
"model_id": "543ff46900f84f6fa37ccc6989bbe7f2",
|
||
"version_major": 2,
|
||
"version_minor": 0
|
||
},
|
||
"text/plain": [
|
||
"Training Q-Learning: 0%| | 0/5000 [00:00<?, ?it/s]"
|
||
]
|
||
},
|
||
"metadata": {},
|
||
"output_type": "display_data"
|
||
}
|
||
],
|
||
"source": [
|
||
"AGENT_TO_TRAIN = \"Q-Learning\" # TODO: change to: \"Q-Learning\", \"Monte Carlo\", \"Random\"\n",
|
||
"TRAINING_EPISODES = 5000\n",
|
||
"FORCE_RETRAIN = True\n",
|
||
"\n",
|
||
"if AGENT_TO_TRAIN not in agents:\n",
|
||
" msg = f\"Unknown agent '{AGENT_TO_TRAIN}'. Available: {list(agents)}\"\n",
|
||
" raise ValueError(msg)\n",
|
||
"\n",
|
||
"training_histories: dict[str, list[float]] = {}\n",
|
||
"agent = agents[AGENT_TO_TRAIN]\n",
|
||
"checkpoint_path = get_path(AGENT_TO_TRAIN)\n",
|
||
"\n",
|
||
"print(f\"Selected agent: {AGENT_TO_TRAIN}\")\n",
|
||
"print(f\"Checkpoint path: {checkpoint_path}\")\n",
|
||
"\n",
|
||
"if AGENT_TO_TRAIN == \"Random\":\n",
|
||
" print(\"Random is a baseline and is not trained.\")\n",
|
||
" training_histories[AGENT_TO_TRAIN] = []\n",
|
||
"elif checkpoint_path.exists() and not FORCE_RETRAIN:\n",
|
||
" agent.load(str(checkpoint_path))\n",
|
||
" print(\"Checkpoint found -> weights loaded, training skipped.\")\n",
|
||
" training_histories[AGENT_TO_TRAIN] = []\n",
|
||
"else:\n",
|
||
" print(f\"\\n{'='*60}\")\n",
|
||
" print(f\"Training: {AGENT_TO_TRAIN} ({TRAINING_EPISODES} episodes)\")\n",
|
||
" print(f\"{'='*60}\")\n",
|
||
"\n",
|
||
" training_histories[AGENT_TO_TRAIN] = train_agent(\n",
|
||
" env=env,\n",
|
||
" agent=agent,\n",
|
||
" name=AGENT_TO_TRAIN,\n",
|
||
" episodes=TRAINING_EPISODES,\n",
|
||
" epsilon_start=1.0,\n",
|
||
" epsilon_end=0.05,\n",
|
||
" epsilon_decay=0.999,\n",
|
||
" )\n",
|
||
"\n",
|
||
" avg_last_100 = np.mean(training_histories[AGENT_TO_TRAIN][-100:])\n",
|
||
" print(f\"-> {AGENT_TO_TRAIN} avg reward (last 100 eps): {avg_last_100:.2f}\")\n",
|
||
"\n",
|
||
" agent.save(str(checkpoint_path))\n",
|
||
" print(\"Checkpoint saved.\")\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "a13a65df",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"plot_training_curves(\n",
|
||
" training_histories, f\"plots/{AGENT_TO_TRAIN}_training_curves.png\", window=100,\n",
|
||
")\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "3047c4b0",
|
||
"metadata": {},
|
||
"source": [
|
||
"# Multi-Agent Evaluation & Tournament\n",
|
||
"\n",
|
||
"After individual pre-training against Atari's built-in AI, we move on to **head-to-head evaluation** between our agents.\n",
|
||
"\n",
|
||
"**PettingZoo Environment**: unlike Gymnasium (single player vs built-in AI), PettingZoo (`tennis_v3`) allows **two Python agents** to play against each other. The same preprocessing wrappers are applied (grayscale, resize to 84×84, 4-frame stacking) via **SuperSuit**.\n",
|
||
"\n",
|
||
"**Match Protocol**: each matchup is played over **two legs** with swapped positions (`first_0` / `second_0`) to eliminate any side-of-court advantage. Results are tallied as wins, losses, and draws."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "d04d37e0",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"def create_tournament_env():\n",
|
||
" \"\"\"Create PettingZoo Tennis env with preprocessing compatible with our agents.\"\"\"\n",
|
||
" env = tennis_v3.env(obs_type=\"rgb_image\")\n",
|
||
" env = ss.color_reduction_v0(env, mode=\"full\")\n",
|
||
" env = ss.resize_v1(env, x_size=84, y_size=84)\n",
|
||
" return ss.frame_stack_v1(env, 4)\n",
|
||
"\n",
|
||
"\n",
|
||
"def run_match(\n",
|
||
" env: gym.Env,\n",
|
||
" agent_first: Agent,\n",
|
||
" agent_second: Agent,\n",
|
||
" episodes: int = 10,\n",
|
||
" max_steps: int = 4000,\n",
|
||
") -> dict[str, int]:\n",
|
||
" \"\"\"Run multiple PettingZoo episodes between two agents.\n",
|
||
"\n",
|
||
" Returns wins for global labels {'first': ..., 'second': ..., 'draw': ...}.\n",
|
||
" \"\"\"\n",
|
||
" wins = {\"first\": 0, \"second\": 0, \"draw\": 0}\n",
|
||
"\n",
|
||
" for _ep in range(episodes):\n",
|
||
" env.reset()\n",
|
||
" rewards = {\"first_0\": 0.0, \"second_0\": 0.0}\n",
|
||
"\n",
|
||
" for step_idx, agent_id in enumerate(env.agent_iter()):\n",
|
||
" obs, reward, termination, truncation, _info = env.last()\n",
|
||
" done = termination or truncation\n",
|
||
" rewards[agent_id] += float(reward)\n",
|
||
"\n",
|
||
" if done or step_idx >= max_steps:\n",
|
||
" action = None\n",
|
||
" else:\n",
|
||
" current_agent = agent_first if agent_id == \"first_0\" else agent_second\n",
|
||
" action = current_agent.get_action(np.asarray(obs), epsilon=0.0)\n",
|
||
"\n",
|
||
" env.step(action)\n",
|
||
"\n",
|
||
" if step_idx + 1 >= max_steps:\n",
|
||
" break\n",
|
||
"\n",
|
||
" # Determine winner for the current episode\n",
|
||
" if rewards[\"first_0\"] > rewards[\"second_0\"]:\n",
|
||
" wins[\"first\"] += 1\n",
|
||
" elif rewards[\"second_0\"] > rewards[\"first_0\"]:\n",
|
||
" wins[\"second\"] += 1\n",
|
||
" else:\n",
|
||
" wins[\"draw\"] += 1\n",
|
||
"\n",
|
||
" return wins\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "150e6764",
|
||
"metadata": {},
|
||
"source": [
|
||
"## Evaluation against the Random Agent (Baseline)\n",
|
||
"\n",
|
||
"To quantify whether our agents have actually learned, we first evaluate them against the **Random agent** baseline. A properly trained agent should achieve a **win rate significantly above 50%** against a random opponent.\n",
|
||
"\n",
|
||
"Each agent plays **two legs** (one in each position) for a total of 20 episodes. Only decisive matches (excluding draws) are counted in the win rate calculation."
|
||
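"\n",
"For example, with 10 episodes per leg (20 total): 12 wins, 4 losses, and 4 draws give a win rate of $12 / (20 - 4) = 75\\%$."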
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "1b85a88f",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"def evaluate_vs_random(\n",
|
||
" agents: dict[str, Agent],\n",
|
||
" random_agent_name: str = \"Random\",\n",
|
||
" episodes_per_leg: int = 10,\n",
|
||
") -> dict[str, float]:\n",
|
||
" \"\"\"Evaluate each agent against the specified random agent and compute win rates.\"\"\"\n",
|
||
" win_rates = {}\n",
|
||
" env = create_tournament_env()\n",
|
||
" agent_random = agents[random_agent_name]\n",
|
||
"\n",
|
||
" for name, agent in agents.items():\n",
|
||
" if name == random_agent_name:\n",
|
||
" continue\n",
|
||
"\n",
|
||
" leg1 = run_match(env, agent, agent_random, episodes=episodes_per_leg)\n",
|
||
" leg2 = run_match(env, agent_random, agent, episodes=episodes_per_leg)\n",
|
||
"\n",
|
||
" total_wins = leg1[\"first\"] + leg2[\"second\"]\n",
|
||
" total_matches = (episodes_per_leg * 2) - (leg1[\"draw\"] + leg2[\"draw\"])\n",
|
||
"\n",
|
||
" if total_matches == 0:\n",
|
||
" win_rates[name] = 0.5\n",
|
||
" else:\n",
|
||
" win_rates[name] = total_wins / total_matches\n",
|
||
"\n",
|
||
" print(\n",
|
||
" f\"{name} vs {random_agent_name}: {total_wins} wins out of {total_matches} decisive matches (Win rate: {win_rates[name]:.1%})\",\n",
|
||
" )\n",
|
||
"\n",
|
||
" env.close()\n",
|
||
" return win_rates\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "82c24f27",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Loading Checkpoints & Running Evaluation\n",
|
||
"\n",
|
||
"Before evaluation, we load the saved weights for each trained agent from the `checkpoints/` directory. If a checkpoint is missing, the agent will play with its initial weights (zeros), which is effectively random behavior."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "a053644e",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"for name in [\"SARSA\", \"Q-Learning\"]:\n",
|
||
" checkpoint_path = get_path(name)\n",
|
||
" if checkpoint_path.exists():\n",
|
||
" agents[name].load(str(checkpoint_path))\n",
|
||
" else:\n",
|
||
" print(f\"Warning: Missing checkpoint for {name}\")\n",
|
||
"\n",
|
||
"print(\"Evaluation against the Random agent\")\n",
|
||
"win_rates_vs_random = evaluate_vs_random(\n",
|
||
" agents, random_agent_name=\"Random\", episodes_per_leg=10,\n",
|
||
")\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "b9cb30f5",
|
||
"metadata": {},
|
||
"source": [
|
||
"## Championship Match: SARSA vs Q-Learning\n",
|
||
"\n",
|
||
"The final showdown pits the two trained agents against each other: **SARSA** (on-policy) versus **Q-Learning** (off-policy).\n",
|
||
"\n",
|
||
"This match directly compares the two TD learning strategies:\n",
|
||
"- **SARSA** updates its weights following the policy it actually executes (on-policy): $\\delta = r + \\gamma \\hat{q}(s', a') - \\hat{q}(s, a)$\n",
|
||
"- **Q-Learning** learns the optimal policy independently of exploration (off-policy): $\\delta = r + \\gamma \\max_{a'} \\hat{q}(s', a') - \\hat{q}(s, a)$\n",
|
||
"\n",
|
||
"The championship is played over **2 × 20 episodes** with swapped positions to ensure fairness."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "4031bde5",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"def run_championship(\n",
|
||
" agent1_name: str,\n",
|
||
" agent2_name: str,\n",
|
||
" agents: dict[str, Agent],\n",
|
||
" episodes_per_leg: int = 20,\n",
|
||
") -> None:\n",
|
||
" \"\"\"Run a full championship between two agents, playing multiple legs with swapped positions.\"\"\"\n",
|
||
" env = create_tournament_env()\n",
|
||
" agent1 = agents[agent1_name]\n",
|
||
" agent2 = agents[agent2_name]\n",
|
||
"\n",
|
||
" # Leg 1: Agent 1 plays first_0, Agent 2 plays second_0\n",
|
||
" leg1 = run_match(env, agent1, agent2, episodes=episodes_per_leg)\n",
|
||
" # Leg 2: Swap starting positions\n",
|
||
" leg2 = run_match(env, agent2, agent1, episodes=episodes_per_leg)\n",
|
||
"\n",
|
||
" wins_agent1 = leg1[\"first\"] + leg2[\"second\"]\n",
|
||
" wins_agent2 = leg1[\"second\"] + leg2[\"first\"]\n",
|
||
" draws = leg1[\"draw\"] + leg2[\"draw\"]\n",
|
||
"\n",
|
||
" print(f\"--- Final Result: {agent1_name} vs {agent2_name} ---\")\n",
|
||
" print(f\"{agent1_name} wins: {wins_agent1}\")\n",
|
||
" print(f\"{agent2_name} wins: {wins_agent2}\")\n",
|
||
" print(f\"Draws: {draws}\")\n",
|
||
"\n",
|
||
" if wins_agent1 > wins_agent2:\n",
|
||
" print(f\"The winner is {agent1_name}!\")\n",
|
||
" elif wins_agent2 > wins_agent1:\n",
|
||
" print(f\"The winner is {agent2_name}!\")\n",
|
||
" else:\n",
|
||
" print(\"Perfect tie!\")\n",
|
||
"\n",
|
||
" env.close()\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "b07b403c",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"print(\"Championship match between the two trained agents\")\n",
|
||
"run_championship(\"SARSA\", \"Q-Learning\", agents, episodes_per_leg=20)\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "32c37e5d",
|
||
"metadata": {},
|
||
"source": [
|
||
"# Conclusion\n",
|
||
"\n",
|
||
"This project implemented and compared five Reinforcement Learning agents on Atari Tennis:\n",
|
||
"\n",
|
||
"| Agent | Type | Policy | Update Rule |\n",
|
||
"|-------|------|--------|-------------|\n",
|
||
"| **Random** | Baseline | Uniform random | None |\n",
|
||
"| **SARSA** | TD(0), on-policy | ε-greedy | $W_a \\leftarrow W_a + \\alpha \\cdot (r + \\gamma \\hat{q}(s', a') - \\hat{q}(s, a)) \\cdot \\phi(s)$ |\n",
|
||
"| **Q-Learning** | TD(0), off-policy | ε-greedy | $W_a \\leftarrow W_a + \\alpha \\cdot (r + \\gamma \\max_{a'} \\hat{q}(s', a') - \\hat{q}(s, a)) \\cdot \\phi(s)$ |\n",
|
||
"| **Monte Carlo** | First-visit MC | ε-greedy | $W_a \\leftarrow W_a + \\alpha \\cdot (G_t - \\hat{q}(s, a)) \\cdot \\phi(s)$ |\n",
|
||
"| **DQN** | Deep Q-Network | ε-greedy | Neural network (MLP 256→256) with experience replay and target network |\n",
|
||
"\n",
|
||
"**Architecture**:\n",
|
||
"- **Linear agents** (SARSA, Q-Learning, Monte Carlo): $\\hat{q}(s, a; \\mathbf{W}) = \\mathbf{W}_a^\\top \\phi(s)$ with $\\phi(s) \\in \\mathbb{R}^{28\\,224}$ (4 grayscale 84×84 frames, normalized)\n",
|
||
"- **DQN**: MLP network (28,224 → 256 → 256 → 18) trained with Adam optimizer, Huber loss, and periodic target network sync\n",
|
||
"\n",
|
||
"**Methodology**:\n",
|
||
"1. **Pre-training** each agent individually against Atari's built-in AI (5,000 episodes, ε decaying from 1.0 to 0.05)\n",
|
||
"2. **Evaluation vs Random** to validate learning (expected win rate > 50%)\n",
|
||
"3. **Head-to-head tournament** in matches via PettingZoo (2 × 20 episodes)\n",
|
||
"\n",
|
||
"> ⚠️ **Known Issue**: Monte Carlo and DQN agent checkpoints have loading issues. Their code is preserved here for reference."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "15f80f84",
|
||
"metadata": {},
|
||
"source": []
|
||
}
|
||
],
|
||
"metadata": {
|
||
"kernelspec": {
|
||
"display_name": "studies (3.13.9)",
|
||
"language": "python",
|
||
"name": "python3"
|
||
},
|
||
"language_info": {
|
||
"codemirror_mode": {
|
||
"name": "ipython",
|
||
"version": 3
|
||
},
|
||
"file_extension": ".py",
|
||
"mimetype": "text/x-python",
|
||
"name": "python",
|
||
"nbconvert_exporter": "python",
|
||
"pygments_lexer": "ipython3",
|
||
"version": "3.13.9"
|
||
}
|
||
},
|
||
"nbformat": 4,
|
||
"nbformat_minor": 5
|
||
}
|