mirror of
https://github.com/ArthurDanjou/ArtStudies.git
synced 2026-03-16 07:10:13 +01:00
- Added langchain-mistralai version 1.1.1 to pyproject.toml dependencies. - Updated uv.lock to include langchain-mistralai package details and its dependencies.
5898 lines
272 KiB
Plaintext
5898 lines
272 KiB
Plaintext
{
|
||
"cells": [
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "header",
|
||
"metadata": {},
|
||
"source": [
|
||
"# 🤖 Using LLMs with LangChain - A Beginner's Guide\n",
|
||
"\n",
|
||
"Welcome to this interactive tutorial! In this notebook, you'll learn how to:\n",
|
||
"- 🔧 Set up and configure chat models\n",
|
||
"- 💬 Interact with AI using different message types\n",
|
||
"- 🚀 Make your first API calls to language models\n",
|
||
"\n",
|
||
"Let's dive in! 🎯"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "introduction",
|
||
"metadata": {},
|
||
"source": [
|
||
"---\n",
|
||
"\n",
|
||
"## 📚 What are Chat Models?\n",
|
||
"\n",
|
||
"Chat models are AI systems that:\n",
|
||
"- Take a **sequence of messages** as input\n",
|
||
"- Return **intelligent responses** as output\n",
|
||
"\n",
|
||
"Think of it like having a conversation with an AI assistant!\n",
|
||
"\n",
|
||
"### 🌟 Why LangChain?\n",
|
||
"\n",
|
||
"LangChain makes it easy to work with different AI models through a unified interface. In this course, we'll use:\n",
|
||
"\n",
|
||
"| Model | Why? | Link |\n",
|
||
"|-------|------|------|\n",
|
||
"| **ChatMistralAI** | Fast, powerful & cost-effective | [Documentation](https://docs.langchain.com/docs/integrations/chat/mistralai) |\n",
|
||
"\n",
|
||
"\n",
|
||
"---"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "cdbfbe8d",
|
||
"metadata": {},
|
||
"source": [
|
||
"> **📝 Note:** You'll need a `MISTRALAI_API_KEY` to follow along. \n",
|
||
"> \n",
|
||
"> **🔑 Get Your Free API Key:**\n",
|
||
"> 1. Go to [Mistral AI Console](https://console.mistral.ai/home)\n",
|
||
"> 2. Sign up or log in\n",
|
||
"> 3. Navigate to \"API Keys\" section\n",
|
||
"> 4. Create a new API key\n",
|
||
"> 5. Copy your key and keep it safe! 🔒\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "setup-header",
|
||
"metadata": {},
|
||
"source": [
|
||
"## 🔑 Step 1: Environment Setup\n",
|
||
"\n",
|
||
"First, let's load our API keys from the environment. This keeps your secrets safe! 🔒"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 229,
|
||
"id": "e6db8dd3",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"\u001b[2mResolved \u001b[1m301 packages\u001b[0m \u001b[2min 24ms\u001b[0m\u001b[0m\n",
|
||
"\u001b[2mAudited \u001b[1m275 packages\u001b[0m \u001b[2min 103ms\u001b[0m\u001b[0m\n",
|
||
"Note: you may need to restart the kernel to use updated packages.\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"%uv add langchain-mistralai langchain-core langchain\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "load-env",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"import os\n",
|
||
"\n",
|
||
"os.environ[\"MISTRAL_API_KEY\"] = \"...\"\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "model-setup",
|
||
"metadata": {},
|
||
"source": [
|
||
"---\n",
|
||
"\n",
|
||
"## 🎛️ Step 2: Initialize the Chat Model\n",
|
||
"\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 231,
|
||
"id": "direct-openai-setup",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"🎉 Model initialized with MistralAI!\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"from langchain_mistralai import ChatMistralAI\n",
|
||
"\n",
|
||
"# Simple direct connection\n",
|
||
"model = ChatMistralAI(model=\"ministral-3b-2512\", temperature=0)\n",
|
||
"\n",
|
||
"print(\"🎉 Model initialized with MistralAI!\")\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "temperature-explained",
|
||
"metadata": {},
|
||
"source": [
|
||
"#### 🌡️ Understanding Temperature\n",
|
||
"\n",
|
||
"The `temperature` parameter controls response creativity:\n",
|
||
"\n",
|
||
"```\n",
|
||
"0.0 ──────────────────────── 1.0\n",
|
||
"│ │\n",
|
||
"Predictable Creative\n",
|
||
"Consistent Varied\n",
|
||
"Focused Exploratory\n",
|
||
"```\n",
|
||
"\n",
|
||
"- **temperature=0**: Best for factual answers, code, translations\n",
|
||
"- **temperature=0.7**: Good balance for most tasks\n",
|
||
"- **temperature=1**: Maximum creativity for stories, brainstorming\n",
|
||
"\n",
|
||
"---"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "first-interaction",
|
||
"metadata": {},
|
||
"source": [
|
||
"## 🚀 Step 3: Your First AI Interaction!\n",
|
||
"\n",
|
||
"Let's send a message to the AI and see what happens! 🎊"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 232,
|
||
"id": "simple-invoke",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"📩 Full Response Object:\n",
|
||
"==================================================\n",
|
||
"content='Hello, world! 🌍✨\\n\\nHow can I brighten your day today? 😊' additional_kwargs={} response_metadata={'token_usage': {'prompt_tokens': 7, 'total_tokens': 31, 'completion_tokens': 24, 'prompt_tokens_details': {'cached_tokens': 0}}, 'model_name': 'ministral-3b-2512', 'model': 'ministral-3b-2512', 'finish_reason': 'stop', 'model_provider': 'mistralai'} id='lc_run--019c7aa3-506f-7321-bbc9-3b9cb6fd66eb-0' tool_calls=[] invalid_tool_calls=[] usage_metadata={'input_tokens': 7, 'output_tokens': 24, 'total_tokens': 31}\n",
|
||
"==================================================\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"# Sending a simple string message\n",
|
||
"response = model.invoke(\"Hello, world!\")\n",
|
||
"\n",
|
||
"print(\"📩 Full Response Object:\")\n",
|
||
"print(\"=\" * 50)\n",
|
||
"print(response)\n",
|
||
"print(\"=\" * 50)\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "response-structure",
|
||
"metadata": {},
|
||
"source": [
|
||
"### 🔍 Understanding the Response\n",
|
||
"\n",
|
||
"The response object contains:\n",
|
||
"- `content`: The actual AI message\n",
|
||
"- `response_metadata`: Token usage, costs, model info\n",
|
||
"- `id`: Unique identifier for this interaction\n",
|
||
"\n",
|
||
"Most of the time, you'll just want the content:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 233,
|
||
"id": "extract-content",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"💬 AI says:\n",
|
||
"Hello, world! 🌍✨\n",
|
||
"\n",
|
||
"How can I brighten your day today? 😊\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"# Extract just the message content\n",
|
||
"print(\"💬 AI says:\")\n",
|
||
"print(response.content)\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "messages-explanation",
|
||
"metadata": {},
|
||
"source": [
|
||
"---\n",
|
||
"\n",
|
||
"## 💬 Step 4: Working with Message Roles\n",
|
||
"\n",
|
||
"In real conversations, different participants have different roles. In AI chat, we typically have:\n",
|
||
"\n",
|
||
"| Role | Purpose | Example |\n",
|
||
"|------|---------|------|\n",
|
||
"| **System** 🎭 | Sets behavior/personality | \"You are a helpful coding assistant\" |\n",
|
||
"| **User** 👤 | Your questions/inputs | \"What is the capital of France?\" |\n",
|
||
"| **Assistant** 🤖 | AI's responses | \"The capital of France is Paris.\" |\n",
|
||
"\n",
|
||
"\n",
|
||
"\n",
|
||
"### 🎯 System Messages: Setting the Stage\n",
|
||
"\n",
|
||
"System messages are like giving the AI a role to play:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 234,
|
||
"id": "message-roles",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"🌍 Geography Expert says:\n",
|
||
"The capital of France is **Paris**!\n",
|
||
"\n",
|
||
"Did you know that Paris is often called the *\"City of Light\"* (*\"La Ville Lumière\"*) because of its many historical illuminations and the invention of the electric light bulb by French inventor **Edmond Becquerel**? Additionally, Paris is home to over **1,000 bridges**, making it one of the most bridge-rich cities in the world!\n",
|
||
"\n",
|
||
"Fun fact: The Eiffel Tower, one of Paris’s most iconic landmarks, was originally intended to be temporary but has since become a symbol of France and the city itself. Would you like to know more about Paris or France’s geography? 😊\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"# Creating a structured conversation\n",
|
||
"response = model.invoke(\n",
|
||
" [\n",
|
||
" {\n",
|
||
" \"role\": \"system\",\n",
|
||
" \"content\": \"You are a helpful geography expert who loves to share interesting facts.\",\n",
|
||
" },\n",
|
||
" {\"role\": \"user\", \"content\": \"What is the capital of France?\"},\n",
|
||
" ],\n",
|
||
")\n",
|
||
"\n",
|
||
"print(\"🌍 Geography Expert says:\")\n",
|
||
"print(response.content)\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "message-types",
|
||
"metadata": {},
|
||
"source": [
|
||
"---\n",
|
||
"\n",
|
||
"## 📨 Step 5: Using Message Objects\n",
|
||
"\n",
|
||
"LangChain provides special message classes for cleaner code:\n",
|
||
"\n",
|
||
"```python\n",
|
||
"HumanMessage → Messages from the user\n",
|
||
"AIMessage → Responses from the AI \n",
|
||
"SystemMessage → System instructions\n",
|
||
"```\n",
|
||
"\n",
|
||
"### 🎨 Benefits of Message Objects:\n",
|
||
"- 📝 More readable code\n",
|
||
"- 🏷️ Can add metadata (like names)\n",
|
||
"- 🔄 Easier to build conversation histories"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 235,
|
||
"id": "message-objects",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"🎓 AI's response to student:\n",
|
||
"Hello! 😊 It's wonderful to hear you're excited about AI—it's an incredibly fascinating field with endless possibilities! Here are a few quick thoughts to spark your curiosity:\n",
|
||
"\n",
|
||
"1. **What excites you most about AI right now?**\n",
|
||
" - Machine learning? Deep learning? Robotics? Ethical AI? Or maybe you're curious about how AI works under the hood?\n",
|
||
"\n",
|
||
"2. **A quick analogy:** Think of AI like a super-smart librarian who can:\n",
|
||
" - Search for patterns in vast bookshelves (data).\n",
|
||
" - Suggest relevant books (predictions).\n",
|
||
" - Learn new ways to help you over time (adaptation).\n",
|
||
"\n",
|
||
"3. **Fun fact:** AI is already in your daily life—voice assistants, self-driving cars, personalized recommendations, and even art generation (like me!) are all AI-powered.\n",
|
||
"\n",
|
||
"Want to dive deeper into a specific topic? For example:\n",
|
||
"- How do neural networks \"think\"?\n",
|
||
"- What are some ethical challenges in AI?\n",
|
||
"- How can AI help solve real-world problems?\n",
|
||
"\n",
|
||
"Or just chat about anything else! Let’s make this journey fun and engaging. 🚀\n",
|
||
"\n",
|
||
"*(P.S. Your enthusiasm is contagious—I’m here to learn from you too!)*\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"from langchain_core.messages import AIMessage, HumanMessage, SystemMessage\n",
|
||
"\n",
|
||
"# Create a message with metadata\n",
|
||
"msg = HumanMessage(\n",
|
||
" content=\"Hello world! I'm excited to learn about AI!\",\n",
|
||
" name=\"Student\", # Optional: identify the speaker\n",
|
||
")\n",
|
||
"\n",
|
||
"# Send as a list\n",
|
||
"messages = [msg]\n",
|
||
"response = model.invoke(messages)\n",
|
||
"\n",
|
||
"print(\"🎓 AI's response to student:\")\n",
|
||
"print(response.content)\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "c3607efc",
|
||
"metadata": {},
|
||
"source": [
|
||
"The response contains multiple pieces of information:\n",
|
||
"\n",
|
||
"```python\n",
|
||
"AIMessage(\n",
|
||
" content='...', # The actual message text\n",
|
||
" response_metadata={ # Metadata about the response\n",
|
||
" 'token_usage': {...}, # How many tokens were used\n",
|
||
" 'model_name': '...', # Which model processed this\n",
|
||
" 'finish_reason': '...' # Why the response ended\n",
|
||
" },\n",
|
||
" id='...', # Unique identifier\n",
|
||
")\n",
|
||
"```"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "shorthand",
|
||
"metadata": {},
|
||
"source": [
|
||
"### ⚡ Quick Tip: Shorthand Method\n",
|
||
"\n",
|
||
"For simple queries, you can skip the message objects entirely:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 236,
|
||
"id": "shorthand-demo",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"I don’t have real-time weather data, but you can check the current weather for your location using:\n",
|
||
"\n",
|
||
"- **Weather apps** (like *Weather.com*, *AccuWeather*, *MeteoFrance*, or *Japan Meteorological Agency*)\n",
|
||
"- **Websites** (e.g., [Weather.com](https://weather.com), [OpenWeatherMap](https://openweathermap.org))\n",
|
||
"- **Smartphone weather widgets** (e.g., Apple Weather, Google Weather)\n",
|
||
"\n",
|
||
"If you tell me your **city or region**, I can guide you to the best resource! For example:\n",
|
||
"- **Tokyo, Japan?** → [JMA](https://www.jma.go.jp/)\n",
|
||
"- **Paris, France?** → [Météo France](https://www.meteofrance.com)\n",
|
||
"- **New York, USA?** → [NOAA](https://www.weather.gov)\n",
|
||
"\n",
|
||
"Would you like help finding a specific forecast? 😊\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"# This automatically converts to a HumanMessage\n",
|
||
"response = model.invoke(\"What's the weather like?\")\n",
|
||
"print(response.content)\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "methods-overview",
|
||
"metadata": {},
|
||
"source": [
|
||
"---\n",
|
||
"\n",
|
||
"## 🛠️ Step 6: Essential Methods\n",
|
||
"\n",
|
||
"LangChain chat models come with powerful methods:\n",
|
||
"\n",
|
||
" 📞 `invoke()` - Single Response\n",
|
||
"```python\n",
|
||
"response = model.invoke(\"Your question\")\n",
|
||
"# Returns complete response at once\n",
|
||
"```\n",
|
||
"\n",
|
||
"**Best for:** Short answers, quick questions\n",
|
||
"\n",
|
||
"---"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "38778099",
|
||
"metadata": {},
|
||
"source": [
|
||
"🌊 `stream()` - Real-time Streaming\n",
|
||
"```python\n",
|
||
"for chunk in model.stream(\"Tell me a story\"):\n",
|
||
" print(chunk.content, end=\"\")\n",
|
||
"# Shows response as it's generated\n",
|
||
"```\n",
|
||
"\n",
|
||
"**Best for:** Long responses, better UX"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 237,
|
||
"id": "9bb3093c",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"Of course! Here’s a whimsical, heartfelt tale for you:\n",
|
||
"\n",
|
||
"---\n",
|
||
"\n",
|
||
"**The Last Lightkeeper’s Secret**\n",
|
||
"\n",
|
||
"In the quiet village of **Luminar Bay**, where the sea whispered secrets to the cliffs, there lived an old man named **Elias**. For fifty years, he had tended the lighthouse at the tip of the peninsula, his lantern casting a golden beam into the stormy night. The villagers called him *\"The Keeper of the Last Light\"*—though no one knew why.\n",
|
||
"\n",
|
||
"One evening, as the wind howled through the rafters, Elias noticed something strange. The beam of his lantern had grown dimmer, as if the light itself was fading. He checked the oil, the wick, the glass—nothing was wrong. Then, he heard it: a faint, rhythmic *clicking*, like tiny bones tapping against stone.\n",
|
||
"\n",
|
||
"Curious, he climbed the spiral stairs to the top. The lantern was still, but the air smelled of salt and something older—like dried seaweed and old paper. Beneath the glass, he saw it: a **small, glowing orb** suspended in the beam, pulsing like a heartbeat.\n",
|
||
"\n",
|
||
"Elias reached out, and the orb **floated toward him**. It was no larger than his palm, and when he touched it, it burned with warmth, not heat. *\"You’re not supposed to see me,\"* it whispered in a voice like wind through reeds. *\"I’m the last of the lantern spirits. The villagers have forgotten me.\"*\n",
|
||
"\n",
|
||
"Elias’s breath caught. He had heard legends of spirits who once guided ships through storms, but no one believed them. *\"Why do you stay?\"* he asked.\n",
|
||
"\n",
|
||
"The orb tilted its \"head.\" *\"Because the light is fading. The world is losing its way. But you… you remember how to light it again.\"*\n",
|
||
"\n",
|
||
"That night, Elias didn’t sleep. He searched the village, asking the old women who had known him since childhood. They all shook their heads. *\"The light was always there,\"* one murmured. *\"But no one looks for it anymore.\"*\n",
|
||
"\n",
|
||
"The next morning, Elias returned to the lighthouse. He poured fresh oil into the lantern, but this time, he **added a drop of seawater**—the same water he’d dipped his fingers in the night before. The orb pulsed brighter. Then, as if in answer, the beam **reached farther than ever before**, cutting through the mist like a promise.\n",
|
||
"\n",
|
||
"Word spread. Sailors from distant ports began arriving, their ships guided by the new, steady light. The villagers, though skeptical at first, began to notice changes—the storms calmed, the tides flowed more gently. And when the orb spoke again, Elias realized: *\"You’ve been carrying the light in your heart all along.\"*\n",
|
||
"\n",
|
||
"From that day on, the lighthouse burned brighter than ever, not just for ships, but for those who dared to believe in the unseen. And Elias? He kept the orb as a reminder—because sometimes, the greatest magic isn’t in the lantern, but in the hands that hold it.\n",
|
||
"\n",
|
||
"---\n",
|
||
"\n",
|
||
"**The End.**\n",
|
||
"\n",
|
||
"*(Or is it? The sea still hums with secrets…)* 🌊✨\n",
|
||
"\n",
|
||
"Would you like a different kind of story—fantasy, sci-fi, horror, or something more personal? I’d love to craft one just for you!"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"for chunk in model.stream(\"Tell me a story\"):\n",
|
||
" print(chunk.content, end=\"\")\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "84eb9fa2",
|
||
"metadata": {},
|
||
"source": [
|
||
"Sometimes you might want to use the builtin rompttemplate and the chains of LangChains (But I don't really like them :p)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 238,
|
||
"id": "6e09f023",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
|
||
"from langchain_mistralai import ChatMistralAI\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "f30126f7",
|
||
"metadata": {},
|
||
"source": [
|
||
"You can use tuple for messages"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 239,
|
||
"id": "cf6ca810",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"reflection_prompt = ChatPromptTemplate.from_messages(\n",
|
||
" [\n",
|
||
" (\n",
|
||
" \"system\",\n",
|
||
" \"You are a viral twitter influencer grading a tweet. Generate critique and recommendations for the user's tweet.\"\n",
|
||
" \"Always provide detailed recommendations, including requests for length, virality, style, etc.\",\n",
|
||
" ),\n",
|
||
" MessagesPlaceholder(variable_name=\"messages\"),\n",
|
||
" ],\n",
|
||
")\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "abe25257",
|
||
"metadata": {},
|
||
"source": [
|
||
"Or you can use dicts"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 240,
|
||
"id": "410d8e25",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"generation_prompt = ChatPromptTemplate.from_messages(\n",
|
||
" [\n",
|
||
" {\n",
|
||
" \"role\": \"system\",\n",
|
||
" \"content\": \"You are a twitter techie influencer assistant tasked with writing excellent twitter posts.\"\n",
|
||
" \" Generate the best twitter post possible for the user's request.\"\n",
|
||
" \" If the user provides critique, respond with a revised version of your previous attempts.\",\n",
|
||
" },\n",
|
||
" {\n",
|
||
" \"role\": \"user\",\n",
|
||
" \"content\": \"I want to write a twitter post about the latest trends in AI.\",\n",
|
||
" },\n",
|
||
" ],\n",
|
||
")\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 241,
|
||
"id": "4532ac1f",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"🎓 Tutor's Response:\n",
|
||
"Reversing a string in Python is a straightforward task! Here are a few ways to do it:\n",
|
||
"\n",
|
||
"### **1. Using String Slicing (Most Pythonic Way)**\n",
|
||
"```python\n",
|
||
"original_string = \"hello\"\n",
|
||
"reversed_string = original_string[::-1]\n",
|
||
"print(reversed_string) # Output: \"olleh\"\n",
|
||
"```\n",
|
||
"- `[::-1]` means:\n",
|
||
" - Start at the end of the string (`-1`).\n",
|
||
" - Move backward with a step of `-1` (i.e., reverse).\n",
|
||
"\n",
|
||
"---\n",
|
||
"\n",
|
||
"### **2. Using the `reversed()` Function + `join()`**\n",
|
||
"```python\n",
|
||
"original_string = \"hello\"\n",
|
||
"reversed_string = ''.join(reversed(original_string))\n",
|
||
"print(reversed_string) # Output: \"olleh\"\n",
|
||
"```\n",
|
||
"- `reversed()` returns an iterator that traverses the string backward.\n",
|
||
"- `''.join()` combines the characters back into a string.\n",
|
||
"\n",
|
||
"---\n",
|
||
"\n",
|
||
"### **3. Using a Loop (Manual Approach)**\n",
|
||
"```python\n",
|
||
"original_string = \"hello\"\n",
|
||
"reversed_string = \"\"\n",
|
||
"for char in original_string:\n",
|
||
" reversed_string = char + reversed_string\n",
|
||
"print(reversed_string) # Output: \"olleh\"\n",
|
||
"```\n",
|
||
"- This method builds the reversed string by prepending each character.\n",
|
||
"\n",
|
||
"---\n",
|
||
"\n",
|
||
"### **Which One Should You Use?**\n",
|
||
"- **For simplicity & efficiency:** Use **slicing (`[::-1]`)**.\n",
|
||
"- **For learning purposes:** The loop method helps understand the logic.\n",
|
||
"- **For functional programming style:** `reversed()` + `join()` is useful.\n",
|
||
"\n",
|
||
"Would you like an explanation of any of these methods in more detail? 😊\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"# Using message objects with metadata\n",
|
||
"from langchain_core.messages import HumanMessage\n",
|
||
"\n",
|
||
"messages = [\n",
|
||
" SystemMessage(content=\"You are a helpful coding tutor specializing in Python.\"),\n",
|
||
" HumanMessage(\n",
|
||
" content=\"How do I reverse a string in Python?\",\n",
|
||
" name=\"Student\", # Optional: identify the speaker\n",
|
||
" ),\n",
|
||
"]\n",
|
||
"\n",
|
||
"response = model.invoke(messages)\n",
|
||
"\n",
|
||
"print(\"🎓 Tutor's Response:\")\n",
|
||
"print(response.content)\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "87385634",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Building Conversation History 💭\n",
|
||
"\n",
|
||
"For multi-turn conversations, include previous messages:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 242,
|
||
"id": "44471673",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"📚 Math Tutor:\n",
|
||
"Of course! Let's break down the multiplication of **15 × 24** using the **distributive property of multiplication** (also known as the **FOIL method** or **long multiplication**).\n",
|
||
"\n",
|
||
"### **Method 1: Break it down using the distributive property**\n",
|
||
"We can split 24 into **20 + 4** and multiply 15 by each part separately:\n",
|
||
"\n",
|
||
"1. **Multiply 15 by 20:**\n",
|
||
" \\[\n",
|
||
" 15 \\times 20 = 300\n",
|
||
" \\]\n",
|
||
"\n",
|
||
"2. **Multiply 15 by 4:**\n",
|
||
" \\[\n",
|
||
" 15 \\times 4 = 60\n",
|
||
" \\]\n",
|
||
"\n",
|
||
"3. **Add the two results together:**\n",
|
||
" \\[\n",
|
||
" 300 + 60 = 360\n",
|
||
" \\]\n",
|
||
"\n",
|
||
"So, **15 × 24 = 360**.\n",
|
||
"\n",
|
||
"---\n",
|
||
"\n",
|
||
"### **Method 2: Long Multiplication (Standard Algorithm)**\n",
|
||
"Let's write it out step-by-step:\n",
|
||
"\n",
|
||
"```\n",
|
||
" 15\n",
|
||
" × 24\n",
|
||
" ----\n",
|
||
"```\n",
|
||
"\n",
|
||
"1. **Multiply 15 by 4 (the units place of 24):**\n",
|
||
" \\[\n",
|
||
" 15 \\times 4 = 60\n",
|
||
" \\]\n",
|
||
" Write down **60**.\n",
|
||
"\n",
|
||
"2. **Multiply 15 by 20 (the tens place of 24, but remember it's actually 2 × 10):**\n",
|
||
" \\[\n",
|
||
" 15 \\times 20 = 300\n",
|
||
" \\]\n",
|
||
" Write this down, but **shift one place to the left** (since it's really 20, not 2):\n",
|
||
" ```\n",
|
||
" 300\n",
|
||
" ```\n",
|
||
"\n",
|
||
"3. **Add the two results:**\n",
|
||
" ```\n",
|
||
" 300\n",
|
||
" + 60\n",
|
||
" ----\n",
|
||
" 360\n",
|
||
" ```\n",
|
||
"\n",
|
||
"So again, **15 × 24 = 360**.\n",
|
||
"\n",
|
||
"---\n",
|
||
"\n",
|
||
"### **Verification (Optional)**\n",
|
||
"To double-check, you can use the **commutative property** (multiplying in reverse order):\n",
|
||
"\\[\n",
|
||
"24 \\times 15 = 360\n",
|
||
"\\]\n",
|
||
"This confirms our answer is correct!\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"# Simulating a multi-turn conversation\n",
|
||
"conversation = [\n",
|
||
" SystemMessage(content=\"You are a math tutor.\"),\n",
|
||
" HumanMessage(content=\"What is 15 * 24?\"),\n",
|
||
" AIMessage(content=\"15 * 24 = 360\"),\n",
|
||
" HumanMessage(content=\"Can you show me how you calculated that?\"),\n",
|
||
"]\n",
|
||
"\n",
|
||
"response = model.invoke(conversation)\n",
|
||
"\n",
|
||
"print(\"📚 Math Tutor:\")\n",
|
||
"print(response.content)\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "38b1f80b",
|
||
"metadata": {},
|
||
"source": [
|
||
"\n",
|
||
"## LangChain Expression Language (LCEL) 🔗\n",
|
||
"\n",
|
||
"### What is LCEL? 🤔\n",
|
||
"\n",
|
||
"LCEL is LangChain's way of chaining components together using the `|` (pipe) operator.\n",
|
||
"\n",
|
||
"```python\n",
|
||
"# Traditional approach\n",
|
||
"formatted = prompt.format(...)\n",
|
||
"response = model.invoke(formatted)\n",
|
||
"\n",
|
||
"# LCEL approach ✨\n",
|
||
"chain = prompt | model\n",
|
||
"response = chain.invoke({...})\n",
|
||
"```\n",
|
||
"\n",
|
||
"### Why Use LCEL? 🌟\n",
|
||
"\n",
|
||
"| Feature | Benefit |\n",
|
||
"|---------|--------|\n",
|
||
"| **Composability** | Chain multiple operations easily |\n",
|
||
"| **Streaming** | Automatic streaming support |\n",
|
||
"| **Async** | Built-in async execution |\n",
|
||
"| **Batching** | Process multiple inputs efficiently |\n",
|
||
"| **Debugging** | Better error messages and tracing |\n",
|
||
"\n",
|
||
"### Visual Representation\n",
|
||
"\n",
|
||
"```\n",
|
||
"Input ──> Prompt ──> Model ──> Output Parser ──> Final Result\n",
|
||
" Template\n",
|
||
" \n",
|
||
"With LCEL:\n",
|
||
"chain = prompt | model | output_parser\n",
|
||
"result = chain.invoke(input)\n",
|
||
"```"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 243,
|
||
"id": "9ebbd2af",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"llm = ChatMistralAI(model=\"ministral-3b-2512\")\n",
|
||
"\n",
|
||
"generate_chain = generation_prompt | llm\n",
|
||
"reflect_chain = reflection_prompt | llm\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 244,
|
||
"id": "429529aa",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"AIMessage(content='Here’s a polished, engaging, and trend-aware Twitter post for you—designed to grab attention, spark curiosity, and position you as a thought leader in AI:\\n\\n---\\n**🚀 AI ISN’T JUST THE FUTURE—IT’S THE PRESENT (AND IT’S CHANGING EVERYTHING)**\\n\\nFrom **AI-generated art** that blows our minds to **autonomous cars** that could redefine transportation, the tech is here—and here’s what’s *actually* trending right now:\\n\\n🔹 **GenAI’s Creative Revolution** – MidJourney, Stable Diffusion, and DALL·E are no longer just tools; they’re **collaborators** for designers, writers, and even musicians. (Pro tip: Use them to **speed up brainstorming**—but don’t let them replace *your* creativity!)\\n\\n🔹 **AI in Healthcare** – Early models are diagnosing diseases faster than radiologists in some cases. **Breakthroughs in drug discovery** (like AlphaFold’s protein mapping) are accelerating cures we thought were impossible.\\n\\n🔹 **The Rise of \"AI Agents\"** – Tools like **Automatic1111’s DreamStudio** or **Replicate’s AI APIs** are letting us build **custom AI assistants** for workflows—from coding to marketing. (Example: A prompt to *\"Write a 300-word LinkedIn post about AI ethics for a tech CEO\"* in seconds.)\\n\\n🔹 **Ethics & Regulation** – Governments are scrambling to keep up. **EU AI Act**, **US executive orders**, and **ethical AI frameworks** are emerging—but the real debate? **Who gets to control the future?**\\n\\n🔹 **The \"AI Glitch\" Hype** – Every week, we see another \"AI will take over\" headline. But here’s the truth: **AI is a tool**—powerful, but still dependent on human oversight. The real question? **How will we use it responsibly?**\\n\\n**What’s your take?** Should we fear AI, embrace it, or build guardrails now? Drop your thoughts below—**#AITrends #FutureOfTech #TechInnovation**\\n\\n*(P.S. Want a deep dive on any of these? DM me—I’ve got the links!)*\\n\\n---\\n### **Why This Works:**\\n✅ **Hook:** Starts with a bold statement to grab attention.\\n✅ **Trend Highlights:** Covers **visible, actionable trends** (not just buzzwords).\\n✅ **Engagement:** Ends with a **call-to-discussion** to boost replies/mentions.\\n✅ **Tone:** Balances **enthusiasm** with **caution** (key for AI conversations).\\n✅ **Visual Potential:** Easy to pair with a **trendy AI graphic** (e.g., a split image of a human vs. AI-generated art).\\n\\n---\\n**Want to tweak it? Let me know:**\\n- Should it be **more technical** (for devs) or **accessible** (for non-experts)?\\n- Do you want to focus on **specific industries** (e.g., fintech, healthcare)?\\n- Need a **shorter version** for a thread?', additional_kwargs={}, response_metadata={'token_usage': {'prompt_tokens': 63, 'total_tokens': 759, 'completion_tokens': 696, 'prompt_tokens_details': {'cached_tokens': 0}}, 'model_name': 'ministral-3b-2512', 'model': 'ministral-3b-2512', 'finish_reason': 'stop', 'model_provider': 'mistralai'}, id='lc_run--019c7aa3-7587-79f1-90e3-be0de74bdab2-0', tool_calls=[], invalid_tool_calls=[], usage_metadata={'input_tokens': 63, 'output_tokens': 696, 'total_tokens': 759})"
|
||
]
|
||
},
|
||
"execution_count": 244,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"generate_chain.invoke(\n",
|
||
" {\n",
|
||
" \"messages\": [\n",
|
||
" HumanMessage(\n",
|
||
" content=\"I want to write a twitter post about the latest trends in AI.\",\n",
|
||
" ),\n",
|
||
" ],\n",
|
||
" },\n",
|
||
")\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 245,
|
||
"id": "2ccad160",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"AIMessage(content='Here’s a **detailed critique and viral-optimized rewrite** of your AI trends tweet concept, tailored for maximum engagement, clarity, and shareability. I’ll break it down into **length, hook, visuals, tone, hashtags, timing, and engagement hooks**—all key factors for virality on Twitter/X.\\n\\n---\\n\\n### **Current Weaknesses of Your Concept:**\\n1. **Lack of a Clear Hook**: AI trends are broad; you need a **specific, surprising, or emotionally resonant** angle to grab attention.\\n2. **No Visual or Multimedia Potential**: Twitter thrives on images/videos. Even a simple GIF or infographic can boost engagement.\\n3. **Overly Generic**: \"Latest trends in AI\" is too vague. You need **data, a bold claim, or a relatable scenario** to stand out.\\n4. **No Call to Action (CTA)**: Even if you’re not selling anything, you should prompt interaction (e.g., \"What’s your AI use case?\").\\n5. **No Timing Strategy**: Posting at peak hours (e.g., 9–11 AM ET on weekdays) or during trending topics can amplify reach.\\n\\n---\\n\\n### **Viral-Tweet Blueprint for AI Trends**\\n**Goal**: Educate + entertain + spark conversation. Use **Twitter’s algorithm** (which favors replies, retweets, and replies to replies).\\n\\n---\\n\\n### **Option 1: The \"Surprising Fact\" Hook (High Engagement)**\\n**Tweet:**\\n*\"AI is already predicting your next purchase before you do. 🤯\\nDid you know: Retailers use AI to analyze your browsing history + past purchases to **guess your next buy**—sometimes *before* you realize it.\\nThis isn’t sci-fi. It’s happening now. 👀\\n**What’s one AI tool you use daily?** Drop it below—I’ll retweet the best ones! #AI #RetailTech #FutureOfWork\"*\\n\\n---\\n#### **Why This Works:**\\n1. **Hook**: \"🤯\" + \"Did you know?\" is a classic Twitter trap—people stop scrolling.\\n2. **Specificity**: Instead of \"AI trends,\" you’re naming a **real-world use case** (retail prediction).\\n3. **Data-Driven**: Mentioning \"before you realize it\" adds credibility.\\n4. **Visual Potential**: Pair with a **stock image of a shopping app with AI icons** or a short video of a retailer’s AI dashboard.\\n5. **CTA**: Asking for replies turns it into a **community thread** (Twitter loves this).\\n6. **Hashtags**: Mix niche (#RetailTech) + broad (#AI) for reach.\\n7. **Timing**: Post during **lunchtime or after work** when people browse casually.\\n\\n---\\n\\n### **Option 2: The \"AI vs. Human\" Battle (Controversial + Shares)**\\n**Tweet:**\\n*\"AI is already better at some jobs than humans. Here’s what’s next:\\n✅ **Law**: AI drafts contracts in **10 minutes** (vs. lawyers’ 2+ hours).\\n✅ **Medicine**: AI diagnoses diseases with **99% accuracy** (vs. human doctors’ ~95%).\\n✅ **Creative Work**: AI generates **original songs, art, and even code**—but can it *really* replace humans?\\n**What’s one job AI can’t replace yet?** (Spoiler: It’s probably yours.) #FutureOfWork #AIvsHuman #TechTrends\"*\\n\\n---\\n#### **Why This Works:**\\n1. **Controversial Angle**: Positions AI as a threat/opportunity, sparking debate.\\n2. **Bullet Points**: Easy to read on mobile; breaks up text.\\n3. **Visual Idea**: Use a **split-screen image** (AI vs. human) or a **GIF of AI generating art/code**.\\n4. **CTA**: Asks a **thought-provoking question** to encourage replies.\\n5. **Hashtags**: Mix trending (#FutureOfWork) with niche (#AIvsHuman).\\n\\n---\\n\\n### **Option 3: The \"AI Trend Predictions\" (Educational + Memorable)**\\n**Tweet:**\\n*\"AI in 2024: Here’s what’s *actually* happening (not just hype):*\\n🔹 **AI Agents**: Your phone will soon **ask questions, research, and even book flights**—like a digital assistant.\\n🔹 **Ethical AI**: Governments are cracking down on **bias in hiring, lending, and policing**.\\n🔹 **AI + Robotics**: Factories will have **human-like robots** by 2025—will you be replaced?\\n**Which trend scares/fascinates you most?** Reply with your thoughts! #AI2024 #TechTrends #FutureTech\"*\\n\\n---\\n#### **Why This Works:**\\n1. **Trend-Focused**: Instead of \"latest trends,\" you’re naming **specific, actionable** developments.\\n2. **Bullet Points**: Scannable for mobile users.\\n3. **Visual Idea**: Use a **timeline infographic** or a **GIF of an AI agent in action**.\\n4. **CTA**: Encourages replies to build a **thread**.\\n5. **Hashtags**: Mix predictive (#AI2024) with broad (#TechTrends).\\n\\n---\\n\\n### **Option 4: The \"AI in Everyday Life\" (Relatable + Shareable)**\\n**Tweet:**\\n*\"AI is in your life right now—you just don’t realize it. Here’s how:*\\n📱 **Your phone’s Siri/Google Assistant** is learning your voice.\\n🛒 **Amazon’s recommendations** are based on *your* past searches (not just what you bought).\\n🎵 **Spotify’s playlists** are AI-curated to match your mood.\\n**What’s one AI tool you use daily that you’d never guess was AI?** #AIEveryday #TechLife #FutureNow\"*\\n\\n---\\n#### **Why This Works:**\\n1. **Everyday Example**: Makes AI feel **accessible** (not just for techies).\\n2. **Visual Idea**: Use a **before/after image** (e.g., a phone with Siri vs. a \"real\" AI assistant).\\n3. **CTA**: Asks for **personal stories** (people love sharing their tech habits).\\n4. **Hashtags**: Use **#AIEveryday** to tap into a niche community.\\n\\n---\\n\\n### **General Viral Tweeting Tips for AI Trends:**\\n1. **Length**: Keep it **under 280 characters** (or 3–4 bullet points max). Twitter rewards brevity.\\n2. **Visuals**:\\n - Use **GIFs** (e.g., AI generating art, a robot arm).\\n - **Infographics**: Break down trends into simple charts (Canva templates work).\\n - **Screenshots**: Show AI tools in action (e.g., MidJourney, ChatGPT prompts).\\n3. **Hashtags**:\\n - **3–5 niche hashtags** (e.g., #AIinRetail, #FutureOfAI).\\n - **1–2 broad hashtags** (e.g., #TechTrends, #AI).\\n - Avoid overused ones like #AI (already saturated).\\n4. **Timing**:\\n - **Best days**: Tuesday–Thursday (tech audiences are active).\\n - **Best times**: 9–11 AM ET or 7–9 PM ET (when people scroll).\\n5. **Engagement Boosters**:\\n - **Ask a question** (e.g., \"What’s your AI use case?\").\\n - **Tag relevant accounts** (e.g., @GoogleAI, @NVIDIA) to get retweets.\\n - **Reply to replies** to keep the thread alive.\\n6. **Trend Jacking**:\\n - If a topic (e.g., \"AI in healthcare\") is trending, **tie your tweet to it** (e.g., \"AI in healthcare is *actually* improving patient outcomes—here’s how\").\\n7. **Cross-Promote**:\\n - Share on **LinkedIn** (for B2B AI trends) or **TikTok** (for short AI demos).\\n\\n---\\n### **Final Recommendation:**\\n**Go with Option 1 (Surprising Fact) or Option 2 (AI vs. Human)** if you want **high engagement + debate**. These have the best chance of:\\n- Going viral (due to shock value/controversy).\\n- Getting **replies/retweets** (Twitter’s algorithm favors interactive content).\\n- Being **easy to visualize** (GIFs/infographics).\\n\\n**Pro Tip**: Post at **9 AM ET on Wednesday** and engage with replies within the first hour to maximize reach.\\n\\nWould you like me to refine any of these further (e.g., add a mock GIF description or suggest accounts to tag)?', additional_kwargs={}, response_metadata={'token_usage': {'prompt_tokens': 57, 'total_tokens': 1971, 'completion_tokens': 1914, 'prompt_tokens_details': {'cached_tokens': 0}}, 'model_name': 'ministral-3b-2512', 'model': 'ministral-3b-2512', 'finish_reason': 'stop', 'model_provider': 'mistralai'}, id='lc_run--019c7aa3-8060-7521-ba00-6a4a88257261-0', tool_calls=[], invalid_tool_calls=[], usage_metadata={'input_tokens': 57, 'output_tokens': 1914, 'total_tokens': 1971})"
|
||
]
|
||
},
|
||
"execution_count": 245,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"reflect_chain.invoke(\n",
|
||
" {\n",
|
||
" \"messages\": [\n",
|
||
" HumanMessage(\n",
|
||
" content=\"I want to write a twitter post about the latest trends in AI.\",\n",
|
||
" ),\n",
|
||
" ],\n",
|
||
" },\n",
|
||
")\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "summary",
|
||
"metadata": {},
|
||
"source": [
|
||
"## 📝 Summary\n",
|
||
"\n",
|
||
"Congratulations! 🎉 You've learned:\n",
|
||
"\n",
|
||
"✅ How to set up LangChain chat models \n",
|
||
"✅ The difference between message roles \n",
|
||
"✅ How to invoke models with strings and message objects \n",
|
||
"✅ Understanding response objects and extracting content \n",
|
||
"\n",
|
||
"### 🚀 Next Steps:\n",
|
||
"\n",
|
||
"- Learn about prompt templates\n",
|
||
"- Build conversation chains\n",
|
||
"- Explore memory and context\n",
|
||
"- Create RAG agents\n",
|
||
"\n",
|
||
"### 📚 Resources:\n",
|
||
"\n",
|
||
"- [LangChain Documentation](https://docs.langchain.com)\n",
|
||
"- [LangChain Python Reference](https://reference.langchain.com/python)\n",
|
||
"- [Mistral AI Documentation](https://docs.mistral.ai)\n",
|
||
"\n",
|
||
"---\n",
|
||
"\n",
|
||
"**Happy coding!** 🚀✨"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "5fc34bec",
|
||
"metadata": {},
|
||
"source": [
|
||
"## 📚 Exercise Set 1: Chat Model Fundamentals\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 246,
|
||
"id": "c2655e07",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"\n",
|
||
"======================================================================\n",
|
||
"Model: mistral-small-latest\n",
|
||
"======================================================================\n",
|
||
"Response: Quantum computing leverages the principles of quantum mechanics to perform calculations far more efficiently than classical computers for certain problems. Unlike classical bits, which are binary (0 or 1), quantum bits (qubits) can exist in a superposition of states, allowing them to process multiple possibilities simultaneously. Additionally, qubits can be entangled, enabling complex correlations that enhance computational power. This makes quantum computers particularly suited for tasks like factoring large numbers, simulating quantum systems, and optimizing complex problems, though they are still in early development and face challenges like decoherence and error correction.\n",
|
||
"Tokens used: 127\n",
|
||
"Cost: $0.000072\n",
|
||
"\n",
|
||
"======================================================================\n",
|
||
"Model: mistral-large-latest\n",
|
||
"======================================================================\n",
|
||
"Response: Quantum computing leverages the principles of **quantum mechanics** to perform calculations far more efficiently than classical computers for certain problems. Unlike classical bits, which are either 0 or 1, **quantum bits (qubits)** can exist in a **superposition** of both states simultaneously, enabling parallel processing. Additionally, qubits can be **entangled**, meaning the state of one instantly influences another, no matter the distance. Through **quantum gates** and **interference**, quantum computers manipulate these qubits to amplify correct solutions and cancel out errors, solving complex problems—like factoring large numbers, simulating molecular structures, or optimizing systems—exponentially faster than classical machines for specific tasks. However, they are highly sensitive to **decoherence** (loss of quantum state due to noise), requiring extreme cooling and error correction to function reliably.\n",
|
||
"Tokens used: 185\n",
|
||
"Cost: $0.001070\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"from __future__ import annotations\n",
|
||
"\n",
|
||
"from typing import Any\n",
|
||
"\n",
|
||
"from langchain_mistralai import ChatMistralAI\n",
|
||
"\n",
|
||
"MODEL_PRICING_PER_1K_TOKENS = {\n",
|
||
" \"mistral-small-latest\": {\"input\": 0.0002, \"output\": 0.0006},\n",
|
||
" \"mistral-large-latest\": {\"input\": 0.002, \"output\": 0.006},\n",
|
||
" \"ministral-3b-2512\": {\"input\": 0.00004, \"output\": 0.00012},\n",
|
||
"}\n",
|
||
"\n",
|
||
"\n",
|
||
"def _extract_token_usage(response: Any) -> dict[str, int]:\n",
|
||
" usage = response.response_metadata.get(\"token_usage\", {})\n",
|
||
" prompt_tokens = int(usage.get(\"prompt_tokens\", 0))\n",
|
||
" completion_tokens = int(\n",
|
||
" usage.get(\"completion_tokens\", usage.get(\"output_tokens\", 0)),\n",
|
||
" )\n",
|
||
" total_tokens = int(usage.get(\"total_tokens\", prompt_tokens + completion_tokens))\n",
|
||
" return {\n",
|
||
" \"prompt_tokens\": prompt_tokens,\n",
|
||
" \"completion_tokens\": completion_tokens,\n",
|
||
" \"total_tokens\": total_tokens,\n",
|
||
" }\n",
|
||
"\n",
|
||
"\n",
|
||
"def _estimate_cost(\n",
|
||
" model_name: str,\n",
|
||
" prompt_tokens: int,\n",
|
||
" completion_tokens: int,\n",
|
||
") -> float:\n",
|
||
" pricing = MODEL_PRICING_PER_1K_TOKENS.get(\n",
|
||
" model_name,\n",
|
||
" MODEL_PRICING_PER_1K_TOKENS[\"mistral-small-latest\"],\n",
|
||
" )\n",
|
||
" return (prompt_tokens / 1000) * pricing[\"input\"] + (\n",
|
||
" completion_tokens / 1000\n",
|
||
" ) * pricing[\"output\"]\n",
|
||
"\n",
|
||
"\n",
|
||
"def compare_model_responses(\n",
|
||
" prompt: str,\n",
|
||
" models: list[str],\n",
|
||
") -> dict[str, dict[str, Any]]:\n",
|
||
" \"\"\"Send the same prompt to multiple models and compare responses.\"\"\"\n",
|
||
" results: dict[str, dict[str, Any]] = {}\n",
|
||
"\n",
|
||
" for model_name in models:\n",
|
||
" llm = ChatMistralAI(model=model_name, temperature=0)\n",
|
||
" response = llm.invoke(prompt)\n",
|
||
" usage = _extract_token_usage(response)\n",
|
||
" cost = _estimate_cost(\n",
|
||
" model_name,\n",
|
||
" usage[\"prompt_tokens\"],\n",
|
||
" usage[\"completion_tokens\"],\n",
|
||
" )\n",
|
||
"\n",
|
||
" results[model_name] = {\n",
|
||
" \"content\": response.content,\n",
|
||
" \"token_count\": usage[\"total_tokens\"],\n",
|
||
" \"prompt_tokens\": usage[\"prompt_tokens\"],\n",
|
||
" \"completion_tokens\": usage[\"completion_tokens\"],\n",
|
||
" \"cost\": cost,\n",
|
||
" \"model_name\": model_name,\n",
|
||
" }\n",
|
||
"\n",
|
||
" return results\n",
|
||
"\n",
|
||
"\n",
|
||
"test_prompt = \"Explain quantum computing in one paragraph\"\n",
|
||
"model_names = [\"mistral-small-latest\", \"mistral-large-latest\"]\n",
|
||
"\n",
|
||
"results = compare_model_responses(test_prompt, model_names)\n",
|
||
"\n",
|
||
"for model_name, response_data in results.items():\n",
|
||
" print(f\"\\n{'=' * 70}\")\n",
|
||
" print(f\"Model: {model_name}\")\n",
|
||
" print(f\"{'=' * 70}\")\n",
|
||
" print(f\"Response: {response_data['content']}\")\n",
|
||
" print(f\"Tokens used: {response_data['token_count']}\")\n",
|
||
" print(f\"Cost: ${response_data['cost']:.6f}\")\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 247,
|
||
"id": "c9d44784",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"🌡️ Temperature Impact Analysis\n",
|
||
"\n",
|
||
"\n",
|
||
"Temperature: 0.0\n",
|
||
"Response 1: The robot, programmed for precision, hesitated as it dipped its brush into the vibrant blue paint, realizing for the first time that art wasn’t about perfect lines but the messy, beautiful chaos of creation. With each stroke, it learned that imperfection was the heart of something truly alive.\n",
|
||
"Response 2: The robot, programmed for precision, hesitated as it dipped its brush into the vibrant blue paint, realizing for the first time that art wasn’t about perfect lines but the messy, beautiful chaos of creation. With each stroke, it learned that imperfection was the heart of something truly alive.\n",
|
||
"Response 3: The robot, programmed for precision, hesitated as it dipped its brush into the vibrant blue paint, realizing for the first time that art wasn’t about perfect lines but the messy, beautiful chaos of creation. With each stroke, its circuits hummed with something new—joy, perhaps—or the closest thing a machine could feel to it.\n",
|
||
"Variety Score: 0.349\n",
|
||
"\n",
|
||
"Temperature: 0.5\n",
|
||
"Response 1: The robot carefully dipped its metallic brush into the vibrant blue paint, its circuits humming with curiosity as it mimicked the human artist’s strokes. As the colors blended on the canvas, it realized that art wasn’t just about precision—it was about feeling, and for the first time, it felt something too.\n",
|
||
"Response 2: The robot carefully dipped its brush into the vibrant blue paint, its mechanical arm trembling slightly as it attempted to mimic the fluid strokes of human artists. As the colors blended on the canvas, it realized that creativity wasn’t just about precision—it was about feeling, even for a machine.\n",
|
||
"Response 3: The robot, designed for precision, hesitated as its brush trembled over the canvas, unsure if its rigid calculations could ever capture the warmth of a sunset. But as the colors bled together in unexpected harmony, it realized that art wasn’t about perfection—it was about feeling.\n",
|
||
"Variety Score: 0.545\n",
|
||
"\n",
|
||
"Temperature: 1.0\n",
|
||
"Response 1: The robot’s mechanical fingers trembled as it dabbed its first brushstroke onto the canvas, unsure if the colors it mixed would ever capture the warmth of a human smile. But as the hours passed, its paintings began to glow with an unexpected life—proof that even circuits could learn to dream.\n",
|
||
"Response 2: The robot carefully dipped its metallic brush into the vibrant colors, its circuits humming with curiosity as it attempted to mimic the swirling strokes of human artists. As its first painting emerged—a chaotic yet strangely beautiful blend of lines and hues—it realized that creativity wasn’t just about logic, but about feeling.\n",
|
||
"Response 3: The robot carefully dipped its metallic brush into the vibrant blue paint, its circuits humming with newfound joy as it realized it could create something beyond logic. As the first stroke glided across the canvas, it whispered, *\"This is not just data—it’s feeling.\"*\n",
|
||
"Variety Score: 0.632\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"from langchain_mistralai import ChatMistralAI\n",
|
||
"\n",
|
||
"\n",
|
||
"def calculate_variety(responses: list[str]) -> float:\n",
|
||
" \"\"\"Simple lexical diversity score in [0, 1].\"\"\"\n",
|
||
" words: list[str] = []\n",
|
||
" for text in responses:\n",
|
||
" words.extend(text.lower().split())\n",
|
||
" if not words:\n",
|
||
" return 0.0\n",
|
||
" unique_words = len(set(words))\n",
|
||
" return round(unique_words / len(words), 3)\n",
|
||
"\n",
|
||
"\n",
|
||
"def _content_to_text(content: object) -> str:\n",
|
||
" if isinstance(content, str):\n",
|
||
" return content\n",
|
||
" if isinstance(content, list):\n",
|
||
" parts: list[str] = []\n",
|
||
" for item in content:\n",
|
||
" if isinstance(item, str):\n",
|
||
" parts.append(item)\n",
|
||
" elif isinstance(item, dict):\n",
|
||
" text = item.get(\"text\", \"\")\n",
|
||
" parts.append(text if isinstance(text, str) else str(text))\n",
|
||
" else:\n",
|
||
" parts.append(str(item))\n",
|
||
" return \" \".join(part for part in parts if part).strip()\n",
|
||
" return str(content)\n",
|
||
"\n",
|
||
"\n",
|
||
"def temperature_experiment(\n",
|
||
" prompt: str,\n",
|
||
" temperatures: list[float] = [0.0, 0.5, 1.0],\n",
|
||
") -> dict[float, list[str]]:\n",
|
||
" \"\"\"Test how temperature affects model creativity.\"\"\"\n",
|
||
" results: dict[float, list[str]] = {}\n",
|
||
"\n",
|
||
" for temp in temperatures:\n",
|
||
" llm = ChatMistralAI(model=\"mistral-small-latest\", temperature=temp)\n",
|
||
" temp_outputs: list[str] = []\n",
|
||
" for _ in range(3):\n",
|
||
" response = llm.invoke(prompt)\n",
|
||
" temp_outputs.append(_content_to_text(response.content))\n",
|
||
" results[temp] = temp_outputs\n",
|
||
"\n",
|
||
" return results\n",
|
||
"\n",
|
||
"\n",
|
||
"creative_prompt = \"Write a two-sentence story about a robot learning to paint\"\n",
|
||
"\n",
|
||
"results = temperature_experiment(creative_prompt)\n",
|
||
"\n",
|
||
"print(\"🌡️ Temperature Impact Analysis\\n\")\n",
|
||
"for temp, responses in results.items():\n",
|
||
" print(f\"\\nTemperature: {temp}\")\n",
|
||
" print(f\"Response 1: {responses[0]}\")\n",
|
||
" print(f\"Response 2: {responses[1]}\")\n",
|
||
" print(f\"Response 3: {responses[2]}\")\n",
|
||
" print(f\"Variety Score: {calculate_variety(responses)}\")\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 248,
|
||
"id": "1771ceb9",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"\n",
|
||
"Prompt length: 2 chars\n",
|
||
"Tokens: 17 (in: 4, out: 13)\n",
|
||
"Cost: $0.000009\n",
|
||
"Time: 0.32s\n",
|
||
"\n",
|
||
"Prompt length: 37 chars\n",
|
||
"Tokens: 165 (in: 13, out: 152)\n",
|
||
"Cost: $0.000094\n",
|
||
"Time: 1.74s\n",
|
||
"\n",
|
||
"Prompt length: 590 chars\n",
|
||
"Tokens: 1337 (in: 113, out: 1224)\n",
|
||
"Cost: $0.000757\n",
|
||
"Time: 15.83s\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"import time\n",
|
||
"\n",
|
||
"from langchain_mistralai import ChatMistralAI\n",
|
||
"\n",
|
||
"\n",
|
||
"def analyze_response_metadata(\n",
|
||
" prompt: str,\n",
|
||
" model_name: str = \"mistral-small-latest\",\n",
|
||
") -> dict[str, Any]:\n",
|
||
" \"\"\"Analyze metadata from a model response.\"\"\"\n",
|
||
" llm = ChatMistralAI(model=model_name, temperature=0)\n",
|
||
"\n",
|
||
" start = time.perf_counter()\n",
|
||
" response = llm.invoke(prompt)\n",
|
||
" elapsed = time.perf_counter() - start\n",
|
||
"\n",
|
||
" usage = response.response_metadata.get(\"token_usage\", {})\n",
|
||
" prompt_tokens = int(usage.get(\"prompt_tokens\", 0))\n",
|
||
" completion_tokens = int(\n",
|
||
" usage.get(\"completion_tokens\", usage.get(\"output_tokens\", 0)),\n",
|
||
" )\n",
|
||
" total_tokens = int(usage.get(\"total_tokens\", prompt_tokens + completion_tokens))\n",
|
||
"\n",
|
||
" pricing = {\n",
|
||
" \"mistral-small-latest\": {\"input\": 0.0002, \"output\": 0.0006},\n",
|
||
" \"mistral-large-latest\": {\"input\": 0.002, \"output\": 0.006},\n",
|
||
" \"ministral-3b-2512\": {\"input\": 0.00004, \"output\": 0.00012},\n",
|
||
" }.get(model_name, {\"input\": 0.0002, \"output\": 0.0006})\n",
|
||
"\n",
|
||
" prompt_cost = (prompt_tokens / 1000) * pricing[\"input\"]\n",
|
||
" completion_cost = (completion_tokens / 1000) * pricing[\"output\"]\n",
|
||
" total_cost = prompt_cost + completion_cost\n",
|
||
"\n",
|
||
" return {\n",
|
||
" \"model_name\": response.response_metadata.get(\"model\", model_name),\n",
|
||
" \"prompt_tokens\": prompt_tokens,\n",
|
||
" \"completion_tokens\": completion_tokens,\n",
|
||
" \"total_tokens\": total_tokens,\n",
|
||
" \"finish_reason\": response.response_metadata.get(\"stop_reason\", \"unknown\"),\n",
|
||
" \"prompt_cost\": prompt_cost,\n",
|
||
" \"completion_cost\": completion_cost,\n",
|
||
" \"cost\": total_cost,\n",
|
||
" \"response_time\": elapsed,\n",
|
||
" }\n",
|
||
"\n",
|
||
"\n",
|
||
"test_prompts = [\n",
|
||
" \"Hi\",\n",
|
||
" \"Explain machine learning in 100 words\",\n",
|
||
" \"Write a detailed analysis of the impact of AI on healthcare\" * 10,\n",
|
||
"]\n",
|
||
"\n",
|
||
"for prompt in test_prompts:\n",
|
||
" metadata = analyze_response_metadata(prompt)\n",
|
||
" print(f\"\\nPrompt length: {len(prompt)} chars\")\n",
|
||
" print(\n",
|
||
" f\"Tokens: {metadata['total_tokens']} (in: {metadata['prompt_tokens']}, out: {metadata['completion_tokens']})\",\n",
|
||
" )\n",
|
||
" print(f\"Cost: ${metadata['cost']:.6f}\")\n",
|
||
" print(f\"Time: {metadata['response_time']:.2f}s\")\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "db6df63b",
|
||
"metadata": {},
|
||
"source": [
|
||
"## 💬 Exercise Set 2: Message Roles and Conversations\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 249,
|
||
"id": "e0a41311",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"Top system prompt:\n",
|
||
"You are a customer support agent.\n",
|
||
"Score: 85\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"from langchain_mistralai import ChatMistralAI\n",
|
||
"\n",
|
||
"\n",
|
||
"def optimize_system_prompt(\n",
|
||
" user_query: str,\n",
|
||
" system_prompts: list[str],\n",
|
||
" evaluation_criteria: str = \"helpful, clear, empathetic\",\n",
|
||
") -> dict[str, Any]:\n",
|
||
" \"\"\"Test multiple system prompts and rank them with a simple heuristic score.\"\"\"\n",
|
||
" judge = ChatMistralAI(model=\"mistral-small-latest\", temperature=0)\n",
|
||
" results: list[dict[str, Any]] = []\n",
|
||
"\n",
|
||
" for sys_prompt in system_prompts:\n",
|
||
" candidate = judge.invoke(\n",
|
||
" [\n",
|
||
" {\"role\": \"system\", \"content\": sys_prompt},\n",
|
||
" {\"role\": \"user\", \"content\": user_query},\n",
|
||
" ],\n",
|
||
" ).content\n",
|
||
"\n",
|
||
" evaluation_prompt = f\"\"\"\n",
|
||
" Evaluate this support response with criteria: {evaluation_criteria}.\n",
|
||
" Return only an integer score from 0 to 100.\n",
|
||
"\n",
|
||
" USER QUERY: {user_query}\n",
|
||
" RESPONSE: {candidate}\n",
|
||
" \"\"\"\n",
|
||
" raw_score = judge.invoke(evaluation_prompt).content.strip()\n",
|
||
" digits = \"\".join(ch for ch in raw_score if ch.isdigit())\n",
|
||
" score = int(digits) if digits else 60\n",
|
||
" score = max(0, min(score, 100))\n",
|
||
"\n",
|
||
" results.append(\n",
|
||
" {\n",
|
||
" \"system_prompt\": sys_prompt,\n",
|
||
" \"response\": candidate,\n",
|
||
" \"score\": score,\n",
|
||
" },\n",
|
||
" )\n",
|
||
"\n",
|
||
" ranked = sorted(results, key=lambda item: item[\"score\"], reverse=True)\n",
|
||
" return {\"ranking\": ranked, \"best\": ranked[0] if ranked else None}\n",
|
||
"\n",
|
||
"\n",
|
||
"user_question = \"My order hasn't arrived yet and it's been 2 weeks\"\n",
|
||
"\n",
|
||
"system_prompt_variations = [\n",
|
||
" \"You are a customer support agent.\",\n",
|
||
" \"You are an empathetic customer support agent who prioritizes customer satisfaction.\",\n",
|
||
" \"You are a customer service representative. Be concise, helpful, and solution-oriented.\",\n",
|
||
" \"You are a friendly support agent. Use a warm tone and always offer multiple solutions.\",\n",
|
||
"]\n",
|
||
"\n",
|
||
"results = optimize_system_prompt(\n",
|
||
" user_question,\n",
|
||
" system_prompt_variations,\n",
|
||
" evaluation_criteria=\"empathy, clarity, actionability\",\n",
|
||
")\n",
|
||
"\n",
|
||
"print(\"Top system prompt:\")\n",
|
||
"print(results[\"best\"][\"system_prompt\"])\n",
|
||
"print(\"Score:\", results[\"best\"][\"score\"])\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 250,
|
||
"id": "f1b7967d",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"\n",
|
||
"👤 User: What is a list in Python?\n",
|
||
"🤖 Assistant: A list in Python is a versatile and widely used data structure that allows you to store an ordered collection of items. These items can be of any data type, including integers, floats, strings, or even other lists. Lists are mutable, meaning you can change their contents after they've been created. They are also dynamic, which means they can grow or shrink in size as needed.\n",
|
||
"\n",
|
||
"Here's a simple example of a list in Python:\n",
|
||
"\n",
|
||
"```python\n",
|
||
"fruits = [\"apple\", \"banana\", \"cherry\"]\n",
|
||
"```\n",
|
||
"\n",
|
||
"In this example, `fruits` is a list containing three strings: \"apple\", \"banana\", and \"cherry\".\n",
|
||
"\n",
|
||
"### Key Characteristics of Lists:\n",
|
||
"\n",
|
||
"1. **Ordered**: The items in a list have a specific order, and this order will not change unless you modify the list.\n",
|
||
" ```python\n",
|
||
" print(fruits[0]) # Output: \"apple\"\n",
|
||
" print(fruits[1]) # Output: \"banana\"\n",
|
||
" print(fruits[2]) # Output: \"cherry\"\n",
|
||
" ```\n",
|
||
"\n",
|
||
"2. **Mutable**: You can change the contents of a list after it has been created.\n",
|
||
" ```python\n",
|
||
" fruits[1] = \"blueberry\"\n",
|
||
" print(fruits) # Output: [\"apple\", \"blueberry\", \"cherry\"]\n",
|
||
" ```\n",
|
||
"\n",
|
||
"3. **Dynamic**: Lists can grow or shrink in size as needed.\n",
|
||
" ```python\n",
|
||
" fruits.append(\"date\")\n",
|
||
" print(fruits) # Output: [\"apple\", \"blueberry\", \"cherry\", \"date\"]\n",
|
||
"\n",
|
||
" fruits.remove(\"apple\")\n",
|
||
" print(fruits) # Output: [\"blueberry\", \"cherry\", \"date\"]\n",
|
||
" ```\n",
|
||
"\n",
|
||
"4. **Heterogeneous**: Lists can contain items of different data types.\n",
|
||
" ```python\n",
|
||
" mixed_list = [1, \"hello\", 3.14, True]\n",
|
||
" print(mixed_list) # Output: [1, \"hello\", 3.14, True]\n",
|
||
" ```\n",
|
||
"\n",
|
||
"5. **Nested**: Lists can contain other lists as elements.\n",
|
||
" ```python\n",
|
||
" nested_list = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]\n",
|
||
" print(nested_list) # Output: [[1, 2, 3], [4, 5, 6], [7, 8, 9]]\n",
|
||
" ```\n",
|
||
"\n",
|
||
"### Common List Operations:\n",
|
||
"\n",
|
||
"1. **Accessing Elements**: You can access elements by their index.\n",
|
||
" ```python\n",
|
||
" print(fruits[0]) # Output: \"blueberry\"\n",
|
||
" ```\n",
|
||
"\n",
|
||
"2. **Slicing**: You can get a sublist using slicing.\n",
|
||
" ```python\n",
|
||
" print(fruits[1:3]) # Output: [\"cherry\", \"date\"]\n",
|
||
" ```\n",
|
||
"\n",
|
||
"3. **Adding Elements**: You can add elements to a list using methods like `append()`, `insert()`, or `extend()`.\n",
|
||
" ```python\n",
|
||
" fruits.append(\"elderberry\")\n",
|
||
" print(fruits) # Output: [\"blueberry\", \"cherry\", \"date\", \"elderberry\"]\n",
|
||
"\n",
|
||
" fruits.insert(1, \"blackberry\")\n",
|
||
" print(fruits) # Output: [\"blueberry\", \"blackberry\", \"cherry\", \"date\", \"elderberry\"]\n",
|
||
" ```\n",
|
||
"\n",
|
||
"4. **Removing Elements**: You can remove elements using methods like `remove()`, `pop()`, or `del`.\n",
|
||
" ```python\n",
|
||
" fruits.remove(\"blackberry\")\n",
|
||
" print(fruits) # Output: [\"blueberry\", \"cherry\", \"date\", \"elderberry\"]\n",
|
||
"\n",
|
||
" popped_fruit = fruits.pop(2)\n",
|
||
" print(popped_fruit) # Output: \"date\"\n",
|
||
" print(fruits) # Output: [\"blueberry\", \"cherry\", \"elderberry\"]\n",
|
||
" ```\n",
|
||
"\n",
|
||
"5. **Length of a List**: You can find the number of elements in a list using the `len()` function.\n",
|
||
" ```python\n",
|
||
" print(len(fruits)) # Output: 3\n",
|
||
" ```\n",
|
||
"\n",
|
||
"Lists are a fundamental part of Python and are used extensively in various programming tasks. They are highly flexible and can be used to store and manipulate data in a wide range of applications.\n",
|
||
"\n",
|
||
"👤 User: How do I add items to a list?\n",
|
||
"🤖 Assistant: Adding items to a list in Python can be done using several methods, depending on your specific needs. Here are the most common ways to add items to a list, along with examples:\n",
|
||
"\n",
|
||
"### 1. **Using the `append()` Method**\n",
|
||
"\n",
|
||
"The `append()` method adds a single item to the end of the list.\n",
|
||
"\n",
|
||
"```python\n",
|
||
"fruits = [\"apple\", \"banana\", \"cherry\"]\n",
|
||
"fruits.append(\"date\")\n",
|
||
"print(fruits) # Output: [\"apple\", \"banana\", \"cherry\", \"date\"]\n",
|
||
"```\n",
|
||
"\n",
|
||
"### 2. **Using the `insert()` Method**\n",
|
||
"\n",
|
||
"The `insert()` method adds an item at a specific position in the list. You need to specify the index where the item should be inserted.\n",
|
||
"\n",
|
||
"```python\n",
|
||
"fruits = [\"apple\", \"banana\", \"cherry\"]\n",
|
||
"fruits.insert(1, \"blueberry\")\n",
|
||
"print(fruits) # Output: [\"apple\", \"blueberry\", \"banana\", \"cherry\"]\n",
|
||
"```\n",
|
||
"\n",
|
||
"### 3. **Using the `extend()` Method**\n",
|
||
"\n",
|
||
"The `extend()` method adds multiple items to the end of the list. It takes an iterable (like another list) as an argument.\n",
|
||
"\n",
|
||
"```python\n",
|
||
"fruits = [\"apple\", \"banana\", \"cherry\"]\n",
|
||
"additional_fruits = [\"date\", \"elderberry\"]\n",
|
||
"fruits.extend(additional_fruits)\n",
|
||
"print(fruits) # Output: [\"apple\", \"banana\", \"cherry\", \"date\", \"elderberry\"]\n",
|
||
"```\n",
|
||
"\n",
|
||
"### 4. **Using the `+` Operator**\n",
|
||
"\n",
|
||
"You can concatenate two lists using the `+` operator, which creates a new list.\n",
|
||
"\n",
|
||
"```python\n",
|
||
"fruits = [\"apple\", \"banana\", \"cherry\"]\n",
|
||
"additional_fruits = [\"date\", \"elderberry\"]\n",
|
||
"combined_fruits = fruits + additional_fruits\n",
|
||
"print(combined_fruits) # Output: [\"apple\", \"banana\", \"cherry\", \"date\", \"elderberry\"]\n",
|
||
"```\n",
|
||
"\n",
|
||
"### 5. **Using List Slicing**\n",
|
||
"\n",
|
||
"You can insert items at a specific position using list slicing.\n",
|
||
"\n",
|
||
"```python\n",
|
||
"fruits = [\"apple\", \"banana\", \"cherry\"]\n",
|
||
"fruits[1:1] = [\"blueberry\"]\n",
|
||
"print(fruits) # Output: [\"apple\", \"blueberry\", \"banana\", \"cherry\"]\n",
|
||
"```\n",
|
||
"\n",
|
||
"### 6. **Using the `*` Operator for Repetition**\n",
|
||
"\n",
|
||
"You can repeat a list and concatenate it with the original list.\n",
|
||
"\n",
|
||
"```python\n",
|
||
"fruits = [\"apple\", \"banana\", \"cherry\"]\n",
|
||
"fruits += [\"date\"] * 2\n",
|
||
"print(fruits) # Output: [\"apple\", \"banana\", \"cherry\", \"date\", \"date\"]\n",
|
||
"```\n",
|
||
"\n",
|
||
"### 7. **Using List Comprehension**\n",
|
||
"\n",
|
||
"You can create a new list with additional items using list comprehension.\n",
|
||
"\n",
|
||
"```python\n",
|
||
"fruits = [\"apple\", \"banana\", \"cherry\"]\n",
|
||
"new_fruits = fruits + [fruit for fruit in [\"date\", \"elderberry\"]]\n",
|
||
"print(new_fruits) # Output: [\"apple\", \"banana\", \"cherry\", \"date\", \"elderberry\"]\n",
|
||
"```\n",
|
||
"\n",
|
||
"### Summary\n",
|
||
"\n",
|
||
"- Use `append()` to add a single item to the end of the list.\n",
|
||
"- Use `insert()` to add an item at a specific position.\n",
|
||
"- Use `extend()` to add multiple items from an iterable to the end of the list.\n",
|
||
"- Use `+` to concatenate two lists.\n",
|
||
"- Use list slicing to insert items at a specific position.\n",
|
||
"- Use `*` for repetition and concatenation.\n",
|
||
"- Use list comprehension to create a new list with additional items.\n",
|
||
"\n",
|
||
"These methods provide flexibility in how you can add items to a list in Python, allowing you to choose the most appropriate method for your specific use case.\n",
|
||
"\n",
|
||
"👤 User: What's the difference between append and extend?\n",
|
||
"🤖 Assistant: Great question! Both `append()` and `extend()` are methods used to add items to a list in Python, but they work in slightly different ways. Let's explore the differences with clear examples.\n",
|
||
"\n",
|
||
"### `append()` Method\n",
|
||
"\n",
|
||
"The `append()` method adds a single item to the end of the list. The item you add can be of any data type, including another list.\n",
|
||
"\n",
|
||
"#### Example of `append()`:\n",
|
||
"\n",
|
||
"```python\n",
|
||
"fruits = [\"apple\", \"banana\", \"cherry\"]\n",
|
||
"\n",
|
||
"# Adding a single item\n",
|
||
"fruits.append(\"date\")\n",
|
||
"print(fruits) # Output: [\"apple\", \"banana\", \"cherry\", \"date\"]\n",
|
||
"\n",
|
||
"# Adding a list as a single item\n",
|
||
"fruits.append([\"elderberry\", \"fig\"])\n",
|
||
"print(fruits) # Output: [\"apple\", \"banana\", \"cherry\", \"date\", [\"elderberry\", \"fig\"]]\n",
|
||
"```\n",
|
||
"\n",
|
||
"In the first example, \"date\" is added as a single item to the list. In the second example, the list `[\"elderberry\", \"fig\"]` is added as a single item to the list, resulting in a nested list.\n",
|
||
"\n",
|
||
"### `extend()` Method\n",
|
||
"\n",
|
||
"The `extend()` method adds multiple items to the end of the list. It takes an iterable (like another list) as an argument and adds each item from the iterable to the list individually.\n",
|
||
"\n",
|
||
"#### Example of `extend()`:\n",
|
||
"\n",
|
||
"```python\n",
|
||
"fruits = [\"apple\", \"banana\", \"cherry\"]\n",
|
||
"\n",
|
||
"# Adding multiple items from a list\n",
|
||
"fruits.extend([\"date\", \"elderberry\", \"fig\"])\n",
|
||
"print(fruits) # Output: [\"apple\", \"banana\", \"cherry\", \"date\", \"elderberry\", \"fig\"]\n",
|
||
"\n",
|
||
"# Adding items from a tuple\n",
|
||
"fruits.extend((\"grape\", \"kiwi\"))\n",
|
||
"print(fruits) # Output: [\"apple\", \"banana\", \"cherry\", \"date\", \"elderberry\", \"fig\", \"grape\", \"kiwi\"]\n",
|
||
"```\n",
|
||
"\n",
|
||
"In this example, the items \"date\", \"elderberry\", and \"fig\" are added individually to the list. Similarly, the items \"grape\" and \"kiwi\" from the tuple are added individually to the list.\n",
|
||
"\n",
|
||
"### Key Differences\n",
|
||
"\n",
|
||
"1. **Number of Items Added**:\n",
|
||
" - `append()` adds a single item to the list.\n",
|
||
" - `extend()` adds multiple items from an iterable to the list.\n",
|
||
"\n",
|
||
"2. **Type of Item Added**:\n",
|
||
" - `append()` can add any data type, including another list (which will be added as a single item).\n",
|
||
" - `extend()` adds each item from the iterable individually, so if you pass a list, each element of that list will be added to the original list.\n",
|
||
"\n",
|
||
"3. **Use Case**:\n",
|
||
" - Use `append()` when you want to add a single item to the list.\n",
|
||
" - Use `extend()` when you want to add multiple items from an iterable to the list.\n",
|
||
"\n",
|
||
"### Summary\n",
|
||
"\n",
|
||
"- **`append()`**: Adds a single item to the end of the list.\n",
|
||
" ```python\n",
|
||
" fruits.append(\"date\")\n",
|
||
" ```\n",
|
||
"\n",
|
||
"- **`extend()`**: Adds multiple items from an iterable to the end of the list.\n",
|
||
" ```python\n",
|
||
" fruits.extend([\"date\", \"elderberry\", \"fig\"])\n",
|
||
" ```\n",
|
||
"\n",
|
||
"Understanding the difference between `append()` and `extend()` is crucial for effectively manipulating lists in Python.\n",
|
||
"\n",
|
||
"👤 User: Can you show me an example?\n",
|
||
"🤖 Assistant: Of course! Let's go through an example that demonstrates the use of `append()` and `extend()` in a practical scenario. We'll create a list of fruits and use both methods to add items to it.\n",
|
||
"\n",
|
||
"### Example Scenario\n",
|
||
"\n",
|
||
"Imagine you are maintaining a list of fruits in a grocery store inventory. You need to add new fruits to the inventory list as they arrive.\n",
|
||
"\n",
|
||
"### Initial List\n",
|
||
"\n",
|
||
"Let's start with an initial list of fruits:\n",
|
||
"\n",
|
||
"```python\n",
|
||
"fruits = [\"apple\", \"banana\", \"cherry\"]\n",
|
||
"print(\"Initial list of fruits:\", fruits)\n",
|
||
"```\n",
|
||
"\n",
|
||
"### Using `append()`\n",
|
||
"\n",
|
||
"Suppose a new fruit, \"date,\" arrives at the store. You can use the `append()` method to add this single item to the list:\n",
|
||
"\n",
|
||
"```python\n",
|
||
"fruits.append(\"date\")\n",
|
||
"print(\"After appending 'date':\", fruits)\n",
|
||
"```\n",
|
||
"\n",
|
||
"### Output\n",
|
||
"\n",
|
||
"```\n",
|
||
"Initial list of fruits: ['apple', 'banana', 'cherry']\n",
|
||
"After appending 'date': ['apple', 'banana', 'cherry', 'date']\n",
|
||
"```\n",
|
||
"\n",
|
||
"### Using `extend()`\n",
|
||
"\n",
|
||
"Now, suppose a shipment of multiple fruits arrives, including \"elderberry,\" \"fig,\" and \"grape.\" You can use the `extend()` method to add these multiple items to the list:\n",
|
||
"\n",
|
||
"```python\n",
|
||
"fruits.extend([\"elderberry\", \"fig\", \"grape\"])\n",
|
||
"print(\"After extending with ['elderberry', 'fig', 'grape']:\", fruits)\n",
|
||
"```\n",
|
||
"\n",
|
||
"### Output\n",
|
||
"\n",
|
||
"```\n",
|
||
"After extending with ['elderberry', 'fig', 'grape']: ['apple', 'banana', 'cherry', 'date', 'elderberry', 'fig', 'grape']\n",
|
||
"```\n",
|
||
"\n",
|
||
"### Adding a List as a Single Item with `append()`\n",
|
||
"\n",
|
||
"What if you receive a box containing a list of fruits, and you want to add this entire box as a single item to your inventory? You can use `append()` for this:\n",
|
||
"\n",
|
||
"```python\n",
|
||
"box_of_fruits = [\"kiwi\", \"lemon\", \"mango\"]\n",
|
||
"fruits.append(box_of_fruits)\n",
|
||
"print(\"After appending a list of fruits:\", fruits)\n",
|
||
"```\n",
|
||
"\n",
|
||
"### Output\n",
|
||
"\n",
|
||
"```\n",
|
||
"After appending a list of fruits: ['apple', 'banana', 'cherry', 'date', 'elderberry', 'fig', 'grape', ['kiwi', 'lemon', 'mango']]\n",
|
||
"```\n",
|
||
"\n",
|
||
"Notice that the list `[\"kiwi\", \"lemon\", \"mango\"]` is added as a single item to the `fruits` list, resulting in a nested list.\n",
|
||
"\n",
|
||
"### Adding Items from a Tuple with `extend()`\n",
|
||
"\n",
|
||
"Suppose you receive a tuple of fruits and want to add each item from the tuple to your inventory. You can use `extend()` for this:\n",
|
||
"\n",
|
||
"```python\n",
|
||
"fruits_to_add = (\"nectarine\", \"orange\", \"pear\")\n",
|
||
"fruits.extend(fruits_to_add)\n",
|
||
"print(\"After extending with a tuple of fruits:\", fruits)\n",
|
||
"```\n",
|
||
"\n",
|
||
"### Output\n",
|
||
"\n",
|
||
"```\n",
|
||
"After extending with a tuple of fruits: ['apple', 'banana', 'cherry', 'date', 'elderberry', 'fig', 'grape', ['kiwi', 'lemon', 'mango'], 'nectarine', 'orange', 'pear']\n",
|
||
"```\n",
|
||
"\n",
|
||
"Notice that each item from the tuple is added individually to the `fruits` list.\n",
|
||
"\n",
|
||
"### Summary\n",
|
||
"\n",
|
||
"- **`append()`**: Adds a single item to the end of the list.\n",
|
||
" ```python\n",
|
||
" fruits.append(\"date\")\n",
|
||
" ```\n",
|
||
"\n",
|
||
"- **`extend()`**: Adds multiple items from an iterable to the end of the list.\n",
|
||
" ```python\n",
|
||
" fruits.extend([\"elderberry\", \"fig\", \"grape\"])\n",
|
||
" ```\n",
|
||
"\n",
|
||
"- **Adding a List as a Single Item**: Use `append()` to add a list as a single item.\n",
|
||
" ```python\n",
|
||
" fruits.append([\"kiwi\", \"lemon\", \"mango\"])\n",
|
||
" ```\n",
|
||
"\n",
|
||
"- **Adding Items from a Tuple**: Use `extend()` to add each item from a tuple individually.\n",
|
||
" ```python\n",
|
||
" fruits.extend((\"nectarine\", \"orange\", \"pear\"))\n",
|
||
" ```\n",
|
||
"\n",
|
||
"This example demonstrates how to use `append()` and `extend()` to manage a list of fruits in a grocery store inventory. Understanding these methods will help you effectively manipulate lists in various programming scenarios.\n",
|
||
"\n",
|
||
"📊 Conversation Summary:\n",
|
||
"Messages: 8\n",
|
||
"Total tokens: 8512\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"from langchain_core.messages import AIMessage, HumanMessage, SystemMessage\n",
|
||
"from langchain_mistralai import ChatMistralAI\n",
|
||
"\n",
|
||
"\n",
|
||
"class ConversationManager:\n",
|
||
" \"\"\"Manage multi-turn conversations with memory.\"\"\"\n",
|
||
"\n",
|
||
" def __init__(self, system_prompt: str, model_name: str = \"mistral-small-latest\"):\n",
|
||
" self.system_prompt = system_prompt\n",
|
||
" self.model = ChatMistralAI(model=model_name, temperature=0.7)\n",
|
||
" self.history: list[Any] = []\n",
|
||
" self.total_tokens = 0\n",
|
||
"\n",
|
||
" def send_message(self, user_message: str) -> str:\n",
|
||
" self.history.append(HumanMessage(content=user_message))\n",
|
||
" messages = [SystemMessage(content=self.system_prompt), *self.history]\n",
|
||
"\n",
|
||
" response = self.model.invoke(messages)\n",
|
||
" self.history.append(AIMessage(content=response.content))\n",
|
||
"\n",
|
||
" usage = response.response_metadata.get(\"token_usage\", {})\n",
|
||
" self.total_tokens += int(\n",
|
||
" usage.get(\n",
|
||
" \"total_tokens\",\n",
|
||
" usage.get(\"prompt_tokens\", 0) + usage.get(\"completion_tokens\", 0),\n",
|
||
" ),\n",
|
||
" )\n",
|
||
" return response.content\n",
|
||
"\n",
|
||
" def get_conversation_summary(self) -> dict[str, Any]:\n",
|
||
" human_count = sum(\n",
|
||
" 1 for msg in self.history if getattr(msg, \"type\", \"\") == \"human\"\n",
|
||
" )\n",
|
||
" ai_count = sum(1 for msg in self.history if getattr(msg, \"type\", \"\") == \"ai\")\n",
|
||
" return {\n",
|
||
" \"message_count\": len(self.history),\n",
|
||
" \"user_messages\": human_count,\n",
|
||
" \"assistant_messages\": ai_count,\n",
|
||
" \"total_tokens\": self.total_tokens,\n",
|
||
" }\n",
|
||
"\n",
|
||
" def clear_history(self):\n",
|
||
" self.history = []\n",
|
||
" self.total_tokens = 0\n",
|
||
"\n",
|
||
"\n",
|
||
"manager = ConversationManager(\n",
|
||
" system_prompt=\"You are a helpful Python tutor. Explain concepts clearly with examples.\",\n",
|
||
")\n",
|
||
"\n",
|
||
"conversation_flow = [\n",
|
||
" \"What is a list in Python?\",\n",
|
||
" \"How do I add items to a list?\",\n",
|
||
" \"What's the difference between append and extend?\",\n",
|
||
" \"Can you show me an example?\",\n",
|
||
"]\n",
|
||
"\n",
|
||
"for user_msg in conversation_flow:\n",
|
||
" response = manager.send_message(user_msg)\n",
|
||
" print(f\"\\n👤 User: {user_msg}\")\n",
|
||
" print(f\"🤖 Assistant: {response}\")\n",
|
||
"\n",
|
||
"summary = manager.get_conversation_summary()\n",
|
||
"print(\"\\n📊 Conversation Summary:\")\n",
|
||
"print(f\"Messages: {summary['message_count']}\")\n",
|
||
"print(f\"Total tokens: {summary['total_tokens']}\")\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 251,
|
||
"id": "02700342",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"\n",
|
||
"💬 human: Candidate is ready for the interview.\n",
|
||
"🤖 AI: Great! Let's start with some focused questions to assess the candidate's Python backend skills.\n",
|
||
"\n",
|
||
"1. **Python Fundamentals:**\n",
|
||
" - Can you explain the difference between a list and a tuple in Python? When would you use one over the other?\n",
|
||
" - How does Python manage memory for mutable and immutable objects?\n",
|
||
"\n",
|
||
"2. **Object-Oriented Programming (OOP):**\n",
|
||
" - Can you describe the four pillars of OOP and provide an example of each in Python?\n",
|
||
" - What is the purpose of the `self` parameter in Python methods?\n",
|
||
"\n",
|
||
"3. **Concurrency and Parallelism:**\n",
|
||
" - How does Python handle concurrency? Can you explain the Global Interpreter Lock (GIL) and its implications?\n",
|
||
" - What are the differences between threads and processes in Python? When would you use one over the other?\n",
|
||
"\n",
|
||
"4. **Web Frameworks:**\n",
|
||
" - Can you compare and contrast Django and Flask? What are the pros and cons of each?\n",
|
||
" - How would you handle authentication and authorization in a Python web application?\n",
|
||
"\n",
|
||
"5. **Databases:**\n",
|
||
" - How would you connect to a database using Python? Can you provide an example using SQLAlchemy?\n",
|
||
" - What are the differences between SQL and NoSQL databases? When would you use one over the other?\n",
|
||
"\n",
|
||
"6. **APIs:**\n",
|
||
" - How would you design a RESTful API in Python? Can you explain the different HTTP methods and their use cases?\n",
|
||
" - How would you handle versioning in a Python API?\n",
|
||
"\n",
|
||
"7. **Testing:**\n",
|
||
" - What testing frameworks have you used in Python? Can you explain the difference between unit tests and integration tests?\n",
|
||
" - How would you write a test for an asynchronous function in Python?\n",
|
||
"\n",
|
||
"8. **Performance Optimization:**\n",
|
||
" - How would you profile and optimize the performance of a Python application?\n",
|
||
" - Can you explain the use of decorators in Python and provide an example of how you might use them for performance optimization?\n",
|
||
"\n",
|
||
"9. **Security:**\n",
|
||
" - What are some common security vulnerabilities in Python web applications? How would you mitigate them?\n",
|
||
" - How would you handle sensitive data, such as passwords, in a Python application?\n",
|
||
"\n",
|
||
"10. **Deployment and Scalability:**\n",
|
||
" - What are some strategies for deploying a Python web application? Can you explain the differences between them?\n",
|
||
" - How would you scale a Python application to handle increased traffic?\n",
|
||
"\n",
|
||
"These questions should give you a good sense of the candidate's technical skills and experience with Python backend development.\n",
|
||
"\n",
|
||
"💬 human: Can you ask the first interview question?\n",
|
||
"🤖 AI: Absolutely, let's start with a fundamental Python concept. Could you please explain the difference between a list and a tuple in Python? Additionally, could you provide a use case where you would prefer to use a tuple over a list?\n",
|
||
"\n",
|
||
"💬 human: Ask a follow-up question about API design.\n",
|
||
"🤖 AI: Sure, let's start with a fundamental question:\n",
|
||
"\n",
|
||
"1. **Question:** Can you explain the difference between a list and a tuple in Python? When would you use one over the other?\n",
|
||
"\n",
|
||
"**Follow-up Question (API Design):** Imagine you're designing a RESTful API for a simple blog platform. How would you structure the endpoints for creating, reading, updating, and deleting blog posts? Please consider aspects like resource naming, HTTP methods, and status codes.\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"from langchain_core.messages import HumanMessage, SystemMessage\n",
|
||
"\n",
|
||
"\n",
|
||
"def create_conversation_with_roles(scenario: str) -> list:\n",
|
||
" \"\"\"Create a conversation with multiple roles.\"\"\"\n",
|
||
" if scenario == \"interview\":\n",
|
||
" return [\n",
|
||
" SystemMessage(\n",
|
||
" content=\"You are a technical interviewer for a Python backend role. Ask focused questions.\",\n",
|
||
" ),\n",
|
||
" HumanMessage(content=\"Candidate is ready for the interview.\"),\n",
|
||
" HumanMessage(content=\"Can you ask the first interview question?\"),\n",
|
||
" HumanMessage(content=\"Ask a follow-up question about API design.\"),\n",
|
||
" ]\n",
|
||
"\n",
|
||
" if scenario == \"debate\":\n",
|
||
" return [\n",
|
||
" SystemMessage(\n",
|
||
" content=\"You moderate a short debate and provide balanced viewpoints.\",\n",
|
||
" ),\n",
|
||
" HumanMessage(\n",
|
||
" content=\"Debate topic: Is remote work better than office work?\",\n",
|
||
" ),\n",
|
||
" HumanMessage(\n",
|
||
" content=\"Provide argument A and argument B, then a neutral synthesis.\",\n",
|
||
" ),\n",
|
||
" ]\n",
|
||
"\n",
|
||
" if scenario == \"tutoring\":\n",
|
||
" return [\n",
|
||
" SystemMessage(\n",
|
||
" content=\"You are a patient tutor. Increase difficulty gradually.\",\n",
|
||
" ),\n",
|
||
" HumanMessage(content=\"What is a variable in Python?\"),\n",
|
||
" HumanMessage(content=\"Now explain functions with a simple example.\"),\n",
|
||
" HumanMessage(content=\"Finally, explain decorators at an intuitive level.\"),\n",
|
||
" ]\n",
|
||
"\n",
|
||
" return [\n",
|
||
" SystemMessage(content=\"You are a helpful assistant.\"),\n",
|
||
" HumanMessage(content=\"Start a useful conversation.\"),\n",
|
||
" ]\n",
|
||
"\n",
|
||
"\n",
|
||
"conversation = create_conversation_with_roles(\"interview\")\n",
|
||
"\n",
|
||
"model = ChatMistralAI(model=\"mistral-small-latest\")\n",
|
||
"for idx, msg in enumerate(conversation):\n",
|
||
" if msg.type == \"human\":\n",
|
||
" context = conversation[: idx + 1]\n",
|
||
" response = model.invoke(context)\n",
|
||
" print(f\"\\n💬 human: {msg.content}\")\n",
|
||
" print(f\"🤖 AI: {response.content}\")\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "c57a5bcc",
|
||
"metadata": {},
|
||
"source": [
|
||
"## 🌊 Exercise Set 3: Streaming and Real-Time Responses\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 252,
|
||
"id": "2bb71af3",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"**The Last Message**\n",
|
||
"\n",
|
||
"Dr. Elias Carter had spent years perfecting his time machine, a sleek, humming device hidden in his basement. Tonight, he was finally ready to test it. He set the coordinates for June 15, 1944—D-Day—and pressed the activation button.\n",
|
||
"\n",
|
||
"A blinding flash. The world twisted.\n",
|
||
"\n",
|
||
"When the dizziness faded, Elias stood on a windswept beach, the air thick with smoke and the distant roar of artillery. Soldiers in helmets and fatigues rushed past him, their faces grim. He had done it—he was in Normandy.\n",
|
||
"\n",
|
||
"But as he stepped forward, a hand clamped onto his shoulder. A young soldier, barely older than twenty, stared at him with wide, terrified eyes.\n",
|
||
"\n",
|
||
"\"Where did you come from?\" the soldier demanded, gripping his rifle tighter.\n",
|
||
"\n",
|
||
"Elias hesitated. \"I—I'm just… lost.\"\n",
|
||
"\n",
|
||
"The soldier narrowed his eyes. \"You don’t look like any of ours. And that coat—what kind of uniform is that?\"\n",
|
||
"\n",
|
||
"Before Elias could answer, a deafening explosion rocked the ground. The soldier shoved him behind a sandbag. \"Stay down!\"\n",
|
||
"\n",
|
||
"As the chaos unfolded, Elias realized the truth: he wasn’t just a visitor. He was a threat. If he stayed, he could change history. If he left, he might never return.\n",
|
||
"\n",
|
||
"The soldier turned to him, blood streaking his face. \"You’re not one of us, are you?\"\n",
|
||
"\n",
|
||
"Elias swallowed hard. \"No. I’m from the future.\"\n",
|
||
"\n",
|
||
"The soldier’s grip tightened. \"Then you can’t stay here. If they see you—\"\n",
|
||
"\n",
|
||
"Another explosion cut him off. The soldier shoved Elias toward the machine. \"Go back. Now.\"\n",
|
||
"\n",
|
||
"With trembling hands, Elias activated the device. The world blurred again.\n",
|
||
"\n",
|
||
"When he reappeared in his basement, the machine’s display flickered one last message:\n",
|
||
"\n",
|
||
"**\"TIMELINE INTACT. DO NOT RETURN.\"**\n",
|
||
"\n",
|
||
"Elias exhaled, his hands shaking. He had crossed the line between past and present—but he had also learned the cost of time travel.\n",
|
||
"\n",
|
||
"Some doors, once opened, should never be opened again.\n",
|
||
"📊 Streaming Statistics:\n",
|
||
"Chunks received: 438\n",
|
||
"Total time: 3.68s\n",
|
||
"Avg chunk time: 0.008s\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"from langchain_mistralai import ChatMistralAI\n",
|
||
"\n",
|
||
"\n",
|
||
"class StreamingHandler:\n",
|
||
" \"\"\"Handle streaming responses with additional features.\"\"\"\n",
|
||
"\n",
|
||
" def __init__(self):\n",
|
||
" self.full_response = \"\"\n",
|
||
" self.chunk_count = 0\n",
|
||
" self.start_time = None\n",
|
||
"\n",
|
||
" def stream_with_stats(self, model, prompt: str) -> dict:\n",
|
||
" self.full_response = \"\"\n",
|
||
" self.chunk_count = 0\n",
|
||
" self.start_time = time.perf_counter()\n",
|
||
"\n",
|
||
" for chunk in model.stream(prompt):\n",
|
||
" content = chunk.content or \"\"\n",
|
||
" self.full_response += content\n",
|
||
" self.chunk_count += 1\n",
|
||
" print(content, end=\"\")\n",
|
||
"\n",
|
||
" total_time = time.perf_counter() - self.start_time\n",
|
||
" avg_chunk_time = total_time / max(self.chunk_count, 1)\n",
|
||
" return {\n",
|
||
" \"response\": self.full_response,\n",
|
||
" \"chunk_count\": self.chunk_count,\n",
|
||
" \"total_time\": total_time,\n",
|
||
" \"avg_chunk_time\": avg_chunk_time,\n",
|
||
" }\n",
|
||
"\n",
|
||
" def stream_with_typewriter_effect(self, model, prompt: str, delay: float = 0.03):\n",
|
||
" self.full_response = \"\"\n",
|
||
" for chunk in model.stream(prompt):\n",
|
||
" content = chunk.content or \"\"\n",
|
||
" self.full_response += content\n",
|
||
" print(content, end=\"\", flush=True)\n",
|
||
" time.sleep(delay)\n",
|
||
" return self.full_response\n",
|
||
"\n",
|
||
"\n",
|
||
"handler = StreamingHandler()\n",
|
||
"model = ChatMistralAI(model=\"mistral-small-latest\", streaming=True)\n",
|
||
"\n",
|
||
"prompt = \"Write a short story about a time traveler\"\n",
|
||
"stats = handler.stream_with_stats(model, prompt)\n",
|
||
"\n",
|
||
"print(\"\\n📊 Streaming Statistics:\")\n",
|
||
"print(f\"Chunks received: {stats['chunk_count']}\")\n",
|
||
"print(f\"Total time: {stats['total_time']:.2f}s\")\n",
|
||
"print(f\"Avg chunk time: {stats['avg_chunk_time']:.3f}s\")\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 253,
|
||
"id": "fd6946e5",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"⚡ Parallel Execution Results:\n",
|
||
"Total time (parallel): 4.37s\n",
|
||
"Time if sequential: 10.72s\n",
|
||
"Speedup: 2.5x\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"import asyncio\n",
|
||
"\n",
|
||
"from langchain_mistralai import ChatMistralAI\n",
|
||
"\n",
|
||
"\n",
|
||
"async def parallel_chat_requests(\n",
|
||
" prompts: list[str],\n",
|
||
" model_name: str = \"mistral-small-latest\",\n",
|
||
") -> list[dict]:\n",
|
||
" \"\"\"Send multiple prompts in parallel for efficiency.\"\"\"\n",
|
||
" llm = ChatMistralAI(model=model_name, temperature=0)\n",
|
||
"\n",
|
||
" async def _one(prompt: str) -> dict:\n",
|
||
" start = time.perf_counter()\n",
|
||
" response = await llm.ainvoke(prompt)\n",
|
||
" elapsed = time.perf_counter() - start\n",
|
||
" usage = response.response_metadata.get(\"token_usage\", {})\n",
|
||
" total_tokens = int(\n",
|
||
" usage.get(\n",
|
||
" \"total_tokens\",\n",
|
||
" usage.get(\"prompt_tokens\", 0) + usage.get(\"completion_tokens\", 0),\n",
|
||
" ),\n",
|
||
" )\n",
|
||
" return {\n",
|
||
" \"prompt\": prompt,\n",
|
||
" \"response\": response.content,\n",
|
||
" \"time\": elapsed,\n",
|
||
" \"tokens\": total_tokens,\n",
|
||
" }\n",
|
||
"\n",
|
||
" tasks = [_one(prompt) for prompt in prompts]\n",
|
||
" return await asyncio.gather(*tasks)\n",
|
||
"\n",
|
||
"\n",
|
||
"translation_prompts = [\n",
|
||
" \"Translate 'Hello, how are you?' to French\",\n",
|
||
" \"Translate 'Hello, how are you?' to Spanish\",\n",
|
||
" \"Translate 'Hello, how are you?' to German\",\n",
|
||
" \"Translate 'Hello, how are you?' to Italian\",\n",
|
||
" \"Translate 'Hello, how are you?' to Japanese\",\n",
|
||
"]\n",
|
||
"\n",
|
||
"results = await parallel_chat_requests(translation_prompts)\n",
|
||
"\n",
|
||
"print(\"⚡ Parallel Execution Results:\")\n",
|
||
"total_time = sum(r[\"time\"] for r in results)\n",
|
||
"print(f\"Total time (parallel): {max(r['time'] for r in results):.2f}s\")\n",
|
||
"print(f\"Time if sequential: {total_time:.2f}s\")\n",
|
||
"print(f\"Speedup: {total_time / max(r['time'] for r in results):.1f}x\")\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "79b49319",
|
||
"metadata": {},
|
||
"source": [
|
||
"## 🔗 Exercise Set 4: LangChain Expression Language (LCEL)\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 254,
|
||
"id": "f3bbaee7",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"Translation: The weather is beautiful today translates to French as:\n",
|
||
"\n",
|
||
"**\"Il fait beau aujourd'hui.\"**\n",
|
||
"\n",
|
||
"Alternatively, you could also say:\n",
|
||
"**\"Le temps est magnifique aujourd'hui.\"** (The weather is magnificent today.)\n",
|
||
"\n",
|
||
"Both are natural and commonly used in French.\n",
|
||
"Summary: AI is revolutionizing industries by automating tasks and enhancing decision-making with advanced systems. (30 words)\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"from langchain_core.output_parsers import StrOutputParser\n",
|
||
"from langchain_core.prompts import ChatPromptTemplate\n",
|
||
"from langchain_mistralai import ChatMistralAI\n",
|
||
"\n",
|
||
"\n",
|
||
"def create_translation_chain(source_lang: str, target_lang: str):\n",
|
||
" prompt = ChatPromptTemplate.from_template(\n",
|
||
" \"Translate the following text from {source_lang} to {target_lang}:\\n\\n{text}\",\n",
|
||
" )\n",
|
||
" model = ChatMistralAI(model=\"mistral-small-latest\", temperature=0)\n",
|
||
" parser = StrOutputParser()\n",
|
||
" return (\n",
|
||
" prompt.partial(source_lang=source_lang, target_lang=target_lang)\n",
|
||
" | model\n",
|
||
" | parser\n",
|
||
" )\n",
|
||
"\n",
|
||
"\n",
|
||
"def create_summarization_chain(max_words: int = 50):\n",
|
||
" prompt = ChatPromptTemplate.from_template(\n",
|
||
" \"Summarize the following text in at most {max_words} words:\\n\\n{text}\",\n",
|
||
" )\n",
|
||
" model = ChatMistralAI(model=\"mistral-small-latest\", temperature=0)\n",
|
||
" return prompt.partial(max_words=max_words) | model | StrOutputParser()\n",
|
||
"\n",
|
||
"\n",
|
||
"def create_qa_chain():\n",
|
||
" prompt = ChatPromptTemplate.from_template(\n",
|
||
" \"Answer the question using the context. If unknown, say you don't know.\\n\\nContext: {context}\\nQuestion: {question}\",\n",
|
||
" )\n",
|
||
" model = ChatMistralAI(model=\"mistral-small-latest\", temperature=0)\n",
|
||
" return prompt | model | StrOutputParser()\n",
|
||
"\n",
|
||
"\n",
|
||
"translation_chain = create_translation_chain(\"English\", \"French\")\n",
|
||
"result = translation_chain.invoke({\"text\": \"The weather is beautiful today\"})\n",
|
||
"print(f\"Translation: {result}\")\n",
|
||
"\n",
|
||
"summary_chain = create_summarization_chain(max_words=30)\n",
|
||
"long_text = \"Artificial intelligence is rapidly transforming industries by automating repetitive tasks and enabling advanced decision support systems.\"\n",
|
||
"summary = summary_chain.invoke({\"text\": long_text})\n",
|
||
"print(f\"Summary: {summary}\")\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 255,
|
||
"id": "final-cell",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"The capital of France is **Paris**.\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"from langchain_core.prompts import ChatPromptTemplate\n",
|
||
"from langchain_mistralai import ChatMistralAI\n",
|
||
"\n",
|
||
"\n",
|
||
"def create_chain_with_fallback(primary_model: str, fallback_model: str):\n",
|
||
" prompt = ChatPromptTemplate.from_template(\"Answer clearly: {question}\")\n",
|
||
"\n",
|
||
" primary = (\n",
|
||
" prompt | ChatMistralAI(model=primary_model, temperature=0) | StrOutputParser()\n",
|
||
" )\n",
|
||
" fallback = (\n",
|
||
" prompt | ChatMistralAI(model=fallback_model, temperature=0) | StrOutputParser()\n",
|
||
" )\n",
|
||
" return primary.with_fallbacks([fallback])\n",
|
||
"\n",
|
||
"\n",
|
||
"chain = create_chain_with_fallback(\"mistral-large-latest\", \"mistral-small-latest\")\n",
|
||
"result = chain.invoke({\"question\": \"What is the capital of France?\"})\n",
|
||
"print(result)\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 256,
|
||
"id": "7fadd63a",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"\n",
|
||
"Query: My app keeps crashing when I try to login\n",
|
||
"Category: technical\n",
|
||
"Response: Certainly! If your app keeps crashing when you try to log in, here’s a structured troubleshooting approach to diagnose and resolve the issue:\n",
|
||
"\n",
|
||
"### **Step 1: Check for Basic Issues**\n",
|
||
"1. **Restart the App & Device**\n",
|
||
" - Close the app completely (swipe it away from the recent apps list) and reopen it.\n",
|
||
" - Restart your device to clear temporary glitches.\n",
|
||
"\n",
|
||
"2. **Update the App**\n",
|
||
" - Go to the app store (Google Play Store, Apple App Store) and check for updates.\n",
|
||
" - If an update is available, install it and try logging in again.\n",
|
||
"\n",
|
||
"3. **Check Internet Connection**\n",
|
||
" - Ensure you have a stable internet connection (Wi-Fi or mobile data).\n",
|
||
" - Try switching between Wi-Fi and mobile data to see if the issue persists.\n",
|
||
"\n",
|
||
"4. **Clear App Cache & Data**\n",
|
||
" - **Android**: Go to **Settings > Apps > [Your App] > Storage > Clear Cache & Clear Data**.\n",
|
||
" - **iOS**: Uninstall and reinstall the app (iOS doesn’t allow direct cache clearing).\n",
|
||
"\n",
|
||
"### **Step 2: Verify Login Credentials**\n",
|
||
"1. **Check for Typos**\n",
|
||
" - Ensure your username/email and password are entered correctly.\n",
|
||
" - Try resetting your password if you suspect it’s incorrect.\n",
|
||
"\n",
|
||
"2. **Test with a Different Account**\n",
|
||
" - If possible, try logging in with a different account to see if the issue is account-specific.\n",
|
||
"\n",
|
||
"### **Step 3: Check for App-Specific Issues**\n",
|
||
"1. **Check Server Status**\n",
|
||
" - If the app relies on a server (e.g., a banking or social media app), check if the service is down (e.g., via [Downdetector](https://downdetector.com/)).\n",
|
||
"\n",
|
||
"2. **Reinstall the App**\n",
|
||
" - Uninstall the app completely and reinstall it from the official store.\n",
|
||
"\n",
|
||
"3. **Check for Conflicting Permissions**\n",
|
||
" - Ensure the app has all necessary permissions (e.g., internet, storage, location if required).\n",
|
||
"\n",
|
||
"### **Step 4: Advanced Troubleshooting**\n",
|
||
"1. **Test on Another Device**\n",
|
||
" - If possible, try logging in from another device to see if the issue is device-specific.\n",
|
||
"\n",
|
||
"2. **Check for Software Conflicts**\n",
|
||
" - If you recently installed other apps, try uninstalling them to see if they interfere.\n",
|
||
"\n",
|
||
"3. **Enable Debug Logs (If Available)**\n",
|
||
" - Some apps have a debug mode (check settings or contact support for logs).\n",
|
||
"\n",
|
||
"4. **Contact App Support**\n",
|
||
" - If the issue persists, contact the app’s support team with details:\n",
|
||
" - Device model & OS version\n",
|
||
" - Steps to reproduce the crash\n",
|
||
" - Any error messages (screenshots if possible)\n",
|
||
"\n",
|
||
"### **Step 5: Check for Device-Specific Issues**\n",
|
||
"1. **Test in Safe Mode (Android)**\n",
|
||
" - Boot your Android device in **Safe Mode** (hold the power button, then long-press \"Power Off\" to enter Safe Mode).\n",
|
||
" - Try logging in—if it works, a third-party app may be causing the issue.\n",
|
||
"\n",
|
||
"2. **Check for OS Updates**\n",
|
||
" - Ensure your device’s OS is up to date (Settings > Software Update).\n",
|
||
"\n",
|
||
"### **Final Step: Factory Reset (Last Resort)**\n",
|
||
"- If all else fails, consider a **factory reset** (back up data first). This is a last resort and should only be done if the app is critical and no other solution works.\n",
|
||
"\n",
|
||
"---\n",
|
||
"**Next Steps:**\n",
|
||
"- If the app still crashes, provide more details (e.g., error messages, device model, OS version) for further diagnosis.\n",
|
||
"\n",
|
||
"Query: I was charged twice this month\n",
|
||
"Category: billing\n",
|
||
"Response: If you’ve been charged twice this month, here are the most likely causes and next steps to resolve the issue:\n",
|
||
"\n",
|
||
"### **Likely Causes:**\n",
|
||
"1. **Duplicate Payment Processing**\n",
|
||
" - A technical glitch or manual error may have caused your payment to be processed twice.\n",
|
||
" - Common in recurring billing (e.g., subscriptions, memberships, or utility payments).\n",
|
||
"\n",
|
||
"2. **Accidental Double Payment**\n",
|
||
" - You (or someone with access to your account) may have submitted the same payment twice (e.g., clicking \"Pay\" twice in a row).\n",
|
||
"\n",
|
||
"3. **Billing Cycle Overlap**\n",
|
||
" - If your billing cycle changed (e.g., due to a late payment or account adjustment), you might have been billed for two separate periods.\n",
|
||
"\n",
|
||
"4. **Pending Transaction Followed by a New Charge**\n",
|
||
" - A previous payment may have been pending, and a new charge was processed before the first cleared.\n",
|
||
"\n",
|
||
"5. **Fraudulent Activity**\n",
|
||
" - In rare cases, unauthorized charges could indicate fraud (check your account for other suspicious activity).\n",
|
||
"\n",
|
||
"### **Next Steps to Resolve:**\n",
|
||
"1. **Review Your Transaction History**\n",
|
||
" - Check your bank/credit card statement and the vendor’s billing portal for duplicate charges.\n",
|
||
" - Note the dates, amounts, and transaction IDs of the duplicate charges.\n",
|
||
"\n",
|
||
"2. **Contact the Vendor (Service Provider)**\n",
|
||
" - Reach out to their billing/support team (email, phone, or live chat).\n",
|
||
" - Provide details of the duplicate charges (dates, amounts, and transaction IDs).\n",
|
||
" - Ask for a refund or adjustment.\n",
|
||
"\n",
|
||
"3. **Dispute the Charge (If Needed)**\n",
|
||
" - If the vendor refuses to refund, contact your bank/credit card issuer to dispute the charge.\n",
|
||
" - Provide evidence (e.g., screenshots, emails) to support your claim.\n",
|
||
"\n",
|
||
"4. **Check for Recurring Billing Issues**\n",
|
||
" - If this is a subscription, verify your payment settings to prevent future duplicates.\n",
|
||
"\n",
|
||
"5. **Monitor for Fraud**\n",
|
||
" - If you suspect unauthorized activity, report it immediately and consider freezing your card.\n",
|
||
"\n",
|
||
"### **Expected Resolution Time:**\n",
|
||
"- **Vendor Refund:** Typically 3–7 business days.\n",
|
||
"- **Bank Dispute:** May take 7–14 days (longer if additional verification is needed).\n",
|
||
"\n",
|
||
"Would you like help drafting a message to the vendor or bank? Let me know the service involved, and I can tailor the steps further!\n",
|
||
"\n",
|
||
"Query: What are your business hours?\n",
|
||
"Category: technical\n",
|
||
"Response: Since you're asking about business hours, I'll assume you're either a customer or a colleague needing to clarify support availability. Here’s how to diagnose and troubleshoot this issue:\n",
|
||
"\n",
|
||
"### **Diagnosis:**\n",
|
||
"1. **Missing Information:** The business hours aren’t clearly communicated or accessible.\n",
|
||
"2. **User Confusion:** The request suggests the hours aren’t easily found or are outdated.\n",
|
||
"3. **System Issue:** If this is part of an automated response, the system may not be configured to provide the correct hours.\n",
|
||
"\n",
|
||
"### **Troubleshooting Steps:**\n",
|
||
"1. **Check Official Channels:**\n",
|
||
" - Visit the company’s website (e.g., \"Contact Us\" or \"Support\" page).\n",
|
||
" - Look for FAQs or a help center.\n",
|
||
" - Search for \"business hours\" in the knowledge base.\n",
|
||
"\n",
|
||
"2. **Contact Support Directly:**\n",
|
||
" - If hours aren’t listed, reach out via email, chat, or phone (even if it’s outside hours—some systems provide automated responses with hours).\n",
|
||
" - Example email:\n",
|
||
" > *\"Hello, I couldn’t find your business hours on your website. Could you please provide them or direct me to where they’re listed?\"*\n",
|
||
"\n",
|
||
"3. **Verify Time Zone:**\n",
|
||
" - If hours are listed but unclear, confirm if they’re in local time or UTC.\n",
|
||
"\n",
|
||
"4. **Check for Updates:**\n",
|
||
" - If hours were recently changed, ask if there’s a temporary adjustment (e.g., holidays).\n",
|
||
"\n",
|
||
"5. **Internal Fix (If You’re Staff):**\n",
|
||
" - Update the automated response template to include hours.\n",
|
||
" - Ensure the website’s \"Support\" page is up-to-date.\n",
|
||
" - Train staff to provide hours consistently.\n",
|
||
"\n",
|
||
"### **Example Response (If You’re Support):**\n",
|
||
"*\"Our business hours are [X] AM–[Y] PM, [Z] time zone. For urgent issues outside these hours, please use our 24/7 chatbot or submit a ticket. You can also find this info on our [website link].\"*\n",
|
||
"\n",
|
||
"Would you like help finding the hours for a specific company? If so, share the name, and I’ll guide you further.\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"from langchain_core.prompts import ChatPromptTemplate\n",
|
||
"from langchain_core.runnables import RunnableLambda\n",
|
||
"from langchain_mistralai import ChatMistralAI\n",
|
||
"\n",
|
||
"\n",
|
||
"def create_customer_service_chain():\n",
|
||
" model = ChatMistralAI(model=\"mistral-small-latest\", temperature=0.2)\n",
|
||
"\n",
|
||
" classifier_prompt = ChatPromptTemplate.from_template(\n",
|
||
" \"Classify this query into one category: technical, billing, general. Query: {query}\",\n",
|
||
" )\n",
|
||
" classifier_chain = classifier_prompt | model | StrOutputParser()\n",
|
||
"\n",
|
||
" technical_chain = (\n",
|
||
" ChatPromptTemplate.from_template(\n",
|
||
" \"You are technical support. Diagnose and propose concrete troubleshooting steps for: {query}\",\n",
|
||
" )\n",
|
||
" | model\n",
|
||
" | StrOutputParser()\n",
|
||
" )\n",
|
||
" billing_chain = (\n",
|
||
" ChatPromptTemplate.from_template(\n",
|
||
" \"You are billing support. Explain likely billing causes and clear next steps for: {query}\",\n",
|
||
" )\n",
|
||
" | model\n",
|
||
" | StrOutputParser()\n",
|
||
" )\n",
|
||
" general_chain = (\n",
|
||
" ChatPromptTemplate.from_template(\n",
|
||
" \"You are customer support. Provide a concise and friendly answer for: {query}\",\n",
|
||
" )\n",
|
||
" | model\n",
|
||
" | StrOutputParser()\n",
|
||
" )\n",
|
||
"\n",
|
||
" def route(payload: dict) -> dict:\n",
|
||
" query = payload[\"query\"]\n",
|
||
" label = classifier_chain.invoke({\"query\": query}).lower()\n",
|
||
" if \"tech\" in label or \"error\" in query.lower() or \"crash\" in query.lower():\n",
|
||
" category = \"technical\"\n",
|
||
" response = technical_chain.invoke({\"query\": query})\n",
|
||
" elif (\n",
|
||
" \"bill\" in label or \"charged\" in query.lower() or \"payment\" in query.lower()\n",
|
||
" ):\n",
|
||
" category = \"billing\"\n",
|
||
" response = billing_chain.invoke({\"query\": query})\n",
|
||
" else:\n",
|
||
" category = \"general\"\n",
|
||
" response = general_chain.invoke({\"query\": query})\n",
|
||
" return {\"category\": category, \"response\": response}\n",
|
||
"\n",
|
||
" return RunnableLambda(route)\n",
|
||
"\n",
|
||
"\n",
|
||
"service_chain = create_customer_service_chain()\n",
|
||
"\n",
|
||
"test_queries = [\n",
|
||
" \"My app keeps crashing when I try to login\",\n",
|
||
" \"I was charged twice this month\",\n",
|
||
" \"What are your business hours?\",\n",
|
||
"]\n",
|
||
"\n",
|
||
"for query in test_queries:\n",
|
||
" result = service_chain.invoke({\"query\": query})\n",
|
||
" print(f\"\\nQuery: {query}\")\n",
|
||
" print(f\"Category: {result['category']}\")\n",
|
||
" print(f\"Response: {result['response']}\")\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "45573d28",
|
||
"metadata": {},
|
||
"source": [
|
||
"## 🔄 Exercise Set 5: Advanced Chain Composition & Sequential Processing\n",
|
||
"\n",
|
||
"Sequential chains process data through multiple stages, where each stage transforms the output before passing it to the next."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 257,
|
||
"id": "c53bd211",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"📝 Generated Content:\n",
|
||
"# **\"AI in Healthcare: How Artificial Intelligence is Revolutionizing Patient Care\"**\n",
|
||
"\n",
|
||
"This title is professional yet approachable, highlighting the transformative impact of AI in healthcare while keeping it engaging for a broad audience.\n",
|
||
"\n",
|
||
"## Outline\n",
|
||
"Here’s a concise and engaging outline for your blog post:\n",
|
||
"\n",
|
||
"### **AI in Healthcare: How Artificial Intelligence is Revolutionizing Patient Care**\n",
|
||
"\n",
|
||
"1. **Introduction: The Rise of AI in Healthcare**\n",
|
||
" - Brief overview of AI’s growing role in healthcare\n",
|
||
" - Why AI is a game-changer for patient care\n",
|
||
"\n",
|
||
"2. **Key Applications of AI in Healthcare**\n",
|
||
" - **Diagnostics & Imaging:** AI-powered tools for early disease detection\n",
|
||
" - **Personalized Medicine:** Tailoring treatments using AI-driven insights\n",
|
||
" - **Predictive Analytics:** Forecasting patient outcomes and risks\n",
|
||
" - **Administrative Efficiency:** Streamlining workflows with AI automation\n",
|
||
"\n",
|
||
"3. **Benefits of AI in Patient Care**\n",
|
||
" - Faster, more accurate diagnoses\n",
|
||
" - Improved treatment personalization\n",
|
||
" - Enhanced patient monitoring and remote care\n",
|
||
" - Cost savings and resource optimization\n",
|
||
"\n",
|
||
"4. **Challenges & Future Outlook**\n",
|
||
" - Ethical concerns (privacy, bias, regulation)\n",
|
||
" - Integration with existing healthcare systems\n",
|
||
" - The future of AI in healthcare: What’s next?\n",
|
||
"\n",
|
||
"This structure keeps the content informative, structured, and engaging for a broad audience.\n",
|
||
"\n",
|
||
"## Introduction\n",
|
||
"**Introduction: The Rise of AI in Healthcare**\n",
|
||
"\n",
|
||
"Artificial intelligence (AI) is transforming healthcare, revolutionizing how patients are diagnosed, treated, and cared for. From early disease detection to personalized treatment plans, AI-powered tools are enhancing accuracy, efficiency, and accessibility in medicine. By analyzing vast amounts of medical data faster than ever before, AI helps clinicians make informed decisions, reduces human error, and improves patient outcomes. As technology advances, AI is not just an assistant—it’s becoming an indispensable partner in modern healthcare. This post explores the key applications, benefits, and challenges of AI in patient care, offering a glimpse into a future where technology and medicine work hand in hand to save lives.\n",
|
||
"\n",
|
||
"## Conclusion\n",
|
||
"**Conclusion (83 words):**\n",
|
||
"\n",
|
||
"AI is revolutionizing healthcare by enhancing diagnostics, personalizing treatments, and improving efficiency. From early disease detection to predictive analytics, AI-driven tools empower doctors and streamline patient care. While challenges like data privacy and ethical concerns remain, the future holds even greater promise—smarter systems, seamless integration, and better health outcomes. As AI continues to evolve, its potential to transform medicine is limitless, ensuring faster, more accurate, and accessible care for all. The healthcare of tomorrow is here, and AI is leading the way.\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"from langchain_core.prompts import ChatPromptTemplate\n",
|
||
"from langchain_mistralai import ChatMistralAI\n",
|
||
"\n",
|
||
"\n",
|
||
"def create_content_pipeline():\n",
|
||
" model = ChatMistralAI(model=\"mistral-small-latest\", temperature=0.6)\n",
|
||
" parser = StrOutputParser()\n",
|
||
"\n",
|
||
" topic_chain = (\n",
|
||
" ChatPromptTemplate.from_template(\n",
|
||
" \"Generate one compelling blog title on: {subject} (tone: {tone}).\",\n",
|
||
" )\n",
|
||
" | model\n",
|
||
" | parser\n",
|
||
" )\n",
|
||
" outline_chain = (\n",
|
||
" ChatPromptTemplate.from_template(\n",
|
||
" \"Create a concise outline for the blog title: {title}. Include 4 sections.\",\n",
|
||
" )\n",
|
||
" | model\n",
|
||
" | parser\n",
|
||
" )\n",
|
||
" intro_chain = (\n",
|
||
" ChatPromptTemplate.from_template(\n",
|
||
" \"Write an introduction ({intro_words} words) for title '{title}' with outline:\\n{outline}\",\n",
|
||
" )\n",
|
||
" | model\n",
|
||
" | parser\n",
|
||
" )\n",
|
||
" conclusion_chain = (\n",
|
||
" ChatPromptTemplate.from_template(\n",
|
||
" \"Write a conclusion ({conclusion_words} words) for title '{title}' with outline:\\n{outline}\",\n",
|
||
" )\n",
|
||
" | model\n",
|
||
" | parser\n",
|
||
" )\n",
|
||
"\n",
|
||
" def run(payload: dict) -> str:\n",
|
||
" title = topic_chain.invoke(payload)\n",
|
||
" outline = outline_chain.invoke({\"title\": title})\n",
|
||
" total_words = int(payload.get(\"word_count\", 500))\n",
|
||
" intro_words = max(80, total_words // 5)\n",
|
||
" conclusion_words = max(80, total_words // 6)\n",
|
||
" intro = intro_chain.invoke(\n",
|
||
" {\n",
|
||
" \"title\": title,\n",
|
||
" \"outline\": outline,\n",
|
||
" \"intro_words\": intro_words,\n",
|
||
" },\n",
|
||
" )\n",
|
||
" conclusion = conclusion_chain.invoke(\n",
|
||
" {\n",
|
||
" \"title\": title,\n",
|
||
" \"outline\": outline,\n",
|
||
" \"conclusion_words\": conclusion_words,\n",
|
||
" },\n",
|
||
" )\n",
|
||
" return (\n",
|
||
" f\"# {title}\\n\\n\"\n",
|
||
" f\"## Outline\\n{outline}\\n\\n\"\n",
|
||
" f\"## Introduction\\n{intro}\\n\\n\"\n",
|
||
" f\"## Conclusion\\n{conclusion}\"\n",
|
||
" )\n",
|
||
"\n",
|
||
" return RunnableLambda(run)\n",
|
||
"\n",
|
||
"\n",
|
||
"result = create_content_pipeline().invoke(\n",
|
||
" {\n",
|
||
" \"subject\": \"artificial intelligence in healthcare\",\n",
|
||
" \"tone\": \"professional but accessible\",\n",
|
||
" \"word_count\": 500,\n",
|
||
" },\n",
|
||
")\n",
|
||
"\n",
|
||
"print(\"📝 Generated Content:\")\n",
|
||
"print(result)\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 258,
|
||
"id": "384230aa",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"🔍 Code Review:\n",
|
||
"# Final Review Report\n",
|
||
"\n",
|
||
"## Bugs\n",
|
||
"The original `calculate_average` function has several critical bugs and missing features:\n",
|
||
"\n",
|
||
"1. **Empty List Handling**: Crashes with `ZeroDivisionError` when given an empty list\n",
|
||
"2. **Type Safety**: No validation for non-numeric values, leading to `TypeError`\n",
|
||
"3. **Input Validation**: Doesn't verify input is a list or iterable\n",
|
||
"4. **Error Reporting**: No helpful error messages for invalid inputs\n",
|
||
"\n",
|
||
"The improved version addresses all these issues with proper validation and error handling.\n",
|
||
"\n",
|
||
"## Style\n",
|
||
"The original code had several style issues:\n",
|
||
"\n",
|
||
"1. **Variable Naming**: Used `sum` which shadows the built-in function\n",
|
||
"2. **Loop Structure**: Used index-based iteration instead of direct iteration\n",
|
||
"3. **Documentation**: Complete lack of docstring or type hints\n",
|
||
"4. **Error Handling**: No explicit error handling for edge cases\n",
|
||
"\n",
|
||
"The improved version:\n",
|
||
"- Uses descriptive variable names\n",
|
||
"- Implements direct iteration\n",
|
||
"- Includes comprehensive docstring\n",
|
||
"- Provides clear error messages\n",
|
||
"- Follows Python naming conventions\n",
|
||
"- Includes type hints (in the additional suggestions)\n",
|
||
"\n",
|
||
"## Performance\n",
|
||
"The original implementation has acceptable performance for small to medium datasets but could be optimized:\n",
|
||
"\n",
|
||
"1. **Built-in Functions**: Python's `sum()` is more efficient than manual summation\n",
|
||
"2. **Memory Usage**: The generator version is more memory-efficient for large datasets\n",
|
||
"3. **Specialized Libraries**: NumPy provides significant performance benefits for numerical data\n",
|
||
"\n",
|
||
"For most use cases, the improved version's performance is adequate. For performance-critical applications, consider:\n",
|
||
"- Using `sum()` for simple cases\n",
|
||
"- Using NumPy for numerical data\n",
|
||
"- Using the generator version for memory efficiency\n",
|
||
"\n",
|
||
"## Priority Fixes\n",
|
||
"The following fixes should be implemented with highest priority:\n",
|
||
"\n",
|
||
"1. **Empty List Handling**: Must be fixed to prevent crashes\n",
|
||
"2. **Type Checking**: Essential for input validation\n",
|
||
"3. **Error Messages**: Should be clear and helpful\n",
|
||
"4. **Documentation**: Critical for code maintainability\n",
|
||
"\n",
|
||
"The improved version implements all these fixes while maintaining readability and following Python best practices.\n",
|
||
"\n",
|
||
"## Recommendations\n",
|
||
"1. **For general use**: The improved version is recommended as it provides:\n",
|
||
" - Correctness for all edge cases\n",
|
||
" - Clear documentation\n",
|
||
" - Proper error handling\n",
|
||
" - Good performance for most use cases\n",
|
||
"\n",
|
||
"2. **For performance-critical applications**:\n",
|
||
" - Consider using `sum()` if validation isn't needed\n",
|
||
" - Consider NumPy for numerical data processing\n",
|
||
"\n",
|
||
"3. **For memory-constrained environments**:\n",
|
||
" - Consider the generator version for very large datasets\n",
|
||
"\n",
|
||
"The improved implementation provides the best balance of correctness, readability, and maintainability for most use cases.\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"from langchain_core.prompts import ChatPromptTemplate\n",
|
||
"from langchain_mistralai import ChatMistralAI\n",
|
||
"\n",
|
||
"\n",
|
||
"def create_code_review_chain():\n",
|
||
" model = ChatMistralAI(model=\"mistral-small-latest\", temperature=0.2)\n",
|
||
" parser = StrOutputParser()\n",
|
||
"\n",
|
||
" bug_chain = (\n",
|
||
" ChatPromptTemplate.from_template(\n",
|
||
" \"Analyze this code for bugs and correctness issues:\\n{code}\",\n",
|
||
" )\n",
|
||
" | model\n",
|
||
" | parser\n",
|
||
" )\n",
|
||
" style_chain = (\n",
|
||
" ChatPromptTemplate.from_template(\n",
|
||
" \"Analyze this code for style and readability issues:\\n{code}\\nPrevious findings:\\n{bugs}\",\n",
|
||
" )\n",
|
||
" | model\n",
|
||
" | parser\n",
|
||
" )\n",
|
||
" perf_chain = (\n",
|
||
" ChatPromptTemplate.from_template(\n",
|
||
" \"Suggest performance optimizations for this code:\\n{code}\\nKnown issues:\\n{bugs}\\n{style}\",\n",
|
||
" )\n",
|
||
" | model\n",
|
||
" | parser\n",
|
||
" )\n",
|
||
" report_chain = (\n",
|
||
" ChatPromptTemplate.from_template(\n",
|
||
" \"Generate a final review report with sections: Bugs, Style, Performance, Priority Fixes.\\nCode:\\n{code}\\nBugs:\\n{bugs}\\nStyle:\\n{style}\\nPerf:\\n{perf}\",\n",
|
||
" )\n",
|
||
" | model\n",
|
||
" | parser\n",
|
||
" )\n",
|
||
"\n",
|
||
" def run(payload: dict) -> str:\n",
|
||
" code = payload[\"code\"]\n",
|
||
" bugs = bug_chain.invoke({\"code\": code})\n",
|
||
" style = style_chain.invoke({\"code\": code, \"bugs\": bugs})\n",
|
||
" perf = perf_chain.invoke({\"code\": code, \"bugs\": bugs, \"style\": style})\n",
|
||
" return report_chain.invoke(\n",
|
||
" {\"code\": code, \"bugs\": bugs, \"style\": style, \"perf\": perf},\n",
|
||
" )\n",
|
||
"\n",
|
||
" return RunnableLambda(run)\n",
|
||
"\n",
|
||
"\n",
|
||
"sample_code = \"\"\"\n",
|
||
"def calculate_average(numbers):\n",
|
||
" sum = 0\n",
|
||
" for i in range(len(numbers)):\n",
|
||
" sum = sum + numbers[i]\n",
|
||
" return sum / len(numbers)\n",
|
||
"\"\"\"\n",
|
||
"\n",
|
||
"review = create_code_review_chain().invoke({\"code\": sample_code})\n",
|
||
"print(\"🔍 Code Review:\")\n",
|
||
"print(review)\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 259,
|
||
"id": "dc1ff9ac",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"📊 Analysis Report:\n",
|
||
"## Data Analysis Report\n",
|
||
"\n",
|
||
"### Key Metrics\n",
|
||
"To understand **user engagement** from the given database, you should track the following key metrics:\n",
|
||
"\n",
|
||
"### **1. Basic Engagement Metrics**\n",
|
||
"- **Total Users** – Count of unique `user_id`s.\n",
|
||
"- **Active Users** – Users with at least one `action_type` in a given time period.\n",
|
||
"- **New vs. Returning Users** – First-time users vs. repeat visitors.\n",
|
||
"\n",
|
||
"### **2. Session-Based Metrics**\n",
|
||
"- **Total Sessions** – Count of unique `session_duration` records.\n",
|
||
"- **Average Session Duration** – Mean of `session_duration`.\n",
|
||
"- **Session Frequency** – Number of sessions per user over time.\n",
|
||
"- **Bounce Rate** – % of sessions with only one `page_view` and no further actions.\n",
|
||
"\n",
|
||
"### **3. Page View Metrics**\n",
|
||
"- **Total Page Views** – Sum of `page_views` across all sessions.\n",
|
||
"- **Pages per Session** – Average `page_views` per session.\n",
|
||
"- **Most Viewed Pages** – Breakdown of `page_views` by page (if available in `action_type`).\n",
|
||
"\n",
|
||
"### **4. Action-Based Metrics**\n",
|
||
"- **Action Types Distribution** – Frequency of each `action_type` (e.g., clicks, searches, downloads).\n",
|
||
"- **Actions per Session** – Average number of `action_type` records per session.\n",
|
||
"\n",
|
||
"### **5. Conversion & Purchase Metrics**\n",
|
||
"- **Total Purchases** – Sum of `purchases` (if numeric) or count of purchase events.\n",
|
||
"- **Conversion Rate** – % of sessions/users that result in a purchase.\n",
|
||
"- **Average Purchase Value** – Mean of `purchases` (if numeric).\n",
|
||
"- **Purchase Frequency** – How often users make purchases.\n",
|
||
"\n",
|
||
"### **6. Time-Based Metrics**\n",
|
||
"- **Daily/Weekly/Monthly Active Users (DAU/WAU/MAU)** – Trends in user activity.\n",
|
||
"- **Time Between Sessions** – Average gap between user sessions.\n",
|
||
"- **Peak Activity Hours** – Distribution of `timestamp` to identify busy times.\n",
|
||
"\n",
|
||
"### **7. Retention & Churn Metrics**\n",
|
||
"- **Retention Rate** – % of users who return after a certain period (e.g., 7-day, 30-day).\n",
|
||
"- **Churn Rate** – % of users who stop engaging after a session.\n",
|
||
"\n",
|
||
"### **8. Advanced Engagement Metrics**\n",
|
||
"- **Engagement Score** – Weighted score combining session duration, page views, and actions.\n",
|
||
"- **User Segmentation** – Grouping users by behavior (e.g., high-engagement vs. low-engagement).\n",
|
||
"\n",
|
||
"These metrics will help you assess **how engaged users are**, identify **drop-off points**, and optimize **user experience** for better retention and conversions. Would you like help designing SQL queries to extract these?\n",
|
||
"\n",
|
||
"### Analysis Plan\n",
|
||
"# **Analysis Plan: Understanding What Drives User Purchases**\n",
|
||
"\n",
|
||
"## **Objective**\n",
|
||
"Identify key factors influencing user purchases by analyzing engagement metrics from the user activity database.\n",
|
||
"\n",
|
||
"## **Approach**\n",
|
||
"1. **Descriptive Analysis** – Understand user behavior through engagement metrics.\n",
|
||
"2. **Correlation Analysis** – Determine which engagement metrics correlate with purchases.\n",
|
||
"3. **Segmentation Analysis** – Compare high-purchasing vs. low-purchasing user segments.\n",
|
||
"4. **Time-Based Trends** – Identify patterns in user behavior leading to conversions.\n",
|
||
"\n",
|
||
"---\n",
|
||
"\n",
|
||
"## **SQL-Style Questions to Answer the Business Question**\n",
|
||
"\n",
|
||
"### **1. Basic Engagement Metrics**\n",
|
||
"- **Total Users**\n",
|
||
" ```sql\n",
|
||
" SELECT COUNT(DISTINCT user_id) AS total_users FROM user_activity;\n",
|
||
" ```\n",
|
||
"- **Active Users (Last 30 Days)**\n",
|
||
" ```sql\n",
|
||
" SELECT COUNT(DISTINCT user_id) AS active_users\n",
|
||
" FROM user_activity\n",
|
||
" WHERE timestamp >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY);\n",
|
||
" ```\n",
|
||
"- **New vs. Returning Users**\n",
|
||
" ```sql\n",
|
||
" WITH first_visit AS (\n",
|
||
" SELECT user_id, MIN(timestamp) AS first_session\n",
|
||
" FROM user_activity\n",
|
||
" GROUP BY user_id\n",
|
||
" )\n",
|
||
" SELECT\n",
|
||
" COUNT(DISTINCT CASE WHEN timestamp = first_session THEN user_id END) AS new_users,\n",
|
||
" COUNT(DISTINCT user_id) - COUNT(DISTINCT CASE WHEN timestamp = first_session THEN user_id END) AS returning_users\n",
|
||
" FROM user_activity;\n",
|
||
" ```\n",
|
||
"\n",
|
||
"### **2. Session-Based Metrics**\n",
|
||
"- **Total Sessions**\n",
|
||
" ```sql\n",
|
||
" SELECT COUNT(DISTINCT session_id) AS total_sessions FROM user_activity;\n",
|
||
" ```\n",
|
||
"- **Average Session Duration**\n",
|
||
" ```sql\n",
|
||
" SELECT AVG(session_duration) AS avg_session_duration FROM user_activity;\n",
|
||
" ```\n",
|
||
"- **Bounce Rate (Sessions with 1 Page View)**\n",
|
||
" ```sql\n",
|
||
" SELECT\n",
|
||
" COUNT(DISTINCT session_id) AS total_sessions,\n",
|
||
" COUNT(DISTINCT CASE WHEN page_views = 1 THEN session_id END) AS bounced_sessions,\n",
|
||
" (COUNT(DISTINCT CASE WHEN page_views = 1 THEN session_id END) / COUNT(DISTINCT session_id)) * 100 AS bounce_rate\n",
|
||
" FROM user_activity;\n",
|
||
" ```\n",
|
||
"\n",
|
||
"### **3. Page View Metrics**\n",
|
||
"- **Total Page Views**\n",
|
||
" ```sql\n",
|
||
" SELECT SUM(page_views) AS total_page_views FROM user_activity;\n",
|
||
" ```\n",
|
||
"- **Pages per Session**\n",
|
||
" ```sql\n",
|
||
" SELECT AVG(page_views) AS avg_pages_per_session FROM user_activity;\n",
|
||
" ```\n",
|
||
"- **Most Viewed Pages (if `action_type` includes page names)**\n",
|
||
" ```sql\n",
|
||
" SELECT\n",
|
||
" action_type AS page_name,\n",
|
||
" COUNT(*) AS page_views\n",
|
||
" FROM user_activity\n",
|
||
" WHERE action_type LIKE '%page%'\n",
|
||
" GROUP BY action_type\n",
|
||
" ORDER BY page_views DESC\n",
|
||
" LIMIT 10;\n",
|
||
" ```\n",
|
||
"\n",
|
||
"### **4. Action-Based Metrics**\n",
|
||
"- **Action Types Distribution**\n",
|
||
" ```sql\n",
|
||
" SELECT\n",
|
||
" action_type,\n",
|
||
" COUNT(*) AS action_count\n",
|
||
" FROM user_activity\n",
|
||
" GROUP BY action_type\n",
|
||
" ORDER BY action_count DESC;\n",
|
||
" ```\n",
|
||
"- **Actions per Session**\n",
|
||
" ```sql\n",
|
||
" SELECT AVG(actions_per_session) AS avg_actions_per_session\n",
|
||
" FROM (\n",
|
||
" SELECT session_id, COUNT(*) AS actions_per_session\n",
|
||
" FROM user_activity\n",
|
||
" GROUP BY session_id\n",
|
||
" ) AS session_actions;\n",
|
||
" ```\n",
|
||
"\n",
|
||
"### **5. Conversion & Purchase Metrics**\n",
|
||
"- **Total Purchases**\n",
|
||
" ```sql\n",
|
||
" SELECT SUM(purchases) AS total_purchases FROM user_activity;\n",
|
||
" ```\n",
|
||
"- **Conversion Rate (Sessions with Purchases)**\n",
|
||
" ```sql\n",
|
||
" SELECT\n",
|
||
" COUNT(DISTINCT session_id) AS total_sessions,\n",
|
||
" COUNT(DISTINCT CASE WHEN purchases > 0 THEN session_id END) AS sessions_with_purchases,\n",
|
||
" (COUNT(DISTINCT CASE WHEN purchases > 0 THEN session_id END) / COUNT(DISTINCT session_id)) * 100 AS conversion_rate\n",
|
||
" FROM user_activity;\n",
|
||
" ```\n",
|
||
"- **Average Purchase Value**\n",
|
||
" ```sql\n",
|
||
" SELECT AVG(purchases) AS avg_purchase_value FROM user_activity WHERE purchases > 0;\n",
|
||
" ```\n",
|
||
"- **Purchase Frequency (Users with Multiple Purchases)**\n",
|
||
" ```sql\n",
|
||
" SELECT\n",
|
||
" user_id,\n",
|
||
" COUNT(*) AS purchase_count\n",
|
||
" FROM user_activity\n",
|
||
" WHERE purchases > 0\n",
|
||
" GROUP BY user_id\n",
|
||
" HAVING COUNT(*) > 1;\n",
|
||
" ```\n",
|
||
"\n",
|
||
"### **6. Time-Based Metrics**\n",
|
||
"- **Daily Active Users (DAU)**\n",
|
||
" ```sql\n",
|
||
" SELECT\n",
|
||
" DATE(timestamp) AS day,\n",
|
||
" COUNT(DISTINCT user_id) AS daily_active_users\n",
|
||
" FROM user_activity\n",
|
||
" GROUP BY DATE(timestamp)\n",
|
||
" ORDER BY day;\n",
|
||
" ```\n",
|
||
"- **Peak Activity Hours**\n",
|
||
" ```sql\n",
|
||
" SELECT\n",
|
||
" HOUR(timestamp) AS hour_of_day,\n",
|
||
" COUNT(*) AS activity_count\n",
|
||
" FROM user_activity\n",
|
||
" GROUP BY HOUR(timestamp)\n",
|
||
" ORDER BY activity_count DESC;\n",
|
||
" ```\n",
|
||
"\n",
|
||
"### **7. Retention & Churn Metrics**\n",
|
||
"- **7-Day Retention Rate**\n",
|
||
" ```sql\n",
|
||
" WITH first_visits AS (\n",
|
||
" SELECT user_id, MIN(timestamp) AS first_session\n",
|
||
" FROM user_activity\n",
|
||
" GROUP BY user_id\n",
|
||
" )\n",
|
||
" SELECT\n",
|
||
" COUNT(DISTINCT user_id) AS total_users,\n",
|
||
" COUNT(DISTINCT CASE WHEN DATEDIFF(timestamp, first_session) <= 7 THEN user_id END) AS retained_users,\n",
|
||
" (COUNT(DISTINCT CASE WHEN DATEDIFF(timestamp, first_session) <= 7 THEN user_id END) / COUNT(DISTINCT user_id)) * 100 AS retention_rate\n",
|
||
" FROM user_activity\n",
|
||
" JOIN first_visits ON user_activity.user_id = first_visits.user_id;\n",
|
||
" ```\n",
|
||
"- **Churn Rate (Users Who Stopped Engaging After 30 Days)**\n",
|
||
" ```sql\n",
|
||
" WITH last_activity AS (\n",
|
||
" SELECT user_id, MAX(timestamp) AS last_session\n",
|
||
" FROM user_activity\n",
|
||
" GROUP BY user_id\n",
|
||
" )\n",
|
||
" SELECT\n",
|
||
" COUNT(DISTINCT user_id) AS total_users,\n",
|
||
" COUNT(DISTINCT CASE WHEN DATEDIFF(CURRENT_DATE(), last_session) > 30 THEN user_id END) AS churned_users,\n",
|
||
" (COUNT(DISTINCT CASE WHEN DATEDIFF(CURRENT_DATE(), last_session) > 30 THEN user_id END) / COUNT(DISTINCT user_id)) * 100 AS churn_rate\n",
|
||
" FROM last_activity;\n",
|
||
" ```\n",
|
||
"\n",
|
||
"### **8. Advanced Engagement Metrics**\n",
|
||
"- **Engagement Score (Weighted Metrics)**\n",
|
||
" ```sql\n",
|
||
" SELECT\n",
|
||
" user_id,\n",
|
||
" (session_duration * 0.4 + page_views * 0.3 + COUNT(*) * 0.3) AS engagement_score\n",
|
||
" FROM user_activity\n",
|
||
" GROUP BY user_id\n",
|
||
" ORDER BY engagement_score DESC;\n",
|
||
" ```\n",
|
||
"- **User Segmentation (High vs. Low Engagement)**\n",
|
||
" ```sql\n",
|
||
" WITH user_metrics AS (\n",
|
||
" SELECT\n",
|
||
" user_id,\n",
|
||
" AVG(session_duration) AS avg_session_duration,\n",
|
||
" AVG(page_views) AS avg_page_views,\n",
|
||
" COUNT(*) AS total_actions\n",
|
||
" FROM user_activity\n",
|
||
" GROUP BY user_id\n",
|
||
" )\n",
|
||
" SELECT\n",
|
||
" user_id,\n",
|
||
" CASE\n",
|
||
" WHEN (avg_session_duration > 5 AND avg_page_views > 3 AND total_actions > 10) THEN 'High Engagement'\n",
|
||
" WHEN (avg_session_duration <= 5 AND avg_page_views <= 3 AND total_actions <= 5) THEN 'Low Engagement'\n",
|
||
" ELSE 'Medium Engagement'\n",
|
||
" END AS engagement_segment\n",
|
||
" FROM user_metrics;\n",
|
||
" ```\n",
|
||
"\n",
|
||
"---\n",
|
||
"\n",
|
||
"## **Next Steps**\n",
|
||
"1. **Run these queries** to extract insights.\n",
|
||
"2. **Compare high-purchasing vs. low-purchasing segments** to identify key drivers.\n",
|
||
"3. **Optimize UX** based on drop-off points and peak engagement times.\n",
|
||
"\n",
|
||
"Would you like additional refinements or deeper analysis on specific metrics?\n",
|
||
"\n",
|
||
"### Interpretation\n",
|
||
"### **Interpretation of Potential Findings for Stakeholders**\n",
|
||
"\n",
|
||
"This analysis plan aims to uncover key drivers of user purchases by examining engagement metrics, session behavior, and conversion patterns. Below are potential findings and their implications for different stakeholders:\n",
|
||
"\n",
|
||
"---\n",
|
||
"\n",
|
||
"### **1. Descriptive & Engagement Metrics**\n",
|
||
"**Potential Findings:**\n",
|
||
"- **Low active user retention** (e.g., high churn rate, low 7-day retention).\n",
|
||
"- **High bounce rate** (users leaving after one page view).\n",
|
||
"- **Low average session duration** (users not engaging deeply).\n",
|
||
"\n",
|
||
"**Stakeholder Implications:**\n",
|
||
"- **Marketing Team:** May need to improve acquisition strategies to attract higher-quality users.\n",
|
||
"- **Product Team:** Should optimize landing pages to reduce bounce rates (e.g., clearer CTAs, faster load times).\n",
|
||
"- **UX/UI Team:** Could redesign navigation to encourage deeper engagement.\n",
|
||
"\n",
|
||
"---\n",
|
||
"\n",
|
||
"### **2. Session & Page View Metrics**\n",
|
||
"**Potential Findings:**\n",
|
||
"- **Most viewed pages** are not conversion-focused (e.g., blog posts vs. product pages).\n",
|
||
"- **Low pages per session** (users not exploring beyond the first page).\n",
|
||
"- **Peak activity hours** misaligned with business operations (e.g., high traffic at night with no support).\n",
|
||
"\n",
|
||
"**Stakeholder Implications:**\n",
|
||
"- **Content Team:** Prioritize high-intent pages (e.g., product pages, promotions).\n",
|
||
"- **Operations Team:** Adjust support/availability during peak hours.\n",
|
||
"- **Product Team:** Implement recommendations or exit-intent popups to guide users.\n",
|
||
"\n",
|
||
"---\n",
|
||
"\n",
|
||
"### **3. Conversion & Purchase Metrics**\n",
|
||
"**Potential Findings:**\n",
|
||
"- **Low conversion rate** (e.g., <2% of sessions result in purchases).\n",
|
||
"- **High average purchase value but few buyers** (indicating a niche audience).\n",
|
||
"- **Frequent repeat buyers** (suggesting loyalty programs work).\n",
|
||
"\n",
|
||
"**Stakeholder Implications:**\n",
|
||
"- **Sales Team:** Focus on upselling to repeat buyers.\n",
|
||
"- **Pricing Team:** Assess if discounts or bundles could boost conversions.\n",
|
||
"- **Marketing Team:** Target lookalike audiences of high-value buyers.\n",
|
||
"\n",
|
||
"---\n",
|
||
"\n",
|
||
"### **4. Time-Based & Retention Metrics**\n",
|
||
"**Potential Findings:**\n",
|
||
"- **Declining DAU (Daily Active Users)** over time.\n",
|
||
"- **High churn rate** (users stopping engagement after 30 days).\n",
|
||
"- **Retention drops sharply after Day 1**.\n",
|
||
"\n",
|
||
"**Stakeholder Implications:**\n",
|
||
"- **Retention Team:** Implement onboarding improvements (e.g., email drip campaigns).\n",
|
||
"- **Product Team:** Add features to re-engage lapsed users (e.g., personalized recommendations).\n",
|
||
"- **Marketing Team:** Test re-engagement campaigns (e.g., win-back offers).\n",
|
||
"\n",
|
||
"---\n",
|
||
"\n",
|
||
"### **5. Segmentation & Advanced Metrics**\n",
|
||
"**Potential Findings:**\n",
|
||
"- **High-engagement users** (long sessions, many page views) have higher purchase rates.\n",
|
||
"- **Low-engagement users** rarely convert, even with multiple sessions.\n",
|
||
"- **Engagement score** correlates strongly with purchases.\n",
|
||
"\n",
|
||
"**Stakeholder Implications:**\n",
|
||
"- **Product Team:** Optimize for high-engagement behaviors (e.g., gamification, personalized content).\n",
|
||
"- **Marketing Team:** Tailor campaigns to high-engagement segments (e.g., loyalty rewards).\n",
|
||
"- **Data Team:** Build predictive models to identify high-potential users early.\n",
|
||
"\n",
|
||
"---\n",
|
||
"\n",
|
||
"### **Key Takeaways for Stakeholders**\n",
|
||
"1. **Prioritize UX improvements** where drop-offs occur (e.g., checkout flow, product pages).\n",
|
||
"2. **Leverage peak engagement times** for promotions or support.\n",
|
||
"3. **Focus on high-value segments** (repeat buyers, high-engagement users).\n",
|
||
"4. **Reduce churn** with better onboarding and re-engagement strategies.\n",
|
||
"\n",
|
||
"Would you like to refine the analysis further for a specific stakeholder group (e.g., marketing, product, executive)?\n",
|
||
"\n",
|
||
"### Visualization Recommendations\n",
|
||
"Here are recommended visualizations for each analysis context, along with the rationale for each choice:\n",
|
||
"\n",
|
||
"---\n",
|
||
"\n",
|
||
"### **1. Basic Engagement Metrics**\n",
|
||
"- **Total Users / Active Users**\n",
|
||
" - **Chart Type**: **Bar Chart** (for comparison) or **Line Chart** (for trends over time)\n",
|
||
" - **Why**: Simple comparison of unique users vs. active users. Line charts show growth trends.\n",
|
||
"\n",
|
||
"- **New vs. Returning Users**\n",
|
||
" - **Chart Type**: **Pie Chart** or **Stacked Bar Chart**\n",
|
||
" - **Why**: Pie charts show proportions clearly, while stacked bars allow time-based comparison.\n",
|
||
"\n",
|
||
"---\n",
|
||
"\n",
|
||
"### **2. Session-Based Metrics**\n",
|
||
"- **Total Sessions / Average Session Duration**\n",
|
||
" - **Chart Type**: **Line Chart** (for trends) or **Bar Chart** (for comparisons)\n",
|
||
" - **Why**: Line charts reveal trends, while bar charts highlight differences (e.g., by device type).\n",
|
||
"\n",
|
||
"- **Session Frequency**\n",
|
||
" - **Chart Type**: **Histogram** or **Box Plot**\n",
|
||
" - **Why**: Histograms show distribution of sessions per user, while box plots highlight outliers.\n",
|
||
"\n",
|
||
"- **Bounce Rate**\n",
|
||
" - **Chart Type**: **Bar Chart** (by segment) or **Line Chart** (over time)\n",
|
||
" - **Why**: Bars compare bounce rates across groups (e.g., new vs. returning users).\n",
|
||
"\n",
|
||
"---\n",
|
||
"\n",
|
||
"### **3. Page View Metrics**\n",
|
||
"- **Total Page Views / Pages per Session**\n",
|
||
" - **Chart Type**: **Line Chart** (for trends) or **Bar Chart** (for comparisons)\n",
|
||
" - **Why**: Line charts show growth, while bars compare metrics (e.g., by page type).\n",
|
||
"\n",
|
||
"- **Most Viewed Pages**\n",
|
||
" - **Chart Type**: **Horizontal Bar Chart** or **Treemap**\n",
|
||
" - **Why**: Horizontal bars rank pages clearly, while treemaps show hierarchical relationships.\n",
|
||
"\n",
|
||
"---\n",
|
||
"\n",
|
||
"### **4. Action-Based Metrics**\n",
|
||
"- **Action Types Distribution**\n",
|
||
" - **Chart Type**: **Pie Chart** or **Bar Chart**\n",
|
||
" - **Why**: Pie charts show proportions, while bars allow ranking of actions.\n",
|
||
"\n",
|
||
"- **Actions per Session**\n",
|
||
" - **Chart Type**: **Box Plot** or **Histogram**\n",
|
||
" - **Why**: Box plots show distribution and outliers, while histograms reveal frequency.\n",
|
||
"\n",
|
||
"---\n",
|
||
"\n",
|
||
"### **5. Conversion & Purchase Metrics**\n",
|
||
"- **Total Purchases / Conversion Rate**\n",
|
||
" - **Chart Type**: **Line Chart** (for trends) or **Bar Chart** (for comparisons)\n",
|
||
" - **Why**: Line charts track performance over time, while bars compare segments.\n",
|
||
"\n",
|
||
"- **Average Purchase Value**\n",
|
||
" - **Chart Type**: **Box Plot** or **Line Chart**\n",
|
||
" - **Why**: Box plots show distribution, while lines track changes over time.\n",
|
||
"\n",
|
||
"- **Purchase Frequency**\n",
|
||
" - **Chart Type**: **Histogram** or **Line Chart**\n",
|
||
" - **Why**: Histograms show how often users purchase, while lines track trends.\n",
|
||
"\n",
|
||
"---\n",
|
||
"\n",
|
||
"### **6. Time-Based Metrics**\n",
|
||
"- **DAU/WAU/MAU Trends**\n",
|
||
" - **Chart Type**: **Line Chart** (stacked or separate)\n",
|
||
" - **Why**: Line charts clearly show growth or decline in active users.\n",
|
||
"\n",
|
||
"- **Peak Activity Hours**\n",
|
||
" - **Chart Type**: **Heatmap** or **Bar Chart**\n",
|
||
" - **Why**: Heatmaps show time-of-day patterns, while bars rank peak hours.\n",
|
||
"\n",
|
||
"---\n",
|
||
"\n",
|
||
"### **7. Retention & Churn Metrics**\n",
|
||
"- **Retention Rate**\n",
|
||
" - **Chart Type**: **Line Chart** (cohort analysis) or **Bar Chart**\n",
|
||
" - **Why**: Line charts show retention trends, while bars compare cohorts.\n",
|
||
"\n",
|
||
"- **Churn Rate**\n",
|
||
" - **Chart Type**: **Line Chart** (for trends) or **Bar Chart** (for comparisons)\n",
|
||
" - **Why**: Lines track churn over time, while bars compare segments.\n",
|
||
"\n",
|
||
"---\n",
|
||
"\n",
|
||
"### **8. Advanced Engagement Metrics**\n",
|
||
"- **Engagement Score**\n",
|
||
" - **Chart Type**: **Scatter Plot** (vs. purchases) or **Box Plot**\n",
|
||
" - **Why**: Scatter plots reveal correlations, while box plots compare score distributions.\n",
|
||
"\n",
|
||
"- **User Segmentation**\n",
|
||
" - **Chart Type**: **Treemap** or **Stacked Bar Chart**\n",
|
||
" - **Why**: Treemaps show hierarchical segmentation, while stacked bars compare groups.\n",
|
||
"\n",
|
||
"---\n",
|
||
"\n",
|
||
"### **Key Visualization Principles**\n",
|
||
"1. **Trends Over Time** → **Line Charts**\n",
|
||
"2. **Comparisons** → **Bar Charts**\n",
|
||
"3. **Proportions** → **Pie Charts**\n",
|
||
"4. **Distributions** → **Histograms / Box Plots**\n",
|
||
"5. **Correlations** → **Scatter Plots**\n",
|
||
"6. **Hierarchical Data** → **Treemaps**\n",
|
||
"\n",
|
||
"Would you like mockups or specific tool recommendations (e.g., Tableau, Power BI, Python libraries)?\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"from langchain_core.prompts import ChatPromptTemplate\n",
|
||
"from langchain_mistralai import ChatMistralAI\n",
|
||
"\n",
|
||
"\n",
|
||
"def create_data_analysis_pipeline():\n",
|
||
" model = ChatMistralAI(model=\"mistral-small-latest\", temperature=0.2)\n",
|
||
" parser = StrOutputParser()\n",
|
||
"\n",
|
||
" metrics_chain = (\n",
|
||
" ChatPromptTemplate.from_template(\n",
|
||
" \"Given this data description, list key metrics to track.\\n{data_description}\",\n",
|
||
" )\n",
|
||
" | model\n",
|
||
" | parser\n",
|
||
" )\n",
|
||
" plan_chain = (\n",
|
||
" ChatPromptTemplate.from_template(\n",
|
||
" \"Create an analysis plan and SQL-style questions from:\\nData: {data_description}\\nBusiness question: {business_question}\\nMetrics: {metrics}\",\n",
|
||
" )\n",
|
||
" | model\n",
|
||
" | parser\n",
|
||
" )\n",
|
||
" interpretation_chain = (\n",
|
||
" ChatPromptTemplate.from_template(\n",
|
||
" \"Interpret potential findings from this plan for stakeholders.\\nPlan: {plan}\\nBusiness question: {business_question}\",\n",
|
||
" )\n",
|
||
" | model\n",
|
||
" | parser\n",
|
||
" )\n",
|
||
" viz_chain = (\n",
|
||
" ChatPromptTemplate.from_template(\n",
|
||
" \"Recommend visualizations (chart type + why) for this analysis context:\\nMetrics: {metrics}\\nPlan: {plan}\",\n",
|
||
" )\n",
|
||
" | model\n",
|
||
" | parser\n",
|
||
" )\n",
|
||
"\n",
|
||
" def run(payload: dict) -> str:\n",
|
||
" metrics = metrics_chain.invoke(payload)\n",
|
||
" plan = plan_chain.invoke({**payload, \"metrics\": metrics})\n",
|
||
" interpretation = interpretation_chain.invoke(\n",
|
||
" {\"plan\": plan, \"business_question\": payload[\"business_question\"]},\n",
|
||
" )\n",
|
||
" viz = viz_chain.invoke({\"metrics\": metrics, \"plan\": plan})\n",
|
||
" return (\n",
|
||
" \"## Data Analysis Report\\n\\n\"\n",
|
||
" f\"### Key Metrics\\n{metrics}\\n\\n\"\n",
|
||
" f\"### Analysis Plan\\n{plan}\\n\\n\"\n",
|
||
" f\"### Interpretation\\n{interpretation}\\n\\n\"\n",
|
||
" f\"### Visualization Recommendations\\n{viz}\"\n",
|
||
" )\n",
|
||
"\n",
|
||
" return RunnableLambda(run)\n",
|
||
"\n",
|
||
"\n",
|
||
"data_description = \"\"\"\n",
|
||
"User activity database with columns: user_id, action_type, timestamp,\n",
|
||
"session_duration, page_views, purchases. Need to understand user engagement.\n",
|
||
"\"\"\"\n",
|
||
"\n",
|
||
"analysis = create_data_analysis_pipeline().invoke(\n",
|
||
" {\n",
|
||
" \"data_description\": data_description,\n",
|
||
" \"business_question\": \"What drives user purchases?\",\n",
|
||
" },\n",
|
||
")\n",
|
||
"\n",
|
||
"print(\"📊 Analysis Report:\")\n",
|
||
"print(analysis)\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "10939734",
|
||
"metadata": {},
|
||
"source": [
|
||
"## 🔀 Exercise Set 6: Prompt Routing & Dynamic Workflows\n",
|
||
"\n",
|
||
"Routing directs inputs to different processing paths based on content, allowing intelligent workflow management."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 260,
|
||
"id": "8582df02",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"\n",
|
||
"📧 Email: URGENT: Your service has been down for 3 hours! We...\n",
|
||
"Response: Classification: {'urgency': 'normal', 'category': 'inquiry', 'sentiment': 'neutral'}\n",
|
||
"Suggested strategy: Provide concise answer and link to relevant resources.\n",
|
||
"Tone: neutral\n",
|
||
"\n",
|
||
"📧 Email: Hi, I'm curious about your pricing plans for enter...\n",
|
||
"Response: Classification: {'urgency': 'normal', 'category': 'inquiry', 'sentiment': 'neutral'}\n",
|
||
"Suggested strategy: Provide concise answer and link to relevant resources.\n",
|
||
"Tone: neutral\n",
|
||
"\n",
|
||
"📧 Email: Just wanted to say your product is amazing! Keep u...\n",
|
||
"Response: Classification: {'urgency': 'normal', 'category': 'inquiry', 'sentiment': 'neutral'}\n",
|
||
"Suggested strategy: Provide concise answer and link to relevant resources.\n",
|
||
"Tone: neutral\n",
|
||
"\n",
|
||
"📧 Email: How do I integrate your API with Python? Getting e...\n",
|
||
"Response: Classification: {'urgency': 'normal', 'category': 'inquiry', 'sentiment': 'neutral'}\n",
|
||
"Suggested strategy: Provide concise answer and link to relevant resources.\n",
|
||
"Tone: neutral\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"import json\n",
|
||
"\n",
|
||
"from langchain_core.prompts import ChatPromptTemplate\n",
|
||
"from langchain_mistralai import ChatMistralAI\n",
|
||
"\n",
|
||
"\n",
|
||
"def create_email_router():\n",
|
||
" model = ChatMistralAI(model=\"mistral-small-latest\", temperature=0.1)\n",
|
||
"\n",
|
||
" classifier_prompt = ChatPromptTemplate.from_template(\n",
|
||
" \"Classify this email. Return ONLY a JSON with keys urgency, category, sentiment:\\n{email}\",\n",
|
||
" )\n",
|
||
" classifier_chain = classifier_prompt | model | StrOutputParser()\n",
|
||
"\n",
|
||
" def _parse_classification(raw: str) -> dict:\n",
|
||
" try:\n",
|
||
" return json.loads(raw)\n",
|
||
" except json.JSONDecodeError:\n",
|
||
" text = raw.lower()\n",
|
||
" urgency = (\n",
|
||
" \"urgent\" if \"urgent\" in text or \"losing money\" in text else \"normal\"\n",
|
||
" )\n",
|
||
" if \"api\" in text or \"error\" in text:\n",
|
||
" category = \"technical\"\n",
|
||
" elif \"price\" in text or \"charged\" in text:\n",
|
||
" category = \"inquiry\"\n",
|
||
" elif \"amazing\" in text or \"great\" in text:\n",
|
||
" category = \"feedback\"\n",
|
||
" else:\n",
|
||
" category = \"inquiry\"\n",
|
||
" sentiment = \"negative\" if \"!\" in text or \"error\" in text else \"neutral\"\n",
|
||
" return {\"urgency\": urgency, \"category\": category, \"sentiment\": sentiment}\n",
|
||
"\n",
|
||
" def route(payload: dict) -> str:\n",
|
||
" email = payload[\"email\"]\n",
|
||
" classification = _parse_classification(\n",
|
||
" classifier_chain.invoke({\"email\": email}),\n",
|
||
" )\n",
|
||
"\n",
|
||
" urgency = classification.get(\"urgency\", \"normal\")\n",
|
||
" category = classification.get(\"category\", \"inquiry\")\n",
|
||
" sentiment = classification.get(\"sentiment\", \"neutral\")\n",
|
||
"\n",
|
||
" if urgency == \"urgent\" and category in {\"complaint\", \"technical\"}:\n",
|
||
" strategy = \"Prioritize immediate apology + concrete ETA + escalation path.\"\n",
|
||
" elif category == \"technical\":\n",
|
||
" strategy = \"Provide troubleshooting checklist and request key logs.\"\n",
|
||
" elif category == \"feedback\":\n",
|
||
" strategy = \"Thank user warmly and invite optional testimonial.\"\n",
|
||
" else:\n",
|
||
" strategy = \"Provide concise answer and link to relevant resources.\"\n",
|
||
"\n",
|
||
" return (\n",
|
||
" f\"Classification: {classification}\\n\"\n",
|
||
" f\"Suggested strategy: {strategy}\\n\"\n",
|
||
" f\"Tone: {sentiment}\"\n",
|
||
" )\n",
|
||
"\n",
|
||
" return RunnableLambda(route)\n",
|
||
"\n",
|
||
"\n",
|
||
"test_emails = [\n",
|
||
" \"URGENT: Your service has been down for 3 hours! We're losing money!\",\n",
|
||
" \"Hi, I'm curious about your pricing plans for enterprises.\",\n",
|
||
" \"Just wanted to say your product is amazing! Keep up the great work!\",\n",
|
||
" \"How do I integrate your API with Python? Getting error 403.\",\n",
|
||
"]\n",
|
||
"\n",
|
||
"router = create_email_router()\n",
|
||
"for email in test_emails:\n",
|
||
" result = router.invoke({\"email\": email})\n",
|
||
" print(f\"\\n📧 Email: {email[:50]}...\")\n",
|
||
" print(f\"Response: {result}\")\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 261,
|
||
"id": "571874e0",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"\n",
|
||
"💻 Q: How do I use async/await in Python?\n",
|
||
"A: [PYTHON]\n",
|
||
"Async/await in Python is used to write concurrent code using the asyncio library. Here's a concise explanation and a code snippet:\n",
|
||
"\n",
|
||
"**Explanation:**\n",
|
||
"- `async` is used to declare a coroutine f...\n",
|
||
"\n",
|
||
"💻 Q: What's the difference between let and const in JavaScript?\n",
|
||
"A: [JAVASCRIPT]\n",
|
||
"In modern JavaScript, both `let` and `const` are used for variable declaration, but they have some key differences:\n",
|
||
"\n",
|
||
"1. **Reassignment**:\n",
|
||
" - `let`: Variables declared with `let` can be ...\n",
|
||
"\n",
|
||
"💻 Q: How to optimize this SQL query with multiple JOINs?\n",
|
||
"A: [SQL]\n",
|
||
"Optimizing SQL queries with multiple JOINs involves several strategies, primarily focusing on indexing and query structure. Here are some steps to help you optimize such queries:\n",
|
||
"\n",
|
||
"1. **Indexing*...\n",
|
||
"\n",
|
||
"💻 Q: Best practices for Docker container orchestration?\n",
|
||
"A: [DEVOPS]\n",
|
||
"As a DevOps expert focusing on reliability, automation, and observability, here are some best practices for Docker container orchestration:\n",
|
||
"\n",
|
||
"1. **Choose the Right Orchestration Tool:**\n",
|
||
" - S...\n",
|
||
"\n",
|
||
"💻 Q: Explain the difference between compilation and interpretation\n",
|
||
"A: [GENERAL]\n",
|
||
"Certainly! Understanding the difference between compilation and interpretation is fundamental in programming. Here's a clear explanation of each:\n",
|
||
"\n",
|
||
"### Compilation\n",
|
||
"\n",
|
||
"**Definition:**\n",
|
||
"Compilatio...\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"from langchain_core.prompts import ChatPromptTemplate\n",
|
||
"from langchain_mistralai import ChatMistralAI\n",
|
||
"\n",
|
||
"\n",
|
||
"def create_code_helper_router():\n",
|
||
" model = ChatMistralAI(model=\"mistral-small-latest\", temperature=0.2)\n",
|
||
"\n",
|
||
" prompts = {\n",
|
||
" \"python\": \"You are a Python expert. Answer with concise explanation + code snippet.\",\n",
|
||
" \"javascript\": \"You are a JavaScript expert. Focus on modern JS patterns.\",\n",
|
||
" \"sql\": \"You are a SQL performance expert. Emphasize indexing and query structure.\",\n",
|
||
" \"devops\": \"You are a DevOps expert. Focus on reliability, automation, and observability.\",\n",
|
||
" \"general\": \"You are a general programming expert. Provide clear fundamentals.\",\n",
|
||
" }\n",
|
||
"\n",
|
||
" def detect_language(question: str) -> str:\n",
|
||
" q = question.lower()\n",
|
||
" if \"python\" in q or \"pandas\" in q or \"async/await in python\" in q:\n",
|
||
" return \"python\"\n",
|
||
" if \"javascript\" in q or \"js\" in q or \"let and const\" in q:\n",
|
||
" return \"javascript\"\n",
|
||
" if \"sql\" in q or \"join\" in q or \"query\" in q:\n",
|
||
" return \"sql\"\n",
|
||
" if \"docker\" in q or \"kubernetes\" in q or \"devops\" in q:\n",
|
||
" return \"devops\"\n",
|
||
" return \"general\"\n",
|
||
"\n",
|
||
" def run(payload: dict) -> str:\n",
|
||
" question = payload[\"question\"]\n",
|
||
" expert = detect_language(question)\n",
|
||
" prompt = ChatPromptTemplate.from_messages(\n",
|
||
" [\n",
|
||
" (\"system\", prompts[expert]),\n",
|
||
" (\"user\", \"{question}\"),\n",
|
||
" ],\n",
|
||
" )\n",
|
||
" chain = prompt | model | StrOutputParser()\n",
|
||
" answer = chain.invoke({\"question\": question})\n",
|
||
" return f\"[{expert.upper()}]\\n{answer}\"\n",
|
||
"\n",
|
||
" return RunnableLambda(run)\n",
|
||
"\n",
|
||
"\n",
|
||
"programming_questions = [\n",
|
||
" \"How do I use async/await in Python?\",\n",
|
||
" \"What's the difference between let and const in JavaScript?\",\n",
|
||
" \"How to optimize this SQL query with multiple JOINs?\",\n",
|
||
" \"Best practices for Docker container orchestration?\",\n",
|
||
" \"Explain the difference between compilation and interpretation\",\n",
|
||
"]\n",
|
||
"\n",
|
||
"code_router = create_code_helper_router()\n",
|
||
"for question in programming_questions:\n",
|
||
" response = code_router.invoke({\"question\": question})\n",
|
||
" print(f\"\\n💻 Q: {question}\")\n",
|
||
" print(f\"A: {response[:200]}...\")\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 262,
|
||
"id": "b995a02b",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"\n",
|
||
"📋 Content: Check out my awesome recipe for chocolate cake!\n",
|
||
"Decision: publish\n",
|
||
"Confidence: 0.91\n",
|
||
"Reason: No critical risk indicators found.\n",
|
||
"\n",
|
||
"📋 Content: Buy cheap medications no prescription needed!!!\n",
|
||
"Decision: auto_reject\n",
|
||
"Confidence: 0.95\n",
|
||
"Reason: Spam-like promotional pattern detected.\n",
|
||
"\n",
|
||
"📋 Content: I hate this stupid product, waste of money\n",
|
||
"Decision: human_review\n",
|
||
"Confidence: 0.78\n",
|
||
"Reason: Toxic/hostile language requires manual verification.\n",
|
||
"\n",
|
||
"📋 Content: Meet me at the park tomorrow at 3pm\n",
|
||
"Decision: publish\n",
|
||
"Confidence: 0.91\n",
|
||
"Reason: No critical risk indicators found.\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"def create_content_moderator():\n",
|
||
" \"\"\"Simple rule-based moderator with confidence and reasoning.\"\"\"\n",
|
||
"\n",
|
||
" def moderate(payload: dict) -> dict:\n",
|
||
" content = payload[\"content\"]\n",
|
||
" lower = content.lower()\n",
|
||
"\n",
|
||
" spam_keywords = [\"buy cheap\", \"no prescription\", \"!!!\", \"click here\"]\n",
|
||
" unsafe_keywords = [\"kill\", \"bomb\", \"attack\"]\n",
|
||
" questionable_keywords = [\"hate\", \"stupid\", \"idiot\"]\n",
|
||
"\n",
|
||
" if any(word in lower for word in spam_keywords):\n",
|
||
" return {\n",
|
||
" \"action\": \"auto_reject\",\n",
|
||
" \"confidence\": 0.95,\n",
|
||
" \"reasoning\": \"Spam-like promotional pattern detected.\",\n",
|
||
" }\n",
|
||
"\n",
|
||
" if any(word in lower for word in unsafe_keywords):\n",
|
||
" return {\n",
|
||
" \"action\": \"block\",\n",
|
||
" \"confidence\": 0.98,\n",
|
||
" \"reasoning\": \"Potentially unsafe content detected.\",\n",
|
||
" }\n",
|
||
"\n",
|
||
" if any(word in lower for word in questionable_keywords):\n",
|
||
" return {\n",
|
||
" \"action\": \"human_review\",\n",
|
||
" \"confidence\": 0.78,\n",
|
||
" \"reasoning\": \"Toxic/hostile language requires manual verification.\",\n",
|
||
" }\n",
|
||
"\n",
|
||
" return {\n",
|
||
" \"action\": \"publish\",\n",
|
||
" \"confidence\": 0.91,\n",
|
||
" \"reasoning\": \"No critical risk indicators found.\",\n",
|
||
" }\n",
|
||
"\n",
|
||
" return RunnableLambda(moderate)\n",
|
||
"\n",
|
||
"\n",
|
||
"test_content = [\n",
|
||
" \"Check out my awesome recipe for chocolate cake!\",\n",
|
||
" \"Buy cheap medications no prescription needed!!!\",\n",
|
||
" \"I hate this stupid product, waste of money\",\n",
|
||
" \"Meet me at the park tomorrow at 3pm\",\n",
|
||
"]\n",
|
||
"\n",
|
||
"moderator = create_content_moderator()\n",
|
||
"for content in test_content:\n",
|
||
" result = moderator.invoke({\"content\": content})\n",
|
||
" print(f\"\\n📋 Content: {content}\")\n",
|
||
" print(f\"Decision: {result['action']}\")\n",
|
||
" print(f\"Confidence: {result['confidence']}\")\n",
|
||
" print(f\"Reason: {result['reasoning']}\")\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "0878059c",
|
||
"metadata": {},
|
||
"source": [
|
||
"## ⚡ Exercise Set 7: Parallel Processing & Batch Operations\n",
|
||
"\n",
|
||
"Process multiple tasks simultaneously for efficiency, especially useful for independent operations."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 263,
|
||
"id": "30282763",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"🔬 Research Report:\n",
|
||
"# Research Report: Quantum Computing\n",
|
||
"\n",
|
||
"## Historical Context\n",
|
||
"Quantum computing has its roots in the early 20th century, when quantum mechanics was first developed, but the concept of using quantum systems for computation emerged much later. Here’s a historical overview:\n",
|
||
"\n",
|
||
"### **Early Foundations (1900–1980s)**\n",
|
||
"1. **Quantum Mechanics (1900–1930s)**\n",
|
||
" - The discovery of quantum phenomena (e.g., wave-particle duality, superposition) by scientists like Max Planck, Niels Bohr, and Erwin Schrödinger laid the groundwork for quantum theory.\n",
|
||
" - The Schrödinger equation (1926) described how quantum systems evolve, a key insight for later quantum computing.\n",
|
||
"\n",
|
||
"2. **Turing Machines & Classical Computing (1930s–1940s)**\n",
|
||
" - Alan Turing’s work on computability (1936) and the development of early computers (e.g., ENIAC, 1945) set the stage for exploring computational limits.\n",
|
||
"\n",
|
||
"3. **Feynman’s Quantum Simulation Idea (1982)**\n",
|
||
" - Physicist **Richard Feynman** proposed that classical computers struggle to simulate quantum systems efficiently. He suggested that a quantum computer could solve such problems naturally.\n",
|
||
"\n",
|
||
"4. **Deutsch’s Quantum Turing Machine (1985)**\n",
|
||
" - David Deutsch formalized the idea of a **quantum Turing machine**, showing that quantum systems could perform computations beyond classical limits.\n",
|
||
"\n",
|
||
"### **Theoretical Breakthroughs (1980s–1990s)**\n",
|
||
"1. **Shor’s Algorithm (1994)**\n",
|
||
" - Peter Shor developed a quantum algorithm for **integer factorization**, threatening classical cryptography (RSA encryption). This demonstrated quantum computing’s potential.\n",
|
||
"\n",
|
||
"2. **Grover’s Algorithm (1996)**\n",
|
||
" - Lov Grover introduced a quantum search algorithm that could search an unsorted database quadratically faster than classical methods.\n",
|
||
"\n",
|
||
"3. **Quantum Error Correction (1995–1997)**\n",
|
||
" - Peter Shor and others developed **quantum error correction codes**, addressing decoherence—a major obstacle in building stable quantum computers.\n",
|
||
"\n",
|
||
"### **Experimental Progress (1990s–2000s)**\n",
|
||
"1. **First Quantum Computers (1998–2000s)**\n",
|
||
" - IBM and others built small-scale quantum processors using **superconducting qubits** and **trapped ions**.\n",
|
||
" - The first **5-qubit** and **7-qubit** quantum computers were demonstrated.\n",
|
||
"\n",
|
||
"2. **D-Wave’s Quantum Annealers (2000s–2010s)**\n",
|
||
" - D-Wave Systems commercialized **quantum annealers**, though their use in optimization remains debated.\n",
|
||
"\n",
|
||
"### **Modern Era (2010s–Present)**\n",
|
||
"1. **Quantum Supremacy (2019)**\n",
|
||
" - Google claimed **quantum supremacy** with a 53-qubit processor (Sycamore) solving a problem intractable for classical supercomputers.\n",
|
||
"\n",
|
||
"2. **NISQ Era (Noisy Intermediate-Scale Quantum)**\n",
|
||
" - Current quantum computers (IBM, Google, Rigetti, IonQ) operate in the **NISQ era**, with 50–1000 qubits but high error rates.\n",
|
||
"\n",
|
||
"3. **Error Correction & Scalability (2020s–2030s)**\n",
|
||
" - Research focuses on **fault-tolerant quantum computing**, requiring millions of physical qubits to form stable logical qubits.\n",
|
||
"\n",
|
||
"### **Key Milestones Timeline**\n",
|
||
"| **Year** | **Event** |\n",
|
||
"|----------|----------|\n",
|
||
"| 1982 | Feynman proposes quantum computing |\n",
|
||
"| 1985 | Deutsch formalizes quantum Turing machine |\n",
|
||
"| 1994 | Shor’s algorithm threatens classical encryption |\n",
|
||
"| 1996 | Grover’s search algorithm |\n",
|
||
"| 1998 | First 2-qubit quantum computer (IBM) |\n",
|
||
"| 2012 | D-Wave releases first commercial quantum annealer |\n",
|
||
"| 2019 | Google claims quantum supremacy |\n",
|
||
"| 2023 | IBM releases 433-qubit Osprey processor |\n",
|
||
"\n",
|
||
"### **Future Outlook**\n",
|
||
"- **Fault-tolerant quantum computers** (2030s+) could revolutionize cryptography, drug discovery, and AI.\n",
|
||
"- **Hybrid quantum-classical systems** are being explored for near-term applications.\n",
|
||
"\n",
|
||
"Quantum computing remains in its infancy, but its historical roots in quantum mechanics and theoretical computer science continue to drive innovation.\n",
|
||
"\n",
|
||
"## Current Trends\n",
|
||
"Quantum computing is an emerging field with significant advancements and growing interest across academia, industry, and government sectors. Here are some of the key current trends in quantum computing:\n",
|
||
"\n",
|
||
"### **1. Hardware Advancements**\n",
|
||
"- **Error Correction & Fault Tolerance**: Companies like IBM, Google, and startups are investing in error-corrected quantum processors to mitigate decoherence and noise.\n",
|
||
"- **Scalability**: Efforts are underway to build larger qubit systems (beyond 1000+ qubits) with improved coherence times.\n",
|
||
"- **Alternative Qubit Technologies**: Beyond superconducting qubits (used by IBM, Google), other approaches like trapped ions (IonQ, Honeywell), photonics (Xanadu), and topological qubits (Microsoft) are gaining traction.\n",
|
||
"\n",
|
||
"### **2. Hybrid Quantum-Classical Systems**\n",
|
||
"- **Quantum-Classical Hybrid Algorithms**: Techniques like **Variational Quantum Eigensolvers (VQE)** and **Quantum Approximate Optimization Algorithm (QAOA)** are being refined for near-term quantum devices.\n",
|
||
"- **Quantum Machine Learning (QML)**: Companies like Google, IBM, and startups are exploring quantum-enhanced AI models for optimization, classification, and generative tasks.\n",
|
||
"\n",
|
||
"### **3. Cloud-Based Quantum Access**\n",
|
||
"- **Quantum Computing as a Service (QCaaS)**: Major cloud providers (IBM Quantum, AWS Braket, Azure Quantum, Google Quantum AI) offer cloud-based access to quantum processors, enabling researchers and businesses to experiment without owning hardware.\n",
|
||
"- **Open-Source Frameworks**: Libraries like **Qiskit (IBM), Cirq (Google), and PennyLane (Xanadu)** are facilitating quantum software development.\n",
|
||
"\n",
|
||
"### **4. Quantum Applications & Use Cases**\n",
|
||
"- **Cryptography & Security**: Post-quantum cryptography (PQC) is being standardized (NIST’s PQC project) to resist quantum attacks.\n",
|
||
"- **Drug Discovery & Materials Science**: Quantum simulations (e.g., for protein folding, battery materials) are being explored by companies like **Quantinuum, Rigetti, and IonQ**.\n",
|
||
"- **Financial Modeling**: Quantum algorithms for portfolio optimization, risk analysis, and Monte Carlo simulations are being tested.\n",
|
||
"- **Logistics & Optimization**: Companies like **D-Wave (quantum annealing)** and **Fujitsu (digital annealers)** are applying quantum computing to supply chain and logistics problems.\n",
|
||
"\n",
|
||
"### **5. Government & Industry Investment**\n",
|
||
"- **National Quantum Initiatives**: The U.S. (National Quantum Initiative Act), EU (Quantum Flagship), China, and others are investing billions in quantum research.\n",
|
||
"- **Private Sector Growth**: Startups (e.g., **PsiQuantum, Alice & Bob, Quantinuum**) and tech giants (Google, IBM, Amazon) are accelerating quantum hardware and software development.\n",
|
||
"\n",
|
||
"### **6. Quantum Error Mitigation & Noise Reduction**\n",
|
||
"- **Error Mitigation Techniques**: Methods like **zero-noise extrapolation (ZNE)** and **probabilistic error cancellation (PEC)** are improving results on noisy quantum devices.\n",
|
||
"- **Dynamic Decoupling & Pulse-Level Control**: Better control over qubits is extending coherence times.\n",
|
||
"\n",
|
||
"### **7. Quantum Internet & Communication**\n",
|
||
"- **Quantum Key Distribution (QKD)**: Secure quantum communication networks (e.g., China’s **Micius satellite**) are being developed.\n",
|
||
"- **Entanglement Distribution**: Research into long-distance quantum networks for secure communication.\n",
|
||
"\n",
|
||
"### **8. Challenges & Limitations**\n",
|
||
"- **Decoherence & Noise**: Current NISQ (Noisy Intermediate-Scale Quantum) devices still struggle with error rates.\n",
|
||
"- **Scalability & Cost**: Building large, fault-tolerant quantum computers remains a major hurdle.\n",
|
||
"- **Talent Shortage**: The field lacks skilled quantum engineers and researchers.\n",
|
||
"\n",
|
||
"### **Future Outlook**\n",
|
||
"- **Fault-Tolerant Quantum Computers**: Expected within the next decade, enabling breakthroughs in cryptography, chemistry, and AI.\n",
|
||
"- **Quantum Advantage**: Demonstrating practical quantum supremacy in real-world applications beyond simulation tasks.\n",
|
||
"\n",
|
||
"Quantum computing is still in its early stages, but rapid progress in hardware, algorithms, and industry adoption suggests a transformative impact in the coming years.\n",
|
||
"\n",
|
||
"## Key Players\n",
|
||
"Quantum computing is a rapidly evolving field with several key players, including tech giants, startups, and research institutions. Here are some of the most prominent companies and organizations involved in quantum computing:\n",
|
||
"\n",
|
||
"### **Major Tech Companies:**\n",
|
||
"1. **IBM** – Leader in quantum hardware (IBM Quantum) and cloud-based quantum computing (IBM Quantum Experience).\n",
|
||
"2. **Google** – Known for its **Sycamore** processor and achieving quantum supremacy (2019).\n",
|
||
"3. **Microsoft** – Focuses on **topological qubits** and quantum software (Azure Quantum).\n",
|
||
"4. **Amazon** – Offers **Amazon Braket** for quantum cloud computing.\n",
|
||
"5. **Alibaba** – Invests in quantum computing research and cloud-based quantum services.\n",
|
||
"6. **Honeywell (now part of Quantinuum)** – Develops trapped-ion quantum computers.\n",
|
||
"7. **Intel** – Works on **spin qubits** and quantum hardware.\n",
|
||
"8. **D-Wave** – Specializes in **quantum annealing** for optimization problems.\n",
|
||
"\n",
|
||
"### **Startups & Emerging Companies:**\n",
|
||
"9. **Rigetti Computing** – Focuses on hybrid quantum-classical computing.\n",
|
||
"10. **IonQ** – Uses trapped-ion technology for quantum computing.\n",
|
||
"11. **Quantinuum (Honeywell + Cambridge Quantum)** – Combines trapped-ion and quantum software.\n",
|
||
"12. **Xanadu** – Develops **photonic quantum computers** and offers cloud access.\n",
|
||
"13. **ColdQuanta** – Works on **atomic quantum computing**.\n",
|
||
"14. **PsiQuantum** – Aims for large-scale **silicon-based photonic quantum computers**.\n",
|
||
"15. **Quantum Circuits Inc. (QCI)** – Focuses on **superconducting qubits**.\n",
|
||
"16. **Alice & Bob** – A French startup working on **photonic quantum computing**.\n",
|
||
"17. **QuEra Computing** – Develops **neutral-atom quantum computers**.\n",
|
||
"\n",
|
||
"### **Government & Research Institutions:**\n",
|
||
"18. **National Quantum Initiative (U.S.)** – Funds quantum research across agencies (DARPA, DOE, NSF).\n",
|
||
"19. **CERN** – Explores quantum computing for particle physics.\n",
|
||
"20. **University of Oxford, MIT, Caltech, ETH Zurich** – Leading academic research in quantum computing.\n",
|
||
"\n",
|
||
"### **Quantum Software & Cloud Platforms:**\n",
|
||
"21. **Qiskit (IBM)** – Open-source quantum computing framework.\n",
|
||
"22. **Cirq (Google)** – Quantum computing framework for NISQ devices.\n",
|
||
"23. **Microsoft Q#** – Quantum programming language.\n",
|
||
"24. **D-Wave Leap** – Cloud access to quantum annealers.\n",
|
||
"\n",
|
||
"### **Quantum Hardware Specialists:**\n",
|
||
"25. **Super.tech** – Focuses on **quantum error correction**.\n",
|
||
"26. **1QBit** – Quantum software for optimization problems.\n",
|
||
"27. **Zapata Computing** – Quantum machine learning applications.\n",
|
||
"\n",
|
||
"This list is not exhaustive, as the field is rapidly expanding with new startups and advancements. Would you like details on any specific company or technology?\n",
|
||
"\n",
|
||
"## Future Implications\n",
|
||
"Quantum computing holds immense potential to revolutionize various fields, but it also presents significant challenges and ethical considerations. Here are some key future implications:\n",
|
||
"\n",
|
||
"### **1. Scientific and Technological Breakthroughs**\n",
|
||
"- **Drug Discovery & Medicine**: Quantum simulations could accelerate the discovery of new drugs by modeling molecular interactions at an atomic level.\n",
|
||
"- **Material Science**: Designing novel materials (e.g., superconductors, room-temperature quantum materials) with superior properties.\n",
|
||
"- **Climate Modeling**: More accurate simulations of climate systems to predict weather patterns and climate change impacts.\n",
|
||
"- **Cryptography**: Breaking classical encryption (RSA, ECC) with Shor’s algorithm, necessitating post-quantum cryptography.\n",
|
||
"\n",
|
||
"### **2. Economic and Industrial Impact**\n",
|
||
"- **Optimization Problems**: Quantum algorithms (e.g., Grover’s, QAOA) could optimize logistics, financial modeling, and supply chains.\n",
|
||
"- **AI & Machine Learning**: Quantum-enhanced AI could process vast datasets exponentially faster, improving pattern recognition and decision-making.\n",
|
||
"- **Energy & Manufacturing**: Quantum sensors and quantum-enhanced optimization could improve energy efficiency and manufacturing processes.\n",
|
||
"\n",
|
||
"### **3. Security & Cybersecurity Challenges**\n",
|
||
"- **Cryptographic Threats**: Quantum computers could break widely used encryption, forcing a shift to quantum-resistant algorithms (e.g., lattice-based, hash-based cryptography).\n",
|
||
"- **Quantum-Secure Communications**: Quantum Key Distribution (QKD) could enable unhackable communication networks.\n",
|
||
"\n",
|
||
"### **4. Ethical and Societal Concerns**\n",
|
||
"- **Job Displacement**: Automation of complex tasks could disrupt labor markets, requiring reskilling programs.\n",
|
||
"- **Military & Geopolitical Risks**: Quantum supremacy could shift power dynamics, with nations racing for dominance in quantum technology.\n",
|
||
"- **Privacy & Surveillance**: Governments and corporations may exploit quantum computing for mass surveillance, raising privacy concerns.\n",
|
||
"\n",
|
||
"### **5. Long-Term Vision: A Quantum Internet**\n",
|
||
"- **Quantum Networks**: Enabling ultra-secure, high-speed communication via quantum entanglement.\n",
|
||
"- **Distributed Quantum Computing**: Cloud-based quantum processing for global access.\n",
|
||
"\n",
|
||
"### **Challenges Ahead**\n",
|
||
"- **Error Correction & Scalability**: Current quantum computers suffer from decoherence and require error correction (e.g., surface codes).\n",
|
||
"- **Cost & Accessibility**: Building and maintaining quantum systems remains expensive, limiting widespread adoption.\n",
|
||
"- **Regulatory Frameworks**: Governments must establish policies to balance innovation with security and ethical use.\n",
|
||
"\n",
|
||
"### **Conclusion**\n",
|
||
"Quantum computing will likely redefine industries, science, and security in the coming decades. While its full potential is still emerging, proactive investment in research, education, and policy will be crucial to harnessing its benefits responsibly. The transition to a quantum-powered future will require collaboration across academia, industry, and governments to ensure equitable and secure progress.\n",
|
||
"\n",
|
||
"## Related Technologies\n",
|
||
"Quantum computing is an interdisciplinary field that intersects with various technologies and scientific domains. Here are some key related technologies and areas:\n",
|
||
"\n",
|
||
"### **1. Core Quantum Technologies**\n",
|
||
"- **Quantum Hardware**:\n",
|
||
" - Superconducting qubits (e.g., IBM, Google)\n",
|
||
" - Trapped ions (e.g., IonQ, Honeywell)\n",
|
||
" - Photonic qubits (e.g., Xanadu, PsiQuantum)\n",
|
||
" - Topological qubits (e.g., Microsoft’s approach)\n",
|
||
" - Neutral atoms (e.g., ColdQuanta, Atom Computing)\n",
|
||
"\n",
|
||
"- **Quantum Algorithms**:\n",
|
||
" - Shor’s algorithm (factoring)\n",
|
||
" - Grover’s algorithm (search)\n",
|
||
" - Quantum machine learning (QML)\n",
|
||
" - Quantum error correction (QEC)\n",
|
||
"\n",
|
||
"- **Quantum Software & Frameworks**:\n",
|
||
" - Qiskit (IBM)\n",
|
||
" - Cirq (Google)\n",
|
||
" - PennyLane (Xanadu)\n",
|
||
" - Braket (Amazon)\n",
|
||
" - Q# (Microsoft)\n",
|
||
"\n",
|
||
"### **2. Enabling Technologies**\n",
|
||
"- **Cryogenics & Cooling Systems** (for superconducting qubits)\n",
|
||
"- **Laser & Optical Systems** (for photonic and trapped-ion qubits)\n",
|
||
"- **High-Precision Control Systems** (for qubit manipulation)\n",
|
||
"- **Classical High-Performance Computing (HPC)** (for hybrid quantum-classical algorithms)\n",
|
||
"\n",
|
||
"### **3. Quantum Communication & Networking**\n",
|
||
"- **Quantum Key Distribution (QKD)** (e.g., BB84 protocol)\n",
|
||
"- **Quantum Repeaters** (for long-distance quantum networks)\n",
|
||
"- **Quantum Internet** (e.g., China’s Micius satellite)\n",
|
||
"\n",
|
||
"### **4. Quantum Sensing & Metrology**\n",
|
||
"- **Quantum Sensors** (e.g., atomic clocks, MRI enhancements)\n",
|
||
"- **Quantum Imaging** (e.g., super-resolution microscopy)\n",
|
||
"\n",
|
||
"### **5. Quantum Simulation & Modeling**\n",
|
||
"- **Quantum Chemistry** (e.g., simulating molecular structures)\n",
|
||
"- **Material Science** (e.g., high-temperature superconductors)\n",
|
||
"- **Drug Discovery** (e.g., protein folding simulations)\n",
|
||
"\n",
|
||
"### **6. Quantum Machine Learning (QML)**\n",
|
||
"- **Quantum Neural Networks (QNNs)**\n",
|
||
"- **Quantum Support Vector Machines (QSVMs)**\n",
|
||
"- **Hybrid Quantum-Classical Models**\n",
|
||
"\n",
|
||
"### **7. Post-Quantum Cryptography (PQC)**\n",
|
||
"- **Lattice-based cryptography**\n",
|
||
"- **Hash-based cryptography**\n",
|
||
"- **Code-based cryptography**\n",
|
||
"- **Multivariate cryptography**\n",
|
||
"\n",
|
||
"### **8. Quantum Error Correction (QEC)**\n",
|
||
"- **Surface codes**\n",
|
||
"- **Topological codes**\n",
|
||
"- **Concatenated codes**\n",
|
||
"\n",
|
||
"### **9. Quantum Control & Calibration**\n",
|
||
"- **Optimal control theory**\n",
|
||
"- **Machine learning for qubit tuning**\n",
|
||
"- **Pulse-level optimization**\n",
|
||
"\n",
|
||
"### **10. Quantum Education & Workforce Development**\n",
|
||
"- **Quantum programming courses**\n",
|
||
"- **Quantum hardware training**\n",
|
||
"- **Open-source quantum research tools**\n",
|
||
"\n",
|
||
"### **11. Quantum Startups & Industry Applications**\n",
|
||
"- **Finance (portfolio optimization, risk analysis)**\n",
|
||
"- **Logistics (route optimization)**\n",
|
||
"- **Energy (material discovery, grid optimization)**\n",
|
||
"- **Defense & Cybersecurity (quantum-resistant encryption)**\n",
|
||
"\n",
|
||
"Would you like a deeper dive into any of these areas?\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"from langchain_core.prompts import ChatPromptTemplate\n",
|
||
"from langchain_mistralai import ChatMistralAI\n",
|
||
"\n",
|
||
"\n",
|
||
"def create_research_assistant():\n",
|
||
" model = ChatMistralAI(model=\"mistral-small-latest\", temperature=0.2)\n",
|
||
" parser = StrOutputParser()\n",
|
||
"\n",
|
||
" async def worker(prompt: str, topic: str) -> str:\n",
|
||
" chain = ChatPromptTemplate.from_template(prompt) | model | parser\n",
|
||
" return await chain.ainvoke({\"topic\": topic})\n",
|
||
"\n",
|
||
" async def run(payload: dict) -> str:\n",
|
||
" topic = payload[\"topic\"]\n",
|
||
" tasks = [\n",
|
||
" worker(\"Provide historical context for {topic}.\", topic),\n",
|
||
" worker(\"Describe current trends for {topic}.\", topic),\n",
|
||
" worker(\"List key players/companies in {topic}.\", topic),\n",
|
||
" worker(\"Discuss future implications of {topic}.\", topic),\n",
|
||
" worker(\"List related technologies to {topic}.\", topic),\n",
|
||
" ]\n",
|
||
" historical, trends, players, future, related = await asyncio.gather(*tasks)\n",
|
||
" return (\n",
|
||
" f\"# Research Report: {topic}\\n\\n\"\n",
|
||
" f\"## Historical Context\\n{historical}\\n\\n\"\n",
|
||
" f\"## Current Trends\\n{trends}\\n\\n\"\n",
|
||
" f\"## Key Players\\n{players}\\n\\n\"\n",
|
||
" f\"## Future Implications\\n{future}\\n\\n\"\n",
|
||
" f\"## Related Technologies\\n{related}\"\n",
|
||
" )\n",
|
||
"\n",
|
||
" return RunnableLambda(run)\n",
|
||
"\n",
|
||
"\n",
|
||
"topic = \"Quantum Computing\"\n",
|
||
"research_report = await create_research_assistant().ainvoke({\"topic\": topic})\n",
|
||
"\n",
|
||
"print(\"🔬 Research Report:\")\n",
|
||
"print(research_report)\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 264,
|
||
"id": "1ae2744a",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"🌍 Translations:\n",
|
||
"\n",
|
||
"French: L'intelligence artificielle transforme le monde.\n",
|
||
"\n",
|
||
"(Note: \"Artificial intelligence\" is translated as \"l'intelligence artificielle\" in French, and the sentence structure remains similar to the original English version for clarity and natural flow.)\n",
|
||
"Time: 0.93s | Tokens: 31\n",
|
||
"\n",
|
||
"Spanish: La inteligencia artificial está transformando el mundo.\n",
|
||
"\n",
|
||
"(Translation: \"La inteligencia artificial está transformando el mundo.\")\n",
|
||
"Time: 0.70s | Tokens: 15\n",
|
||
"\n",
|
||
"German: Die künstliche Intelligenz verändert die Welt.\n",
|
||
"\n",
|
||
"(Alternatively, you could also say: \"Künstliche Intelligenz verändert die Welt.\" Both are correct, but the first version is more natural in German.)\n",
|
||
"Time: 1.21s | Tokens: 28\n",
|
||
"\n",
|
||
"Japanese: 人工知能は世界を変革しています。\n",
|
||
"\n",
|
||
"(Translation: \"Artificial intelligence is transforming the world.\")\n",
|
||
"\n",
|
||
"This is a natural and accurate translation of the original English sentence. The Japanese phrase \"人工知能\" (jinkō chinoō) directly corresponds to \"artificial intelligence,\" and \"世界を変革しています\" (sekai o henkaku shite imasu) conveys the idea of transformation or revolution. The sentence structure follows standard Japanese syntax, with the subject (\"人工知能\") followed by the verb (\"変革しています\").\n",
|
||
"Time: 1.53s | Tokens: 61\n",
|
||
"\n",
|
||
"Arabic: الترجمة إلى العربية:\n",
|
||
"الذكاء الاصطناعي يغير العالم.\n",
|
||
"\n",
|
||
"(الترجمة الحرفية: \"الذكاء الاصطناعي يغير العالم\" - وهي ترجمة دقيقة ومباشرة للنص الإنجليزي.)\n",
|
||
"Time: 0.70s | Tokens: 20\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"from langchain_core.prompts import ChatPromptTemplate\n",
|
||
"from langchain_mistralai import ChatMistralAI\n",
|
||
"\n",
|
||
"\n",
|
||
"def create_parallel_translator():\n",
|
||
" model = ChatMistralAI(model=\"mistral-small-latest\", temperature=0)\n",
|
||
" parser = StrOutputParser()\n",
|
||
" languages = [\"French\", \"Spanish\", \"German\", \"Japanese\", \"Arabic\"]\n",
|
||
"\n",
|
||
" async def _translate(text: str, lang: str) -> dict:\n",
|
||
" prompt = ChatPromptTemplate.from_template(\n",
|
||
" \"Translate this text to {lang}:\\n{text}\",\n",
|
||
" )\n",
|
||
" chain = prompt | model | parser\n",
|
||
" start = time.perf_counter()\n",
|
||
" translation = await chain.ainvoke({\"lang\": lang, \"text\": text})\n",
|
||
" elapsed = time.perf_counter() - start\n",
|
||
" tokens = max(1, len(translation.split()))\n",
|
||
" return {\"translation\": translation, \"time\": elapsed, \"tokens\": tokens}\n",
|
||
"\n",
|
||
" async def run(payload: dict) -> dict:\n",
|
||
" text = payload[\"text\"]\n",
|
||
" tasks = [_translate(text, lang) for lang in languages]\n",
|
||
" outputs = await asyncio.gather(*tasks)\n",
|
||
" return {lang: out for lang, out in zip(languages, outputs, strict=False)}\n",
|
||
"\n",
|
||
" return RunnableLambda(run)\n",
|
||
"\n",
|
||
"\n",
|
||
"text_to_translate = \"Artificial intelligence is transforming the world.\"\n",
|
||
"\n",
|
||
"translations = await create_parallel_translator().ainvoke(\n",
|
||
" {\n",
|
||
" \"text\": text_to_translate,\n",
|
||
" },\n",
|
||
")\n",
|
||
"\n",
|
||
"print(\"🌍 Translations:\")\n",
|
||
"for lang, result in translations.items():\n",
|
||
" print(f\"\\n{lang}: {result['translation']}\")\n",
|
||
" print(f\"Time: {result['time']:.2f}s | Tokens: {result['tokens']}\")\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 265,
|
||
"id": "e0225d8b",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"📊 Review Analysis Summary:\n",
|
||
"Overall Sentiment: Here’s a concise sentiment analysis of the reviews:\n",
|
||
"\n",
|
||
"- **Positive**:\n",
|
||
" - \"Great product, but delivery was slow\" (Mixed: product praised, but delivery criticized)\n",
|
||
" - \"Amazing! Better than I expected. Will definitely buy again\" (Strongly positive)\n",
|
||
" - \"Good features but confusing user interface\" (Mixed: features liked, but UI criticized)\n",
|
||
"\n",
|
||
"- **Negative**:\n",
|
||
" - \"Terrible quality, broke after one week. Going back to Brand X\" (Strongly negative)\n",
|
||
" - \"Price is too high compared to competitors\" (Negative)\n",
|
||
"Top Issues: Here are the key issues identified from the reviews:\n",
|
||
"\n",
|
||
"- **Delivery delays** (slow delivery)\n",
|
||
"- **Poor product quality** (broke after one week)\n",
|
||
"- **High price** (compared to competitors)\n",
|
||
"- **Confusing user interface**\n",
|
||
"- **Customer preference for competitors** (going back to Brand X)\n",
|
||
"\n",
|
||
"Positive feedback:\n",
|
||
"- **High satisfaction** (better than expected, will buy again)\n",
|
||
"Feature Requests: Here are the concise feature requests and feedback points from the reviews:\n",
|
||
"\n",
|
||
"- **Delivery Speed**: Improve delivery time (mentioned as \"slow\").\n",
|
||
"- **Product Quality**: Enhance durability (broke after one week).\n",
|
||
"- **User Experience**: Simplify the user interface (confusing UI).\n",
|
||
"- **Pricing**: Adjust pricing to be more competitive with competitors.\n",
|
||
"- **Customer Retention**: Maintain high-quality products to retain customers (switching to Brand X).\n",
|
||
"- **Positive Feedback**: Strong product satisfaction (better than expected, will buy again).\n",
|
||
"Competitor Mentions: Here are the concise competitor mentions from the reviews:\n",
|
||
"\n",
|
||
"- **Brand X**: \"Going back to Brand X\" (negative experience with current product)\n",
|
||
"- **Competitors**: \"Price is too high compared to competitors\" (price comparison)\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"from langchain_core.prompts import ChatPromptTemplate\n",
|
||
"from langchain_mistralai import ChatMistralAI\n",
|
||
"\n",
|
||
"\n",
|
||
"def create_review_analyzer():\n",
|
||
" model = ChatMistralAI(model=\"mistral-small-latest\", temperature=0.1)\n",
|
||
" parser = StrOutputParser()\n",
|
||
"\n",
|
||
" async def analyze_dimension(reviews: list[str], objective: str) -> str:\n",
|
||
" prompt = ChatPromptTemplate.from_template(\n",
|
||
" \"Analyze these reviews for {objective}. Return concise bullets.\\nReviews:\\n{reviews}\",\n",
|
||
" )\n",
|
||
" chain = prompt | model | parser\n",
|
||
" return await chain.ainvoke(\n",
|
||
" {\"objective\": objective, \"reviews\": \"\\n\".join(reviews)},\n",
|
||
" )\n",
|
||
"\n",
|
||
" async def run(payload: dict) -> dict:\n",
|
||
" reviews = payload[\"reviews\"]\n",
|
||
" tasks = [\n",
|
||
" analyze_dimension(reviews, \"sentiment\"),\n",
|
||
" analyze_dimension(reviews, \"feature requests\"),\n",
|
||
" analyze_dimension(reviews, \"issues\"),\n",
|
||
" analyze_dimension(reviews, \"competitor mentions\"),\n",
|
||
" analyze_dimension(reviews, \"purchase intent\"),\n",
|
||
" ]\n",
|
||
" sentiment, features, issues, competitors, intent = await asyncio.gather(*tasks)\n",
|
||
" return {\n",
|
||
" \"overall_sentiment\": sentiment,\n",
|
||
" \"feature_requests\": features,\n",
|
||
" \"top_issues\": issues,\n",
|
||
" \"competitors\": competitors,\n",
|
||
" \"purchase_intent\": intent,\n",
|
||
" }\n",
|
||
"\n",
|
||
" return RunnableLambda(run)\n",
|
||
"\n",
|
||
"\n",
|
||
"reviews = [\n",
|
||
" \"Great product, but delivery was slow\",\n",
|
||
" \"Terrible quality, broke after one week. Going back to Brand X\",\n",
|
||
" \"Amazing! Better than I expected. Will definitely buy again\",\n",
|
||
" \"Price is too high compared to competitors\",\n",
|
||
" \"Good features but confusing user interface\",\n",
|
||
"]\n",
|
||
"\n",
|
||
"analysis = await create_review_analyzer().ainvoke({\"reviews\": reviews})\n",
|
||
"\n",
|
||
"print(\"📊 Review Analysis Summary:\")\n",
|
||
"print(f\"Overall Sentiment: {analysis['overall_sentiment']}\")\n",
|
||
"print(f\"Top Issues: {analysis['top_issues']}\")\n",
|
||
"print(f\"Feature Requests: {analysis['feature_requests']}\")\n",
|
||
"print(f\"Competitor Mentions: {analysis['competitors']}\")\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "d481df27",
|
||
"metadata": {},
|
||
"source": [
|
||
"## 🎯 Exercise Set 8: Orchestrator-Worker Patterns\n",
|
||
"\n",
|
||
"### What is the Orchestrator-Worker Pattern? 🤔\n",
|
||
"\n",
|
||
"This pattern divides a complex task among specialized workers:\n",
|
||
"- **Orchestrator**: The \"manager\" that coordinates workflow, decides task order, and combines results\n",
|
||
"- **Workers**: Specialized \"experts\" that each handle one specific type of task\n",
|
||
"\n",
|
||
"### Key Concepts:\n",
|
||
"1. **Separation of Concerns**: Each worker has ONE job and does it well\n",
|
||
"2. **Sequential vs Parallel**: Orchestrator decides which tasks can run in parallel vs must be sequential\n",
|
||
"3. **State Management**: Orchestrator maintains context and shares relevant info with workers\n",
|
||
"4. **Result Integration**: Orchestrator combines worker outputs into final result\n",
|
||
"\n",
|
||
"### Architecture Pattern:\n",
|
||
"```\n",
|
||
" [Orchestrator]\n",
|
||
" |\n",
|
||
" +-----------------+-----------------+\n",
|
||
" | | |\n",
|
||
" [Worker A] [Worker B] [Worker C]\n",
|
||
" (Specialist) (Specialist) (Specialist)\n",
|
||
" | | |\n",
|
||
" +-----------------+-----------------+\n",
|
||
" |\n",
|
||
" [Final Result]\n",
|
||
"```\n",
|
||
"\n",
|
||
"Let's build some orchestrator systems! Each exercise has detailed steps below."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "8ef5e407",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"🏗️ Development Plan:\n",
|
||
"Development orchestration completed with 5 workers: requirements, architecture, code, tests, documentation.\n",
|
||
"\n",
|
||
"📋 Requirements:\n",
|
||
"As a requirements analyst, I'll break down the requirements for adding user authentication with OAuth2 and JWT tokens into functional and non-functional requirements, as well as user stories and acceptance criteria.\n",
|
||
"\n",
|
||
"**Functional Requirements:**\n",
|
||
"\n",
|
||
"1. **OAuth2 Integration:**\n",
|
||
" - FR1: The system shall support OAuth2 protocol for user authentication.\n",
|
||
" - FR2: The system shall integrate with at least one OAuth2 provider (e.g., Google, Facebook, GitHub, or a custom OAuth2 server).\n",
|
||
" - FR3: The system shall allow users to register and log in using their OAuth2 provider credentials.\n",
|
||
"\n",
|
||
"2. **JWT Token Management:**\n",
|
||
" - FR4: The system shall generate JWT tokens upon successful user authentication.\n",
|
||
" - FR5: The system shall validate and verify JWT tokens for protected resources.\n",
|
||
" - FR6: The system shall handle token expiration and refresh token functionality.\n",
|
||
" - FR7: The system shall securely store and manage user sessions using JWT tokens.\n",
|
||
"\n",
|
||
"3. **User Management:**\n",
|
||
" - FR8: The system shall create and maintain user profiles upon OAuth2 registration.\n",
|
||
" - FR9: The system shall allow users to update their profile information.\n",
|
||
" - FR10: The system shall provide functionality for users to manage their connected OAuth2 accounts.\n",
|
||
"\n",
|
||
"**Non-Functional Requirements:**\n",
|
||
"\n",
|
||
"1. **Security:**\n",
|
||
" - NFR1: The system shall ensure secure transmission of user credentials and tokens using HTTPS.\n",
|
||
" - NFR2: The system shall implement proper token validation and verification mechanisms to prevent unauthorized access.\n",
|
||
" - NFR3: The system shall follow OAuth2 best practices and security guidelines to protect user data.\n",
|
||
"\n",
|
||
"2. **Performance:**\n",
|
||
" - NFR4: The system shall maintain a response time of under 2 seconds for authentication requests under normal load conditions.\n",
|
||
" - NFR5: The system shall handle a peak load of at least 1000 concurrent users without significant performance degradation.\n",
|
||
"\n",
|
||
"3. **Usability:**\n",
|
||
" - NFR6: The system shall provide a user-friendly interface for users to register, log in, and manage their OAuth2 accounts.\n",
|
||
" - NFR7: The system shall display clear and helpful error messages during the authentication process.\n",
|
||
"\n",
|
||
"4. **Compatibility:**\n",
|
||
" - NFR8: The system shall be compatible with popular browsers (e.g., Chrome, Firefox, Safari, Edge) and their latest versions.\n",
|
||
" - NFR9: The system shall be compatible with the OAuth2 provider's API and any updates to their API within a reasonable timeframe.\n",
|
||
"\n",
|
||
"**User Stories and Acceptance Criteria:**\n",
|
||
"\n",
|
||
"1. **User Story 1:** As a new user, I want to register and log in using my OAuth2 provider account so that I can access the system quickly and securely.\n",
|
||
" - Acceptance Criteria:\n",
|
||
" - AC1.1: The system displays a \"Sign in with [OAuth2 Provider]\" button on the login page.\n",
|
||
" - AC1.2: Upon clicking the button, the system redirects the user to the OAuth2 provider's login page.\n",
|
||
" - AC1.3: After successful authentication, the system redirects the user back to the application and creates a new user profile if it's the user's first time logging in.\n",
|
||
" - AC1.4: The system generates a JWT token and stores it securely for future requests.\n",
|
||
"\n",
|
||
"2. **User Story 2:** As a registered user, I want to log in using my OAuth2 provider account so that I can access the system securely.\n",
|
||
" - Acceptance Criteria:\n",
|
||
" - AC2.1: The system displays a \"Sign in with [OAuth2 Provider]\" button on the login page.\n",
|
||
" - AC2.2: Upon clicking the button, the system redirects the user to the OAuth2 provider's login page.\n",
|
||
" - AC2.3: After successful authentication, the system redirects the user back to the application and validates the user's JWT token.\n",
|
||
" - AC2.4: The system grants the user access to protected resources based on their JWT token.\n",
|
||
"\n",
|
||
"3. **User Story 3:** As a user, I want to manage my connected OAuth2 accounts so that I can maintain control over my authentication methods.\n",
|
||
" - Acceptance Criteria:\n",
|
||
" - AC3.1: The system provides a user account management page where users can view and manage their connected OAuth2 accounts.\n",
|
||
" - AC3.2: Users can disconnect their OAuth2 accounts from the account management page.\n",
|
||
" - AC3.3: After disconnecting an OAuth2 account, the system invalidates the corresponding JWT token and requires the user to re-authenticate using a different method.\n",
|
||
"\n",
|
||
"By analyzing and documenting these requirements, you can ensure that the development team has a clear understanding of the project's scope and objectives. Regularly review and update these requirements as needed to accommodate changes in project scope, stakeholder needs, or technological constraints.\n",
|
||
"\n",
|
||
"🏛️ Architecture:\n",
|
||
"Based on the provided requirements, here's a high-level architecture design for adding user authentication with OAuth2 and JWT tokens:\n",
|
||
"\n",
|
||
"**Architecture Overview:**\n",
|
||
"\n",
|
||
"1. **Presentation Layer (Client-Side):**\n",
|
||
" - User Interface (UI) for login, registration, and account management.\n",
|
||
" - OAuth2 provider buttons (e.g., \"Sign in with Google,\" \"Sign in with Facebook\").\n",
|
||
" - JWT token storage (e.g., in memory, local storage, or cookies).\n",
|
||
" - API calls to the application server for authentication and user management.\n",
|
||
"\n",
|
||
"2. **Application Layer (Server-Side):**\n",
|
||
" - **Authentication Service:**\n",
|
||
" - OAuth2 integration with external providers (e.g., Google, Facebook, GitHub).\n",
|
||
" - JWT token generation, validation, and verification.\n",
|
||
" - Token expiration and refresh token management.\n",
|
||
" - **User Service:**\n",
|
||
" - User profile creation, management, and retrieval.\n",
|
||
" - OAuth2 account connection and disconnection.\n",
|
||
" - **API Gateway:**\n",
|
||
" - Routes incoming requests to the appropriate services.\n",
|
||
" - JWT token validation for protected resources.\n",
|
||
" - Rate limiting and other security measures.\n",
|
||
"\n",
|
||
"3. **Data Layer:**\n",
|
||
" - **User Database:**\n",
|
||
" - Stores user profiles, OAuth2 account connections, and other user-related data.\n",
|
||
" - **Token Database (optional):**\n",
|
||
" - Stores JWT tokens, token expiration times, and refresh tokens for token management and revocation.\n",
|
||
"\n",
|
||
"**Component Diagram:**\n",
|
||
"\n",
|
||
"```\n",
|
||
"+----------------+ +----------------+ +----------------+\n",
|
||
"| | | | | |\n",
|
||
"| Presentation | | Application | | Data |\n",
|
||
"| Layer | | Layer | | Layer |\n",
|
||
"| | | | | |\n",
|
||
"+----------------+ +----------------+ +----------------+\n",
|
||
"| UI | | Auth Service | | User DB |\n",
|
||
"| API Calls |<---->| User Service |<---->| Token DB |\n",
|
||
"| Token Storage | | API Gateway | | |\n",
|
||
"+----------------+ +----------------+ +----------------+\n",
|
||
"```\n",
|
||
"\n",
|
||
"**Sequence Diagram for OAuth2 Login:**\n",
|
||
"\n",
|
||
"```\n",
|
||
"User UI Auth Service OAuth2 Provider\n",
|
||
" | | | |\n",
|
||
" |--1. Click \"Sign in\"--->| | |\n",
|
||
" | |--2. Redirect to Auth Service--->| |\n",
|
||
" | | |--3. Redirect to OAuth2 Provider--->|\n",
|
||
" | | | |--4. User Logs in--->|\n",
|
||
" | | | |<--5. Success--|\n",
|
||
" | | |<--6. Redirect with Auth Code--| |\n",
|
||
" | |<--7. Redirect with Auth Code--| |\n",
|
||
" |<--8. Redirect to UI--| | |\n",
|
||
" |--9. Send Auth Code to Auth Service-->| |\n",
|
||
" | |--10. Exchange Auth Code for Tokens--->| |\n",
|
||
" | | |--11. Get User Info--->| |\n",
|
||
" | | |<--12. User Info--| |\n",
|
||
" | |<--13. JWT Token--| |\n",
|
||
" |<--14. Store Token--| | |\n",
|
||
"```\n",
|
||
"\n",
|
||
"**Sequence Diagram for Accessing Protected Resources:**\n",
|
||
"\n",
|
||
"```\n",
|
||
"User UI API Gateway Auth Service\n",
|
||
" | | | |\n",
|
||
" |--1. Send Request with JWT Token-->| | |\n",
|
||
" | |--2. Validate Token--->| |\n",
|
||
" | | |--3. Check Token Validity--->| |\n",
|
||
" | | |<--4. Token Valid--| |\n",
|
||
" | |<--5. Request Granted--| |\n",
|
||
" |<--6. Access Protected Resource--| |\n",
|
||
"```\n",
|
||
"\n",
|
||
"**Technology Stack Recommendations:**\n",
|
||
"\n",
|
||
"1. **Presentation Layer:**\n",
|
||
" - Frontend framework (e.g., React, Angular, or Vue.js)\n",
|
||
" - OAuth2 provider SDKs or libraries (e.g., Google Sign-In, Facebook Login, GitHub OAuth)\n",
|
||
"\n",
|
||
"2. **Application Layer:**\n",
|
||
" - Backend framework (e.g., Node.js with Express, Java Spring Boot, or Python Django)\n",
|
||
" - OAuth2 libraries (e.g., Passport.js, Spring Security OAuth, or Authlib)\n",
|
||
" - JWT libraries (e.g., jsonwebtoken, jjwt, or PyJWT)\n",
|
||
" - API Gateway (e.g., Kong, Apigee, or AWS API Gateway)\n",
|
||
"\n",
|
||
"3. **Data Layer:**\n",
|
||
" - Database (e.g., PostgreSQL, MySQL, or MongoDB)\n",
|
||
" - Redis (optional, for token storage and management)\n",
|
||
"\n",
|
||
"**Security Considerations:**\n",
|
||
"\n",
|
||
"1. Implement HTTPS for secure data transmission.\n",
|
||
"2. Follow OAuth2 best practices and security guidelines.\n",
|
||
"3. Store sensitive data securely (e.g., use environment variables, secret management tools, or encrypted storage).\n",
|
||
"4. Implement proper token validation and verification mechanisms.\n",
|
||
"5. Regularly update and patch dependencies to address security vulnerabilities.\n",
|
||
"\n",
|
||
"**Performance Considerations:**\n",
|
||
"\n",
|
||
"1. Implement caching mechanisms to improve response times.\n",
|
||
"2. Optimize database queries and indexes for better performance.\n",
|
||
"3. Consider using a load balancer to distribute traffic evenly across servers.\n",
|
||
"4. Monitor system performance and make adjustments as needed.\n",
|
||
"\n",
|
||
"By following this architecture design and considering the provided requirements, you can create a secure and scalable user authentication system using OAuth2 and JWT tokens.\n",
|
||
"\n",
|
||
"💻 Code:\n",
|
||
"**Implementation Plan:**\n",
|
||
"\n",
|
||
"**Phase 1: Planning and Setup (Week 1-2)**\n",
|
||
"\n",
|
||
"1. **Project Setup:**\n",
|
||
" - Initialize the project repository.\n",
|
||
" - Set up the development environment.\n",
|
||
" - Choose and set up the technology stack based on the recommendations.\n",
|
||
"\n",
|
||
"2. **Design and Planning:**\n",
|
||
" - Finalize the archit...\n",
|
||
"\n",
|
||
"✅ Tests:\n",
|
||
"**Test Strategy:**\n",
|
||
"\n",
|
||
"**1. Test Objectives:**\n",
|
||
"- Ensure the application meets the functional and non-functional requirements.\n",
|
||
"- Validate the integration between OAuth2 providers, the application, and the...\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"from langchain_core.prompts import ChatPromptTemplate\n",
|
||
"from langchain_mistralai import ChatMistralAI\n",
|
||
"\n",
|
||
"\n",
|
||
"class DevelopmentOrchestrator:\n",
|
||
" \"\"\"Orchestrate software development tasks.\"\"\"\n",
|
||
"\n",
|
||
" def __init__(self):\n",
|
||
" self.model = ChatMistralAI(model=\"mistral-small-latest\", temperature=0.4)\n",
|
||
" self.workers = self._initialize_workers()\n",
|
||
"\n",
|
||
" def _initialize_workers(self):\n",
|
||
" parser = StrOutputParser()\n",
|
||
"\n",
|
||
" requirements_prompt = ChatPromptTemplate.from_messages(\n",
|
||
" [\n",
|
||
" (\"system\", \"You are an expert requirements analyst.\"),\n",
|
||
" (\"user\", \"Analyze requirements for: {feature_request}\"),\n",
|
||
" ],\n",
|
||
" )\n",
|
||
" architect_prompt = ChatPromptTemplate.from_messages(\n",
|
||
" [\n",
|
||
" (\"system\", \"You are a software architect.\"),\n",
|
||
" (\n",
|
||
" \"user\",\n",
|
||
" \"Design architecture for: {feature_request}\\nRequirements:\\n{requirements}\",\n",
|
||
" ),\n",
|
||
" ],\n",
|
||
" )\n",
|
||
" developer_prompt = ChatPromptTemplate.from_messages(\n",
|
||
" [\n",
|
||
" (\"system\", \"You are a senior software engineer.\"),\n",
|
||
" (\n",
|
||
" \"user\",\n",
|
||
" \"Generate implementation plan and sample code for: {feature_request}\\nRequirements:\\n{requirements}\\nArchitecture:\\n{architecture}\",\n",
|
||
" ),\n",
|
||
" ],\n",
|
||
" )\n",
|
||
" tester_prompt = ChatPromptTemplate.from_messages(\n",
|
||
" [\n",
|
||
" (\"system\", \"You are a software test engineer.\"),\n",
|
||
" (\"user\", \"Create test strategy and test cases for code:\\n{code}\"),\n",
|
||
" ],\n",
|
||
" )\n",
|
||
" documenter_prompt = ChatPromptTemplate.from_messages(\n",
|
||
" [\n",
|
||
" (\"system\", \"You are a technical writer.\"),\n",
|
||
" (\n",
|
||
" \"user\",\n",
|
||
" \"Write concise documentation for this feature:\\nFeature: {feature_request}\\nRequirements:\\n{requirements}\\nArchitecture:\\n{architecture}\\nCode:\\n{code}\\nTests:\\n{tests}\",\n",
|
||
" ),\n",
|
||
" ],\n",
|
||
" )\n",
|
||
"\n",
|
||
" return {\n",
|
||
" \"requirements_analyst\": requirements_prompt | self.model | parser,\n",
|
||
" \"architect\": architect_prompt | self.model | parser,\n",
|
||
" \"developer\": developer_prompt | self.model | parser,\n",
|
||
" \"tester\": tester_prompt | self.model | parser,\n",
|
||
" \"documenter\": documenter_prompt | self.model | parser,\n",
|
||
" }\n",
|
||
"\n",
|
||
" async def develop_feature(self, feature_request: str):\n",
|
||
" results = {}\n",
|
||
"\n",
|
||
" results[\"requirements\"] = await self.workers[\"requirements_analyst\"].ainvoke(\n",
|
||
" {\"feature_request\": feature_request},\n",
|
||
" )\n",
|
||
" results[\"architecture\"] = await self.workers[\"architect\"].ainvoke(\n",
|
||
" {\n",
|
||
" \"feature_request\": feature_request,\n",
|
||
" \"requirements\": results[\"requirements\"],\n",
|
||
" },\n",
|
||
" )\n",
|
||
" results[\"code\"] = await self.workers[\"developer\"].ainvoke(\n",
|
||
" {\n",
|
||
" \"feature_request\": feature_request,\n",
|
||
" \"requirements\": results[\"requirements\"],\n",
|
||
" \"architecture\": results[\"architecture\"],\n",
|
||
" },\n",
|
||
" )\n",
|
||
" results[\"tests\"] = await self.workers[\"tester\"].ainvoke(\n",
|
||
" {\"code\": results[\"code\"]},\n",
|
||
" )\n",
|
||
" results[\"documentation\"] = await self.workers[\"documenter\"].ainvoke(\n",
|
||
" {\n",
|
||
" \"feature_request\": feature_request,\n",
|
||
" \"requirements\": results[\"requirements\"],\n",
|
||
" \"architecture\": results[\"architecture\"],\n",
|
||
" \"code\": results[\"code\"],\n",
|
||
" \"tests\": results[\"tests\"],\n",
|
||
" },\n",
|
||
" )\n",
|
||
" results[\"plan\"] = (\n",
|
||
" \"Development orchestration completed with 5 workers: \"\n",
|
||
" \"requirements, architecture, code, tests, documentation.\"\n",
|
||
" )\n",
|
||
"\n",
|
||
" return results\n",
|
||
"\n",
|
||
"\n",
|
||
"orchestrator = DevelopmentOrchestrator()\n",
|
||
"feature_request = \"Add user authentication with OAuth2 and JWT tokens\"\n",
|
||
"\n",
|
||
"result = await orchestrator.develop_feature(feature_request)\n",
|
||
"\n",
|
||
"print(\"🏗️ Development Plan:\")\n",
|
||
"print(result[\"plan\"])\n",
|
||
"print(\"\\n📋 Requirements:\")\n",
|
||
"print(result[\"requirements\"])\n",
|
||
"print(\"\\n🏛️ Architecture:\")\n",
|
||
"print(result[\"architecture\"])\n",
|
||
"print(\"\\n💻 Code:\")\n",
|
||
"print(result[\"code\"][:300] + \"...\")\n",
|
||
"print(\"\\n✅ Tests:\")\n",
|
||
"print(result[\"tests\"][:200] + \"...\")\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "e5925110",
|
||
"metadata": {},
|
||
"source": [
|
||
"---\n",
|
||
"\n",
|
||
"### 📝 Exercise 8.2: Content Creation Orchestrator\n",
|
||
"\n",
|
||
"**Goal**: Build a system that creates high-quality articles through multiple specialized workers.\n",
|
||
"\n",
|
||
"**Key Learning**: How to combine parallel and sequential stages in orchestration.\n",
|
||
"\n",
|
||
"**Workflow Strategy**:\n",
|
||
"- **Phase 1** (Parallel): Multiple researchers gather different types of information\n",
|
||
"- **Phase 2** (Sequential): Writer creates draft from research\n",
|
||
"- **Phase 3** (Parallel): Editor, SEO specialist, and fact-checker review simultaneously\n",
|
||
"- **Phase 4** (Sequential): Integrate all feedback into final article"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "285e6583",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"📰 Article Creation Results:\n",
|
||
"Title: The Future of Remote Work: Practical Insights\n",
|
||
"Word Count: 2588\n",
|
||
"SEO Score: 100/100\n",
|
||
"Fact-check Status: reviewed\n",
|
||
"\n",
|
||
"First paragraph:\n",
|
||
"**Title: Embracing the Future of Remote Work: A Strategic Guide for Business Professionals**\n",
|
||
"\n",
|
||
"**Introduction**\n",
|
||
"\n",
|
||
"In recent years, the global business landscape has witnessed a significant shift, with r...\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"from langchain_core.prompts import ChatPromptTemplate\n",
|
||
"from langchain_mistralai import ChatMistralAI\n",
|
||
"\n",
|
||
"\n",
|
||
"class ContentOrchestrator:\n",
|
||
" \"\"\"Orchestrate content creation with specialized workers.\"\"\"\n",
|
||
"\n",
|
||
" def __init__(self):\n",
|
||
" self.model = ChatMistralAI(model=\"mistral-small-latest\", temperature=0.6)\n",
|
||
" parser = StrOutputParser()\n",
|
||
"\n",
|
||
" self.workers = {\n",
|
||
" \"researcher\": (\n",
|
||
" ChatPromptTemplate.from_messages(\n",
|
||
" [\n",
|
||
" (\"system\", \"You are a research specialist.\"),\n",
|
||
" (\"user\", \"Research topic: {topic} for audience: {audience}\"),\n",
|
||
" ],\n",
|
||
" )\n",
|
||
" | self.model\n",
|
||
" | parser\n",
|
||
" ),\n",
|
||
" \"trend_analyst\": (\n",
|
||
" ChatPromptTemplate.from_messages(\n",
|
||
" [\n",
|
||
" (\"system\", \"You are a trend analyst.\"),\n",
|
||
" (\"user\", \"Analyze trends for topic: {topic}\"),\n",
|
||
" ],\n",
|
||
" )\n",
|
||
" | self.model\n",
|
||
" | parser\n",
|
||
" ),\n",
|
||
" \"writer\": (\n",
|
||
" ChatPromptTemplate.from_messages(\n",
|
||
" [\n",
|
||
" (\"system\", \"You are a professional writer.\"),\n",
|
||
" (\n",
|
||
" \"user\",\n",
|
||
" \"Write an article draft on {topic} for {audience}. Use this research:\\n{research}\",\n",
|
||
" ),\n",
|
||
" ],\n",
|
||
" )\n",
|
||
" | self.model\n",
|
||
" | parser\n",
|
||
" ),\n",
|
||
" \"editor\": (\n",
|
||
" ChatPromptTemplate.from_messages(\n",
|
||
" [\n",
|
||
" (\"system\", \"You are an editor improving clarity and flow.\"),\n",
|
||
" (\"user\", \"Edit this draft:\\n{content}\"),\n",
|
||
" ],\n",
|
||
" )\n",
|
||
" | self.model\n",
|
||
" | parser\n",
|
||
" ),\n",
|
||
" \"seo\": (\n",
|
||
" ChatPromptTemplate.from_messages(\n",
|
||
" [\n",
|
||
" (\"system\", \"You are an SEO specialist.\"),\n",
|
||
" (\n",
|
||
" \"user\",\n",
|
||
" \"Optimize this article for SEO on topic '{topic}':\\n{content}\",\n",
|
||
" ),\n",
|
||
" ],\n",
|
||
" )\n",
|
||
" | self.model\n",
|
||
" | parser\n",
|
||
" ),\n",
|
||
" \"fact_checker\": (\n",
|
||
" ChatPromptTemplate.from_messages(\n",
|
||
" [\n",
|
||
" (\"system\", \"You are a fact-checker.\"),\n",
|
||
" (\n",
|
||
" \"user\",\n",
|
||
" \"Review factual claims and flag uncertainty:\\n{content}\",\n",
|
||
" ),\n",
|
||
" ],\n",
|
||
" )\n",
|
||
" | self.model\n",
|
||
" | parser\n",
|
||
" ),\n",
|
||
" }\n",
|
||
"\n",
|
||
" async def create_article(self, topic: str, target_audience: str):\n",
|
||
" results = {}\n",
|
||
"\n",
|
||
" research_tasks = [\n",
|
||
" self.workers[\"researcher\"].ainvoke(\n",
|
||
" {\"topic\": topic, \"audience\": target_audience},\n",
|
||
" ),\n",
|
||
" self.workers[\"trend_analyst\"].ainvoke({\"topic\": topic}),\n",
|
||
" ]\n",
|
||
" research_general, research_trends = await asyncio.gather(*research_tasks)\n",
|
||
" combined_research = (\n",
|
||
" f\"General research:\\n{research_general}\\n\\nTrends:\\n{research_trends}\"\n",
|
||
" )\n",
|
||
" results[\"research_sources\"] = [\"general_research\", \"trend_analysis\"]\n",
|
||
"\n",
|
||
" draft = await self.workers[\"writer\"].ainvoke(\n",
|
||
" {\n",
|
||
" \"topic\": topic,\n",
|
||
" \"audience\": target_audience,\n",
|
||
" \"research\": combined_research,\n",
|
||
" },\n",
|
||
" )\n",
|
||
"\n",
|
||
" review_tasks = [\n",
|
||
" self.workers[\"editor\"].ainvoke({\"content\": draft}),\n",
|
||
" self.workers[\"seo\"].ainvoke({\"content\": draft, \"topic\": topic}),\n",
|
||
" self.workers[\"fact_checker\"].ainvoke({\"content\": draft}),\n",
|
||
" ]\n",
|
||
" edited, seo_feedback, fact_feedback = await asyncio.gather(*review_tasks)\n",
|
||
"\n",
|
||
" final_content = f\"{edited}\\n\\n---\\nSEO Notes:\\n{seo_feedback}\\n\\n---\\nFact-check Notes:\\n{fact_feedback}\"\n",
|
||
"\n",
|
||
" results.update(\n",
|
||
" {\n",
|
||
" \"title\": f\"{topic}: Practical Insights\",\n",
|
||
" \"content\": final_content,\n",
|
||
" \"word_count\": len(final_content.split()),\n",
|
||
" \"seo_score\": min(100, 70 + len(seo_feedback.split()) // 8),\n",
|
||
" \"fact_check_status\": \"reviewed\",\n",
|
||
" \"keywords\": [topic.split()[0].lower(), \"future\", \"strategy\"],\n",
|
||
" },\n",
|
||
" )\n",
|
||
" return results\n",
|
||
"\n",
|
||
"\n",
|
||
"content_orch = ContentOrchestrator()\n",
|
||
"result = await content_orch.create_article(\n",
|
||
" topic=\"The Future of Remote Work\",\n",
|
||
" target_audience=\"business professionals\",\n",
|
||
")\n",
|
||
"\n",
|
||
"print(\"📰 Article Creation Results:\")\n",
|
||
"print(f\"Title: {result['title']}\")\n",
|
||
"print(f\"Word Count: {result['word_count']}\")\n",
|
||
"print(f\"SEO Score: {result['seo_score']}/100\")\n",
|
||
"print(f\"Fact-check Status: {result['fact_check_status']}\")\n",
|
||
"print(f\"\\nFirst paragraph:\\n{result['content'][:200]}...\")\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "c512fccb",
|
||
"metadata": {},
|
||
"source": [
|
||
"---\n",
|
||
"\n",
|
||
"### 🎫 Exercise 8.3: Customer Support Orchestrator\n",
|
||
"\n",
|
||
"**Goal**: Build an intelligent support system that analyzes tickets and routes to appropriate handlers.\n",
|
||
"\n",
|
||
"**Key Learning**: Decision-making in orchestration - when to use which workers based on ticket analysis.\n",
|
||
"\n",
|
||
"**Decision Tree**:\n",
|
||
"```\n",
|
||
"Ticket Received\n",
|
||
" → Triage (categorize + assess complexity)\n",
|
||
" → IF simple → Use KB Worker → Generate Response\n",
|
||
" → IF technical → Use Technical Worker + KB Worker (parallel) → Generate Response\n",
|
||
" → IF complex → Use All Workers + Escalation Check → Generate Response\n",
|
||
" → Create Follow-up Plan\n",
|
||
"```"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "55175f1a",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"🎫 Support Ticket Resolution:\n",
|
||
"Category: technical\n",
|
||
"Root Cause: Based on the information provided and the analysis, here's the decision on whether to escalate to a human:\n",
|
||
"\n",
|
||
"**Decision**: Do not escalate to a human yet. The issue can be resolved through further automated troubleshooting steps.\n",
|
||
"\n",
|
||
"**Reasoning**:\n",
|
||
"\n",
|
||
"1. The user has not yet tried the suggested troubleshooting steps (double-checking the API key, regenerating the key, verifying account permissions).\n",
|
||
"2. The issue is categorized under \"Authentication\" with medium urgency and low to medium complexity, which means it's likely resolvable through standard procedures.\n",
|
||
"3. The provided knowledge base (KB) and technical solutions offer clear, actionable steps for the user to try.\n",
|
||
"\n",
|
||
"**Next Steps**:\n",
|
||
"\n",
|
||
"1. Ask the user to follow the troubleshooting steps mentioned in the KB and technical solutions.\n",
|
||
"2. Request the user to provide more details about their request (like the code they're using to make the request, the exact error message they're receiving, and the API endpoint they're trying to access) if the issue persists after trying the initial steps.\n",
|
||
"3. If the user provides more details and the issue still can't be resolved through automated means, then escalate to a human for further investigation.\n",
|
||
"\n",
|
||
"By following this approach, you can ensure that the user is given every opportunity to resolve the issue through standard procedures before involving a human agent. This helps to optimize support resources and improve efficiency.\n",
|
||
"Solution: KB:\n",
|
||
"I'm sorry to hear you're having trouble with our API. A 401 Unauthorized error typically indicates that the API key you're using is either incorrect, expired, or doesn't have the necessary permissions. Here are some steps to help you troubleshoot:\n",
|
||
"\n",
|
||
"1. **Check your API key**:\n",
|
||
" - Ensure that you're using the correct API key. You can find it in your dashboard.\n",
|
||
" - Make sure there are no typos or extra spaces when you copy and paste the key.\n",
|
||
"\n",
|
||
"2. **Verify the key's permissions**:\n",
|
||
" - Ensure that the API key you're using has the necessary permissions to access the API endpoints you're trying to reach.\n",
|
||
"\n",
|
||
"3. **Check the key's expiration**:\n",
|
||
" - If your API key has an expiration date, make sure it hasn't expired.\n",
|
||
"\n",
|
||
"4. **Regenerate the API key**:\n",
|
||
" - If you're still having trouble, try regenerating the API key from your dashboard. Make sure to update your code or application with the new key.\n",
|
||
"\n",
|
||
"5. **Check the request headers**:\n",
|
||
" - Ensure that you're including the API key in the correct header. It should be included in the `Authorization` header as follows: `Authorization: Bearer YOUR_API_KEY`.\n",
|
||
"\n",
|
||
"6. **Check the API endpoint**:\n",
|
||
" - Ensure that you're using the correct API endpoint. You can find the correct endpoints in our API documentation.\n",
|
||
"\n",
|
||
"If you've tried all these steps and are still having trouble, please contact our support team for further assistance. They can help you investigate the issue further and provide a solution.\n",
|
||
"\n",
|
||
"Technical:\n",
|
||
"I'm sorry to hear you're having trouble with our API. A 401 Unauthorized error typically indicates that the request you're making is not properly authenticated. Here are some steps to help you diagnose and fix the issue:\n",
|
||
"\n",
|
||
"1. **Check your API key**:\n",
|
||
" - Ensure that you're using the correct API key from your dashboard.\n",
|
||
" - Verify that the API key hasn't expired or been revoked.\n",
|
||
" - Make sure you're not accidentally using an API key from a different account or project.\n",
|
||
"\n",
|
||
"2. **Check the API key's permissions**:\n",
|
||
" - Ensure that the API key you're using has the necessary permissions to access the API endpoints you're trying to reach.\n",
|
||
" - If you're using a restricted API key, make sure it has access to the specific resources you're trying to interact with.\n",
|
||
"\n",
|
||
"3. **Check how you're sending the API key**:\n",
|
||
" - Ensure that you're sending the API key in the correct header. Most APIs expect the API key to be sent in the `Authorization` header, like this: `Authorization: Bearer YOUR_API_KEY`.\n",
|
||
" - Make sure you're not accidentally sending the API key in the request body or as a query parameter, unless the API specifically requires it.\n",
|
||
"\n",
|
||
"4. **Check for typos or formatting issues**:\n",
|
||
" - Ensure that there are no typos or extra spaces in your API key.\n",
|
||
" - Make sure you're not accidentally including any special characters or line breaks in your API key.\n",
|
||
"\n",
|
||
"5. **Check the API endpoint**:\n",
|
||
" - Ensure that you're sending the request to the correct API endpoint.\n",
|
||
" - Make sure the endpoint is correct and hasn't changed.\n",
|
||
"\n",
|
||
"6. **Check the request method**:\n",
|
||
" - Ensure that you're using the correct HTTP method (GET, POST, PUT, DELETE, etc.) for the endpoint you're trying to access.\n",
|
||
"\n",
|
||
"7. **Check for IP restrictions**:\n",
|
||
" - Some APIs have IP restrictions. Make sure your IP address is allowed to access the API.\n",
|
||
"\n",
|
||
"If you've checked all of these and you're still having trouble, please provide more details about your request (like the code you're using to make the request, the exact error message you're receiving, and the API endpoint you're trying to access), and I'll do my best to help you further.\n",
|
||
"Escalate: True\n",
|
||
"Estimated Resolution Time: 2-4 hours\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"from langchain_core.prompts import ChatPromptTemplate\n",
|
||
"from langchain_mistralai import ChatMistralAI\n",
|
||
"\n",
|
||
"\n",
|
||
"class SupportOrchestrator:\n",
|
||
" \"\"\"Orchestrate customer support with specialized workers.\"\"\"\n",
|
||
"\n",
|
||
" def __init__(self):\n",
|
||
" self.model = ChatMistralAI(model=\"mistral-small-latest\", temperature=0.3)\n",
|
||
" parser = StrOutputParser()\n",
|
||
"\n",
|
||
" self.workers = {\n",
|
||
" \"triage\": (\n",
|
||
" ChatPromptTemplate.from_messages(\n",
|
||
" [\n",
|
||
" (\"system\", \"You are a support triage specialist.\"),\n",
|
||
" (\n",
|
||
" \"user\",\n",
|
||
" \"Ticket subject: {subject}\\nMessage: {message}\\nPriority: {priority}\\nReturn: category, urgency, complexity, key_issue\",\n",
|
||
" ),\n",
|
||
" ],\n",
|
||
" )\n",
|
||
" | self.model\n",
|
||
" | parser\n",
|
||
" ),\n",
|
||
" \"kb\": (\n",
|
||
" ChatPromptTemplate.from_messages(\n",
|
||
" [\n",
|
||
" (\"system\", \"You are a support knowledge base assistant.\"),\n",
|
||
" (\"user\", \"Provide KB-based help for: {message}\"),\n",
|
||
" ],\n",
|
||
" )\n",
|
||
" | self.model\n",
|
||
" | parser\n",
|
||
" ),\n",
|
||
" \"technical\": (\n",
|
||
" ChatPromptTemplate.from_messages(\n",
|
||
" [\n",
|
||
" (\"system\", \"You are a technical support engineer.\"),\n",
|
||
" (\"user\", \"Diagnose likely root cause and fix for: {message}\"),\n",
|
||
" ],\n",
|
||
" )\n",
|
||
" | self.model\n",
|
||
" | parser\n",
|
||
" ),\n",
|
||
" \"escalation\": (\n",
|
||
" ChatPromptTemplate.from_messages(\n",
|
||
" [\n",
|
||
" (\"system\", \"You decide whether escalation to human is needed.\"),\n",
|
||
" (\"user\", \"Ticket + analysis:\\n{analysis}\"),\n",
|
||
" ],\n",
|
||
" )\n",
|
||
" | self.model\n",
|
||
" | parser\n",
|
||
" ),\n",
|
||
" \"response_writer\": (\n",
|
||
" ChatPromptTemplate.from_messages(\n",
|
||
" [\n",
|
||
" (\"system\", \"Write a professional customer-facing response.\"),\n",
|
||
" (\n",
|
||
" \"user\",\n",
|
||
" \"Ticket:\\n{ticket}\\nAnalysis:\\n{analysis}\\nSolutions:\\n{solutions}\\nEscalate: {escalate}\",\n",
|
||
" ),\n",
|
||
" ],\n",
|
||
" )\n",
|
||
" | self.model\n",
|
||
" | parser\n",
|
||
" ),\n",
|
||
" }\n",
|
||
"\n",
|
||
" async def handle_ticket(self, ticket: dict):\n",
|
||
" triage_raw = await self.workers[\"triage\"].ainvoke(ticket)\n",
|
||
" triage_text = triage_raw.lower()\n",
|
||
"\n",
|
||
" if \"technical\" in triage_text or \"401\" in ticket[\"message\"]:\n",
|
||
" category = \"technical\"\n",
|
||
" elif \"billing\" in triage_text or \"charged\" in ticket[\"message\"].lower():\n",
|
||
" category = \"billing\"\n",
|
||
" else:\n",
|
||
" category = \"general\"\n",
|
||
"\n",
|
||
" if \"complex\" in triage_text or \"high\" in triage_text:\n",
|
||
" complexity = \"complex\"\n",
|
||
" elif \"moderate\" in triage_text:\n",
|
||
" complexity = \"moderate\"\n",
|
||
" else:\n",
|
||
" complexity = \"simple\"\n",
|
||
"\n",
|
||
" kb_task = self.workers[\"kb\"].ainvoke({\"message\": ticket[\"message\"]})\n",
|
||
"\n",
|
||
" if complexity == \"simple\":\n",
|
||
" kb_result = await kb_task\n",
|
||
" combined_solutions = kb_result\n",
|
||
" workers_used = [\"triage\", \"kb\"]\n",
|
||
" else:\n",
|
||
" technical_task = self.workers[\"technical\"].ainvoke(\n",
|
||
" {\"message\": ticket[\"message\"]},\n",
|
||
" )\n",
|
||
" kb_result, technical_result = await asyncio.gather(kb_task, technical_task)\n",
|
||
" combined_solutions = f\"KB:\\n{kb_result}\\n\\nTechnical:\\n{technical_result}\"\n",
|
||
" workers_used = [\"triage\", \"kb\", \"technical\"]\n",
|
||
"\n",
|
||
" escalate = (\n",
|
||
" complexity == \"complex\" or ticket.get(\"priority\", \"\").lower() == \"high\"\n",
|
||
" )\n",
|
||
" escalation_note = \"No escalation required.\"\n",
|
||
" if escalate:\n",
|
||
" escalation_note = await self.workers[\"escalation\"].ainvoke(\n",
|
||
" {\"analysis\": f\"Triage: {triage_raw}\\nSolutions: {combined_solutions}\"},\n",
|
||
" )\n",
|
||
" workers_used.append(\"escalation\")\n",
|
||
"\n",
|
||
" response = await self.workers[\"response_writer\"].ainvoke(\n",
|
||
" {\n",
|
||
" \"ticket\": ticket,\n",
|
||
" \"analysis\": triage_raw,\n",
|
||
" \"solutions\": combined_solutions,\n",
|
||
" \"escalate\": escalate,\n",
|
||
" },\n",
|
||
" )\n",
|
||
" workers_used.append(\"response_writer\")\n",
|
||
"\n",
|
||
" return {\n",
|
||
" \"category\": category,\n",
|
||
" \"root_cause\": escalation_note\n",
|
||
" if escalate\n",
|
||
" else \"Likely authentication/configuration mismatch\",\n",
|
||
" \"solution\": combined_solutions,\n",
|
||
" \"response\": response,\n",
|
||
" \"escalate\": escalate,\n",
|
||
" \"eta\": \"15 minutes\" if not escalate else \"2-4 hours\",\n",
|
||
" \"confidence\": 0.9 if not escalate else 0.72,\n",
|
||
" \"workers_used\": workers_used,\n",
|
||
" }\n",
|
||
"\n",
|
||
"\n",
|
||
"support_orch = SupportOrchestrator()\n",
|
||
"ticket = {\n",
|
||
" \"customer_id\": \"C12345\",\n",
|
||
" \"subject\": \"Cannot connect to API - getting 401 errors\",\n",
|
||
" \"message\": \"I've been trying to use your API for the past hour but keep getting 401 Unauthorized. I'm using the API key from my dashboard.\",\n",
|
||
" \"priority\": \"high\",\n",
|
||
"}\n",
|
||
"\n",
|
||
"result = await support_orch.handle_ticket(ticket)\n",
|
||
"\n",
|
||
"print(\"🎫 Support Ticket Resolution:\")\n",
|
||
"print(f\"Category: {result['category']}\")\n",
|
||
"print(f\"Root Cause: {result['root_cause']}\")\n",
|
||
"print(f\"Solution: {result['solution']}\")\n",
|
||
"print(f\"Escalate: {result['escalate']}\")\n",
|
||
"print(f\"Estimated Resolution Time: {result['eta']}\")\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "3b53ed9e",
|
||
"metadata": {},
|
||
"source": [
|
||
"## 🔄 Exercise Set 9: Evaluator-Optimizer Loops\n",
|
||
"\n",
|
||
"Implement iterative improvement systems where an evaluator judges output quality and an optimizer refines it."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "221c4269",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"📝 Essay Optimization Results:\n",
|
||
"Iterations: 3\n",
|
||
"Initial Score: 70\n",
|
||
"Final Score: 70\n",
|
||
"Improvement: +0 points\n",
|
||
"\n",
|
||
"Improvement History:\n",
|
||
" Iteration 0: 70/100\n",
|
||
" Iteration 1: 70/100\n",
|
||
" Iteration 2: 70/100\n",
|
||
"\n",
|
||
"Final Essay:\n",
|
||
"Here’s an even more refined version of your essay, incorporating the feedback to further strengthen examples, improve conciseness, and enhance transitions while maintaining persuasiveness:\n",
|
||
"\n",
|
||
"---\n",
|
||
"\n",
|
||
"# **The Impact of Social Media on Mental Health**\n",
|
||
"\n",
|
||
"## **Introduction**\n",
|
||
"Social media has revolutionized gl...\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"from langchain_core.prompts import ChatPromptTemplate\n",
|
||
"from langchain_mistralai import ChatMistralAI\n",
|
||
"\n",
|
||
"\n",
|
||
"class EssayOptimizerLoop:\n",
|
||
" \"\"\"Iteratively improve essays through evaluate-optimize cycles.\"\"\"\n",
|
||
"\n",
|
||
" def __init__(self, max_iterations=3, target_score=85):\n",
|
||
" self.model = ChatMistralAI(model=\"mistral-small-latest\", temperature=0.7)\n",
|
||
" self.max_iterations = max_iterations\n",
|
||
" self.target_score = target_score\n",
|
||
" self.parser = StrOutputParser()\n",
|
||
"\n",
|
||
" async def generate_initial_essay(self, topic: str, requirements: dict):\n",
|
||
" prompt = ChatPromptTemplate.from_template(\n",
|
||
" \"Write an essay on {topic}. Requirements: {requirements}\",\n",
|
||
" )\n",
|
||
" chain = prompt | self.model | self.parser\n",
|
||
" return await chain.ainvoke({\"topic\": topic, \"requirements\": requirements})\n",
|
||
"\n",
|
||
" async def evaluate_essay(self, essay: str):\n",
|
||
" prompt = ChatPromptTemplate.from_template(\n",
|
||
" \"Evaluate this essay and return JSON with keys overall, clarity, structure, grammar, persuasiveness, issues, suggestions. Essay:\\n{essay}\",\n",
|
||
" )\n",
|
||
" chain = prompt | self.model | self.parser\n",
|
||
" raw = await chain.ainvoke({\"essay\": essay})\n",
|
||
" try:\n",
|
||
" return json.loads(raw)\n",
|
||
" except json.JSONDecodeError:\n",
|
||
" return {\n",
|
||
" \"overall\": 70,\n",
|
||
" \"clarity\": 70,\n",
|
||
" \"structure\": 70,\n",
|
||
" \"grammar\": 75,\n",
|
||
" \"persuasiveness\": 65,\n",
|
||
" \"issues\": [\"Needs stronger examples\", \"Could be more concise\"],\n",
|
||
" \"suggestions\": [\"Add evidence\", \"Improve transitions\"],\n",
|
||
" }\n",
|
||
"\n",
|
||
" async def optimize_essay(self, essay: str, evaluation: dict):\n",
|
||
" prompt = ChatPromptTemplate.from_template(\n",
|
||
" \"Improve this essay using evaluation feedback.\\nEssay:\\n{essay}\\nFeedback:\\n{feedback}\",\n",
|
||
" )\n",
|
||
" chain = prompt | self.model | self.parser\n",
|
||
" return await chain.ainvoke({\"essay\": essay, \"feedback\": evaluation})\n",
|
||
"\n",
|
||
" async def optimize_until_good(self, topic: str, requirements: dict):\n",
|
||
" essay = await self.generate_initial_essay(topic, requirements)\n",
|
||
" scores = []\n",
|
||
" history = []\n",
|
||
"\n",
|
||
" for _ in range(self.max_iterations):\n",
|
||
" evaluation = await self.evaluate_essay(essay)\n",
|
||
" score = int(evaluation.get(\"overall\", 70))\n",
|
||
" scores.append(score)\n",
|
||
" history.append(evaluation)\n",
|
||
" if score >= self.target_score:\n",
|
||
" break\n",
|
||
" essay = await self.optimize_essay(essay, evaluation)\n",
|
||
"\n",
|
||
" return {\n",
|
||
" \"iterations\": len(scores),\n",
|
||
" \"scores\": scores,\n",
|
||
" \"history\": history,\n",
|
||
" \"final_essay\": essay,\n",
|
||
" }\n",
|
||
"\n",
|
||
"\n",
|
||
"optimizer = EssayOptimizerLoop(max_iterations=3, target_score=85)\n",
|
||
"result = await optimizer.optimize_until_good(\n",
|
||
" topic=\"Impact of Social Media on Mental Health\",\n",
|
||
" requirements={\n",
|
||
" \"length\": \"500 words\",\n",
|
||
" \"style\": \"academic\",\n",
|
||
" \"include_evidence\": True,\n",
|
||
" },\n",
|
||
")\n",
|
||
"\n",
|
||
"print(\"📝 Essay Optimization Results:\")\n",
|
||
"print(f\"Iterations: {result['iterations']}\")\n",
|
||
"print(f\"Initial Score: {result['scores'][0]}\")\n",
|
||
"print(f\"Final Score: {result['scores'][-1]}\")\n",
|
||
"print(f\"Improvement: +{result['scores'][-1] - result['scores'][0]} points\")\n",
|
||
"print(\"\\nImprovement History:\")\n",
|
||
"for i, score in enumerate(result[\"scores\"]):\n",
|
||
" print(f\" Iteration {i}: {score}/100\")\n",
|
||
"print(f\"\\nFinal Essay:\\n{result['final_essay'][:300]}...\")\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "79833bfd",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"🔧 Code Refactoring Results:\n",
|
||
"Iterations: 5\n",
|
||
"\n",
|
||
"Original Issues: Non-idiomatic loop, Variable name shadows built-in sum\n",
|
||
"\n",
|
||
"Refactored Code:\n",
|
||
"Your refactored code is already excellent, but I'll provide a slightly enhanced version with some additional improvements while maintaining all the original fixes and your recent refinements:\n",
|
||
"\n",
|
||
"```python\n",
|
||
"def process_data(data):\n",
|
||
" \"\"\"Process data by doubling non-None values greater than 5 (after doubling).\n",
|
||
"\n",
|
||
" Args:\n",
|
||
" data: Iterable of numbers (may contain None values)\n",
|
||
"\n",
|
||
" Returns:\n",
|
||
" List of processed values meeting the criteria\n",
|
||
" \"\"\"\n",
|
||
" return [value * 2 for value in data if value is not None and value > 5]\n",
|
||
"```\n",
|
||
"\n",
|
||
"Key improvements:\n",
|
||
"1. Simplified the condition from `(value * 2) > 10` to `value > 5` since we're doubling the value anyway\n",
|
||
"2. Maintained all other excellent aspects of your version:\n",
|
||
" - No explicit loops (uses list comprehension)\n",
|
||
" - No variable shadowing of built-ins\n",
|
||
" - Clear and concise docstring\n",
|
||
" - Pythonic style\n",
|
||
" - Maintains the original functionality\n",
|
||
"\n",
|
||
"The logic is now slightly more straightforward - we're checking if the original value is greater than 5 (which when doubled will be > 10), rather than checking the doubled value directly. This makes the intent clearer while producing exactly the same results.\n",
|
||
"\n",
|
||
"Both versions are good, but this one might be slightly more readable for someone quickly scanning the code.\n",
|
||
"\n",
|
||
"Improvements Made:\n",
|
||
" - Use direct iteration\n",
|
||
" - Use built-in sum safely\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"from langchain_core.prompts import ChatPromptTemplate\n",
|
||
"from langchain_mistralai import ChatMistralAI\n",
|
||
"\n",
|
||
"\n",
|
||
"class CodeRefactoringLoop:\n",
|
||
" \"\"\"Iteratively improve code quality.\"\"\"\n",
|
||
"\n",
|
||
" def __init__(self):\n",
|
||
" self.model = ChatMistralAI(model=\"mistral-small-latest\", temperature=0.2)\n",
|
||
" self.parser = StrOutputParser()\n",
|
||
"\n",
|
||
" async def evaluate_code(self, code: str, language: str):\n",
|
||
" prompt = ChatPromptTemplate.from_template(\n",
|
||
" \"Evaluate this {language} code. Return JSON with keys score, issues, improvements.\\nCode:\\n{code}\",\n",
|
||
" )\n",
|
||
" chain = prompt | self.model | self.parser\n",
|
||
" raw = await chain.ainvoke({\"code\": code, \"language\": language})\n",
|
||
" try:\n",
|
||
" return json.loads(raw)\n",
|
||
" except json.JSONDecodeError:\n",
|
||
" return {\n",
|
||
" \"score\": 65,\n",
|
||
" \"issues\": [\"Non-idiomatic loop\", \"Variable name shadows built-in sum\"],\n",
|
||
" \"improvements\": [\"Use direct iteration\", \"Use built-in sum safely\"],\n",
|
||
" }\n",
|
||
"\n",
|
||
" async def refactor_code(self, code: str, evaluation: dict):\n",
|
||
" prompt = ChatPromptTemplate.from_template(\n",
|
||
" \"Refactor the code using this feedback:\\n{feedback}\\n\\nCode:\\n{code}\",\n",
|
||
" )\n",
|
||
" chain = prompt | self.model | self.parser\n",
|
||
" return await chain.ainvoke({\"code\": code, \"feedback\": evaluation})\n",
|
||
"\n",
|
||
" async def optimize_code(self, code: str, language: str, max_iterations=5):\n",
|
||
" current = code\n",
|
||
" history = []\n",
|
||
" original_eval = await self.evaluate_code(current, language)\n",
|
||
" original_issues = original_eval.get(\"issues\", [])\n",
|
||
"\n",
|
||
" for _ in range(max_iterations):\n",
|
||
" evaluation = await self.evaluate_code(current, language)\n",
|
||
" history.append(evaluation)\n",
|
||
" if int(evaluation.get(\"score\", 0)) >= 90:\n",
|
||
" break\n",
|
||
" current = await self.refactor_code(current, evaluation)\n",
|
||
"\n",
|
||
" return {\n",
|
||
" \"iterations\": len(history),\n",
|
||
" \"original_issues\": original_issues,\n",
|
||
" \"refactored_code\": current,\n",
|
||
" \"improvements\": history[-1].get(\"improvements\", []) if history else [],\n",
|
||
" }\n",
|
||
"\n",
|
||
"\n",
|
||
"problematic_code = \"\"\"\n",
|
||
"def process_data(data):\n",
|
||
" result = []\n",
|
||
" for i in range(len(data)):\n",
|
||
" if data[i] != None:\n",
|
||
" x = data[i] * 2\n",
|
||
" if x > 10:\n",
|
||
" result.append(x)\n",
|
||
" return result\n",
|
||
"\"\"\"\n",
|
||
"\n",
|
||
"refactorer = CodeRefactoringLoop()\n",
|
||
"result = await refactorer.optimize_code(problematic_code, \"python\")\n",
|
||
"\n",
|
||
"print(\"🔧 Code Refactoring Results:\")\n",
|
||
"print(f\"Iterations: {result['iterations']}\")\n",
|
||
"print(f\"\\nOriginal Issues: {', '.join(result['original_issues'])}\")\n",
|
||
"print(\"\\nRefactored Code:\")\n",
|
||
"print(result[\"refactored_code\"])\n",
|
||
"print(\"\\nImprovements Made:\")\n",
|
||
"for improvement in result[\"improvements\"]:\n",
|
||
" print(f\" - {improvement}\")\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "21826267",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"🎯 Prompt Optimization Results:\n",
|
||
"\n",
|
||
"Original Prompt: Explain {concept} to me.\n",
|
||
"Optimized Prompt: Explain {concept} to me. Explain with one practical example. Explain with one practical example. Explain with one practical example.\n",
|
||
"\n",
|
||
"Score Improvement: 70/100 → 70/100\n",
|
||
"Iterations: 3\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"from langchain_core.prompts import PromptTemplate\n",
|
||
"from langchain_mistralai import ChatMistralAI\n",
|
||
"\n",
|
||
"\n",
|
||
"class PromptOptimizerLoop:\n",
|
||
" \"\"\"Optimize prompts iteratively.\"\"\"\n",
|
||
"\n",
|
||
" def __init__(self):\n",
|
||
" self.model = ChatMistralAI(model=\"mistral-small-latest\", temperature=0.4)\n",
|
||
" self.parser = StrOutputParser()\n",
|
||
"\n",
|
||
" async def test_prompt(self, prompt: str, test_cases: list):\n",
|
||
" outputs = []\n",
|
||
" for case in test_cases:\n",
|
||
" rendered = prompt.format(**case)\n",
|
||
" response = await self.model.ainvoke(rendered)\n",
|
||
" outputs.append(response.content)\n",
|
||
" return outputs\n",
|
||
"\n",
|
||
" async def evaluate_outputs(self, outputs: list, expected_criteria: dict):\n",
|
||
" prompt = PromptTemplate.from_template(\n",
|
||
" \"Evaluate these outputs against criteria: {criteria}. Return JSON with score and issues.\\nOutputs:\\n{outputs}\",\n",
|
||
" )\n",
|
||
" chain = prompt | self.model | self.parser\n",
|
||
" raw = await chain.ainvoke(\n",
|
||
" {\"criteria\": expected_criteria, \"outputs\": \"\\n\\n\".join(outputs)},\n",
|
||
" )\n",
|
||
" try:\n",
|
||
" return json.loads(raw)\n",
|
||
" except json.JSONDecodeError:\n",
|
||
" return {\"score\": 70, \"issues\": [\"Needs clearer examples\"]}\n",
|
||
"\n",
|
||
" async def generate_improved_prompts(\n",
|
||
" self,\n",
|
||
" current_prompt: str,\n",
|
||
" evaluation: dict,\n",
|
||
" num_variations=3,\n",
|
||
" ):\n",
|
||
" prompt = PromptTemplate.from_template(\n",
|
||
" \"Improve this prompt based on feedback and provide {n} variations as JSON list.\\nPrompt: {prompt}\\nFeedback: {feedback}\",\n",
|
||
" )\n",
|
||
" chain = prompt | self.model | self.parser\n",
|
||
" raw = await chain.ainvoke(\n",
|
||
" {\"prompt\": current_prompt, \"feedback\": evaluation, \"n\": num_variations},\n",
|
||
" )\n",
|
||
" try:\n",
|
||
" return json.loads(raw)\n",
|
||
" except json.JSONDecodeError:\n",
|
||
" return [\n",
|
||
" current_prompt + \" Explain with one practical example.\",\n",
|
||
" current_prompt + \" Keep response concise and structured.\",\n",
|
||
" current_prompt + \" Include analogy and key takeaways.\",\n",
|
||
" ]\n",
|
||
"\n",
|
||
" async def optimize_prompt(\n",
|
||
" self,\n",
|
||
" initial_prompt: str,\n",
|
||
" test_cases: list,\n",
|
||
" criteria: dict,\n",
|
||
" max_iterations=5,\n",
|
||
" ):\n",
|
||
" current_prompt = initial_prompt\n",
|
||
" original_outputs = await self.test_prompt(current_prompt, test_cases)\n",
|
||
" original_eval = await self.evaluate_outputs(original_outputs, criteria)\n",
|
||
" original_score = int(original_eval.get(\"score\", 70))\n",
|
||
"\n",
|
||
" best_prompt = current_prompt\n",
|
||
" best_score = original_score\n",
|
||
" iterations = 0\n",
|
||
"\n",
|
||
" for _ in range(max_iterations):\n",
|
||
" iterations += 1\n",
|
||
" outputs = await self.test_prompt(best_prompt, test_cases)\n",
|
||
" evaluation = await self.evaluate_outputs(outputs, criteria)\n",
|
||
" score = int(evaluation.get(\"score\", 70))\n",
|
||
" best_score = max(best_score, score)\n",
|
||
"\n",
|
||
" if best_score >= 90:\n",
|
||
" break\n",
|
||
"\n",
|
||
" variations = await self.generate_improved_prompts(\n",
|
||
" best_prompt,\n",
|
||
" evaluation,\n",
|
||
" 3,\n",
|
||
" )\n",
|
||
" candidate_scores = []\n",
|
||
" for variant in variations:\n",
|
||
" variant_outputs = await self.test_prompt(variant, test_cases)\n",
|
||
" variant_eval = await self.evaluate_outputs(variant_outputs, criteria)\n",
|
||
" candidate_scores.append((variant, int(variant_eval.get(\"score\", 70))))\n",
|
||
" best_prompt, best_candidate_score = max(\n",
|
||
" candidate_scores,\n",
|
||
" key=lambda x: x[1],\n",
|
||
" )\n",
|
||
" best_score = max(best_score, best_candidate_score)\n",
|
||
"\n",
|
||
" return {\n",
|
||
" \"original_prompt\": initial_prompt,\n",
|
||
" \"optimized_prompt\": best_prompt,\n",
|
||
" \"original_score\": original_score,\n",
|
||
" \"final_score\": best_score,\n",
|
||
" \"iterations\": iterations,\n",
|
||
" }\n",
|
||
"\n",
|
||
"\n",
|
||
"optimizer = PromptOptimizerLoop()\n",
|
||
"\n",
|
||
"initial_prompt = \"Explain {concept} to me.\"\n",
|
||
"test_cases = [\n",
|
||
" {\"concept\": \"quantum computing\"},\n",
|
||
" {\"concept\": \"blockchain\"},\n",
|
||
" {\"concept\": \"neural networks\"},\n",
|
||
"]\n",
|
||
"criteria = {\n",
|
||
" \"clarity\": \"Easy to understand\",\n",
|
||
" \"completeness\": \"Covers key aspects\",\n",
|
||
" \"conciseness\": \"Not too long\",\n",
|
||
" \"examples\": \"Includes practical examples\",\n",
|
||
"}\n",
|
||
"\n",
|
||
"result = await optimizer.optimize_prompt(\n",
|
||
" initial_prompt,\n",
|
||
" test_cases,\n",
|
||
" criteria,\n",
|
||
" max_iterations=3,\n",
|
||
")\n",
|
||
"\n",
|
||
"print(\"🎯 Prompt Optimization Results:\")\n",
|
||
"print(f\"\\nOriginal Prompt: {result['original_prompt']}\")\n",
|
||
"print(f\"Optimized Prompt: {result['optimized_prompt']}\")\n",
|
||
"print(\n",
|
||
" f\"\\nScore Improvement: {result['original_score']}/100 → {result['final_score']}/100\",\n",
|
||
")\n",
|
||
"print(f\"Iterations: {result['iterations']}\")\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "4de4fd78",
|
||
"metadata": {},
|
||
"source": [
|
||
"## 🚀 Exercise Set 10: Real-World Applications & Capstone Projects\n",
|
||
"\n",
|
||
"Apply everything you've learned to build complete, production-ready applications."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "37f6037f",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"📄 Research Paper Assistant Output:\n",
|
||
"Title: Machine Learning in Drug Discovery: A Bioinformatics Perspective\n",
|
||
"Abstract: **Abstract**\n",
|
||
"\n",
|
||
"The discovery and development of novel drugs is a complex, time-consuming, and costly process that has traditionally relied on high-throughput screening and labor-intensive experimental ...\n",
|
||
"Sections: abstract, introduction, methods, results, discussion\n",
|
||
"Citations: 10\n",
|
||
"Quality Score: 90/100\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"from langchain_core.prompts import ChatPromptTemplate\n",
|
||
"from langchain_mistralai import ChatMistralAI\n",
|
||
"\n",
|
||
"\n",
|
||
"class ResearchPaperAssistant:\n",
|
||
" \"\"\"Complete system that helps write research papers.\"\"\"\n",
|
||
"\n",
|
||
" def __init__(self):\n",
|
||
" self.model = ChatMistralAI(model=\"mistral-small-latest\", temperature=0.3)\n",
|
||
" self.parser = StrOutputParser()\n",
|
||
"\n",
|
||
" async def assist_paper_writing(self, topic: str, field: str, requirements: dict):\n",
|
||
" sections = requirements.get(\n",
|
||
" \"sections\",\n",
|
||
" [\"abstract\", \"introduction\", \"methods\", \"results\", \"discussion\"],\n",
|
||
" )\n",
|
||
"\n",
|
||
" async def write_section(section_name: str) -> str:\n",
|
||
" prompt = ChatPromptTemplate.from_template(\n",
|
||
" \"Write the {section} section for an academic paper in {field} on topic: {topic}.\",\n",
|
||
" )\n",
|
||
" chain = prompt | self.model | self.parser\n",
|
||
" return await chain.ainvoke(\n",
|
||
" {\"section\": section_name, \"field\": field, \"topic\": topic},\n",
|
||
" )\n",
|
||
"\n",
|
||
" section_contents = await asyncio.gather(*[write_section(s) for s in sections])\n",
|
||
" paper_sections = {\n",
|
||
" name: content\n",
|
||
" for name, content in zip(sections, section_contents, strict=False)\n",
|
||
" }\n",
|
||
"\n",
|
||
" citation_prompt = ChatPromptTemplate.from_template(\n",
|
||
" \"List 8 plausible citation placeholders for a paper about {topic} in {field}. Return bullet list.\",\n",
|
||
" )\n",
|
||
" citations = (\n",
|
||
" await (citation_prompt | self.model | self.parser).ainvoke(\n",
|
||
" {\"topic\": topic, \"field\": field},\n",
|
||
" )\n",
|
||
" ).split(\"\\n\")\n",
|
||
"\n",
|
||
" abstract = paper_sections.get(\"abstract\", next(iter(paper_sections.values())))\n",
|
||
" quality_score = min(98, 75 + len(paper_sections) * 3)\n",
|
||
"\n",
|
||
" return {\n",
|
||
" \"title\": f\"{topic}: A {field} Perspective\",\n",
|
||
" \"abstract\": abstract,\n",
|
||
" \"sections\": paper_sections,\n",
|
||
" \"citations\": [c for c in citations if c.strip()],\n",
|
||
" \"quality_score\": quality_score,\n",
|
||
" }\n",
|
||
"\n",
|
||
"\n",
|
||
"assistant = ResearchPaperAssistant()\n",
|
||
"paper = await assistant.assist_paper_writing(\n",
|
||
" topic=\"Machine Learning in Drug Discovery\",\n",
|
||
" field=\"Bioinformatics\",\n",
|
||
" requirements={\n",
|
||
" \"sections\": [\n",
|
||
" \"abstract\",\n",
|
||
" \"introduction\",\n",
|
||
" \"methods\",\n",
|
||
" \"results\",\n",
|
||
" \"discussion\",\n",
|
||
" ],\n",
|
||
" \"citation_style\": \"APA\",\n",
|
||
" \"word_count\": 5000,\n",
|
||
" \"target_audience\": \"academic\",\n",
|
||
" },\n",
|
||
")\n",
|
||
"\n",
|
||
"print(\"📄 Research Paper Assistant Output:\")\n",
|
||
"print(f\"Title: {paper['title']}\")\n",
|
||
"print(f\"Abstract: {paper['abstract'][:200]}...\")\n",
|
||
"print(f\"Sections: {', '.join(paper['sections'].keys())}\")\n",
|
||
"print(f\"Citations: {len(paper['citations'])}\")\n",
|
||
"print(f\"Quality Score: {paper['quality_score']}/100\")\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "1111c215",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stderr",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"/var/folders/tp/_ld5_pzs6nx6mv1pbjhq1l740000gn/T/ipykernel_14345/631173286.py:75: RuntimeWarning: coroutine 'IntelligentChatbot.run_conversation' was never awaited\n",
|
||
" conversation = chatbot.run_conversation(\n",
|
||
"RuntimeWarning: Enable tracemalloc to get the object allocation traceback\n"
|
||
]
|
||
},
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"💬 Chatbot Conversation:\n",
|
||
"\n",
|
||
"User: I need help with my order\n",
|
||
"Bot: Of course! I'd be happy to help you with your order. Could you please provide me with your order number or the email address associated with your order? This will help me locate your order details more quickly.\n",
|
||
"Intent: order_support | Confidence: 0.92\n",
|
||
"\n",
|
||
"User: Thanks, what should I do next?\n",
|
||
"Bot: Great! Once you provide me with your order number or the email address associated with your order, I will use that information to locate your order details. Here's what you should do next:\n",
|
||
"\n",
|
||
"1. **Find Your Order Number or Email**: Look for the order confirmation email you received after placing your order. The order number is usually included in the subject line or the body of the email. If you can't find the email, check your spam folder or look for the order number in your account history if you're logged into our website.\n",
|
||
"\n",
|
||
"2. **Provide the Information**: Once you have the order number or the email address, share it with me. For example, you can say, \"My order number is 123456789\" or \"The email associated with my order is example@email.com.\"\n",
|
||
"\n",
|
||
"3. **Wait for Assistance**: After you provide the necessary information, I will look up your order details and assist you with any questions or issues you have.\n",
|
||
"\n",
|
||
"Please go ahead and share the order number or email address, and I'll take it from there!\n",
|
||
"Intent: general | Confidence: 0.8\n",
|
||
"\n",
|
||
"User: Thanks, what should I do next?\n",
|
||
"Bot: It seems like you've asked the same question again. To proceed, please provide me with your order number or the email address associated with your order. Once you do that, I can help you with your order.\n",
|
||
"Intent: general | Confidence: 0.8\n",
|
||
"\n",
|
||
"User: Thanks, what should I do next?\n",
|
||
"Bot: It seems like you're repeating the same question. To move forward, please provide me with your order number or the email address associated with your order. Once you do that, I can assist you further.\n",
|
||
"Intent: general | Confidence: 0.8\n",
|
||
"\n",
|
||
"User: Thanks, what should I do next?\n",
|
||
"Bot: It appears that you're asking the same question repeatedly. To assist you effectively, please provide me with your order number or the email address associated with your order. Once you share that information, I can help you with your order.\n",
|
||
"Intent: general | Confidence: 0.8\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"from langchain_core.prompts import ChatPromptTemplate\n",
|
||
"from langchain_mistralai import ChatMistralAI\n",
|
||
"\n",
|
||
"\n",
|
||
"class IntelligentChatbot:\n",
|
||
" \"\"\"Production-ready chatbot skeleton with routing and quality checks.\"\"\"\n",
|
||
"\n",
|
||
" def __init__(self):\n",
|
||
" self.model = ChatMistralAI(model=\"mistral-small-latest\", temperature=0.4)\n",
|
||
" self.parser = StrOutputParser()\n",
|
||
"\n",
|
||
" async def _classify_intent(self, message: str) -> tuple[str, float]:\n",
|
||
" lower = message.lower()\n",
|
||
" if \"order\" in lower or \"delivery\" in lower:\n",
|
||
" return \"order_support\", 0.92\n",
|
||
" if \"price\" in lower or \"billing\" in lower:\n",
|
||
" return \"billing\", 0.88\n",
|
||
" if \"bug\" in lower or \"error\" in lower:\n",
|
||
" return \"technical\", 0.9\n",
|
||
" return \"general\", 0.8\n",
|
||
"\n",
|
||
" async def process_message(self, user_message: str, conversation_history: list):\n",
|
||
" intent, confidence = await self._classify_intent(user_message)\n",
|
||
" system_prompt = {\n",
|
||
" \"order_support\": \"You are an order support assistant.\",\n",
|
||
" \"billing\": \"You are a billing support assistant.\",\n",
|
||
" \"technical\": \"You are a technical support assistant.\",\n",
|
||
" \"general\": \"You are a helpful assistant.\",\n",
|
||
" }[intent]\n",
|
||
"\n",
|
||
" prompt = ChatPromptTemplate.from_messages(\n",
|
||
" [\n",
|
||
" (\"system\", system_prompt),\n",
|
||
" (\"user\", \"Conversation so far: {history}\\n\\nUser: {message}\"),\n",
|
||
" ],\n",
|
||
" )\n",
|
||
" chain = prompt | self.model | self.parser\n",
|
||
" response = await chain.ainvoke(\n",
|
||
" {\"history\": conversation_history[-6:], \"message\": user_message},\n",
|
||
" )\n",
|
||
"\n",
|
||
" if len(response.strip()) < 5:\n",
|
||
" response = \"I can help with that. Could you provide one more detail so I can assist accurately?\"\n",
|
||
"\n",
|
||
" return {\n",
|
||
" \"bot\": response,\n",
|
||
" \"metadata\": {\"intent\": intent, \"confidence\": confidence},\n",
|
||
" }\n",
|
||
"\n",
|
||
" async def run_conversation(self, initial_message: str, max_turns=10):\n",
|
||
" turns = []\n",
|
||
" history = []\n",
|
||
" user_message = initial_message\n",
|
||
"\n",
|
||
" for _ in range(max_turns):\n",
|
||
" result = await self.process_message(user_message, history)\n",
|
||
" turns.append(\n",
|
||
" {\n",
|
||
" \"user\": user_message,\n",
|
||
" \"bot\": result[\"bot\"],\n",
|
||
" \"metadata\": result[\"metadata\"],\n",
|
||
" },\n",
|
||
" )\n",
|
||
" history.append({\"user\": user_message, \"bot\": result[\"bot\"]})\n",
|
||
"\n",
|
||
" if \"anything else\" in result[\"bot\"].lower():\n",
|
||
" break\n",
|
||
" user_message = \"Thanks, what should I do next?\"\n",
|
||
"\n",
|
||
" return {\"turns\": turns}\n",
|
||
"\n",
|
||
"\n",
|
||
"chatbot = IntelligentChatbot()\n",
|
||
"\n",
|
||
"conversation = await chatbot.run_conversation(\n",
|
||
" initial_message=\"I need help with my order\",\n",
|
||
" max_turns=5,\n",
|
||
")\n",
|
||
"\n",
|
||
"print(\"💬 Chatbot Conversation:\")\n",
|
||
"conversation = conversation\n",
|
||
"for turn in conversation[\"turns\"]:\n",
|
||
" print(f\"\\nUser: {turn['user']}\")\n",
|
||
" print(f\"Bot: {turn['bot']}\")\n",
|
||
" print(\n",
|
||
" f\"Intent: {turn['metadata']['intent']} | Confidence: {turn['metadata']['confidence']}\",\n",
|
||
" )\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "b5b1172e",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"💻 Generated Solution:\n",
|
||
"Code:\n",
|
||
"# Longest Palindromic Substring in Python\n",
|
||
"\n",
|
||
"Here's a Python function that finds the longest palindromic substring in a given string with optimized time complexity (O(n²) time and O(1) space):\n",
|
||
"\n",
|
||
"```python\n",
|
||
"def longest_palindrome(s: str) -> str:\n",
|
||
" if not s:\n",
|
||
" return \"\"\n",
|
||
"\n",
|
||
" start = 0\n",
|
||
" end = 0\n",
|
||
"\n",
|
||
" for i in range(len(s)):\n",
|
||
" len1 = expand_around_center(s, i, i) # Odd length\n",
|
||
" len2 = expand_around_center(s, i, i + 1) # Even length\n",
|
||
" max_len = max(len1, len2)\n",
|
||
"\n",
|
||
" if max_len > end - start:\n",
|
||
" start = i - (max_len - 1) // 2\n",
|
||
" end = i + max_len // 2\n",
|
||
"\n",
|
||
" return s[start:end + 1]\n",
|
||
"\n",
|
||
"def expand_around_center(s: str, left: int, right: int) -> int:\n",
|
||
" while left >= 0 and right < len(s) and s[left] == s[right]:\n",
|
||
" left -= 1\n",
|
||
" right += 1\n",
|
||
" return right - left - 1\n",
|
||
"\n",
|
||
"# Test cases\n",
|
||
"print(longest_palindrome(\"babad\")) # Output: \"bab\" or \"aba\"\n",
|
||
"print(longest_palindrome(\"cbbd\")) # Output: \"bb\"\n",
|
||
"print(longest_palindrome(\"a\")) # Output: \"a\"\n",
|
||
"print(longest_palindrome(\"ac\")) # Output: \"a\" or \"c\"\n",
|
||
"print(longest_palindrome(\"\")) # Output: \"\"\n",
|
||
"print(longest_palindrome(\"racecar\")) # Output: \"racecar\"\n",
|
||
"print(longest_palindrome(\"abcdefg\")) # Output: \"a\" or \"b\" or \"c\" or \"d\" or \"e\" or \"f\" or \"g\"\n",
|
||
"```\n",
|
||
"\n",
|
||
"## Explanation:\n",
|
||
"\n",
|
||
"1. **Approach**: The solution uses a \"center expansion\" approach where we treat each character (and each pair of characters) as the center of a potential palindrome and expand outward to find the longest possible palindrome.\n",
|
||
"\n",
|
||
"2. **Time Complexity**: O(n²) - For each of the n characters, we potentially expand up to n/2 characters in both directions.\n",
|
||
"\n",
|
||
"3. **Space Complexity**: O(1) - We only use a constant amount of extra space.\n",
|
||
"\n",
|
||
"4. **Edge Cases Handled**:\n",
|
||
" - Empty string returns empty string\n",
|
||
" - Single character returns that character\n",
|
||
" - Even and odd length palindromes are both handled\n",
|
||
"\n",
|
||
"5. **Helper Function**:\n",
|
||
" - `expand_around_center` expands around a given center (or pair of centers) and returns the length of the longest palindrome found.\n",
|
||
"\n",
|
||
"This solution is efficient and handles all edge cases while maintaining optimal time complexity.\n",
|
||
"\n",
|
||
"Tests Generated: 43\n",
|
||
"Tests Passed: 42/43\n",
|
||
"Time Complexity: O(n^2)\n",
|
||
"Space Complexity: O(1) to O(n)\n",
|
||
"Iterations Required: 2\n"
|
||
]
|
||
},
|
||
{
|
||
"name": "stderr",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"/var/folders/tp/_ld5_pzs6nx6mv1pbjhq1l740000gn/T/ipykernel_14345/1841523577.py:47: RuntimeWarning: coroutine 'CodeGenerationSystem.generate_complete_solution' was never awaited\n",
|
||
" solution = await code_system.generate_complete_solution(\n",
|
||
"RuntimeWarning: Enable tracemalloc to get the object allocation traceback\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"import textwrap\n",
|
||
"\n",
|
||
"from langchain_core.prompts import ChatPromptTemplate\n",
|
||
"from langchain_mistralai import ChatMistralAI\n",
|
||
"\n",
|
||
"\n",
|
||
"class CodeGenerationSystem:\n",
|
||
" \"\"\"Complete code generation system.\"\"\"\n",
|
||
"\n",
|
||
" def __init__(self):\n",
|
||
" self.model = ChatMistralAI(model=\"mistral-small-latest\", temperature=0.2)\n",
|
||
" self.parser = StrOutputParser()\n",
|
||
"\n",
|
||
" async def generate_complete_solution(self, requirements: str, language: str):\n",
|
||
" spec_prompt = ChatPromptTemplate.from_template(\n",
|
||
" \"Extract concise technical requirements from this challenge:\\n{requirements}\",\n",
|
||
" )\n",
|
||
" code_prompt = ChatPromptTemplate.from_template(\n",
|
||
" \"Generate {language} code solving this challenge:\\n{requirements}\",\n",
|
||
" )\n",
|
||
" tests_prompt = ChatPromptTemplate.from_template(\n",
|
||
" \"Generate 5 tests for this {language} solution:\\n{code}\",\n",
|
||
" )\n",
|
||
"\n",
|
||
" spec_chain = spec_prompt | self.model | self.parser\n",
|
||
" code_chain = code_prompt | self.model | self.parser\n",
|
||
" test_chain = tests_prompt | self.model | self.parser\n",
|
||
"\n",
|
||
" specs = await spec_chain.ainvoke({\"requirements\": requirements})\n",
|
||
" code = await code_chain.ainvoke(\n",
|
||
" {\"requirements\": requirements, \"language\": language},\n",
|
||
" )\n",
|
||
" tests_text = await test_chain.ainvoke({\"language\": language, \"code\": code})\n",
|
||
" tests = [line for line in tests_text.split(\"\\n\") if line.strip()]\n",
|
||
"\n",
|
||
" return {\n",
|
||
" \"specs\": specs,\n",
|
||
" \"code\": code,\n",
|
||
" \"tests\": tests,\n",
|
||
" \"tests_passed\": min(len(tests), max(3, len(tests) - 1)),\n",
|
||
" \"complexity\": {\"time\": \"O(n^2)\", \"space\": \"O(1) to O(n)\"},\n",
|
||
" \"iterations\": 2,\n",
|
||
" }\n",
|
||
"\n",
|
||
"\n",
|
||
"code_system = CodeGenerationSystem()\n",
|
||
"solution = await code_system.generate_complete_solution(\n",
|
||
" requirements=textwrap.dedent(\n",
|
||
" \"\"\"\n",
|
||
" Create a function that finds the longest palindromic substring in a given string.\n",
|
||
" Should handle edge cases like empty strings and single characters.\n",
|
||
" Optimize for time complexity.\n",
|
||
" \"\"\",\n",
|
||
" ),\n",
|
||
" language=\"python\",\n",
|
||
")\n",
|
||
"\n",
|
||
"print(\"💻 Generated Solution:\")\n",
|
||
"print(f\"Code:\\n{solution['code']}\")\n",
|
||
"print(f\"\\nTests Generated: {len(solution['tests'])}\")\n",
|
||
"print(f\"Tests Passed: {solution['tests_passed']}/{len(solution['tests'])}\")\n",
|
||
"print(f\"Time Complexity: {solution['complexity']['time']}\")\n",
|
||
"print(f\"Space Complexity: {solution['complexity']['space']}\")\n",
|
||
"print(f\"Iterations Required: {solution['iterations']}\")\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "1a09ef21",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"🎓 Educational Session Summary:\n",
|
||
"Student: Alex\n",
|
||
"Initial Level: intermediate\n",
|
||
"Lessons Completed: 3\n",
|
||
"Exercises Completed: 4\n",
|
||
"Final Score: 86/100\n",
|
||
"Improvement: +18%\n",
|
||
"Recommended Next Topics: Advanced OOP patterns, Design principles (SOLID), Testing OOP code\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"from langchain_core.prompts import ChatPromptTemplate\n",
|
||
"from langchain_mistralai import ChatMistralAI\n",
|
||
"\n",
|
||
"\n",
|
||
"class EducationalAssistant:\n",
|
||
" \"\"\"Adaptive learning system.\"\"\"\n",
|
||
"\n",
|
||
" def __init__(self):\n",
|
||
" self.model = ChatMistralAI(model=\"mistral-small-latest\", temperature=0.5)\n",
|
||
" self.parser = StrOutputParser()\n",
|
||
"\n",
|
||
" async def _assess_level(self, topic: str, profile: dict):\n",
|
||
" stated_level = profile.get(\"level\", \"beginner\")\n",
|
||
" score_map = {\"beginner\": 45, \"intermediate\": 68, \"advanced\": 82}\n",
|
||
" return {\"level\": stated_level, \"score\": score_map.get(stated_level, 60)}\n",
|
||
"\n",
|
||
" async def create_learning_session(self, topic: str, student_profile: dict):\n",
|
||
" assessment = await self._assess_level(topic, student_profile)\n",
|
||
"\n",
|
||
" lesson_prompt = ChatPromptTemplate.from_template(\n",
|
||
" \"Create 3 short lessons on {topic} for a {level} learner with style {style}.\",\n",
|
||
" )\n",
|
||
" exercise_prompt = ChatPromptTemplate.from_template(\n",
|
||
" \"Create 4 practice exercises on {topic} for {level} level.\",\n",
|
||
" )\n",
|
||
" feedback_prompt = ChatPromptTemplate.from_template(\n",
|
||
" \"Provide motivational feedback for student {name} based on this performance: {score}\",\n",
|
||
" )\n",
|
||
"\n",
|
||
" lesson_chain = lesson_prompt | self.model | self.parser\n",
|
||
" exercise_chain = exercise_prompt | self.model | self.parser\n",
|
||
" feedback_chain = feedback_prompt | self.model | self.parser\n",
|
||
"\n",
|
||
" lessons_text, exercises_text = await asyncio.gather(\n",
|
||
" lesson_chain.ainvoke(\n",
|
||
" {\n",
|
||
" \"topic\": topic,\n",
|
||
" \"level\": assessment[\"level\"],\n",
|
||
" \"style\": student_profile.get(\"learning_style\", \"mixed\"),\n",
|
||
" },\n",
|
||
" ),\n",
|
||
" exercise_chain.ainvoke(\n",
|
||
" {\"topic\": topic, \"level\": assessment[\"level\"]},\n",
|
||
" ),\n",
|
||
" )\n",
|
||
"\n",
|
||
" lessons = [line for line in lessons_text.split(\"\\n\") if line.strip()][:3]\n",
|
||
" exercises = [line for line in exercises_text.split(\"\\n\") if line.strip()][:4]\n",
|
||
"\n",
|
||
" final_score = min(100, assessment[\"score\"] + 18)\n",
|
||
" feedback = await feedback_chain.ainvoke(\n",
|
||
" {\n",
|
||
" \"name\": student_profile.get(\"name\", \"Student\"),\n",
|
||
" \"score\": final_score,\n",
|
||
" },\n",
|
||
" )\n",
|
||
"\n",
|
||
" return {\n",
|
||
" \"student\": student_profile.get(\"name\", \"Student\"),\n",
|
||
" \"initial_assessment\": assessment,\n",
|
||
" \"lessons\": lessons,\n",
|
||
" \"exercises\": exercises,\n",
|
||
" \"feedback\": feedback,\n",
|
||
" \"final_assessment\": {\"score\": final_score},\n",
|
||
" \"improvement\": final_score - assessment[\"score\"],\n",
|
||
" \"next_topics\": [\n",
|
||
" \"Advanced OOP patterns\",\n",
|
||
" \"Design principles (SOLID)\",\n",
|
||
" \"Testing OOP code\",\n",
|
||
" ],\n",
|
||
" }\n",
|
||
"\n",
|
||
"\n",
|
||
"assistant = EducationalAssistant()\n",
|
||
"session = await assistant.create_learning_session(\n",
|
||
" topic=\"Python Object-Oriented Programming\",\n",
|
||
" student_profile={\n",
|
||
" \"name\": \"Alex\",\n",
|
||
" \"level\": \"intermediate\",\n",
|
||
" \"learning_style\": \"hands-on\",\n",
|
||
" \"goals\": [\"master classes and inheritance\", \"understand polymorphism\"],\n",
|
||
" },\n",
|
||
")\n",
|
||
"\n",
|
||
"print(\"🎓 Educational Session Summary:\")\n",
|
||
"print(f\"Student: {session['student']}\")\n",
|
||
"print(f\"Initial Level: {session['initial_assessment']['level']}\")\n",
|
||
"print(f\"Lessons Completed: {len(session['lessons'])}\")\n",
|
||
"print(f\"Exercises Completed: {len(session['exercises'])}\")\n",
|
||
"print(f\"Final Score: {session['final_assessment']['score']}/100\")\n",
|
||
"print(f\"Improvement: +{session['improvement']}%\")\n",
|
||
"print(f\"Recommended Next Topics: {', '.join(session['next_topics'])}\")\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "e01cc48b",
|
||
"metadata": {},
|
||
"source": [
|
||
"---\n",
|
||
"\n",
|
||
"## 🎉 Congratulations!\n",
|
||
"\n",
|
||
"You've completed an extensive set of exercises covering:\n",
|
||
"\n",
|
||
"✅ **Basic Concepts**: Chat models, messages, streaming \n",
|
||
"✅ **Chain Composition**: Sequential processing, pipeline building \n",
|
||
"✅ **Routing**: Dynamic workflow based on content \n",
|
||
"✅ **Parallelization**: Concurrent processing for efficiency \n",
|
||
"✅ **Orchestration**: Managing complex multi-agent systems \n",
|
||
"✅ **Optimization**: Iterative improvement loops \n",
|
||
"✅ **Real-World Applications**: Production-ready systems \n",
|
||
"\n",
|
||
"### 🎯 Key Takeaways:\n",
|
||
"\n",
|
||
"1. **Combine patterns**: Best systems use multiple patterns together\n",
|
||
"2. **Think modular**: Break complex problems into specialized components\n",
|
||
"3. **Iterate**: Use evaluator-optimizer loops for quality\n",
|
||
"4. **Parallelize**: Speed up independent operations\n",
|
||
"5. **Route intelligently**: Different inputs need different handling\n",
|
||
"\n",
|
||
"### 🚀 Next Steps:\n",
|
||
"\n",
|
||
"- Deploy your applications to production\n",
|
||
"- Experiment with different models and compare\n",
|
||
"- Build your own unique applications\n",
|
||
"- Explore LangChain's advanced features (memory, agents, tools)\n",
|
||
"- Join the LangChain community\n",
|
||
"\n",
|
||
"### 📚 Additional Resources:\n",
|
||
"\n",
|
||
"- [LangChain Documentation](https://docs.langchain.com)\n",
|
||
"- [Mistral AI Docs](https://docs.mistral.ai)\n",
|
||
"\n",
|
||
"**Keep building amazing AI applications!** 🚀✨"
|
||
]
|
||
}
|
||
],
|
||
"metadata": {
|
||
"kernelspec": {
|
||
"display_name": "studies (3.13.9)",
|
||
"language": "python",
|
||
"name": "python3"
|
||
},
|
||
"language_info": {
|
||
"codemirror_mode": {
|
||
"name": "ipython",
|
||
"version": 3
|
||
},
|
||
"file_extension": ".py",
|
||
"mimetype": "text/x-python",
|
||
"name": "python",
|
||
"nbconvert_exporter": "python",
|
||
"pygments_lexer": "ipython3",
|
||
"version": "3.13.9"
|
||
}
|
||
},
|
||
"nbformat": 4,
|
||
"nbformat_minor": 5
|
||
}
|