mirror of
https://github.com/ArthurDanjou/artsite.git
synced 2026-01-28 22:56:01 +01:00
Lint code
This commit is contained in:
@@ -19,7 +19,7 @@ I use tools like
:prose-icon[scikit-learn]{icon="devicon-scikitlearn" color="orange"} for supervised learning,
:prose-icon[pandas]{icon="i-logos:pandas-icon" color="blue"} for efficient data manipulation,
:prose-icon[NumPy]{icon="i-logos:numpy" color="indigo"} for scientific computation, and
:prose-icon[TensorFlow]{icon="i-logos:tensorflow" color="orange"} to build and train deep learning models.
I also learned other important technologies, such as
:prose-icon[Docker]{icon="i-logos:docker-icon" color="sky"},
:prose-icon[Redis]{icon="i-logos:redis" color="red"},
@@ -21,4 +21,4 @@ Create categories and tabs to group your shortcuts, personalize them with icons
- [ESLint](https://eslint.org): A linter that identifies and fixes problems in your JavaScript/TypeScript code.
- [Drizzle ORM](https://orm.drizzle.team/): A lightweight, type-safe ORM built for TypeScript, designed for simplicity and performance.
- [Zod](https://zod.dev/): A TypeScript-first schema declaration and validation library with full static type inference.
- and a lot of ❤️
@@ -28,4 +28,4 @@ It’s designed to be fast, accessible, and fully responsive. The site also serv
- **Linter** → [ESLint](https://eslint.org/): A tool for identifying and fixing problems in JavaScript/TypeScript code.
- **ORM** → [Drizzle ORM](https://orm.drizzle.team/): A lightweight, type-safe ORM for TypeScript.
- **Validation** → [Zod](https://zod.dev/): A TypeScript-first schema declaration and validation library with full static type inference.
- **Deployment** → [NuxtHub](https://hub.nuxt.com/): A platform to deploy and scale Nuxt apps globally with minimal latency and full-stack capabilities.
@@ -1,7 +1,7 @@
---
slug: neural-network
title: What is a Neural Network?
description: This article introduces neural networks, explaining their structure, training, and key concepts such as activation functions and backpropagation. It includes a TensorFlow example of a neural network with two hidden layers.
readingTime: 3
publishedAt: 2025/03/30
tags:
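The description above mentions a TensorFlow example with two hidden layers. As a rough stand-in (plain Python rather than TensorFlow, with made-up weights purely for illustration), the forward pass of such a network looks like:

```python
import math

def dense(x, weights, biases, activation):
    """One fully connected layer: out_j = activation(sum_i x[i]*weights[i][j] + biases[j])."""
    return [activation(sum(xi * wij for xi, wij in zip(x, col)) + b)
            for col, b in zip(zip(*weights), biases)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative weights only; a real network learns these via backpropagation.
W1, b1 = [[0.5, -0.2], [0.1, 0.4]], [0.0, 0.1]     # input (2) -> hidden layer 1 (2)
W2, b2 = [[0.3, 0.7], [-0.6, 0.2]], [0.05, -0.05]  # hidden 1 -> hidden layer 2 (2)
W3, b3 = [[0.8], [-0.3]], [0.0]                    # hidden 2 -> output (1)

def forward(x):
    h1 = dense(x, W1, b1, math.tanh)   # hidden layer 1 (tanh activation)
    h2 = dense(h1, W2, b2, math.tanh)  # hidden layer 2
    return dense(h2, W3, b3, sigmoid)  # sigmoid output, e.g. for binary classification

print(forward([1.0, 2.0]))
```

In a framework like TensorFlow, each `dense` call corresponds to one layer object, and training adjusts the weights automatically.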
@@ -34,7 +34,7 @@ However, LLMs have their limitations. They can sometimes generate **hallucinatio
When interacting with LLMs or agents, information is transmitted through **messages** and **tokens**.
- **Messages** are the pieces of communication sent between the user and the system (or between different components of the AI system). These can be user queries, responses, or commands.
- **Tokens** are the basic units of text that an LLM processes. A token could be a word, part of a word, or even punctuation. In GPT models, a single token can represent a word like "dog" or even part of a word like "re-" in "reliable."
Managing tokens is essential because LLMs have a **token limit**, meaning they can only handle a fixed number of tokens in a single input/output sequence. This limit impacts performance and context retention. Long conversations or documents might require careful handling of token counts to maintain coherence.
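A minimal sketch of that bookkeeping, assuming a crude 4-characters-per-token heuristic (real tokenizers such as BPE split text differently, so this is only an estimate):

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

def truncate_to_budget(messages: list[str], token_limit: int) -> list[str]:
    """Keep the most recent messages that fit within the token limit."""
    kept, used = [], 0
    for msg in reversed(messages):   # walk backwards from the newest message
        cost = estimate_tokens(msg)
        if used + cost > token_limit:
            break                    # budget exhausted: drop older context
        kept.append(msg)
        used += cost
    return list(reversed(kept))      # restore chronological order
```

Dropping the oldest messages first is the simplest policy; real systems often summarize older context instead of discarding it outright.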
@@ -99,7 +99,7 @@ Here's how it works:
1. The LLM retrieves relevant data or documents using a search engine or database query.
2. The LLM then generates a response based on the retrieved information.
RAG solves a major problem with LLMs: the **outdated or incomplete information** they may have. By pulling in real-time data, RAG ensures that the generated content is relevant and grounded in current knowledge.
A classic example of RAG is when you ask an AI to summarize the latest research on a particular topic. Instead of relying on the model’s static knowledge base, the model can retrieve relevant papers or articles and generate an accurate summary.
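The retrieve-then-generate loop described above can be sketched with a toy keyword-overlap retriever (the function names and the stubbed generation step are illustrative only; a real system would query a vector store and call an actual LLM):

```python
def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Step 1: rank documents by word overlap with the query."""
    query_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda doc: len(query_words & set(doc.lower().split())),
                    reverse=True)
    return scored[:k]

def answer_with_rag(query: str, documents: list[str]) -> str:
    """Step 2: generate a response grounded in the retrieved context."""
    context = " ".join(retrieve(query, documents))
    # A real implementation would prompt an LLM with `context`; stubbed here.
    return f"Answer (grounded in): {context}"
```

Swapping the word-overlap scorer for embedding similarity, and the stub for an LLM call, turns this skeleton into the RAG pipeline described above.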