mirror of
https://github.com/ArthurDanjou/artsite.git
synced 2026-02-04 09:32:12 +01:00
Refactor: Split portfolio to projects and writings sections, and update content structure
- Renamed 'portfolio' collection to 'projects' in content configuration.
- Introduced a new 'writings' collection with corresponding schema.
- Updated README to reflect changes in content structure and navigation.
- Removed the old portfolio page and added new pages for projects and writings.
- Added multiple new project and writing markdown files with relevant content.
- Updated license year to 2025.
- Enhanced AppHeader for new navigation links.
- Improved ProseImg component styling.
107
content/writings/how-my-website-works.md
Normal file
@@ -0,0 +1,107 @@
---
slug: how-my-website-works
title: How my website works?
description: My new website runs on a fantastic stack, and I explain how my playground works
publishedAt: 2024/06/21
readingTime: 5
tags:
- web
---

My personal website is an overengineered playground where I tinker, explore new technologies, experiment with tools, break conventional rules, and satisfy my deep curiosity about web software.

While it's still fresh in my mind, I wanted to document how this version of the site works, the tools I used to build it, and the challenges I overcame to bring it to its current state.



## 1 - Ideas and Goals

Most of the time, I work on my site for fun and without any profit motive. However, while building this latest version, I kept a few key ideas and goals in mind:

### 1.1 - Reduce writing friction

This new version of my website was built around the idea that I should be able to add, edit, and delete content directly from the front end. This means everything needs to be backed by a database or CMS, which quickly adds complexity. But at the end of the day, adding a bookmark should be a matter of pasting a URL and clicking save, and writing a blog post should be a matter of typing some Markdown and clicking publish.

Extra friction in these processes would make me less likely to keep things up to date or share new things.

### 1.2 - A playground for ideas

I want my website to be a playground where I can safely experiment with new technologies and packages: trying out frameworks (like the new Nuxt 3 stack), improving CSS styles with Tailwind, and discovering new tools, all in a way that allows for easy isolation and deletion. This requirement made Nuxt.js an obvious choice, thanks to its support for hybrid page rendering strategies: static, server-rendered, or client-rendered. More on this below.

### 1.3 - Fast

The new version of my website is faster than the old one, thanks to the latest version of Nuxt. This improvement enhances the overall user experience and keeps the site responsive and efficient.

## 2 - Frontend & Backend with Nuxt 3

I wanted this version of my site to reflect my personality, especially because it seemed like a fun project! What would a 'personal application' look like, showcasing everything I've created? I aimed for a clean, monochrome design with plenty of 'Easter eggs' to keep things interesting.

### 2.1 - Nuxt 3

Nuxt.js is my front-end framework of choice. I particularly appreciate its comprehensive and complementary Vue and Nuxt ecosystem. The filesystem-based router provides an intuitive and powerful abstraction for building the route hierarchy. Nuxt.js also benefits from a large community that has thoroughly tested the framework, addressing edge cases and developing creative solutions for common Vue, data-fetching, and performance issues. Whenever I encounter a problem, I turn to the Nuxt.js discussions on [GitHub](https://github.com/nuxt) or their [Discord server](https://go.nuxt.com/discord). Almost every time, I find that others have already come up with innovative solutions to similar challenges.

Nuxt.js is also fast. It optimizes performance by speeding up local builds, automatically compressing static assets, and ensuring quick deployment times. Regular project updates mean my site keeps getting faster over time, at no extra cost!

### 2.2 - Styling

#### Tailwind CSS

Tailwind is my favorite CSS authoring tool... ever. It's incredibly effective. I often see debates on Twitter about whether Tailwind is the best or worst thing ever, and I prefer not to engage in that discussion. Here's my take:

Tailwind is a toolkit that makes everything great by default, and fast. The magic lies in its token system and the default values built into the framework. Once I grasped the semantics of Tailwind, I was able to style my tags at the speed of thought.

Tailwind provides everything I need out of the box, but I've gradually added a bit of custom CSS to make things more unique.

#### Nuxt UI

Nuxt UI is a tool I've been using since its release to enhance and streamline my Nuxt projects. It's a module that offers a collection of Vue components and composables built with Tailwind CSS and Headless UI, designed to help you create beautiful and accessible user interfaces.

Nuxt UI aims to provide everything you need for the UI when building a Nuxt app, including components, icons, colors, dark mode, and keyboard shortcuts. It's an excellent tool for both beginners and experienced developers.

### 2.3 - Database & Deployment

#### NuxtHub & Cloudflare Workers



NuxtHub is a deployment and management platform tailored for Nuxt, leveraging the power of Cloudflare. It lets you deploy your application effortlessly, with database, key-value, and blob storage all configured seamlessly within your Cloudflare account.

NuxtHub enables cost-effective hosting of high-performance Nuxt applications across multiple environments. With a single command, it lets you launch your app swiftly without worrying about server setup or complex deployment pipelines.

#### Drizzle

Drizzle is a unique ORM that offers both relational and SQL-like query APIs, combining the best of both worlds for accessing relational data. Lightweight, performant, typesafe, and designed to be serverless-ready, Drizzle is also flexible and gluten-free, delivering a sober and seamless experience.

Drizzle isn't just a library; it's an exceptional journey 🤩. It empowers you to build your project without imposing on your structure or workflow. With Drizzle, you can define and manage database schemas in TypeScript, access your data using SQL-like or relational methods, and use optional tools to enhance your development experience significantly.

In short: `If you know SQL — you know Drizzle.`

### 2.4 - Writing

#### Nuxt Studio



Nuxt Studio introduces a fresh editing experience for your Nuxt Content website, providing limitless customization and a user-friendly interface. You can edit your website effortlessly with a Notion-like editor that fosters seamless collaboration between developers and copywriters. It offers a rich text editor, Markdown support, and a live preview, enabling you to create and edit content with ease.

#### Markdown

I've abandoned rich text editors on the web. They're overly complex, each with its own intricate abstractions for blocks and elements. To avoid another major rewrite soon, I've sought the simplest, most straightforward solution for publishing content on my site: plain text.

The article you're currently reading is plain text stored in MySQL, rendered using vue-markdown. You can view my custom element renderings here. I enhance my Markdown capabilities with plugins like remark-gfm, which adds support for tables, strikethrough, footnotes, and other features.

Compromises are inevitable! I've chosen to sacrifice some features for simplicity and speed. I'm content with my decision, as it aligns with my goal of reducing friction in the writing process.

## 3 - How much everything costs

I'm often asked how much it costs to run my website. Here's a breakdown:

- NuxtHub: 0€
- Cloudflare Workers: 0€
- Nuxt Studio: 0€

Total: 0€, thanks to the Nuxt and Cloudflare free plans.

## 4 - Thanks

I want to thank the Nuxt team for their hard work and dedication to the project. I also want to thank the community for their support and for providing the tools I needed to build this site. A special thanks to [Estéban](https://x.com/soubiran_) for solving `All` my problems and for inspiring me to rewrite my website.
||||
192
content/writings/neural-network.md
Normal file
@@ -0,0 +1,192 @@
---
slug: neural-network
title: What is a Neural Network?
description: This article introduces neural networks, explaining their structure, training, and key concepts like activation functions and backpropagation. It includes an example of a neural network with two hidden layers using TensorFlow.
readingTime: 3
publishedAt: 2025/03/30
tags:
- ai
- maths
---


Neural networks are a class of machine learning algorithms inspired by the functioning of biological neurons. They are widely used in artificial intelligence for image recognition, natural language processing, time series forecasting, and many other fields. Thanks to their ability to model complex relationships in data, they have become one of the pillars of **deep learning**.

## 1 - Basic Structure of a Neural Network

### 1.1 - Neurons and Biological Inspiration

Neural networks are inspired by the way the human brain processes information. Each artificial neuron mimics a biological neuron, receiving inputs, applying a transformation, and passing the result to the next layer.

### 1.2 - Layer Organization (Input, Hidden, Output)

A neural network consists of layers:

- **Input layer**: Receives raw data.
- **Hidden layers**: Perform intermediate transformations and extract important features.
- **Output layer**: Produces the final model prediction.

### 1.3 - Weights and Biases

Each neuron connection has an associated **weight** $ w $, and each neuron has a **bias** $ b $. The transformation applied at each neuron before activation is given by:

$$
z = W \cdot X + b
$$
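
In NumPy terms, this transformation is a single matrix-vector product. Here is a minimal sketch; the layer sizes and values are illustrative:

```python
import numpy as np

# A single layer's pre-activation output z = W·X + b.
# Shapes are illustrative: 3 inputs feeding 4 neurons.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))   # one row of weights per neuron
X = rng.normal(size=(3,))     # input vector
b = np.zeros(4)               # one bias per neuron

z = W @ X + b
print(z.shape)  # (4,)
```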

### 1.4 - Neural Network Structure Visualization

::prose-img
---
src: /portfolio/neural-network/neural-network-viz.png
label: Neural Network Structure
caption: Neural Network Structure
---
::

Starting from the left, we have:

- The input layer of our model in orange.
- Our first hidden layer of neurons in blue.
- Our second hidden layer of neurons in magenta.
- The output layer (a.k.a. the prediction) of our model in green.
- The arrows connecting the dots show how all the neurons are interconnected and how data travels from the input layer all the way through to the output layer.

## 2 - Information Propagation (Forward Pass)

### 2.1 - Linear Transformation $ z = W \cdot X + b $

Each neuron computes a weighted sum of its inputs plus a bias term before applying an activation function.

### 2.2 - Activation Functions (ReLU, Sigmoid, Softmax)

Activation functions introduce **non-linearity**, enabling networks to learn complex patterns:

- **ReLU**: $ f(z) = \max(0, z) $ (fast training, avoids vanishing gradients)
- **Sigmoid**: $ \sigma(z) = \frac{1}{1 + e^{-z}} $ (useful for binary classification)
- **Softmax**: Converts outputs into probability distributions for multi-class classification.
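
The three activations above can be written out directly with NumPy (a minimal sketch; the input values are illustrative):

```python
import numpy as np

def relu(z):
    # max(0, z) element-wise
    return np.maximum(0, z)

def sigmoid(z):
    # 1 / (1 + e^(-z))
    return 1 / (1 + np.exp(-z))

def softmax(z):
    # subtract the max for numerical stability before exponentiating
    e = np.exp(z - np.max(z))
    return e / e.sum()

z = np.array([-1.0, 0.0, 2.0])
print(relu(z))           # [0. 0. 2.]
print(sigmoid(0.0))      # 0.5
print(softmax(z).sum())  # 1.0: a valid probability distribution
```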

## 3 - Learning and Backpropagation

### 3.1 - Cost Function (MSE, Cross-Entropy)

To measure error, different loss functions are used:

- **Mean Squared Error (MSE)**:
$$
L(y, \hat{y}) = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2
$$
- **Cross-Entropy** for classification:
$$
L(y, \hat{y}) = - \sum_{i=1}^{n} y_i \log(\hat{y}_i)
$$

Here, $y$ represents the true values or labels, while $\hat{y}$ represents the predicted values produced by the model. The goal is to minimize this difference during training.
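
Both losses are direct transcriptions of the formulas above (a sketch with illustrative values):

```python
import numpy as np

def mse(y, y_hat):
    # mean of squared differences
    return np.mean((y - y_hat) ** 2)

def cross_entropy(y, y_hat):
    # y is a one-hot label vector, y_hat a predicted probability distribution
    return -np.sum(y * np.log(y_hat))

y = np.array([1.0, 2.0, 3.0])
print(mse(y, y))  # 0.0 for a perfect prediction

y_true = np.array([0, 1, 0])
y_pred = np.array([0.1, 0.8, 0.1])
print(cross_entropy(y_true, y_pred))  # -log(0.8) ≈ 0.223
```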

### 3.2 - Gradient Descent and Weight Updates

Training consists of adjusting weights to minimize loss using **gradient descent**:

$$
w := w - \alpha \frac{\partial L}{\partial w}, \quad b := b - \alpha \frac{\partial L}{\partial b}
$$

### 3.3 - Gradient Propagation via the Chain Rule

Using **backpropagation**, the error is propagated backward through the network using the chain rule, adjusting each weight accordingly.

## 4 - Optimization and Regularization

### 4.1 - Optimization Algorithms (SGD, Adam)

- **Stochastic Gradient Descent (SGD)**: Updates weights after each sample.
- **Adam**: A more advanced optimizer that adapts learning rates per parameter.

The gradient of a function is the vector whose elements are its partial derivatives with respect to each parameter. Each element of the gradient therefore tells us how the cost function would change if we applied a small change to that particular parameter, so we know what to tweak and by how much. To summarize, we can march towards the minimum by following these steps:

::prose-img
---
src: /portfolio/neural-network/gradient-descent.png
label: Gradient Descent
caption: Gradient Descent
---
::

1. Compute the gradient at our "current location" (calculate the gradient using our current parameter values).
2. Modify each parameter by an amount proportional to its gradient element, in the opposite direction of its gradient element. For example, if the partial derivative of our cost function with respect to B0 is positive but tiny and the partial derivative with respect to B1 is negative and large, then we want to decrease B0 by a tiny amount and increase B1 by a large amount to lower our cost function.
3. Recompute the gradient using our new tweaked parameter values and repeat the previous steps until we arrive at the minimum.
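
The three steps above can be sketched for the simplest case, a two-parameter linear model with intercept B0 and slope B1 (the data, learning rate, and iteration count are illustrative):

```python
import numpy as np

# Gradient descent for MSE on y_hat = B0 + B1 * x, following the three steps above.
rng = np.random.default_rng(42)
x = np.linspace(-1, 1, 100)
y = 2 * x + 1 + 0.1 * rng.normal(size=100)  # true B0 = 1, B1 = 2, plus noise

B0, B1, alpha = 0.0, 0.0, 0.1
for _ in range(500):
    error = (B0 + B1 * x) - y
    # Step 1: gradient of the MSE at the current parameter values
    grad_B0 = 2 * np.mean(error)
    grad_B1 = 2 * np.mean(error * x)
    # Step 2: move each parameter against its own gradient element
    B0 -= alpha * grad_B0
    B1 -= alpha * grad_B1
    # Step 3: the loop recomputes the gradient and repeats

print(B0, B1)  # close to 1 and 2
```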

### 4.2 - Regularization to Avoid Overfitting (Dropout, L1/L2)

To prevent overfitting:

- **Dropout** randomly disables neurons during training.
- **L1/L2 regularization** penalizes large weights to encourage simpler models.
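
Minimal sketches of both techniques (the weights, activations, and penalty strength are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=10)  # some layer's weights

# L2 regularization: add lam * sum(w^2) to the loss so large weights cost more
lam = 0.01
data_loss = 0.5  # placeholder for the data term of the loss
loss = data_loss + lam * np.sum(w ** 2)

# Inverted dropout: zero out each activation with probability p during training,
# scaling the survivors so the expected activation is unchanged
p = 0.5
a = rng.normal(size=10)     # some layer's activations
mask = rng.random(10) >= p  # keep roughly half the neurons
a_dropped = a * mask / (1 - p)

print(loss > data_loss)  # True: the penalty only ever adds to the loss
```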

## 5 - Network Architectures

### 5.1 - Multi-Layer Perceptron (MLP)

A standard feedforward neural network with multiple layers.

### 5.2 - Convolutional Neural Networks (CNN) for Images

Specialized for image processing using convolutional filters.

### 5.3 - Recurrent Neural Networks (RNN, LSTM, GRU) for Sequences

Useful for time series and natural language tasks.

### 5.4 - Transformers for NLP and Vision

State-of-the-art architecture for language understanding and vision tasks.

## 6 - Training and Evaluation

### 6.1 - Data Splitting (Train/Test Split)

To evaluate performance, data is split into **training** and **test** sets.

### 6.2 - Evaluation Metrics (Accuracy, Precision, Recall, RMSE, R²)

Metrics depend on the task:

- **Accuracy, Precision, Recall** for classification.
- **RMSE, R²** for regression.
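
These metrics are easy to compute by hand (a sketch on tiny illustrative arrays):

```python
import numpy as np

# Classification metrics: accuracy, precision, recall
y_true = np.array([1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1])
tp = np.sum((y_pred == 1) & (y_true == 1))  # true positives
fp = np.sum((y_pred == 1) & (y_true == 0))  # false positives
fn = np.sum((y_pred == 0) & (y_true == 1))  # false negatives
accuracy = np.mean(y_true == y_pred)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(accuracy, precision, recall)  # 0.6, ~0.667, ~0.667

# Regression metric: RMSE (root of the mean squared error)
y = np.array([1.0, 2.0, 3.0])
y_hat = np.array([1.0, 2.0, 5.0])
rmse = np.sqrt(np.mean((y - y_hat) ** 2))
print(rmse)  # sqrt(4/3) ≈ 1.155
```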

### 6.3 - Hyperparameter Tuning

Choosing the right:

- **Learning rate**
- **Batch size**
- **Number of layers and neurons**

## 7 - Example: A Neural Network with Two Hidden Layers

The following example demonstrates a simple **multi-layer perceptron (MLP)** with two hidden layers, trained to perform linear regression.

```python
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

# Generating data
X = np.linspace(-1, 1, 100).reshape(-1, 1)
y = 2 * X + 1 + 0.1 * np.random.randn(100, 1)  # y = 2x + 1 with noise

# Defining the model
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, input_shape=(1,), activation='relu'),  # First hidden layer
    tf.keras.layers.Dense(5, activation='relu'),  # Second hidden layer
    tf.keras.layers.Dense(1, activation='linear')  # Output layer
])

# Compiling the model
model.compile(optimizer='adam', loss='mse')

# Training the model
model.fit(X, y, epochs=200, verbose=0)

# Predictions
predictions = model.predict(X)

# Visualizing results
plt.scatter(X, y, label="Actual Data")
plt.plot(X, predictions, color='red', label="Model Predictions")
plt.legend()
plt.show()
```

## 8 - Conclusion

Neural networks form the foundation of modern artificial intelligence. Their ability to learn from data and generalize to new situations makes them essential for applications like computer vision, automatic translation, and predictive medicine. 🚀
113
content/writings/what-is-machine-learning.md
Normal file
@@ -0,0 +1,113 @@
---
slug: what-is-machine-learning
title: What is Machine Learning?
description: An introduction to machine learning, exploring its main types, key model selection criteria, and the workflow from training to evaluation, with a focus on practical insights.
readingTime: 3
publishedAt: 2024/11/26
tags:
- ai
- maths
---


## 1 - Introduction

Machine Learning (ML) is a key discipline in artificial intelligence (AI), enabling systems to learn from data to make predictions or discover patterns. It is the driving force behind many modern innovations, from personalised recommendations to autonomous vehicles.

In this article, we will cover:

1. **The types of Machine Learning**, to understand the different approaches.
2. **Three considerations for choosing a supervised learning model**, one of the most common ML paradigms.
3. **The typical ML workflow**, exploring the essential steps for developing a model.
4. **Model evaluation through the R² score**, an important metric for regression problems.

## 2 - The Types of Machine Learning

To start, it is important to understand the three main categories of machine learning:

1. **Supervised Learning**: This type of learning relies on labeled data, where the model learns to map inputs $X$ to outputs $y$. Common tasks include:
   - **Classification**: Assigning data to categories (e.g., spam detection).
   - **Regression**: Predicting continuous values (e.g., house price estimation).

2. **Unsupervised Learning**: In this case, no labels are provided, and the goal is to find structures or patterns. Common tasks include:
   - **Clustering**: Grouping similar data points (e.g., customer segmentation).
   - **Dimensionality Reduction**: Simplifying data while retaining key information (e.g., PCA).
   - **Anomaly Detection**: Identifying unusual data points (e.g., fraud detection).

3. **Reinforcement Learning**: This learning type involves an agent interacting with an environment. The agent learns by trial and error to maximize cumulative rewards, as seen in robotics and gaming.

::prose-img
---
src: /portfolio/ML/types.png
label: ML Model Types
caption: The different types of machine learning models
---
::

With this overview of ML types, let's now focus on supervised learning, the most widely used approach, and explore how to choose the right model.

## 3 - Three Considerations for Choosing a Supervised Learning Model

Selecting the right supervised learning model is critical and depends on several factors:

1. **Problem Type**
   - Is it a regression or classification problem?
   - **Key Point**: Determine whether the target variable is continuous or discrete.

2. **Problem Complexity**
   - Is the relationship between input features and the target variable linear or nonlinear?
   - **Key Point**: Understand whether the data allows for easy predictions or requires more complex models.

3. **Algorithmic Approach**
   - Should you choose a feature-based or similarity-based model?
   - **Key Point**: The choice of model (e.g., linear regression vs k-NN) depends on the dataset's size and complexity.

Once the model type is defined, the next step is to delve into the full workflow of developing an ML model.

## 4 - The Typical Workflow in Machine Learning

A machine learning project generally follows these steps:

1. **Data Preparation**
   - Splitting data into training and testing sets.
   - Preprocessing: scaling, handling missing values, etc.
2. **Model Training**
   - Fitting the model on training data: `model.fit(X, y)`.
   - Optimising parameters and hyperparameters.
3. **Prediction and Evaluation**
   - Making predictions on unseen data: `model.predict(X)`.
   - Comparing predictions ($\hat{y}$) with actual values ($y$).
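
The steps above can be sketched end to end with scikit-learn (the dataset here is synthetic and illustrative):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# 1. Data preparation: synthetic data, split into training and test sets
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = 2 * X[:, 0] + 1 + 0.1 * rng.normal(size=200)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# 2. Model training
model = LinearRegression()
model.fit(X_train, y_train)

# 3. Prediction and evaluation on unseen data
y_hat = model.predict(X_test)
print(model.score(X_test, y_test))  # R² score, close to 1 for this easy data
```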

::prose-img
---
src: /portfolio/ML/model.png
label: Modelization in Progress
caption: Modelization in Progress
---
::

Evaluation is a crucial step to verify the performance of a model. For regression problems, the R² score is a key indicator.

## 5 - Evaluating Models: The R² Score

For regression problems, the **R² score** measures the proportion of the target's variance explained by the model:

$$R^2 = 1 - \frac{\text{SS}_{\text{residual}}}{\text{SS}_{\text{total}}}$$

where:

- $\text{SS}_{\text{residual}}$: Sum of squared residuals between actual and predicted values.
- $\text{SS}_{\text{total}}$: Total sum of squares relative to the target's mean.

An $R^2$ close to 1 indicates a good fit.
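
Computed directly from the formula (a sketch on illustrative values):

```python
import numpy as np

def r2_score(y, y_hat):
    # 1 - SS_residual / SS_total, as in the formula above
    ss_residual = np.sum((y - y_hat) ** 2)
    ss_total = np.sum((y - np.mean(y)) ** 2)
    return 1 - ss_residual / ss_total

y = np.array([1.0, 2.0, 3.0, 4.0])
y_hat = np.array([1.1, 1.9, 3.2, 3.8])
print(r2_score(y, y_hat))  # 0.98, a good fit
print(r2_score(y, y))      # 1.0 for a perfect prediction
```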

::prose-img
---
src: /portfolio/ML/r2.png
label: R² Score
caption: R² Score
---
::

With these concepts in mind, you are better equipped to understand and apply ML models in your projects.

## 6 - Conclusion

Machine learning is revolutionising how we solve complex problems using data. Supervised, unsupervised, and reinforcement learning approaches allow us to tackle a wide variety of use cases. In supervised learning, the model choice depends on the problem type, its complexity, and the appropriate algorithmic approach. Finally, a structured workflow and metrics like the R² score ensure the quality of predictions and analyses.