Refactor: Split portfolio into projects and writings sections, and update content structure
- Renamed 'portfolio' collection to 'projects' in content configuration.
- Introduced a new 'writings' collection with corresponding schema.
- Updated README to reflect changes in content structure and navigation.
- Removed the old portfolio page and added new pages for projects and writings.
- Added multiple new project and writing markdown files with relevant content.
- Updated license year to 2025.
- Enhanced AppHeader for new navigation links.
- Improved ProseImg component styling.
@@ -1,12 +1,11 @@
---
slug: arthome
-title: ArtHome
-description: 🏡 Your personalised home page in your browser
+title: 🏡 ArtHome
+description: Your personalised home page in your browser
publishedAt: 2024/09/04
readingTime: 1
cover: arthome/cover.png
tags:
- project
- web
---
@@ -1,12 +1,11 @@
---
slug: artsite
-title: ArtSite
-description: 🌍 My personal website, my portfolio, and my blog. 🚀
+title: 🌍 ArtSite
+description: My personal website, my portfolio, and my blog.
publishedAt: 2024/06/01
readingTime: 1
cover: artsite/cover.png
tags:
- project
- web
---
@@ -1,11 +1,10 @@
---
slug: bikes-glm
-title: Generalized Linear Models for Bikes prediction
-description: 🚲 Predicting the number of bikes rented in a bike-sharing system using Generalized Linear Models.
+title: 🚲 Generalized Linear Models for Bikes prediction
+description: Predicting the number of bikes rented in a bike-sharing system using Generalized Linear Models.
publishedAt: 2025/01/24
readingTime: 1
tags:
- project
- r
- data
- maths
@@ -15,5 +14,5 @@ The project was done as part of the course `Generalised Linear Model` at the Par
You can find the code here: [GLM Bikes Code](https://github.com/ArthurDanjou/Studies/blob/master/M1/General%20Linear%20Models/Projet/GLM%20Code%20-%20DANJOU%20%26%20DUROUSSEAU.rmd)

-<iframe src="/portfolio/bikes-glm/Report.pdf" width="100%" height="1000px">
+<iframe src="/projects/bikes-glm/Report.pdf" width="100%" height="1000px">
</iframe>
@@ -1,11 +1,10 @@
---
slug: monte-carlo-project
-title: Monte Carlo Methods Project
+title: 💻 Monte Carlo Methods Project
description: A project to demonstrate the use of Monte Carlo methods in R.
publishedAt: 2024/11/24
readingTime: 3
tags:
- project
- r
- maths
---
@@ -22,5 +21,5 @@ Methods and algorithms implemented:
You can find the code here: [Monte Carlo Project Code](https://github.com/ArthurDanjou/Studies/blob/0c83e7e381344675e113c43b6f8d32e88a5c00a7/M1/Monte%20Carlo%20Methods/Project%201/003_rapport_DANJOU_DUROUSSEAU.rmd)

-<iframe src="/portfolio/monte-carlo-project/Report.pdf" width="100%" height="1000px">
+<iframe src="/projects/monte-carlo-project/Report.pdf" width="100%" height="1000px">
</iframe>
@@ -1,11 +1,10 @@
---
slug: python-data-ml
-title: Python Data & ML
-description: 🧠 A repository dedicated to learning and practicing Python libraries for machine learning.
+title: 🐍 Python Data & ML
+description: A repository dedicated to learning and practicing Python libraries for machine learning.
publishedAt: 2024/11/01
readingTime: 1
tags:
- project
- data
- ai
- python
@@ -1,11 +1,10 @@
---
slug: schelling-segregation-model
-title: Schelling Segregation Model
-description: 📊 A Python implementation of the Schelling Segregation Model using Statistics and Data Visualization.
+title: 📊 Schelling Segregation Model
+description: A Python implementation of the Schelling Segregation Model using Statistics and Data Visualization.
publishedAt: 2024/05/03
readingTime: 4
tags:
- project
- python
- maths
---
@@ -14,5 +13,5 @@ This is the French version of the report for the Schelling Segregation Model pro
You can find the code here: [Schelling Segregation Model Code](https://github.com/ArthurDanjou/Studies/blob/e1164f89bd11fc59fa79d94aa51fac69b425d68b/L3/Projet%20Num%C3%A9rique/Segregation.ipynb)

-<iframe src="/portfolio/schelling/Projet.pdf" width="100%" height="1000px">
+<iframe src="/projects/schelling/Projet.pdf" width="100%" height="1000px">
</iframe>
@@ -1,11 +1,10 @@
---
slug: studies
-title: Studies projects
-description: 🎓 Studies projects - a collection of projects done during my studies.
+title: 🎓 Studies projects
+description: A collection of projects done during my studies.
publishedAt: 2023/09/01
readingTime: 1
tags:
- project
- data
- python
- r
@@ -5,7 +5,6 @@ description: My new website is using a fantastical stack and I am explaining how
publishedAt: 2024/06/21
readingTime: 5
tags:
- article
- web
---

@@ -15,35 +14,35 @@ While it's still fresh in my mind, I wanted to document how this version of the

![](/writings/website/meta.png)

-## Ideas and Goals
+## 1 - Ideas and Goals

Most of the time, I work on my site for fun and without any profit motive. However, while building this latest version, I managed to keep a few key ideas and goals in mind:

-### Reduce writing friction
+### 1.1 - Reduce writing friction

This new version of my website was built with the idea that I should be able to add, edit, and delete content directly from the front-end. This means that everything needs to be backed by a database or CMS, which quickly adds complexity. But at the end of the day, adding a bookmark should be a matter of pasting a URL and clicking save. Writing a blog post should be a matter of typing some Markdown and clicking publish.

Extra friction in these processes would make me less likely to keep things up to date or share new things.

-### A playground for ideas
+### 1.2 - A playground for ideas

I want my website to be a playground where I can safely experiment with new technologies and packages, including testing frameworks (like the new Nuxt 3 stack), improving CSS styles with Tailwind, and discovering new technologies and frameworks, in a way that allows for easy isolation and deletion. This requirement made Nuxt.js an obvious choice, thanks to its support for hybrid page rendering strategies—static, server-rendered, or client-rendered. More on this below.

-### Fast
+### 1.3 - Fast

The new version of my website is faster than the old one, thanks to the latest version of Nuxt. This improvement enhances the overall user experience and ensures that the site remains responsive and efficient.

-## FrontEnd & BackEnd with Nuxt 3
+## 2 - FrontEnd & BackEnd with Nuxt 3

I wanted this version of my site to reflect my personality, especially because it seemed like a fun project! What would a 'personal application' look like, showcasing everything I've created? I aimed for a clean, monochrome design with plenty of 'Easter eggs' to keep things interesting.

-### Nuxt 3
+### 2.1 - Nuxt 3

Nuxt.js is my front-end framework of choice. I particularly appreciate it for its comprehensive and complementary Vue and Nuxt ecosystem. The filesystem-based router provides an intuitive and powerful abstraction for building the route hierarchy. Nuxt.js also benefits from a large community that has thoroughly tested the framework, addressing edge cases and developing creative solutions for common Vue, data-fetching, and performance issues. Whenever I encounter a problem, I turn to the Nuxt.js discussions on [GitHub](https://github.com/nuxt) or their [Discord server](https://go.nuxt.com/discord). Almost every time, I find that others have already come up with innovative solutions to similar challenges.

Nuxt.js is also fast. It optimizes performance by speeding up local builds, automatically compressing static assets, and ensuring quick deployment times. The regular project updates mean my site continually gets faster over time—at no extra cost!

-### Styling
+### 2.2 - Styling

#### Tailwind CSS
@@ -59,7 +58,7 @@ Nuxt UI is a new tool I've been using since its release to enhance and streamlin

Nuxt UI aims to provide everything you need for the UI when building a Nuxt app, including components, icons, colors, dark mode, and keyboard shortcuts. It's an excellent tool for both beginners and experienced developers.

-### Database & Deployment
+### 2.3 - Database & Deployment

#### NuxtHub & Cloudflare Workers
@@ -77,7 +76,7 @@ Drizzle isn't just a library; it's an exceptional journey 🤩. It empowers you

One word: `If you know SQL — you know Drizzle.`

-### Writing
+### 2.4 - Writing

#### Nuxt Studio
@@ -93,7 +92,7 @@ The article you're currently reading is plain text stored in MySQL, rendered usi

Compromises are inevitable! I've chosen to sacrifice some features for simplicity and speed. I'm content with my decision, as it aligns with my goal of reducing friction in the writing process.

-## How much everything costs
+## 3 - How much everything costs

I'm often asked how much it costs to run my website. Here's a breakdown of the costs:
@@ -103,6 +102,6 @@ I'm often asked how much it costs to run my website. Here's a breakdown of the c

Total: 0€, thanks to the Nuxt free plan and the Cloudflare free plan.

-## Thanks
+## 4 - Thanks

I want to thank the Nuxt team for their hard work and dedication to the project. I also want to thank the community for their support and for providing me with the tools I needed to build this site. I want to add a special thanks to [Estéban](https://x.com/soubiran_) for solving `All` my problems and for inspiring me to rewrite my website.
@@ -5,27 +5,26 @@ description: This article introduces neural networks, explaining their structure
readingTime: 3
publishedAt: 2025/03/30
tags:
- article
- ai
- maths
---

Neural networks are a class of machine learning algorithms inspired by the functioning of biological neurons. They are widely used in artificial intelligence for image recognition, natural language processing, time series forecasting, and many other fields. Thanks to their ability to model complex relationships in data, they have become one of the pillars of **deep learning**.

-## 1. Basic Structure of a Neural Network
+## 1 - Basic Structure of a Neural Network

-### 1.1 Neurons and Biological Inspiration
+### 1.1 - Neurons and Biological Inspiration

Neural networks are inspired by the way the human brain processes information. Each artificial neuron mimics a biological neuron, receiving inputs, applying a transformation, and passing the result to the next layer.

-### 1.2 Layer Organization (Input, Hidden, Output)
+### 1.2 - Layer Organization (Input, Hidden, Output)

A neural network consists of layers:
- **Input layer**: Receives raw data.
- **Hidden layers**: Perform intermediate transformations and extract important features.
- **Output layer**: Produces the final model prediction.

-### 1.3 Weights and Biases
+### 1.3 - Weights and Biases

Each neuron connection has an associated **weight** $ w $, and each neuron has a **bias** $ b $. The transformation applied at each neuron before activation is given by:

@@ -33,7 +32,7 @@ $$
z = W \cdot X + b
$$

-### 1.4 Neural Network Structure Visualization
+### 1.4 - Neural Network Structure Visualization

::prose-img
---
@@ -51,22 +50,22 @@ Starting from the left, we have:
- The output layer (a.k.a. the prediction) of our model in green.
- The arrows that connect the dots show how all the neurons are interconnected and how data travels from the input layer all the way through to the output layer.

-## 2. Information Propagation (Forward Pass)
+## 2 - Information Propagation (Forward Pass)

-### 2.1 Linear Transformation $ z = W \cdot X + b $
+### 2.1 - Linear Transformation $ z = W \cdot X + b $

Each neuron computes a weighted sum of its inputs plus a bias term before applying an activation function.

-### 2.2 Activation Functions (ReLU, Sigmoid, Softmax)
+### 2.2 - Activation Functions (ReLU, Sigmoid, Softmax)

Activation functions introduce **non-linearity**, enabling networks to learn complex patterns (a small sketch follows the list):
- **ReLU**: $ f(z) = \max(0, z) $ (fast training, avoids vanishing gradients)
- **Sigmoid**: $ \sigma(z) = \frac{1}{1 + e^{-z}} $ (useful for binary classification)
- **Softmax**: Converts outputs into probability distributions for multi-class classification.
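
To make the list above concrete, here is a minimal NumPy sketch (illustrative only, not taken from the article's code; the shapes and seed are arbitrary assumptions) of the three activations applied after the linear step $ z = W \cdot X + b $:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    # Shift by the row-wise max for numerical stability
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(42)
X = rng.normal(size=(5, 3))   # 5 samples, 3 features (toy data)
W = rng.normal(size=(3, 4))   # 3 inputs feeding 4 neurons
b = np.zeros(4)

z = X @ W + b                 # linear transformation z = W·X + b
a = relu(z)                   # hidden activations, shape (5, 4)
```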

-## 3. Learning and Backpropagation
+## 3 - Learning and Backpropagation

-### 3.1 Cost Function (MSE, Cross-Entropy)
+### 3.1 - Cost Function (MSE, Cross-Entropy)

To measure error, different loss functions are used:
- **Mean Squared Error (MSE)**:
@@ -80,7 +79,7 @@ To measure error, different loss functions are used:

Where $y$ represents the true values or labels, while $\hat{y}$ represents the predicted values produced by the model. The goal is to minimize this difference during training.
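
As an illustrative sketch (not from the original article; the `eps` clipping constant is an assumption added to avoid `log(0)`), both losses fit in a few lines of NumPy:

```python
import numpy as np

def mse(y_true, y_pred):
    # Mean Squared Error: average squared residual
    return np.mean((y_true - y_pred) ** 2)

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    # Cross-entropy for binary labels in {0, 1}; clip to avoid log(0)
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
```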

-### 3.2 Gradient Descent and Weight Updates
+### 3.2 - Gradient Descent and Weight Updates

Training consists of adjusting weights to minimize loss using **gradient descent**:

@@ -88,13 +87,13 @@ $$
w := w - \alpha \frac{\partial L}{\partial w}, \quad b := b - \alpha \frac{\partial L}{\partial b}
$$
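
The update rule is easiest to see on a toy model. Below is a minimal sketch (an assumed example, not the article's code) that fits a line $ \hat{y} = wx + b $ by repeatedly applying exactly this rule:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 0.5 + rng.normal(scale=0.1, size=100)  # noisy line (toy data)

w, b, alpha = 0.0, 0.0, 0.1  # alpha is the learning rate
for _ in range(200):
    y_hat = w * x + b
    dw = np.mean(2 * (y_hat - y) * x)  # ∂L/∂w for the MSE loss
    db = np.mean(2 * (y_hat - y))      # ∂L/∂b
    w -= alpha * dw                    # w := w - α ∂L/∂w
    b -= alpha * db                    # b := b - α ∂L/∂b
# w and b end up close to the true values 3.0 and 0.5
```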

-### 3.3 Gradient Propagation via the Chain Rule
+### 3.3 - Gradient Propagation via the Chain Rule

Using **backpropagation**, the error is propagated backward through the network using the chain rule, adjusting each weight accordingly.
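
To see the chain rule at work, here is a small illustrative sketch (not the article's code; the shapes, seed, and learning rate are assumptions) of one forward and backward pass through a two-layer network with ReLU and MSE loss:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))                  # 8 samples, 3 features
y = rng.normal(size=(8, 1))

W1, b1 = rng.normal(size=(3, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

# Forward pass
z1 = X @ W1 + b1
a1 = np.maximum(0, z1)                       # ReLU
y_hat = a1 @ W2 + b2                         # linear output

# Backward pass: apply the chain rule layer by layer
dy = 2 * (y_hat - y) / len(X)                # ∂L/∂ŷ for the MSE loss
dW2 = a1.T @ dy
db2 = dy.sum(axis=0, keepdims=True)
da1 = dy @ W2.T                              # propagate the error backward
dz1 = da1 * (z1 > 0)                         # ReLU derivative
dW1 = X.T @ dz1
db1 = dz1.sum(axis=0, keepdims=True)

lr = 0.01                                    # learning rate (assumed)
W1 -= lr * dW1; b1 -= lr * db1
W2 -= lr * dW2; b2 -= lr * db2
```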

-## 4. Optimization and Regularization
+## 4 - Optimization and Regularization

-### 4.1 Optimization Algorithms (SGD, Adam)
+### 4.1 - Optimization Algorithms (SGD, Adam)

- **Stochastic Gradient Descent (SGD)**: Updates weights after each sample.
- **Adam**: A more advanced optimizer that adapts learning rates per parameter (a sketch of one Adam step follows).
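
For intuition, here is a sketch of a single Adam step (illustrative, using the commonly cited default hyperparameters; the quadratic loss is an assumption for the example):

```python
import numpy as np

def adam_step(w, grad, state, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # Adam keeps running averages of the gradient (m) and its square (v)
    state["t"] += 1
    state["m"] = beta1 * state["m"] + (1 - beta1) * grad
    state["v"] = beta2 * state["v"] + (1 - beta2) * grad ** 2
    m_hat = state["m"] / (1 - beta1 ** state["t"])  # bias correction
    v_hat = state["v"] / (1 - beta2 ** state["t"])
    return w - lr * m_hat / (np.sqrt(v_hat) + eps)

state = {"t": 0, "m": 0.0, "v": 0.0}
w = 5.0
for _ in range(100):
    grad = 2 * w                              # gradient of L(w) = w², minimum at 0
    w = adam_step(w, grad, state, lr=0.1)
# w moves steadily toward 0, with the step size adapted per parameter
```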

@@ -113,13 +112,13 @@ caption: Gradient Descent
2. Modify each parameter by an amount proportional to its gradient element and in the opposite direction of its gradient element. For example, if the partial derivative of our cost function with respect to B0 is positive but tiny and the partial derivative with respect to B1 is negative and large, then we want to decrease B0 by a tiny amount and increase B1 by a large amount to lower our cost function.
3. Recompute the gradient using our new tweaked parameter values and repeat the previous steps until we arrive at the minimum.

-### 4.2 Regularization to Avoid Overfitting (Dropout, L1/L2)
+### 4.2 - Regularization to Avoid Overfitting (Dropout, L1/L2)

To prevent overfitting (a short sketch follows the list):
- **Dropout** randomly disables neurons during training.
- **L1/L2 regularization** penalizes large weights to encourage simpler models.
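
Both techniques are short to express in code. An illustrative sketch (assumed shapes and rates, not from the article):

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.normal(size=(8, 16))        # hidden activations (toy values)

# Inverted dropout: zero out ~20% of units and rescale the survivors.
# Applied during training only; at test time activations are left unchanged.
p_keep = 0.8
mask = rng.random(a.shape) < p_keep
a_dropped = a * mask / p_keep

# L2 regularization: penalty proportional to the squared weights
W = rng.normal(size=(16, 4))
l2_penalty = 1e-4 * np.sum(W ** 2)  # added to the loss before backprop
```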

-## 5. Network Architectures
+## 5 - Network Architectures

Multi-Layer Perceptron (MLP)
A standard feedforward neural network with multiple layers.
@@ -133,26 +132,26 @@ Useful for time series and natural language tasks.
Transformers for NLP and Vision
State-of-the-art architecture for language understanding and vision tasks.

-## 6. Training and Evaluation
+## 6 - Training and Evaluation

-### 6.1 Data Splitting (Train/Test Split)
+### 6.1 - Data Splitting (Train/Test Split)

To evaluate performance, data is split into **training** and **test** sets.

-### 6.2 Evaluation Metrics (Accuracy, Precision, Recall, RMSE, R²)
+### 6.2 - Evaluation Metrics (Accuracy, Precision, Recall, RMSE, R²)

Metrics depend on the task (a small example follows the list):
- **Accuracy, Precision, Recall** for classification.
- **RMSE, R²** for regression.
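
As a small example (illustrative toy values, assuming scikit-learn is available):

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             mean_squared_error, r2_score)

# Classification metrics on toy labels
y_true_cls = [0, 1, 1, 0, 1]
y_pred_cls = [0, 1, 0, 0, 1]
print(accuracy_score(y_true_cls, y_pred_cls))   # 0.8
print(precision_score(y_true_cls, y_pred_cls))  # 1.0
print(recall_score(y_true_cls, y_pred_cls))     # ~0.67

# Regression metrics on toy targets
y_true_reg = np.array([2.0, 3.5, 4.0])
y_pred_reg = np.array([2.1, 3.2, 4.3])
print(np.sqrt(mean_squared_error(y_true_reg, y_pred_reg)))  # RMSE
print(r2_score(y_true_reg, y_pred_reg))                     # R²
```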

-### 6.3 Hyperparameter Tuning
+### 6.3 - Hyperparameter Tuning

Choosing the right:
- **Learning rate**
- **Batch size**
- **Number of layers and neurons**

-## Example: A Neural Network with Two Hidden Layers
+## 7 - Example: A Neural Network with Two Hidden Layers

The following example demonstrates a simple **multi-layer perceptron (MLP)** with two hidden layers, trained to perform linear regression.

@@ -188,6 +187,6 @@ plt.legend()
plt.show()
```

-## Conclusion
+## 8 - Conclusion

Neural networks form the foundation of modern artificial intelligence. Their ability to learn from data and generalize to new situations makes them essential for applications like computer vision, automatic translation, and predictive medicine. 🚀
@@ -5,12 +5,11 @@ description: An introduction to machine learning, exploring its main types, key
readingTime: 3
publishedAt: 2024/11/26
tags:
- article
- ai
- maths
---

-## Introduction
+## 1 - Introduction

Machine Learning (ML) is a key discipline in artificial intelligence (AI), enabling systems to learn from data to make predictions or discover patterns. It is the driving force behind many modern innovations, from personalised recommendations to autonomous vehicles.

@@ -21,7 +20,7 @@ In this article, we will cover:
|
||||
3. **The typical ML workflow**, exploring the essential steps for developing a model.
|
||||
4. **Model evaluation through the R² score**, an important metric for regression problems.
|
||||
|
||||
## The Types of Machine Learning
|
||||
## 2 - The Types of Machine Learning
|
||||
|
||||
To start, it is important to understand the three main categories of machine learning:
|
||||
|
||||
@@ -46,7 +45,7 @@ caption: The different types of machine learning models
|
||||
|
||||
With this overview of ML types, let’s now focus on supervised learning, the most widely used approach, and explore how to choose the right model.
|
||||
|
||||
-## Three Considerations for Choosing a Supervised Learning Model
+## 3 - Three Considerations for Choosing a Supervised Learning Model

Selecting the right supervised learning model is critical and depends on several factors:

@@ -64,7 +63,7 @@ Selecting the right supervised learning model is critical and depends on several

Once the model type is defined, the next step is to delve into the full workflow of developing an ML model.

-## The Typical Workflow in Machine Learning
+## 4 - The Typical Workflow in Machine Learning

A machine learning project generally follows these steps:

@@ -88,7 +87,7 @@ caption: Modelization in Progress

Evaluation is a crucial step to verify the performance of a model. For regression problems, the R² score is a key indicator.
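
Compressed into code, such a workflow can look like the following sketch (an illustrative example with scikit-learn; the synthetic dataset and the linear model are assumptions, not a prescription):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# 1. Collect data (synthetic here), 2. split, 3. train, 4. evaluate
X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LinearRegression().fit(X_train, y_train)
print(r2_score(y_test, model.predict(X_test)))  # R² on unseen data
```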

-## Evaluating Models: The R² Score
+## 5 - Evaluating Models: The R² Score

For regression problems, the **R² score** measures the proportion of the target's variance explained by the model:

@@ -109,6 +108,6 @@ caption: R² Score

With these concepts in mind, you are better equipped to understand and apply ML models in your projects.
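
For reference, the definition can be checked in a few lines (an illustrative sketch with toy values, computing R² as one minus the ratio of residual to total variance):

```python
import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([3.0, 5.0, 7.0, 9.0])
y_pred = np.array([2.8, 5.3, 6.6, 9.2])

ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total variance around the mean
r2_manual = 1 - ss_res / ss_tot                  # ≈ 0.98 here

assert np.isclose(r2_manual, r2_score(y_true, y_pred))
```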

-## Conclusion
+## 6 - Conclusion

Machine learning is revolutionising how we solve complex problems using data. Supervised, unsupervised, and reinforcement learning approaches allow us to tackle a wide variety of use cases. In supervised learning, the model choice depends on the problem type, its complexity, and the appropriate algorithmic approach. Finally, a structured workflow and metrics like the R² score ensure the quality of predictions and analyses.