fix: add section titles with the BackgroundTitle component in several markdown files

This commit is contained in:
2026-02-17 18:25:05 +01:00
parent 68a3b0468b
commit 5e743cb13e
18 changed files with 127 additions and 65 deletions

View File

@@ -20,11 +20,13 @@ icon: i-ph-flask-duotone
[**ArtLab**](https://go.arthurdanjou.fr/status) is my personal homelab: a controlled environment for experimenting with DevOps, distributed systems, and private cloud architecture.
## 🏗️ Architectural Philosophy
::BackgroundTitle{title="Architectural Philosophy"}
::
The infrastructure follows a **Zero Trust** model. Access is restricted to a private mesh VPN using **Tailscale (WireGuard)**, removing the need for open ports. For select public endpoints, **Cloudflare Tunnels** provide a hardened entry point, keeping my public IP hidden while preserving end-to-end encryption from the edge to the origin.
## 🛠️ Service Stack
::BackgroundTitle{title="Service Stack"}
::
Services are grouped by functional domain to keep orchestration clean and scalable:
@@ -51,7 +53,8 @@ Services are grouped by functional domain to keep orchestration clean and scalab
* **MQTT Broker**: Low-latency message bus for device-to-service communication.
* **Zigbee2MQTT**: Bridge for local Zigbee device control without cloud dependencies.
## 🖥️ Hardware Specifications
::BackgroundTitle{title="Hardware Specifications"}
::
| Component | Hardware | Role |
| :--- | :--- | :--- |

View File

@@ -20,7 +20,8 @@ icon: i-ph-globe-hemisphere-west-duotone
More than a static site, it is a modern **Portfolio** designed to be fast, accessible, and type-safe. It also acts as a live production environment where I test the latest frontend technologies and Edge computing paradigms.
## ⚡ The Nuxt Stack Architecture
::BackgroundTitle{title="The Nuxt Stack Architecture"}
::
This project is built entirely on the **Nuxt ecosystem**, leveraging module synergy for strong developer experience and performance.

View File

@@ -23,7 +23,8 @@ The projects are organized into three main sections:
- **M1** First year of the Master's degree in Mathematics
- **M2** Second year of the Master's degree in Mathematics
## 📁 File Structure
::BackgroundTitle{title="File Structure"}
::
- `L3`
- `Analyse Matricielle`
@@ -52,7 +53,8 @@ The projects are organized into three main sections:
- `VBA`
- `SQL`
## 🛠️ Technologies & Tools
::BackgroundTitle{title="Technologies & Tools"}
::
- **[Python](https://www.python.org)**: A high-level, interpreted programming language, widely used for data science, machine learning, and scientific computing.
- **[R](https://www.r-project.org)**: A statistical computing environment, perfect for data analysis and visualization.

View File

@@ -17,11 +17,13 @@ tags:
icon: i-ph-wind-duotone
---
## Overview
::BackgroundTitle{title="Overview"}
::
This project is a detailed study of **wind risk assessment and modeling** in the context of natural disasters, using the **December 1999 Martin Storm** as a case study. The analysis combines statistical methods, meteorological data, and spatial analysis techniques to understand and quantify the impacts of extreme wind events.
## 🎯 Objectives
::BackgroundTitle{title="Objectives"}
::
The primary objectives of this research were:
@@ -31,7 +33,8 @@ The primary objectives of this research were:
4. **Quantify economic and environmental impacts** of the storm
5. **Develop predictive models** for future risk assessment and disaster preparedness
## 📊 Methodology
::BackgroundTitle{title="Methodology"}
::
### Data Sources
- Historical meteorological records from the 1999 Martin Storm
@@ -52,7 +55,8 @@ The primary objectives of this research were:
- Peak-over-threshold (POT) methods
- Spatial correlation analysis
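As a rough illustration of the peak-over-threshold idea listed above, here is a minimal sketch in Python. The wind speeds and the 100 km/h threshold are invented for the example; the actual study uses the 1999 Martin Storm records.

```python
# Minimal peak-over-threshold (POT) sketch on illustrative wind speeds (km/h).
# Data and threshold are made up; the real analysis uses 1999 storm records.
speeds = [62, 71, 88, 95, 103, 110, 97, 84, 121, 76, 108, 99]
threshold = 100  # km/h, chosen for illustration

exceedances = [s - threshold for s in speeds if s > threshold]
rate = len(exceedances) / len(speeds)               # empirical exceedance probability
mean_excess = sum(exceedances) / len(exceedances)   # mean excess over the threshold

print(rate, mean_excess)
```

The mean excess is the quantity a Generalized Pareto fit would model in a full POT analysis.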
## 🌍 Key Findings
::BackgroundTitle{title="Key Findings"}
::
The analysis revealed:
- Wind speeds exceeding 100 km/h across multiple regions
@@ -61,7 +65,8 @@ The analysis revealed:
- Seasonal and geographical risk variations
- Return period estimations for comparable extreme events
## 💡 Applications
::BackgroundTitle{title="Applications"}
::
The methodologies developed in this project have applications in:
- **Disaster risk reduction and preparedness** planning
@@ -70,7 +75,8 @@ The methodologies developed in this project have applications in:
- **Climate adaptation** strategies
- **Early warning systems** for extreme weather events
## 📄 Detailed Report
::BackgroundTitle{title="Detailed Report"}
::
<iframe src="/projects/climate-issues.pdf" width="100%" height="1000px">
</iframe>

View File

@@ -21,7 +21,8 @@ The project is complete, but the documentation is still being expanded with more
This project involves building an interactive data visualization application using R and R Shiny. The goal is to deliver dynamic, explorable visualizations that let users interact with the data in meaningful ways.
## 🛠️ Technologies & Tools
::BackgroundTitle{title="Technologies & Tools"}
::
- **[R](https://www.r-project.org)**: A statistical computing environment, perfect for data analysis and visualization.
- **[R Shiny](https://shiny.rstudio.com)**: A web application framework for R that enables the creation of interactive web applications directly from R.
@@ -43,13 +44,15 @@ This project involves building an interactive data visualization application usi
- **[RColorBrewer](https://cran.r-project.org/web/packages/RColorBrewer/)**: An R package providing color palettes for maps and other graphics.
- **[DT](https://rstudio.github.io/DT/)**: An R package for creating interactive data tables.
## 📚 Resources
::BackgroundTitle{title="Resources"}
::
You can find the code here: [Data Visualisation Code](https://go.arthurdanjou.fr/datavis-code)
And the online application here: [Data Visualisation App](https://go.arthurdanjou.fr/datavis-app)
## 📄 Detailed Report
::BackgroundTitle{title="Detailed Report"}
::
<iframe src="/projects/datavis.pdf" width="100%" height="1000px">
</iframe>

View File

@@ -21,7 +21,8 @@ The paper is available at: [https://arxiv.org/abs/2303.01500](https://arxiv.org/
This repository contains a robust, modular **TensorFlow/Keras** implementation of **Early Dropout** and **Late Dropout** strategies. The goal is to verify the hypothesis that dropout, traditionally used to reduce overfitting, can also combat underfitting when applied only during the initial training phase.
## 🎯 Scientific Objectives
::BackgroundTitle{title="Scientific Objectives"}
::
The study aims to validate the operating regimes of Dropout described in the paper:
@@ -30,7 +31,8 @@ The study aims to validate the operating regimes of Dropout described in the pap
3. **Standard Dropout**: Constant rate throughout training (baseline).
4. **No Dropout**: Control experiment without dropout.
## 🛠️ Technical Architecture
::BackgroundTitle{title="Technical Architecture"}
::
Unlike naive Keras callback implementations, this project uses a **dynamic approach via the TensorFlow graph** to ensure the dropout rate updates on the GPU without model recompilation.
@@ -40,7 +42,8 @@ Unlike naive Keras callback implementations, this project uses a **dynamic appro
* **`DropoutScheduler`**: A Keras `Callback` that drives the rate variable based on the current epoch and the chosen strategy (`early`, `late`, `standard`).
* **`ExperimentPipeline`**: An orchestrator class that handles data loading (MNIST, CIFAR-10, Fashion MNIST), model creation (Dense or CNN), and execution of comparative benchmarks.
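The scheduling logic that `DropoutScheduler` drives can be sketched in plain Python. The function name, the default rate, and the `switch_epoch` default below are illustrative, not taken from the repository:

```python
def dropout_rate(epoch: int, strategy: str, rate: float = 0.5, switch_epoch: int = 10) -> float:
    """Return the dropout rate to apply at a given epoch (illustrative sketch).

    early:    dropout only before switch_epoch (combats underfitting)
    late:     dropout only after switch_epoch (stabilizes late overfitting)
    standard: constant rate throughout training (baseline)
    anything else: the no-dropout control
    """
    if strategy == "early":
        return rate if epoch < switch_epoch else 0.0
    if strategy == "late":
        return 0.0 if epoch < switch_epoch else rate
    if strategy == "standard":
        return rate
    return 0.0  # "no dropout" control experiment

print(dropout_rate(3, "early"), dropout_rate(15, "early"))
```

In the actual implementation this value is assigned to a TensorFlow variable each epoch, so the GPU graph picks it up without recompilation.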
## File Structure
::BackgroundTitle{title="File Structure"}
::
```
.
@@ -57,7 +60,8 @@ Unlike naive Keras callback implementations, this project uses a **dynamic appro
└── uv.lock # Dependency lock file
```
## 🚀 Installation
::BackgroundTitle{title="Installation"}
::
```bash
# Clone the repository
@@ -65,12 +69,14 @@ git clone https://github.com/arthurdanjou/dropoutreducesunderfitting.git
cd dropoutreducesunderfitting
```
## Install dependencies
::BackgroundTitle{title="Install dependencies"}
::
```bash
pip install tensorflow numpy matplotlib seaborn scikit-learn
```
## 📊 Usage
::BackgroundTitle{title="Usage"}
::
The main notebook `pipeline.ipynb` contains all necessary code. Here is how to run a typical experiment via the pipeline API.
@@ -133,19 +139,22 @@ exp.run_dataset_size_comparison(
)
```
## 📈 Expected Results
::BackgroundTitle{title="Expected Results"}
::
According to the paper, you should observe:
- Early Dropout: Higher initial loss, followed by a sharp drop after the `switch_epoch`, often reaching a lower minimum than Standard Dropout (reduction of underfitting).
- Late Dropout: Rapid rise in accuracy at the start (potential overfitting), then stabilized by the activation of dropout.
## 📄 Detailed Report
::BackgroundTitle{title="Detailed Report"}
::
<iframe src="/projects/dropout-reduces-underfitting.pdf" width="100%" height="1000px">
</iframe>
## 📝 Authors
::BackgroundTitle{title="Authors"}
::
- [Arthur Danjou](https://github.com/ArthurDanjou)
- [Alexis Mathieu](https://github.com/Alex6535)

View File

@@ -17,14 +17,16 @@ icon: i-ph-bicycle-duotone
This project was completed as part of the **Generalized Linear Models** course at Paris-Dauphine PSL University. The objective was to develop and compare statistical models that predict bicycle rentals in a bike-sharing system using environmental and temporal features.
## 📊 Project Objectives
::BackgroundTitle{title="Project Objectives"}
::
- Determine the best predictive model for bicycle rental counts
- Analyze the impact of key features (temperature, humidity, wind speed, seasonality, etc.)
- Apply and evaluate different generalized linear modeling techniques
- Validate model assumptions and performance metrics
## 🔍 Methodology
::BackgroundTitle{title="Methodology"}
::
The study uses a rigorous statistical workflow, including:
@@ -34,7 +36,8 @@ The study uses a rigorous statistical workflow, including:
- **Model Diagnostics** - Validating assumptions and checking residuals
- **Cross-validation** - Ensuring robust performance estimates
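To make the GLM step concrete, here is what a Poisson prediction with a log link looks like in a tiny Python sketch. The coefficients and feature names are invented; the project itself fits them to the bike-sharing data:

```python
import math

# Hypothetical fitted Poisson GLM for rental counts (log link):
# log(mu) = b0 + b1*temp_norm + b2*humidity_norm  -- coefficients are made up.
b0, b1, b2 = 4.0, 1.2, -0.8

def expected_rentals(temp_norm: float, humidity_norm: float) -> float:
    """Expected count under the log link: mu = exp(X @ beta)."""
    return math.exp(b0 + b1 * temp_norm + b2 * humidity_norm)

print(expected_rentals(0.6, 0.4))
```

The exponential link guarantees non-negative predicted counts, which is why Poisson-family GLMs suit rental-count data better than plain linear regression.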
## 📁 Key Findings
::BackgroundTitle{title="Key Findings"}
::
The analysis identified critical factors influencing bike-sharing demand:
- Seasonal patterns and weather conditions
@@ -42,11 +45,13 @@ The analysis identified critical factors influencing bike-sharing demand:
- Holiday and working day distinctions
- Time-based trends and cyclical patterns
## 📚 Resources
::BackgroundTitle{title="Resources"}
::
You can find the code here: [GLM Bikes Code](https://go.arthurdanjou.fr/glm-bikes-code)
## 📄 Detailed Report
::BackgroundTitle{title="Detailed Report"}
::
<iframe src="/projects/bikes-glm.pdf" width="100%" height="1000px">
</iframe>

View File

@@ -19,7 +19,8 @@ This project targets high-precision calibration of the **Implied Volatility Surf
The core objective is to stress-test classic statistical models against modern predictive algorithms. **Generalized Linear Models (GLMs)** provide a transparent baseline, while more complex "black-box" architectures are evaluated on whether their accuracy gains justify reduced interpretability in a risk management context.
## 📊 Dataset & Scale
::BackgroundTitle{title="Dataset & Scale"}
::
The modeling is performed on a high-dimensional dataset with over **1.2 million observations**.
@@ -27,7 +28,8 @@ The modeling is performed on a high-dimensional dataset with over **1.2 million
- **Features**: Option strike price ($K$), underlying asset price ($S$), and time to maturity ($\tau$).
- **Volume**: A training set of $1,251,307$ rows and a test set of identical size.
## 🛠️ Modeling Methodology
::BackgroundTitle{title="Modeling Methodology"}
::
The project follows a rigorous statistical pipeline to compare two modeling philosophies:
@@ -42,7 +44,8 @@ Key financial indicators are derived from the raw data:
- **Moneyness**: Calculated as the ratio $K/S$.
- **Temporal Dynamics**: Transformations of time to maturity to linearize the term structure.
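These derivations are simple enough to sketch directly. The function name and the square-root transform below are illustrative choices, not necessarily the exact transformations used in the project:

```python
import math

def derive_features(K: float, S: float, tau: float) -> dict:
    """Derive moneyness and a common maturity transform (illustrative sketch)."""
    return {
        "moneyness": K / S,          # ratio of strike to underlying price
        "sqrt_tau": math.sqrt(tau),  # square-root-of-maturity transform
    }

feats = derive_features(K=105.0, S=100.0, tau=0.25)
print(feats)
```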
## 📈 Evaluation & Reproducibility
::BackgroundTitle{title="Evaluation & Reproducibility"}
::
Performance is measured strictly via RMSE on the original scale of the target variable. To ensure reproducibility and precise comparisons across model iterations, a fixed random seed is maintained throughout the workflow.
@@ -58,7 +61,8 @@ rmse_eval <- function(actual, predicted) {
```
## 🔍 Critical Analysis
::BackgroundTitle{title="Critical Analysis"}
::
Beyond pure prediction, the project addresses:

View File

@@ -16,13 +16,15 @@ tags:
icon: i-ph-shield-check-duotone
---
## The Setting: Fort de Mont-Valérien
::BackgroundTitle{title="The Setting: Fort de Mont-Valérien"}
::
This was not a typical university hackathon. Organized by the **Commissariat au Numérique de Défense (CND)**, the event took place over three intense days within the walls of the **Fort de Mont-Valérien**, a highly secured military fortress.
Working in this environment underscored the real-world stakes of the mission. Our **team of six**, representing **Université Paris-Dauphine**, competed against several elite engineering schools to solve critical defense-related data challenges.
## The Mission: Classifying the "Invisible"
::BackgroundTitle{title="The Mission: Classifying the Invisible"}
::
The core task involved processing poorly labeled and noisy firewall logs. In a defense context, a "missing" log or a mislabeled entry can be the difference between a minor system bug and a coordinated intrusion.
@@ -38,13 +40,15 @@ In military cybersecurity, the cost of a **False Negative** (an undetected attac
> **Key Achievement:** Our model significantly reduced the rate of undetected threats compared to the baseline configurations provided at the start of the challenge.
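One standard lever for this asymmetry is lowering the decision threshold on the attack probability, trading more false alarms for fewer missed attacks. A minimal sketch of that rule; the threshold value and scores below are illustrative, not the team's actual settings:

```python
# Cost-sensitive decision rule: flag "attack" well below the usual 0.5
# probability cutoff, since a missed attack costs far more than a false alarm.
# The threshold and scores here are illustrative only.
ATTACK_THRESHOLD = 0.2

def classify(p_attack: float) -> str:
    return "attack" if p_attack >= ATTACK_THRESHOLD else "bug"

scores = [0.05, 0.15, 0.25, 0.6, 0.9]
print([classify(p) for p in scores])
```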
## Deployment & Interaction
::BackgroundTitle{title="Deployment & Interaction"}
::
To make our findings operational, we built a **Streamlit-based command center**:
* **On-the-Fly Analysis:** Security officers can paste a single log line to get an immediate "Bug vs. Attack" probability score.
* **Bulk Audit:** The interface supports CSV uploads, allowing for the rapid analysis of entire daily log batches to highlight high-risk anomalies.
## Technical Stack
::BackgroundTitle{title="Technical Stack"}
::
* **Language:** Python
* **ML Library:** Scikit-learn, XGBoost
* **Deployment:** Streamlit

View File

@@ -16,13 +16,15 @@ tags:
icon: i-ph-database-duotone
---
## The Challenge
::BackgroundTitle{title="The Challenge"}
::
Organized by **Natixis**, this hackathon followed a high-intensity format: **three consecutive Saturdays** of on-site development, bridged by two full weeks of remote collaboration.
Working in a **team of four**, our goal was to bridge the gap between non-technical stakeholders and complex financial databases by creating an autonomous "Data Talk" agent.
## Core Features
::BackgroundTitle{title="Core Features"}
::
### 1. Data Engineering & Schema Design
Before building the AI layer, we handled a significant data migration task. I led the effort to:
@@ -39,14 +41,16 @@ Data is only useful if it's readable. Our Nuxt application goes beyond raw tab
* **Dynamic Charts:** The agent automatically determines the best visualization type (Bar, Line, Pie) based on the query result and renders it using interactive components.
* **Narrative Explanations:** A final LLM pass summarizes the data findings in plain English, highlighting anomalies or key trends.
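The chart-selection step can be sketched as a simple heuristic. The rules below are my guesses at what such a dispatcher looks like, not the agent's actual logic:

```python
def pick_chart(rows: list[dict]) -> str:
    """Rough heuristic for choosing a visualization (illustrative rules only).

    - a time-like column  -> line chart
    - few categories      -> pie chart
    - otherwise           -> bar chart
    """
    if not rows:
        return "table"
    columns = rows[0].keys()
    if any(c.lower() in {"date", "month", "year"} for c in columns):
        return "line"
    if len(rows) <= 5:
        return "pie"
    return "bar"

print(pick_chart([{"month": "Jan", "revenue": 10}]))
```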
## Technical Stack
::BackgroundTitle{title="Technical Stack"}
::
* **Frontend/API:** **Nuxt 3** for a seamless, reactive user interface.
* **Orchestration:** **Vercel AI SDK** to manage streams and tool-calling logic.
* **Inference:** **Ollama** for running LLMs locally, ensuring data privacy during development.
* **Storage:** **PostgreSQL** for the converted data warehouse.
## Impact & Results
::BackgroundTitle{title="Impact & Results"}
::
This project demonstrated that a modern stack (Nuxt + local LLMs) can drastically reduce the time needed for data discovery. By the final Saturday, our team presented a working prototype capable of handling multi-table joins and generating real-time financial dashboards from simple chat prompts.

View File

@@ -18,14 +18,16 @@ icon: i-ph-money-wavy-duotone
This project focuses on building machine learning models to predict loan approval outcomes and assess default risk. The objective is to develop robust classification models that identify creditworthy applicants.
## 📊 Project Objectives
::BackgroundTitle{title="Project Objectives"}
::
- Build and compare multiple classification models for loan prediction
- Identify key factors influencing loan approval decisions
- Evaluate model performance using appropriate metrics
- Optimize model parameters for better predictive accuracy
## 🔍 Methodology
::BackgroundTitle{title="Methodology"}
::
The study employs a range of machine learning approaches:
@@ -35,7 +37,8 @@ The study employs a range of machine learning approaches:
- **Hyperparameter Tuning** - Optimizing model performance
- **Cross-validation** - Ensuring robust generalization
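The cross-validation split underlying the last step can be sketched in plain Python. This is a sketch, not the project's actual code, and uses simple interleaved folds:

```python
def kfold_indices(n: int, k: int) -> list[tuple[list[int], list[int]]]:
    """Split indices 0..n-1 into k (train, validation) folds."""
    folds = [list(range(i, n, k)) for i in range(k)]  # simple interleaved folds
    splits = []
    for i, val in enumerate(folds):
        train = [idx for j, f in enumerate(folds) if j != i for idx in f]
        splits.append((sorted(train), val))
    return splits

splits = kfold_indices(n=6, k=3)
print(splits[0])
```

Each observation lands in exactly one validation fold, so every model is scored on data it never saw during that fit.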
## 📄 Detailed Report
::BackgroundTitle{title="Detailed Report"}
::
<iframe src="/projects/loan-ml.pdf" width="100%" height="1000px">
</iframe>

View File

@@ -19,7 +19,8 @@ icon: i-ph-dice-five-duotone
This report presents the Monte Carlo Methods Project completed as part of the **Monte Carlo Methods** course at Paris-Dauphine University. The goal was to implement a range of Monte Carlo methods and algorithms in R.
## 🛠️ Methods and Algorithms
::BackgroundTitle{title="Methods and Algorithms"}
::
- Plotting graphs of functions
- Inverse CDF random variation simulation
@@ -28,11 +29,13 @@ This report presents the Monte Carlo Methods Project completed as part of the **
- Cumulative density function
- Empirical quantile function
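As an example of the inverse-CDF method above: for an Exponential(λ) variable, F⁻¹(u) = −ln(1 − u)/λ. The course projects are written in R; this is an equivalent Python sketch:

```python
import math
import random

def sample_exponential(lam: float, u: float) -> float:
    """Inverse CDF of Exp(lam): F^{-1}(u) = -ln(1 - u) / lam, for u in [0, 1)."""
    return -math.log(1.0 - u) / lam

random.seed(0)  # fixed seed for reproducibility
draws = [sample_exponential(2.0, random.random()) for _ in range(5)]
print(draws)
```

Feeding uniform draws through the inverse CDF yields exact samples from the target distribution, which is the workhorse trick behind most of the simulations listed above.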
## 📚 Resources
::BackgroundTitle{title="Resources"}
::
You can find the code here: [Monte Carlo Project Code](https://go.arthurdanjou.fr/monte-carlo-code)
## 📄 Detailed Report
::BackgroundTitle{title="Detailed Report"}
::
<iframe src="/projects/monte-carlo.pdf" width="100%" height="1000px">
</iframe>

View File

@@ -17,11 +17,13 @@ tags:
icon: i-ph-plugs-connected-duotone
---
## Overview
::BackgroundTitle{title="Overview"}
::
This project focuses on designing and implementing autonomous workflows that leverage Large Language Models (LLMs) to streamline productivity and academic research. By orchestrating Generative AI through a self-hosted infrastructure on my **[ArtLab](/projects/artlab)**, I built a private ecosystem that acts as both a personal assistant and a specialized research agent.
## Key Workflows
::BackgroundTitle{title="Key Workflows"}
::
### 1. Centralized Productivity Hub
I developed a synchronization engine that bridges **Notion**, **Google Calendar**, and **Todoist**.
@@ -35,7 +37,8 @@ To stay at the forefront of AI research, I built an automated pipeline for acade
* **Knowledge Base:** Relevant papers and posts are automatically stored in a structured Notion database.
* **Interactive Research Agent:** I integrated a chat interface within n8n that allows me to query this collected data. I can request summaries, ask specific technical questions about a paper, or extract the most relevant insights for my current thesis work.
## Technical Architecture
::BackgroundTitle{title="Technical Architecture"}
::
The environment is built to handle complex multi-step chains, moving beyond simple API calls to create context-aware agents.
@@ -44,7 +47,8 @@ The environment is built to handle complex multi-step chains, moving beyond simp
* **Data Sources:** RSS feeds and Notion databases.
* **Notifications & UI:** Gmail for briefings and Discord for real-time system alerts.
## Key Objectives
::BackgroundTitle{title="Key Objectives"}
::
1. **Privacy-Centric AI:** Ensuring that sensitive academic data and personal schedules remain within a self-hosted or controlled environment.
2. **Academic Efficiency:** Reducing the "noise" of information overload by using AI to surface only the most relevant research papers.

View File

@@ -16,13 +16,15 @@ tags:
icon: i-ph-lightning-duotone
---
## Overview
::BackgroundTitle{title="Overview"}
::
This project serves as a practical application of theoretical Reinforcement Learning (RL) principles. The goal is to develop and train autonomous agents capable of mastering the complex dynamics of **Atari Tennis**, using the **Arcade Learning Environment (ALE)** via Farama Foundation's Gymnasium.
Instead of simply reaching a high score, this project focuses on **strategy optimization** and **comparative performance** through a multi-stage tournament architecture.
## Technical Objectives
::BackgroundTitle{title="Technical Objectives"}
::
The project is divided into three core phases:
@@ -32,16 +34,17 @@ I am implementing several key RL algorithms covered during my academic curriculu
* **Policy Gradient Methods:** Proximal Policy Optimization (PPO) for more stable continuous action control.
* **Exploration Strategies:** Implementing epsilon-greedy and entropy-based exploration to handle the sparse reward signals in tennis rallies.
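The epsilon-greedy rule mentioned above fits in a few lines. A minimal sketch with invented Q-values:

```python
import random

def epsilon_greedy(q_values: list[float], epsilon: float, rng: random.Random) -> int:
    """With probability epsilon explore a random action, else exploit the best one."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))       # explore
    return max(range(len(q_values)), key=q_values.__getitem__)  # exploit

rng = random.Random(42)
action = epsilon_greedy([0.1, 0.7, 0.3], epsilon=0.0, rng=rng)
print(action)  # epsilon=0 always picks the greedy action
```

Annealing epsilon downward over training shifts the agent from exploration toward exploitation as its value estimates improve.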
### 2. The "Grand Slam" Tournament (Self-Play)
#### 2. The "Grand Slam" Tournament (Self-Play)
To determine the most robust strategy, I developed a competitive framework:
* **Agent vs. Agent:** Different algorithms (e.g., PPO vs. DQN) are pitted against each other in head-to-head matches.
* **Evolutionary Ranking:** Success is measured not just by points won, but by the ability to adapt to the opponent's playstyle (serve-and-volley vs. baseline play).
* **Winner Identification:** The agent with the highest win rate and most stable policy is crowned the "Optimal Strategist."
### 3. Benchmarking Against Atari Baselines
#### 3. Benchmarking Against Atari Baselines
The final "Boss Level" involves taking my best-performing trained agent and testing it against the pre-trained, high-performance algorithms provided by the Atari/ALE benchmarks. This serves as a validation step to measure the efficiency of my custom implementations against industry-standard baselines.
## Tech Stack & Environment
::BackgroundTitle{title="Tech Stack & Environment"}
::
* **Environment:** [ALE (Arcade Learning Environment) - Tennis](https://ale.farama.org/environments/tennis/)
* **Frameworks:** Python, Gymnasium, PyTorch (for neural network backends).

View File

@@ -18,11 +18,13 @@ icon: i-ph-city-duotone
This report presents the Schelling Segregation Model project completed as part of the **Projet Numérique** course at Paris-Saclay University. The goal was to implement the Schelling Segregation Model in Python and analyze the results using statistics and data visualization.
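The heart of the model is an agent-satisfaction rule. A minimal 1-D sketch (the project itself works on a 2-D grid; the tolerance value and grid below are illustrative):

```python
# Minimal 1-D Schelling satisfaction check; "." marks an empty cell.
# An agent is satisfied if at least `tolerance` of its neighbours share its type.
def is_satisfied(grid: list[str], i: int, tolerance: float = 0.5) -> bool:
    neighbours = [grid[j] for j in (i - 1, i + 1)
                  if 0 <= j < len(grid) and grid[j] != "."]
    if not neighbours:
        return True  # no neighbours, nothing to object to
    same = sum(1 for n in neighbours if n == grid[i])
    return same / len(neighbours) >= tolerance

grid = ["A", "A", "B", ".", "B"]
print([is_satisfied(grid, i) for i in range(len(grid)) if grid[i] != "."])
```

Dissatisfied agents relocate to empty cells; iterating this rule produces the segregated clusters the model is famous for, even with fairly mild tolerance settings.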
## 📚 Resources
::BackgroundTitle{title="Resources"}
::
You can find the code here: [Schelling Segregation Model Code](https://go.arthurdanjou.fr/schelling-code)
## 📄 Detailed Report
::BackgroundTitle{title="Detailed Report"}
::
<iframe src="/projects/schelling.pdf" width="100%" height="1000px">
</iframe>

View File

@@ -20,13 +20,15 @@ icon: i-ph-dog-duotone
Committed to digital innovation, Sevetys leverages centralized data systems to optimize clinic operations, improve patient data management, and enhance the overall client experience. This combination of medical excellence and operational efficiency supports veterinarians in delivering high-quality care nationwide.
## 🎯 Internship Objectives
::BackgroundTitle{title="Internship Objectives"}
::
During my two-month internship as a Data Engineer, I focused primarily on cleaning and standardizing customer and patient data, a critical task because this data is extensively used by clinics, Marketing, and Performance teams. Ensuring data quality was essential to the company's operations.
Additionally, I revised and enhanced an existing data quality report designed to evaluate the effectiveness of my cleaning processes. The report covered 47 detailed metrics assessing data completeness and consistency, providing valuable insights that helped maintain high standards across the organization.
## ⚙️ Technology Stack
::BackgroundTitle{title="Technology Stack"}
::
- **[Microsoft Azure Cloud](https://azure.microsoft.com/)**: Cloud infrastructure platform
- **[PySpark](https://spark.apache.org/docs/latest/api/python/)**: Distributed data processing framework

View File

@@ -17,7 +17,8 @@ icon: i-ph-heart-half-duotone
This project was carried out as part of the **Statistical Learning** course at Paris-Dauphine PSL University. The objective is to identify the most effective model for predicting or explaining the presence of breast cancer based on a set of biological and clinical features.
## 📊 Project Objectives
::BackgroundTitle{title="Project Objectives"}
::
Develop and evaluate several supervised classification models to predict the presence of breast cancer based on biological features extracted from the Breast Cancer Coimbra dataset, provided by the UCI Machine Learning Repository.
@@ -27,7 +28,8 @@ The dataset contains 116 observations divided into two classes:
There are 9 explanatory variables, including clinical measurements such as age, insulin levels, leptin, insulin resistance, among others.
## 🔍 Methodology
::BackgroundTitle{title="Methodology"}
::
The project follows a comparative approach between several algorithms:
@@ -40,11 +42,13 @@ Model evaluation is primarily based on the F1-score, which is more suitable in a
This project illustrates a concrete application of data science techniques to a public health issue, while implementing a rigorous methodology for supervised modeling.
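For reference, the F1-score is the harmonic mean of precision and recall; a quick sketch with invented confusion counts:

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative confusion counts on a small imbalanced test split.
print(f1_score(tp=18, fp=4, fn=2))
```

Because it ignores true negatives, F1 is not inflated by the majority class, which is what makes it preferable to raw accuracy on an imbalanced medical dataset like this one.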
## 📚 Resources
::BackgroundTitle{title="Resources"}
::
You can find the code here: [Breast Cancer Detection](https://go.arthurdanjou.fr/breast-cancer-detection-code)
## 📄 Detailed Report
::BackgroundTitle{title="Detailed Report"}
::
<iframe src="/projects/breast-cancer.pdf" width="100%" height="1000px">
</iframe>