25 Commits

Author SHA1 Message Date
c45b1d6f25 fix: update links to the GitHub repository and live application in the tuberculosis visualization project 2026-03-10 12:25:20 +01:00
1537343e44 fix: remove the "Data Visualisation Project" project file 2026-03-10 12:23:32 +01:00
ac5ccb3555 Refactor project documentation and structure
- Updated data visualization project documentation to remove incomplete warning.
- Deleted the glm-financial-assets project file and replaced it with glm-implied-volatility project file, detailing a comprehensive study on implied volatility prediction using GLMs and machine learning.
- Marked n8n automations project as completed.
- Added new project on reinforcement learning applied to Atari Tennis, detailing agent comparisons and results.
- Removed outdated rl-tennis project file.
- Updated package dependencies in package.json for improved stability and performance.
2026-03-10 12:07:09 +01:00
6d0e55e188 fix: update dependency versions for @iconify-json/vscode-icons and wrangler 2026-02-17 18:26:05 +01:00
5e743cb13e fix: add section titles with the BackgroundTitle component in several markdown files 2026-02-17 18:25:05 +01:00
68a3b0468b feat: add the "Wind Risk Modeling - The 1999 Martin Storm" project with description and methodology 2026-02-17 18:17:22 +01:00
0703ac7ff7 fix: adjust the width of the BackgroundTitle component for better display 2026-02-17 09:11:37 +01:00
20f17fba4e fix: adjust the font size in the BackgroundTitle component 2026-02-16 22:46:50 +01:00
81747fb458 fix: correct heading and section syntax in hobbies.md 2026-02-16 21:53:49 +01:00
9d2e485e76 fix: update the dependency version for @nuxthub/core in bun.lock 2026-02-16 21:29:24 +01:00
027c24f728 fix: correct Python code indentation in the documentation 2026-02-16 21:28:33 +01:00
2234ab1ea7 feat: add color and font variables to improve the theme 2026-02-16 21:26:05 +01:00
0beb1d8b4e feat: add the BackgroundTitle component and use it in the project and content pages 2026-02-16 21:16:07 +01:00
f489f933b5 fix: update the PhD end date in the description file 2026-02-16 21:00:23 +01:00
08fecc5bfa fix: correct capitalization in project statuses and update project descriptions 2026-02-16 20:22:11 +01:00
89a914e130 feat: add project labels with badges in the project list 2026-02-16 19:50:40 +01:00
5a4a4f380f feat: Add CLAUDE.md for project guidance and update project files
- Created CLAUDE.md to provide development commands, architecture overview, and environment variables for the Nuxt 3 portfolio website.
- Refactored project pages to remove unused color mappings and improve project filtering logic.
- Updated content.config.ts to enforce stricter project type definitions and added short descriptions for projects.
- Deleted outdated project files and added new projects related to hackathons and academic research.
- Enhanced existing project descriptions with short summaries for better clarity.
2026-02-16 19:48:31 +01:00
51c60a2790 fix: remove the useProjectColors function and its color definitions 2026-02-16 18:51:34 +01:00
2685aac920 feat: add a new text-stroke utility for text styling 2026-02-16 18:51:28 +01:00
b78d4ef983 Add new research and academic projects: Dropout Reduces Underfitting, GLM Bikes, ML Loan Prediction, and Breast Cancer Detection
- Implemented a new research project on Dropout strategies in deep learning, including detailed objectives, methodology, and usage instructions.
- Created a project for predicting bike rentals using Generalized Linear Models, outlining objectives, methodology, and key findings.
- Developed a machine learning project for loan prediction, detailing objectives, methodology, and a report on model performance.
- Added a project focused on breast cancer detection using various classification models, including objectives, methodology, and resources.
- Updated package.json with author information and upgraded dependencies.
2026-02-16 18:14:00 +01:00
572a9af72e fix: add the .node-version file with version 25.6.1 2026-02-15 22:40:18 +01:00
da5a859d0f fix: update the copyright year in the LICENSE file 2026-01-06 16:23:03 +01:00
e33ef53329 fix: remove the preview route from the Wrangler configuration 2026-01-06 15:00:24 +01:00
de2e3cc0a5 fix: improve the Cloudflare Wrangler deployment by capturing the deployment URL 2026-01-06 14:56:53 +01:00
2197c23062 fix: remove unnecessary configuration files from .gitignore 2026-01-06 14:52:44 +01:00
39 changed files with 2681 additions and 1486 deletions


@@ -59,13 +59,19 @@ jobs:
echo "env_name=Preview" >> $GITHUB_OUTPUT
fi
- name: Run Cloudflare Wrangler
uses: cloudflare/wrangler-action@v3
with:
apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
command: ${{ steps.target.outputs.wrangler_command }}
gitHubToken: ${{ secrets.GITHUB_TOKEN }}
- name: Run Cloudflare Wrangler & Capture URL
id: wrangler
run: |
# Run wrangler and redirect its output to a file while still displaying it (tee)
bunx wrangler ${{ steps.target.outputs.wrangler_command }} | tee wrangler.log
# Extract the deployment URL
if [ "${{ steps.target.outputs.env_name }}" = "Preview" ]; then
PREVIEW_URL=$(grep -o 'https://[^ ]*\.workers\.dev' wrangler.log | head -n 1)
echo "DEPLOY_URL=$PREVIEW_URL" >> $GITHUB_OUTPUT
else
echo "DEPLOY_URL=https://arthurdanjou.fr" >> $GITHUB_OUTPUT
fi
env:
CLOUDFLARE_API_TOKEN: ${{ secrets.CLOUDFLARE_API_TOKEN }}
CLOUDFLARE_ACCOUNT_ID: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
@@ -80,7 +86,8 @@ jobs:
title: "Portfolio Deployment (${{ steps.target.outputs.env_name }})"
description: |
Build finished on branch **${{ github.ref_name }}**.
Target environment: **${{ steps.target.outputs.env_name }}**.
Environment: **${{ steps.target.outputs.env_name }}**
URL: **${{ steps.wrangler.outputs.DEPLOY_URL }}**
Commit: `${{ github.sha }}` by ${{ github.actor }}.
nofail: false
nodetail: false
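The deploy step above pulls the preview URL out of the Wrangler log with a `grep -o` pattern. A minimal standalone sketch of that extraction, using a fabricated sample log (the log content below is illustrative, not real Wrangler output):

```shell
# Simulate a wrangler deploy log (sample content, for illustration only)
cat > wrangler.log <<'EOF'
Uploaded portfolio (1.23 sec)
Deployed portfolio triggers (0.45 sec)
  https://portfolio-preview.example.workers.dev
EOF

# Extract the first *.workers.dev URL, as the workflow step does
PREVIEW_URL=$(grep -o 'https://[^ ]*\.workers\.dev' wrangler.log | head -n 1)
echo "$PREVIEW_URL"
# → https://portfolio-preview.example.workers.dev
```

`grep -o` prints only the matched substring, and `head -n 1` keeps the first match in case Wrangler echoes the URL more than once.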

.gitignore

@@ -16,8 +16,6 @@ logs
# Misc
.DS_Store
.fleet
.idea
# Local env files
.env

.node-version

@@ -0,0 +1 @@
v25.6.1

CLAUDE.md

@@ -0,0 +1,86 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Development Commands
```bash
# Install dependencies
bun install
# Start development server
bun run dev
# Build for production
bun run build
# Preview production build
bun run preview
# Lint code
bun run lint
# Deploy to Cloudflare
bun run deploy
# Generate Cloudflare types
bun run cf-typegen
```
## Architecture Overview
This is a **Nuxt 3 portfolio website** deployed to **Cloudflare Workers** with the following architecture:
### Tech Stack
- **Framework**: Nuxt 3 (SSR/ISR with Cloudflare preset)
- **UI**: Nuxt UI v4 with Tailwind CSS
- **Content**: Nuxt Content with D1 database backend
- **Styling**: SASS (main.css) + Tailwind CSS
- **Database**: Cloudflare D1 (SQLite)
- **Cache**: Cloudflare KV
- **Icons**: Iconify
- **Composables**: VueUse
- **Validation**: Zod
- **Deployment**: Cloudflare Wrangler + NuxtHub
### Key Patterns
1. **Content Collections**: Content is organized into typed collections (projects, experiences, education, skills, contact, languages, hobbies) defined in `content.config.ts` with Zod schemas
2. **Server API Routes**: Data-fetching endpoints in `server/api/` use `queryCollection()` to fetch from D1 and return JSON to the client
3. **Composables**: Shared logic lives in `app/composables/`:
- `content.ts`: Fetches all main content collections
- `projects.ts`: Project status/type color mappings
4. **Component Structure**: Components are organized by domain in `app/components/`:
- `home/`: Homepage-specific components
- `content/`: Content projection components
- Shared components at root level
5. **Pages**: File-based routing with dynamic routes for projects (`pages/projects/[slug].vue`)
6. **Internationalization**: English/French support with content files in appropriate locales
7. **Server-side Resumes**: Static PDF resume endpoints in `server/routes/resumes/` (en.get.ts, fr.get.ts)
### Cloudflare Configuration
- **Bindings**: DB (D1), CACHE (KV), ASSETS (static files)
- **Workers preset**: `cloudflare_module`
- **Exported file**: `.output/server/index.mjs`
- **Preview URLs**: Enabled for branch deployments
### Environment Variables
Required for full functionality (see `.env.example`):
- `STUDIO_GITHUB_CLIENT_ID` / `STUDIO_GITHUB_CLIENT_SECRET`: Nuxt Studio integration
- `NUXT_WAKATIME_*`: Coding statistics API keys
- `NUXT_DISCORD_USER_ID`: Discord activity display
- `NUXT_STATUS_PAGE`: Status page URL
### Build Output
- Client assets: `.output/public/`
- Server code: `.output/server/`
- Database migrations: `.output/server/db/migrations/`


@@ -1,6 +1,6 @@
MIT License
Copyright (c) 2025 Arthur Danjou
Copyright (c) 2026 Arthur Danjou
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal


@@ -1,11 +1,25 @@
@import "tailwindcss";
@import "@nuxt/ui";
@utility text-stroke-* {
-webkit-text-stroke-width: calc(--value(integer) * 1px);
-webkit-text-stroke-color: --value(--color-*);
-webkit-text-stroke-color: --value([*]);
}
@theme {
--animate-wave: wave 2.5s infinite;
--font-mono: 'Monaspace Neon', ui-monospace, monospace;
--font-sofia: 'Sofia Sans', 'ui sans-serif', sans-serif;
--font-sans: 'DM Sans', 'ui sans-serif', sans-serif;
--font-geist: 'Geist', 'Inter', 'ui sans-serif', sans-serif;
--color-artlab-primary: #f2a272;
--color-artlab-primary-dark: #f5b89b;
--color-artlab-bg: #ffffff;
--color-artlab-bg-dark: #121212;
--color-artlab-text: #393a34;
--color-artlab-text-dark: #dbd7caee;
@keyframes wave {
0% {
@@ -40,15 +54,24 @@
}
:root {
--ui-bg: #f8f8f8;
--ui-font-family: 'DM Sans', sans-serif;
--ui-bg: #ffffff;
--ui-text: #393a34;
--ui-border: #f0f0f0;
--ui-muted: #7c7f93;
--ui-font-family: 'Geist', sans-serif;
transition-duration: 0.7s;
}
.dark {
--ui-bg: #0f0f0f;
--ui-bg: #121212;
--ui-text: #dbd7caee;
--ui-border: #191919;
--ui-muted: #9399b2;
--input-bg: #181818;
}
.sofia {
--ui-font-family: 'Sofia Sans', sans-serif;
}
}


@@ -0,0 +1,11 @@
<script lang="ts" setup>
defineProps<{
title: string
}>()
</script>
<template>
<h1 class="w-full md:w-[110%] mt-4 mb-2 font-bold text-4xl md:text-7xl text-transparent opacity-15 text-stroke-neutral-500 text-stroke-2 md:-translate-x-16">
{{ title }}
</h1>
</template>


@@ -1,7 +1,7 @@
<script lang="ts" setup>
import type { StatusPageData } from '~~/types'
const { data, status } = await useAsyncData<StatusPageData>('home-status', () =>
const { data, status, error } = await useAsyncData<StatusPageData>('home-status', () =>
$fetch('/api/status-page'),
{ lazy: true }
)
@@ -49,7 +49,7 @@ const statusState = computed(() => {
<template>
<ClientOnly>
<UCard class="h-full flex flex-col overflow-hidden">
<UCard v-if="!error" class="h-full flex flex-col overflow-hidden">
<div class="p-5 border-b border-neutral-200 dark:border-neutral-800">
<div class="flex items-center justify-between mb-2">
<h3 class="font-bold text-neutral-900 dark:text-white text-sm">


@@ -1,21 +0,0 @@
export function useProjectColors() {
const statusColors: Record<string, string> = {
'Active': 'blue',
'Completed': 'green',
'Archived': 'neutral',
'In progress': 'amber'
}
const typeColors: Record<string, string> = {
'Personal Project': 'purple',
'Academic Project': 'sky',
'Infrastructure Project': 'emerald',
'Internship Project': 'orange',
'Research Project': 'blue'
}
return {
statusColors,
typeColors
}
}


@@ -38,8 +38,6 @@ defineOgImageComponent('NuxtSeo', {
theme: '#F43F5E'
})
const { statusColors, typeColors } = useProjectColors()
const formattedDate = computed(() => {
if (!project.value?.publishedAt) return null
return new Date(project.value.publishedAt).toLocaleDateString('en-US', {
@@ -82,14 +80,12 @@ const formattedDate = computed(() => {
<div class="flex flex-wrap items-center gap-2 mb-4">
<UBadge
v-if="project.type"
:color="(typeColors[project.type] || 'neutral') as any"
variant="subtle"
>
{{ project.type }}
</UBadge>
<UBadge
v-if="project.status"
:color="(statusColors[project.status] || 'neutral') as any"
variant="subtle"
>
{{ project.status }}


@@ -1,11 +1,5 @@
<script lang="ts" setup>
const { data: projects } = await useAsyncData('projects', () => {
return queryCollection('projects')
.where('extension', '=', 'md')
.order('favorite', 'DESC')
.order('publishedAt', 'DESC')
.all()
})
import type { ProjectsCollectionItem } from '@nuxt/content'
const head = {
title: 'Engineering & Research Labs',
@@ -28,250 +22,144 @@ defineOgImageComponent('NuxtSeo', {
theme: '#F43F5E'
})
const { statusColors, typeColors } = useProjectColors()
const { data: projects } = await useAsyncData('projects', () => {
return queryCollection('projects')
.where('extension', '=', 'md')
.order('favorite', 'DESC')
.order('publishedAt', 'DESC')
.all()
})
const selectedType = ref<string | null>(null)
const selectedStatus = ref<string | null>(null)
const availableTypes = computed(() => {
const types = new Set<string>()
const grouped_projects = computed(() => {
const groups: Record<string, ProjectsCollectionItem[]> = {}
projects.value?.forEach((project) => {
if (project.type) types.add(project.type)
const group = project.type || 'Other'
if (!groups[group]) {
groups[group] = []
}
groups[group].push(project)
})
return Array.from(types).sort()
})
const availableStatuses = computed(() => {
const statuses = new Set<string>()
projects.value?.forEach((project) => {
if (project.status) statuses.add(project.status)
const orderPriority = ['Personal Project', 'Research Project', 'Academic Project']
const sorted = Object.entries(groups).sort(([keyA], [keyB]) => {
const indexA = orderPriority.indexOf(keyA)
const indexB = orderPriority.indexOf(keyB)
if (indexA === -1 && indexB === -1) return 0
if (indexA === -1) return 1
if (indexB === -1) return -1
return indexA - indexB
})
return Array.from(statuses).sort()
return Object.fromEntries(sorted)
})
const filteredProjects = computed(() => {
if (!projects.value) return []
return projects.value.filter((project) => {
const typeMatch = !selectedType.value || project.type === selectedType.value
const statusMatch = !selectedStatus.value || project.status === selectedStatus.value
return typeMatch && statusMatch
})
})
function clearFilters() {
selectedType.value = null
selectedStatus.value = null
}
const hasActiveFilters = computed(() => !!selectedType.value || !!selectedStatus.value)
const activeFilterCount = computed(() => (selectedType.value ? 1 : 0) + (selectedStatus.value ? 1 : 0))
</script>
<template>
<main class="space-y-8 py-4">
<div class="space-y-4">
<div class="flex flex-col items-center justify-center gap-4">
<h1 class="text-3xl sm:text-4xl font-bold text-neutral-900 dark:text-white font-mono tracking-tight">
Engineering & Research Labs
</h1>
<p class="max-w-3xl leading-relaxed">
Bridging the gap between theoretical models and production systems. Explore my experimental labs, open-source contributions, and engineering work.
Bridging the gap between theoretical models and production systems. <br>Explore my experimental labs, open-source contributions, and engineering work.
</p>
<UButton
size="md"
label="See more open source projects on Github"
variant="soft"
color="neutral"
icon="i-ph-github-logo"
to="https://go.arthurdanjou.fr/github"
/>
</div>
<div class="flex flex-col gap-4">
<div class="flex items-center gap-2 overflow-x-auto w-full whitespace-nowrap pb-2">
<span class="text-sm font-medium text-neutral-700 dark:text-neutral-300 mr-2 min-w-12.5">Type:</span>
<UButton
:variant="!selectedType ? 'solid' : 'ghost'"
color="neutral"
size="sm"
@click="selectedType = null"
>
All
</UButton>
<UButton
v-for="type in availableTypes"
:key="type"
:variant="selectedType === type ? 'subtle' : 'ghost'"
:color="(typeColors[type] || 'neutral') as any"
size="sm"
class="transition-all duration-200"
:class="selectedType === type ? 'ring-1 ring-inset' : ''"
@click="selectedType = selectedType === type ? null : type"
>
{{ type }}
</UButton>
</div>
<div class="flex items-center gap-4 overflow-x-auto flex-nowrap md:flex-wrap w-full whitespace-nowrap pb-2">
<div class="flex gap-2 items-center">
<span class="text-sm font-medium text-neutral-700 dark:text-neutral-300">Status:</span>
<UButton
:variant="!selectedStatus ? 'solid' : 'ghost'"
color="neutral"
size="sm"
@click="selectedStatus = null"
<div class="flex flex-col gap-16">
<div
v-for="(projects, group) in grouped_projects"
:key="group"
class="relative"
>
<BackgroundTitle :title="group" />
<div class="grid grid-cols-1 md:grid-cols-2 gap-8 grid-rows-auto">
<NuxtLink
v-for="project in projects"
:key="project.slug"
:to="`/projects/${project.slug}`"
:aria-label="`Open project: ${project.title}`"
class="hover:bg-[#8881] dark:hover:bg-neutral-700/20 duration-500 rounded-lg transition-colors p-4"
>
All
</UButton>
<UButton
v-for="status in availableStatuses"
:key="status"
:variant="selectedStatus === status ? 'solid' : 'ghost'"
:color="(statusColors[status] || 'neutral') as any"
size="sm"
@click="selectedStatus = selectedStatus === status ? null : status"
>
{{ status }}
</UButton>
</div>
<div class="ml-auto">
<UButton
v-if="hasActiveFilters"
variant="ghost"
color="neutral"
size="sm"
icon="i-ph-x-circle-duotone"
aria-label="Clear filters"
@click="clearFilters"
>
Clear filters ({{ activeFilterCount }})
</UButton>
</div>
</div>
<div class="grid grid-cols-1 md:grid-cols-2 gap-6">
<UCard
v-for="project in filteredProjects"
:key="project.slug"
class="relative hover:shadow-sm transition-all duration-300 group"
>
<template #header>
<div class="flex items-start gap-4">
<div
class="p-2 rounded-lg shrink-0 flex items-center justify-center"
:class="project.favorite ? 'ring-2 ring-amber-400 text-amber-400' : 'bg-neutral-200 dark:bg-neutral-800 text-neutral-700 dark:text-neutral-300'"
>
<div class="flex justify-center items-center gap-4 z-50">
<div>
<UIcon
:name="project.icon || 'i-ph-code-duotone'"
class="w-6 h-6"
:name="project.icon"
size="40"
/>
</div>
<div class="flex-1 min-w-0">
<UTooltip
:text="project.title"
:popper="{ placement: 'top-start' }"
class="w-full relative z-10"
>
<NuxtLink
:to="`/projects/${project.slug}`"
class="block focus:outline-none"
<div class="space-y-2">
<h1 class="font-bold">
{{ project.title }}
</h1>
<p class="italic text-xs text-muted">
{{ project.shortDescription }}
</p>
<div class="flex items-center justify-between">
<div
v-if="project.tags?.length"
class="flex flex-wrap gap-1.5"
>
<h3 class="text-lg font-bold truncate group-hover:text-neutral-900 text-neutral-500 dark:group-hover:text-white transition-colors duration-200">
{{ project.title }}
</h3>
</NuxtLink>
</UTooltip>
<div class="flex items-center gap-2 mt-2 flex-wrap relative z-10">
<UBadge
v-if="project.type"
:color="(typeColors[project.type] || 'neutral') as any"
variant="subtle"
size="xs"
>
{{ project.type }}
</UBadge>
<UBadge
v-if="project.status"
:color="(statusColors[project.status] || 'neutral') as any"
variant="subtle"
size="xs"
>
{{ project.status }}
</UBadge>
<UBadge
v-if="project.favorite"
color="amber"
variant="subtle"
size="xs"
>
</UBadge>
<UBadge
v-for="tag in project.tags"
:key="tag"
color="neutral"
variant="outline"
size="xs"
>
{{ tag }}
</UBadge>
</div>
<div class="flex gap-2 items-center justify-center">
<UTooltip
text="Favorite"
:delay-duration="4"
>
<UBadge
v-if="project.favorite"
color="amber"
variant="subtle"
size="sm"
icon="i-ph-star-four-duotone"
/>
</UTooltip>
<UTooltip
text="In Progress"
:delay-duration="4"
>
<UBadge
v-if="project.status === 'In progress'"
color="blue"
variant="soft"
size="sm"
icon="i-ph-hourglass-duotone"
/>
</UTooltip>
<UTooltip
text="Archived"
:delay-duration="4"
>
<UBadge
v-if="project.status === 'Archived'"
color="gray"
variant="soft"
size="sm"
icon="i-ph-archive-duotone"
/>
</UTooltip>
</div>
</div>
</div>
</div>
</template>
<p class="text-sm text-neutral-600 dark:text-neutral-400 line-clamp-3 leading-relaxed">
{{ project.description }}
</p>
<template #footer>
<div class="flex items-center justify-between">
<div class="flex flex-wrap gap-1">
<UBadge
v-for="tag in project.tags?.slice(0, 3)"
:key="tag"
color="neutral"
variant="outline"
size="xs"
class="opacity-75"
>
{{ tag }}
</UBadge>
<span
v-if="project.tags && project.tags.length > 3"
class="text-xs text-neutral-400 font-mono ml-1 self-center"
>
+{{ project.tags.length - 3 }}
</span>
</div>
<span
v-if="project.readingTime"
class="text-xs text-neutral-400 font-mono flex items-center gap-1 shrink-0 ml-2"
>
<UIcon
name="i-ph-hourglass-medium-duotone"
class="w-3 h-3"
/>
{{ project.readingTime }}m
</span>
</div>
</template>
<NuxtLink
:to="`/projects/${project.slug}`"
:aria-label="`Open project: ${project.title}`"
class="absolute inset-0 z-0"
/>
</UCard>
</div>
<div
v-if="filteredProjects.length === 0"
class="text-center py-20 border border-dashed border-neutral-200 dark:border-neutral-800 rounded-xl"
>
<UIcon
name="i-ph-flask-duotone"
class="text-6xl text-neutral-300 dark:text-neutral-700 mb-4"
/>
<h3 class="text-lg font-medium text-neutral-900 dark:text-white">
No experiments found
</h3>
<p class="text-neutral-500 dark:text-neutral-400 mb-6">
No projects match the selected filters.
</p>
<UButton
color="primary"
variant="soft"
@click="clearFilters"
>
Clear Filters
</UButton>
</NuxtLink>
</div>
</div>
</div>
</main>
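The `grouped_projects` computed in the diff above buckets projects by `type` and orders the groups by a fixed priority list, pushing unknown groups to the end. A self-contained sketch of that grouping logic, with a simplified `Project` type and fabricated sample data:

```typescript
type Project = { title: string; type?: string }

// Group projects by type, then order groups by a priority list;
// groups absent from the list keep relative order and sort last.
function groupByPriority(
  projects: Project[],
  priority: string[],
): Record<string, Project[]> {
  const groups: Record<string, Project[]> = {}
  for (const p of projects) {
    const key = p.type ?? 'Other'
    ;(groups[key] ??= []).push(p)
  }
  const sorted = Object.entries(groups).sort(([a], [b]) => {
    const ia = priority.indexOf(a)
    const ib = priority.indexOf(b)
    if (ia === -1 && ib === -1) return 0
    if (ia === -1) return 1
    if (ib === -1) return -1
    return ia - ib
  })
  return Object.fromEntries(sorted)
}

const demo = groupByPriority(
  [
    { title: 'Thesis', type: 'Academic Project' },
    { title: 'Homelab', type: 'Personal Project' },
    { title: 'Misc' },
  ],
  ['Personal Project', 'Research Project', 'Academic Project'],
)
console.log(Object.keys(demo)) // group names in priority order, 'Other' last
```

`Object.fromEntries` preserves insertion order for string keys, so iterating the result in a `v-for` renders the groups in the sorted order.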

bun.lock

File diff suppressed because it is too large.


@@ -13,13 +13,14 @@ export const collections = {
schema: z.object({
slug: z.string(),
title: z.string(),
type: z.string().optional(),
type: z.enum(['Personal Project', 'Academic Project', 'Hackathon', 'Research Project', 'Internship Project']),
description: z.string(),
shortDescription: z.string(),
publishedAt: z.string(),
readingTime: z.number().optional(),
readingTime: z.number(),
tags: z.array(z.string()),
favorite: z.boolean().optional(),
status: z.enum(['active', 'completed', 'archived', 'in progress']),
status: z.enum(['Active', 'Completed', 'Archived', 'In progress']),
icon: z.string()
})
})),


@@ -4,7 +4,7 @@ degree: Doctorate
institution: Academic Labs
location: Paris / International
startDate: 2026-10
endDate: null
endDate: 2029-10
duration: 3 years
description: I am actively seeking a PhD position starting in Fall 2026. My research interest lies at the intersection of Applied Mathematics and Deep Learning, specifically focusing on AI Safety, Adversarial Robustness, and Formal Verification. I aim to contribute to developing mathematically grounded methods to ensure the reliability and alignment of modern AI systems.
tags:
@@ -13,4 +13,4 @@ tags:
- Formal Verification
- Applied Mathematics
icon: i-ph-student-duotone
---
---


@@ -9,11 +9,12 @@ Research demands deep focus, but breakthrough ideas often come from stepping bac
---
## ⚡ High-Velocity Interests
::BackgroundTitle{title="High-Velocity Interests"}
::
I am drawn to environments where strategy, speed, and precision intersect. These are not just pastimes, but exercises in optimization under constraints.
::div{class="grid grid-cols-1 md:grid-cols-2 gap-6"}
:::div{class="grid grid-cols-1 md:grid-cols-2 gap-6"}
::card{title="Motorsports Strategy" icon="i-ph-flag-checkered-duotone"}
**Formula 1 Enthusiast**
@@ -27,15 +28,16 @@ Team sports are my foundation for resilience. As a :hover-text{text="former Rugb
* **Takeaway:** Collective intelligence always outperforms individual brilliance.
::
::
:::
---
## 🌍 Perspectives & Culture
::BackgroundTitle{title="Perspectives & Culture"}
::
Curiosity is the fuel of a researcher. Expanding my horizon helps me approach problems with fresh angles.
::div{class="grid grid-cols-1 md:grid-cols-2 gap-6"}
:::div{class="grid grid-cols-1 md:grid-cols-2 gap-6"}
::card{title="Global Exploration" icon="i-ph-airplane-tilt-duotone"}
**Travel & Adaptation**
@@ -48,14 +50,15 @@ Exposure to diverse systems fosters adaptability. From the history of **Egypt**
As a long-time supporter of **PSG**, I appreciate the tactical analysis and performance management at the highest level. Football is a game of :hover-text{text="spatial optimization" hover="Controlling space & transitions"}, much like architecting a neural network.
::
::
:::
---
## 🎵 Creative Patterns
::BackgroundTitle{title="Creative Patterns"}
::
**Music** serves as my cognitive reset. Training my ear to recognize harmony and structure translates directly to identifying elegant solutions in system design. It reinforces my belief that great engineering, like great music, requires both **technical precision** and **artistic intuition**.
::card{title="Philosophy" icon="i-ph-sparkle-duotone"}
"Balance is not something you find, it's something you create."
::
::


@@ -20,7 +20,8 @@ When I'm not deriving generalization bounds or fixing pipelines, I enjoy :hover-
---
## 🛠 Scientific & Technical Arsenal
::BackgroundTitle{title="Scientific & Technical Arsenal"}
::
My research capabilities rely on a :hover-text{text="dual expertise" hover="The Scientist & The Builder"}: :hover-text{text="advanced mathematical modeling" hover="Stochastic Calculus, Optimization, Probability"} for conception, and :hover-text{text="robust engineering" hover="CI/CD, Docker, Kubernetes"} for execution.
@@ -29,7 +30,8 @@ My research capabilities rely on a :hover-text{text="dual expertise" hover="The
---
## 💼 Research & Engineering Path
::BackgroundTitle{title="Research & Engineering Path"}
::
Theoretical knowledge is nothing without concrete application. From :hover-text{text="building distributed systems" hover="High-availability architectures"} to designing :hover-text{text="defensive AI pipelines" hover="Adversarial Robustness"}, my journey reflects a constant shift towards critical challenges.
@@ -38,7 +40,8 @@ Theoretical knowledge is nothing without concrete application. From :hover-text{
---
## 🎓 Academic Foundation
::BackgroundTitle{title="Academic Foundation"}
::
Mathematical rigor is the cornerstone of Safe AI. My background in :hover-text{text="Statistics, Probability, and Optimization" hover="The M280 Trinity 📐"} provides the necessary tools to understand and secure modern Deep Learning architectures.
@@ -47,7 +50,8 @@ Mathematical rigor is the cornerstone of Safe AI. My background in :hover-text{t
---
## 📊 Live Telemetry
::BackgroundTitle{title="Live Telemetry"}
::
Research requires discipline and transparency. Here is a real-time overview of my :hover-text{text="current environment" hover="OS, Editor, Activity"} and historical data.
@@ -58,6 +62,7 @@ Research requires discipline and transparency. Here is a real-time overview of m
::
::home-live-stats
::
---
@@ -65,4 +70,4 @@ Research requires discipline and transparency. Here is a real-time overview of m
::
::home-catch-phrase
::
::


@@ -1,28 +0,0 @@
---
slug: arthome
title: ArtHome - Browser Homepage
type: Personal Project
description: A customizable browser homepage that lets you organize all your favorite links in one place with categories, tabs, icons and colors.
publishedAt: 2024-09-04
readingTime: 1
status: Archived
tags:
- Nuxt
- Vue.js
- Web
- Productivity
icon: i-ph-house-duotone
---
[**ArtHome**](https://go.arthurdanjou.fr/arthome) is a customizable browser homepage that lets you organize all your favorite links in one place.
Create categories and tabs to group your shortcuts, personalize them with icons and colors, and make the page private if you want to keep your links just for yourself. The interface is clean, responsive, and works across all modern browsers.
## 🛠️ Technology Stack
- **[Nuxt](https://nuxt.com)**: An open-source framework for building performant, full-stack web applications with Vue.
- **[NuxtHub](https://hub.nuxt.com)**: A Cloudflare-powered platform to deploy and scale Nuxt apps globally with minimal latency and full-stack capabilities.
- **[NuxtUI](https://ui.nuxt.com)**: A sleek and flexible component library that helps create beautiful, responsive UIs for Nuxt applications.
- **[ESLint](https://eslint.org)**: A linter that identifies and fixes problems in your JavaScript/TypeScript code.
- **[Drizzle ORM](https://orm.drizzle.team/)**: A lightweight, type-safe ORM built for TypeScript, designed for simplicity and performance.
- **[Zod](https://zod.dev/)**: A TypeScript-first schema declaration and validation library with full static type inference.


@@ -1,10 +1,11 @@
---
slug: artlab
title: ArtLab - Personal HomeLab
type: Infrastructure Project
description: A personal homelab environment where I deploy, test, and maintain self-hosted services with privacy-focused networking through VPN and Cloudflare Tunnels.
type: Personal Project
description: A private R&D sandbox and high-availability infrastructure for deploying MLOps pipelines, managing large-scale data, and experimenting with cloud-native automation.
shortDescription: A professional-grade homelab for self-hosting, MLOps, and network security.
publishedAt: 2025-09-04
readingTime: 1
readingTime: 2
favorite: true
status: Active
tags:
@@ -13,32 +14,55 @@ tags:
- HomeLab
- Self-Hosted
- Infrastructure
- Networking
icon: i-ph-flask-duotone
---
[**ArtLab**](https://go.arthurdanjou.fr/status) is my personal homelab, where I experiment with self-hosting and automation.
[**ArtLab**](https://go.arthurdanjou.fr/status) is my personal homelab: a controlled environment for experimenting with DevOps, distributed systems, and private cloud architecture.
My homelab is a self-hosted environment where I deploy, test, and maintain personal services. Everything is securely exposed **only through a private VPN** using [Tailscale](https://tailscale.com/), ensuring encrypted, access-controlled connections across all devices. For selected services, I also use **Cloudflare Tunnels** to enable secure external access without opening ports or exposing my public IP.
::BackgroundTitle{title="Architectural Philosophy"}
::
## 🛠️ Running Services
The infrastructure follows a **Zero Trust** model. Access is restricted to a private mesh VPN using **Tailscale (WireGuard)**, removing the need for open ports. For select public endpoints, **Cloudflare Tunnels** provide a hardened entry point, keeping my public IP hidden while preserving end-to-end encryption from the edge to the origin.
- **MinIO**: S3-compatible object storage for static files and backups.
- **Immich**: Self-hosted photo management platform — a private alternative to Google Photos.
- **Jellyfin**: Media server for streaming movies, shows, and music.
- **Portainer & Docker**: Container orchestration and service management.
- **Traefik**: Reverse proxy and automatic HTTPS with Let's Encrypt.
- **Homepage**: A sleek dashboard to access and monitor all services.
- **Proxmox**: Virtualization platform used to manage VMs and containers.
- **Uptime Kuma**: Self-hosted uptime monitoring.
- **Home Assistant**: Smart home automation and device integration.
- **AdGuard Home**: Network-wide ad and tracker blocking via DNS.
- **Beszel**: Self-hosted, lightweight alternative to Notion for notes and knowledge management.
- **Palmr**: Personal logging and journaling tool.
::BackgroundTitle{title="Service Stack"}
::
## 🖥️ Hardware Specifications
Services are grouped by functional domain to keep orchestration clean and scalable:
- **Beelink EQR6**: AMD Ryzen mini PC, main server host.
- **TP-Link 5-port Switch**: Network connectivity for all devices.
- **UGREEN NASync DXP4800 Plus**: 4-bay NAS, currently populated with 2 × 8TB drives for storage and backups.
### Infrastructure & Virtualization
* **Proxmox VE**: Type-1 hypervisor managing LXC containers and VMs for strict resource isolation.
* **Docker & Portainer**: Container runtime and orchestration for rapid deployment.
* **Traefik**: Edge router and reverse proxy providing automatic HTTPS via Let's Encrypt.
* **Tailscale**: Secure networking layer for cross-device connectivity and remote management.
### Data & Storage
* **Garage**: S3-compatible distributed object storage for backups and static assets.
* **Immich**: High-performance photo management and AI-powered backup solution.
* **Jellyfin**: Media server for hardware-accelerated streaming.
* **Redis**: In-memory data structure store for caching and session management.
### Automation & Observability
* **n8n**: Workflow automation platform for orchestrating complex service interactions.
* **Uptime Kuma**: Real-time status monitoring and incident alerting.
* **Beszel**: Lightweight agent-based resource monitoring for CPU/RAM/Disk metrics.
* **AdGuard Home**: Network-wide DNS sinkhole for ad-blocking and privacy.
### Home Intelligence
* **Home Assistant**: Centralized hub for IoT integration and automation logic.
* **MQTT Broker**: Low-latency message bus for device-to-service communication.
* **Zigbee2MQTT**: Bridge for local Zigbee device control without cloud dependencies.
::BackgroundTitle{title="Hardware Specifications"}
::
| Component | Hardware | Role |
| :--- | :--- | :--- |
| **Main Host** | **Beelink EQR6** (AMD Ryzen) | Compute, Containers & VMs |
| **Storage** | **UGREEN NASync DXP4800 Plus** | 4-bay NAS, 16TB ZFS / Backups |
| **Networking** | **TP-Link 5-port Gigabit Switch** | Local Backbone |
| **Zigbee** | **SLZB-MR4 Coordinator** | Home Automation Mesh |
---
This homelab is a sandbox for DevOps experimentation, infrastructure reliability, and privacy-respecting digital autonomy.


@@ -3,6 +3,7 @@ slug: artsite
title: ArtSite - Personal Research Hub
type: Personal Project
description: My digital headquarters. A high-performance portfolio built on the Edge using the full Nuxt ecosystem, deployed to Cloudflare Workers via Wrangler.
shortDescription: A modern portfolio and experimental lab built on the Nuxt ecosystem and deployed to Cloudflare Workers.
publishedAt: 2024-06-01
readingTime: 2
favorite: true
@@ -15,31 +16,32 @@ tags:
icon: i-ph-globe-hemisphere-west-duotone
---
[**ArtSite**](https://go.arthurdanjou.fr/website) is my digital headquarters: a unified platform that serves as my engineering portfolio and experimental lab.
More than a static site, it is a modern **Portfolio** designed to be fast, accessible, and type-safe. It also acts as a live production environment where I test the latest frontend technologies and Edge computing paradigms.
## ⚡ The Nuxt Stack Architecture
::BackgroundTitle{title="The Nuxt Stack Architecture"}
::
This project is built entirely on the **Nuxt ecosystem**, leveraging the synergy between its modules for maximum developer experience and performance.
This project is built entirely on the **Nuxt ecosystem**, leveraging module synergy for strong developer experience and performance.
### Core Engine
- **[Nuxt 3](https://nuxt.com/)**: The meta-framework providing the backbone (SSR, auto-imports, modules).
- **[Nitro](https://nitro.unjs.io/)**: The high-performance server engine powering API routes and Edge rendering.
### Infrastructure & Deployment
- **[Cloudflare Workers](https://workers.cloudflare.com/)**: The application runs entirely on Cloudflare's global serverless network (SSR), delivering low latency and high resilience.
- **[Wrangler](https://developers.cloudflare.com/workers/wrangler/)**: The command-line tool used for deployment pipelines and worker configuration.
- **[NuxtHub](https://hub.nuxt.com/)**: Integrated for **advanced cache management** and to unify Cloudflare platform features (KV, D1, Blob) within the Nuxt runtime.
### Content & Data
- **[Nuxt Content](https://content.nuxtjs.org/)**: A Git-based headless CMS that treats Markdown as a database.
- **[Nuxt Studio](https://nuxt.studio)**: A live visual editor for seamless content management directly from the browser.
### Interface & Design
- **[Nuxt UI](https://nuxtui.com/)**: A comprehensive component library built on Headless UI and Tailwind CSS.
- **[Tailwind CSS](https://tailwindcss.com/)**: Utility-first styling for rapid, responsive design.
### Quality Assurance
- **[TypeScript](https://www.typescriptlang.org/)**: Strict type safety across the entire stack (frontend and backend).
- **[Zod](https://zod.dev/)**: Runtime schema validation for API inputs and environment variables.


@@ -3,6 +3,7 @@ slug: artstudies
title: ArtStudies - Academic Projects Collection
type: Academic Project
description: A curated collection of mathematics and data science projects developed during my academic journey, spanning Bachelor's and Master's studies.
shortDescription: A collection of academic projects in mathematics and data science from my university studies.
publishedAt: 2023-09-01
readingTime: 1
favorite: true
@@ -15,14 +16,15 @@ tags:
icon: i-ph-book-duotone
---
[**ArtStudies Projects**](https://github.com/ArthurDanjou/artstudies) is a curated collection of academic projects completed throughout my mathematics studies. The repository showcases work in both _Python_ and _R_, with a focus on mathematical modeling, data analysis, and numerical methods.
The projects are organized into three main sections:
- **L3**: Third year of the Bachelor's degree in Mathematics
- **M1**: First year of the Master's degree in Mathematics
- **M2**: Second year of the Master's degree in Mathematics
::BackgroundTitle{title="File Structure"}
::
- `L3`
- `Analyse Matricielle`
@@ -51,7 +53,8 @@ The projects are organized into three main sections:
- `VBA`
- `SQL`
::BackgroundTitle{title="Technologies & Tools"}
::
- **[Python](https://www.python.org)**: A high-level, interpreted programming language, widely used for data science, machine learning, and scientific computing.
- **[R](https://www.r-project.org)**: A statistical computing environment, perfect for data analysis and visualization.


@@ -0,0 +1,82 @@
---
slug: climate-issues
title: Wind Risk Modeling - The 1999 Martin Storm
type: Academic Project
description: An advanced study on wind risk modeling and meteorological hazard assessment, focusing on the historical Martin Storm of December 1999. Combines data analysis, statistical modeling, and GIS mapping to quantify natural disaster impacts.
shortDescription: A comprehensive analysis of wind risk modeling during the 1999 Martin Storm using statistical methods and spatial analysis.
publishedAt: 2026-02-17
readingTime: 5
status: Completed
tags:
- Meteorology
- Risk Assessment
- Data Analysis
- Climate Science
- GIS
- Statistics
icon: i-ph-wind-duotone
---
::BackgroundTitle{title="Overview"}
::
This project is a detailed study of **wind risk assessment and modeling** in the context of natural disasters, using the **December 1999 Martin Storm** as a case study. The analysis combines statistical methods, meteorological data, and spatial analysis techniques to understand and quantify the impacts of extreme wind events.
::BackgroundTitle{title="Objectives"}
::
The primary objectives of this research were:
1. **Characterize extreme meteorological events** and their propagation patterns
2. **Model wind risk** using statistical and probabilistic approaches
3. **Assess spatial distribution** of hazards using GIS mapping techniques
4. **Quantify economic and environmental impacts** of the storm
5. **Develop predictive models** for future risk assessment and disaster preparedness
::BackgroundTitle{title="Methodology"}
::
### Data Sources
- Historical meteorological records from the 1999 Martin Storm
- Wind speed measurements from weather stations across France
- Satellite imagery and atmospheric pressure data
- Damage assessments and economic loss records
### Analytical Techniques
- **Time-series analysis** of wind speed and atmospheric pressure
- **Spatial interpolation** using kriging and other geostatistical methods
- **Probability distribution fitting** (Weibull, Gumbel, and Log-Normal distributions)
- **Return period estimation** for extreme wind events
- **Geographic Information Systems (GIS)** for hazard mapping and visualization
### Statistical Models
- Extreme Value Theory (EVT) for tail risk analysis
- Generalized Extreme Value (GEV) distributions
- Peak-over-threshold (POT) methods
- Spatial correlation analysis
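The distribution-fitting and return-period steps above can be illustrated with the Gumbel case, the simplest of the fitted families. Below is a minimal numpy sketch, not the project's code (which is only available in the PDF report): a method-of-moments Gumbel fit to synthetic annual gust maxima, followed by return-level estimation.

```python
import numpy as np

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def fit_gumbel_moments(maxima):
    """Fit a Gumbel distribution to block maxima by the method of moments."""
    maxima = np.asarray(maxima, dtype=float)
    beta = maxima.std(ddof=1) * np.sqrt(6) / np.pi  # scale parameter
    mu = maxima.mean() - EULER_GAMMA * beta         # location parameter
    return mu, beta

def return_level(mu, beta, T):
    """Level exceeded once every T blocks (e.g. years) on average."""
    p = 1.0 - 1.0 / T  # non-exceedance probability
    return mu - beta * np.log(-np.log(p))

# Synthetic annual-maximum gust speeds in km/h (illustrative, not storm data):
# inverse-CDF sampling from a Gumbel(120, 15).
rng = np.random.default_rng(0)
mu_true, beta_true = 120.0, 15.0
annual_maxima = mu_true - beta_true * np.log(-np.log(rng.uniform(size=500)))

mu, beta = fit_gumbel_moments(annual_maxima)
z100 = return_level(mu, beta, 100)  # estimated 100-year return level
```

The same workflow applies to the GEV and POT analyses, with the fit replaced by maximum likelihood over the extra shape parameter.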
::BackgroundTitle{title="Key Findings"}
::
The analysis revealed:
- Wind speeds exceeding 100 km/h across multiple regions
- Non-uniform spatial distribution of damage intensity
- Correlation patterns between meteorological variables and structural damage
- Seasonal and geographical risk variations
- Return period estimations for comparable extreme events
::BackgroundTitle{title="Applications"}
::
The methodologies developed in this project have applications in:
- **Disaster risk reduction and preparedness** planning
- **Insurance and risk assessment** for natural hazards
- **Urban planning** and infrastructure resilience
- **Climate adaptation** strategies
- **Early warning systems** for extreme weather events
::BackgroundTitle{title="Detailed Report"}
::
<iframe src="/projects/climate-issues.pdf" width="100%" height="1000px">
</iframe>


@@ -1,54 +0,0 @@
---
slug: data-visualisation
title: Data Visualisation Project
type: Academic Project
description: An interactive data visualization project built with R, R Shiny, and ggplot2 for creating dynamic, explorable visualizations.
publishedAt: 2026-01-05
readingTime: 1
status: Completed
tags:
- R
- R Shiny
- Data Visualization
- ggplot2
icon: i-ph-chart-bar-duotone
---
::warning
The project is currently in progress, and more details will be added as development continues.
::
This project involves creating an interactive data visualization application using R and R Shiny. The goal is to develop dynamic and explorable visualizations that allow users to interact with the data in meaningful ways.
## 🛠️ Technologies & Tools
- **[R](https://www.r-project.org)**: A statistical computing environment, perfect for data analysis and visualization.
- **[R Shiny](https://shiny.rstudio.com)**: A web application framework for R that enables the creation of interactive web applications directly from R.
- **[ggplot2](https://ggplot2.tidyverse.org)**: A powerful R package for creating static and dynamic visualizations using the Grammar of Graphics.
- **[dplyr](https://dplyr.tidyverse.org)**: An R package for data manipulation, providing a consistent set of verbs to help you solve common data manipulation challenges.
- **[tidyr](https://tidyr.tidyverse.org)**: An R package for tidying data, making it easier to work with and visualize.
- **[tidyverse](https://www.tidyverse.org)**: A collection of R packages designed for data science that share an underlying design philosophy, grammar, and data structures.
- **[sf](https://r-spatial.github.io/sf/)**: An R package for working with simple features, providing support for spatial data manipulation and analysis.
- **[rnaturalearth](https://docs.ropensci.org/rnaturalearth/)**: An R package that provides easy access to natural earth map data for creating geographical visualizations.
- **[rnaturalearthdata](https://github.com/ropensci/rnaturalearthdata)**: Companion package to rnaturalearth containing large natural earth datasets.
- **[knitr](https://yihui.org/knitr/)**: An R package for dynamic report generation, enabling the integration of code and text.
- **[kableExtra](https://haozhu233.github.io/kableExtra/)**: An R package for customizing tables and enhancing their visual presentation.
- **[gridExtra](https://cran.r-project.org/web/packages/gridExtra/)**: An R package for arranging multiple grid-based plots on a single page.
- **[moments](https://cran.r-project.org/web/packages/moments/)**: An R package for computing moments, skewness, kurtosis and related statistics.
- **[factoextra](http://www.sthda.com/english/rpkgs/factoextra/)**: An R package for multivariate data analysis and visualization, including PCA and clustering methods.
- **[shinydashboard](https://rstudio.github.io/shinydashboard/)**: An R package for creating dashboards with Shiny.
- **[leaflet](https://rstudio.github.io/leaflet/)**: An R package for creating interactive maps using the Leaflet JavaScript library.
- **[plotly](https://plotly.com/r/)**: An R package for creating interactive visualizations with the Plotly library.
- **[RColorBrewer](https://cran.r-project.org/web/packages/RColorBrewer/)**: An R package providing color palettes for maps and other graphics.
- **[DT](https://rstudio.github.io/DT/)**: An R package for creating interactive data tables.
## 📚 Resources
You can find the code here: [Data Visualisation Code](https://go.arthurdanjou.fr/datavis-code)
And the online application here: [Data Visualisation App](https://go.arthurdanjou.fr/datavis-app)
## 📄 Detailed Report
<iframe src="/projects/datavis.pdf" width="100%" height="1000px">
</iframe>


@@ -0,0 +1,97 @@
---
slug: dataviz-tuberculose
title: Monitoring & Segmentation of Tuberculosis Cases
type: Academic Project
description: An interactive dashboard built with R, R Shiny, and ggplot2 for monitoring and segmenting global tuberculosis data from the WHO.
shortDescription: An interactive R Shiny dashboard for WHO tuberculosis monitoring and clustering.
publishedAt: 2026-01-05
readingTime: 1
status: Completed
tags:
- R
- R Shiny
- Data Visualization
- ggplot2
icon: i-ph-chart-bar-duotone
---
Interactive Shiny dashboard for WHO tuberculosis data analysis and clustering.
- **GitHub Repository:** [Tuberculose-Visualisation](https://github.com/ArthurDanjou/Tuberculose-Visualisation)
- **Live Application:** [Tuberculose Data Visualization](https://go.arthurdanjou.fr/datavis-app)
::BackgroundTitle{title="Overview"}
::
This project provides an interactive visualization tool for monitoring and segmenting global tuberculosis data from the World Health Organization (WHO). It applies multivariate analysis to reveal operational typologies of global health risks.
**Author:** Arthur Danjou
**Program:** M2 ISF - Dauphine PSL
**Course:** Data Visualisation (2025-2026)
::BackgroundTitle{title="Features"}
::
- Interactive world map with cluster visualization
- K-means clustering for country segmentation (Low/Moderate/Critical Impact)
- Time series analysis with year selector (animated)
- Region filtering by WHO regions
- Key Performance Indicators (KPIs) dashboard
- Raw data exploration with data tables
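The segmentation feature can be sketched independently of the dashboard. The app itself is written in R; the following is a Python/numpy illustration of the same idea (Lloyd's k-means on standardized burden indicators, synthetic data, names chosen to mirror the Low/Moderate/Critical categories).

```python
import numpy as np

def kmeans(X, centroids, iters=100):
    """Minimal Lloyd's algorithm from given initial centroids."""
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(len(centroids))
        ])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels

# Synthetic standardized country indicators (incidence, mortality), 3 groups
rng = np.random.default_rng(1)
X = np.vstack([
    rng.normal([-1.0, -1.0], 0.2, size=(30, 2)),  # low-burden countries
    rng.normal([0.0, 0.0], 0.2, size=(30, 2)),    # moderate burden
    rng.normal([1.5, 1.5], 0.2, size=(30, 2)),    # critical burden
])

# Deterministic init: three points spread across the burden range
burden = X.sum(axis=1)
init = X[np.argsort(burden)[[len(X) // 6, len(X) // 2, 5 * len(X) // 6]]]

centroids, labels = kmeans(X, init)
# Name clusters by total burden so labels map to the dashboard's categories
order = centroids.sum(axis=1).argsort()
names = dict(zip(order, ["Low Impact", "Moderate Impact", "Critical Impact"]))
segments = [names[l] for l in labels]
```

In the actual app the clustering runs in R (`kmeans()`) on WHO burden indicators and feeds the leaflet world map.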
::BackgroundTitle{title="Project Structure"}
::
```
├── app.R # Shiny application
├── NoticeTechnique.Rmd # Technical report (R Markdown)
├── NoticeTechnique.pdf # Compiled technical report
├── data/
│ ├── TB_analysis_ready.RData # Processed data with clusters
│ └── TB_burden_countries_2025-12-09.csv # Raw WHO data
└── renv/ # R package management
```
::BackgroundTitle{title="Requirements"}
::
- R (>= 4.0.0)
- R packages (see `renv.lock`):
- shiny
- shinydashboard
- leaflet
- plotly
- dplyr
- sf
- RColorBrewer
- DT
- rnaturalearth
::BackgroundTitle{title="Installation"}
::
1. Clone this repository
2. Open R/RStudio in the project directory
3. Restore packages with `renv::restore()`
4. Run the application:
```r
shiny::runApp("app.R")
```
::BackgroundTitle{title="Detailed Report"}
::
<iframe src="/projects/datavis.pdf" width="100%" height="1000px">
</iframe>
::BackgroundTitle{title="License"}
::
© 2026 Arthur Danjou. All rights reserved.
::BackgroundTitle{title="Resources"}
::
You can find the code here: [Data Visualisation Code](https://go.arthurdanjou.fr/datavis-code)
And the online application here: [Data Visualisation App](https://go.arthurdanjou.fr/datavis-app)


@@ -1,8 +1,9 @@
---
slug: dl-dropout-reduces-underfitting
title: Dropout Reduces Underfitting
type: Research Project
description: TensorFlow/Keras implementation and reproduction of "Dropout Reduces Underfitting" (Liu et al., 2023). A comparative study of Early and Late Dropout strategies to optimize model convergence.
shortDescription: Reproduction of "Dropout Reduces Underfitting" with TensorFlow/Keras, comparing Early and Late Dropout strategies.
publishedAt: 2024-12-10
readingTime: 6
status: Completed
@@ -18,20 +19,22 @@ icon: i-ph-share-network-duotone
The paper is available at: [https://arxiv.org/abs/2303.01500](https://arxiv.org/abs/2303.01500)
This repository contains a robust, modular **TensorFlow/Keras** implementation of **Early Dropout** and **Late Dropout** strategies. The goal is to verify the hypothesis that dropout, traditionally used to reduce overfitting, can also combat underfitting when applied only during the initial training phase.
::BackgroundTitle{title="Scientific Objectives"}
::
The study aims to validate the operating regimes of Dropout described in the paper:
1. **Early Dropout** (Targeting Underfitting): Active only during the initial phase to reduce gradient variance and align their direction, enabling better final optimization.
2. **Late Dropout** (Targeting Overfitting): Disabled at the start to allow rapid learning, then activated to regularize final convergence.
3. **Standard Dropout**: Constant rate throughout training (baseline).
4. **No Dropout**: Control experiment without dropout.
::BackgroundTitle{title="Technical Architecture"}
::
Unlike naive Keras callback implementations, this project uses a **dynamic approach via the TensorFlow graph** to ensure the dropout rate updates on the GPU without model recompilation.
### Key Components
@@ -39,7 +42,8 @@ Unlike naive Keras callback implementations, this project uses a **dynamic appro
* **`DropoutScheduler`**: A Keras `Callback` that drives the rate variable based on the current epoch and the chosen strategy (`early`, `late`, `standard`).
* **`ExperimentPipeline`**: An orchestrator class that handles data loading (MNIST, CIFAR-10, Fashion MNIST), model creation (Dense or CNN), and execution of comparative benchmarks.
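The scheduling logic behind `DropoutScheduler` can be sketched framework-free. In the repository it drives a TensorFlow rate variable from a Keras `Callback`; the function below only illustrates the rate schedule itself (the default `base_rate` and `switch_epoch` values are illustrative, not the repo's).

```python
def dropout_rate(epoch, strategy, base_rate=0.5, switch_epoch=10):
    """Dropout rate for a given epoch under each scheduling strategy.

    - 'early':    active only before `switch_epoch` (targets underfitting)
    - 'late':     active only after `switch_epoch` (targets overfitting)
    - 'standard': constant rate throughout training (baseline)
    - 'none':     control run without dropout
    """
    if strategy == "early":
        return base_rate if epoch < switch_epoch else 0.0
    if strategy == "late":
        return 0.0 if epoch < switch_epoch else base_rate
    if strategy == "standard":
        return base_rate
    if strategy == "none":
        return 0.0
    raise ValueError(f"unknown strategy: {strategy}")

# Example: per-epoch rates over a 20-epoch run for the Early Dropout strategy
early_schedule = [dropout_rate(e, "early") for e in range(20)]
```

In the actual implementation this value is written into a `tf.Variable` read by the dropout layers at each epoch boundary, which is what avoids model recompilation.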
::BackgroundTitle{title="File Structure"}
::
```
.
@@ -56,7 +60,8 @@ Unlike naive Keras callback implementations, this project uses a **dynamic appro
└── uv.lock # Dependency lock file
```
::BackgroundTitle{title="Installation"}
::
```bash
# Clone the repository
@@ -64,12 +69,14 @@ git clone https://github.com/arthurdanjou/dropoutreducesunderfitting.git
cd dropoutreducesunderfitting
```
::BackgroundTitle{title="Install dependencies"}
::
```bash
pip install tensorflow numpy matplotlib seaborn scikit-learn
```
::BackgroundTitle{title="Usage"}
::
The main notebook `pipeline.ipynb` contains all the necessary code. Here is how to run a typical experiment via the pipeline API.
@@ -85,7 +92,7 @@ exp = ExperimentPipeline(dataset_name="fashion_mnist", model_type="cnn")
### 2. Learning Curves Comparison
Compare training dynamics (loss and accuracy) of the three strategies.
```python
exp.compare_learning_curves(
@@ -132,19 +139,22 @@ exp.run_dataset_size_comparison(
)
```
::BackgroundTitle{title="Expected Results"}
::
According to the paper, you should observe:
- Early Dropout: Higher initial loss, followed by a sharp drop after the `switch_epoch`, often reaching a lower minimum than Standard Dropout (reduction of underfitting).
- Late Dropout: Rapid rise in accuracy at the start (potential overfitting), then stabilized by the activation of dropout.
::BackgroundTitle{title="Detailed Report"}
::
<iframe src="/projects/dropout-reduces-underfitting.pdf" width="100%" height="1000px">
</iframe>
::BackgroundTitle{title="Authors"}
::
- [Arthur Danjou](https://github.com/ArthurDanjou)
- [Alexis Mathieu](https://github.com/Alex6535)
@@ -154,4 +164,4 @@ According to the paper, you should observe:
M.Sc. Statistical and Financial Engineering (ISF) - Data Science Track at Université Paris-Dauphine PSL
Based on the work of Liu, Z., et al. (2023). Dropout Reduces Underfitting.


@@ -1,8 +1,9 @@
---
slug: glm-bikes
title: Generalized Linear Models for Bikes Prediction
type: Academic Project
description: Predicting the number of bikes rented in a bike-sharing system using Generalized Linear Models and various statistical techniques.
shortDescription: A project applying Generalized Linear Models to predict bike rentals based on environmental and temporal features.
publishedAt: 2025-01-24
readingTime: 1
status: Completed
@@ -14,18 +15,20 @@ tags:
icon: i-ph-bicycle-duotone
---
This project was completed as part of the **Generalized Linear Models** course at Paris-Dauphine PSL University. The objective was to develop and compare statistical models that predict bicycle rentals in a bike-sharing system using environmental and temporal features.
::BackgroundTitle{title="Project Objectives"}
::
- Determine the best predictive model for bicycle rental counts
- Analyze the impact of key features (temperature, humidity, wind speed, seasonality, etc.)
- Apply and evaluate different generalized linear modeling techniques
- Validate model assumptions and performance metrics
::BackgroundTitle{title="Methodology"}
::
The study uses a rigorous statistical workflow, including:
- **Exploratory Data Analysis (EDA)** - Understanding feature distributions and relationships
- **Model Comparison** - Testing multiple GLM families (Poisson, Negative Binomial, Gaussian)
@@ -33,7 +36,8 @@ The study employs rigorous statistical approaches including:
- **Model Diagnostics** - Validating assumptions and checking residuals
- **Cross-validation** - Ensuring robust performance estimates
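The core of the GLM comparison is the count-model fit. The project uses R's `glm()`; as an illustration of what that fit does under the hood, here is a self-contained numpy sketch of a Poisson GLM with log link estimated by iteratively reweighted least squares, on synthetic rental counts (coefficients are made up, not the report's estimates).

```python
import numpy as np

def poisson_irls(X, y, iters=50, tol=1e-8):
    """Fit a Poisson GLM with log link by iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        eta = X @ beta
        mu = np.exp(eta)
        z = eta + (y - mu) / mu   # working response
        W = mu                    # Poisson weights: variance function Var = mu
        XtW = X.T * W
        beta_new = np.linalg.solve(XtW @ X, XtW @ z)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

# Synthetic rentals: log-rate driven by temperature and humidity
rng = np.random.default_rng(0)
n = 5000
temp = rng.uniform(0, 1, n)
humid = rng.uniform(0, 1, n)
X = np.column_stack([np.ones(n), temp, humid])
true_beta = np.array([2.0, 1.5, -0.8])
y = rng.poisson(np.exp(X @ true_beta))

beta_hat = poisson_irls(X, y)  # recovers true_beta up to sampling noise
```

Swapping the variance function and link recovers the other GLM families compared in the study (Negative Binomial, Gaussian).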
::BackgroundTitle{title="Key Findings"}
::
The analysis identified critical factors influencing bike-sharing demand:
- Seasonal patterns and weather conditions
@@ -41,11 +45,13 @@ The analysis identified critical factors influencing bike-sharing demand:
- Holiday and working day distinctions
- Time-based trends and cyclical patterns
::BackgroundTitle{title="Resources"}
::
You can find the code here: [GLM Bikes Code](https://go.arthurdanjou.fr/glm-bikes-code)
::BackgroundTitle{title="Detailed Report"}
::
<iframe src="/projects/bikes-glm.pdf" width="100%" height="1000px">
</iframe>


@@ -0,0 +1,336 @@
---
slug: implied-volatility-prediction-from-options-data
title: Implied Volatility Prediction from Options Data
type: Academic Project
description: A large-scale statistical study comparing Generalized Linear Models (GLMs) and black-box machine learning architectures to predict the implied volatility of S&P 500 options.
shortDescription: Predicting implied volatility using advanced regression techniques and machine learning models on financial options data.
publishedAt: 2026-02-28
readingTime: 3
status: Completed
tags:
- R
- GLM
- Finance
- Machine Learning
- Statistical Modeling
icon: i-ph-graph-duotone
---
> **M2 Master's Project**: Predicting implied volatility using advanced regression techniques and machine learning models on financial options data.
This project explores the prediction of **implied volatility** from options market data, combining classical statistical methods with modern machine learning approaches. The analysis covers data preprocessing, feature engineering, model benchmarking, and interpretability analysis using real-world financial panel data.
- **GitHub Repository:** [Implied-Volatility-from-Options-Data](https://github.com/ArthurDanjou/Implied-Volatility-from-Options-Data)
---
::BackgroundTitle{title="Project Overview"}
::
### Problem Statement
Implied volatility represents the market's forward-looking expectation of an asset's future volatility. Accurate prediction is crucial for:
- **Option pricing** and valuation
- **Risk management** and hedging strategies
- **Trading strategies** based on volatility arbitrage
### Dataset
The project uses a comprehensive panel dataset tracking **3,887 assets** across **544 observation dates** (2019-2022):
| File | Description | Shape |
|------|-------------|-------|
| `Train_ISF.csv` | Training data with target variable | 1,909,465 rows × 21 columns |
| `Test_ISF.csv` | Test data for prediction | 1,251,308 rows × 18 columns |
| `hat_y.csv` | Final predictions from both models | 1,251,308 rows × 2 columns |
### Key Variables
**Target Variable:**
- `implied_vol_ref`: The implied volatility to predict
**Feature Categories:**
- **Identifiers:** `asset_id`, `obs_date`
- **Market Activity:** `call_volume`, `put_volume`, `call_oi`, `put_oi`, `total_contracts`
- **Volatility Metrics:** `realized_vol_short`, `realized_vol_mid1-3`, `realized_vol_long1-4`, `market_vol_index`
- **Option Structure:** `strike_dispersion`, `maturity_count`
---
::BackgroundTitle{title="Methodology"}
::
### Data Pipeline
```
Raw Data
┌─────────────────────────────────────────────────────────┐
│ Data Splitting (Chronological 80/20) │
│ - Training: 2019-10 to 2021-07 │
│ - Validation: 2021-07 to 2022-03 │
└─────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────┐
│ Feature Engineering │
│ - Aggregation of volatility horizons │
│ - Creation of financial indicators │
└─────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────┐
│ Data Preprocessing (tidymodels) │
│ - Winsorization (99.5th percentile) │
│ - Log/Yeo-Johnson transformations │
│ - Z-score normalization │
│ - PCA (95% variance retention) │
└─────────────────────────────────────────────────────────┘
Three Datasets Generated:
├── Tree-based (raw, scale-invariant)
├── Linear (normalized, winsorized)
└── PCA (dimensionality-reduced)
```
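The winsorization and normalization stages above are implemented with tidymodels recipes in the project; the two transformations themselves are simple enough to sketch in numpy. One-sided upper winsorization is an assumption here (the pipeline states the 99.5th percentile but not which tails are clipped), and the data are synthetic.

```python
import numpy as np

def winsorize_upper(x, q=99.5):
    """Clip values above the q-th percentile (tail treatment is an assumption)."""
    return np.minimum(x, np.percentile(x, q))

def zscore(x):
    """Standardize to zero mean and unit variance."""
    return (x - x.mean()) / x.std()

# Heavy-tailed synthetic volatility series (log-normal)
rng = np.random.default_rng(0)
vol = np.exp(rng.normal(size=10_000))

vol_w = winsorize_upper(vol, q=99.5)   # tame extreme outliers
vol_z = zscore(np.log(vol_w))          # log transform, then normalization
```

The PCA variant of the dataset then keeps the leading components explaining 95% of the variance of such standardized features.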
### Feature Engineering
New financial indicators created to capture market dynamics:
| Feature | Description | Formula |
|---------|-------------|---------|
| `pulse_ratio` | Volatility trend direction | RV_short / RV_long |
| `stress_spread` | Asset vs market stress | RV_short - Market_VIX |
| `put_call_ratio_volume` | Immediate market stress | Put_Volume / Call_Volume |
| `put_call_ratio_oi` | Long-term risk structure | Put_OI / Call_OI |
| `liquidity_ratio` | Market depth | Total_Volume / Total_OI |
| `option_dispersion` | Market uncertainty | Strike_Dispersion / Total_Contracts |
| `put_low_strike` | Downside protection density | Strike_Dispersion / Put_OI |
| `put_proportion` | Hedging vs speculation | Put_Volume / Total_Volume |
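The indicators in the table above are simple ratios and spreads of the raw columns. A numpy sketch of a few of them, with column names taken from the dataset and toy values for illustration (the project computes these in R over the full panel):

```python
import numpy as np

# Toy option-activity columns for three asset-date rows (values illustrative)
call_volume = np.array([120.0, 80.0, 200.0])
put_volume = np.array([150.0, 40.0, 260.0])
call_oi = np.array([1000.0, 900.0, 1500.0])
put_oi = np.array([1300.0, 600.0, 2100.0])
realized_vol_short = np.array([0.35, 0.18, 0.52])
realized_vol_long = np.array([0.28, 0.22, 0.30])
market_vol_index = np.array([0.25, 0.25, 0.25])

pulse_ratio = realized_vol_short / realized_vol_long    # volatility trend direction
stress_spread = realized_vol_short - market_vol_index   # asset vs market stress
put_call_ratio_volume = put_volume / call_volume        # immediate market stress
put_call_ratio_oi = put_oi / call_oi                    # long-term risk structure
total_volume = call_volume + put_volume
total_oi = call_oi + put_oi
liquidity_ratio = total_volume / total_oi               # market depth
put_proportion = put_volume / total_volume              # hedging vs speculation
```

Vectorized like this, the same expressions apply unchanged to the ~1.9M-row training panel.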
---
::BackgroundTitle{title="Models Implemented"}
::
### Linear Models
| Model | Description | Best RMSE |
|-------|-------------|-----------|
| **OLS** | Ordinary Least Squares | 11.26 |
| **Ridge** | L2 regularization | 12.48 |
| **Lasso** | L1 regularization (variable selection) | 12.03 |
| **Elastic Net** | L1 + L2 combined | ~12.03 |
| **PLS** | Partial Least Squares (on PCA) | 12.79 |
### Linear Mixed-Effects Models (LMM)
Advanced panel data models accounting for asset-specific effects:
| Model | Features | RMSE |
|-------|----------|------|
| LMM Baseline | All variables + Random Intercept | 8.77 |
| LMM Reduced | Collinearity removal | ~8.77 |
| LMM Interactions | Financial interaction terms | ~8.77 |
| LMM + Quadratic | Convexity terms (vol of vol) | 8.41 |
| **LMM + Random Slopes (mod_lmm_5)** | Asset-specific betas | **8.10** ⭐ |
### Tree-Based Models
| Model | Strategy | Validation RMSE | Training RMSE |
|-------|----------|-----------------|---------------|
| **XGBoost** | Level-wise, Bayesian tuning | 10.70 | 0.57 |
| **LightGBM** | Leaf-wise, feature regularization | **10.61** ⭐ | 10.90 |
| Random Forest | Bagging | DNF* | - |
*DNF: Did Not Finish (computational constraints)
### Neural Networks
| Model | Architecture | Status |
|-------|--------------|--------|
| MLP | 128-64 units, tanh activation | Failed to converge |
---
::BackgroundTitle{title="Results Summary"}
::
### Model Comparison
```
RMSE Performance (Lower is Better)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Linear Mixed-Effects (LMM5) 8.38 ████████████████████ Best Linear
Linear Mixed-Effects (LMM4) 8.41 ███████████████████
Linear Mixed-Effects (Baseline) 8.77 ██████████████████
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
LightGBM 10.61 ███████████████ Best Non-Linear
XGBoost 10.70 ██████████████
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
OLS (with interactions) 11.26 █████████████
OLS (baseline) 12.01 ███████████
Lasso 12.03 ███████████
Ridge 12.48 ██████████
PLS 12.79 █████████
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
### Key Findings
1. **Best Linear Model:** LMM with Random Slopes (RMSE = 8.38)
- Captures asset-specific volatility sensitivities
- Includes quadratic terms for convexity effects
2. **Best Non-Linear Model:** LightGBM (RMSE = 10.61)
- Superior generalization vs XGBoost
- Feature regularization prevents overfitting
3. **Interpretability Insights (SHAP Analysis):**
- `realized_vol_mid` dominates (57% of gain)
- Volatility clustering confirmed as primary driver
- Non-linear regime switching in stress_spread
---
::BackgroundTitle{title="Repository Structure"}
::
```
PROJECT/
├── Projet_MRC_DANJOU_LEGRAND_MERIC_VONSIEMENS.qmd # Main analysis (Quarto)
├── Projet_MRC_DANJOU_LEGRAND_MERIC_VONSIEMENS.html # Rendered report
├── packages.R # R dependencies installer
├── Train_ISF.csv # Training data (~1.9M rows)
├── Test_ISF.csv # Test data (~1.25M rows)
├── hat_y.csv # Final predictions
├── README.md # This file
└── results/
├── lightgbm/ # LightGBM model outputs
└── xgboost/ # XGBoost model outputs
```
---
::BackgroundTitle{title="Getting Started"}
::
### Prerequisites
- **R** ≥ 4.0
- Required packages (auto-installed via `packages.R`)
### Installation
```r
# Install all dependencies
source("packages.R")
```
Or manually install key packages:
```r
install.packages(c(
"tidyverse", "tidymodels", "caret", "glmnet",
"lme4", "lmerTest", "xgboost", "lightgbm",
"ranger", "pls", "shapviz", "rBayesianOptimization"
))
```
### Running the Analysis
1. **Open the Quarto document:**
```r
# In RStudio
rstudioapi::navigateToFile("Projet_MRC_DANJOU_LEGRAND_MERIC_VONSIEMENS.qmd")
```
2. **Render the document:**
```r
quarto::quarto_render("Projet_MRC_DANJOU_LEGRAND_MERIC_VONSIEMENS.qmd")
```
3. **Or run specific sections interactively** using the code chunks in the `.qmd` file
---
::BackgroundTitle{title="Technical Details"}
::
### Data Split Strategy
- **Chronological split** at 80th percentile of dates
- Prevents look-ahead bias and data leakage
- Training: ~1.53M observations
- Validation: ~376K observations
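A minimal sketch of this chronological split, assuming a `date` column (the project's actual column names may differ):

```python
import pandas as pd

# Toy panel standing in for the real training data
df = pd.DataFrame({
    "date": pd.to_datetime(["2021-01-04", "2021-06-01", "2022-01-03",
                            "2022-06-01", "2023-01-02"]),
    "target": [1.0, 2.0, 3.0, 4.0, 5.0],
})

# Split at the 80th percentile of dates: everything up to the cutoff
# is training, everything after is validation (no shuffling, no leakage).
cutoff = df["date"].quantile(0.8)
train = df[df["date"] <= cutoff]
valid = df[df["date"] > cutoff]
```

Because the split is on time rather than on row indices, no validation observation predates a training observation.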
### Hyperparameter Tuning
- **Method:** Bayesian Optimization (Gaussian Processes)
- **Acquisition:** Upper Confidence Bound (UCB)
- **Goal:** Maximize negative RMSE
### Evaluation Metric
**Exponential RMSE** on original scale:
$$
\mathrm{RMSE}_{\mathrm{real}} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( \exp(\hat{y}_{\log, i}) - y_i \right)^2}
$$
Models trained on log-transformed target for variance stabilization.
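The metric follows directly from the formula: predictions are mapped back to the original scale with `exp` before the error is computed. A NumPy illustration with toy numbers (the project computes this in R):

```python
import numpy as np

def rmse_original_scale(y_hat_log, y_true):
    """RMSE after mapping log-scale predictions back with exp."""
    return float(np.sqrt(np.mean((np.exp(y_hat_log) - y_true) ** 2)))

y_true = np.array([10.0, 20.0, 30.0])
y_hat_log = np.log(np.array([11.0, 19.0, 33.0]))  # toy log-scale predictions
print(rmse_original_scale(y_hat_log, y_true))
```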
---
::BackgroundTitle{title="Key Concepts"}
::
### Financial Theories Applied
1. **Volatility Clustering**: Past volatility predicts future volatility
2. **Variance Risk Premium**: Spread between implied and realized volatility
3. **Fear Gauge**: Put-call ratio as a sentiment indicator
4. **Mean Reversion**: Volatility tends to return to its long-term average
5. **Liquidity Premium**: Illiquid assets command higher volatility
### Statistical Methods
- Panel data modeling with fixed and random effects
- Principal Component Analysis (PCA)
- Bayesian hyperparameter optimization
- SHAP values for model interpretability
---
::BackgroundTitle{title="Authors"}
::
**Team:**
- Arthur DANJOU
- Camille LEGRAND
- Axelle MERIC
- Moritz VON SIEMENS
**Course:** Classification and Regression (M2)
**Academic Year:** 2025-2026
---
::BackgroundTitle{title="Notes"}
::
- **Computational Constraints:** Some models (Random Forest, MLP) failed due to hardware limitations (16GB RAM, CPU-only)
- **Reproducibility:** Set `seed = 2025` for consistent results
- **Language:** Analysis documented in English, course materials in French
---
::BackgroundTitle{title="References"}
::
Key R packages used:
- `tidymodels`: Modern modeling framework
- `glmnet`: Regularized regression
- `lme4` / `lmerTest`: Mixed-effects models
- `xgboost` / `lightgbm`: Gradient boosting
- `shapviz`: Model interpretability
- `rBayesianOptimization`: Hyperparameter tuning

View File

@@ -0,0 +1,59 @@
---
slug: hackathon-cnd
title: "CND Hackathon: Defense-Grade Log Intelligence"
type: Hackathon
description: A high-stakes cybersecurity challenge organized by the French Ministry of Defense (CND). Representing Université Paris-Dauphine, our team spent 3 days in a high-security military fortress developing ML models to detect stealthy cyber threats in firewall logs.
shortDescription: Cybersecurity threat detection within a high-security military environment.
publishedAt: 2025-10-28
readingTime: 4
status: Completed
tags:
- Python
- Streamlit
- Cybersecurity
- Machine Learning
- Scikit-learn
icon: i-ph-shield-check-duotone
---
::BackgroundTitle{title="The Setting: Fort de Mont-Valérien"}
::
This was not a typical university hackathon. Organized by the **Commissariat au Numérique de Défense (CND)**, the event took place over three intense days within the walls of the **Fort de Mont-Valérien**, a highly secured military fortress.
Working in this environment underscored the real-world stakes of the mission. Our **team of six**, representing **Université Paris-Dauphine**, competed against several elite engineering schools to solve critical defense-related data challenges.
::BackgroundTitle{title="The Mission: Classifying the Invisible"}
::
The core task involved processing poorly labeled and noisy firewall logs. In a defense context, a "missing" log or a mislabeled entry can be the difference between a minor system bug and a coordinated intrusion.
### 1. Tactical Log Translation
Firewall logs are often cryptic and inconsistent. We developed a preprocessing pipeline to:
* **Feature Extraction:** Parse raw logs into structured data (headers, flags, payloads).
* **Contextual Labeling:** Distinguish between routine system "bugs" (non-malicious failures) and actual "attacks" (malicious intent).
### 2. Strategic Goal: Recalling the Threat
In military cybersecurity, the cost of a **False Negative** (an undetected attack) is catastrophic.
* **Model Priority:** We optimized our classifiers specifically for **Recall**. We would rather investigate a few system bugs (False Positives) than let a single attack slip through the net.
* **Techniques:** We used ensemble methods (XGBoost/Random Forest) combined with advanced resampling to handle the heavy class imbalance typical of network traffic.
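A minimal sketch of this recall-first setup on synthetic data (the real features, models, and resampling pipeline are not reproduced here; `class_weight="balanced"` is one imbalance-handling technique among those we combined):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Imbalanced toy data standing in for the firewall logs:
# ~95% "bug" (class 0), ~5% "attack" (class 1).
X, y = make_classification(n_samples=2000, n_features=10, weights=[0.95],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" penalizes missed attacks more heavily,
# trading some false positives for higher recall.
clf = RandomForestClassifier(class_weight="balanced", random_state=0)
clf.fit(X_tr, y_tr)
print(recall_score(y_te, clf.predict(X_te)))
```

Scoring and tuning against recall (rather than accuracy) is what encodes the "never miss an attack" priority.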
> **Key Achievement:** Our model significantly reduced the rate of undetected threats compared to the baseline configurations provided at the start of the challenge.
::BackgroundTitle{title="Deployment & Interaction"}
::
To make our findings operational, we built a **Streamlit-based command center**:
* **On-the-Fly Analysis:** Security officers can paste a single log line to get an immediate "Bug vs. Attack" probability score.
* **Bulk Audit:** The interface supports CSV uploads, allowing for the rapid analysis of entire daily log batches to highlight high-risk anomalies.
::BackgroundTitle{title="Technical Stack"}
::
* **Language:** Python
* **ML Library:** Scikit-learn, XGBoost
* **Deployment:** Streamlit
* **Environment:** High-security on-site military infrastructure
---
Representing Dauphine in such a specialized environment was a highlight of my academic year. I can share more details on the feature engineering techniques we used to clean the raw military logs.

View File

@@ -0,0 +1,59 @@
---
slug: hackathon-natixis
title: "Natixis Hackathon: Generative SQL Analytics"
type: Hackathon
description: An intensive 4-week challenge to build an AI-powered data assistant. Our team developed a GenAI agent that transforms natural language into executable SQL queries, interactive visualizations, and natural language insights.
shortDescription: A team-based project building an NL-to-SQL agent with Nuxt, Ollama, and Vercel AI SDK.
publishedAt: 2026-03-07
readingTime: 4
status: Completed
tags:
- Nuxt
- Ollama
- Vercel AI SDK
- PostgreSQL
- ETL
icon: i-ph-database-duotone
---
::BackgroundTitle{title="The Challenge"}
::
Organized by **Natixis**, this hackathon followed a high-intensity format: **three consecutive Saturdays** of on-site development, bridged by two full weeks of remote collaboration.
Working in a **team of four**, our goal was to bridge the gap between non-technical stakeholders and complex financial databases by creating an autonomous "Data Talk" agent.
::BackgroundTitle{title="Core Features"}
::
### 1. Data Engineering & Schema Design
Before building the AI layer, we handled a significant data migration task. I led the effort to:
* **ETL Pipeline:** Convert fragmented datasets from **.xlsx** and **.csv** formats into a structured **SQL database**.
* **Schema Optimization:** Design robust SQL schemas that allow an LLM to understand relationships (foreign keys, indexing) for accurate query generation.
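A toy version of this ETL step (using sqlite in place of the hackathon's PostgreSQL so the sketch stays self-contained; the table and values are invented):

```python
import sqlite3

import pandas as pd

conn = sqlite3.connect(":memory:")

# Fragmented sources: in practice these come from pd.read_csv / pd.read_excel.
frames = [
    pd.DataFrame({"client_id": [1, 2], "revenue": [100.0, 250.0]}),  # from .csv
    pd.DataFrame({"client_id": [3], "revenue": [80.0]}),             # from .xlsx
]
pd.concat(frames, ignore_index=True).to_sql("revenues", conn, index=False)

# The structured table is now queryable by the NL-to-SQL agent.
total = conn.execute("SELECT SUM(revenue) FROM revenues").fetchone()[0]
print(total)  # 430.0
```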
### 2. Natural Language to SQL (NL-to-SQL)
Using the **Vercel AI SDK** and **Ollama**, we implemented an agentic workflow:
* **Prompt Engineering:** Iteratively refining prompts so the agent translates complex business questions (e.g., "What was our highest growth margin last quarter?") into valid, optimized SQL.
* **Self-Correction:** If a query fails, the agent analyzes the SQL error and self-corrects the syntax before returning a result.
### 3. Automated Insights & Visualization
Data is only useful if it's readable. Our Nuxt application goes beyond raw tables:
* **Dynamic Charts:** The agent automatically determines the best visualization type (Bar, Line, Pie) based on the query result and renders it using interactive components.
* **Narrative Explanations:** A final LLM pass summarizes the data findings in plain English, highlighting anomalies or key trends.
::BackgroundTitle{title="Technical Stack"}
::
* **Frontend/API:** **Nuxt 3** for a seamless, reactive user interface.
* **Orchestration:** **Vercel AI SDK** to manage streams and tool-calling logic.
* **Inference:** **Ollama** for running LLMs locally, ensuring data privacy during development.
* **Storage:** **PostgreSQL** for the converted data warehouse.
::BackgroundTitle{title="Impact & Results"}
::
This project demonstrated that a modern stack (Nuxt + local LLMs) can drastically reduce the time needed for data discovery. By the final Saturday, our team presented a working prototype capable of handling multi-table joins and generating real-time financial dashboards from simple chat prompts.
---
*Curious about the ETL logic or the prompt structure we used? I can share how we optimized the LLM's SQL accuracy.*

View File

@@ -1,8 +1,9 @@
---
slug: loan-ml
slug: ml-loan
title: Machine Learning for Loan Prediction
type: Academic Project
description: Predicting loan approval and default risk using machine learning classification techniques.
shortDescription: A project applying machine learning to predict loan approvals and assess default risk.
publishedAt: 2025-01-24
readingTime: 2
status: Completed
@@ -15,18 +16,20 @@ tags:
icon: i-ph-money-wavy-duotone
---
This project focuses on building machine learning models to predict loan approval outcomes and assess default risk. The objective is to develop robust classification models that can effectively identify creditworthy applicants.
This project focuses on building machine learning models to predict loan approval outcomes and assess default risk. The objective is to develop robust classification models that identify creditworthy applicants.
## 📊 Project Objectives
::BackgroundTitle{title="Project Objectives"}
::
- Build and compare multiple classification models for loan prediction
- Identify key factors influencing loan approval decisions
- Evaluate model performance using appropriate metrics
- Optimize model parameters for better predictive accuracy
## 🔍 Methodology
::BackgroundTitle{title="Methodology"}
::
The study employs various machine learning approaches:
The study employs a range of machine learning approaches:
- **Exploratory Data Analysis (EDA)** - Understanding applicant characteristics and patterns
- **Feature Engineering** - Creating meaningful features from raw data
@@ -34,7 +37,8 @@ The study employs various machine learning approaches:
- **Hyperparameter Tuning** - Optimizing model performance
- **Cross-validation** - Ensuring robust generalization
## 📄 Detailed Report
::BackgroundTitle{title="Detailed Report"}
::
<iframe src="/projects/loan-ml.pdf" width="100%" height="1000px">
</iframe>
</iframe>

View File

@@ -3,6 +3,7 @@ slug: monte-carlo-project
title: Monte Carlo Methods Project
type: Academic Project
description: An implementation of different Monte Carlo methods and algorithms in R, including inverse CDF simulation, accept-reject methods, and stratification techniques.
shortDescription: A project implementing various Monte Carlo methods and algorithms in R.
publishedAt: 2024-11-24
readingTime: 3
status: Completed
@@ -16,22 +17,25 @@ tags:
icon: i-ph-dice-five-duotone
---
This report presents the Monte Carlo Methods Project completed as part of the **Monte Carlo Methods** course at Paris-Dauphine University. The goal was to implement different methods and algorithms using Monte Carlo methods in R.
This report presents the Monte Carlo Methods Project completed as part of the **Monte Carlo Methods** course at Paris-Dauphine University. The goal was to implement a range of Monte Carlo methods and algorithms in R.
## 🛠️ Methods and Algorithms
::BackgroundTitle{title="Methods and Algorithms"}
::
- Plotting graphs of functions
- Inverse c.d.f. Random Variation simulation
- Accept-Reject Random Variation simulation
- Random Variable simulation with stratification
- Inverse CDF random variation simulation
- Accept-Reject random variation simulation
- Random variable simulation with stratification
- Cumulative density function
- Empirical Quantile Function
- Empirical quantile function
## 📚 Resources
::BackgroundTitle{title="Resources"}
::
You can find the code here: [Monte Carlo Project Code](https://go.arthurdanjou.fr/monte-carlo-code)
## 📄 Detailed Report
::BackgroundTitle{title="Detailed Report"}
::
<iframe src="/projects/monte-carlo.pdf" width="100%" height="1000px">
</iframe>

View File

@@ -0,0 +1,59 @@
---
slug: n8n-automations
title: n8n Automations
type: Academic Project
description: An academic project exploring the automation of GenAI workflows using n8n and Ollama for self-hosted AI applications, including personalized research agents and productivity hubs.
shortDescription: Automating GenAI workflows with n8n and Ollama in a self-hosted environment.
publishedAt: 2026-03-15
readingTime: 2
status: Completed
tags:
- n8n
- Gemini
- Self-Hosted
- Automation
- RAG
- Productivity
icon: i-ph-plugs-connected-duotone
---
::BackgroundTitle{title="Overview"}
::
This project focuses on designing and implementing autonomous workflows that leverage Large Language Models (LLMs) to streamline productivity and academic research. By orchestrating Generative AI through a self-hosted infrastructure on my **[ArtLab](/projects/artlab)**, I built a private ecosystem that acts as both a personal assistant and a specialized research agent.
::BackgroundTitle{title="Key Workflows"}
::
### 1. Centralized Productivity Hub
I developed a synchronization engine that bridges **Notion**, **Google Calendar**, and **Todoist**.
* **Contextual Sync:** Academic events, such as course schedules and exam dates, are pulled from Notion and reflected in my calendar and task manager.
* **Daily Briefing:** Every morning, the system triggers a workflow that compiles my schedule, pending tasks, and a local weather report into a single, centralized email summary. This ensures a frictionless start to the day with all critical information in one place.
### 2. Intelligent Research Engine (RSS & RAG)
To stay at the forefront of AI research, I built an automated pipeline for academic and technical monitoring.
* **Multi-Source Fetching:** The system monitors RSS feeds from **arXiv**, **Hugging Face**, **Hacker News**, **selfho.st**, and major industry blogs (OpenAI, Google Research, Meta).
* **Semantic Filtering:** Using LLMs, articles are filtered and ranked based on my specific research profile, with a focus on **robust distributed learning**.
* **Knowledge Base:** Relevant papers and posts are automatically stored in a structured Notion database.
* **Interactive Research Agent:** I integrated a chat interface within n8n that allows me to query this collected data. I can request summaries, ask specific technical questions about a paper, or extract the most relevant insights for my current thesis work.
::BackgroundTitle{title="Technical Architecture"}
::
The environment is built to handle complex multi-step chains, moving beyond simple API calls to create context-aware agents.
### Integrated Ecosystem
* **Intelligence Layer:** Integration with **Gemini** (API) and **Ollama** (local) for summarization and semantic sorting.
* **Data Sources:** RSS feeds and Notion databases.
* **Notifications & UI:** Gmail for briefings and Discord for real-time system alerts.
::BackgroundTitle{title="Key Objectives"}
::
1. **Privacy-Centric AI:** Ensuring that sensitive academic data and personal schedules remain within a self-hosted or controlled environment.
2. **Academic Efficiency:** Reducing the "noise" of information overload by using AI to surface only the most relevant research papers.
3. **Low-Code Orchestration:** Utilizing n8n to manage complex logic and API interactions without the overhead of maintaining a massive custom codebase.
---
*I continue to refine the RAG (Retrieval-Augmented Generation) logic and to optimize the filtering prompts for my research.*

View File

@@ -0,0 +1,119 @@
---
slug: rl-tennis-atari-game
title: Reinforcement Learning for Tennis Strategy Optimization
type: Academic Project
description: An academic project exploring the application of reinforcement learning to optimize tennis strategies. The project involves training RL agents on Atari Tennis (ALE) to evaluate strategic decision-making through competitive self-play and baseline benchmarking.
shortDescription: Reinforcement learning algorithms applied to Atari tennis matches for strategy optimization and competitive benchmarking.
publishedAt: 2026-03-13
readingTime: 3
status: Completed
tags:
- Reinforcement Learning
- Python
- Gymnasium
- Atari
- ALE
icon: i-ph-lightning-duotone
---
Comparison of Reinforcement Learning algorithms on Atari Tennis (`ALE/Tennis-v5` via Gymnasium/PettingZoo).
- **GitHub Repository:** [Tennis-Atari-Game](https://github.com/ArthurDanjou/Tennis-Atari-Game)
::BackgroundTitle{title="Overview"}
::
This project implements and compares five RL agents playing Atari Tennis against the built-in AI and in head-to-head tournaments.
::BackgroundTitle{title="Algorithms"}
::
| Agent | Type | Policy | Update Rule |
|-------|------|--------|-------------|
| **Random** | Baseline | Uniform random | None |
| **SARSA** | TD(0), on-policy | ε-greedy | $W_a \leftarrow W_a + \alpha \cdot (r + \gamma \hat{q}(s', a') - \hat{q}(s, a)) \cdot \phi(s)$ |
| **Q-Learning** | TD(0), off-policy | ε-greedy | $W_a \leftarrow W_a + \alpha \cdot (r + \gamma \max_{a'} \hat{q}(s', a') - \hat{q}(s, a)) \cdot \phi(s)$ |
| **Monte Carlo** | First-visit MC | ε-greedy | $W_a \leftarrow W_a + \alpha \cdot (G_t - \hat{q}(s, a)) \cdot \phi(s)$ |
| **DQN** | Deep Q-Network | ε-greedy | MLP (256→256) with experience replay & target network |
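The linear update rules in the table all share one shape. A NumPy sketch of the Q-Learning variant (the hyperparameters and the random features here are illustrative, not the ones used in the notebook):

```python
import numpy as np

def q_values(W, phi):
    """q(s, a; W) = W_a^T phi(s), computed for all actions at once."""
    return W @ phi

def q_learning_update(W, phi_s, a, r, phi_s_next,
                      alpha=0.01, gamma=0.99, done=False):
    """Off-policy TD(0) update of the weights for the taken action a."""
    target = r if done else r + gamma * np.max(q_values(W, phi_s_next))
    td_error = target - q_values(W, phi_s)[a]
    W[a] += alpha * td_error * phi_s
    return W

n_actions, n_features = 18, 128  # ALE action space, RAM observation
rng = np.random.default_rng(0)
W = np.zeros((n_actions, n_features))
phi_s = rng.random(n_features)
W = q_learning_update(W, phi_s, a=3, r=1.0, phi_s_next=rng.random(n_features))
```

Replacing the `max` over next-state actions with the value of the actually chosen next action gives the SARSA row, and replacing the bootstrapped target with the return $G_t$ gives the Monte Carlo row.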
::BackgroundTitle{title="Architecture"}
::
- **Linear agents** (SARSA, Q-Learning, Monte Carlo): $\hat{q}(s, a; \mathbf{W}) = \mathbf{W}_a^\top \phi(s)$ with $\phi(s) \in \mathbb{R}^{128}$ (RAM observation)
- **DQN**: MLP network (128 → 128 → 64 → 18) trained with Adam optimizer, Huber loss, and periodic target network sync
::BackgroundTitle{title="Environment"}
::
- **Game**: Atari Tennis via PettingZoo (`tennis_v3`)
- **Observation**: RAM state (128 features)
- **Action Space**: 18 discrete actions
- **Agents**: 2 players (`first_0` and `second_0`)
::BackgroundTitle{title="Project Structure"}
::
```
.
├── Project_RL_DANJOU_VON-SIEMENS.ipynb # Main notebook
├── README.md # This file
├── checkpoints/ # Saved agent weights
│ ├── sarsa.pkl
│ ├── q_learning.pkl
│ ├── montecarlo.pkl
│ └── dqn.pkl
└── plots/ # Training & evaluation plots
├── SARSA_training_curves.png
├── Q-Learning_training_curves.png
├── MonteCarlo_training_curves.png
├── DQN_training_curves.png
├── evaluation_results.png
└── championship_matrix.png
```
::BackgroundTitle{title="Key Results"}
::
### Win Rate vs Random Baseline
| Agent | Win Rate |
|-------|----------|
| SARSA | 88.9% |
| Q-Learning | 41.2% |
| Monte Carlo | 47.1% |
| DQN | 6.2% |
### Championship Tournament
Full round-robin tournament where each agent faces every other agent in both positions (first_0/second_0).
::BackgroundTitle{title="Notebook Sections"}
::
1. **Configuration & Checkpoints** — Incremental training workflow with pickle serialization
2. **Utility Functions** — Observation normalization, ε-greedy policy
3. **Agent Definitions** — `RandomAgent`, `SarsaAgent`, `QLearningAgent`, `MonteCarloAgent`, `DQNAgent`
4. **Training Infrastructure** — `train_agent()`, `plot_training_curves()`
5. **Evaluation** — Match system, random baseline, round-robin tournament
6. **Results & Visualization** — Win rate plots, matchup matrix heatmap
::BackgroundTitle{title="Known Issues"}
::
- **Monte Carlo & DQN**: Checkpoint loading issues — saved weights may not restore properly during evaluation (training works correctly)
::BackgroundTitle{title="Dependencies"}
::
- Python 3.13+
- `numpy`, `matplotlib`
- `torch`
- `gymnasium`, `ale-py`
- `pettingzoo`
- `tqdm`
::BackgroundTitle{title="Authors"}
::
- Arthur DANJOU
- Moritz VON SIEMENS

View File

@@ -3,6 +3,7 @@ slug: schelling-segregation-model
title: Schelling Segregation Model
type: Academic Project
description: A Python implementation of the Schelling Segregation Model using statistics and data visualization to analyze spatial segregation patterns.
shortDescription: A project implementing the Schelling Segregation Model in Python.
publishedAt: 2024-05-03
readingTime: 4
status: Completed
@@ -15,13 +16,15 @@ tags:
icon: i-ph-city-duotone
---
This report presents the Schelling Segregation Model project completed as part of the **Projet Numérique** course at Paris-Saclay University. The goal was to implement the Schelling Segregation Model in Python and analyze the results using statistics and data visualization.
This report presents the Schelling Segregation Model project completed as part of the **Projet Numérique** course at Paris-Saclay University. The goal was to implement the Schelling Segregation Model in Python and analyze the results using statistics and data visualization.
## 📚 Resources
::BackgroundTitle{title="Resources"}
::
You can find the code here: [Schelling Segregation Model Code](https://go.arthurdanjou.fr/schelling-code)
## 📄 Detailed Report
::BackgroundTitle{title="Detailed Report"}
::
<iframe src="/projects/schelling.pdf" width="100%" height="1000px">
</iframe>

View File

@@ -3,6 +3,7 @@ slug: sevetys
title: Data Engineer Internship at Sevetys
type: Internship Project
description: Summary of my internship as a Data Engineer at Sevetys, focusing on data quality, cleaning, standardization, and comprehensive data quality metrics.
shortDescription: A summary of my Data Engineer internship at Sevetys, focusing on data quality and cleaning processes.
publishedAt: 2025-07-31
readingTime: 2
status: Completed
@@ -17,15 +18,17 @@ icon: i-ph-dog-duotone
[**Sevetys**](https://sevetys.fr) is a leading French network of over 200 veterinary clinics, employing more than 1,300 professionals. Founded in 2017, the group provides comprehensive veterinary care for companion animals, exotic pets, and livestock, with services ranging from preventive medicine and surgery to cardiology, dermatology, and 24/7 emergency care.
Committed to digital innovation, Sevetys leverages centralized data systems to optimize clinic operations, improve patient data management, and enhance the overall client experience. This combination of medical excellence and operational efficiency supports veterinarians in delivering the highest quality care nationwide.
Committed to digital innovation, Sevetys leverages centralized data systems to optimize clinic operations, improve patient data management, and enhance the overall client experience. This combination of medical excellence and operational efficiency supports veterinarians in delivering high-quality care nationwide.
## 🎯 Internship Objectives
::BackgroundTitle{title="Internship Objectives"}
::
During my two-month internship as a Data Engineer, I focused primarily on cleaning and standardizing customer and patient data a critical task, as this data is extensively used by clinics, Marketing, and Performance teams. Ensuring data quality was therefore essential to the company's operations.
During my two-month internship as a Data Engineer, I focused primarily on cleaning and standardizing customer and patient data, a critical task because this data is extensively used by clinics, Marketing, and Performance teams. Ensuring data quality was essential to the company's operations.
Additionally, I took charge of revising and enhancing an existing data quality report designed to evaluate the effectiveness of my cleaning processes. The report encompassed 47 detailed metrics assessing data completeness and consistency, providing valuable insights that helped maintain high standards across the organization.
Additionally, I revised and enhanced an existing data quality report designed to evaluate the effectiveness of my cleaning processes. The report covered 47 detailed metrics assessing data completeness and consistency, providing valuable insights that helped maintain high standards across the organization.
## ⚙️ Technology Stack
::BackgroundTitle{title="Technology Stack"}
::
- **[Microsoft Azure Cloud](https://azure.microsoft.com/)**: Cloud infrastructure platform
- **[PySpark](https://spark.apache.org/docs/latest/api/python/)**: Distributed data processing framework

View File

@@ -1,8 +1,9 @@
---
slug: breast-cancer
slug: sl-breast-cancer
title: Breast Cancer Detection
type: Academic Project
description: Prediction of breast cancer presence by comparing several supervised classification models using machine learning techniques.
shortDescription: A project comparing supervised classification models to predict breast cancer presence using machine learning.
publishedAt: 2025-06-06
readingTime: 2
status: Completed
@@ -16,7 +17,8 @@ icon: i-ph-heart-half-duotone
This project was carried out as part of the **Statistical Learning** course at Paris-Dauphine PSL University. The objective is to identify the most effective model for predicting or explaining the presence of breast cancer based on a set of biological and clinical features.
## 📊 Project Objectives
::BackgroundTitle{title="Project Objectives"}
::
Develop and evaluate several supervised classification models to predict the presence of breast cancer based on biological features extracted from the Breast Cancer Coimbra dataset, provided by the UCI Machine Learning Repository.
@@ -26,7 +28,8 @@ The dataset contains 116 observations divided into two classes:
There are 9 explanatory variables, including clinical measurements such as age, insulin levels, leptin, insulin resistance, among others.
## 🔍 Methodology
::BackgroundTitle{title="Methodology"}
::
The project follows a comparative approach between several algorithms:
@@ -39,11 +42,13 @@ Model evaluation is primarily based on the F1-score, which is more suitable in a
This project illustrates a concrete application of data science techniques to a public health issue, while implementing a rigorous methodology for supervised modeling.
## 📚 Resources
::BackgroundTitle{title="Resources"}
::
You can find the code here: [Breast Cancer Detection](https://go.arthurdanjou.fr/breast-cancer-detection-code)
## 📄 Detailed Report
::BackgroundTitle{title="Detailed Report"}
::
<iframe src="/projects/breast-cancer.pdf" width="100%" height="1000px">
</iframe>

View File

@@ -9,11 +9,12 @@ Research requires a reliable environment. This page documents the hardware infra
---
## 🖥️ Workstations & Compute
::BackgroundTitle{title="Workstations & Compute"}
::
My setup is split between mobile efficiency for academic writing and a fixed station for heavier computation.
::div{class="grid grid-cols-1 md:grid-cols-2 gap-6"}
:::div{class="grid grid-cols-1 md:grid-cols-2 gap-6"}
::card{title="Daily Driver" icon="i-ph-laptop-duotone"}
**Apple MacBook Pro 13"**
@@ -30,7 +31,7 @@ My setup is split between mobile efficiency for academic writing and a fixed sta
* **Usage:** Local Deep Learning training, gaming, and heavy compilation tasks.
::
::
:::
### Peripherals
I rely on a specific set of tools to maintain flow during deep work sessions.
@@ -42,23 +43,24 @@ I rely on a specific set of tools to maintain flow during deep work sessions.
---
## 🛠️ Development Ecosystem
::BackgroundTitle{title="Development Ecosystem"}
::
I prioritize tools that offer **AI-integration** and **strong type-checking**.
::div{class="grid grid-cols-1 md:grid-cols-2 gap-6"}
:::div{class="grid grid-cols-1 md:grid-cols-2 gap-6"}
::card{title="IDEs & Editors" icon="i-ph-code-duotone"}
* :prose-icon[VS Code]{color="blue" icon="i-logos:visual-studio-code"} — For general-purpose scripting and remote SSH development.
* :prose-icon[Positron]{color="cyan" icon="i-devicon:positron"} — Lightweight IDE for R and statistical analysis, offering superior performance to RStudio while maintaining VS Code familiarity.
* :prose-icon[JetBrains]{color="purple" icon="i-logos:jetbrains"} — *PyCharm* & *DataGrip* are unrivaled for complex refactoring and database management.
* **Theme:** Catppuccin Latte (Light) / Macchiato (Dark).
* **Theme:** :prose-icon[ArtLab]{color="indigo" icon="i-ph-palette-duotone"} — A custom VS Code theme with optimized contrast for extended coding sessions, supporting both light and dark modes.
* **Font:** GitHub Monaspace Neon (primary, ligatures enabled) & JetBrains Mono.
```python [main.py]
def main():
print("Hello, Research Lab!")
```
```python [main.py]
def main():
print("Hello, Research Lab!")
```
::
::card{title="Terminal & System" icon="i-ph-terminal-window-duotone"}
@@ -69,17 +71,18 @@ I prioritize tools that offer **AI-integration** and **strong type-checking**.
* :prose-icon[Firefox]{color="orange" icon="i-logos:firefox"} — Chosen for its privacy features and robust DevTools.
::
::
:::
---
## 🏠 Infrastructure & Homelab
::BackgroundTitle{title="Infrastructure & Homelab"}
::
To bridge the gap between theory and MLOps, I maintain a **self-hosted cluster**. This allows me to experiment with distributed systems, data pipelines, and network security in a controlled environment.
### Hardware Infrastructure
::div{class="grid grid-cols-1 md:grid-cols-3 gap-4"}
:::div{class="grid grid-cols-1 md:grid-cols-3 gap-4"}
::card{title="Compute Node" icon="i-ph-cpu-duotone"}
**Beelink EQR6** *:hover-text{text="AMD Ryzen" hover="Proxmox Host"}*
@@ -99,7 +102,7 @@ Centralized Data Lake for datasets and backups.
Ensures fast, stable local communication.
::
::
:::
### Service Stack
I run these services using **Docker** and **Portainer**, strictly behind a **Traefik** reverse proxy.
@@ -113,4 +116,4 @@ I run these services using **Docker** and **Portainer**, strictly behind a **Tra
* :prose-icon[Utilities]{icon="i-ph-wrench-duotone"} — BentoPDF, Palmr, Home Assistant.
::
> *This list is constantly updated as I experiment with new tools and equipment.*
> *This list is constantly updated as I experiment with new tools and equipment.*

View File

@@ -1,6 +1,12 @@
{
"name": "artsite",
"private": true,
"author": {
"email": "arthurdanjou@outlook.fr",
"name": "Arthur Danjou",
"url": "https://arthurdanjou.fr"
},
"packageManager": "bun@1.3.8",
"scripts": {
"build": "nuxi build",
"dev": "nuxi dev",
@@ -11,36 +17,36 @@
"cf-typegen": "wrangler types"
},
"dependencies": {
"@libsql/client": "^0.15.15",
"@nuxt/content": "3.10.0",
"@nuxt/eslint": "1.12.1",
"@nuxt/ui": "^4.3.0",
"@nuxthub/core": "0.10.4",
"@nuxtjs/mdc": "0.19.2",
"@nuxtjs/seo": "3.3.0",
"@vueuse/core": "^14.1.0",
"@vueuse/math": "^14.1.0",
"better-sqlite3": "^12.5.0",
"drizzle-kit": "^0.31.8",
"@libsql/client": "^0.17.0",
"@nuxt/content": "3.12.0",
"@nuxt/eslint": "1.15.2",
"@nuxt/ui": "4.5.1",
"@nuxthub/core": "0.10.7",
"@nuxtjs/mdc": "0.20.2",
"@nuxtjs/seo": "3.4.0",
"@vueuse/core": "^14.2.1",
"@vueuse/math": "^14.2.1",
"better-sqlite3": "^12.6.2",
"drizzle-kit": "^0.31.9",
"drizzle-orm": "^0.45.1",
"nuxt": "4.2.2",
"nuxt-studio": "1.0.0",
"vue": "3.5.26",
"vue-router": "4.6.4",
"zod": "^4.3.5"
"nuxt": "4.3.1",
"nuxt-studio": "1.4.0",
"vue": "3.5.30",
"vue-router": "5.0.3",
"zod": "^4.3.6"
},
"devDependencies": {
"@iconify-json/devicon": "1.2.56",
"@iconify-json/devicon": "1.2.59",
"@iconify-json/file-icons": "^1.2.2",
"@iconify-json/logos": "^1.2.10",
"@iconify-json/ph": "^1.2.2",
"@iconify-json/twemoji": "1.2.5",
"@iconify-json/vscode-icons": "1.2.37",
"@types/node": "25.0.3",
"@vueuse/nuxt": "14.1.0",
"eslint": "9.39.2",
"@iconify-json/vscode-icons": "1.2.45",
"@types/node": "25.4.0",
"@vueuse/nuxt": "14.2.1",
"eslint": "10.0.3",
"typescript": "^5.9.3",
"vue-tsc": "3.2.2",
"wrangler": "4.54.0"
"vue-tsc": "3.2.5",
"wrangler": "4.71.0"
}
}

Binary file not shown.

View File

@@ -57,13 +57,6 @@
},
"env": {
"preview": {
"routes": [
{
"pattern": "preview.arthurdanjou.fr",
"zone_name": "arthurdanjou.fr",
"custom_domain": true
}
],
"d1_databases": [
{
"binding": "DB",