Why we named our agents Hugo, Lea and Maya — and not Agent #1
Three first names aren't marketing. They're architectural decisions.
When you launch an AI agent platform, you have two options.
The first: name your agents Agent-001, Agent-002, Agent-003. Or worse, by function: HSE-bot, Voice-bot, Recruiter-AI. That's what 90% of vendors do. It's rational. It's readable. It's cold.
The second: give them a first name. Hugo. Lea. Maya. Three agents with a face, a voice, a mission. That's what we chose. And this choice isn't marketing — it's structural.
Here's why.
A first name forces specialization
When you call your agent Voice-bot, you design it as a generic tool. You'll ask it to answer the phone, transfer calls, maybe take notes. You'll see it as a script that talks.
When you call her Lea, you give her a job. Lea isn't a voice bot. Lea is a senior front-desk operator. She picks up in fewer than three rings, qualifies the call in French, English or Dutch, routes it to the right person, and sends a summary to the CRM. She has a scope, a quality standard, a benchmark.
The first name isn't a disguise. It's a promise of specialization.
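What "a promise of specialization" means in practice is that Lea's job can be written down as a checkable spec: one mandate, explicit quality standards. A minimal sketch in Python; the function, field names, routes, and thresholds are illustrative assumptions, not a real implementation.

```python
# Hypothetical sketch of Lea's mandate: pick up fast, qualify,
# route, summarize for the CRM. Everything here is illustrative.

MAX_RINGS = 3                     # quality standard: answer in fewer than 3 rings
LANGUAGES = {"fr", "en", "nl"}    # languages Lea qualifies calls in

def handle_call(rings: int, language: str, intent: str) -> dict:
    """Qualify an inbound call, pick a route, and draft a CRM summary."""
    if rings >= MAX_RINGS:
        raise RuntimeError("quality standard breached: answered too late")
    if language not in LANGUAGES:
        language = "en"  # fall back to English rather than dropping the caller
    routes = {"sales": "sales_team", "support": "support_team"}
    return {
        "route_to": routes.get(intent, "front_desk"),
        "language": language,
        "crm_summary": f"[{language}] inbound call, intent={intent}",
    }

print(handle_call(rings=1, language="nl", intent="sales"))
# {'route_to': 'sales_team', 'language': 'nl', 'crm_summary': '[nl] inbound call, intent=sales'}
```

The point of the sketch: a generic voice bot has no `MAX_RINGS`, because nobody wrote down what "good" means for it. A named agent does.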
A first name changes the relationship to error
When a bot makes mistakes, you unplug it. When an agent makes mistakes, you improve it.
This difference isn't semantic. It's cultural. When Maya gets a sourcing wrong and suggests an off-target candidate, the HR team doesn't say "the AI crashed". They say "Maya got it wrong, let's re-explain the brief". Feedback becomes possible. The learning loop kicks in. The agent improves with use, like a junior finding their footing.
With a recruiter-bot, that process doesn't exist. You open a support ticket.
Three agents, three personalities, three mandates
Our first names aren't picked at random.
Hugo — HSE agent. Hugo is rigorous, methodical, French-speaking by default. He audits sites, generates ISO 45001-compliant reports, identifies regulatory gaps. He has the sober manner of a senior safety officer. He doesn't make jokes. He's useful.
Lea — telephony agent. Lea is fast, polite, multilingual (FR/EN/NL/DE). She takes inbound calls, qualifies, transfers, and keeps the customer record up to date. She has the bearing of an experienced executive assistant. She never hangs up first.
Maya — recruitment agent. Maya is curious, strategic, demanding on matching. She scans CVs, sources on LinkedIn, runs video pre-interviews, and delivers a reasoned shortlist. She thinks like an executive search firm, not like an ATS.
These three personalities aren't fluff. They dictate technical choices: each agent's tone of voice, the scope of tools it can call, its business guardrails, its learning modes.
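"One first name, one mandate" can live in the codebase as an explicit per-agent profile. A minimal sketch in Python; the class, field names, tool identifiers, and guardrail strings are illustrative assumptions, not BeLogic's actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentProfile:
    """One agent = one mandate. All values below are illustrative."""
    name: str
    role: str
    languages: tuple[str, ...]
    tone: str
    tools: tuple[str, ...]       # scope of tools the agent may call
    guardrails: tuple[str, ...]  # business rules enforced at runtime

HUGO = AgentProfile(
    name="Hugo", role="HSE auditor",
    languages=("fr",), tone="sober, factual",
    tools=("site_audit", "iso45001_report"),
    guardrails=("no jokes", "cite the regulation for every gap"),
)

LEA = AgentProfile(
    name="Lea", role="front-desk operator",
    languages=("fr", "en", "nl", "de"), tone="fast, polite",
    tools=("answer_call", "transfer", "crm_update"),
    guardrails=("never hang up first", "answer in fewer than 3 rings"),
)

MAYA = AgentProfile(
    name="Maya", role="recruiter",
    languages=("fr", "en"), tone="curious, demanding",
    tools=("cv_scan", "linkedin_source", "video_preinterview"),
    guardrails=("shortlist must be reasoned", "no keyword-only matching"),
)
```

Freezing the profile makes the mandate a contract: widening Lea's tool scope becomes a deliberate code change, not prompt drift.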
The consequence: your teams adopt them
Here's what we observe with our first customers.
After two weeks, the HR director no longer says "I use the BeLogic platform". He says "I asked Maya to pull five profiles for the Lead Designer role". The safety officer no longer says "I ran an AI audit". He says "Hugo flagged three gaps on the Lyon site, we're looking at them this morning".
This adoption is measurable. In those first deployments, usage rates run roughly three times higher than with an anonymous chatbot. Because you don't use a tool. You collaborate with a colleague.
More than an AI. A partner.
BeLogic's tagline isn't a slogan. It's a product thesis.
We believe that in the age of agentic AI, the difference between a chatbot vendor and a transformation partner lies in how you design the relationship between your teams and the AI. A tool you configure, or a colleague you onboard.
Hugo, Lea and Maya aren't first names. They're architectural decisions.
And it's probably the most important decision we've made.