Building end-to-end AI systems — from LLMs and RAG architectures to MLOps pipelines and production deployment.
I started my journey in statistics and mathematics, but quickly fell in love with the intersection of data and artificial intelligence. During my studies at Universidad Santo Tomás, I wrote my thesis on topic modeling with Latent Dirichlet Allocation (LDA), applied to the Colombian case.
I've worked across finance, customer service, and enterprise technology — moving from dashboards and KPI analysis to building production AI systems with LLMs, RAG, and multi-agent orchestration. Each role gave me a different perspective on how AI can create real impact.
Today, as an AI Engineer at H&Co Latam, I design end-to-end GenAIOps pipelines: from architecture to deployment, observability, and continuous improvement. My goal is to build reliable AI systems that don't just work in demos — they work in production.
Full-cycle AI engineering — from raw data and model training to production deployment, observability, and continuous improvement. I choose tools based on the problem, not the trend.
Working daily with frontier models — from prompt design and structured outputs to fine-tuning and guardrails for production safety.
Building multi-agent systems with conditional routing, parallel tool execution, and stateful workflows — from prototype to production.
Designing hybrid RAG pipelines with semantic and lexical fusion — pgvector, dense embeddings, and reranking at scale.
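One common way to fuse semantic and lexical results is Reciprocal Rank Fusion (RRF). The sketch below is framework-free and illustrative only: it assumes the dense (e.g. pgvector) and lexical (e.g. BM25) retrievers have already returned ranked lists of document IDs, and shows just the fusion step.

```python
from collections import defaultdict

def rrf_fuse(dense_ranked, lexical_ranked, k=60):
    """Fuse two ranked lists of doc IDs with Reciprocal Rank Fusion.

    Each document scores sum(1 / (k + rank)) over the lists it appears in;
    k=60 is the constant proposed in the original RRF paper.
    """
    scores = defaultdict(float)
    for ranked in (dense_ranked, lexical_ranked):
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Dense and lexical retrieval often disagree; RRF rewards documents
# that rank well in either list without needing comparable scores.
dense = ["doc_a", "doc_b", "doc_c"]
lexical = ["doc_b", "doc_d", "doc_a"]
print(rrf_fuse(dense, lexical))  # ['doc_b', 'doc_a', 'doc_d', 'doc_c']
```

A reranker would then rescore only this fused short list, keeping the expensive model off the full corpus.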
Tracing every agent step, evaluating output quality, and closing the feedback loop — AI systems must be measurable to be trustworthy.
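The per-step tracing idea can be sketched without any observability framework. The names `trace_step` and `TRACE` below are illustrative, not a specific library's API; in production the recorded spans would be shipped to a tracing backend rather than held in a list.

```python
import functools
import time

TRACE = []  # stand-in for an observability backend

def trace_step(fn):
    """Record each agent step's name, latency, and output for later evaluation."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        TRACE.append({
            "step": fn.__name__,
            "latency_s": time.perf_counter() - start,
            "output": result,
        })
        return result
    return wrapper

@trace_step
def retrieve(query):
    return ["doc_a", "doc_b"]

@trace_step
def answer(query, docs):
    return f"Answer to {query!r} grounded in {len(docs)} docs"

docs = retrieve("refund policy")
answer("refund policy", docs)
print([span["step"] for span in TRACE])  # ['retrieve', 'answer']
```

Because every step's inputs, outputs, and latency are captured, quality evaluations can run over the same spans, which is what closes the feedback loop.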
Multi-cloud fluency across AWS, Azure, and GCP — with deep hands-on experience deploying managed AI services and serverless pipelines.
Infrastructure as code from day one — containerized workloads, Kubernetes orchestration, and reproducible deployments across environments.
Strong foundations in classical ML, NLP, and computer vision — from statistical modeling to training custom neural architectures.
Python-first, but fluent across the data layer — SQL, R for analysis, TypeScript for APIs, and modern data tools for clean pipelines.
Every architecture I design has production reliability and measurable business impact as its north star — not hype.
My RLHF and evaluation background gives me a quality lens that goes beyond metrics — systems must behave predictably in the real world.
From Bayesian statistics to autonomous agents, I constantly evolve my knowledge and apply it where it matters most.
Leading end-to-end GenAIOps pipelines with Amazon Bedrock, LangGraph, and the OpenAI API for production LLM apps with multi-agent orchestration.
Designing RAGOps with vector DBs (OpenSearch, Pinecone), semantic reranking, Arize Phoenix evaluation, and Kubernetes deployments.
Exploring LangGraph orchestration patterns: conditional routing, parallel tool execution, and state management for complex AI workflows.