Senior Data Engineer, Datacraft

Bloomreach
Full-time · Senior · Remote (EU)

AI Tools & Frameworks

Cursor, Claude Code, GitHub Copilot, Gemini CLI

Tech Stack

Python, Go, SQL, Apache Kafka, BigQuery, Apache Iceberg, GCS, Spark, DataProc, Airflow, Snowflake, Databricks, GCP, Kubernetes, Terraform, LLM APIs, MCP, Cube, Looker Studio, GitLab CI/CD

Agent Workflow

Datacraft is an AI-first team that believes code is a commodity. Every engineer is expected to fluently use coding agents (Cursor, Claude Code, Copilot, Gemini CLI) as a core part of their daily workflow.

About the Role

About Bloomreach:

Bloomreach builds an agentic personalization platform serving 1,400+ global brands. The company focuses on autonomous search, conversational shopping, and autonomous marketing powered by Loomi AI.

About the Team:


The Datacraft team handles three interconnected domains:

  • Building data warehouse platforms (~60%): designing data pipelines (Kafka -> GCS -> Iceberg -> DWH) following medallion architecture and maintaining orchestration systems
  • Developing the Loomi Analytics Agent's data layer (~20%): evolving analytics from a report builder into an agentic assistant
  • Co-building dashboards and analytics infrastructure (~20%): moving reporting to DWH-backed modern analytics

Key Responsibilities

  • Design data pipelines following medallion architecture (Kafka -> GCS -> Iceberg -> DWH)
  • Own data models and ETL for ID resolution and consent semantics
  • Implement evaluation harnesses and analytics skills for the agent
  • Build MCPs and data interfaces for agentic platforms
  • Design canonical metrics and semantic layers with BI tools

Required Experience

  • Strong SQL and data modeling expertise (star/snowflake schemas, slowly changing dimensions)
  • Production-grade data pipeline experience on GCP (BigQuery, Apache Iceberg, Spark on DataProc)
  • Airflow/Cloud Composer orchestration and DAG-based systems
  • Familiarity with open table formats (Iceberg preferred)
  • Python programming (preferred); Scala/Java/Go acceptable
  • Understanding of data quality, lineage, and observability

Strongly Preferred

  • Agentic platforms or AI-powered analytics background
  • Marketing analytics, CDPs, or BI/semantic layer experience (Looker, dbt, Cube)
  • Snowflake or Databricks alongside BigQuery

Personal Qualities

  • Ownership & accountability
  • Product thinking
  • Clear communication across engineering and product
  • Bias for reliability

Tech Stack: Python, Go, SQL, Apache Kafka, BigQuery, Iceberg, GCS, Spark, DataProc, Airflow, Snowflake, Databricks, GCP, Kubernetes, Terraform, LLM APIs, MCP, agent orchestration, Cube, Looker Studio, Grafana, Prometheus, PagerDuty, Sentry, OpenTelemetry, GitLab CI/CD

Success Timeline

  • 30 Days: Understand domain, complete onboarding, dev environment setup
  • 90 Days: Ship first pipeline/model/orchestration to production
  • 180 Days: Own component end-to-end, make independent trade-off decisions

Location: Work in one of our Central European offices (Bratislava, Prague, Brno) or from home on a full-time basis.

Benefits:

  • Restricted Stock Units or Stock Options
  • Company performance bonus
  • $3,000 employee referral bonus
  • $1,500 annual professional education budget
  • Extended parental leave (26 calendar weeks for primary caregivers)
  • 5 paid volunteering days
  • Flexible hours
  • Quarterly DisConnect days
  • Employee Assistance Program
  • Calm app subscription
Apply Now