About the Company
We are building AI Brain, a knowledge‑graph platform that tackles the hard problem of dynamic ontology generation for enterprise‑scale data. Our solutions enable forward‑deployed engineers to wrap AI into existing workflows while keeping customisations as intellectual property that feeds back into the platform.
Role Overview
You will work on the research‑grade core of AI Brain alongside the CTO and senior engineering team, addressing open‑ended problems such as hierarchical ontology design, per‑tenant configuration, entity consolidation, and temporal graph modeling. The work directly tests the core thesis of the platform.
Responsibilities
- Design and implement hierarchical ontology generation, moving from a flat conceptual space to one with inheritance while preserving source provenance.
- Develop per‑tenant configuration mechanisms that allow forward‑deployed engineers to tune behaviour without modifying the runtime.
- Define metrics and evaluation pipelines to assess the quality of generated ontologies.
- Build robust entity consolidation pipelines that reconcile fuzzy matches, heuristics, and agent‑driven tiebreaking, maintaining end‑to‑end provenance.
- Identify and resolve edge‑case duplication where the same real‑world entity appears under different names in varied contexts.
- Separate consolidation, enrichment, and update logic into distinct, maintainable concerns.
- Determine and store entity‑level attributes to avoid recomputation per data chunk.
- Architect and build a Rust‑based temporal graph database that records timestamps on nodes, edges, and attributes, enabling historical retrieval and back‑testing.
- Benchmark and author a white paper on new enterprise‑context retrieval standards, publishing results independently of our own solutions.
- Explore and vet frontier research ideas such as alternative embedding geometries, community‑detection retrieval, graph‑internal monitoring, and encoder‑based privacy primitives.
- Contribute to hiring decisions, client engagements, and technical mentorship.
Qualifications
- Depth in at least one of: knowledge graphs and GraphRAG, retrieval systems, agent orchestration, or large‑scale data ingestion.
- Demonstrated ability to translate research papers or first‑principles thinking into production systems (published work, open‑source contributions, or implementations whose architecture you can walk through).
- Proficiency in Rust: you have written performance‑critical systems (parsers, runtimes, storage engines, services) and understand Rust's complexity trade‑offs.
- Strong Python skills and sufficient TypeScript expertise to ship product surfaces where it matters.
- Effective code review habits: reading PRs closely, identifying concrete points of improvement, and providing constructive feedback.
- Ability to distinguish clever from optimal solutions and to push back against suboptimal choices.
- Passion for the problem space of context layers, GraphRAG, dynamic ontology generation, and temporal data systems (not mere hype around AI).
- A PhD is not required; the level of problem‑solving competence we need can be demonstrated through past work.
Benefits
- Private healthcare and a comprehensive wellness benefits package.
- Sauna and cold‑plunge sessions for recovery and team bonding.
- Team socials, dinners, off‑sites, and access to industry events.
- Supportive environment encouraging high‑quality work while preventing burnout.
Stack
- Agents: LangGraph with Pydantic‑typed state, Claude via Vertex AI, Gemini for fast tagging.
- Graph and data: Postgres + Apache AGE (current) with a Rust temporal graph database in active development.
- Backend: FastAPI, Python 3.12, Pydantic throughout.
- Frontend: Next.js (App Router), TypeScript, Tailwind, shadcn, Vercel.
- Infrastructure: GCP compute, storage, model serving, and key management.
- Tooling: pnpm, Husky commit hooks (lint, format, typecheck, test, agentic check), Linear, Claude Code.