We're Hiring

Build the Future of AI Content Licensing

Join us in creating the infrastructure that ensures quality content gets fairly compensated in the age of AI. We're a small team solving a problem that didn't exist two years ago.

Who We Are

The Team We're Building

Geodesix is building the commercial infrastructure for AI-native content licensing. We connect premium publishers with large language models through the Model Context Protocol (MCP), creating a marketplace where quality content gets properly attributed and fairly compensated.

MCP launched in late 2024. We're building the commercial layer on top of it. Best practices don't exist yet. The right answer to most questions is "we need to figure it out." If that excites you rather than terrifies you, keep reading.

We're looking for people who combine intellectual rigour with pragmatic execution. People who can hold complexity without needing everything spelled out. People who are energised by ambiguity, not paralysed by it. People who care about building things that matter—not just technically interesting problems, but problems where getting it right has real consequences for creators, publishers, and the future of how AI systems access quality information.

We're backed by impact.com and launching in January 2026. This is the moment to join.

Open Positions


Solutions Engineer

London, Tel Aviv, or NYC
Hybrid
£120k / $150k

The Role

You'll be the primary point of contact for both sides of our marketplace: publishers licensing their content and LLM operators integrating our data. Your job is to make integrations feel effortless, turn documentation into a competitive advantage, and build relationships that convert early adopters into advocates.

The role demands an unusual combination: deep technical skill paired with genuine energy for client relationships. You'll spend mornings debugging MCP connection issues and afternoons walking a publisher through their analytics dashboard. You need to be credible in both rooms.

What You'll Do

  • Own end-to-end integration for all clients, from technical scoping through production deployment
  • Create and maintain technical documentation, integration guides, and API references
  • Bridge engineering and clients: translate client needs into technical requirements, and technical constraints into plain-language explanations
  • Build relationships that drive retention through proactive support and value delivery
  • Surface product insights: what's confusing, what's missing, what clients actually need

What We're Looking For

  • Technical fluency: comfortable reading code, understanding APIs, debugging integration issues
  • Exceptional communication: clear writing, confident presentation, adaptable tone
  • Client empathy: genuine interest in understanding problems, not just solving tickets
  • Self-direction: we're a small team, so you'll be building processes from scratch

Bonus Points

Experience with LLMs or AI infrastructure. Background in solutions engineering, developer relations, or technical account management. Familiarity with publishing, affiliate, or commerce content. Experience at early-stage startups.

Why This Role

You'll be the first dedicated client-facing hire at a company building the commercial layer on top of MCP, a protocol that launched only in late 2024. If you want to shape how AI and content creators work together, this is the moment.

Query Quality Scientist

London, Tel Aviv, or NYC
Hybrid
£120k / $150k

The Role

When an LLM queries our platform for commerce content, the quality of what comes back determines everything: whether publishers get paid fairly, whether retailers get useful responses, whether the marketplace works. Your job is to make that retrieval as good as it can possibly be.

This is genuinely novel work. MCP is months old. Best practices don't exist yet. The right answer to most questions is "we need to run an experiment." You need the intellectual rigour of a researcher combined with the pragmatism to ship improvements that work in production.

What You'll Do

  • Own query result quality: develop metrics, measure baselines, design and run experiments
  • Work across embeddings, content ingestion, MCP protocol, and LLM integration layers
  • Build experimentation infrastructure: A/B testing, evaluation datasets, regression testing
  • Explore architectural improvements: intent classification, re-ranking, hybrid retrieval
  • Document learnings and build institutional knowledge in a space we're creating from scratch

What We're Looking For

  • Deep understanding of information retrieval, embeddings, or search ranking systems
  • Strong experimental methodology: hypothesis formation, controlled testing, statistical rigour
  • Comfort with ambiguity: figuring out what to measure, not just optimising existing metrics
  • Programming proficiency in Python and experience with ML frameworks

Bonus Points

Academic research background in NLP, IR, or ML. Experience with RAG systems or vector databases. Published work or open-source contributions. Background at search companies or ML-focused startups. Familiarity with LLM evaluation methodologies.

Why This Role

Most retrieval work is incremental optimisation on established systems. This isn't. We're defining what "good" means for AI-native content retrieval. If you've been frustrated by the gap between research papers and production impact, or want to work on problems where the best practices haven't been written yet, this is the role.

Don't See Your Role?

We're always looking for exceptional people. Send us your CV and tell us why you'd be a great fit.

joinus@geodesix.com