Requirements
- Bachelor’s or Master’s degree in Data Science, Computer Science, Statistics, Mathematics, or a related field
- (Desirable) Demonstrated ability to design and validate features using transactional and behavioral data from financial services
- Hands-on experience building and deploying ML models in Python
- (Desirable) Proven track record in payment risk, fraud detection, or credit decisioning
- SQL skills and familiarity with AWS for deployment and data workflows
- (Desirable) Experience with model governance and monitoring in regulated environments
- Strong knowledge of statistical modeling, anomaly detection, clustering, and supervised/unsupervised learning
- Hands-on experience with agentic AI frameworks (e.g. Claude with MCP, LangChain); experience in either transaction enrichment workflows or AI-assisted software development tooling is sufficient
- Experience working with large-scale tabular data
- Proven success collaborating with product and engineering teams to ship ML-based features and tools
- Strong communication skills and business acumen to present complex technical ideas to non-technical stakeholders
- Curious, proactive, and comfortable working in ambiguity in a scaling startup environment
What the job involves
- Use Python, with a strong grounding in feature engineering, model evaluation, and inference pipelines, to help shape the future of our product offerings
- Act as a subject-matter expert in analytics, applying a strong understanding of statistics, experimental design, and hypothesis testing to identify trends and patterns and to develop predictive models
- Design and deploy agentic AI systems capable of autonomous, multi-step reasoning, such as agents that categorise financial transactions by searching for merchant information, applying labels, and surfacing confidence scores with transparent reasoning
- Lead data labelling at scale to produce ground-truth datasets, and use ML techniques to maximise labelling efficiency
- Build AI agents that integrate with external tools and data sources via protocols such as MCP, enabling LLMs to interact directly with codebases, APIs, or internal systems to automate complex workflows
- Lead model deployment with an eye for performance, scalability, and real-time, low-latency inference
- Deliver robust, low-failure-rate models and systems, especially in environments where testing support is limited
- Collaborative Problem Solving: Work closely with cross-functional teams, including product, engineering, and business stakeholders, to understand requirements and deliver data-driven solutions
- Enthusiasm for collaborating across disciplines, including with academic researchers and third-party partners
