Software Engineer – Data

Company: HENI
Location: Greater London
Job Description:

HENI is an international art services business working with leading artists and estates across printmaking, marketplaces for physical artworks, NFTs, publishing, digital, video production, and art research and analysis. HENI sits at the cutting edge of art and technology, using the latest tools to make art accessible to audiences worldwide.

Position Overview

We are looking for a Software Engineer to join our Data team. You will build and maintain the data platform that powers customer analytics — pipelines, internal tools, and integrations — while contributing to analytical work that supports commercial teams and C-suite decision-making. You will also play a key role in broader data initiatives across the organisation, working with the wider team to shape and deliver a diverse range of projects.

Key Responsibilities

  • Build and maintain data pipelines
  • Develop internal data applications using Streamlit or Dash for ad-hoc analysis and customer research
  • Implement data quality checks and validation to ensure pipeline reliability
  • Support data architecture decisions and contribute to broader data platform improvements
  • Integrate third-party data sources (e.g. HubSpot, Facebook Business) into the customer data platform
  • Respond to ad-hoc data requests from across the business
  • Contribute to HENI News data initiatives

Customer Analytics

  • Write analytical SQL queries to support the accounts and client liaison teams
  • Build and maintain dashboards in Apache Superset for self-serve business intelligence
  • Support the creation of Customer Data Reports for C-suite stakeholders
  • Contribute to customer analytics: segmentation, retention analysis, and behavioural insights

Required Technical Skills

Software Engineering

  • Strong Python skills — the primary language for all data work
  • Git and version control workflows
  • Automated testing: unit tests, integration tests, and data quality tests
  • Writing clean, maintainable, well-structured code
  • Experience building and maintaining production applications or services

Data Processing

  • Experience with distributed data processing frameworks (e.g. PySpark, Spark SQL)
  • pandas and NumPy for data manipulation and analysis
  • SQL for analytical queries and database interaction
  • Experience with cloud-based data pipeline tools (e.g. AWS Glue, Azure Data Factory, GCP Dataproc)
  • Familiarity with cloud object storage (e.g. S3, GCS, Azure Blob) and columnar data formats (e.g. Parquet)

Infrastructure & Tooling

  • Familiarity with Infrastructure as Code, containerisation (Docker), and CI/CD
  • Experience with container orchestration (e.g. Kubernetes, Docker Swarm, AWS ECS)
  • Experience with BI/dashboarding tools (e.g. Superset, Looker, Metabase)
  • Experience building internal data tools or apps (e.g. Streamlit, Dash)

Nice-to-Have Skills

  • Experience with statistical modelling and/or machine learning (e.g. scikit-learn, scipy)
  • Experience with CRM/marketing platform APIs (e.g. HubSpot, Salesforce or similar)
  • Experience integrating LLM APIs (e.g. Gemini/Vertex AI, OpenAI/ChatGPT) to build sophisticated data products
  • Experience with data quality frameworks (e.g. Great Expectations or similar)

Our Stack

  • AWS (S3, RDS, Glue, ECS, EC2)
  • Streamlit, Apache Superset
  • Git

Programming Languages

  • Python (primary — used for all data work)
  • SQL (strong)

Education & Experience

  • Master’s degree in Computer Science, Software Engineering, Data Science, Engineering or a related quantitative discipline
  • 2–3 years of industry experience in a software engineering, data engineering, or similar technical role
  • Experience working with data pipelines
  • Comfortable working across the full stack from data ingestion through to internal tools and dashboards
  • Experience presenting technical work to non-technical stakeholders
  • Able to work autonomously while coordinating with a small data team


Posted: April 3rd, 2026