About FairMoney

FairMoney is a credit-led mobile bank for emerging markets. Launched in 2017, the company operates in Nigeria and India and has raised close to €50m from global investors including Tiger Global, DST, and Flourish Ventures. It has offices in France, Nigeria, and India.

Role and responsibilities

At FairMoney, we make many data-driven decisions in real time, such as risk scoring and fraud detection.

Our data is mainly produced by our backend services and is consumed by the data science, BI, and management teams. We are building more and more real-time, data-driven decision-making processes, as well as a self-serve data analytics layer.

As a senior data engineer at FairMoney, you will help build our Data Platform:

  • Ensure data quality and availability for all data consumers, mainly the data science and BI teams.
  • Ingest raw data into our data warehouse (BigQuery / Snowflake).
  • Make sure data is processed and stored efficiently.
  • Work with backend teams to offload data from backend storage.
  • Work with data scientists to build a machine learning feature store.
  • Spread best practices for data architecture across all tech teams.
  • Build effective relationships with business stakeholders to drive the adoption of data-driven decision-making.

You will be part of the Datatech team, sitting right between data producers and data consumers. You will help build the central nervous system of our real-time data processing layer by creating an ecosystem around data contracts between producers and consumers.

Our current stack is made of:

  • Batch processing jobs (Apache Spark in Python or Scala)
  • Streaming jobs (Apache Flink deployed on Kinesis Data Analytics; Apache Beam deployed on Google Dataflow)
  • REST APIs (Python, FastAPI)

Our tool stack

  • Programming languages: Python, SQL
  • Streaming Applications: Flink, Kafka
  • Databases: MySQL, DynamoDB
  • DWH: BigQuery, Snowflake
  • BI: Tableau, Metabase, dbt
  • ETL: Hevo, Airflow
  • Production Environment: Python API deployed on Amazon EKS (Docker, Kubernetes, Flask)
  • ML: Scikit-Learn, LightGBM, XGBoost, shap
  • Cloud: AWS, GCP


You will work with the tools below on a daily basis, so you need working experience with:

  • Languages: Python and Scala.
  • Big data processing frameworks: one or more of Apache Spark (batch/streaming), Apache Flink (streaming), or Apache Beam.
  • Streaming services: Apache Kafka / AWS Kinesis.
  • Managed cloud services: one of AWS EMR, AWS Kinesis Data Analytics, or Google Dataflow.
  • Docker.
  • Building REST APIs.

Ideally, you have experience with:

  • deployment and management of stateful streaming jobs.
  • the Kafka ecosystem: mainly Kafka Connect.
  • infrastructure as code frameworks (Terraform).
  • architecture around data contracts: Avro Schemas management, schema registries (Confluent Kafka / AWS Glue).
  • Kubernetes.
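Data contracts of this kind are typically expressed as Avro schemas registered in a schema registry, so that producers and consumers can evolve independently. A minimal sketch in Python, using a hypothetical loan-application event (the record name, namespace, and field names are illustrative, not FairMoney's actual contracts):

```python
import json

# Hypothetical Avro schema acting as a data contract between a producer
# and its consumers. Adding the optional "channel" field with a default
# keeps the schema backward compatible for older consumers.
loan_application_schema = {
    "type": "record",
    "name": "LoanApplication",
    "namespace": "com.example.events",  # illustrative namespace
    "fields": [
        {"name": "application_id", "type": "string"},
        {"name": "amount", "type": "double"},
        {"name": "created_at",
         "type": {"type": "long", "logicalType": "timestamp-millis"}},
        # Nullable with a default: a backward-compatible evolution.
        {"name": "channel", "type": ["null", "string"], "default": None},
    ],
}

# This JSON document is what would be registered in a schema registry
# (e.g. Confluent Schema Registry or AWS Glue Schema Registry).
print(json.dumps(loan_application_schema, indent=2))
```

In practice the registry enforces a compatibility mode (e.g. BACKWARD), rejecting producer schema changes that would break existing consumers.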

Overall experience required for this role: 6+ years.


Benefits

  • Training & Development
  • Family Leave (Maternity, Paternity)
  • Paid Time Off (Vacation, Sick & Public Holidays)
  • Remote Work

Recruitment Process

  • A 30-minute screening interview with a member of the Talent Acquisition team.
  • A take-home assignment.
  • A 60-90 minute technical design interview.