FairMoney is a pioneering mobile banking institution specializing in extending credit to emerging markets. Established in 2017, the company currently operates primarily within Nigeria and has secured nearly €50 million in funding from renowned global investors, including Tiger Global, DST, and Flourish Ventures. FairMoney maintains a strong international presence, with offices in France, Nigeria, Germany, Latvia, the UK, Türkiye, and India.
In alignment with its vision, FairMoney is actively constructing the foremost mobile banking platform and point-of-sale (POS) solution tailored for emerging markets. The journey began with the introduction of a digital microcredit application exclusively available on Android and iOS devices. Today, FairMoney has significantly expanded its range of services, encompassing a comprehensive suite of financial products, such as current accounts, savings accounts, debit cards, and state-of-the-art POS solutions designed to meet the needs of both merchants and agents.
We are building Engineering centres of excellence across multiple regions and are looking for smart, talented, driven engineers. This is a unique opportunity to be part of the core engineering team of a fast-growing fintech poised for more rapid growth in the coming years.
To gain deeper insights into FairMoney's pivotal role in reshaping Africa's financial landscape, we invite you to watch this informative video.
Role and responsibilities
At FairMoney, we make many data-driven decisions in real time, such as risk scoring and fraud detection.
Our data is mainly produced by our backend services and is used by the data science, BI, and management teams. We are building more and more real-time data-driven decision-making processes, as well as a self-serve data analytics layer.
As a senior data engineer at FairMoney, you will help build our Data Platform:
- Ensure data quality and availability for all data consumers, mainly the data science and BI teams
- Ingest raw data into our BigQuery data warehouse
- Make sure data is processed and stored efficiently: work with backend teams to offload data from backend storage
- Work with data scientists to build real-time machine learning feature computation pipelines
- Spread best practices in terms of data architecture across all tech teams
- Build effective relationships with the business to drive adoption of data-driven decision-making.
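To give a flavour of the real-time feature computation work mentioned above, here is a minimal plain-Python sketch of a per-user sliding-window aggregation. In production this kind of logic runs as an Apache Flink job; the event shape, feature names, and one-hour window here are illustrative assumptions, not our actual schema:

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 3600  # 1-hour window (illustrative assumption)

class FeatureStore:
    """Toy in-memory feature computation: per-user sliding-window stats."""

    def __init__(self, window_seconds=WINDOW_SECONDS):
        self.window = window_seconds
        self.events = defaultdict(deque)  # user_id -> deque of (ts, amount)

    def ingest(self, user_id, ts, amount):
        q = self.events[user_id]
        q.append((ts, amount))
        # Evict events that have fallen out of the window.
        while q and q[0][0] <= ts - self.window:
            q.popleft()

    def features(self, user_id):
        q = self.events[user_id]
        return {"txn_count_1h": len(q),
                "txn_sum_1h": sum(amount for _, amount in q)}

store = FeatureStore()
store.ingest("u1", 0, 100.0)
store.ingest("u1", 1800, 50.0)
store.ingest("u1", 4000, 25.0)  # the ts=0 event is now outside the window
print(store.features("u1"))  # {'txn_count_1h': 2, 'txn_sum_1h': 75.0}
```

The same pattern, keyed state plus window eviction, is what a Flink keyed-window operator manages for you at scale, with checkpointing and event-time semantics on top.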
You will be part of the Datatech team, sitting right between data producers and data consumers. You will help build the central nervous system of our real-time data processing layer by building an ecosystem around data contracts between producers and consumers.
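As a rough sketch of what a data contract can look like in practice: a versioned, typed schema that producers publish and consumers validate against before data flows downstream. The entity, field names, and types below are hypothetical examples, not FairMoney's actual contracts:

```python
from dataclasses import dataclass, fields

@dataclass(frozen=True)
class LoanApplicationV1:
    """Hypothetical producer/consumer contract, version 1."""
    application_id: str
    user_id: str
    amount: float
    currency: str

def validate(record: dict) -> LoanApplicationV1:
    """Reject records that break the contract before consumers see them."""
    expected = {f.name for f in fields(LoanApplicationV1)}
    missing = expected - record.keys()
    if missing:
        raise ValueError(f"contract violation, missing fields: {sorted(missing)}")
    # Ignore extra fields so producers can evolve additively.
    return LoanApplicationV1(**{k: record[k] for k in expected})

event = validate({"application_id": "a-1", "user_id": "u-9",
                  "amount": 250.0, "currency": "NGN"})
```

Versioning the contract (`V1`) lets producers evolve schemas without silently breaking the data science and BI pipelines that consume them.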
Our current stack is made of:
- Batch processing jobs (Apache Spark in Python or Scala)
- Streaming jobs (Apache Flink deployed on Kinesis Data Analytics)
- REST APIs (Python FastAPI)
- AWS Kinesis / Apache Kafka as the message bus
- AWS Lambda as lightweight processors
- Apache Iceberg as an analytics table format
- BigQuery as a data warehouse
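To illustrate the "AWS Lambda as lightweight processors" piece of the stack, here is a minimal handler that decodes a batch of Kinesis records. The event envelope follows the standard Kinesis-to-Lambda format; the filtering rule itself is a hypothetical example:

```python
import base64
import json

def handler(event, context):
    """Lightweight Lambda processor for a Kinesis batch (sketch)."""
    kept = []
    for record in event["Records"]:
        # Kinesis delivers payloads base64-encoded under record["kinesis"]["data"].
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        if payload.get("amount", 0) > 0:  # drop zero/malformed events (illustrative rule)
            kept.append(payload)
    return {"processed": len(event["Records"]), "kept": len(kept)}

# Local invocation with a synthetic event:
def _encode(obj):
    return base64.b64encode(json.dumps(obj).encode()).decode()

fake_event = {"Records": [
    {"kinesis": {"data": _encode({"user_id": "u1", "amount": 10.5})}},
    {"kinesis": {"data": _encode({"user_id": "u2", "amount": 0})}},
]}
result = handler(fake_event, None)
print(result)  # {'processed': 2, 'kept': 1}
```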
You will work with the following tools on a daily basis, so you need hands-on experience with:
- Languages: Python and Scala
- Big data processing frameworks:
- Streaming: Apache Flink
- Batch: one or more of Apache Spark, Apache Flink, and Apache Beam
- Streaming services: Apache Kafka / AWS Kinesis
- Managed cloud services: one of AWS EMR / AWS Kinesis Data Analytics
- Building REST APIs