L2 Data Engineer
OpenFX
The Problem
OpenFX processes billions of dollars in transaction volume across global corridors. Every decision around pricing, risk, liquidity, reconciliation, and compliance depends on accurate, timely, and trustworthy data.
Today, data comes from multiple production systems, external partners, and operational workflows. Without strong data engineering foundations, we risk delayed insights, incorrect decisions, reconciliation gaps, and regulatory exposure.
We need Data Engineers who can build reliable data pipelines, ensure data correctness end to end, and make data usable for analytics, risk, and operations at scale.
If we do not get this right, we will either operate blindly or make expensive mistakes.
What You Will Actually Do & Own
In your first 6 to 12 months, you will:
- Own 1 to 2 critical data pipelines that ingest, transform, and publish production data.
- Design data models that are accurate, auditable, and easy to consume.
- Build reliable ingestion pipelines from transactional systems and external partners.
- Ensure data correctness and freshness through validation, monitoring, and backfills.
- Handle late-arriving, duplicate, and inconsistent data safely.
- Partner with Backend, Analytics, Risk, and Ops teams to translate requirements into robust data solutions.
- Improve core data metrics such as pipeline reliability, data latency, and reconciliation accuracy.
- Automate pipeline operations including retries, recovery, and backfills.
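Several of the bullets above (safe handling of late-arriving and duplicate data, idempotent backfills) come down to the same pattern: transforms that can be re-run without corrupting output. A minimal sketch of that idea in Python, where the field names `txn_id` and `event_time` are illustrative and not OpenFX's actual schema:

```python
def dedupe_latest(records):
    """Keep one row per transaction id, preferring the newest event_time.

    Late-arriving corrections overwrite earlier versions; exact duplicates
    collapse to a single row. Because re-running the function over the same
    input yields the same output, backfills and retries are safe.
    Timestamps are ISO 8601 strings, which compare correctly as text.
    """
    latest = {}
    for rec in records:
        key = rec["txn_id"]
        if key not in latest or rec["event_time"] > latest[key]["event_time"]:
            latest[key] = rec
    return sorted(latest.values(), key=lambda r: r["txn_id"])

batch = [
    {"txn_id": "t1", "event_time": "2024-05-01T10:00:00Z", "amount": 100},
    {"txn_id": "t1", "event_time": "2024-05-01T10:05:00Z", "amount": 95},   # late correction
    {"txn_id": "t2", "event_time": "2024-05-01T09:00:00Z", "amount": 250},
    {"txn_id": "t2", "event_time": "2024-05-01T09:00:00Z", "amount": 250},  # exact duplicate
]

clean = dedupe_latest(batch)  # two rows: t1 at 95 (corrected), t2 at 250
```

In production this logic typically lives in SQL (e.g. a `ROW_NUMBER()` window over the key) or a streaming job rather than in-memory Python; the sketch only shows the invariant.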
What Success Looks Like
You will be measured on:
- Data correctness: Pipelines produce accurate and consistent data
- Reliability: Pipelines run on time with minimal failures
- Trust: Stakeholders rely on data for decisions
- Operational readiness: Issues are detected before consumers are impacted
- Decision quality: Data models support real business questions
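"Issues are detected before consumers are impacted" usually means automated checks on freshness and volume that page before a dashboard goes stale. A minimal freshness check, assuming a hypothetical one-hour lag threshold:

```python
from datetime import datetime, timedelta, timezone

def is_fresh(last_loaded_at, now, max_lag=timedelta(hours=1)):
    """Return True if the latest pipeline load is within the allowed lag.

    Run on a schedule, a check like this alerts on stale data before
    downstream consumers notice. The one-hour threshold is illustrative;
    real thresholds depend on each dataset's SLA.
    """
    return (now - last_loaded_at) <= max_lag

now = datetime(2024, 5, 1, 12, 0, tzinfo=timezone.utc)
ok = is_fresh(datetime(2024, 5, 1, 11, 30, tzinfo=timezone.utc), now)    # within SLA
stale = is_fresh(datetime(2024, 5, 1, 9, 0, tzinfo=timezone.utc), now)   # breaches SLA
```

Orchestration tools such as Airflow ship equivalent checks (SLAs, sensors); the point is that freshness is measured, not assumed.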
Requirements
Required (Non-negotiable)
- 3 to 7 years of experience in data engineering or backend data systems
- Strong SQL and data modeling fundamentals
- Experience building batch and/or streaming pipelines
- Experience working with relational databases and data warehouses
- Experience handling schema evolution and backfills
- Understanding of data quality, validation, and monitoring
- Ability to clearly explain data tradeoffs
Preferred (Accelerates ramp, not required)
- Experience with Kafka, Spark, or Flink
- Experience with modern warehouses (BigQuery, Snowflake, Redshift)
- Experience with Airflow or similar orchestration tools
- Experience in fintech or regulated domains
- Familiarity with event-driven architectures
- Experience mentoring junior engineers