Industry: Banking
Job Type: Permanent
Job Location: Makati
Work Setup: Onsite, transitioning to Hybrid
Experience Level: Experienced
Responsibilities:
- Design and implement scalable data pipelines using Azure Databricks, Apache Spark, and Delta Lake.
- Develop ETL/ELT processes to ingest, transform, and load data from various sources (SQL, APIs, flat files, cloud storage); a brief illustrative sketch follows this list.
- Collaborate with cross-functional teams to understand data requirements and deliver solutions.
- Optimize data workflows for performance, reliability, and cost-efficiency.
- Ensure data governance, security, and compliance across all data platforms.
- Monitor and troubleshoot data pipeline issues, and implement preventive fixes to stop them from recurring.
- Maintain documentation for data architecture, processes, and best practices.
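For context, here is a minimal sketch of the kind of pipeline work described above: ingesting a flat file, applying a simple transform, and loading the result into a Delta Lake table with PySpark. It assumes the delta-spark package is available (on Azure Databricks the Delta extensions are preconfigured, so the builder options below are unnecessary there); file paths and column names are illustrative placeholders, not part of this role's actual systems.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Build a Spark session with Delta Lake support (assumes delta-spark is
# installed; on Azure Databricks these two configs are already set).
spark = (
    SparkSession.builder.appName("customer-ingest")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config(
        "spark.sql.catalog.spark_catalog",
        "org.apache.spark.sql.delta.catalog.DeltaCatalog",
    )
    .getOrCreate()
)

# Ingest: read a flat file from cloud storage (placeholder path).
raw = spark.read.csv("/mnt/raw/customers.csv", header=True, inferSchema=True)

# Transform: normalize a column and stamp the load time.
cleaned = (
    raw.withColumn("email", F.lower(F.col("email")))
    .withColumn("ingested_at", F.current_timestamp())
)

# Load: write to a Delta table (overwrite keeps this example simple;
# production pipelines would typically append or merge incrementally).
cleaned.write.format("delta").mode("overwrite").save("/mnt/curated/customers")
```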
Requirements:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- 5+ years of experience in data engineering or related roles.
- Strong proficiency in Azure Databricks, Apache Spark, and PySpark.
- Experience with Azure Data Lake, Azure Synapse, Azure Data Factory, and Azure Blob Storage.
- Solid understanding of SQL, data modeling, and data warehousing concepts.
- Familiarity with CI/CD tools and version control (e.g., Git, Azure DevOps).
- Knowledge of data governance and security best practices.