Responsibilities:
- Design and maintain scalable data pipelines with Spark Structured Streaming (see the sketch after this list).
- Implement a Lakehouse architecture on Apache Iceberg.
- Develop ETL processes for data ingestion and transformation.
- Ensure data integrity, quality, and governance.
- Collaborate with stakeholders and IT teams to ensure seamless solution integration.
- Optimize data processing workflows and performance.
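For context on the day-to-day stack, here is a minimal sketch of the kind of pipeline this role involves: streaming events from Kafka into an Iceberg table with Spark Structured Streaming. The catalog name, warehouse path, broker address, topic, and table name are illustrative placeholders, not TLVTech infrastructure.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.Trigger

object EventsToIceberg {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("events-to-iceberg")
      // Register an Iceberg catalog; the catalog name and warehouse path are placeholders.
      .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
      .config("spark.sql.catalog.lake.type", "hadoop")
      .config("spark.sql.catalog.lake.warehouse", "s3://example-bucket/warehouse")
      .getOrCreate()

    // Read a stream of raw events from Kafka (broker and topic are placeholders).
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "events")
      .load()
      .selectExpr("CAST(value AS STRING) AS payload", "timestamp AS event_time")

    // Append each micro-batch to an Iceberg table (assumed to already exist in the
    // 'lake' catalog); the checkpoint gives exactly-once writes across restarts.
    events.writeStream
      .format("iceberg")
      .outputMode("append")
      .trigger(Trigger.ProcessingTime("1 minute"))
      .option("checkpointLocation", "s3://example-bucket/checkpoints/events")
      .toTable("lake.db.events")
      .awaitTermination()
  }
}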
Requirements:
- 5+ years of experience in data engineering.
- Expertise in Apache Spark and Spark Structured Streaming.
- Hands-on experience with Apache Iceberg or a similar open table format.
- Proficiency in Scala, Java, or Python.
- Knowledge of big data technologies (Hadoop, Hive, Presto).
- Experience with cloud platforms (AWS, Azure, GCP) and SQL.
- Strong problem-solving, communication, and collaboration skills.
Location: Ra'anana
Start Date: Immediate
TLVTech is a dynamic technology firm dedicated to building exceptional products using modern technologies for the world's most admired companies. We pride ourselves on innovation, collaboration, and delivering excellence in every project.