Job Title: Data Engineer – PySpark (6+ Years)
• 6 to 10 years of overall IT experience
• Core Skills
• Strong hands-on working experience with the Big Data stack, including PySpark and Core Java
• Good understanding of RDBMS databases and Linux/UNIX
• Take ownership of business-critical and complex applications.
• Strong knowledge of multi-threading and high-volume batch processing
• Able to design Big Data batch and real-time data services
• Strong performance tuning skills in PySpark
• Experience with the AWS platform
• Hands-on experience with Agile and CI/CD processes
• Exposure to Presto/Trino, Iceberg, Spark, Snowflake