Location : Remote
Employment Type : Full Time
Experience : 4–5 Years

Role Overview

We are seeking a hands-on Data Engineer with strong expertise in Databricks, PySpark, and data pipeline development. You will be responsible for building scalable, production-grade data systems that power analytics and business insights.
This role requires someone who can own end-to-end data workflows, from ingestion to transformation and delivery, while working closely with analytics and business teams.

Key Responsibilities

  • Design and implement scalable data pipelines using Databricks and PySpark
  • Build robust ETL/ELT workflows for batch and (preferably) near real-time data processing
  • Develop and maintain high-performance data models for analytics and reporting
  • Optimize data processing jobs for performance, cost, and reliability
  • Collaborate with stakeholders to understand data requirements and translate them into technical solutions
  • Implement data quality checks, validation, and monitoring mechanisms
  • Work in an Agile environment, contributing to sprint planning, reviews, and delivery
  • Maintain clean, well-documented, and production-ready code

Core Requirements

  • 4–5 years of experience in Data Engineering
  • Strong hands-on experience with:
    • Databricks (mandatory)
    • PySpark (mandatory)
    • Python
    • SQL (advanced level)
  • Proven experience in building and managing data pipelines at scale
  • Solid understanding of data warehousing and analytics concepts
  • Experience working in Agile/Scrum teams

Preferred Qualifications (Good to Have)

  • Experience with AWS (S3, Glue, Redshift, etc.) or Azure
  • Familiarity with Medallion Architecture (Bronze/Silver/Gold layers)
  • Experience with orchestration tools like Airflow
  • Exposure to cost optimization and performance tuning in Databricks

Soft Skills

  • Strong communication skills – ability to work with both technical and non-technical stakeholders
  • High ownership and accountability in a remote work setup
  • Problem-solving mindset with attention to detail

What We’re Looking For

  • Someone who can independently drive data engineering tasks
  • Comfortable working in a fast-paced, product-oriented environment
  • Focused on scalable, clean, and efficient data solutions rather than just scripting

Why Join Us

  • Fully remote role
  • Opportunity to work on a modern data stack (Databricks + cloud)
  • High-impact role with ownership and growth opportunities

    Data Engineer (Databricks & PySpark)


    We are looking for Data Engineers with 4–5 years of experience to join our remote team.
    If you thrive in an Agile environment and have a passion for building scalable, production-grade data pipelines using Databricks and PySpark, we want to hear from you.


    Please fill out the form below to help us understand your technical expertise and professional background.

    Basic Information

    First Name *

    Last Name *

    Email *

    Phone Number *

    LinkedIn Profile URL

    Current Location *

    Professional Experience

    Total Years of Experience *

    Databricks & PySpark Experience

    Current Company *

    Current Role *

    Notice Period *

    Technical Skills

    Databricks *

    PySpark *

    Python *

    SQL *

    Technical Expertise & Tools

    Cloud Platforms *

    Used Airflow? *

    Tools / Technologies

    Certifications

    Any Certifications? *

    List Certifications

    Willing for Certifications? *

    Final Details

    Current CTC *

    Expected CTC *

    Upload Resume *

    Declaration


    I hereby declare that all the information provided in this application is true, complete, and accurate to the best of my knowledge.
    I understand that any misrepresentation, falsification, or omission of information may result in the rejection of my application or, if employed, termination of my employment.