Bangalore, India

Sr. Data Engineer

Our client is a leading global financial services firm providing a wide range of investment banking, securities, investment management, and wealth management services. This is an exciting opportunity for an experienced software and data engineer. You will join a team designing and developing modern data management tools to curate important data sets, models, and processes, while identifying opportunities for process automation and further efficiencies.

 

Position Type: Permanent

Mode of Interview: Virtual

Experience: 6–10 years

Notice Period: Immediate

Skills: Hands-on experience in AWS data engineering, Databricks, SQL, Python, and Unity Catalog

 

Please contact Radhika, or email your CV directly in Word format, quoting job reference number JOB 15949, to Radhika@theedgepartnership.com

 

ONLY CANDIDATES CURRENTLY SERVING THEIR NOTICE PERIOD SHOULD SHARE THEIR UPDATED CV.

 

Please note that due to the high number of applications, only shortlisted candidates will be contacted. If you do not hear from us within the next 5 business days, we regret to inform you that your application for this position was unsuccessful.

 

Apply for this Job

Key responsibilities

  • Develop and optimize ETL processes using Databricks and Apache Spark.
  • Design and implement efficient data models and schemas for optimal data storage and retrieval.
  • Collaborate with data scientists and analysts to build and deploy machine learning models.
  • Ensure data quality, consistency, and reliability across all data pipelines.
  • Optimize query performance and troubleshoot data processing issues.
  • Implement robust data security and access control measures.
  • Stay abreast of the latest Databricks features and best practices.
  • Participate in code reviews and contribute to the team’s knowledge base.
  • Mentor junior team members and share expertise in big data technologies.
  • Develop and maintain documentation for data processes and workflows.
  • Collaborate with stakeholders to understand business requirements and translate them into technical solutions.

Role requirements

  • Bachelor’s degree in computer science, data science, or a related field.
  • 6+ years of experience in data engineering or a related field.
  • Strong proficiency in the Databricks platform and Apache Spark.
  • Excellent SQL skills and experience with Python or Scala.
  • Experience using Databricks Unity Catalog for data governance.
  • Proven experience in designing and implementing ETL processes and data pipelines.
  • Solid understanding of big data technologies and distributed computing concepts.
  • Experience working with cloud platforms such as AWS, Azure, or GCP.
  • Familiarity with version control systems, preferably Git.
  • Strong analytical and problem-solving skills with keen attention to detail.
  • Excellent communication and collaboration skills.
  • Experience with data warehousing and analytics concepts.