About The Role
As a Data Engineer at Fractal, you will develop and manage data pipelines using AWS services and Python. The role requires expertise in Terraform, PySpark, AWS (S3, Glue, SageMaker, Lambda), and strong Python development skills. You will collaborate with a team of professionals dedicated to improving business decisions through data-driven insights.

Candidate Profile
We are looking for candidates with a passion for data engineering and a strong foundation in Python and AWS technologies. Ideal candidates have excellent analytical skills and thrive in a fast-paced environment alongside enthusiastic, high-achieving colleagues.

Qualifications Required
- Hands-on experience with AWS services such as S3, Glue, SageMaker, and Lambda
- Proficiency in PySpark and Terraform

Preferred Qualifications
- Experience in large-scale data management
- Strong problem-solving and analytical abilities
- Ability to work independently and in a team

Location
Bengaluru, Pune, Mumbai, Chennai, Gurgaon

Job or Requisition ID
SR-26409

Expected Travel
Minimal to moderate travel, depending on project requirements

Employment Type
Full-time

About The Company
Fractal Analytics is a global leader in artificial intelligence and analytics, partnering with companies worldwide to deliver innovative digital transformation solutions. At Fractal, we are committed to powering every human decision in the enterprise. Our expertise spans AI, engineering, design, and digital transformation, creating value for clients by unlocking insights from data. We work with top-tier companies globally, helping them stay ahead in a rapidly evolving business environment.