Algonomy - Senior Backend/Data Engineer
Job Location
Bangalore, India
Job Description
Senior Backend/Data Engineer - Python FastAPI | Databricks | Azure Data Engineering

Location: Bangalore
Experience Level: 5 Years

Job Summary:
We are seeking a highly skilled Backend/Data Engineer with expertise in Python FastAPI, Databricks, and Azure data engineering tools. The ideal candidate has a strong grasp of data structures and algorithms, as well as experience building scalable backend services and data pipelines in a cloud environment. Proficiency with the Medallion Architecture, ADLS Gen2, and ADF is essential.

Key Responsibilities:
- Design and implement RESTful APIs and microservices using Python FastAPI for high-performance, data-driven applications.
- Architect and develop scalable, robust data pipelines using Azure Databricks and PySpark.
- Implement and optimize data workflows following the Medallion Architecture (Bronze, Silver, Gold) for structured data processing.
- Integrate and orchestrate data movement using Azure Data Factory (ADF) and automate ETL/ELT workflows.
- Work with Azure Data Lake Storage Gen2 (ADLS Gen2) for efficient storage and access of large-scale data assets.
- Ensure high performance and scalability through strong foundational skills in data structures, algorithms, design patterns, and distributed systems.
- Collaborate with cross-functional teams, including data scientists, analysts, and DevOps engineers.
- Maintain code quality through unit testing, peer code reviews, and CI/CD best practices.
- Ensure data security, privacy, and compliance across all services and pipelines.
- Monitor, debug, and optimize applications and data jobs in production environments.

Required Skills and Qualifications:
- Bachelor's/Master's degree in Computer Science, Engineering, or a related field.
- 5 years of experience in backend development with Python, including production-grade APIs using FastAPI or Flask.
- Strong understanding of data structures, algorithms, and problem-solving.
- Hands-on experience with Databricks and PySpark for big data processing.
- Solid understanding of the Medallion Architecture and experience designing layered data models (Bronze/Silver/Gold).
- Expertise in Azure Data Factory (ADF), including pipeline orchestration, parameterization, and triggers.
- Proficiency with Azure Data Lake Storage Gen2 (ADLS Gen2) and data partitioning strategies.
- Familiarity with Delta Lake, versioning, and ACID transaction handling in a distributed context.
- Experience with SQL, performance tuning, and working with structured and semi-structured data.
- Knowledge of CI/CD, Git, Docker, and infrastructure-as-code practices.
- Solid grasp of cloud-native principles and the Azure ecosystem (Databricks, Synapse, Key Vault).

Preferred Qualifications:
- Experience with streaming data frameworks such as Kafka and Spark Structured Streaming.
- Knowledge of monitoring/logging tools such as Azure Monitor, Datadog, or Prometheus.
- Exposure to DevOps tools (Terraform, Azure DevOps, GitHub Actions).
- Understanding of MLOps and model deployment using MLflow or similar tools.

About the Company:
Algonomy helps consumer businesses maximize customer value by automating decisioning across the retail business lifecycle with AI-enabled solutions for eCommerce, Marketing, Merchandising, and Supply Chain. Algonomy is a trusted partner to more than 400 leading brands, with a global presence spanning over 20 countries. Our innovations have garnered recognition from top industry analysts such as Gartner and Forrester.
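For candidates unfamiliar with the term, the Medallion Architecture mentioned above layers data as Bronze (raw), Silver (validated), and Gold (business-level aggregates). Below is a minimal, framework-free sketch of that layering; a real Databricks pipeline would use PySpark DataFrames and Delta tables, and all record and field names here are illustrative assumptions, not part of the role.

```python
# Illustrative Medallion layering (Bronze -> Silver -> Gold) with plain dicts.
# Real implementations would use PySpark and Delta Lake on Azure Databricks.

RAW_EVENTS = [
    {"order_id": "1", "amount": "19.99", "country": "IN"},
    {"order_id": "2", "amount": "bad-value", "country": "IN"},  # malformed row
    {"order_id": "3", "amount": "5.00", "country": "US"},
]

def bronze(records):
    """Bronze: land raw records as-is, adding only ingestion metadata."""
    return [{**r, "_source": "orders_feed"} for r in records]

def silver(records):
    """Silver: validate and conform types, skipping rows that fail parsing."""
    cleaned = []
    for r in records:
        try:
            cleaned.append({
                "order_id": int(r["order_id"]),
                "amount": float(r["amount"]),
                "country": r["country"],
            })
        except ValueError:
            continue  # in practice, malformed rows are quarantined for review
    return cleaned

def gold(records):
    """Gold: aggregate into a business-level view (revenue per country)."""
    totals = {}
    for r in records:
        totals[r["country"]] = totals.get(r["country"], 0.0) + r["amount"]
    return totals

print(gold(silver(bronze(RAW_EVENTS))))  # {'IN': 19.99, 'US': 5.0}
```

The key design point is that each layer is a pure transformation of the previous one, so a downstream fix (say, a new validation rule in Silver) can be re-run without re-ingesting raw data.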
Location: Bangalore, IN
Posted Date: 5/1/2025
Contact Information
Contact: Human Resources, Algonomy