Opportunity posted on 11 02 2026
Databricks Platform Expert
- Location: Pune, India
- Contract Type: Regular
Position Details
Key Responsibilities
Manage, configure, and administer Databricks workspaces, clusters, SQL Warehouses, serverless compute, jobs, and workspace objects.
Implement and manage Unity Catalog, including catalogs, schemas, tables, access controls, and data lineage (see the Unity Catalog sketch after this list).
Optimize cluster policies, auto-scaling strategies, and cost management for serverless and classic compute (see the cluster-policy sketch after this list).
Serve as the SME for Databricks infrastructure, governance, and security best practices.
Monitor workspace performance, cluster stability, logs, job reliability, and platform health.
Implement CI/CD pipelines for notebooks, jobs, and Delta Live Tables using Git integration.
Support user provisioning, access controls (ACLs), secrets management, and workspace SSO.
Write efficient Spark (PySpark / SQL / Scala) code for ETL, data transformations, and pipeline optimizations (see the ETL sketch after this list).
Assist data engineering teams with Spark job debugging, performance tuning, and code reviews.
Build and maintain production-grade pipelines leveraging Delta Lake, Databricks Jobs, and DLT (see the Delta Live Tables sketch after this list).
Implement and manage RBAC, SCIM provisioning, IAM, service principals, and cluster access controls.
Ensure compliance with enterprise data governance, audit, and logging requirements.
Manage secrets through Azure Key Vault-backed secret scopes and enforce secure credential handling (see the secrets sketch after this list).
Support audit reports, compliance reviews, and workspace security configuration.
Monitor job failures, cluster lifecycle performance, and system events using Databricks logs and cloud-native monitoring tools (Azure Monitor).
Create automated alerts and observability dashboards for platform usage, cost, and performance (see the usage-query sketch after this list).
Troubleshoot Databricks runtime issues, library conflicts, and Spark execution failures.
Collaborate with cloud and network teams on VNet, peering, and private-link connectivity issues.
Develop cost governance policies for cluster sizes, job policies, and SQL Warehouse tiers.
Identify opportunities to reduce cost via autoscaling, spot instances (classic clusters), and job consolidation.
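
As referenced above, a minimal Unity Catalog sketch of the catalog/schema/grant workflow this role owns. The catalog, schema, and group names are hypothetical placeholders, and `spark` is the session object a Databricks notebook provides.

```python
# Minimal Unity Catalog setup sketch. "analytics", "sales", and the
# "data_engineers" group are hypothetical placeholders; runs in a
# Databricks notebook, where `spark` is predefined.
spark.sql("CREATE CATALOG IF NOT EXISTS analytics")
spark.sql("CREATE SCHEMA IF NOT EXISTS analytics.sales")

# Grant the usual read path: USE on the containers, SELECT on the data.
spark.sql("GRANT USE CATALOG ON CATALOG analytics TO `data_engineers`")
spark.sql("GRANT USE SCHEMA ON SCHEMA analytics.sales TO `data_engineers`")
spark.sql("GRANT SELECT ON SCHEMA analytics.sales TO `data_engineers`")
```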
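
A sketch of a cost-guardrail cluster policy created with the Databricks SDK for Python (`databricks-sdk`); the policy name, limits, and Azure node types are illustrative assumptions, not prescribed values.

```python
# Cluster-policy sketch using the Databricks SDK for Python.
# All names and limits below are illustrative.
import json
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()  # auth from environment variables or .databrickscfg

policy_definition = {
    # Cap autoscaling so one job cannot claim unbounded compute.
    "autoscale.max_workers": {"type": "range", "maxValue": 10},
    # Force idle clusters to terminate.
    "autotermination_minutes": {"type": "fixed", "value": 30, "hidden": True},
    # Restrict node choices to approved general-purpose Azure VM types.
    "node_type_id": {
        "type": "allowlist",
        "values": ["Standard_DS3_v2", "Standard_DS4_v2"],
    },
}

w.cluster_policies.create(
    name="cost-guardrail-policy",
    definition=json.dumps(policy_definition),
)
```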
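
An ETL sketch in PySpark writing a partitioned Delta table; the source path, column names, and target table are hypothetical.

```python
# Illustrative PySpark ETL writing Delta. Paths, columns, and the target
# table are made up; assumes a Databricks runtime where `spark` exists.
from pyspark.sql import functions as F

orders = spark.read.format("json").load("/Volumes/analytics/sales/raw/orders/")

daily_revenue = (
    orders
    .filter(F.col("status") == "COMPLETED")            # keep finished orders
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date")
    .agg(F.sum("amount").alias("revenue"))
)

(
    daily_revenue.write.format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .saveAsTable("analytics.sales.daily_revenue")
)
```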
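
A Delta Live Tables sketch with placeholder table and source names; a file like this is attached to a DLT pipeline rather than run as a plain script.

```python
# DLT sketch: bronze ingest plus a silver table with a quality expectation.
# Table names, paths, and the rule are placeholders.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw orders ingested incrementally from cloud storage.")
def orders_bronze():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/Volumes/analytics/sales/raw/orders/")
    )

@dlt.table(comment="Completed orders with typed dates.")
@dlt.expect_or_drop("valid_amount", "amount > 0")  # drop rows failing the rule
def orders_silver():
    return (
        dlt.read_stream("orders_bronze")
        .filter(F.col("status") == "COMPLETED")
        .withColumn("order_date", F.to_date("created_at"))
    )
```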
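
A secrets sketch, assuming a hypothetical Azure Key Vault-backed scope named `kv-prod`; `dbutils` is available in Databricks notebooks, and fetched values are redacted if printed.

```python
# Secure credential handling sketch. Scope, key, host, and table names
# are hypothetical; the scope would be backed by Azure Key Vault.
jdbc_password = dbutils.secrets.get(scope="kv-prod", key="sql-dw-password")

customers = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://example.database.windows.net:1433;database=dw")
    .option("user", "etl_user")
    .option("password", jdbc_password)  # never hard-code credentials
    .option("dbtable", "dbo.customers")
    .load()
)
```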
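
A usage-query sketch against the `system.billing.usage` system table (assuming system tables are enabled in the workspace); a query like this can feed a cost dashboard or an alert threshold.

```python
# DBU consumption by SKU over the last week; the window is illustrative.
# Assumes Unity Catalog system tables are enabled.
usage = spark.sql("""
    SELECT usage_date, sku_name, SUM(usage_quantity) AS dbus
    FROM system.billing.usage
    WHERE usage_date >= date_sub(current_date(), 7)
    GROUP BY usage_date, sku_name
    ORDER BY usage_date, sku_name
""")
usage.show(truncate=False)
```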
Required Qualifications
4–6 years of experience working with Databricks as an administrator or data engineer.
Strong expertise in Apache Spark programming (PySpark preferred; SQL or Scala is a plus).
Hands-on experience with Databricks Jobs, cluster configuration, SQL Warehouses, and Unity Catalog.
Deep understanding of Delta Lake, ACID transactions, and lakehouse architecture (see the MERGE sketch after this list).
Experience with Git, CI/CD, and DevOps concepts for data engineering workflows.
Knowledge of cloud platforms (Azure).
Familiarity with IAM, networking basics, monitoring tools, and security patterns.
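
As referenced above, a minimal sketch of the Delta Lake ACID upsert pattern via MERGE; the table and column names are hypothetical placeholders.

```python
# Delta MERGE sketch: atomically upsert daily figures into a target table.
# Table and column names are placeholders; `spark` comes from the runtime.
from delta.tables import DeltaTable
from pyspark.sql import functions as F

# Hypothetical corrections arriving from upstream.
updates = spark.createDataFrame(
    [("2026-02-10", 1250.0)], "order_date string, revenue double"
).withColumn("order_date", F.to_date("order_date"))

target = DeltaTable.forName(spark, "analytics.sales.daily_revenue")

(
    target.alias("t")
    .merge(updates.alias("s"), "t.order_date = s.order_date")
    .whenMatchedUpdateAll()     # replace the matching day
    .whenNotMatchedInsertAll()  # add days not yet present
    .execute()                  # single ACID transaction
)
```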