Job Details

GCP Data Engineer

  2026-01-26     Metasys Technologies     All Cities, AK
Description:

GCP Data Engineer
Remote Position
Duration: 12+ months
Potential to convert to a permanent position


Job Summary
The client is seeking an experienced Data Engineer to design, build, and optimize scalable data pipelines and analytics platforms using Databricks and Snowflake. The ideal candidate will have strong expertise in cloud-based data engineering, distributed processing, and modern data lakehouse architectures, enabling data-driven decision-making across the organization.

Key Responsibilities

  • Design, develop, and maintain end-to-end data pipelines using Databricks, Snowflake, and cloud-native services
  • Build and optimize ETL/ELT workflows for structured and semi-structured data
  • Implement data lakehouse architectures leveraging Delta Lake and Snowflake
  • Develop and optimize Spark (PySpark/Scala) jobs for large-scale data processing
  • Ensure data quality, reliability, performance, and scalability
  • Implement data modeling techniques (star/snowflake schema, dimensional modeling) in Snowflake
  • Optimize query performance, clustering, partitioning, and cost management in Snowflake
  • Collaborate with data scientists, analysts, and business stakeholders to deliver analytics-ready datasets
  • Implement CI/CD pipelines and automation for data workflows
  • Monitor and troubleshoot data pipelines and production issues
  • Ensure data governance, security, and compliance standards are met
  • Design, develop, and maintain end-to-end data pipelines using GCP services
  • Build and optimize batch and real-time ETL/ELT pipelines
  • Develop scalable data architectures using BigQuery, Cloud Storage, and Dataflow

Required Skills & Qualifications
  • 5+ years of experience in Data Engineering or related roles
  • Strong hands-on experience with Databricks (Spark, Delta Lake, Notebooks, Jobs)
  • Strong hands-on experience with Snowflake (data modeling, performance tuning, SQL optimization)
  • Proficiency in Python (PySpark) and advanced SQL
  • Experience with cloud platforms: AWS, Azure, or GCP
  • Experience with data ingestion tools (ADF, Fivetran, Airflow, or similar)
  • Solid understanding of data warehousing concepts and lakehouse architectures
  • Experience with Git-based version control and CI/CD
  • Strong analytical and problem-solving skills

Preferred / Nice-to-Have Skills
  • Experience with Databricks Unity Catalog
  • Experience with Snowflake Streams & Tasks
  • Exposure to real-time or streaming data (Kafka, Event Hub, Kinesis)
  • Knowledge of dbt or other transformation frameworks
  • Experience with Terraform or Infrastructure as Code
  • Familiarity with data governance and lineage tools

