   Canadian Tech jobs   

Lead Data Engineer – DevOps / Docker / Kubernetes

HeadSource International

This is a Contract position in Abbotsford, BC posted July 15, 2021.

POSITION: Lead Data Engineer

LOCATION: This is a full-time permanent role that can be worked from Waterloo or Toronto, Ontario.

(Occasional travel to Waterloo will be required.)

ROLE SUMMARY: Accelerating releases of quality solutions and extending the platform with partners are key outcomes of our platform strategy and the requirements of this position.

This person will help our client achieve its bold ambitions and build digital products, through continuous learning and development, that its business partners will love to use.

RESPONSIBILITIES:

– Collaborate with product managers, designers, engineers, and product owners to discover the best ways to enable business value through the core platform

– Bring a flexible, adaptive mindset, comfortable with ambiguity in a rapidly changing technology environment

– Be a continuous learner, not only for your own career, but from the team's successes and failures

– Lead the process of building and operationalizing data platforms in the cloud using one of the public clouds, preferably MS Azure

– Build open APIs and microservices with a loosely coupled architecture

– Automate data pipelines, DevOps, and CI/CD

– Embrace open-source communities, both internally and externally, sharing your knowledge across your team and peers

REQUIRED SKILL SET:

– Experience automating data pipelines, DevOps, and CI/CD

– Hands-on experience with microservices

– Expertise with Kubernetes and Docker

IDEAL SKILL SET:

– Hands-on experience with the Apache data ecosystem and toolset: Sqoop, NiFi, Pig, Spark, HDFS, Hive, HBase, etc.

– Hands-on experience with Big Data streaming frameworks and tools (e.g. Spark Streaming, Storm, Kafka)

– Experience in exploratory data analysis: query and process Big Data, provide reports, and summarize and visualize the data

– Experience programming in both compiled languages (Java, Scala) and scripting languages (Python or R)

– Experience with canary deployments, zero-downtime and zero-data-loss releases, and hot-hot DR

– Exposure to and an understanding of Agile Scrum methodologies, and experience working in an Agile team

– Experience in data processing, performance analysis, tuning, and capacity planning

– Experience using Git flow
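For a flavor of the exploratory-analysis work described above, here is a minimal sketch in plain Python. The records, field names, and `summarize` helper are all hypothetical; in practice the data would come from a Big Data query (e.g. Hive or Spark SQL) rather than an in-memory list.

```python
# Minimal exploratory-analysis sketch: group hypothetical pipeline-run
# records by name and report a count and mean duration for each group.
from collections import defaultdict
from statistics import mean

# Hypothetical records standing in for rows returned by a Big Data query.
records = [
    {"pipeline": "ingest", "duration_s": 12.0},
    {"pipeline": "ingest", "duration_s": 14.0},
    {"pipeline": "transform", "duration_s": 30.0},
]

def summarize(rows):
    """Group rows by pipeline name; return count and mean duration per group."""
    groups = defaultdict(list)
    for row in rows:
        groups[row["pipeline"]].append(row["duration_s"])
    return {
        name: {"count": len(vals), "mean_s": mean(vals)}
        for name, vals in groups.items()
    }

print(summarize(records))
```

The same group-and-aggregate pattern scales up directly to Spark DataFrames (`groupBy(...).agg(...)`) when the data no longer fits in memory.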