
Lead Fullstack Big Data Engineer


This is a Contract position in Toronto, ON, posted September 13, 2021.

Client: Major Bank
Role: Lead Fullstack Big Data Engineer
Duration: 6 months, plus likely extension
Rate: Open, depending on experience
Location: Toronto, ON

Your New Company

Our client, a globally recognized bank with multiple offices across Canada and throughout the world, is looking to hire a Lead Fullstack Big Data Engineer for a minimum six-month contract in their Toronto office.

They have an excellent reputation within their sector and are known as a market leader.

Your New Role

Wholesale Data & Analytics is creating a world-class “data-driven” organization that leads its competitors and inspires its employees.

They are building a revolutionary data analytics ecosystem to generate business insights and provide great customer experience from well-managed and trusted data assets.

Their global team is partnering with IT to deliver an ecosystem of curated, enriched and protected sets of data – created from global, raw, structured and unstructured sources.

Their Wholesale Big Data Lake is the largest aggregation of data ever within the bank.

They have over 300 sources and a rapidly growing book of work.

They are utilising the latest technologies to solve business problems and deliver value and truly unique insights.

Wholesale is looking for Full Stack Engineers to be part of the core big data technology and design team.

Role-holders will be entrusted to develop solutions and design ideas that enable the software to meet the acceptance and success criteria.

They will work with architects and BAs to build data components in the data environment.

RESPONSIBILITIES: As a key member of the technical team alongside Engineers, Data Scientists and Data Users, you will be expected to define and contribute at a high level to many aspects of our collaborative Agile development process:
• Software design, Java development and automated testing of new and existing components in an Agile, DevOps and dynamic environment
• Promoting development standards, code reviews, mentoring and knowledge sharing
• Product and feature design, scrum story writing
• Data engineering and management
• Product support and troubleshooting
• Implementing tools and processes, handling performance, scale, availability, accuracy and monitoring
• Liaison with BAs to ensure that requirements are correctly interpreted and implemented
• Liaison with Testers to ensure that they understand how requirements have been implemented, so that they can be effectively tested
• Participation in regular planning and status meetings
• Input to the development process through involvement in Sprint reviews and retrospectives
• Input into system architecture and design
• Peer code reviews
• 3rd line support

STAKEHOLDERS:
• Wholesale, Global Markets, CMB and Global Banking business lines
• Global and Regional Heads of business
• Distribution Platforms IT
• Internal clients, to facilitate effective change and ensure expectations are effectively managed

CHALLENGES:
• Integrating with an established, complex multi-tenant Hadoop-based project working to tight deadlines
• Refactoring the current technology stack and architecture from on-premise Hadoop to Google Cloud Platform
• Working with a globally dispersed and diversified team
• Supporting specific source on-boarding activities in line with project delivery timelines

What You’ll Need to Succeed

Must Have Skills:
• 2 years in a team-leading role
• Scala/Spark including spark optimization
• Java – Rest API development
• Hive
• Hadoop (Hdfs)
• Elasticsearch exposure
• Ansible/Jenkins
• Cloud familiarity is a plus
• Experience with additional big data technologies is a plus (Presto, Kafka, HBase, etc.)

Detailed must haves:
• Experienced in Java, Scala and/or Python, Unix/Linux environment on-premises and in the cloud
• Java development and design using Java 1.7/1.8; advanced understanding of core features of Java and when to use them
• Experience with most of the following technologies (Apache Hadoop, Scala, Apache Spark, Spark Streaming, YARN, Hive, HBase, ETL frameworks, SQL, RESTful services, Kafka, Presto)

• Sound knowledge of working on the Unix/Linux platform
• Hands-on experience building data pipelines using Hadoop components: Sqoop, Hive, Pig, Spark, Spark SQL

• Must have experience developing Hive QL and UDFs for analysing semi-structured/structured datasets
• Experience with time-series/analytics DBs such as Elasticsearch
• Experience with industry-standard version control tools (Git, GitHub), automated deployment tools (Ansible & Jenkins) and requirement management in JIRA
• Exposure to Agile project methodology, as well as other methodologies (such as Kanban)
• Understanding of big data modelling using relational and non-relational techniques
• Coordination between onsite and offshore teams
• Experience debugging code issues and communicating the highlighted differences to the development team/architects
• Understanding or experience of Cloud design patterns

What You’ll Get in Return

The client is offering a 6-month engagement, with a high likelihood of extension and a very competitive rate for the contract.


If you’re available and interested in this role, please reply to this email as soon as you can, attaching your updated resume and hourly rate requirement.