Information Technology - Toronto

Our banking client in Canada (Global Bank) is looking for a Data Engineer for their Data Centre Team at their Toronto location. This is a permanent position.
 
Description:

Data Engineer

Our Global Banking & Markets (GBM) Data Vision
Build a world-class “data-driven” organization that rivals our competitors and inspires our employees, leveraging a revolutionary Data Analytics ecosystem to generate business insights and provide a great customer experience from well-managed and trusted data assets.
 
Our Mission Statement
Our Data Analytics Ecosystem solves complex problems using cutting-edge technologies, helping to rapidly implement insights from data that drive more informed decision making. We will deploy smart machines to process large and complex data sets at a scale that is impossible with legacy manual mining methods. Data underpins everything we do, from risk & regulatory management, through monetization, to predicting client behavior.
 
The Team 
Our Data Science & Engineering teams are partnering with IT to deliver an ecosystem of curated, enriched and protected sets of data, created from global, raw, structured and unstructured sources. Our GBM Big Data Lake is our largest ever aggregation of data. We have over 300 sources, equating to more than 20 PB of data, and a use case portfolio of over 110 projects spanning all the business lines within GBM. We are utilising the latest machine learning tools and technologies to test these hypotheses and deliver value and truly unique insights.
 
The Opportunity 
We are looking for Data Engineers who will work on collecting, storing, processing, and analysing huge sets of data. The primary focus will be on choosing optimal solutions for these purposes, then implementing, maintaining, and monitoring them. You will also be responsible for integrating these solutions with the architecture used across the company and for helping to build out core services that power our Machine Learning and advanced analytics systems.
 
Required Skills
• Ability to process and rationalize structured data, message data, and semi-structured/unstructured data, and to integrate multiple large data sources and databases into one system
• Proficient understanding of distributed computing principles and of the fundamental design principles behind a scalable application
• 4-5 years of experience with, and strong knowledge of, the Big Data ecosystem; experience with Hortonworks/Cloudera platforms
• Practical experience using HDFS
• Practical expertise in developing applications and using querying tools on top of Hive and Spark (PySpark)
• Strong Scala and Spark skills within a big data environment; additional experience using the ELK stack will be highly regarded
• Experience in Python, particularly the Anaconda environment and Python-based ML model deployment
• Experience of Continuous Integration/Continuous Deployment (Jenkins/Hudson/Ansible)
• Experience using Git/GitLab as a version control system
• Experience working in teams using Agile methods (Scrum) and Confluence/JIRA
• Good communication skills (written and spoken), with the ability to engage different stakeholders and synthesize information
 
Nice to Haves
• Knowledge of at least one Python web framework (preferably Flask, Tornado, and/or Twisted)
• Basic understanding of front-end technologies, such as JavaScript, HTML5, and CSS3 would be a plus
• Good understanding of global markets, market microstructure and macroeconomics
• Experience with Google Cloud Platform (Dataproc / Dataflow)
• Experience managing, training, and hiring junior data engineers, and taking a lead role within a team
• Strong experience in developing pipelines in conjunction with data scientists for machine learning solutions (data extraction, processing, feeding models, deploying models, automation)

 
To apply, please send your resume to: k.sinha@maxsys.ca

Position Type: Full Time
Application Deadline: June 7, 2019
Experience Required: 4 years
Job Duration: Permanent
Education Required: Bachelor's