Big Data Architect

Sylphia Consultancy - Toronto, ON (30+ days ago)

· Expertise with complex architecture frameworks that support an organization with multiple business units, geographic areas, and a systems footprint spanning many technologies.

· Deep conceptual and technological understanding of contemporary architecture concepts, industry trends, and best practices.

· Ability to leverage knowledge to assess organizational needs, provide advice on the right solutions to business problems, and address business opportunities with immediate and long-term relevance.

· Ability to provide data integration services: acquire, cleanse, merge, validate, visualize, and mine data.

· Strong people leadership skills with a proven record leading a team of data engineers.

· Technical experience with both on-premises and cloud infrastructure.

· Data warehousing management, database optimization, and administration experience.

· Experience building systems that handle datasets ranging from gigabytes to terabytes.

· Experience with Big Data technologies (e.g., Spark, Scala, Pig, Hive, HBase, Presto, Sqoop, Hadoop, Impala).

· Implementation of the SDLC across development, test, and production environments.

· Implement industry best practices, including documented principles, policies, standards, and governance processes, while considering security, performance, scalability, and reliability.

· Evaluate systems: identify infrastructure constraints and friction points, and implement changes to optimize and improve efficiency.

· Develop and maintain system design documents to illustrate current and future architecture.

What We’re Looking For

· Bachelor’s degree (or higher) in Computer Science, Engineering, Science, Math, or a related technical discipline required.

· 7+ years of design and development experience, with expert knowledge of data warehouse (DW) architecture and ETL processes, and strong ETL programming skills.

· Proven skill in ETL (Extract, Transform, Load): extracting data from a specific source, applying transformation rules, and loading it into the data warehouse.

· Data Warehousing: expertise in relational databases/multidimensional data warehouse design

· Ability to diagnose and resolve system bottlenecks

· Experience in capacity planning for computation and storage needs

· Competency in developing solutions for diverse and complex business problems

· Proactive self-starter with the ability to work independently with minimal supervision

· A collaborative working style and the ability to work effectively within a team

· Excellent verbal and written communication skills

· Ability to work under pressure, adapt to changing environments, and manage multiple large projects
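The ETL flow described in the list above (extract from a source, apply transformation rules, load into the warehouse) can be sketched in miniature. This is a hypothetical illustration for candidates unfamiliar with the pattern, not part of the posting's requirements; the class and record names are invented, and a real pipeline would use a framework such as Spark rather than in-memory lists.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal ETL sketch: extract rows from a CSV-like source, apply a
// transformation rule, and load into an in-memory "warehouse".
// All names here (EtlSketch, Order) are illustrative assumptions.
public class EtlSketch {
    record Order(int id, double amountCad) {}

    // Extract: split raw delimited records pulled from a source.
    static List<String[]> extract(List<String> lines) {
        List<String[]> rows = new ArrayList<>();
        for (String line : lines) rows.add(line.split(","));
        return rows;
    }

    // Transform: apply typing plus a business rule (drop non-positive amounts).
    static List<Order> transform(List<String[]> rows) {
        List<Order> orders = new ArrayList<>();
        for (String[] r : rows) {
            double amount = Double.parseDouble(r[1]);
            if (amount > 0) orders.add(new Order(Integer.parseInt(r[0]), amount));
        }
        return orders;
    }

    // Load: append validated records into the target store.
    static void load(List<Order> orders, List<Order> warehouse) {
        warehouse.addAll(orders);
    }

    public static void main(String[] args) {
        List<Order> warehouse = new ArrayList<>();
        load(transform(extract(List.of("1,19.99", "2,-5.00", "3,42.50"))), warehouse);
        System.out.println(warehouse.size()); // prints 2 (the negative amount is dropped)
    }
}
```

Each stage stays independent, so the transformation rules can be tested without touching the source or the target store.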

How to Stand Out

· MS degree in a technical field or equivalent

· Strong knowledge of Scala and Spark architecture

· Knowledge of the AWS data stack, including S3, EMR, and Data Pipeline

· Experience with Java

· Experience building SSIS packages in Microsoft Visual Studio

· Hands-on implementation of Big Data technologies (e.g., Pig, Hive, Spark, HBase, Presto, Sqoop, Hadoop, Impala, Scala)


Benefits:

  • Bonus scheme
  • Flexible working hours
  • Vacation & paid time off
  • Work from home opportunities

Reference ID: J1002

Job Types: Full-time, Contract, Permanent


Experience:

  • AWS: 2 years (Preferred)
  • Scala: 3 years (Preferred)
  • Spark: 3 years (Preferred)
  • Java: 4 years (Preferred)