Job Category: Information Technology
Application Deadline: November 25, 2019
Experience Required: 4 years
Job Duration: 1 year
Our client in the telecom industry is seeking a Big Data Hadoop Specialist in Mississauga.
Responsible for the development, design, and implementation of application systems. Designs and codes programs, tests the code, and finds and corrects errors to deliver quality work. Interfaces with the technical team to design and implement application systems.
Participate in all aspects of the Big Data solution delivery life cycle, including analysis, design, development, testing, production deployment, and support
Develop standardized practices for delivering new products and capabilities using Big Data technologies, including data acquisition, transformation, and analysis
Ensure Big Data practices integrate into overall data architectures and data management principles (e.g. data governance, data security, metadata, data quality)
Create formal written deliverables and other documentation, and ensure designs, code, and documentation are aligned with enterprise direction, principles, and standards
Train and mentor teams in the use of the fundamental components in the Hadoop stack
Assist in the development of comprehensive and strategic business cases used at management and executive levels for funding and scoping decisions on Big Data solutions
Troubleshoot production issues within the Hadoop environment
Performance tuning of Hadoop processes and applications
Proven experience as a Hadoop Developer/Analyst in the Business Intelligence and data management production support space is required.
Bachelor's degree in Computer Science, Management Information Systems, or Computer Information Systems is required.
Minimum of 2 years of building and coding applications using Hadoop components: HDFS, Kafka, Flume, HBase, Hive, and Sqoop.
Minimum of 2 years of coding in Java, Scala/Spark, Python, Hadoop Streaming, and HiveQL (an illustrative sketch follows this list).
Minimum of 4 years of experience with traditional ETL tools and data warehouse design.
Strong personal leadership and collaborative skills, combined with comprehensive, practical experience and knowledge in end-to-end delivery of Big Data solutions.
Experience in system administration, Exadata, and other RDBMS platforms is a plus.
Must be proficient in SQL/HiveQL.
Hands-on Linux/Unix expertise and scripting skills are required.
Strong communication skills, technology awareness, and the ability to interact and work with senior technology leaders are a must.
Good knowledge of Agile methodology and the Scrum process
Delivery of high-quality work, on time and with little supervision
Critical thinking and analytical abilities
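As an illustration of the Scala/Spark and HiveQL work listed above, here is a minimal sketch of a Spark batch job that lands a file in a Hive table. It is not part of the role description; the path, database, table, and column names are hypothetical placeholders.

```scala
// Illustrative sketch only: a minimal Spark batch job of the kind described above.
// The HDFS path and the table/column names are hypothetical placeholders.
import org.apache.spark.sql.{SparkSession, SaveMode}

object IngestToHive {
  def main(args: Array[String]): Unit = {
    // Hive support lets Spark write managed tables readable from Hive/Impala.
    val spark = SparkSession.builder()
      .appName("ingest-to-hive")
      .enableHiveSupport()
      .getOrCreate()

    // Read a delimited file from HDFS (hypothetical path).
    val events = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("hdfs:///data/raw/events.csv")

    // Write to a partitioned Hive table (hypothetical table name).
    events.write
      .mode(SaveMode.Append)
      .partitionBy("event_date")
      .saveAsTable("analytics.events")

    spark.stop()
  }
}
```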
Top 3 skills that must be seen on the resume:
Scala programming language, with 3-4 years of experience
Spark, with 3-4 years of experience
Cloudera or Hortonworks environment experience (intermediate to senior role)
Any testing in interviews? If so, please provide details.
Yes, the in-person interview will involve some coding questions to test knowledge of Scala/Spark and of how to operate in a Hadoop environment. This will be done on a whiteboard or paper; exact syntax is not a must, but the candidate must demonstrate knowledge of the tools and skill in programming.
What types of projects will this candidate be working on?
Building workflows to bring data into Hadoop from other databases, files, or streaming sources; building efficient queries in Hadoop (Impala and Hive); and understanding and supporting existing workflows to ensure proper operation (see the sketch below).
Typical hours worked?
9am to 5pm (flexible), 37.5 hours per week.
Why has this position arisen? Backfill?
The position is to backfill a contractor who left the company. We have open projects that require completion.
Any potential to hire full time?
Yes, there is always potential for an exceptional candidate to become full time (pending approvals).
Flex hours, possible to work remotely?
Presence in the office is requested, with remote work possible on occasion if needed.
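As a rough illustration of the "efficient queries in Hadoop (Impala and Hive)" project work mentioned above, the following sketch runs a HiveQL-style aggregation through Spark SQL. It reuses the same hypothetical table and column names as the previous sketch and is not taken from the client's actual environment.

```scala
// Illustrative sketch only: a HiveQL-style aggregation run through Spark SQL.
// Table and column names are the same hypothetical placeholders as above.
import org.apache.spark.sql.SparkSession

object DailyCounts {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("daily-counts")
      .enableHiveSupport()
      .getOrCreate()

    // Filtering on the partition column (event_date) allows partition pruning,
    // so the engine reads only the relevant partitions instead of the whole table.
    val daily = spark.sql(
      """
        |SELECT event_date, event_type, COUNT(*) AS events
        |FROM analytics.events
        |WHERE event_date >= '2019-01-01'
        |GROUP BY event_date, event_type
      """.stripMargin)

    daily.show(20)
    spark.stop()
  }
}
```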
Candidates must be located in Canada and hold a valid work permit to be able to apply for this role.
Please reply to: