

Big Data Engineer

Employer
Pro Staff
Location
Bellevue, WA
Start date
Nov 18, 2019
Closing date
Dec 18, 2019


Category
Other
Job Type
Employee
Employment Status
Full Time

Job Details

Atterro Workforce Solutions offers this exciting contract opportunity at a global leader in electronics, mobile devices and appliances located in Bellevue, WA. 


Big Data Engineer – Strategic Analytics

In today’s fast-evolving technology world, one aspect remains common: reliance on data to drive the next wave of innovation. Strategic Analytics is the company’s Center of Excellence for driving the adoption of data-driven decision making and product development across the company. The team’s core focus is developing best-in-class solutions that provide the company’s marketing and service organizations with a 360-degree view of the company’s customers.
Strategic Analytics is powering a paradigm shift at the company and across the global industry. We are looking for highly technical team members who are passionate about data, have the rigor needed to solve billion-dollar problems, and possess an innate entrepreneurial spirit to explore the uncharted. Strategic Analytics combines the engineering backbone of a best-in-class Big Data Platform with the analytic expertise of advanced mining and predictive modeling.

Position Summary
Big Data Engineers serve as the backbone of the Strategic Analytics organization, ensuring both the reliability and the applicability of the team’s data products across the entire organization. They have extensive experience with ETL design, coding, and testing patterns, as well as with engineering software platforms and large-scale data infrastructures. Big Data Engineers can architect highly scalable end-to-end pipelines using a range of open source tools, including building and operationalizing high-performance algorithms.

Big Data Engineers understand how to apply technologies to solve big data problems, with expert knowledge of languages and platforms such as Java, Python, PHP, Hive, Impala, and Spark on Linux. Extensive experience working with both 1) big data platforms and 2) real-time / streaming delivery of data is essential.

Big Data Engineers implement complex big data projects with a focus on collecting, parsing, managing, analyzing, and visualizing large sets of data to turn information into actionable deliverables across customer-facing platforms. They have a strong aptitude for deciding on the needed hardware and software design and can guide the development of such designs through both proofs of concept and complete implementations.
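
To make the parsing-and-aggregation side of that work concrete, here is a minimal Hadoop Streaming mapper/reducer pair in Python of the kind referenced in the skills list further down. The tab-separated input layout and the device_model field are hypothetical placeholders, not the client's actual schema.

mapper.py:

    #!/usr/bin/env python3
    # mapper.py - emit (device_model, 1) for each raw event line (illustrative schema)
    import sys

    for line in sys.stdin:
        fields = line.rstrip("\n").split("\t")
        if len(fields) < 3:          # skip malformed records
            continue
        print(f"{fields[2]}\t1")     # fields[2] assumed to hold device_model

reducer.py:

    #!/usr/bin/env python3
    # reducer.py - sum the counts per key from the sorted mapper output
    import sys

    current_key, count = None, 0
    for line in sys.stdin:
        key, value = line.rstrip("\n").split("\t")
        if key != current_key:
            if current_key is not None:
                print(f"{current_key}\t{count}")
            current_key, count = key, 0
        count += int(value)
    if current_key is not None:
        print(f"{current_key}\t{count}")

Both scripts run under the standard hadoop-streaming jar (hadoop jar hadoop-streaming*.jar -files mapper.py,reducer.py -mapper mapper.py -reducer reducer.py -input <raw events> -output <counts>), and the same pair can be smoke-tested locally with: cat sample.tsv | ./mapper.py | sort | ./reducer.py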

Additional qualifications should include:
• Tuning Hadoop solutions to improve performance and the end-user experience
• Proficiency in designing efficient and robust data workflows
• Documenting requirements and resolving conflicts or ambiguities
• Experience working in teams and collaborating with others to clarify requirements
• Strong coordination and project management skills for handling complex projects
• Excellent oral and written communication skills

Big Data Engineer Job Responsibilities:
• Translate complex functional and technical requirements into detailed designs
• Design for both current and future success
• Hadoop technical development and implementation
• Loading from disparate data sets by leveraging big data technologies such as Kafka
• Pre-processing using Hive, Impala, Spark, and Pig (a brief sketch follows this list)
• Design and implement data models
• Maintain security and data privacy in an environment secured using Kerberos and LDAP
• High-speed querying using in-memory technologies such as Spark
• Following and contributing to best engineering practices for source control, release management, deployment, etc.
• Production support, job scheduling/monitoring, ETL data quality, and data freshness reporting
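
As a rough illustration of the Hive/Spark pre-processing and in-memory querying items above, a minimal PySpark job might look like the following. The raw_events table and its columns are hypothetical placeholders rather than the client's real schema.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Hive-enabled session so existing Hive tables are visible to Spark SQL
    spark = (SparkSession.builder
             .appName("customer-events-preprocess")
             .enableHiveSupport()
             .getOrCreate())

    events = spark.table("raw_events")          # hypothetical source table

    # Clean and aggregate: drop records without an event type, then count
    # events per customer per day for downstream reporting
    daily = (events
             .filter(F.col("event_type").isNotNull())
             .withColumn("event_date", F.to_date("event_ts"))
             .groupBy("customer_id", "event_date")
             .agg(F.count("*").alias("event_count")))

    # Persist the cleaned aggregate for analytics and BI consumers
    daily.write.mode("overwrite").saveAsTable("analytics.daily_customer_events")

Because the aggregation runs in Spark's in-memory engine, the same DataFrame can also back ad hoc high-speed queries before it is written out.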

Skills Required:
• 5+ years of Python development experience
• 3+ years of demonstrated technical proficiency with Hadoop and big data projects
• 5-8 years of demonstrated experience and success in data modeling
• Fluency in writing shell scripts [bash, korn]
• Writing high-performance, reliable, and maintainable code
• Ability to write MapReduce jobs
• Ability to set up, maintain, and implement Kafka topics and processes (see the sketch after this list)
• Understanding and implementation of Flume processes
• Good knowledge of database structures, theories, principles, and practices
• Understanding of how to develop code in an environment secured using a local KDC and OpenLDAP
• Familiarity with and implementation knowledge of loading data using Sqoop
• Knowledge and ability to implement workflows/schedulers within Oozie
• Experience working with AWS components [EC2, S3, SNS, SQS]
• Analytical and problem-solving skills applied to the Big Data domain
• Proven understanding of and hands-on experience with Hadoop, Hive, Pig, Impala, and Spark
• Good aptitude for multi-threading and concurrency concepts
• B.S. or M.S. in Computer Science or Engineering
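
For the Kafka item above, a bare-bones producer/consumer pair is sketched below, assuming the open-source kafka-python client; the broker address, topic name, and event fields are placeholders.

    import json
    from kafka import KafkaProducer, KafkaConsumer

    BROKERS = ["broker1:9092"]      # placeholder broker list
    TOPIC = "customer-events"       # hypothetical topic name

    # Producer: publish one JSON-encoded event to the topic
    producer = KafkaProducer(
        bootstrap_servers=BROKERS,
        value_serializer=lambda v: json.dumps(v).encode("utf-8"))
    producer.send(TOPIC, {"customer_id": 123, "event_type": "app_open"})
    producer.flush()

    # Consumer: read events back as part of a named consumer group
    consumer = KafkaConsumer(
        TOPIC,
        bootstrap_servers=BROKERS,
        group_id="strategic-analytics-etl",
        auto_offset_reset="earliest",
        value_deserializer=lambda b: json.loads(b.decode("utf-8")))
    for message in consumer:
        print(message.value)        # hand off to downstream ETL here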

 

Atterro talent working with this client receives competitive compensation and a great benefits package, including medical, dental, vision, 401K, and Paid Time Off, plus more!

Company

Administrative & Light Industrial

As in any job, the key to success is knowing what your strengths are and then finding the opportunity to use them. Pro Staff allows you to utilize and develop your skills through exciting projects and temporary assignments with great companies in your area. And because we take care of all the work involved in finding the right opportunities, you are free to let your talent shine.

Company info
Location
Minnesota
US
