Sr. Hadoop Developer
Randstad Technologies is seeking a skilled Sr. Hadoop Developer for a contract assignment in Hillsboro, Oregon. If you are ready to join a leader in the Retail and Technology space, please apply, and Randstad will be happy to help you land your next role. We look forward to speaking with you!
location: Hillsboro, Oregon
job type: Contract
salary: $60 - $67 per hour
work hours: 8am to 4pm
We're embracing Big Data technologies to enable data-driven decisions, and we're looking to expand our Hadoop Engineering team to keep pace. As a Sr. Hadoop Developer, you will work with a variety of talented teammates and be a driving force in building solutions for our Digital teams. You will work on development projects related to consumer behavior, commerce, and web analytics.
- Design and implement distributed data processing pipelines using Spark, Hive, Sqoop, Python, and other tools and languages prevalent in the Hadoop ecosystem, with the ability to design and implement end-to-end solutions (see the pipeline sketch after this list).
- Build utilities, user-defined functions (UDFs), and frameworks to better enable data flow patterns.
- Research, evaluate, and utilize new technologies, tools, and frameworks centered around Hadoop and other elements in the Big Data space.
- Define and build data acquisition and consumption strategies.
- Build and incorporate automated unit tests and participate in integration testing efforts (see the test sketch after this list).
- Work with teams to resolve operational and performance issues.
- Work with architecture/engineering leads and other teams to ensure quality solutions are implemented and that engineering best practices are defined and adhered to.
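To give candidates a concrete feel for the day-to-day work, here is a minimal sketch of the kind of Spark pipeline and UDF described above. This is illustrative only; all job, table, and column names are hypothetical placeholders, not actual project code.

```python
# Minimal PySpark sketch of a data pipeline with a simple UDF.
# All job, table, and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType

spark = (
    SparkSession.builder
    .appName("session-channel-counts")  # hypothetical job name
    .enableHiveSupport()                # enables reading/writing Hive tables
    .getOrCreate()
)

# A small user-defined function of the kind this role builds and shares.
@F.udf(StringType())
def normalize_channel(raw):
    return (raw or "unknown").strip().lower()

# Read raw events from a Hive table, clean and aggregate them,
# then publish a curated table for downstream consumers.
events = spark.table("raw.web_events")  # hypothetical source table
curated = (
    events
    .filter(F.col("event_ts").isNotNull())
    .withColumn("channel", normalize_channel(F.col("channel")))
    .groupBy("session_id", "channel")
    .agg(F.count("*").alias("event_count"))
)
curated.write.mode("overwrite").saveAsTable("curated.session_channel_counts")
```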
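For the automated unit testing responsibility, the same transformations can be exercised against a local SparkSession with pytest. Again a minimal sketch: sessionize() and the sample data are hypothetical stand-ins.

```python
# Minimal pytest sketch for unit testing a Spark transformation locally.
# sessionize() and the sample data are hypothetical stand-ins.
import pytest
from pyspark.sql import SparkSession

@pytest.fixture(scope="session")
def spark():
    return (
        SparkSession.builder
        .master("local[1]")   # run Spark in-process for fast tests
        .appName("unit-tests")
        .getOrCreate()
    )

def sessionize(df):
    # Stand-in for the transformation under test.
    return df.dropDuplicates(["session_id"])

def test_sessionize_deduplicates(spark):
    df = spark.createDataFrame(
        [("s1", "search"), ("s1", "search"), ("s2", "pdp")],
        ["session_id", "page"],
    )
    assert sessionize(df).count() == 2
```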
Nice to have:
- MS/BS degree in a computer science field or related discipline
- 6+ years' experience in large-scale software development
- 3+ years' experience with Hadoop
- Strong programming skills in Java, Python, shell scripting, and SQL
- Strong development skills with Hadoop, Spark, MapReduce, and Hive
- Strong understanding of Hadoop internals
- Experience with messaging and complex event processing systems such as Kafka and NiFi
- Good understanding of file formats, including JSON, Parquet, and Avro (a short read/write sketch follows these lists)
- Experience with databases like Oracle
- Experience with performance/scalability tuning, algorithms and computational complexity
- Experience (at least familiarity) with data warehousing, dimensional modeling and ETL development
- Ability to understand ERDs and relational database schemas
- Proven ability to work with cross-functional teams to deliver appropriate resolutions
- Experience with AWS components and services, particularly EMR, S3, and Lambda
- Experience with NoSQL technologies such as HBase, Cassandra, and DynamoDB
- Experience with automated testing and Continuous Integration / Continuous Delivery
- Statistical analysis with Python, R, or similar
- Experience level: Experienced
- Education: Bachelor's
- Skills: Hadoop (3 years of experience is required)
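As a concrete illustration of the file-format experience called out above, here is a minimal Spark read/write sketch. The paths are hypothetical placeholders, and Avro support assumes the external spark-avro package is on the classpath.

```python
# Minimal sketch of reading and writing common Big Data file formats in Spark.
# All paths are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("format-demo").getOrCreate()

# JSON in: schema is inferred from the records.
events = spark.read.json("s3://example-bucket/raw/events/")

# Parquet out: columnar and compressed, the usual choice for curated data.
events.write.mode("overwrite").parquet("s3://example-bucket/curated/events/")

# Avro out: requires the external spark-avro package, e.g.
# --packages org.apache.spark:spark-avro_2.12:<spark-version>
events.write.format("avro").mode("overwrite").save("s3://example-bucket/avro/events/")
```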
Equal Opportunity Employer: Race, Color, Religion, Sex, Sexual Orientation, Gender Identity, National Origin, Age, Genetic Information, Disability, Protected Veteran Status, or any other legally protected group status.