Hadoop/Spark Developer

  • location: Charlotte, NC
  • type: Contract
  • salary: $60 - $65 per hour

job description

Hadoop/Spark Developer

job summary:
Design, develop, test, and deploy a Scala/Java/Spark data processing framework on Hadoop. The framework is used by application development teams to generate ETL pipelines that ingest, process/transform, and distribute data. Enable and support the framework using DevOps methodologies: continuous integration and continuous deployment.
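For context, below is a minimal sketch of the kind of Spark/Scala ETL step such a framework might generate; the table names (staging.customer_events, curated.customer_event_counts) and the transformation logic are illustrative assumptions, not part of this posting.

    import org.apache.spark.sql.{SparkSession, functions => F}

    object EtlPipelineSketch {
      def main(args: Array[String]): Unit = {
        // Spark session with Hive support so the job can read/write Hive tables on the cluster
        val spark = SparkSession.builder()
          .appName("etl-pipeline-sketch")
          .enableHiveSupport()
          .getOrCreate()

        // Ingest: read a hypothetical staging table
        val raw = spark.table("staging.customer_events")

        // Process/transform: drop incomplete rows and aggregate events per customer per day
        val curated = raw
          .filter(F.col("event_ts").isNotNull)
          .withColumn("event_date", F.to_date(F.col("event_ts")))
          .groupBy("customer_id", "event_date")
          .agg(F.count("*").alias("event_count"))

        // Distribute: publish the curated result for downstream consumers
        curated.write
          .mode("overwrite")
          .partitionBy("event_date")
          .saveAsTable("curated.customer_event_counts")

        spark.stop()
      }
    }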

Minimum Requirements:

- 5+ years of Java programming using frameworks such as Spring, Hibernate, etc.

- 3+ years of Scala and Apache Spark programming

- 5+ years of Hadoop development experience with Hive, Impala, Oozie, Sqoop, etc.

- 5+ years of Unix/Linux programming and shell scripting

- 5+ years of relational database (Teradata, SQL Server, Oracle/Exadata, etc.) and SQL development experience

- Experience in Agile and Lean Agile methodologies

- Experience with Eclipse/IntelliJ, JIRA, Git, Jenkins, Maven, and other CI/CD tools

- Experience in software design, build, test, and documentation

- Experience in ETL tools such as Informatica

- Experience in developing Java frameworks and software design patterns

- Ability to learn quickly and work independently

- Excellent verbal & written communication skills

- Excellent analytical and problem-solving skills

- Experience with machine learning technologies is preferred

 
location: Charlotte, North Carolina
job type: Contract
salary: $60 - $65 per hour
work hours: 9 to 5
education: Bachelor's
 
responsibilities:
Design, develop, test, and deploy a Scala/Java/Spark data processing framework on Hadoop. The framework is used by application development teams to generate ETL pipelines that ingest, process/transform, and distribute data. Enable and support the framework using DevOps methodologies: continuous integration and continuous deployment.

 

Equal Opportunity Employer: Race, Color, Religion, Sex, Sexual Orientation, Gender Identity, National Origin, Age, Genetic Information, Disability, Protected Veteran Status, or any other legally protected group status.
