Big Data Architect

  • location: Broomfield, CO
  • type: Contract
  • salary: $60 - $65 per hour

job description

Big Data Architect

job summary:
Randstad Technologies is seeking a Big Data Architect!

Principal Duties and Responsibilities (Essential Functions):

- Work with cross-functional consulting teams within the data science and analytics team to design, develop, and execute solutions that derive business insights and solve clients' operational and strategic problems.

- Build the platform, which will be used by thousands of users, using cutting-edge capabilities and emerging technologies, including the data lake and the Cloudera data platform.

- Work in a Scrum-based Agile team environment using Hadoop.

- Install and configure the Hadoop and HDFS environment using the Cloudera data platform.

- Create ETL and data ingest jobs using MapReduce, Pig, or Hive.

- Work with and integrate multiple types of data, including unstructured, structured, and streaming.
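
As a rough illustration of the ETL and ingest work above, here is a minimal PySpark sketch that reads raw files from an HDFS landing zone, applies light cleanup, and persists the result as a Hive table; the paths, database, table, and column names are hypothetical placeholders, not details from this posting:

    from pyspark.sql import SparkSession

    # Minimal ingest sketch: raw landing-zone CSVs -> cleaned Hive table.
    # All paths and names below are hypothetical placeholders.
    spark = (SparkSession.builder
             .appName("orders-ingest")
             .enableHiveSupport()   # attach to the cluster's Hive metastore
             .getOrCreate())

    raw = (spark.read
           .option("header", "true")
           .csv("hdfs:///data/landing/orders/"))   # hypothetical landing path

    # Light cleanup: drop exact duplicates and rows missing the key column.
    cleaned = raw.dropDuplicates().na.drop(subset=["order_id"])

    # Persist as a managed Hive table so Hive/Impala users can query it.
    cleaned.write.mode("overwrite").saveAsTable("analytics.orders")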

Support the development of data science and analytics solutions and products that improve existing processes and decision-making.

Build internal capabilities to better serve clients and demonstrate thought leadership in the latest innovations in data science, big data, and advanced analytics.

Contribute to business and market development.

Specific skills and abilities:

- Strong computer science and programming background

- Deep experience in data modeling, including EDW, star, snowflake, and other schemas, and OLAP cubing technologies

- Ability to design and build data models and a semantic layer for accessing data sets

- Ability to own a complete functional area, from analysis and design through development and ongoing support

- Ability to translate high-level business requirements into detailed design

- Build integrations between data systems (RESTful APIs, micro-batch, streaming) using technologies such as SnapLogic (iPaaS), Spark SQL, HQL, Sqoop, Kafka, Pig, and Storm (a minimal streaming sketch follows this list)

- Hands-on experience working with the Cloudera Hadoop ecosystem and technologies

- Strong desire to learn a variety of technologies and processes with a "can do" attitude

- Experience guiding and mentoring 5-8 developers on various tasks

- Aptitude to identify, create, and use best practices and reusable elements

- Ability to solve practical problems and deal with a variety of concrete variables in situations where only limited standardization exists
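
As a rough illustration of the streaming-integration item above, here is a minimal PySpark Structured Streaming sketch that consumes a Kafka topic and lands micro-batches on HDFS as Parquet; the broker address, topic, and paths are hypothetical, and the spark-sql-kafka connector is assumed to be on the classpath:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    # Streaming-integration sketch: Kafka topic -> micro-batches -> Parquet on HDFS.
    # Broker, topic, and paths are hypothetical placeholders.
    spark = SparkSession.builder.appName("kafka-to-lake").getOrCreate()

    events = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "broker1:9092")  # hypothetical broker
              .option("subscribe", "clickstream")                 # hypothetical topic
              .load()
              .select(col("key").cast("string"),
                      col("value").cast("string")))

    query = (events.writeStream
             .format("parquet")
             .option("path", "hdfs:///data/lake/clickstream/")
             .option("checkpointLocation", "hdfs:///checkpoints/clickstream/")
             .trigger(processingTime="1 minute")   # micro-batch cadence
             .start())

    query.awaitTermination()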

Qualifications & Skills:

Bachelor's degree required; Master's degree preferred.

Expertise with HBase, NoSQL, HDFS, Java MapReduce for Solr indexing, data transformation, back-end programming, Java, JavaScript, Node.js, and OOAD.

Hands-on experience with Scala and Python.

7+ years of experience in programming and data engineering, with a minimum of 2 years of experience with Cloudera Hadoop.

 
location: Broomfield, Colorado
job type: Contract
salary: $60 - $65 per hour
work hours: 8 to 5
education: Bachelor's
 

Equal Opportunity Employer: Race, Color, Religion, Sex, Sexual Orientation, Gender Identity, National Origin, Age, Genetic Information, Disability, Protected Veteran Status, or any other legally protected group status.

related jobs

    Cloud Architect

  • location: Englewood, CO
  • job type: Contract
  • salary: $48 - $50 per hour
  • date posted: 9/12/2018

    Hadoop Developer

  • location: Englewood, CO
  • job type: Contract
  • salary: $62 - $72 per hour
  • date posted: 9/20/2018

    ERP Developer

  • location: Denver, CO
  • job type: Contract
  • salary: $70 - $75 per hour
  • date posted: 9/19/2018