Senior Engineer - IT Analytics - BIG DATA

  • location: Plano, TX
  • type: Contract

job description


job summary:
Senior Engineer - IT Analytics (Big Data)

Contract-to-hire: candidates must be eligible to convert to full-time employment (FTE).


Project Scope:

Build a data quality check framework to validate data within the Hadoop ecosystem and between Hadoop and external systems

Build a framework to create new Hive datasets from data landed in HDFS

Implement new platform enhancements, such as AtScale
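For illustration only, a minimal sketch of the kind of checks such a data quality framework might perform: reconciling record counts, null key rates, and key coverage between a source extract and its Hadoop-side copy. The function name, fields, and datasets here are hypothetical; a production framework would run equivalent checks via Spark or Hive rather than over in-memory lists.

```python
# Hypothetical data quality check: compare a source extract with its
# target-side copy (both represented here as lists of dicts).
def quality_report(source_rows, target_rows, key_field):
    """Reconcile two extracts and flag common discrepancies."""
    report = {
        "source_count": len(source_rows),
        "target_count": len(target_rows),
        "count_match": len(source_rows) == len(target_rows),
    }
    # Null-rate check on the key field in the target copy.
    report["target_null_keys"] = sum(
        1 for r in target_rows if r.get(key_field) is None
    )
    # Key reconciliation: keys present in the source but missing downstream.
    source_keys = {r[key_field] for r in source_rows if r.get(key_field) is not None}
    target_keys = {r[key_field] for r in target_rows if r.get(key_field) is not None}
    report["missing_in_target"] = sorted(source_keys - target_keys)
    return report

# Toy example: row 2 lost its key in transit.
src = [{"id": 1}, {"id": 2}, {"id": 3}]
tgt = [{"id": 1}, {"id": None}, {"id": 3}]
print(quality_report(src, tgt, "id"))
```

The same count/null/key checks apply both within Hadoop (e.g., HDFS file vs. Hive table) and between Hadoop and external systems, which is why a reusable framework pays off.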

Required skills:

- 8+ years of professional experience

- 3+ years of experience with big data technologies and analytics

- 3+ years of experience in ETL and ELT data modeling

- Experience working with traditional data warehouses and mapping their models into a Hive warehouse on big data platforms

- Experience setting data modeling standards in Hive

- Experience developing automated methods for ingesting large datasets into an enterprise-scale analytical system using Sqoop, Spark, and Kafka

- Experience with streaming stacks such as NiFi and PySpark

- Understanding of big data tools (e.g., NoSQL databases, Hadoop, HBase) and API development and consumption

- Experience in one or more programming languages (e.g., Python, Java, Groovy)

Additional skills:

- Proficiency in query languages such as SQL and HiveQL

- Understanding of data preparation and manipulation using the Datameer tool

- Knowledge of SOA, IaaS, and cloud computing technologies, particularly in the AWS environment

- Knowledge of setting standards for the data dictionary and tagging data assets within the data lake for business consumption

- Experience with data visualization tools such as Tableau

- Ability to identify technical implementation options and issues

- Ability to partner and communicate cross-functionally across the enterprise

- Ability to explain technical issues to senior leaders in clear, non-technical terms

- Ability to foster the continuous evolution of best practices within the development team to ensure data standardization and consistency

- Experience with agile software development methodologies (e.g., Scrum, Kanban)

- Strong written and verbal communication skills
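To make the data dictionary and tagging requirement above concrete, here is a hedged, illustrative sketch (not any specific catalog product's API) of a toy registry that records each data lake asset's steward and business-facing tags so consumers can discover datasets. All asset names and tags are invented for the example.

```python
# Toy data dictionary: register data lake assets with an owner and
# business tags, then look assets up by tag.
class DataDictionary:
    def __init__(self):
        self.assets = {}

    def register(self, name, owner, tags):
        """Record an asset with its steward and business-facing tags."""
        self.assets[name] = {"owner": owner, "tags": set(tags)}

    def find_by_tag(self, tag):
        """Return the sorted names of assets carrying the given tag."""
        return sorted(n for n, meta in self.assets.items() if tag in meta["tags"])

catalog = DataDictionary()
catalog.register("sales.orders_daily", "analytics-team", ["pii-free", "finance"])
catalog.register("crm.customers", "crm-team", ["pii", "finance"])
print(catalog.find_by_tag("finance"))  # both assets carry the finance tag
```

Setting a standard means agreeing on the required metadata fields (owner, tags, and typically lineage and refresh cadence) before assets land in the lake, so discovery works consistently across teams.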

 
location: Plano, Texas
job type: Contract
work hours: 8am to 5pm
education: Bachelor's degree
 


Equal Opportunity Employer: Race, Color, Religion, Sex, Sexual Orientation, Gender Identity, National Origin, Age, Genetic Information, Disability, Protected Veteran Status, or any other legally protected group status.
