Sr. Data Engineer
Randstad Technologies is seeking a skilled Sr. Data Engineer for a full-time role in downtown Portland, Oregon. If you are ready to join a leader in the FinTech space, please apply, and Randstad will be more than happy to help you land your next role. We look forward to speaking with you!
location: Portland, Oregon
job type: Permanent
salary: $145,000 - $160,000 per year
work hours: 8am to 4pm
responsibilities: 4/14 - manager call notes:
Had a meeting with Manu to discuss the Data Engineering role.
- The current team has 3 Data Engineering contractors, who work with DevOps and some data science engineers as well.
- The contractors all have good programming skills with Python, as well as some Hadoop experience, but they are mostly light on AWS, Kafka, and Spark. That's why it's mandatory that this lead have experience with all three.
- Looking for their 1st FTE hire for the Data team. This person will be the tech lead, mostly helping with engineering support, coaching, teaching about the tools, etc. They don't need to have actual management experience, as they will not be responsible for that type of coaching, raises, performance reviews, etc.
- This group will be focused on:
- creating and maintaining a data lake on AWS
-also building out a streaming platform
- This is one of the main draws of this team. They are using new, cutting-edge technology and are open to integrating new tools if the person joining has used something they feel will benefit the company. They will be working on live streaming of data within a massive data lake that continues to grow.
- Future goals/initiatives: they will be rolling out this new data lake, and soon will have an offering of Azlo credit cards for their customers. This is going to cause huge growth in the data lake as more and more transactions and data are compiled daily.
- Basically going from "big data" to "fast data".
- Tech stack:
- Candidates need to have Hadoop experience (at least 3 years); Spark is a must; the ideal candidate will have Kafka experience.
- They're working in NoSQL databases.
- Near-real-time streaming, so anyone with AWS Kinesis streaming experience or another streaming tool will be a plus.
- Almost all programming is done in Python, and it would be a plus if they have Java or Scala experience.
- Using Apache Airflow to scale pipelines.
- Need some sort of cloud experience: AWS, Azure, GCP...
- Having lead experience is not a must; Manu would prefer to see someone with a good, solid 5+ years of experience with these technologies more than the leadership aspect.
Interviews:
4 parts: 1) phone screen with Paige (recruiter); 2) phone screen with HM Manu and Eve (Sr. Data Science Engineer); 3) tech panel with a product owner, a data engineer, a data scientist, and possibly some other members; and finally 4) call with David (VP of Engineering). Willing to sponsor H1B if they have plenty of time left on their visa.
We're looking for a passionate Senior Data Engineer to join our growing Data & Analytics team.
This role will be responsible for evolving and optimizing our data pipelines.
What you'll do
- Lead the technical effort of a data engineering scrum team in the design of a Big Data infrastructure; collaborate and work closely with DevOps, utilizing state-of-the-art technologies and AWS.
- Build and optimize big data solutions that successfully communicate with a variety of complex environments.
- Ensure ETL work is well designed and scalable, and that the process provides real-time analytics capabilities.
- Monitor production tasks; ensure scheduled tasks are working properly.
- Establish best practices so that Data Science work is maintainable and scalable.
What we're looking for
- 5+ years of Data Engineering experience, working with different databases (both RDBMS and NoSQL), Big Data technologies, data integration, and data management.
- Prior experience with the Spark and Hadoop ecosystems (HDFS, Hive, Impala); ideally, familiarity with a distribution such as Cloudera (Hortonworks) or AWS EMR.
- Understanding of Agile methodologies and CI/CD tools such as Git, Jenkins, Sonar, and Jira.
- Prior experience with workflow management tools such as Airflow, Oozie, Luigi, or Azkaban.
- Prior experience with the AWS ecosystem: EMR, S3, Redshift, Lambdas, Glue, and Athena.
- Prior experience with Software Design Patterns and TDD.
- Proficiency in Python and/or Scala.
- Familiarity with the ORC, Parquet, and Avro data storage formats.
- Innovative mindset and a problem-solving proclivity.
- Strength in both written and verbal communication at all levels of an organization.
- An entrepreneurial attitude and the ability to work in a fast-paced, flexible environment on multiple concurrent projects with a distributed team.
Technologies we like and use
- Apache Spark, Flink, Airflow, and Hudi
- Databricks Delta Lake and MLflow
- Python, Scala, and R
- TensorFlow, scikit-learn, statsmodels, BigDL
- Tableau, Shiny, Streamlit
- Docker, Kubernetes, Git, AWS, MongoDB, Neo4j, Kafka Streams
- Microservice architecture, Pub/Sub, event-driven updates, functional programming
What we bring
- High impact role in an early-stage fintech company.
- A killer team with decades of experience in finance, tech, and startups.
- A mission to empower business owners, and a mandate to do away with the old models of banking.
- Backing from a leading global bank with resources to support our growth.
- Experience level: Experienced
- Education: Bachelor's
- Data Engineering
Equal Opportunity Employer: Race, Color, Religion, Sex, Sexual Orientation, Gender Identity, National Origin, Age, Genetic Information, Disability, Protected Veteran Status, or any other legally protected group status.