job summary:
- We are looking for a Scala developer with Apache Hadoop/Spark/Flink experience to assist with the development of the regulatory reporting processing engine, built with Scala and Big Data technologies, which provisions regulatory reporting data via domain-specific language (DSL) scripts. The role requires close partnership with the NFRR Program analysis and development teams.
- Financial regulation has increased dramatically over the past few years. This role works on the Global Financial and Markets Non-Financial Regulatory Reporting (NFRR) Data Delivery program, a new initiative to define and implement consistent and efficient regulatory reporting processes that adhere to enterprise standards, simplify controls, and enable re-use.
- As part of this initiative, we are developing a Data Processing Framework using Scala and Big Data technologies for authoring domain-specific language (DSL) components that seamlessly combine data from multiple data sources. The DSL components interpret transformation rules written in a configuration-style syntax that can be applied to one or more standard data sets to produce a transformed data set. This framework provides a unified language for describing the data needs of a report.
- Enables users to seamlessly retrieve and combine data from multiple sources
- Enables users to author reports without having to worry about the mechanics of actually retrieving, filtering, projecting, or aggregating the data
- Ensures proper versioning of the report definitions and the running of the reports as they existed at specific points in time
- Ensures that the system is scalable enough to run hundreds of reports in parallel, if required
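To illustrate the kind of framework described above, here is a minimal sketch of a configuration-style rule interpreter in Scala. All names (Rule, Filter, Project, Aggregate, interpret) are hypothetical and for illustration only; they are not the actual NFRR framework API, which interprets richer transformation rules over standard data sets.

```scala
// A row from a "standard data set" (illustrative representation)
type Row = Map[String, Any]

// Transformation rules expressed in a configuration-style form (hypothetical)
sealed trait Rule
case class Filter(column: String, value: Any) extends Rule
case class Project(columns: Seq[String]) extends Rule
case class Aggregate(groupBy: String, sumColumn: String) extends Rule

// Interpreter: applies each rule in order to the data set
def interpret(rules: Seq[Rule], data: Seq[Row]): Seq[Row] =
  rules.foldLeft(data) { (rows, rule) =>
    rule match {
      case Filter(col, v) => rows.filter(_.get(col).contains(v))
      case Project(cols)  => rows.map(_.filter { case (k, _) => cols.contains(k) })
      case Aggregate(g, s) =>
        rows.groupBy(_(g)).map { case (key, grp) =>
          Map(g -> key, s -> grp.map(_(s).asInstanceOf[Int]).sum)
        }.toSeq
    }
  }

// Example: filter FX trades, then sum notionals per desk
val trades: Seq[Row] = Seq(
  Map("desk" -> "FX", "notional" -> 100),
  Map("desk" -> "FX", "notional" -> 250),
  Map("desk" -> "Rates", "notional" -> 400)
)
val report = interpret(Seq(Filter("desk", "FX"), Aggregate("desk", "notional")), trades)
// report: Seq(Map("desk" -> "FX", "notional" -> 350))
```

Because the rules are plain data, they can be authored, versioned, and re-run exactly as they existed at a point in time, which is the property the bullets above call for.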
location: New York, New York
job type: Contract
salary: $67 - 77 per hour
work hours: 9am to 5pm
- The candidate will work directly with the Director of the Data Processing Engine and will participate in all phases of development of the platform and subsequent reports.
- Develop technical specification and component level design for DSL Extensions for input and output
- Design parameterized and configurable modules for DSL Extensions
- Develop DSL for the data outputs needed for NFRR Reports
- Integrate with the code repository and manage source control
- The candidate must be a self-starter, able to work in a fast-paced and results-driven environment with minimal oversight. The candidate is required to have excellent communication skills and possess a strong sense of accountability and responsibility.
- 5+ years development experience
- Good general Scala programming skills
- Experience with Hadoop Distributed File System (HDFS), HBase and Hive
- Experience with custom aggregations within Spark/Flink, preferred
- Experience with databases, a plus
- Experience on regulatory or reporting projects, preferred
- Ability to perform detailed and complex data analysis
- Attention to detail and ability to work independently
- Ability to handle tight deadlines and competing demands in a fast-paced environment
- Knowledge of Global Financial and Markets products/asset classes and associated data including fixed income, equities, derivatives, and foreign exchange securities, preferred
- Scala developer with Apache Hadoop/Spark/Flink experience
- 5-7 years
Equal Opportunity Employer: Race, Color, Religion, Sex, Sexual Orientation, Gender Identity, National Origin, Age, Genetic Information, Disability, Protected Veteran Status, or any other legally protected group status.