Title: Big Data Engineer (Core Java / Kafka)
Duration: 6 months
Location: New York, NY
Environment: Agile, bi-weekly releases; team interaction is important in this very dynamic environment.

SUMMARY OF DAY-TO-DAY RESPONSIBILITIES
Data Engineer - Develop Big Data infrastructure in core Java to build a Big Data stack that performs ETL and provides an advanced analytics platform for the Treasury Analytics Group. The developer will build custom modules using the Spark framework, HBase, Hadoop, Kafka, etc. to ingest data, process it, and perform advanced analytics to suit business needs. The developer will also work on core infrastructure that provides generic platform capabilities, such as rules engines for metadata-driven processing and Kafka stream processing to add real-time querying capabilities. The ideal candidate is strong at building Spark API frameworks and has solid working experience with HBase for big data and the overall Hadoop environment. Kafka will be implemented, so ideal candidates will be proficient with Kafka for upcoming phases of the program.

MUST HAVE
- Core Java background - 10+ years
- Java 8 (recent past) - 2+ years
- ETL of big data into a data service platform - 3+ years
- Hadoop technology - 3+ years
- HBase - 2+ years
- Building Spark API frameworks for custom modules - 2+ years
- Working with RDBMS (Oracle, SQL, etc.) - 5+ years
- Proficiency in Java-based rules engines (Drools, etc.)
- Agile environment experience - 2+ years
- Banking/Financial industry experience
- Kafka stream processing
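To illustrate the "rules engines for metadata-driven processing" responsibility mentioned above, here is a minimal sketch in plain Java 8 of the general pattern a Drools-style engine implements: rules are data (a name, a condition, an action) rather than hard-coded branches, and the engine fires every rule whose condition matches a record. All class and field names here (MiniRulesEngine, Rule, the "notional"/"review" fields) are hypothetical examples, not part of any actual platform described in the posting.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Hypothetical sketch of a metadata-driven rules engine over map-shaped records.
public class MiniRulesEngine {

    // A rule is pure data: a name, a matching condition, and an action.
    public static class Rule {
        final String name;
        final Predicate<Map<String, Object>> when;
        final Consumer<Map<String, Object>> then;

        public Rule(String name,
                    Predicate<Map<String, Object>> when,
                    Consumer<Map<String, Object>> then) {
            this.name = name;
            this.when = when;
            this.then = then;
        }
    }

    private final List<Rule> rules = new ArrayList<>();

    public MiniRulesEngine add(Rule rule) {
        rules.add(rule);
        return this;
    }

    // Apply every matching rule's action to the record; return the names fired.
    public List<String> fire(Map<String, Object> record) {
        List<String> fired = new ArrayList<>();
        for (Rule rule : rules) {
            if (rule.when.test(record)) {
                rule.then.accept(record);
                fired.add(rule.name);
            }
        }
        return fired;
    }

    public static void main(String[] args) {
        // Example rule: flag trades over a (made-up) notional threshold for review.
        MiniRulesEngine engine = new MiniRulesEngine()
            .add(new Rule("flag-large-trade",
                rec -> ((Number) rec.getOrDefault("notional", 0)).doubleValue() > 1_000_000,
                rec -> rec.put("review", true)));

        Map<String, Object> trade = new HashMap<>();
        trade.put("notional", 2_500_000);
        List<String> fired = engine.fire(trade);
        System.out.println(fired + " review=" + trade.get("review"));
        // prints "[flag-large-trade] review=true"
    }
}
```

In a production system of the kind the posting describes, the conditions and actions would typically be loaded from external metadata (a database table or config) rather than written as lambdas, which is what makes the processing "metadata-driven"; a full engine like Drools adds rule compilation, conflict resolution, and forward chaining on top of this basic match-and-fire loop.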