Big Data Application Developer in Houston, TX at Vaco

Date Posted: 6/7/2018

Job Description

Vaco has a contract to hire opportunity in Houston, TX for a Big Data Application Developer.

One of the nation's largest providers of onshore contract drilling services has created a combined business and technology team focused on optimizing drilling processes and on designing and developing the next generation of applications, which will provide improved business functionality and enhanced analytics in support of drilling optimization.

The IT team is embedded with the business team to enhance collaboration, identify process and technology optimization opportunities, brainstorm how technology can be leveraged to improve business performance, and design, develop, and deliver enhanced business and technology solutions.

This position is a key contributor to the design and development of the foundational applications and the advanced analytics platform. It requires full SDLC experience and involvement in all aspects of an Agile-based application development methodology, including business requirements identification, data analysis, integration design, functional design, technical design, programming, testing, and implementation.

This position requires a very strong technical/programming foundation in full-stack Java technologies combined with current experience with "big data" programming technologies and application design.

Skills and Experience

  • Strong experience with Core Java, the Hadoop ecosystem, and at least one NoSQL database.
  • A minimum of three years of strong experience with Spark, Kafka, Scala, and HBase.

Technical/Functional Skills:

  • Core Java, multithreading, OOP concepts, and writing parsers in Core Java
  • Strong knowledge of the Hadoop ecosystem: Hive, Pig, and MapReduce
  • Cloud computing (AWS, Azure, etc.)
  • Strong in SQL, NoSQL, RDBMS, and data warehousing concepts
  • Writing complex MapReduce programs
  • Strong experience building data pipelines with Spark, Storm, or Kafka Streams
  • Designing efficient and robust ETL workflows
  • Performance optimization in a Big Data environment
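As a hedged illustration of the "Core Java, multithreading, writing parsers" skills listed above, the sketch below parses raw sensor lines in parallel with an ExecutorService. The "timestamp,sensor,value" record format and all class names are invented for this example and are not taken from the client's systems.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelRecordParser {

    // Simple immutable type for one parsed reading (hypothetical schema).
    record Reading(long timestamp, String sensor, double value) {}

    // Parse one "timestamp,sensor,value" line; throws on malformed input.
    static Reading parseLine(String line) {
        String[] parts = line.split(",");
        if (parts.length != 3) {
            throw new IllegalArgumentException("Malformed line: " + line);
        }
        return new Reading(Long.parseLong(parts[0].trim()),
                           parts[1].trim(),
                           Double.parseDouble(parts[2].trim()));
    }

    // Fan lines out to a fixed thread pool, then collect results in input order.
    static List<Reading> parseAll(List<String> lines, int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<Reading>> futures = new ArrayList<>();
            for (String line : lines) {
                futures.add(pool.submit(() -> parseLine(line)));
            }
            List<Reading> readings = new ArrayList<>();
            for (Future<Reading> f : futures) {
                readings.add(f.get());
            }
            return readings;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        List<String> raw = List.of(
                "1528329600, rpm, 118.5",
                "1528329601, torque, 9421.0");
        for (Reading r : parseAll(raw, 2)) {
            System.out.println(r.sensor() + "=" + r.value());
        }
    }
}
```

In production this kind of parsing would more likely live inside a Spark or Kafka Streams job, but the core concerns (defensive parsing, thread-pool fan-out, preserving input order) are the same.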

***No third-party resumes will be accepted; no C2C is allowed for this role. US citizens and those authorized to work in the US are encouraged to apply. Our client is unable to sponsor or transfer H-1B candidates at this time.***