Data Engineer - Kafka/Spark in Richmond, VA at Vaco

Date Posted: 12/5/2018

Job Description

Data experts: are you ready to grow your career by applying your data expertise to more complex projects with some of the nation's top companies? Let Vaco serve as your advocate in presenting you to clients who are looking for Data Engineers. Our recruiting staff gives you an advantage over your competition by promoting your strengths and assets directly to hiring managers while helping you prepare for your interviews.

From keeping you up to date on market trends and industry expectations to providing you with valuable insight into the company culture, compensation expectations, and growth opportunities of specific clients, Vaco will give you the edge you need in today's highly competitive job marketplace. If you are passionate about data architecture and are ready for a rewarding new challenge, let Vaco help you to make it happen. Apply today!

***NO THIRD PARTIES - U.S. Citizens and those authorized to work in the U.S. are encouraged to apply. We are unable to sponsor at this time***

We are looking for a Data Engineer to join our scrum teams and perform development and coding on Kafka and Java/Spring applications.

Job Duties:

  • Develop code; automate functional, system, and integration test plans and test scripts; and report defects and testing results
  • Participate in the agile development process
  • Document and communicate issues and bugs related to data standards
  • Create and maintain an integration and regression testing framework
  • Develop and review technical documentation for delivered artifacts
  • Pair with experienced data engineers to develop cutting-edge analytic applications leveraging Big Data technologies: Hadoop, NoSQL, and in-memory data grids

Qualifications:

  • Bachelor's degree in a quantitative field (such as Engineering, Computer Science, Statistics, Econometrics) and a minimum of 2 years of experience
  • Minimum 2 years of experience deploying BI and analytics solutions using Big Data technologies (such as MapReduce, Kafka, and HBase) in complex, large-scale environments
  • Minimum 1 year of experience in at least 3 of the following: Hive, HBase, MapReduce, Kafka, Spark, Java
  • Hands-on experience writing complex SQL queries and extracting and importing large amounts of data