Big Data Developer in Richmond, VA at Vaco

Date Posted: 8/3/2018

Job Description

Top Skills: Spark, Java, and ETL data pipelines with big data on AWS
Secondary: Python. Spark is preferred, but candidates with Kafka experience would also be considered.

Soft skills are important; the candidate must be willing to ask questions, though the role is heads-down coding most of the time.
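For illustration, a minimal sketch (in Scala, which appears in the stack below) of the kind of Spark ETL job these skills describe: extract raw data from S3, transform it, and load the result back for analytical use. The bucket paths, field names, and the OrderEtl object are hypothetical, not taken from the posting.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object OrderEtl {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("order-etl")
          .getOrCreate()

        // Extract: raw JSON order events landed in S3 (paths are hypothetical).
        val raw = spark.read.json("s3a://example-raw-bucket/orders/2018/08/")

        // Transform: keep completed orders and aggregate revenue per customer.
        val revenue = raw
          .filter(col("status") === "COMPLETED")
          .groupBy(col("customer_id"))
          .agg(sum(col("amount")).as("total_revenue"))

        // Load: write back to S3 as Parquet for downstream analytical queries.
        revenue.write.mode("overwrite")
          .parquet("s3a://example-curated-bucket/customer_revenue/")

        spark.stop()
      }
    }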

Description:

Build data APIs and data delivery services that support critical operational and analytical applications for our internal business operations, customers and partners
Build data pipeline frameworks to automate high-volume and real-time data delivery (see the streaming sketch after the experience requirements below)
Transform complex analytical models into scalable, production-ready solutions
Continuously integrate and ship code into our cloud Production environments
Develop applications from the ground up using a modern technology stack such as Scala, Spark, Python, Postgres, AngularJS, and NoSQL
Work directly with Product Owners and customers to deliver data products in a collaborative and agile environment
2+ years of experience working with leading big data technologies like Cassandra, Accumulo, HBase, Spark, Hadoop, HDFS, AVRO, MongoDB, or Zookeeper
2+ years of experience with Agile engineering practices
2+ years of experience with NoSQL implementations (MongoDB, Cassandra, etc. a plus)
2+ years of experience developing Java-based software solutions
2+ years of experience in at least one scripting language (Python, Perl, JavaScript, Shell)
2+ years of experience developing software solutions to solve complex business problems
2+ years of experience with Relational Database Systems and SQL
2+ years of experience designing, developing, and implementing ETL
2+ years of experience with UNIX/Linux including basic commands and shell scripting
1+ years of experience with AWS
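As a companion to the batch sketch above, here is a minimal Spark Structured Streaming sketch of the real-time delivery duty mentioned in the description, assuming events arrive on a Kafka topic and that the spark-sql-kafka connector package is on the classpath. The broker address, topic, and bucket paths are hypothetical.

    import org.apache.spark.sql.SparkSession

    object ClickstreamDelivery {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("clickstream-delivery")
          .getOrCreate()

        // Read a Kafka topic as a streaming DataFrame (broker/topic are hypothetical).
        val events = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")
          .option("subscribe", "clickstream")
          .load()
          .selectExpr("CAST(value AS STRING) AS json")

        // Continuously append events to S3 as Parquet; the checkpoint location
        // lets Spark recover the stream and avoid duplicate output files.
        val query = events.writeStream
          .format("parquet")
          .option("path", "s3a://example-stream-bucket/clickstream/")
          .option("checkpointLocation", "s3a://example-stream-bucket/_checkpoints/clickstream/")
          .start()

        query.awaitTermination()
      }
    }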


Job Requirements

Spark, Java, Python, and ETL data pipelines with big data on AWS