Data Engineer - Metro Detroit - Full-Time - No 3rd Parties in Melvindale, MI at Vaco

Date Posted: 2/4/2020

Job Snapshot

Job Description

Data Engineer

* Senior Data Engineer with 7+ years of experience in Python, Spark, Hadoop, AWS Cloud, and SQL.

* Create and maintain optimal data pipeline architecture.

* Assemble large, complex data sets that meet functional / non-functional business requirements.

* Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.

* Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies.

* Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.

* Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.

* Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.

* Work with data and analytics experts to strive for greater functionality in our data systems.

* Advanced SQL knowledge and experience with relational databases, data modeling, and query authoring, as well as working familiarity with a variety of databases.

* Experience building and optimizing big data pipelines, architectures and data sets.

* Experience with object-oriented and functional scripting languages such as Python.

* Experience with AWS cloud services: EC2, EMR, RDS, and Redshift.
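The extract-transform-load work described above can be sketched in miniature with Python's standard library. This is an illustrative sketch only, not part of the role's codebase: `sqlite3` stands in for a warehouse such as Redshift, and the table and column names are invented for the example.

```python
import csv
import io
import sqlite3

# Extract: read raw records from a CSV source (an in-memory sample here;
# in practice this might come from S3 or an operational database).
raw = io.StringIO("order_id,amount\n1,19.99\n2,\n3,5.00\n")
rows = list(csv.DictReader(raw))

# Transform: drop rows with missing amounts and cast fields to proper types.
clean = [(int(r["order_id"]), float(r["amount"])) for r in rows if r["amount"]]

# Load: write the cleaned records into a relational store and query it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", clean)
total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
```

A production pipeline would replace each stage with the AWS services listed above (e.g., extraction from S3, transformation on EMR/Spark, loading into Redshift), but the extract-transform-load shape stays the same.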
