Discover

Senior Data Engineer (ETL)

Job Description

Discover. A brighter future.

With us, you’ll do meaningful work from Day 1. Our collaborative culture is built on three core behaviors: We Play to Win, We Get Better Every Day & We Succeed Together. And we mean it — we want you to grow and make a difference at one of the world's leading digital banking and payments companies. We value what makes you unique so that you have an opportunity to shine.

Come build your future, while being the reason millions of people find a brighter financial future with Discover.

This position requires excellent communication skills to understand the business vision and translate it into technical artifacts. A strong technical analysis and design background is also a must, to ensure that technical deliverables provide flexible, architecturally sound infrastructure that can be reused in future analytical and operational data stores in AWS.

Responsibilities:

  • Provide data analysis and develop data pipelines to meet all technical specifications and business requirements according to the established designs.
  • Provide subject matter expertise in the analysis and preparation of specifications and plans for the development or modification of data processes.
  • Develop data-driven solutions utilizing current and next generation technologies to meet evolving business needs.
  • Design, develop, test, and implement data-driven solutions to meet business requirements
  • Quickly identify opportunities and recommend possible technical solutions.
  • Develop application systems that comply with the standard system development methodology and concepts for design, programming, backup, and recovery to deliver solutions that have superior performance and integrity.
  • Contribute to determining programming approach, tools, and techniques that best meet the business requirements.
  • Understand and follow the PDP process to develop, deploy and deliver the solutions.
  • Be pro-active and diligent in identifying and communicating design and development issues.
  • Offer system support as part of a support rotation with other team members.
  • Utilize multiple development languages and tools, such as Python and Spark, to build prototypes and evaluate results for effectiveness and feasibility.
  • Operationalize open source data-analytic tools for enterprise use.
  • Develop real-time data ingestion and stream-analytic solutions leveraging technologies such as Kafka, Apache Spark, Python, and AWS-based solutions.
  • Develop custom data pipelines (cloud and locally hosted).
  • Work heavily within the cloud ecosystem and migrate data from Teradata to an AWS-based platform.
  • Provide support for deployed data applications and analytical models, acting as a trusted advisor to Data Scientists and other data consumers by identifying data problems and guiding issue resolution with partner Data Engineers and source data providers.
  • Optimize the performance of ETL processes and scripts, working with other technical staff as needed.
  • Ensure proper data governance policies are followed by implementing or validating data lineage, quality checks, classification, etc.
  • Design and develop data ingestion frameworks, real-time processing solutions, and data processing/transformation frameworks leveraging open source tools.
  • Deploy application code and analytical models using CI/CD tools and techniques, and provide support for deployed data applications and analytical models.
  • Provide senior-level technical consulting to peer data engineers during design and development of highly complex and critical data projects.
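To make the quality-check and data-governance responsibilities above concrete, here is a purely illustrative Python sketch of a row-level validation step an ETL pipeline might run before loading data. All names (`validate_rows`, the field names) are hypothetical and do not reflect Discover's actual tooling or process.

```python
# Hypothetical example: reject rows that are missing required, non-empty fields
# before they reach the warehouse load step.

def validate_rows(rows, required_fields):
    """Split rows into (valid, rejected); rejected entries note missing fields."""
    valid, rejected = [], []
    for row in rows:
        missing = [f for f in required_fields if not row.get(f)]
        if missing:
            rejected.append({"row": row, "missing": missing})
        else:
            valid.append(row)
    return valid, rejected

# Two sample rows; the second is missing an account_id.
rows = [
    {"account_id": "A1", "amount": 10.0},
    {"account_id": "", "amount": 5.0},
]
valid, rejected = validate_rows(rows, ["account_id", "amount"])
```

In a real pipeline the rejected rows would typically be written to a quarantine location and surfaced through lineage/quality dashboards rather than silently dropped.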

Minimum Qualifications

At a minimum, here’s what we need from you:

  • Bachelor's degree in Information Technology or a related field
  • 2+ years of work experience in Data Platform Administration/Engineering or a related field

Desired Qualifications

If we had our say, we’d also look for:

  • 4+ years of work experience in Data Platform Administration/Engineering or a related field
  • Very strong verbal & written communication skills
  • Hands-on experience with Amazon Web Services (AWS) based solutions such as Lambda, DynamoDB, Snowflake, and S3
  • Knowledge of Data Warehouse technology (Unix/Teradata/Ab Initio/Python/Spark/Snowflake/NoSQL)
  • Experience migrating ETL processes (not just data) from relational warehouse databases to AWS-based solutions
  • Experience in building & utilizing tools and frameworks within the Big Data ecosystem including Kafka, Spark, and NoSQL.
  • Deep knowledge of and strong skills in SQL and relational databases
  • Willingness to continuously learn & share learnings with others
  • Ability to work in a fast-paced, rapidly changing environment
  • Strong attention to detail
  • Experience within the Financial industry

#Remote

What are you waiting for? Apply today!

The same way we treat our employees is how we treat all applicants – with respect. Discover Financial Services is an equal opportunity employer (EEO is the law). We thrive on diversity & inclusion. You will be treated fairly throughout our recruiting process and without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or veteran status in consideration for a career at Discover.

Job Summary

Location: Riverwoods, IL
Job Type: Full Time
Company: Discover
