Data Engineer II - Charlotte, North Carolina, United States - 43631



JOB DESCRIPTION

Job #: 43631
Title: Data Engineer II
Job Location: Charlotte, North Carolina - United States
Employment Type:
Salary: contact recruiter for details
Employer Will Recruit From: Local
Relocation Paid?: NO

WHY IS THIS A GREAT OPPORTUNITY?


A pioneer in the FinTech space: the bank has grown from $40B to $116B in assets in three years, through the COVID period.

Huge opportunity for upward mobility. 

Data Engineer II

For over 30 years, The Bank has helped innovative companies and their investors move bold ideas forward, fast. We provide targeted banking services to companies of all sizes in innovation centers around the world. The Data Engineering team is responsible for delivering data solutions that support all lines of business across the organization. This includes providing data integration services for all batch data movement, managing and enhancing the data warehouse, data lake, and data marts, and providing support for analytics and business intelligence customers.

Do you get excited when you see data? Are you constantly looking for value in data? If so, we are looking for you. As a Data Engineer, you will build, extend, and enhance our existing enterprise data warehouse. You will work closely with business teams and other application owners to understand the core functionality of banking, credit, risk, and finance applications and their associated data. You will build data pipelines, tools, and reports that enable analysts, product managers, and business executives. You will also apply your skills in big data technologies to design, implement, and build our enterprise data platform (EDP).

Responsibilities:

  • Design and build ETL jobs to support the enterprise data warehouse.
  • Write extract-transform-load (ETL) jobs using standard tools, and Spark/Hadoop jobs to calculate business metrics (see the sketch after this list).
  • Partner with business teams to understand requirements and their impact on existing systems, and design and implement new data provisioning pipelines for the finance / external reporting domains.
  • Design data schemas and operate internal data warehouses and SQL/NoSQL database systems; monitor and troubleshoot operational and data issues in the data pipelines.
  • Drive architectural plans and implementation for future data storage, reporting, and analytics solutions.
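
To give a flavor of the ETL work described above, here is a minimal PySpark sketch of a batch job that extracts raw data, derives a business metric, and loads it into a warehouse staging area. All paths, table names, columns, and the daily-volume metric are illustrative assumptions, not the bank's actual schema or platform.

    # Minimal PySpark ETL sketch. Every name below (paths, columns, the
    # daily-volume metric) is a hypothetical example, not the bank's schema.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("daily-metrics-etl").getOrCreate()

    # Extract: read raw transactions from the data lake
    txns = spark.read.parquet("s3://example-datalake/raw/transactions/")

    # Transform: compute a sample business metric (daily posted volume
    # per line of business)
    daily_volume = (
        txns.filter(F.col("status") == "POSTED")
            .groupBy(F.to_date("posted_at").alias("business_date"),
                     "line_of_business")
            .agg(F.count("*").alias("txn_count"),
                 F.sum("amount").alias("total_amount"))
    )

    # Load: append to a date-partitioned staging table in the warehouse
    (daily_volume.write
        .mode("append")
        .partitionBy("business_date")
        .parquet("s3://example-warehouse/staging/daily_volume/"))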

Qualifications:

  • Bachelor's degree in Computer Science, Mathematics, Statistics, Finance, related technical field, or equivalent work experience
  • 5+ years of relevant work experience in analytics, data engineering, business intelligence, or a related field, and 5+ years of professional experience overall
  • 2+ years of experience implementing big data processing technology: Hadoop, Apache Spark, etc.
  • Experience writing and optimizing SQL queries in a business environment with large-scale, complex datasets
  • Detailed knowledge of data warehouse technical architecture, infrastructure components, ETL, and reporting/analytics tools and environments
  • Hands-on experience with major ETL tools such as Ab Initio, Informatica/IICS, or BODS, and/or cloud-based ETL tools.
  • Hands-on experience with scheduling tools such as Control-M, Redwood, or Tidal.
  • Understanding of and experience with reporting tools such as Tableau, BOXI, etc.
  • Understanding of database and data warehouse concepts.
  • Hands-on experience with major databases such as Oracle, SQL Server, and Postgres.
  • Hands-on experience with cloud technologies (AWS, Google Cloud, Azure), including data ingestion tools (both real-time and batch), CI/CD processes, cloud architecture, and big data implementation.
  • AWS certification is a plus, as is working knowledge of Glue, S3, Athena, and Redshift (see the sketch after this list).
  • Coding proficiency in at least one modern programming language (Python, Ruby, Java, etc.) is a plus
  • Experience in the banking domain is a plus
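
As an illustration of the cloud qualifications above, here is a minimal boto3 sketch that runs an Athena query over S3-resident data and polls for completion. The region, database, table, and bucket names are invented for the example.

    # Minimal boto3/Athena sketch. Region, database, table, and bucket
    # names are invented; Athena executes queries asynchronously.
    import time
    import boto3

    athena = boto3.client("athena", region_name="us-east-1")

    qid = athena.start_query_execution(
        QueryString=(
            "SELECT line_of_business, SUM(amount) AS total_amount "
            "FROM daily_volume "
            "WHERE business_date = DATE '2024-01-31' "
            "GROUP BY line_of_business"
        ),
        QueryExecutionContext={"Database": "example_warehouse"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )["QueryExecutionId"]

    # Poll until the query reaches a terminal state
    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)[
            "QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    if state == "SUCCEEDED":
        result = athena.get_query_results(QueryExecutionId=qid)
        for row in result["ResultSet"]["Rows"]:
            print([col.get("VarCharValue") for col in row["Data"]])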


Education:
University - Bachelor's Degree/3-4 Year Degree

APPLY NOW FOR THIS JOB

Our recruiters are currently seeking to fill this position and hundreds like it in our network. If you are a match, you'll be contacted with additional details.

We value your privacy and will never share your information with any employer without your consent.

Send your profile and resume to the recruiter who posted this job. You may include a cover letter to introduce yourself.
