Senior Data Engineer - Sunnyvale, California - United States - 45706



JOB DESCRIPTION

Job #: 45706
Title: Senior Data Engineer
Job Location: Sunnyvale, California - United States
Remote Job: Unknown
Employment Type:
Salary: $150,000.00 - $165,000.00 - US Dollars - Yearly
Other Compensation: stock options
Employer Will Recruit From: Regional
Relocation Paid?: Yes

WHY IS THIS A GREAT OPPORTUNITY?


The client is backed by leading venture firms and innovative insurers, and its team comprises computer vision, data science, and risk analysis experts.

JOB DESCRIPTION

THE OPPORTUNITY

As a Senior Data Engineer, you’ll be a core contributor to our push to adopt, integrate, and deliver new data sources that accelerate the next generation of products. Working with machine learning, platform, and data engineers, you’ll build the systems and pipelines that bring in diverse, novel data sets and integrate them into our data warehouse, APIs, and applications.

Our client's solutions have been adopted by leading carriers across the U.S., Canada, and Australia, and they are just getting started. Over the past six years, they've built an analytics platform purpose-built for deep learning, leveraging a radically expanded array of input data sources and advanced machine learning technologies.

THE TECH STACK

The client leverages all available tools and technologies to build a best-in-class tech stack, which affords the flexibility of fast deployments along with the stability to support aggressive SLAs for critical-path client APIs and applications. They build models with PyTorch and TensorFlow, and use Python, Spark, and Postgres across their AWS-deployed cloud infrastructure.

WITHIN 1 MONTH, YOU’LL

    • Onboard with the engineering team to learn our tech stack and software development process, and start contributing to our codebase.
    • Get to know the engineers, data scientists, and machine learning researchers you’ll work with through 1:1s and by sitting in on team and project meetings.
    • Learn our current data architecture, including how we manage, update, and deploy data while maintaining stability and speed within our systems.
    • Build your first pipeline and ship it into production.

WITHIN 3 MONTHS, YOU’LL

    • Be an expert in our data stack, able to effectively develop, run, and debug our primary ETL pipelines.
    • Onboard new data partners and integrate their data into our data warehouse and transactional databases.
    • Learn our near- and long-term product and data strategy and understand our technology roadmap.

WITHIN 6 MONTHS, YOU’LL

    • Work with engineering leadership to identify and build long-term technical projects.
    • Execute efficiently to ship products and deliver impact for our engineering teams.
    • Develop new tools, processes, and pipelines for managing our data workflows and data infrastructure, leveraging open-source and SaaS tools to accelerate development.
    • Proactively identify opportunities and improvements for our existing infrastructure.

THE SKILL SET

    • Demonstrated success building and maintaining complex data pipelines. 
    • Experience with ETL and data-processing frameworks such as Spark, Hadoop, and Airflow.
    • Strong SQL and data-modeling skills, with experience in cloud-based data warehouses such as Redshift, Athena, and BigQuery.
    • Background in programming and software development.
    • Exposure to BI tools such as Tableau, Periscope, and Google Data Studio.
    • Excellent communication skills: able to communicate clearly and concisely.

THE TEAM

You will join a growing team of software engineers with years of experience building and shipping product-focused software across a multitude of industries. They work daily with machine learning engineers and data scientists to build the databases, APIs, and applications that form the backbone of the client's infrastructure. They tackle difficult engineering challenges and focus on delivering impact for their coworkers and clients each day.

QUALIFICATIONS

Education:
University - Bachelor's Degree/3-4 Year Degree

APPLY NOW FOR THIS JOB

Our recruiters are currently seeking to fill this position and hundreds like it in our network. If you are a match, you'll be contacted with additional details.

We value your privacy and will never share your information with any employer without your consent.

Send your profile and resume to the recruiter who posted this job. You may include a cover letter to introduce yourself.
