Data Engineer interested in ETL and streaming analytics

FieldLevel

FieldLevel is seeking a data engineer who is excited to help us operate our data processing systems. Data transformation is a key discipline that helps our company deliver software people love to use, and if you like working with data, we want your help.


As a data engineer you'll join our data department in building exceptional systems to transform data into action. You'll work side-by-side with engineers, data analysts, and data scientists to implement solutions to some of the toughest problems in data processing. With the group you'll be empowered to diagnose, experiment with, and transform how FieldLevel does data processing.


The data engineer role lets you work with some of the most complex systems in our infrastructure. You should have some experience with ETL, as well as experience with multiple data stores (e.g., SQL databases, NoSQL databases, event streams). You should know SQL well and have written applications in a language common in data engineering (e.g., Scala, Python, C#) using its most common libraries.


Few people are experts in all of these areas, but you should have a working knowledge of several and an appetite to gain deeper expertise.


What you'll do:



  1. Monitor FieldLevel's data processing systems.

  2. Build operations tools to improve system performance and reliability.

  3. Build infrastructure to support collection, transformation, and delivery of data to multiple business units.

  4. Develop and promote best practices for producing and consuming data among engineers and analysts.

  5. Stay focused on how the team produces data and learns from it.

  6. Assess and advise on data processing opportunities in new product feature work.


Qualifications:



  1. Experienced with sound data schema design

  2. Proficient in data engineering languages and tools such as Python, C#, Java, SQL, and regular expressions.

  3. Proficient with databases, including relational (SQL Server, Postgres), columnstore (Vertica, Snowflake), and streaming/NoSQL platforms (Kafka)

  4. Proficient with cloud-based infrastructure (Azure/AWS)

  5. Familiar with containerization (Docker/Kubernetes)


Personality


Self-directed, curious, rigorous, pragmatic, hungry to learn


Education and Experience



  1. BS or higher

  2. Previous position as a data engineer or similar experience


Bonus if you have:



  1. Experience with streaming analytics

  2. Experience with ML

  3. Previous work in big data / large scale data processing

  4. Previous work with real-time streaming analytics

  5. Previous work with an event bus like Kafka

Apply

