Join Our Team
Our company is made up of extraordinary people building extraordinary solutions. We are always looking for talented people who are driven to deliver impactful results for companies around the world.
iDerive is bringing data integration, visualization, and advanced data science to brands selling on Amazon and other marketplaces. Our platform empowers Marketing, Finance, and Operations to make better decisions by identifying opportunities and making recommendations.
We are a fully distributed team, with team members in five US states.
As a Data Engineer, you will design, implement, and manage data pipelines using dbt (data build tool), Python, and data partner APIs. You will collaborate closely with cross-functional teams, including Product, Data Science, Tableau developers, and other data engineers, to gather requirements, retrieve data, and create an elegant, scalable data architecture for answering a variety of questions. You will work with a high level of autonomy and take ownership of end-to-end data engineering projects.
Responsibilities:
- Design, develop, and maintain scalable and efficient data pipelines using dbt, Python, and other ETL tools.
- Extract, transform, and load (ETL) data from various sources into the data warehouse, ensuring data quality and consistency.
- Develop data pipelines and schemas from scratch, ensuring scalability, reusability, integrity, reliability, and performance.
- Collaborate with data scientists, analysts, and software engineers to understand data requirements and implement solutions that meet business needs.
- Monitor and troubleshoot data pipeline issues, ensuring data availability and reliability.
- Implement and maintain data governance policies, including data security, privacy, and compliance.
- Develop and maintain documentation related to data pipelines, data models, and data transformation processes.
- Stay up to date with the latest trends and technologies in data engineering, and recommend improvements to enhance efficiency and effectiveness.
Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent experience).
- Strong experience in building and maintaining data pipelines using dbt.
- Proficiency in Python programming and experience with Linux environments.
- Demonstrated experience in designing and implementing data warehouses from scratch, ensuring performance and scalability.
- Strong understanding of SQL and database management systems.
- Familiarity with data modeling concepts and dimensional modeling techniques.
- Excellent problem-solving and troubleshooting skills, with the ability to work independently and proactively.
- Strong communication skills and the ability to collaborate effectively with cross-functional teams.
- Tableau dashboard development experience is a plus.