Data Engineer
Spectraforce
US
Remote
Job Description
YOUR PURPOSE
The Data Engineer will own and strengthen the data foundation that powers Customer Success for Evergreen//One, our subscription offering. This role is responsible for building and maintaining scalable data pipelines, improving and restructuring Snowflake architecture, and ensuring subscription data is clean, trusted, and analytics-ready. You will play a critical role in resolving recurring data inconsistencies surfaced in Gainsight and Tableau by developing well-modeled, reliable datasets that serve as a clear source of truth.
This is a hands-on role focused on data ownership, modeling, quality, and performance — enabling scalable automation and future AI-driven solutions across Customer Success.
WHAT YOU’LL DO
- Design, build, and maintain ETL/ELT pipelines from Salesforce, Gainsight, and other operational systems into Snowflake.
- Audit, normalize, and restructure existing Snowflake tables and views to improve clarity, consistency, and performance.
- Develop clean, analytics-ready data models that power Tableau dashboards, Gainsight workflows, and executive reporting.
- Translate business logic (ARR, renewals, churn, consumption, health scoring) into structured, documented data definitions.
- Investigate and resolve root causes of reported data discrepancies and implement durable fixes.
- Optimize query performance, warehouse utilization, and overall Snowflake efficiency.
- Implement data validation, monitoring, and quality controls to improve trust in reporting.
- Document data lineage, transformations, and definitions to improve transparency and governance.
- Partner closely with Senior Data Analysts, Customer Success, and Ops to ensure scalable, reusable datasets.
- Prepare structured datasets that support automation initiatives and future AI use cases.
WHAT YOU’LL BRING
Minimum Required Experience & Skills
- Bachelor's degree in Computer Science, Computer Engineering, Information Systems, or equivalent practical experience
- 2+ years of experience with SQL, ETL, data modeling, and at least one programming language (e.g., Python, C++, C#, or Scala)
- 2+ years of experience in designing, developing, and maintaining robust data models from structured and unstructured sources
- Experience proactively identifying opportunities to improve ETL and dashboard performance and cost
- 2+ years in a role whose primary responsibility is working with data, such as data analyst, data scientist, data engineer, analytics engineer, or a similar position
- Experience with data warehouse technologies (Snowflake, BigQuery, Spark, etc.) and transformation tools such as dbt.
- Experience with Git/GitHub and branching methodologies, code review tools, CI tools, and JIRA/Confluence.
- Strong understanding of data modeling principles (normalization, dimensional modeling, schema design).
- Strong analytical and problem-solving skills, including experience investigating and resolving data inconsistencies.
Nice to have
- Experience working with Salesforce data models (Accounts, Opportunities, Contracts, Subscriptions).
- Experience with Gainsight, Tableau, and SnapLogic.
- Experience supporting SaaS or subscription-based business models (ARR, renewals, consumption).
- Exposure to automation, predictive modeling, or AI-related data preparation.
- Experience implementing data governance, access controls, and documentation standards.
- Experience with server-side languages such as TypeScript/Node.js, Python, or Kotlin
- Experience with RESTful API design