. . . create world-changing products using God-given talents . . .
Our client began as a family-run women’s apparel business in the late 1930s. Over the decades, the company has evolved into a nationally recognized fashion retailer focused on helping customers feel confident and well dressed for special moments, nights out, and everyday occasions. What started as a small operation has grown into a large retail organization with hundreds of locations, a growing team, and an ongoing expansion strategy.
They are currently looking for a proactive Senior Data Engineer to join the team.
If you’re passionate and think you have what it takes to help carry this legacy forward, we encourage you to apply.
We are seeking a highly skilled Senior Data Engineer who can own the end-to-end design, development, and operation of robust data pipelines with minimal supervision. This is an individual-contributor role: as the organization’s sole data engineer, you will be responsible for delivering practical, production-ready solutions, with no direct people-management or formal mentoring responsibilities. The ideal candidate has extensive hands-on experience with BigQuery, Python-based data pipelines, and cloud-native orchestration. Knowledge of Airbyte and dbt is a plus, and cloud-based machine learning experience is an asset.
- Our client is a GCP shop, and they also leverage AWS and Azure to support their operations.
- Strong experience working on e-commerce platforms.
- Hands-on experience with Shopify.
- Contract: Long-Term
- Location: LATAM
- Start Date: ASAP
- The core team is based in the Los Angeles area, with additional developers located across LATAM in Brazil, Argentina, and Colombia. Working hours will follow the Pacific Time Zone.
- 5+ years of hands-on data engineering experience, with a proven track record of owning data pipelines in production.
- Strong expertise in Google Cloud Platform (GCP), including:
– BigQuery (advanced SQL, partitioning, clustering; BI Engine familiarity a plus)
– Cloud Composer (Apache Airflow) or equivalent workflow orchestration
– Pub/Sub (Cloud Pub/Sub) for event-driven data ingestion
- Proficient Python developer with extensive experience building data pipelines, transformations, and automation.
- Deep experience extracting data from:
– GraphQL endpoints
– REST APIs
– Relational and/or NoSQL databases
– Flat files (CSV, JSON, Parquet, etc.)
- Demonstrated ability to design and implement scalable ETL/ELT pipelines and maintain them in production (a minimal extract-and-load sketch follows this list).
- Strong SQL skills with the ability to optimize BigQuery queries; understanding of data lakehouse concepts.
- Excellent problem-solving, communication, and stakeholder-management skills.
- Ability to work independently, set priorities, meet deadlines, and drive initiatives with minimal guidance.
- Experience with Airbyte for data ingestion and connectors.
- Experience with dbt (data build tool) for transformations and data modeling.
- Familiarity with orchestration patterns, CI/CD for data pipelines, and versioning of data assets.
- Experience with data quality frameworks and testing (e.g., dbt tests, Great Expectations).
- Knowledge of multi-cloud or hybrid data architectures.
- Experience with data instrumentation, monitoring, and SRE practices for data pipelines.
- Exposure to cloud-based ML services and ML data workflows (Vertex AI, AutoML, etc.).
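As a rough illustration of the core stack above, here is a minimal sketch (using a hypothetical REST endpoint and placeholder project, table, and field names) of an extract-and-load step: pull records from an API in Python and append them to a partitioned, clustered BigQuery table.

```python
"""Minimal sketch: extract orders from a hypothetical REST API and load them
into a partitioned, clustered BigQuery table. All names are placeholders."""
import requests
from google.cloud import bigquery

API_URL = "https://api.example.com/v1/orders"   # hypothetical source endpoint
TABLE_ID = "my-project.analytics.orders"        # hypothetical destination table

def extract_orders(since: str) -> list[dict]:
    """Fetch order records created since the given ISO date."""
    resp = requests.get(API_URL, params={"since": since}, timeout=30)
    resp.raise_for_status()
    return resp.json()["orders"]

def load_to_bigquery(rows: list[dict]) -> None:
    """Append rows, partitioned by order date and clustered by store."""
    client = bigquery.Client()
    job_config = bigquery.LoadJobConfig(
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
        time_partitioning=bigquery.TimePartitioning(field="order_date"),
        clustering_fields=["store_id"],
        schema=[
            bigquery.SchemaField("order_id", "STRING"),
            bigquery.SchemaField("store_id", "STRING"),
            bigquery.SchemaField("order_date", "DATE"),
            bigquery.SchemaField("total", "NUMERIC"),
        ],
    )
    client.load_table_from_json(rows, TABLE_ID, job_config=job_config).result()

if __name__ == "__main__":
    load_to_bigquery(extract_orders(since="2024-01-01"))
```

The same pattern extends to GraphQL endpoints, databases, and flat files; only the extraction step changes.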
### Cloud ML (Nice-to-Have)
- Experience or familiarity with cloud-based machine learning services (e.g., Vertex AI, Cloud AI Platform) and integrating data pipelines with ML workflows.
- Building or supporting data prep pipelines for ML model training, feature stores, and model inference data routing (see the sketch after this list).
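Where ML workloads are involved, a minimal sketch (with hypothetical project, region, and table names) of handing a curated BigQuery table produced by the pipeline over to Vertex AI as a tabular dataset might look like this:

```python
"""Minimal sketch: register a pipeline's curated BigQuery output as a
Vertex AI tabular dataset for downstream training. Names are placeholders."""
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Point Vertex AI at the feature table the data pipeline maintains.
dataset = aiplatform.TabularDataset.create(
    display_name="orders_training_data",
    bq_source="bq://my-project.analytics.orders_features",
)
print(dataset.resource_name)
```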
## Qualifications (Preferred)
- 5+ years of hands-on data engineering experience delivering production-grade pipelines.
- Proven ability to drive end-to-end data initiatives with business impact.
- Excellent communication skills and ability to work cross-functionally with product, analytics, and data science teams.
## Responsibilities
- Design, implement, and own scalable ETL/ELT data pipelines from GraphQL endpoints, REST APIs, databases, and flat files into BigQuery.
- Lead the architecture and implementation of data models, schemas, partitioning, and clustering to optimize performance and cost.
- Build reusable, maintainable data pipelines using Python, with strong emphasis on reliability, observability, and quality.
- Develop and enforce data quality checks, monitoring, alerting, and incident response processes.
- Define and implement data governance, lineage, and metadata management practices.
- Own workflow orchestration in cloud environments (e.g., Cloud Composer/Airflow), including scheduling, retries, and dependency management (a minimal DAG sketch follows this list).
- Work autonomously with minimal supervision, prioritizing tasks, delivering on deadlines, and communicating progress and risks clearly.
- Collaborate with analytics, data science, and product teams to translate requirements into scalable data solutions.
- Evaluate and integrate new data tooling (e.g., Airbyte, dbt) as needed; contribute to best practices and standards.
- Partner with ML/DS teams to provide data for model training, feature engineering, and inference pipelines when ML workloads are involved.
- Document designs and decisions, and provide clear technical artifacts to support maintenance.
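To illustrate the orchestration responsibility above, here is a minimal Cloud Composer / Airflow DAG sketch wiring extract, load, and transform tasks with a daily schedule, retries, and explicit dependencies. The DAG id, task names, callables, and dbt selector are illustrative placeholders.

```python
"""Minimal sketch of a daily Airflow DAG: extract -> load -> transform,
with retries and explicit task dependencies. Names are placeholders."""
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator

default_args = {
    "owner": "data-engineering",
    "retries": 2,                           # automatic retries on task failure
    "retry_delay": timedelta(minutes=10),
}

def extract_orders(**context):
    """Placeholder: pull data from the source API into a staging area."""

def load_orders(**context):
    """Placeholder: load the staged data into BigQuery."""

with DAG(
    dag_id="orders_daily",
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 6 * * *",          # every day at 06:00
    default_args=default_args,
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    load = PythonOperator(task_id="load_orders", python_callable=load_orders)
    transform = BashOperator(task_id="run_dbt", bash_command="dbt run --select orders")

    extract >> load >> transform            # explicit dependency chain
```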
## What We Offer
- Work your way – Enjoy the freedom to work from anywhere, with flexible hours that match your natural rhythm.
- Work with global clients – Collaborate directly with international teams to create real impact.
- Make extra cash – Earn bonuses for referring great people or bringing in new business opportunities.
- Great people, no micromanagement – Join a supportive, results-focused team where you’re trusted to do your best work.
This flexibility allows developers to enjoy:
- A better work-life balance
- Increased productivity
- The ability to work any time around the clock
- Less time spent commuting
- The freedom to design their ideal daily schedule
- A career, not just a job
- The chance to work smarter, not longer
- More time with family and friends

