Lead Data Engineer

Remote
Contracted
Experienced

About Fusemachines

Fusemachines is a leading AI strategy, talent, and education services provider. Founded by Sameer Maskey, Ph.D., Adjunct Associate Professor at Columbia University, Fusemachines has a core mission of democratizing AI. With a presence in four countries (Nepal, United States, Canada, and the Dominican Republic) and more than 450 employees, Fusemachines seeks to bring its global expertise in AI to transform companies around the world.

Location: Remote (Full-time)

About the role

This is a remote full-time position responsible for designing, building, testing, optimizing, and maintaining the infrastructure and code required for data integration, storage, processing, pipelines, and analytics (BI, visualization, and advanced analytics) from ingestion to consumption, implementing data flow controls, and ensuring high data quality and accessibility for analytics and business intelligence purposes. This role requires a strong foundation in programming and a keen understanding of how to integrate and manage data effectively across various storage systems and technologies.

We're looking for someone who can ramp up quickly, contribute right away, and lead the Data & Analytics work, helping with everything from backlog definition to architecture decisions and providing technical leadership to the rest of the team with minimal oversight.

We are looking for a skilled Sr. Data Engineer/Technical Lead with a strong background in Python, SQL, PySpark, Redshift, and AWS cloud-based large-scale data solutions, and a passion for data quality, performance, and cost optimization. The ideal candidate will work in an Agile environment and should also have GCP experience in order to contribute to the migration from AWS to GCP.

This role is perfect for an individual passionate about leading teams and leveraging data to drive insights, improve decision-making, and support the strategic goals of the organization through innovative data engineering solutions.

Qualifications / Skill Set Requirements:

  • Bachelor’s degree in Computer Science, Information Systems, Engineering, or related field.
  • 5+ years of data engineering experience in AWS and GCP (certifications preferred) with strong expertise in Python, SQL, PySpark, and experience in Agile environments.
  • Proven track record designing and optimizing data pipelines, architectures, data lakes/warehouses, and integrations (batch and real-time) using tools like Spark, DBT, Kafka, Airflow, and open-source solutions.
  • Strong programming skills (Python, Scala, SQL) with experience in data modeling, database design, relational and NoSQL systems (MySQL, Postgres, Cassandra, MongoDB, etc.).
  • Deep knowledge of AWS services (Lambda, Kinesis, Redshift, S3, EMR, EC2, IAM, CloudWatch) and hands-on experience with orchestration (Airflow/Composer), DevOps (GitHub, CI/CD, Terraform), and data governance practices.
  • Strong problem-solving, leadership, and project management skills, with the ability to collaborate across cross-functional teams and communicate complex concepts to technical and non-technical stakeholders.

Responsibilities: 

  • Design, implement, deploy, test and maintain highly scalable and efficient data architectures, defining and maintaining standards and best practices for data management independently with minimal guidance.
  • Ensure the scalability, reliability, quality, and performance of data systems.
  • Mentor and guide junior and mid-level data engineers.
  • Collaborate with Product, Engineering, Data Scientists, and Analysts to understand data requirements and develop data solutions, including reusable components.
  • Evaluate and implement new technologies and tools to improve data integration, data processing, and analysis.
  • Design architecture, observability, and testing strategies, and build reliable infrastructure and data pipelines.
  • Take ownership of the storage layer and data management tasks, including schema design, indexing, and performance tuning.
  • Swiftly address and resolve complex data engineering issues and incidents, including bottlenecks in SQL queries and database operations.
  • Conduct discovery on existing data infrastructure and proposed architecture.
  • Evaluate and implement cutting-edge technologies and methodologies, and continue learning and expanding skills in data engineering and cloud platforms, to improve and modernize existing data systems.
  • Evaluate, design, and implement data governance solutions (cataloging, lineage, quality, and governance frameworks) suitable for a modern analytics solution, following industry-standard best practices and patterns.
  • Define and document data engineering architectures, processes and data flows.
  • Assess best practices and design schemas that match business needs for delivering a modern analytics solution (descriptive, diagnostic, predictive, prescriptive).
  • Be an active member of our Agile team, participating in all ceremonies and continuous improvement activities.

Equal Opportunity Employer: Race, Color, Religion, Sex, Sexual Orientation, Gender Identity, National Origin, Age, Genetic Information, Disability, Protected Veteran Status, or any other legally protected group status.