
MarketOnce

Senior Database Engineer

Posted 13 Hours Ago
Remote
Hiring Remotely in United States
150K-180K Annually
Senior level

At MarketOnce, we empower businesses with the insights and strategies they need to excel in today's dynamic market. With a strong foundation in market research, we offer innovative solutions in research, software, consulting, advertising, and marketing to corporations, private equity firms, and other organizations seeking to achieve their goals.

Our team is distinguished by its client-centric approach: treating each client's business with the same dedication and care as if it were our own. This commitment enables us to deliver personalized service and achieve the highest standards of success and innovation in everything we do. Together, our family of companies, including MarketOnce, ROI Rocket, and eAccountable, works toward delivering unparalleled solutions. Headquartered in Denver, Colorado, our global team collaborates from locations across the US and Europe.

We value curiosity, creativity, collaboration, and expertise, continuously striving to push boundaries and exceed our clients' expectations. Join us to be part of a culture that drives meaningful results. 


About the Opportunity:

As a Sr. Database Engineer, you'll join a close-knit team building the future of a platform designed to power immersive digital experiences for our customers and partners. In this role, you will leverage deep technical expertise to design, optimize, and operationalize large-scale data solutions spanning distributed compute, data lakes, and relational and vector databases, ensuring high performance, scalability, and reliability. You'll work across the full modern data stack — from Lakehouse foundations to real-time inference — enabling advanced analytics, semantic search, and AI-driven capabilities at scale.


What You'll Do:

  • Analyze and optimize production data solutions to identify and resolve issues related to performance, locking, and scalability
  • Write and optimize complex SQL across the full stack — Spark SQL for distributed transformations, PostgreSQL for transactional and analytical workloads, and Delta Lake for versioned, ACID-compliant table management
  • Design and build large-scale Spark and Azure Data Factory pipelines for batch and streaming ingestion, transformation, and feature engineering — leveraging both PySpark and Spark SQL for distributed processing at large scale
  • Design and develop data lake architectures including Medallion Architecture to support advanced analytics and large-scale data ingestion
  • Build and maintain production-grade data pipelines for efficient data movement and transformation across systems, managing Delta table lifecycles, schema evolution, Z-ordering, time travel, and Change Data Feed to support reliable, performant analytics, OLTP, and ML workloads
  • Design and operate hybrid storage patterns combining PostgreSQL for transactional workloads — with optimized schemas, indexes, CTEs, window functions, and partitioning — alongside Delta Lake Lakehouse layers for analytical and ML workloads
  • Design and implement reporting solutions that deliver actionable insights to business stakeholders
  • Communicate database and data architecture designs to business and technical audiences, including business users, program sponsors, database administrators, and ETL and BI developers
  • Evaluate potential technology/tool solutions that meet business needs and facilitate internal and external discussions towards desirable outcomes
  • Collaborate with solution architects and project resources on systems integration and compatibility, while acting as a leader in coaching, training, and providing guidance
  • Create functional and technical documentation related to data architecture and business intelligence solutions
  • Provide technical consulting to application development teams during application design and development for highly complex or critical projects
  • Design data governance procedures to ensure compliance with internal and external regulations 
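
The SQL skills called out above (complex joins, CTEs, window functions) can be illustrated with a small sketch. This is a hypothetical example, not code from MarketOnce: it uses Python's built-in sqlite3 as a stand-in engine, but the query itself (a CTE feeding a RANK() window function) is written in portable SQL that runs the same way on PostgreSQL or Spark SQL.

```python
import sqlite3

# Hypothetical schema and data for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('acme', 120.0), ('acme', 300.0), ('globex', 75.0);
""")

# CTE + window function: find each customer's largest order.
query = """
    WITH ranked AS (
        SELECT customer,
               amount,
               RANK() OVER (PARTITION BY customer ORDER BY amount DESC) AS rnk
        FROM orders
    )
    SELECT customer, amount FROM ranked WHERE rnk = 1 ORDER BY customer;
"""
top_orders = conn.execute(query).fetchall()
# top_orders -> [('acme', 300.0), ('globex', 75.0)]
```

On PostgreSQL, EXPLAIN/ANALYZE on queries like this is where the tuning work described in the role begins.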


What We’re Looking For:

  • 7+ years in data engineering, big data platforms, or a related discipline with hands-on production experience at scale
  • Located in Eastern or Central time zone; you will work extensively with a team member in the UK
  • Proven experience analyzing and tuning production database systems for performance and reliability
  • Expert-level SQL skills — complex joins, CTEs, window functions, query plan analysis, and optimization across both OLTP (PostgreSQL) and distributed engines (Spark SQL, Databricks SQL, Delta Lake)
  • Hands-on experience with data lake technologies and data pipeline frameworks (e.g., Azure Data Lake, Azure Data Factory, Databricks)
  • Deep expertise in Apache Spark — DataFrames, Spark SQL, UDFs, partitioning, broadcast strategies, and hands-on performance tuning experience
  • Data warehouse and visualization experience, including demonstrated strong logical, physical, and dimensional modeling skills
  • Solid command of Delta Lake internals — transaction log, schema enforcement, schema evolution, time travel, CDF, Z-ordering, liquid clustering, and OPTIMIZE/VACUUM operations
  • Strong PostgreSQL experience — schema design, indexing strategies, partitioning, EXPLAIN/ANALYZE tuning, and extensions including pgvector for similarity search
  • Strong Python skills — PySpark, pandas, async programming, building production data utilities
  • Strong understanding of reporting and BI solution design, including Power BI or similar tools
  • Experience designing technology roadmaps and leading the transition from a current architectural framework to a future target architecture
  • Excellent verbal and written communication skills to document and present data models, strategies, standards, and concepts to both business and IT audiences
  • Experience with the following tech & tools: 
    • SQL Engines: Azure SQL Server, Spark, Cosmos DB, PostgreSQL, Elastic
    • Azure Technologies: Cosmos DB, SQL Database, Analytics, Azure Databricks, Data Factory, Fabric, Power BI, Azure Data Lake
    • ETL Tools: Azure Data Factory, Azure Databricks, Azure Stream Analytics
    • Lakehouse: Delta Lake, Delta Live Tables
    • Languages: SQL, Python, PySpark
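
The pgvector requirement above concerns similarity search over embeddings. As a rough sketch of the underlying idea (hypothetical values, pure Python; in production pgvector computes this in-database via its cosine-distance operator and an index):

```python
import math

def cosine_similarity(a, b):
    # The metric behind pgvector's cosine distance
    # (distance = 1 - similarity). Pure-Python for illustration.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy 3-dimensional embeddings (made-up values).
query_vec = [0.1, 0.9, 0.2]
docs = {"doc_a": [0.1, 0.8, 0.3], "doc_b": [0.9, 0.1, 0.0]}

# Retrieve the document whose embedding is closest to the query.
best = max(docs, key=lambda k: cosine_similarity(query_vec, docs[k]))
# best -> "doc_a"
```

In PostgreSQL with pgvector, the same retrieval becomes an ORDER BY on the distance operator with a LIMIT, backed by an approximate-nearest-neighbor index for scale.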


What We Offer:

  • Competitive base salary: $150,000-$180,000/year
  • Flexible vacation policy – take the time you need to recharge
  • Comprehensive health, vision & dental insurance
  • 401k with company contribution
  • Opportunity for career progression with plenty of room for personal growth


What to Expect:

  • 1st Round: 30-45 minute interview with the Recruiter
  • 2nd Round: 45-minute interview with the Hiring Manager (technical conversation)
  • 3rd Round: 45-minute interview with Tech Leader (Problem solving)

We do not work with outside recruiting agencies.


MarketOnce will accept applications for this role on an ongoing basis.

MarketOnce is an Equal Opportunity Employer. We believe in creating a diverse and inclusive workplace where everyone has the opportunity to thrive. We are committed to hiring individuals based on their skills and qualifications, regardless of race, gender, age, sexual orientation, disability, or any other characteristic. We welcome and encourage applications from all backgrounds.

Top Skills

Azure Data Factory
Azure Data Lake
Azure Databricks
Azure SQL Server
Cosmos DB
Delta Lake
PostgreSQL
Power BI
PySpark
Python
Spark SQL
SQL


