Lytics, the customer data platform (CDP) built for marketing teams, improves your business performance by connecting the right data to Lytics’ powerful AI engine. By combining unique behavioral insights, machine learning, and real-time campaign orchestration, Lytics equips marketers with the tools to create one-to-one marketing campaigns and engagements based on each user’s interests and customer journey. Some of the world’s most innovative brands use Lytics’ CDP technology, including General Mills, Live Nation, Nestlé Purina, AEG, Industry Dive, and Yamaha. We are looking for smart, passionate, and dedicated individuals to grow with us and deliver great value to our customers and company.
POSITION OVERVIEW
Job Description:
Software Engineers on the Lytics Data Engineering team are responsible for the design, implementation, and maintenance of Lytics’ data pipeline and the end-user API endpoints built on top of it, and work in collaboration with other teams such as Data Science and Platform. Lytics is primarily looking for smart people with a drive to learn and collaborate. Candidates should have experience in at least one of the following: compilers, ETL, stream processing, graph databases, distributed engineering, or processing data at petabyte+ scale.
What Our Technology Looks Like:
- Our services are all in Go and hosted on GCP
- The data pipeline is a collection of distributed services, processing billions of events per day, using a Lambda Architecture
- We've built a unique data storage layer using cutting-edge graph and information retrieval technologies
- We provide access to data through multiple APIs and systems
- Our customers interact with Lytics via a SQL-like interface, which is translated into an AST that is evaluated by Lytics’ data layer (a simplified sketch of this idea follows below)
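To give a flavor of the expression-to-AST-to-evaluation flow described above, here is a minimal, hypothetical sketch in Go. The types and names (`Node`, `Field`, `And`, `Eval`) are purely illustrative assumptions for this example and are not Lytics’ actual API or query engine.

```go
// A minimal sketch of evaluating a SQL-like filter expression, represented
// as an AST, against a single event. Illustrative only.
package main

import "fmt"

// Node is an AST node that can be evaluated against one event.
type Node interface {
	Eval(event map[string]any) bool
}

// Field checks an event field against a literal value, e.g. country = "US".
type Field struct {
	Name  string
	Value any
}

func (f Field) Eval(event map[string]any) bool {
	return event[f.Name] == f.Value
}

// And is true only when both children are true, e.g. a AND b.
type And struct {
	Left, Right Node
}

func (a And) Eval(event map[string]any) bool {
	return a.Left.Eval(event) && a.Right.Eval(event)
}

func main() {
	// AST for: country = "US" AND plan = "pro"
	expr := And{
		Left:  Field{Name: "country", Value: "US"},
		Right: Field{Name: "plan", Value: "pro"},
	}

	event := map[string]any{"country": "US", "plan": "pro"}
	fmt.Println(expr.Eval(event)) // prints: true
}
```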
Key Responsibilities:
- Be a key contributor to the development of Lytics’ data processing pipeline in all aspects such as availability, latency, performance, efficiency, change management, metrics, and future directions in its design
- Work with team members to design, document, and implement large-scale distributed streams and batch processing of petabytes of data using Go
- Work with Platform to develop new services on Kubernetes and Google Cloud Platform
- Provide tooling and infrastructure for other internal teams, including Engineering, Customer Success, and Sales
Qualifications:
- A learner, critical thinker, and problem solver
- 5+ years of experience working with cloud operations (any cloud is fine), distributed systems, or data processing
- Software engineering knowledge in data structures, concurrent programming, distributed systems, and query processing