At Hopin, we’re reimagining events. Our mission is simple: we exist to make the world feel closer.
Founded in 2019, Hopin brings brands and communities together around highly interactive and engaging experiences. We believe that people should have access to the conversations, moments and ideas they care most about, no matter where they are. Through our highly scalable platform, participants are able to learn, engage, and connect from anywhere in the world.
Hopin started as a virtual events solution, but we have since meaningfully expanded our offering to hybrid and in-person events, as well as video and workplace collaboration products. This growth has been fueled by a series of strategic acquisitions: Boomset, an all-in-one event management platform; Attendify, which advanced our event marketing products; StreamYard, which brought us a video production studio; the video hosting service Streamable; and the video technology company Jamm.
The Data Platform team at Hopin has an ambitious roadmap to build an enterprise-class lakehouse to serve as a single source of truth for analytics and data products across the entire suite of Hopin products. To achieve this, we have selected the Databricks platform to support both batch and real-time use cases in a unified manner. This is an exciting opportunity for an experienced data engineer to join the team, make a big impact on the architecture and design of the lakehouse, and lead roadmap-critical projects that deeply inform Hopin’s product direction and empower teams to make truly data-driven decisions.
- Designing and building robust, scalable data processing pipelines
- Working with the product manager to define the technical scope of projects
- Implementing robust testing of data pipelines at multiple levels (unit, integration, and E2E)
- Developing data models for various stakeholders (e.g. finance, product analytics)
- 6+ years of data engineering experience
- Experience with Scala and Apache Spark
- Experience with the architecture and system design of large-scale distributed systems handling large volumes of data
- Solid experience with SQL and data warehouse modeling techniques
- Experience working in a DevOps manner, where the team owns its own infrastructure and uses Terraform to manage it
- Interest in being a technical leader within the team and mentoring other team members to level up their skills
- Comfort working in an Agile environment, in a team with an open culture and a no-ego attitude
- Experience working with non-technical or less technical stakeholders and translating their needs into robust solutions
- Experience with the Databricks platform is a big plus, but not required
- Experience with Looker is a plus, but not required