In a nutshell:
- This is a fantastic opportunity to apply your DevOps engineering knowledge to deploying and supporting complex, customisable technology across distributed, customer-owned cloud accounts for a category-defining startup
- Our mission is to empower every organisation to create value through a deep understanding of its customers. We have clear guiding principles and company values to keep us aligned on our journey, and we now need creative, empowered individuals to help us execute on our mission.
- Running on the hottest AWS and GCP data technologies, it is THE platform for teams who want to serve complex data use cases in an increasingly privacy- and security-conscious world. We collect, validate, enrich and load billions of events each day for our customers, who also benefit from our online experience and expertise in running our platform.
- Our values include key ideas around inclusivity, transparency and growth – we want to build a conscientious company helping each other and our customers achieve brilliant things.
- We recently closed our Series A2 fundraise with Atlantic Bridge and MMC Ventures, and Gartner has recognised us as a Cool Vendor in Marketing and Data Analytics 2020, which we think is, you guessed it, pretty cool.
- You'll join a team of multidisciplinary engineers as we stitch together yesteryear's development and operational roles and steer product development for our ever-evolving deployment fabric, in an environment that maximises engineering time by minimising overhead and meetings.
- The Snowplow product has three main components: the core open-source Engine, the Management layer and the Fabric. You'll be working on the Fabric, and through it having a direct impact on both Snowplow and the customers we serve. If you're tired of keeping the lights on while others get the kudos, this is the role for you. You will be making a difference.
- We operate primarily in an innovative Private SaaS model. Serving our customers in this paradigm means maintaining flexibility within our standardised deployment model while honouring each organisation's security and compliance requirements within their AWS or GCP cloud account. If a tree falls in a forest and no one is around to hear it, our automation gets it stood back up, or plants another of appropriate size, without anyone knowing about it.
- As members of our Technical Operations team, DevOps Engineers at Snowplow are instrumental in developing our ability to automate away the burden of day-to-day duties, so that the team has more time to experiment with exciting new technologies while the machines do the work.
- You’ll work closely with our Support, Data Engineering and Site Reliability Engineering teams to build out the workflows and automation for effectively building, testing and deploying application and infrastructure updates to our global cloud estate. You'll be involved daily not only in contributing to our pioneering modular approach of over 100,000 lines of Terraform, but also in defining the interfaces between these teams to best manage these processes as we scale.
- We currently sit across hundreds of AWS & GCP cloud accounts, and we are looking for a proactive engineer with a strong sense of ownership to help us remove the bottlenecks so we can 10x and then 100x our footprint, to 1,000 and then 10,000 accounts across AWS, GCP and Azure. Automating the maintenance and deployment of thousands of individualised stacks is an enormously ambitious undertaking and a hugely exciting infrastructure automation challenge. We want you to solve this with us!
What you’ll be doing
- Scaling up: Developing and maintaining our growing Terraform infrastructure-as-code estate, which we use to deploy infrastructure and software updates for all internal and client use cases
- Building in resilience: Supporting our internal infrastructure stacks, which include the HashiCorp suite (OSS) as well as our Snowplow Insights UI and internal VPNs (all based in the AWS Cloud)
- Empowering others with tech: Implementing ways to automate and improve our development and release processes across our internal and client-facing infrastructure, in close collaboration with our Support & Engineering teams, to remove the “human in the loop”
- Designing our future: Writing solution specifications, diagrams, best-practices documentation and playbooks
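To give a flavour of the scaling-up work above, here is a minimal sketch of rolling a Terraform change across many cloud accounts. The account IDs, the `DRY_RUN` flag and the helper function are all hypothetical placeholders for illustration, not Snowplow's actual tooling:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: plan a Terraform update across many customer accounts.
# Account IDs, DRY_RUN and plan_account are illustrative, not real tooling.
set -euo pipefail

DRY_RUN="${DRY_RUN:-1}"   # default to a safe no-op

# Plan (or, in dry-run mode, describe) an update for a single account.
plan_account() {
  local account="$1"
  if [[ "${DRY_RUN}" == "1" ]]; then
    echo "(dry run) would run: terraform plan -var account_id=${account}"
  else
    terraform plan -var "account_id=${account}" -out "plan-${account}.tfplan"
  fi
}

# Placeholder IDs standing in for hundreds of real cloud accounts.
for account in 111111111111 222222222222 333333333333; do
  echo "Account ${account}: $(plan_account "${account}")"
done
```

In practice a loop like this would be replaced by orchestration and CI/CD pipelines, which is exactly the kind of automation this role builds.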
We’d love to hear from you if
- You’ve worked with Terraform and understand the importance of IaC
- Automation doesn’t end with IaC – you understand that there is BASH in every system and script the rest of your problems away
- You're knowledgeable about CI/CD tooling and techniques
- You have production experience with any of the three major on-demand cloud computing platforms (AWS, Azure and GCP)
- You've worked with Docker and are familiar with container-based workflows, concepts and benefits
- Bonus points if you have any Golang programming experience!
What you get in return for being awesome
- A competitive package, including share options
- 25 days of holiday a year (plus public holidays)
- Freedom to work from wherever suits you best
- Cycle to work scheme
- Two fantastic company Away Weeks in a different European city each year (or when this isn’t possible, we have “Stay Away Weeks”)
- Mental health support including therapy sessions
- Work alongside a supportive and talented team with the opportunity to work on cutting edge technology and challenging problems
- Grow and develop in a fast-moving, collaborative organisation
- MacBook Pro
- Convenient location in central London for those who want to work there
- Continuous supply of Pact coffee and healthy snacks in the office when you’re here!
Snowplow is dedicated to building and supporting a brilliant, diverse and hugely inclusive team. We don’t discriminate against gender, race, religion or belief, disability, age, marital status or sexual orientation. Whatever your background may be, we welcome anyone with talent, drive and emotional intelligence.