Infrastructure and systems don’t need to suck; we believe deep understanding of fundamentals and solid engineering cure most woes. We are looking for a startup-friendly, dev-opsy type person to work with the latest Hadoop tech. This is a challenging position and a great opportunity to learn the latest technologies used in the data space.
We strive to make our interview process effective. We can probably get a bunch out of the way if you have public GitHub code or open-source contributions we can look at. Our goal is to run the interview cycle and reach a conclusion within a week.
Desired Skills & Experience
We’re looking for someone who:
Can work closely with developers to solve systems problems, blurring the line between ops and dev
Can optimize like crazy; squeeze every bit of performance out of servers
Will automate everything
Can code, script, and swap hardware (disks, laptops, servers, etc.); can support network, desktop, and security needs
Has a solid foundation in a scripting language such as Python or Ruby, plus Bash, but is confident jumping into a new language to debug issues
Has experience with configuration management systems such as Puppet, Ansible, CFEngine, or Chef
Bonus points for experience with business intelligence concepts
Major bonus points for experience with Hadoop and Hadoop-ish tech like Spark
Can help build a great company
May have a literal or metaphoric Beard of Unix Mastery