As a Solutions Architect, you will work as a consultative team member in our Field Services organization. AtScale Solutions Architects function as Hadoop team leads at customer locations on short-term engagements to support the implementation of AtScale. Engaging with customers from the Proof of Concept (POC) stage through implementation in complex distributed production environments, you work collaboratively with them to successfully deploy AtScale in production. You are an all-around player, also helping the AtScale sales team successfully communicate and demonstrate AtScale's capabilities. You have the technical depth to roll up your sleeves and work with Hadoop and Hive, and the polish to represent AtScale with the utmost professionalism.
Responsibilities:
Designing and architecting solutions with our customers, scoping new engagements and implementations (both short- and long-term), and guiding teams during product implementations.
Resolving technical issues and advising customers on best practices for big data, Hadoop environments, and AtScale.
Driving successful product installation, configuration, tuning, and performance.
Assisting customers with capacity planning so their environments can scale.
Writing and producing technical documentation.
Collaborating with internal teams to channel client feedback and solutions into future releases of the product.
Advocating for feature requests and bug fixes on behalf of the customer.
Tracking issues meticulously and following through.
Visiting customer sites.
Experience and Requirements:
BS or higher in Computer Science or a related field
5+ years of Professional Services (customer-facing) experience architecting large-scale storage, data center, and/or globally distributed solutions
Experience with Hadoop-related tools and technologies is a must (HDFS and/or MapReduce, HBase, Hive, Spark, Impala)
BI experience a must (Tableau, Qlik, Cognos, MicroStrategy, BO, SSAS)
Java, Scala, Python, or shell scripting a plus
Ability to understand big data use cases and recommend standard design patterns commonly used in Hadoop-based deployments
Ability to understand and translate customer requirements into technical requirements
Knowledge of distributed systems
Familiarity with data warehousing concepts
Knowledge of complex data pipelines and data transformation
Willingness to roll up your sleeves in a fast-paced, highly varied environment