Scala and Spark, Senior Architect

Bengaluru, Karnataka

Competitive

Permanent (Full time)

Recently Posted

Job Description

Primary Job Responsibilities:
- Evaluate the documents for the customer engagement and understand the request
- Engage with the customer and quickly understand the design, scope, and requirements of the engagement
- Instill confidence in the customer that the engineer has the necessary talent and skills for the engagement
- Work with the customer and project team, understanding timelines and responsibilities
- Actively communicate status to the customer, account teams, and leads
- Continually research, test, and investigate new technologies, methods, and features

Required:
- Good communication, soft skills, and customer-handling skills
- Always curious and learning
- Ability to closely follow architectural designs, scope, and requirements
- Ability to work independently and act as an integral part of a team
- Architectural experience with Spark, AWS, and Big Data (Hadoop: Cloudera/MapR/Hortonworks)
- Experience designing and building a Data Lake on AWS Cloud
- Hands-on experience with AWS/Azure/GCP Cloud and related tools, storage, and architectural aspects
- Experience integrating different data sources with a Data Lake
- Experience with Big Data/Analytics/Data Science tools and a good understanding of the leading products in the industry, along with passion, curiosity, and technical depth
- Thorough understanding of and working experience with the MapR/Cloudera/Hortonworks Hadoop distributions
- Solid functional understanding of Big Data technologies, streaming, and NoSQL databases
- Experience with the Big Data ecosystem, including tools such as YARN, Impala, Hive, Flume, HBase, Sqoop, Apache Spark, Apache Storm, Crunch, Java, Oozie, Pig, Scala, Python, Kerberos/Active Directory/LDAP, KSQL, Redis, Cassandra
- Experience solving streaming use cases using Spark, Spark SQL, KSQL, Kafka, and NiFi
- Thorough understanding, strong technical/architectural insight, and working experience with Docker and Kubernetes
- Containerization experience with the Big Data stack using OpenShift/Azure is an added advantage
- Exposure to cloud computing and object storage services/platforms
- Experience with Big Data deployment architecture, configuration management, monitoring, debugging, and security
- Experience performing cluster-sizing exercises based on capacity requirements
- Ability to build strong partnerships with internal teams and vendors to resolve product gaps/issues, and to escalate to management in a timely manner
- Good exposure to CI/CD tools, application hosting, and containerization concepts
- Excellent verbal and written communication skills, team skills, and proficiency with MS Visio
- Must be a self-starter with strong analytical and problem-solving skills

Essential Skillset:
- SQL
- Spark, Spark SQL, Kafka, KSQL, ETL (any tool)
- Python/Scala
- HBase/Redis/Cassandra (any NoSQL database)
- Hadoop development (any distribution)
- AWS/Azure/GCP platform

Additional Desired Skills:
- Training and experience with Docker and Kubernetes (on premises and in the cloud)
- Experience working in Agile development environments
- Knowledge of CI/CD and the SDLC

Job Requirements: Spark and Scala Architect
