Data Engineer - Kamrup Metropolitan - This Area only | Zoek India
Kamrup Metropolitan - This Area only
Permanent (Full time)
The Upstox Story: One of the fastest growing stock broking companies and among the first brokers in India to introduce commission-free trading for investors. Our customers enjoy zero brokerage on equity delivery trades and a flat Rs. 20 per order pricing model on all other segments. We are on our way to becoming the leading low-cost brokerage and stock broker in the country. We scaled from a customer base of 25k customers in 2017 to 2 lakh customers in 2019 to 2+ million customers today. The company will continue to be driven by its guiding principle of making trading in stock markets simple and affordable. Upstox's focus on high design and leading-edge technology seeks to revolutionise the online trading industry in India, all at disruptive price points.

Backed by Ratan Tata, Upstox raised $4 million in Series A funding in early 2016, led by Kalaari Capital. The round also saw participation from healthcare tech firm GVK Davix Technologies Pvt Ltd. US-based investment firm Tiger Global Management invested $25 million in the company in September 2019 in a Series B funding round.

We have a team of highly skilled technology and finance professionals, and are currently looking for highly motivated field experts to be part of our high-energy team.

Position: Data Engineer
Experience: 4-8 Years

Job Description: We are looking for a savvy Data Engineer to join our growing team of analytics experts. The hire will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects.
They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company's data architecture to support our next generation of products and data initiatives.

Roles & Responsibilities:
· Create and maintain optimal data pipeline architecture.
· Develop and maintain scalable data pipelines and build out new API integrations to support continuing increases in data volume and complexity.
· Collaborate with analytics and business teams to improve data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision making across the organization.
· Implement processes and systems to monitor data quality, ensuring production data is always accurate and available for key stakeholders and the business processes that depend on it.
· Perform data analysis required to troubleshoot data-related issues and assist in their resolution.
· Work closely with a team of frontend and backend engineers, product managers, and analysts.
· Define company data assets (data models) and the Spark, Spark SQL, and Hive SQL jobs that populate them.
· Design data integrations and a data quality framework.
· Design and evaluate open source and vendor tools for data lineage.
· Work closely with all business units and engineering teams to develop a strategy for long-term data platform architecture.

Experience & Skills Required:
· Advanced working SQL knowledge and experience with relational databases, query authoring (SQL), and working familiarity with a variety of databases.
· Active involvement with the team in designing and building new data solutions, both OLAP and OLTP.
· Experience developing stream-processing systems using frameworks such as Spark Streaming or Kafka.
· Experience building and optimizing "big data" pipelines, architectures, and data sets.
· Strong analytic skills related to working with unstructured datasets.
· Ability to build processes supporting data transformation, data structures, metadata, dependency, and workload management.
· Working knowledge of message queuing, stream processing, and highly scalable "big data" stores.
· Strong project management and organizational skills.
· Experience supporting and working with cross-functional teams in a dynamic environment.
· Experience with object-oriented/functional scripting languages: Python.
· Experience with big data tools: Hadoop, Spark, Kafka, etc.
· Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
· Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
· Experience with AWS cloud services: EC2, EMR, RDS, Redshift.
· Experience with stream-processing systems: Storm, Spark Streaming, etc.
· Experience with development in a Linux environment.

Psst… tips on how you can beat the competition — showcase your abilities to:
· Be self-driven / a quick starter
· Have an ownership mindset
· Aggressively drive and deliver results