About Next Quarter:
Next Quarter is a customer-centric revenue intelligence platform built within Salesforce that improves sales productivity and accurately predicts forecast performance. We help companies drive growth through account planning, sales methodologies, data-driven forecasts, and conversational AI.
Experience Required – 4-12 Years
Job Location – Hyderabad/Bangalore/Remote
The role is critical to product development and is responsible for shaping and scaling the data management solution. The position is expected to integrate with any source, perform routine data management activities, design scalable architecture that addresses big data with respect to velocity, variety, and volume, and be responsible for compliance/audit norms.
High-level attributes of the role –
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Translate business requirements into technical specifications, including data streams, integrations, transformations, databases, and data warehouses.
- Define the data architecture framework, standards, and principles, including modeling, metadata, encryption, security, reference data such as product codes and client categories, and master data such as clients, vendors, materials, and employees.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
- Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with technical issues and support their data infrastructure needs. Create and identify data tools for analytics and data science team members that help them build and optimize our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Collaborate and coordinate with multiple departments, stakeholders, partners, and external vendors.
- Understand the system development life cycle, project management approaches, and requirements, design, and test techniques.
- Understand established and emerging data technologies: data architects need to know established data management and reporting technologies and have some knowledge of columnar and NoSQL databases, predictive analytics, data visualization, and unstructured data.
- Well-versed in AI and in working with AI teams, able to provide them easy access to data.
- The candidate is expected to be hands-on in development and in cloud technologies such as AWS or Azure.
- Work experience in SaaS/B2C product development for external customers is a must.
- Build data systems with high availability and quality, tailored to each end user’s specialized role.
- Design and implement databases in accordance with end users’ information needs and views.
- Define users and enable data distribution to the right user in the appropriate format and in a timely manner.
- Use high-speed transaction recovery techniques and back up data.
- Minimize database downtime and manage parameters to provide fast query responses.
- Provide proactive and reactive data management support and training to users.
- Determine, enforce, and document database policies, procedures, and standards.
- Perform tests and evaluations regularly to ensure data security, privacy, and integrity.
- Monitor database performance, implement changes, and apply new patches and versions when required.
- Advanced working knowledge of SQL, experience with relational databases and query authoring, and working familiarity with a variety of databases.
- Experience building and optimizing big data pipelines, architectures, and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- Build processes supporting data transformation, data structures, metadata, dependency and workload management.
- A successful history of manipulating, processing and extracting value from large, disconnected datasets.
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
- Strong project management and organizational skills.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- We are looking for a candidate with five or more years of experience in a Data Engineer role who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience using the following software/tools:
- Big data tools: Hadoop, Spark, Kafka, Dremio, Druid, etc.
- Relational SQL and NoSQL databases, including Postgres and Cassandra.
- Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
- AWS cloud services: EC2, EMR, RDS, Redshift; cloud data warehouses such as Snowflake.
- Stream-processing systems: Storm, Spark-Streaming, etc.
- Object-oriented/object function scripting languages: Python, Java, etc.
- Well versed in modern ETL/ELT tools such as Airbyte, Hevodata, etc.
Any graduate with a degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.