Job Description
Tiger Analytics is a fast-growing advanced analytics consulting firm. Our consultants bring deep expertise in Data Science, Machine Learning, and AI. We are the trusted analytics partner for multiple Fortune 500 companies, enabling them to generate business value from data. Our business value and leadership have been recognized by various market research firms, including Forrester and Gartner. We are looking for top-notch talent as we continue to build the best analytics consulting team in the world.
The person in this role will be central to building scalable and cost-effective data pipelines, data lakes, and analytics systems.
Key Responsibilities:
- Data Ingestion: Implement data ingestion processes to collect data from various sources, including databases, streaming data, and external APIs (see the illustrative sketch after this list).
- Data Transformation: Develop ETL (Extract, Transform, Load) processes to transform and cleanse raw data into a structured and usable format for analysis.
- Data Storage: Manage and optimize data storage solutions, including Amazon S3, Amazon Redshift, and other AWS storage services.
- Data Processing: Utilize AWS services like AWS Glue, Amazon EMR, and AWS Lambda to process and analyze large datasets.
- Data Monitoring and Optimization: Continuously monitor and optimize data pipelines and infrastructure for performance, cost-efficiency, and scalability.
- Integration: Collaborate with data scientists, analysts, and other stakeholders to integrate AWS-based solutions into data analytics and reporting platforms.
- Documentation: Maintain thorough documentation of data engineering processes, data flows, and system configurations.
- Scalability: Design AWS-based solutions that can scale to accommodate growing data volumes and changing business requirements.
- Cost Management: Implement cost-effective solutions by optimizing resource usage and recommending cost-saving measures.
- Troubleshooting: Diagnose and resolve AWS-related issues to minimize downtime and disruptions.
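By way of illustration, below is a minimal sketch of the kind of ingest-and-transform step the responsibilities above describe. The bucket names, object keys, and CSV layout are hypothetical placeholders, and a production version would more likely run as an AWS Glue job or Lambda function than as a standalone script:

```python
"""Minimal sketch of an S3-based ingest-and-transform step.

All bucket names, keys, and the CSV layout are hypothetical placeholders.
"""
import csv
import io

import boto3

s3 = boto3.client("s3")


def transform_csv(raw_bytes: bytes) -> bytes:
    """Cleanse raw CSV: drop empty rows and normalize header case."""
    reader = csv.reader(io.StringIO(raw_bytes.decode("utf-8")))
    rows = [row for row in reader if any(cell.strip() for cell in row)]
    if rows:
        rows[0] = [col.strip().lower() for col in rows[0]]  # normalize headers
    out = io.StringIO()
    csv.writer(out).writerows(rows)
    return out.getvalue().encode("utf-8")


def run(source_bucket: str, source_key: str, dest_bucket: str) -> None:
    # Ingest: pull the raw object from the landing bucket.
    obj = s3.get_object(Bucket=source_bucket, Key=source_key)
    raw = obj["Body"].read()
    # Transform: cleanse the data into a structured, analysis-ready format.
    cleaned = transform_csv(raw)
    # Load: write the result to the curated zone for downstream analytics.
    s3.put_object(Bucket=dest_bucket, Key=f"curated/{source_key}", Body=cleaned)


if __name__ == "__main__":
    # Hypothetical bucket and key names for illustration only.
    run("raw-landing-bucket", "events/2024-01-01.csv", "curated-data-bucket")
```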
Requirements:
- Educational Background: A bachelor's degree in computer science, information technology, or a related field is typically required.
- AWS Certifications: AWS certifications such as AWS Certified Data Analytics - Specialty (the successor to AWS Certified Big Data - Specialty) are highly beneficial.
- Programming Skills: Proficiency in programming languages such as Python, Java, or Scala for data processing and scripting. Shell scripting and Linux knowledge are preferred.
- Database Expertise: Strong knowledge of AWS database services such as Amazon Redshift and Amazon RDS, as well as NoSQL databases.
- ETL Tools: Experience with AWS Glue or other ETL tools for data transformation.
- Version Control: Proficiency in version control systems like Git.
- Problem-Solving: Strong analytical and problem-solving skills to address complex data engineering challenges.
- Communication Skills: Effective communication and collaboration skills to work with cross-functional teams.
- Machine Learning: Knowledge of machine learning concepts is a plus.
Benefits
This position offers an excellent opportunity for significant career development in a fast-growing and challenging entrepreneurial environment with a high degree of individual responsibility.