Big Data Developer – Data Analytics Team
San Francisco, CA
Job Summary:
Teamsoft builds the infrastructure that forms the foundation for many customers requiring data analytics solutions. This is an exciting role for someone who loves real-time, high-volume data processing pipelines. We bring new ideas to the table, and we are excited to create solutions for our customers. We're looking for a talented and passionate person to join our team; if this sounds like you, we'd love to hear from you.
Key Qualifications:
• You have experience with architecting, designing and developing Big-Data processing pipelines.
• Proficiency in MapReduce development and experience with Hadoop and Spark data processing technologies.
• Significant experience with distributed key/value stores.
• Experience with build instrumentation.
• Experience with performance metrics reporting.
• Strong Core Java programming experience.
Description:
You will architect, design, and build Big Data frameworks that automate the creation of a data warehouse. These data services enable developers to build new features at greater speed. Your daily activities in this role will include:
• Working quickly to deploy data solutions as requested.
• Recommending architectural and design best practices for distributed data system performance.
• Providing infrastructure and service team members with solutions for efficient data processing and data delivery.
Education:
Required: Bachelor’s Degree in Computer Science, Computer Information Systems, Information Technology, Software Engineering, or a related field.
Preferred: Master’s Degree in Computer Science, Computer Information Systems, Information Technology, Software Engineering, or a related field.
Additional Requirements:
Nice to have, but not required: experience with Apache Kafka and a background in Python and Scala programming.