Interactions | Hiring | Data Engineer | BigDataKB.com | 08-04-2022

Job Location: Bangalore

Who is Interactions?

Interactions, LLC is the world’s largest independent AI company. We operate at the intersection of customer experience and AI – two of today’s most innovative and dynamic industries. Since 2004, we’ve helped leading companies like MetLife, Citi, Shutterfly, and LifeLock have millions of successful conversations, resulting in reduced operational costs and increased productivity.

Interactions’ 5-year vision is to accelerate the transition from today’s frustrating and uninspired customer service experiences into amazing conversational engagements, allowing customers to communicate in their own words and on their channel of choice to accomplish tasks, all without having to go through an agent. By doing this via our conversational AI engine, our customers benefit from dramatically improved customer experience and increased customer engagement, while also saving significant and demonstrable operational expenses.
Job Description
As a member of one of our Technology teams, you will contribute to building solutions that use natural language processing, cognitive computing, and artificial intelligence applications or the frameworks and infrastructure that support them.
The Data Engineer works side by side with engineering, platform, development, and operations teams and is primarily responsible for designing, implementing, and automating build, release, deploy, monitoring, and configuration activities. The role bridges the gap between development, operations, and infrastructure.

Job Responsibilities

Essential Job Functions
Design, implementation and installation of distributed big data infrastructure for high volume/velocity multi-tiered data storage, high availability and fault tolerance
Experience in installing and maintaining big data environments based on Hadoop and other big data technologies.
Design, implement, and optimize data warehouses and OLAP queries for maximum reporting and analytics performance.
Work with data scientists, senior executives, and other engineers to identify, model, and facilitate access to high volume/velocity data sources via stream ingestion pipelines.
Work with the Operations team to perform routine data lake maintenance and implement automated maintenance, compliance, and archival strategies.
Other Duties and Responsibilities
Design and implement security strategies including ACLs, systems audits, and data redaction for internal users, external users and automated tools.
Maintain the data lake infrastructure in multiple locations to handle the high-throughput use cases that are critical to the overall IVA Platform.
Qualifications Required
Bachelor’s Degree or equivalent experience.
2-5 years of experience managing Hadoop clusters, including provisioning new nodes, managing alerts, tuning performance, and managing security.
1-3 years of expertise with Hadoop ecosystem internals (HDFS, Hive, HBase, Oozie, Pig, Sqoop, etc) – storage, tuning, replication, etc.
1-2 years of Python scripting experience for Big Data workloads.
Solid understanding of query optimization of database technologies built on top of Hadoop and related technologies.
Experience delivering data ingestion / ETL solutions at scale within Hadoop environment, including logging, monitoring, debugging, and security.
Experience with distributed processing technologies like MapReduce, YARN, and Spark, including internals such as scheduling and resource management.
Experience in data modeling and schema design.
Practical experience implementing data warehouse architectures, including hardware specification and selection, OS configuration, and patching.
Experience with database sizing, server specification, and network architecture specification.
Virtualization and Data migration experience.
Design and architecture for high availability, redundancy and fault tolerance experience.
Basic systems administration skills on Linux-based systems.
Experience with DevOps tooling like Git, SVN, Ansible, Terraform and Bash scripting.
Preferred
A team player, able to work in an environment with high levels of change and complexity.
Strong technical, organizational and interpersonal skills.
Prior experience managing big data platforms in both colocation (CoLo) and public cloud environments.
Prior experience with PostgreSQL 9.x and up is desirable

Why Work at Interactions?

We’ve created a culture of people who are dedicated to helping each other and the company succeed. We take time to celebrate wins and recognize accomplishments. Whether it’s a seasonal event or friendly competition, we’re always thinking of new ways to have fun.

Our team’s health and well-being are important to us. In addition to a full suite of benefits, we offer 5 weeks of paid time off, 401k matching, paid parental leave, and flexible work schedules. We are all committed to the company’s success by being valued shareowners and are incentivized through individual performance and company results. Come join us!

Interactions is an equal opportunity employer and does not discriminate on the basis of race, color, religion, sex (including pregnancy, sexual orientation, and gender identity), national origin, marital status, age, disability or protected veteran status, or any other characteristic protected by law.

Job Type: Full-time

Pay: ₹401,000.00 per year

Apply Here
