
Güdel | Hiring | Engineer | BigDataKB.com | 1/11/2022


Güdel

Pune

Architectural & Engineering Service

Gudel India Pvt. Ltd. is currently looking for the following position at our location in Pune:

Engineer

Güdel India is a subsidiary of Güdel Group, a global manufacturer of industrial automation products, systems and services. Güdel India supplies gantry systems, linear motion modules, robot track motion units, gantry robots and components to OEMs, system integrators and machine builders serving the Automotive, Aerospace, Intralogistics and Tyre industries.

Güdel India has a manufacturing facility at Pune with engineering, design, production, sales and customer service support.

Essential Duties and Responsibilities

  • Good command of AutoCAD and SolidWorks software; able to carry out simulation if required in future.
  • Understanding enquiries and requirements for the process flow defined by the customer.
  • RFQ reading, highlighting the points for which inputs are not received from Sales.
  • Deriving a conceptual solution to the enquiry.
  • Engineering calculations supporting the proposed solution.
  • Deriving suitable gripper & other commodities required for the solution based on the inputs received.
  • Calculating the cycle time for the proposed solution, which indicates its suitability to the customer requirement.
  • Working on the layout as per the customer requirement; the layout should include views indicating all details of the proposed system and the gripper envelope.
  • Communicating our system requirements, specific considerations and deviations clearly to the Sales team.
  • Coordinating with the controls team when the mechanical concept is being derived.
  • Budgetary as well as detailed costing of the proposed solution as per the respective revisions of the enquiry.
  • Maintaining the data in a disciplined manner as per ISO process.

Contact person

Amit Pawgi, Human Resources, +91 20 67910200, amit.pawgi@in.gudel.com

To apply

Please send your resume, cover letter and salary specifications to info@in.gudel.com

or fax to +91 20 67910209.

For more information go to www.gudel.com

Apply Here

Submit CV To All Data Science Job Consultants Across India For Free

Risesmart Inc | Hiring | Data Engineer | BigDataKB.com | 1/11/2022


Risesmart Inc

Pune

Internet

About Randstad RiseSmart

At Randstad RiseSmart, the engineering culture is talent-centric. We believe in bringing together a team of talented engineers who are passionate about programming and technology, and about doing work that is meaningful to the world.
Headquartered in Silicon Valley, Randstad RiseSmart is the fastest-growing career transition and talent mobility provider, and an operating company of Randstad N.V., a €23.8 billion global provider of flexible work and human resources services that helps more than two million candidates find meaningful work every year. Our outplacement, career development, redeployment and contemporary Tech & Touch solutions strengthen employer brands, improve retention and re-engage talent. Randstad RiseSmart’s contemporary approach to outplacement combines personalized services from trained professionals with unmatched technology delivered through a convenient, cloud-based platform. Employers hire us because we deliver superior outcomes through expert coaching, professional branding, contemporary resources and on-demand analytics. Today, we are a trusted human partner of successful companies in more than 40 industries. Our passion and dedication to innovation, responsiveness and results have earned us extensive recognition and awards from organizations such as Bersin by Deloitte, Gartner Inc., the Brandon Hall Group and Fortune magazine.

For more information, visit https://www.randstadrisesmart.in/

About the Job

The candidate will be responsible for the data warehouse related development of Randstad RiseSmart’s solution. As a member of the data warehouse team, he/she will

  • help in building a highly scalable global data warehouse for our reporting and analytical needs
  • own a set of functionality or modules within the solution and act as a primary point of contact for that area
  • develop code and components using best practices laid out by the organization
  • ensure that the deliverables are adhering to the set quality standards and ready to be integrated/utilized

What You Will Do

  • Design, architect, and iterate upon the next generation of our data lake on Google Cloud
  • Solve our most challenging data ingestion problems, utilizing optimal ETL patterns, frameworks, query techniques, sourcing from structured and unstructured data sources
  • Setup data ingestion from AWS Aurora to Cloud BigQuery using Google Cloud Dataflow


  • Perform Operational and predictive analytics using any analytical tool or Big Query ML features
  • Design and develop reporting data model (data warehouse) with GDPR compliance
  • Write, test, and deploy scripts in Python & MySQL queries for Tableau reporting
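
The duties above revolve around the extract-transform-load pattern. The posting's actual stack (AWS Aurora to BigQuery via Google Cloud Dataflow) needs cloud credentials, so this toy sketch stands in with the standard library's sqlite3 as both source and warehouse; table and column names are made up for illustration.

```python
# Toy sketch of an extract-transform-load job: read raw rows from a
# source database, aggregate them, and load into a warehouse table.
import sqlite3

def run_pipeline(source_rows):
    source = sqlite3.connect(":memory:")
    source.execute("CREATE TABLE events (user_id INT, amount REAL)")
    source.executemany("INSERT INTO events VALUES (?, ?)", source_rows)

    # Extract: pull raw rows from the source system
    rows = source.execute("SELECT user_id, amount FROM events").fetchall()

    # Transform: aggregate per user (the kind of step a Dataflow/Beam
    # job would express as a GroupByKey + Combine)
    totals = {}
    for user_id, amount in rows:
        totals[user_id] = totals.get(user_id, 0.0) + amount

    # Load: write the aggregated result into the "warehouse"
    warehouse = sqlite3.connect(":memory:")
    warehouse.execute("CREATE TABLE user_totals (user_id INT, total REAL)")
    warehouse.executemany("INSERT INTO user_totals VALUES (?, ?)", totals.items())
    return dict(warehouse.execute("SELECT user_id, total FROM user_totals"))

print(run_pipeline([(1, 10.0), (1, 5.0), (2, 7.5)]))  # {1: 15.0, 2: 7.5}
```

In a real Dataflow deployment the extract and load halves would be `ReadFromJdbc`/`WriteToBigQuery`-style connectors; the separation of the three stages is the transferable idea.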

What You Will Need

  • BE/BTech/MCA with 3 to 5 years of experience working in data warehousing
  • Good experience with ETL tools such as Informatica
  • 3 to 5 years of experience managing data processing systems, knowing how to safely move data from one system to another
  • Working experience in cloud architecture with tools/frameworks such as:

    • Google Cloud Dataflow
    • Google Cloud BigQuery
    • Apache Beam pipelines
    • DataStream service
    • Flycs framework

  • Understanding of the Agile methodologies
  • Good Verbal and written communication skills
  • Experience in schema design and dimensional data modelling will be an added advantage
  • Good to have a Google Professional Data Engineer certification

Apply Here


Biofourmis | Hiring | Data Engineer | BigDataKB.com | 1/11/2022


Biofourmis

Bangalore

Healthcare Services & Hospital

Biofourmis is a rapidly growing, global digital health company filled with committed, passionate professionals who care about augmenting personalized care and empowering people with complex chronic conditions to live better and healthier lives. We are pioneering an entirely new category of medicine by developing clinically validated, software-based therapeutics to provide improved outcomes for patients, smarter engagement & tracking tools for clinicians, and cost-effective solutions for payers. We are collectively devoted to a single-minded idea: powering personally predictive care.

Our dynamic growth has been marked by quadrupled headcount in the last 12 months via both expansion & acquisition, yielding a global footprint with offices in Boston, Singapore, Bangalore, and Zurich. We are backed by prominent international venture capital investment & have cultivated relationships with worldwide healthcare stakeholders over the last 5 years. Our talented team features numerous PhD’s in Data Science and Biostatistics, over 80 patents, prolific scientific publications, world-class systems, developers & engineers, and leaders in the clinical operations space.

We are looking for a Data Engineer to be part of the growing team at Biofourmis. You will play a major role in developing, deploying, and supporting Biofourmis’s data warehouse and reporting stack. We are looking for someone who is eager to leverage their existing skills and to seek out new skills and solutions.


Role Responsibilities

  • Play a major role in developing, deploying, and supporting Biofourmis’s data warehouse and reporting stack.
  • Create and maintain optimal data pipeline architecture.
  • Build and contribute to existing ELT (extract, load and transform) data pipelines.
  • Work with data and analytics experts to strive for greater functionality in our data systems.
  • Build solutions that utilize the data pipeline to provide actionable insights for key business performance metrics.
  • Contribute to reporting and visualization layer.
  • Resolve operational issues as they occur to maintain the team’s SLAs.

Skills (Must)

  • Proven background in ETL development and data processing.
  • Strong SQL expertise and database concepts
  • Experience with data warehousing concepts, data modeling, and data wrangling
  • Experience with Unix/Shell or Python scripting
  • Strong CS fundamentals including data structures and distributed systems.
  • Able to build and operate Data Pipelines, Build and operate Data Storage
  • AWS Stack: EC2, S3, Redshift, Glue or AWS Data Pipeline, Athena
  • Database: MongoDB, RDS, DynamoDB
  • We are looking for someone who is keen to leverage their existing skills and seek out new skills and solutions.
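
As a concrete flavor of the SQL expertise and data wrangling the list asks for, here is a common warehouse staging step, deduplicating raw events by keeping the latest record per id, sketched with the standard library's sqlite3 (table and column names are hypothetical):

```python
# Keep only the most recent row per id using a window function -
# a typical staging-layer dedup in a data warehouse.
import sqlite3

def latest_per_id(rows):
    """rows: iterable of (id, status, updated_at) tuples."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE raw_events (id INT, status TEXT, updated_at INT)")
    conn.executemany("INSERT INTO raw_events VALUES (?, ?, ?)", rows)
    return conn.execute(
        """
        SELECT id, status FROM (
            SELECT id, status,
                   ROW_NUMBER() OVER (PARTITION BY id ORDER BY updated_at DESC) AS rn
            FROM raw_events
        ) WHERE rn = 1 ORDER BY id
        """
    ).fetchall()

print(latest_per_id([(1, "new", 100), (1, "done", 200), (2, "new", 150)]))
```

The same `ROW_NUMBER() OVER (PARTITION BY ...)` idiom carries over directly to Redshift or Athena from the AWS stack the posting lists (SQLite needs version 3.25+ for window functions).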

Skills (Good to Have)

  • Exposure to working with DBT
  • Exposure to using BI Tools (Tableau)
  • Exposure to the healthcare domain
  • Exposure to embedded analytics

Apply Here


Bajaj Allianz General Insurance Company | Hiring | Data Engineer | BigDataKB.com | 1/11/2022


Bajaj Allianz General Insurance Company

India

Insurance Operator

Job Description:

The Data Engineer/Analyst is responsible for developing data pipelines and data engineering components to support strategic initiatives and ongoing business processes. This role works with leads, analysts, and scientists to understand requirements, develop technical solutions, and ensure the reliability and performance of the data engineering solutions.

This role provides the opportunity to directly impact business outcomes for the sales, underwriting, claims and operations functions across multiple use cases by providing them data for their analytical modelling needs.

Department: HO

Open Positions: 1

Skills Required:

  • Experience with data modelling, data warehousing, and building ETL pipelines
  • Experience in the Insurance domain

Role:

  • Design, implement & manage an AWS Data Lake system using AWS technologies such as Glue, S3, RDS, EC2, DMS, Lambda, Step Functions, SQS, SES, EMR (Spark), Python, Scala, Kinesis and Docker
  • Develop data pipelines for both structured & unstructured data
  • Develop batch and real-time processing jobs
  • Develop complex features using Big Data technologies, including impact analysis, design, and development
  • Automate analytical models
  • Work effectively in a fast-paced and dynamic environment
  • Communicate effectively with all stakeholders
  • Strong knowledge of one or more scripting languages (Python/Spark/Scala)
  • Strong analytical skills with excellent knowledge of Oracle, SQL, and PL/SQL
  • Good understanding of ETL techniques and best practices to handle extremely large volumes of data
  • Ability to handle multiple priority tasks in a fast-paced environment
  • Work well in teams, respecting ideas from teammates, business partners, and technical experts
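
One of the ETL best practices the role calls out, handling extremely large volumes of data, usually comes down to processing rows in fixed-size batches instead of loading everything into memory. A minimal sketch (function names are illustrative, not from any posting):

```python
# Batch an arbitrarily large row stream into fixed-size chunks so each
# load step touches a bounded amount of memory.
def chunked(rows, size):
    """Yield lists of at most `size` rows from any iterable."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch  # flush the final partial batch

def load_in_batches(rows, size=1000):
    loaded = 0
    for batch in chunked(rows, size):
        # In a real job this would be one bulk INSERT / COPY per batch.
        loaded += len(batch)
    return loaded

print(load_in_batches(range(2500), size=1000))  # 2500, loaded in 3 batches
```

Because `chunked` is a generator over a generator, the full row set never materializes, which is why the pattern scales to data that does not fit in memory.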

Location: MARVEL

Education/Qualification:

  • MCA / B.E. / M.E. / B.Tech / M.Tech or equivalent degree with minimum 5+ years’ experience in the IT industry
  • 5+ years of hands-on development experience on AWS Data Lake/Big Data solutions

Desirable Skills:

Looking for a Data Engineer with 6+ years of relevant experience in the implementation & maintenance of Data Lake & Big Data systems using AWS technologies.

Years Of Exp: 6 to 8 Years

Posted On: 16-Nov-2021

Designation: Data Engineer

Apply Here


RouterX Pvt Ltd | Hiring | NOC Support Engineer L1 | BigDataKB.com | 1/11/2022


RouterX Pvt Ltd

Pune

We have openings in IT Networking. Designation: Technical Support Engineer

  • Visit our website: https://www.routerx.net

Interested candidates can apply with your updated resume.

Responsibilities and Duties

  • Network Monitoring
  • LAN/WAN support
  • Network configuration and troubleshooting
  • Routing/Switching
  • Wireless Network Troubleshooting
  • ISP Support
  • Data center management and support
  • Issue Resolution.

Job Types: Full-time, Fresher

Schedule:

  • Morning shift

Work Remotely:

  • Yes

Apply Here


Hero Moto Corp | Hiring | Data Engineer | BigDataKB.com | 1/11/2022


Hero Moto Corp

New Delhi

Transport Equipment Manufacturing

Hero is hiring the sharpest brains to create the future of mobility.

Job Description:

Connects and models complex distributed data sets to build repositories, such as data warehouses and data lakes, using appropriate technologies. Manages data-related contexts ranging from small to large data sets, structured/unstructured or streaming data, extraction, transformation, curation, modelling, building data pipelines, identifying the right tools, writing SQL/Java/Scala code, etc.

Responsibilities:

  • Create and maintain optimal data pipeline architecture.
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
  • Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
  • Keep data secure.
  • Create data tools for analytics and data scientist team members.
  • Work with data and analytics experts to strive for greater functionality in our data systems.

Eligibility:

  • Minimum of 6 years of hands-on experience with a strong data background.
  • Bachelor’s degree in Computer Science or equivalent; Masters preferred.
  • Extensive experience working with Big Data tools and building data solutions for advanced analytics.
  • Practical knowledge across data extraction and transformation tools: traditional ETL tools (Informatica, Alteryx) as well as more recent big data tools
  • Knowledge in data architecture, defining data retention policies, monitoring performance, and advising any necessary infrastructure changes
  • Solid development skills in Java, Scala, and SQL
  • Clear hands-on mastery of big database systems: the Hadoop ecosystem, cloud technologies (AWS, Azure, Google), in-memory database systems (HANA, Hazelcast, etc.) and other database systems, including traditional RDBMS (Teradata, SQL Server, Oracle) and NoSQL databases (Cassandra, MongoDB, DynamoDB)
  • Comfortable in dashboard development (Tableau, Powerbi, Qlik, etc) and in developing data analytics models (R, Python, Spark)

Compensation: INR 15 to INR 25 LPA*

*The compensation/offer will be based on the total years of relevant experience, Skill/Competencies, Academic background, and the current compensation structure and will be decided on a case-to-case basis.

Application Deadline: 30/1/2022

Job Type: Full-time

Salary: ₹1,500,000.00 to ₹2,500,000.00 per year

Schedule:

  • Day shift

Experience:

  • total work: 6 years (Required)

Work Remotely:

  • Temporarily due to COVID-19

Apply Here


Siemens | Hiring | Machine Learning Engineer Multi data(NLP+CV+Numeric ) | BigDataKB.com | 1/11/2022


Siemens

Bangalore

Computer Hardware & Software

Senior Machine Learning Engineer

Siemens founded the new business unit Siemens Advanta (formerly known as Siemens IoT Services) on April 1, 2019, with its headquarters in Munich, Germany. It has been crafted to unlock the digital future of its clients by offering end-to-end support on their outstanding digitalization journey. Siemens Advanta is a strategic advisor and a trusted implementation partner in digital transformation and industrial IoT, with a global network of more than 8000 employees in 10 countries and 21 offices. Highly skilled and experienced specialists offer services which range from consulting to craft & prototyping to solution & implementation and operation – everything out of one hand!

Before our software developers write even a single line of code, they have to understand what drives our customers. What is the environment? What is the user story based on? Implementation means – trying, testing, and improving outcomes until a final solution emerges. Knowledge means exchange – discussions with colleagues from all over the world. Join our Digitalization Technology and Services (DTS) team based in Bangalore and enjoy the freedom to think in completely new categories!


This is your role. What part will you play?

  • Hands-on expertise in advanced statistical modelling of digital signals, machine learning, deep learning, distributed computing and rapid prototyping
  • Experience in developing end-to-end machine learning and deep learning modules and deploying them as a Flask service
  • Good knowledge of applied mathematics, statistics, neural network architectures, computer vision and NLP
  • Good expertise in pre-processing audio, numeric sensor and (optionally) image data
  • Good at creating end-to-end solution documentation and PowerPoint presentations
  • Able to understand and comprehend research papers and reproduce them as working code
  • Experience in working with version control systems and Agile methodologies
  • Proficient in statistical machine learning and deep learning modelling and analysis
  • Proficient in the Python, Keras, TensorFlow, PyTorch, NumPy, pandas, OpenCV, statsmodels and SciPy libraries
  • AWS SageMaker knowledge is an added advantage


You should also possess:

  • A minimum of 5-6 years of AI and data science work experience
  • A university degree in computer science, data science, data analytics or a similar field
  • The ability to design and develop machine learning algorithms; discover, design, and develop analytical methods to support novel approaches to data and information processing; perform exploratory data analyses; and generate and test working hypotheses
  • The ability to prepare and analyze historical data and identify patterns, and to provide technical support for program management and development activities, including managing knowledge sharing within a team
  • The ability to manage the design, development and deployment of scalable, high-volume, real-time systems; research algorithm improvements and implement data processing; and assist the project team in communicating and implementing project schedules


Make your mark in our exciting world at Siemens.

This role is based in Bangalore. You’ll also get to visit other locations in India and beyond, so you’ll need to go where this journey takes you. In return, you’ll get the chance to work with teams impacting entire cities, countries – and the shape of things to come.

We’re Siemens. A collection of over 379,000 minds building the future, one day at a time in over 200 countries. We’re dedicated to equality, and we encourage applications that reflect the diversity of the communities we work in. At Siemens we are always challenging ourselves to build a better future. We need the most innovative and diverse Digital Minds to develop tomorrow ‘s reality.

Find out more about the Digital world of Siemens here: www.siemens.com/careers/digitalminds

Organization: Advanta


Company:
Siemens Technology and Services Private Limited


Experience Level:
Experienced Professional


Job Type:
Full-time

Apply Here


Conquest Technology Solutions Pvt Ltd | Hiring | Sr. Data Engineer | BigDataKB.com | 1/11/2022


Conquest Technology Solutions Pvt Ltd

Pune

Required Technical Skills:

  • Strong BigQuery and related GCP experience; certifications are a plus
  • Strong query optimization experience
  • Exposure to workload/capacity management with BigQuery
  • Software engineering (Python)
  • Cloud data engineering
  • Modern data warehousing design
  • Third-party tools (BI, ETL, Simba Drivers, etc.) integration

Nice to Have Technical Skills:

  • Java, Hadoop and Spark experience
  • Data analytics & visualization (Tableau, PowerBI, Looker, etc.)

Job Types: Full-time, Contractual / Temporary

Salary: ₹50,000.00 to ₹150,000.00 per month

Schedule:

  • Day shift

Experience:

  • total work: 1 year (Preferred)

Work Remotely:

  • Temporarily due to COVID-19

Speak with the employer
+91 9700126311

Apply Here


NielsenIQ | Hiring | Associate Data Scientist | BigDataKB.com | 1/11/2022


NielsenIQ

Pune

Unknown / Non-Applicable

We’re looking for a passionate and talented Senior Data Scientist to join our growing team. In this role, you’ll have the chance to roll up your sleeves and apply data science methods and analytics to sustain NielsenIQ Analytics’ growth. Successful candidates are intellectually curious builders and active learners who are biased toward action and new ways of problem solving.


Roles & Responsibilities :

  • Understand business issues, stakeholder requirements and expectations
  • Perform relevant modeling analyses and POCs exploring multiple techniques
  • Ensure analytics model quality
  • Document and present findings and recommendations on methodology in a structured way to various stakeholders or partnering teams (Engineering, Tech, Product leaders…).
  • Work closely with the Engineering and Tech team to convert those POC into fully scalable products.
  • Active and effective collaboration with other project team members (other Data Scientists, Data Stewards, Technology Engineers…)
  • Stay abreast of developments in the area to ensure bringing the most appropriate solutions to our business


Qualifications

  • Master’s degree in Data Science, mathematics or a closely related field

  • 1+ years of experience as a Data Scientist in the business

  • Strong theoretical knowledge of statistical modelling and foundational mathematical concepts.
  • Experience in machine learning, supervised and unsupervised: Forecasting, Classification, Data/Text Mining, NLP, Decision Trees, Adaptive Decision Algorithms, Random Forest, Search Algorithms, Neural Networks, Deep Learning Algorithms

  • Experience with Optimisation techniques
    : linear programming, integer programming, genetic algorithm, constrained optimisation

  • Experience in working on distributed or cloud computing platforms such as Google Cloud or Microsoft Azure
  • Proficiency in Data Science coding languages like Python

  • Proficiency in SQL
  • Experience using collaborative development tools (git…).
  • Experience working with Agile methodologies (SCRUM)
  • Strong problem-solving and excellent communication skills; independent working style
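
To make one of the listed techniques concrete, supervised classification can be illustrated by a hypothetical 1-nearest-neighbour model in plain Python; a real project would instead use a library such as scikit-learn with proper train/test evaluation, and the data below is invented for illustration.

```python
# Toy supervised classifier: predict the label of the closest training
# point by Euclidean distance (1-nearest-neighbour).
import math

def predict_1nn(train, point):
    """train: list of (features, label) pairs; point: feature tuple."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    # Pick the (distance, label) pair with the smallest distance.
    _, label = min(((dist(f, point), lbl) for f, lbl in train),
                   key=lambda t: t[0])
    return label

train = [((0.0, 0.0), "low"), ((0.1, 0.2), "low"), ((5.0, 5.0), "high")]
print(predict_1nn(train, (0.2, 0.1)))  # "low" - nearest neighbours are "low"
```

The same fit/predict shape generalizes to the forecasting, decision-tree and neural-network methods the posting names; only the model inside changes.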

Additional Information


About NielsenIQ

NielsenIQ is a global measurement and data analytics company that provides the most complete and trusted view available of consumers and markets worldwide. We provide consumer packaged goods manufacturers/fast-moving consumer goods and retailers with accurate, actionable information and insights and a complete picture of the complex and changing marketplace that companies need to innovate and grow. Our approach marries proprietary NielsenIQ data with other data sources to help clients around the world understand what’s happening now, what’s happening next, and how to best act on this knowledge. We like to be in the middle of the action. That’s why you can find us at work in over 90 countries, covering more than 90% of the world’s population. For more information, visit www.niq.com.

NielsenIQ is committed to hiring and retaining a diverse workforce. We are proud to be an Equal Opportunity/Affirmative Action-Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class.

Apply Here


Adani Group | Hiring | Data Engineer | BigDataKB.com | 1/11/2022


Adani Group

Ahmedabad

Asphalt Product Manufacturing

The Opportunity

The Industry Cloud Platform Engineers are responsible for the end-to-end design, development, and delivery of high-quality platform/product features on Google Cloud Platform (GCP) as per the sprint plan for their respective platform areas. This requires engaging successfully with the product owner, architects, business users and scrum management team for each phase of agile project delivery. The Google Cloud Engineers are seasoned professionals who are thoroughly hands-on with the various components of Google Cloud Platform relevant to their respective platform areas.

As a Platform Engineer working on data engineering, you will cover all aspects of data lifecycle management.
As a member of the Adani Industry Cloud Platform engineering team, you will have a world-class opportunity to build world-class products and solutions on Google Cloud Platform for an Industrial Cloud IIoT platform delivering multi-business-domain value.

Qualifications

Ideally, you should have:

  • 4 to 10 years of hands-on experience in the technical skills listed for one of the platform areas above
  • Certified GCP developer with in-depth knowledge
  • Familiarity with Operational Technology such as SCADA, DCS, PLC, Historian, MES etc.
  • Experience developing enterprise grade cloud software
  • Familiarity in engineering & industrial software domain
  • Experience of working in agile project management mode

Primary Location: IN-IN-Ahmedabad

Work Locations: Ahmedabad

Job: Adani

Organization: Corporate Services

Schedule: Regular

Shift: Standard

Job Type: Full-time

Day Job

Job Posting: Dec 21, 2021, 3:15:01 AM

Apply Here
