Job Location: Remote
About Us
Support.com, Inc. (NASDAQ: GREE) is a leading provider of customer and technical support solutions delivered by home-based employees. For more than twenty years, the company has achieved stellar results for global enterprise clients and top-tier businesses. Support.com’s proven omnichannel solutions have been designed and optimized specifically for the homesourcing environment, resulting in industry-leading NPS scores and first-call resolution rates. The company efficiently meets changing client needs through its highly scalable global network of home-based employees and secure, proprietary, cloud-based platforms.
Why Join Us
- No travel: work from the comfort of your home, permanently!
- If you're looking for a place to work, have fun, and learn, Support.com is for you
- Competitive benefits and compensation
Responsibilities (including, but not limited to)
What you will be doing
- Work closely with business partners to understand objectives, KPIs, and opportunities for improvement
- Architect database and data warehouse schemas and table designs (Databricks/Redshift/Snowflake/denormalized structures) for performance, reliability, and scalability
- Work with product/system owners and business owners to understand raw data from source systems and determine how to map and model it into the data warehouse
- Spec end-to-end solutions across data integration/pipelines and data warehouse architecture
- Translate insights into actionable recommendations on how to improve business performance
- Explore various systems & data sources to map and define relations
- Build data processing workflows and ER diagrams (using tools like Lucidchart or draw.io)
- Design, develop, and re-engineer database objects and processes
- Write advanced PL/SQL to transform data in near real time
- Approach problems with systematic rigor and present solutions objectively
- Take an agile development approach and build solutions at the rapid pace demanded by customers
- Partner with other development teams to build advanced analytics tools
- Work with SQL, Python, and visualization tools
- Continuously learn new techniques and tools
Minimum Qualifications & Requirements
What you need for the job
- Bachelor's or Master's degree in a technical/quantitative subject such as Mathematics, Computer Science, or Economics
- 8–10 years of experience in data analytics, BI, reporting, data science, or an equivalent field
- Highly proficient with Python and PL/SQL (both DML and DDL)
- Hands-on experience with AWS S3, Redshift, Redshift Spectrum, and Airflow
- Expert in data architecture (large-scale data modelling and data governance)
- Hands-on ETL/ELT, data modelling, and data warehouse design and development experience is a must
- Expert in at least one database stack: Databricks, Postgres, MSSQL, or Redshift
- Proven track record of technical delivery on global enterprise applications
- Excellent communication skills to coordinate with business stakeholders and technical teams
- Ability to mentor and guide junior team members and participate in learning and development activities for the team
- Self-motivated, insatiable learner with a passion for driving business results with data
- Perform other duties as assigned
Bonus points for these!
- Excellent understanding of the full SDLC, including waterfall, agile, and DevOps methodologies
- Strong math/statistics skillset (hypothesis testing, probability, linear algebra, ML models)
- Familiarity with Snowflake
- Familiarity with Tableau
- Experience with cloud services (AWS, Azure, Google Cloud)
- Desire to work on a fast-paced, agile team
- Experience with big data and data analytics programs
About the Role
Why work with us
- Work with a cutting-edge data stack built on AWS Redshift, Redshift Spectrum, Databricks, S3, Apache Spark, Airflow, PostgreSQL, Tableau, and Python
- We love collaborative, agile software development and iterative design and testing. We form tight teams, build rapid prototypes, and release frequently. You won’t get bored!
- We encourage learning and integration of new technologies. If you’re passionate about technology, we’d love to hear your story.
About You:
- You are an intellectually curious, dedicated, mission-driven Lead Data Backend Engineer who will help our customers leverage data to improve performance. You combine business acumen, analytical savvy, statistical and mathematical knowledge, and data-storytelling ability to transform data into valuable decisions.

