You should have:
Research shows that candidates from underrepresented backgrounds often don't apply for roles unless they meet all of the criteria, while majority candidates apply even when they meet fewer of the requirements. Channel the power of YOU and apply to discover if we're a match.
- Bachelor's degree in Computer Science, Computer Engineering, or a technically related field, or equivalent experience.
- Minimum of 5 years of experience delivering data solutions on a variety of data warehousing, big data, and cloud data platforms.
- 3+ years of experience working with distributed data technologies (e.g., Spark, Kafka) to build efficient, large-scale big data pipelines.
- Strong software engineering experience with proficiency in at least one programming language used with Spark, such as Python or Scala, or equivalent.
- Experience building both real-time and batch data ingestion pipelines using best practices.
- Experience with cloud computing platforms such as Amazon Web Services (AWS) and Google Cloud.
- Experience transforming and integrating data in Redshift or Snowflake.
- Experience writing SQL and PL/SQL to ingest data into cloud data warehouses.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- Experience with relational SQL and NoSQL databases, including Postgres and MongoDB.
- Experience with scheduling tools, preferably Control-M, Airflow, or AWS Step Functions.
- Strong interpersonal, analytical, problem-solving, influencing, prioritization, decision-making, and conflict resolution skills.
- Excellent written/verbal communication skills.