293565 - Software Engineer Senior

  • Post Date: June 28, 2022
  • Apply Before: June 28, 2023
  • Pay Rate: 73.06
Job Description

GDIA (Global Data Insights and Analytics) is looking for a Software Engineer focused on building the GDIA Enterprise Data Ingestion platform. This role sits on a small, cross-functional team that collaborates directly and continuously with business partners, product managers, product designers, and software engineers, and releases early and often. The team supports the analytic data needs of data scientists by ingesting data into the data lake and transforming it into a usable format, and by building the tools, metrics, and privacy modules that support source ingestion, the operational resiliency of the platform, and compliance requirements.
Skills Required:
The position requires Hadoop development experience with components such as Hive, HBase, Scala, Python, MapReduce, and Spark. Additional responsibilities include scheduling, creating Oozie workflows, and data transformation and data replication in a Hadoop environment. Experience with data replication/wrangling tools such as ETL tools or Alteryx, or within Hadoop using Kafka, Sqoop, etc., is required. The position requires strong knowledge of writing optimized Spark and Hive SQL. Experience with SQL, Teradata, Oracle, or DB2 is required; experience with more than one is preferred. Excellent oral and written communication skills. Self-starter who can work independently with business customers to understand requirements and develop solutions in a constantly evolving environment. Strong team player with experience working as part of an agile team.

Nice to have:
  • Google Cloud Platform
  • Shell/Perl scripting
  • Visualization tools such as QlikView
  • Java
Skills Preferred:
N/A
Experience Required:
Same as Skills Required above.
Experience Preferred:
Suppliers: all Software Engineers are required to take the HackerRank coding assessment: https://www.hackerrank.com/work/tests/325251/settings/access
Education Required:
BS in Computer Science, a related field, or equivalent experience