Principal AWS Big Data Architect

About Kinect Consulting

Kinect helps organizations migrate to the cloud, transform their value proposition, and optimize their operational and technical investments. Technology is at the core of nearly every business today. To succeed in a digital world, organizations of all sizes, across all industries and sectors, are working to optimize their technology to drive competitive advantage. The cloud is critical to these efforts, making organizations more agile, scalable, innovative, and secure. But the cloud journey is complex, and pitfalls are common and costly.

Kinect Consulting has successfully navigated all aspects of the cloud journey, in some of the most complex scenarios imaginable. Our Cloud Execution Framework offers a proven methodology for success and, combined with our related cloud services, ensures that our clients reach their optimized future state in the cloud. Our clients love us because we offer:

  • The agility, pricing and attention of a small firm
  • The backing of a Fortune 100 company
  • Experience supporting complex, global enterprises
  • Practical, actionable guidance that is business outcome driven

Job Description

Kinect Consulting is looking to hire an experienced and highly motivated Principal AWS Big Data Architect to lead the design and development of large-scale data pipelines using AWS Big Data tools and services, along with other modern data technologies. In this role, you will play a crucial part in shaping the big data and analytics initiatives of many customers for years to come. This is a hands-on role that requires a deep understanding of all phases of data pipelines, including ingestion, analysis/transformation, and publishing. You will set the foundation, develop POCs, frameworks, and reference implementations, and lead and guide other AWS data engineers.

About the Opportunity

You are a motivated, hands-on big data architect who is passionate about building at scale on Amazon Web Services (AWS). You thrive on simplifying hard problems and can articulate solutions to both technical and non-technical stakeholders.

Key Responsibilities

The position requires limited supervision. Responsibilities include:

  • Build end-to-end big data pipelines on AWS
  • Ingestion/replication via DMS (including SCT agents) from traditional on-prem RDBMS (e.g. Oracle, MS SQL Server, IBM DB2, MySQL, Postgres)
  • Real-time ingestion and processing with Kinesis Streams, Kinesis Firehose, and Kinesis Analytics
  • CDC, ETL and Analytics via AWS Glue, EMR, Spark, Presto, Athena, Flink, Python/PySpark, Scala, Zeppelin
  • Refactoring of existing RDBMS scripts (e.g. PL/SQL, T-SQL, PL/pgSQL) to PySpark or Scala (see the sketch after this list)
  • Buildout of data warehouses and published data sets using Redshift, Aurora, RDS, and Elasticsearch
  • Scripting with AWS Lambda
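
To illustrate the kind of refactoring work this role involves, below is a minimal sketch of a nightly PL/SQL-style rollup re-expressed in PySpark. The bucket paths, table layout, and column names are hypothetical placeholders, not a reference to any client system.

    # Minimal illustrative sketch (hypothetical names throughout): a nightly
    # PL/SQL-style rollup refactored to PySpark, reading raw rows landed on S3
    # (e.g. by DMS replication) and publishing a curated, partitioned data set.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders-daily-rollup").getOrCreate()

    # Raw order rows replicated from an on-prem RDBMS (path is a placeholder).
    orders = spark.read.parquet("s3://example-bucket/raw/orders/")

    # Equivalent of the GROUP BY aggregation a PL/SQL batch job might compute.
    daily_totals = (
        orders
        .withColumn("order_date", F.to_date("order_ts"))
        .groupBy("order_date", "region")
        .agg(
            F.count("*").alias("order_count"),
            F.sum("amount").alias("total_amount"),
        )
    )

    # Publish the curated set as a partitioned Parquet data set.
    (
        daily_totals.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-bucket/curated/orders_daily/")
    )

    spark.stop()

Partitioning the published set by date lets downstream Athena or Redshift Spectrum queries prune partitions on typical date-bounded scans.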

Experience Requirements

The position requires the following technical skills:

  • 7+ years of experience in software development with Python, Scala, or Java
  • 5+ years of database development experience with RDBMS
  • 3+ years of database development experience within the Hadoop ecosystem, including Spark
  • 3+ years of hands-on data engineering on AWS, including DMS, Glue, EMR, Lambda, Redshift, S3, and Kinesis
  • AWS Big Data Specialty and/or Solutions Architect Professional certification is a plus (or to be obtained within the first three months on the job)
  • A Bachelor’s degree in Computer Science from an accredited college, or equivalent experience