Job Title

Big Data Platform Engineer

  • Position:
  • Salary: $70-73
  • Location:
      Remote
  • Work Eligibility: USC, GC, GC-EAD, TN, H1, H4-EAD, OPT-EAD, CPT
  • Job ID: 06246

Job Description

1) Top 3 requirements:

a. Python (Object Oriented Programming experience)

b. PySpark

c. Scala

d. SQL

e. Airflow

f. Experience with big data.

2) Are any of them flexible?

Required:

• Strong verbal and written communication skills to effectively articulate messages to internal and external teams.

• Hands-on experience with object-oriented programming in Python
• Experience with Python, PySpark, Scala, and SQL
• Experience designing, building, optimizing, and troubleshooting end-to-end big data pipelines using structured (relational and file-based) and semi-structured data
• Experience building metadata-driven data processing frameworks
• Strong experience in SQL, Python, PySpark, Scala, and shell scripting
• Experience working with Airflow
• Experience working with big data
• Ability to take ownership of a request from initial requirements through design, development, and production deployment
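
To give candidates a concrete sense of what "metadata-driven data processing frameworks" can mean in practice, here is a minimal, hypothetical sketch in plain Python (all class and field names are invented for illustration; real frameworks at this role would likely run on PySpark):

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical metadata record describing one pipeline step.
@dataclass
class StepConfig:
    name: str
    source: str                    # input identifier (table, path, topic)
    transform: Callable[[str], str]  # function applied to each record

class Pipeline:
    """Runs steps in order, driven entirely by their metadata."""
    def __init__(self, steps: List[StepConfig]):
        self.steps = steps

    def run(self, data: List[str]) -> List[str]:
        for step in self.steps:
            data = [step.transform(row) for row in data]
        return data

# Example: two config-driven steps over toy data.
steps = [
    StepConfig("uppercase", "raw", lambda r: r.upper()),
    StepConfig("strip", "staged", lambda r: r.strip()),
]
result = Pipeline(steps).run(["  alpha  ", "  beta  "])
# result → ["ALPHA", "BETA"]
```

The point of the pattern is that new steps are added by registering configuration, not by rewriting pipeline code.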

Nice to have:

• Azure

• Azure EventHubs

• Apache Kafka

• Streaming data

• CosmosDB/NoSQL Database

Common data model
Workflow – talk to business partners, gather requirements
Use the Azure stack to complete this work
The current common data model is a dimensional data warehouse

Data Factory and Functions – not necessarily needed
Azure DevOps – experience with something similar, such as GitHub, is acceptable
All other Azure requirements are needed

Dimensional modeling
SQL – coding
Spark – questions on optimization
PySpark – coding
Python – coding
Shell scripting – if on resume, be ready to speak to it
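
Since the notes above pair dimensional modeling with SQL coding, here is a small, hypothetical star-schema example (table and column names invented for illustration) of the kind of fact-to-dimension join an interview might probe, shown with Python's built-in sqlite3:

```python
import sqlite3

# Hypothetical star schema: one fact table joined to one dimension.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE fact_sales (sale_id INTEGER, product_id INTEGER, amount REAL);
INSERT INTO dim_product VALUES (1, 'widget'), (2, 'gadget');
INSERT INTO fact_sales VALUES (10, 1, 5.0), (11, 1, 7.5), (12, 2, 3.0);
""")

# Aggregate fact rows by the dimension attribute.
rows = cur.execute("""
    SELECT p.name, SUM(f.amount) AS total
    FROM fact_sales f
    JOIN dim_product p ON p.product_id = f.product_id
    GROUP BY p.name
    ORDER BY total DESC
""").fetchall()
# rows → [('widget', 12.5), ('gadget', 3.0)]
```

In a dimensional warehouse, facts hold measures and foreign keys while dimensions hold descriptive attributes; queries typically join and aggregate exactly this way.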

