Job Description
Summary:
Must have extensive hands-on experience designing, developing, and maintaining software solutions on Big Data platforms such as the Hadoop ecosystem.
Must have strong UNIX shell scripting experience.
Must have experience with an IDE such as Eclipse.
Must have working experience with Spark using Scala or Python.
Must have experience with an SDLC methodology (Agile / Scrum / iterative development).
Must have strong problem-solving and analytical-thinking skills.
Must have experience leading and mentoring a team of developers on projects.
Preferred experience with NoSQL databases such as HBase, MongoDB, or Cassandra.
Preferred experience developing Pig scripts, HiveQL queries, Sqoop jobs, and UDFs for analyzing structured, semi-structured, and unstructured data flows.
Preferred experience developing MapReduce programs that run on a Hadoop cluster, using Java or Python.
Preferred experience using Talend with Hadoop technologies.
Preferred experience with cloud computing infrastructure (e.g., Amazon Web Services EC2) and with design considerations for scalable, distributed systems.
Preferred experience with data warehousing and Business Intelligence systems.
Preferred experience working in an Agile, matrixed environment.
Preferred experience working with multiple stakeholders on design, systems change / configuration management, and business requirements management.