
Hadoop Developer

Job Description

· Implement Spark jobs in Scala, using DataFrames and the Spark-SQL APIs for faster data processing.

· Apply neural-network architectures such as LSTM and GRU to build better attribute classifiers for data monitoring.

· Use Spark to consume data from Kafka and convert it to a common format using Scala.

· Handle data at the network level and push it to different targets such as Salesforce and Account-DB.

· Work on a Kafka cluster to monitor incoming and outgoing events via Spark Streaming.

· Implement Elasticsearch, Logstash, Kibana, and Beats (the Elastic Stack) for centralized data collection, search, and visualization.

· Build a proof-of-concept (POC) application to monitor the process flow using TensorBoard in the TensorFlow application.


· Develop Spark code and Spark-SQL/Streaming for faster testing and processing of data.

· Read/write data in various file formats (JSON, text, Parquet, SchemaRDD) using Spark-SQL.

· Import data from sources such as HDFS and HBase into Spark RDDs.

· Create a data pipeline using Kafka, HBase, Spark, and Hive to ingest, transform, and analyze customer behavior.
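Several of the duties above center on consuming Kafka records with Spark and converting them to a common format in Scala. A minimal, dependency-free sketch of that per-record normalization step is below; `RawRecord` and `CommonEvent` are hypothetical illustration types, not part of the posting, and in a real job the `map` would run inside a Spark transformation over a stream read from Kafka.

```scala
// Minimal sketch (no Spark or Kafka dependency): the per-record
// "convert to a common format" step described in the duties above.
// RawRecord and CommonEvent are hypothetical illustration types.
case class RawRecord(topic: String, value: String)
case class CommonEvent(source: String, payload: String, receivedAt: Long)

// Normalize one raw Kafka-style record into the common event shape.
def toCommonFormat(rec: RawRecord, receivedAt: Long): CommonEvent =
  CommonEvent(source = rec.topic, payload = rec.value.trim, receivedAt = receivedAt)

val batch = Seq(
  RawRecord("clicks", """ {"user":1} """),
  RawRecord("views", """{"page":"a"}""")
)
// In a real pipeline this map would run inside a Spark transformation
// (e.g. over a DataFrame read from the Kafka source).
val events = batch.map(toCommonFormat(_, receivedAt = 0L))
events.foreach(println)
```

The same shape-normalizing function can be reused in both the batch and streaming paths, which keeps the "common format" consistent across the pipeline.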


Qualification: 


This position requires a minimum of a Bachelor’s degree or equivalent in Computer Science, Computer Information Systems, Information Technology, or a related field.



Job Tags

Hadoop Developer, Hadoop, Spark, SQL, JSON, Scala, Elasticsearch, Logstash, Kibana, Beats, Kafka
Job Location: Tempe
Job Type: Contract
Job Creation Date: 07/05/18 1:54 PM