From:                              route@monster.com

Sent:                               Friday, May 06, 2016 2:13 PM

To:                                   hg@apeironinc.com

Subject:                          Please review this candidate for: Cloud

 

This resume has been forwarded to you at the request of Monster User xapeix03

Pramod Deshmukh 

Last updated:  09/21/15

Job Title:  Not specified

Company:  Not specified

Rating:  Not Rated

Screening score:  Not specified

Status:  Resume Received


Alpharetta, GA  30004
US

pramod.v.deshmukh@gmail.com
Contact Preference:  Telephone


 

 

RESUME

  

Resume Headline: Big Data/ Hadoop Solution Architect/ Sr. Developer


  

 

PRAMOD DESHMUKH

Email: pramod.v.deshmukh@gmail.com

 

SUMMARY

 

An experienced Big Data/Hadoop Architect and Sr. Developer with varying levels of expertise across different Hadoop projects like HBase, Phoenix, Storm, Kafka, Hive, Flume, ZooKeeper, Oozie, Pig and Ranger. Good experience writing Talend, Spark, SparkSQL and Hadoop MapReduce jobs.

 

Over 14 years of experience in the Oil and Natural Gas, Telecom, Security, Healthcare, Insurance, Finance and Retail domains. Creative and innovative; proficient in verbal and written communication; can initiate work and play a pivotal role in a team; quick learner, always zealous to learn new technologies.

 

Currently assisting multiple customers in the Oil and Natural Gas industry with their big data efforts, rolling out big data solutions by transforming, loading and processing large, diverse datasets using the NoSQL and Hadoop ecosystems.

 

EXPERIENCE SUMMARY

§    Skilled in Big Data/Hadoop projects like HDFS, MapReduce, HBase, ZooKeeper, Oozie, Hive, Pig, Flume, Phoenix, Storm, Kafka, Spark, Shark, SparkSQL, Spark Streaming

§    Hadoop cluster design, installation and configuration on Azure, AWS EC2 and on-premise clusters using RHEL 6.5 and CentOS 6

§    Skilled and experienced in designing search solutions using Solr and SiLK

§    Experience using data visualization/ BI tools like Tibco Spotfire, Jaspersoft, Google Visualization, Tableau, Pentaho

§    Handling different file formats such as Parquet, Protobuf (Protocol Buffers), Apache Avro, SequenceFile, JSON, XML and flat files.

§    Skilled in Hadoop administration: installing, configuring, tuning, managing and designing Hadoop clusters.

§    Skilled working with HDP 2.2, Ambari 1.7.0, and Cloudera Manager 4 & 5

§    Good understanding of analytic tools like Tableau, Jaspersoft and Pentaho

§    Skilled in Object Oriented Analysis and Design (UML using Rational Rose), database analysis and design (ERwin and Oracle Designer), and web application development using Java, J2EE, JSPs, Servlets, AJAX, JavaScript, Ext JS, JSON, XML and HTML.

§    Skilled in web technologies like Groovy, Grails, GORM, Spring Security

§    Experienced in:

o       Hortonworks HDP 2.2 cluster design, installation and configuration.

o       Big Data/Hadoop projects like HDFS, MapReduce, HBase, ZooKeeper, Oozie, Hive, Pig, Flume, Phoenix, Storm, Kafka, Spark, Shark, SparkSQL, Spark Streaming, Solr

o       HBase data modeling and rowkey design to accommodate heavy reads and writes and avoid region hotspotting.

o       Data visualization using Tibco Spotfire, Jaspersoft, Tableau, Pentaho and Google Visualization APIs

o       Groovy on Grails 2.2, Spring MVC, MyBatis 3, Struts 1.3, Hibernate, Ext JS, and J2EE design patterns.

o       Build scripts using Ant, Maven.

o       Testing tools like JProbe 6.0 and Mercury’s Test Director and Load Runner.

o       Scrum, RUP, Waterfall, and Agile methodology.

o       Mobile application development using Android 2.2 on Eclipse.

o       Mobile web application using Sencha Touch 1.0

 

AREAS OF PROFICIENCY

·  Architect and design Big Data/Hadoop solutions on projects like HDFS, MapReduce, Storm, Kafka, HBase, Phoenix, Hive, Pig, Flume, Spark, Shark, SparkSQL and Spark Streaming on Azure, AWS and on-premise Hadoop clusters.

·  Web Application and Mobile Web Application software engineering architecture, design and development.

·  Object Oriented Analysis and Design.

·  UML based designing with Rational Rose

·  3/n Tier Business Logic Implementation

·  Scrum Agile Software Development.

 

Achievements

·  Ciphertrust Inc. - Awarded “CEO Award of Excellence” - 2006

·  McAfee Inc. - Awarded “Productivity System Engineer of the Year” – 2008

 

 

TECHNICAL SKILLS

 

Big Data – Hadoop: MapReduce, Storm, Kafka, Spark, Spark Streaming, HDFS, HBase, Phoenix, Hive, Impala, Flume, ZooKeeper, Oozie, Avro, Parquet, Protobuf (Protocol Buffers), Sqoop

 

Hadoop Distributions: Hortonworks HDP 2.1, 2.2, Cloudera 4 & 5 on Azure, AWS and On-Premise setup

 

Search Technologies: Solr, SiLK, ElasticSearch, Kibana

 

Hadoop Security: Knox, Ranger, AD (Active Directory), Kerberos, LDAP, Encryption in flight and at rest, Centrify, SED (Self Encrypted Disks)

 

Languages/Scripting: Java 1.7, Scala, Groovy 2, Perl, Shell Scripts, PL/SQL, Phoenix

 

Database: HBase, Phoenix, Cassandra, MongoDB, PostgreSQL, Oracle, MySQL, MS SQL Server, RRD, Actian Vortex 

 

Frameworks: Spring, Spring MVC, Grails, Hibernate, Struts

 

Web Technologies: JSP, Servlets, J2EE, Ext JS, AJAX, JSON, XML, ChartDirector, EJB 3.0, JDBC, JNDI, XSL, Web Services, iBatis/MyBatis, Hibernate, JUnit, WSDL, SOAP, Spring Security.

 

Business Intelligence: Tibco Spotfire, Jaspersoft, Google Visualization, Tableau, Pentaho, QlikView

 

Operating Systems: Linux, FreeBSD, Ubuntu

 

Application Servers: Jakarta Tomcat, WebLogic, WebSphere

 

Design Tools: Rational Rose, MS Visio

 

Tools: IntelliJ IDEA, Eclipse, Tableau, Pentaho, Tibco Spotfire, Jaspersoft, Talend, QlikView

 

Version Control Systems: SVN (Subversion), Git, CVS

 

System Testing Tools: JProbe, Mercury Load Runner, JUnit

 

 

EDUCATION and CERTIFICATION
(SCWCD)—Sun Certified Web Component Developer for J2EE 1.4 (Score: 81%)

(SCJP)—Sun Certified Programmer for the Java 2 Platform (Score: 86%)

C-DAC ACTS Post Graduate Diploma in Advanced Computing (1999)

Computer Engineering (1998)

 

 

FUNCTIONAL EXPERIENCE

 

Hortonworks – Johnson Controls Inc, Milwaukee, WI
Big Data/Hadoop Solution Architect/Sr. Developer Consultant

08/2015 – Present

Working as an architect to review the existing Hortonworks Data Platform deployment, analyze business requirements, design an Enterprise Data Lake and provide recommendations to operationalize this data lake.

 

Responsibilities

·              Reviewed the Hadoop cluster design in the current environment to manage resource utilization for different groups. The requirement was to accommodate multiple tenants and allocate resources based on priority and usage. The recommendation was to use hierarchical queues with the YARN Capacity Scheduler.

·              Conducted interviews for environment requirements and mapped them back to the business requirements. The recommendations were to have Dev, Test and DR (cold/warm) clusters. The DR cluster recommendations were based on enterprise DR policies.

·              Reviewed the existing data lake, captured both functional and non-functional requirements and provided recommendations on:

o       Data ingestion framework to handle real-time and batch payloads. This framework will have the following features:

§      Source profile registration

§      Schema management

§      Data quality checks

§      Audit data pipeline/ingestion activity

§      Job scheduling and coordination

§      Ingestion UI and management console.

§      Notification services for open communication channel between registered applications

§      Support REST API’s

o       Data security recommendations to administer authentication, authorization and data encryption by introducing tools like Knox and Ranger and integrating with Kerberos and enterprise-level Active Directory.

o       Data governance requirements analysis, with recommended tools like Falcon and Atlas. This includes recommendations on:

§      Define data security and data access policies

§      Track quality at every point in business process

§      Tracking data at every point of its use during its lifecycle

§      Data delivery, presentation and usability.

 

Operating System: Red Hat Linux

Technology: Hortonworks HDP 2.3 - Hadoop, MS Excel, Hive, HDFS, WebHDFS, WebHCat, ZooKeeper, Java, Shell Scripting

Tools: Ambari 2.1, IntelliJ IDEA, Git, MS Excel, MS Word, vi, vim, PuTTY, ORC file format.

 

 

Hortonworks – Hobby Lobby, Oklahoma City
Big Data/Hadoop Solution Architect/Sr. Developer Consultant

07/2015 – 08/2015

Working as a solution architect for TLog search and WebLog ingestion and analytics use cases.

Designed end-to-end solutions and application designs on the Hortonworks Data Platform 2.2 Hadoop and Solr 4.10.2 platforms.

 

Responsibilities

·              Hadoop Cluster design: Architected and designed HDP 2.2.6 cluster, initiated and executed installation in Azure environment.

·              TLog index and search

o       Installed and configured Solr 4.10.2 in cloud mode.

o       Configured Solr to store indexes and data in HDFS.

o       Transformed and validated TLog data using the xsltproc and xmllint utilities.

o       Loaded XML files into Solr using SimplePostTool.

o       Designed search queries over REST and integrated them with a .NET search application (a Solr REST sketch follows this list).

 

·              WebLog Analytics

o       Real-time log streaming into HDFS using Flume NG

o       Created Hive tables on log data
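
A minimal sketch of the TLog load-and-search flow described above, assuming a hypothetical "tlogs" collection and a single-node Solr 4.x endpoint (names and fields are illustrative, not from the engagement): it posts one XML document to the update handler and runs a query over REST using only JDK HTTP classes.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class SolrTlogClient {

    // Hypothetical endpoint; the real deployment ran SolrCloud 4.10.2 on HDP.
    private static final String SOLR = "http://solr-host:8983/solr/tlogs";

    /** Post one transformed TLog record to Solr's XML update handler and commit. */
    static void indexSample() throws Exception {
        String doc = "<add><doc>"
                + "<field name=\"id\">store42-20150701-0001</field>"
                + "<field name=\"store_s\">42</field>"
                + "<field name=\"total_f\">19.99</field>"
                + "</doc></add>";
        HttpURLConnection conn =
                (HttpURLConnection) new URL(SOLR + "/update?commit=true").openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "text/xml; charset=UTF-8");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(doc.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("update status: " + conn.getResponseCode());
    }

    /** Run a simple search against the REST select handler and print the raw JSON response. */
    static void search(String query) throws Exception {
        String url = SOLR + "/select?wt=json&q=" + URLEncoder.encode(query, "UTF-8");
        try (Scanner in = new Scanner(new URL(url).openStream(), "UTF-8")) {
            while (in.hasNextLine()) {
                System.out.println(in.nextLine());
            }
        }
    }

    public static void main(String[] args) throws Exception {
        indexSample();
        search("store_s:42");
    }
}
```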

 

 

Operating System: Centos 6.6

Technology: Hortonworks HDP 2.2.6 - Hadoop, MS Excel, Hive, HDFS, WebHDFS, WebHCat, ZooKeeper, Java, Shell Scripting

Tools: Ambari 2.1, HDP 2.2.6, IntelliJ IDEA, Git, MS Excel, MS Word, vi, vim, PuTTY, ORC file format.

 

 

Hortonworks – MetLife Inc, Raleigh NC
Big Data/Hadoop Solution Architect/Sr. Developer Consultant

05/2015 – 06/2015

Working as a solution architect for ETL and batch data loading to a staging area, and for processing/transforming data using Hive. Designed end-to-end solutions and application designs on the Hortonworks Data Platform 2.2 Hadoop platform.

 

Responsibilities:

·              Hadoop Cluster design: Architected and designed HDP 2.2.4 cluster, initiated and executed installation in Azure environment.

·              Designed a solution for loading and transforming data using WebHDFS, Hive on Tez, WebHCat and Pig.

o       Loaded policy and funds files produced by model runs on an HPC (MS High Performance Computing) cluster into HDFS using WebHDFS REST APIs.

o       Created Hive external STAGE tables and generated dynamic partitions on the loaded files.

o       Moved STAGE data into an ORC, SNAPPY-compressed Hive managed table (a HiveQL sketch follows this list).

o     Transformed and processed data using HiveQL on Tez.

o     PowerShell scripting to execute WebHDFS and WebHCat REST APIs.

·              Data visualization using QlikView: Integrated QlikView and Excel with Hive over ODBC and with Phoenix using an ODBC-JDBC bridge solution.

·              Security: Installed and configured Knox and Ranger. Hive tables were secured using ACLs. Worked on integrating HDP with the existing AD using Centrify.
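
A sketch of the STAGE-to-ORC flow referenced above, issued over the Hive JDBC driver; the table and column names are hypothetical, and dynamic-partition settings are shown explicitly. The real engagement ran the same pattern on Hive-on-Tez under HDP 2.2.4.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class StageToOrcLoad {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        // HiveServer2 endpoint and credentials are assumptions for the sketch.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hive2://hiveserver2-host:10000/default", "hive", "");
             Statement stmt = conn.createStatement()) {

            // External STAGE table over the policy files previously loaded through WebHDFS.
            stmt.execute("CREATE EXTERNAL TABLE IF NOT EXISTS policy_stage ("
                    + " policy_id STRING, fund_id STRING, amount DOUBLE, run_date STRING)"
                    + " ROW FORMAT DELIMITED FIELDS TERMINATED BY ','"
                    + " LOCATION '/data/stage/policy'");

            // Managed ORC table, partitioned by run_date and SNAPPY-compressed.
            stmt.execute("CREATE TABLE IF NOT EXISTS policy_orc ("
                    + " policy_id STRING, fund_id STRING, amount DOUBLE)"
                    + " PARTITIONED BY (run_date STRING)"
                    + " STORED AS ORC TBLPROPERTIES ('orc.compress'='SNAPPY')");

            // Allow fully dynamic partitions for the load, then move STAGE data into ORC.
            stmt.execute("SET hive.exec.dynamic.partition=true");
            stmt.execute("SET hive.exec.dynamic.partition.mode=nonstrict");
            stmt.execute("INSERT OVERWRITE TABLE policy_orc PARTITION (run_date)"
                    + " SELECT policy_id, fund_id, amount, run_date FROM policy_stage");
        }
    }
}
```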

 

 

Operating System: Centos 6.6

Technology: Hortonworks HDP 2.2.4 - Hadoop, QlikView, MS Excel, Hive, Pig, HDFS, WebHDFS, WebHCat, ZooKeeper, Knox, Ranger, Java, Spark, Shell Scripting, Centrify, Actian

Database: Oracle

Tools: Ambari 2.0, HDP 2.2.4, QlikView, WebHDFS, WebHCat, IntelliJ IDEA, Git, MS Excel, MS Word, vi, vim, PuTTY, ORC file format.

 

 

Hortonworks - Hess Corporation, Houston TX
Big Data/Hadoop Solution Architect/Sr. Developer Consultant

02/2015 – 05/2015

Working as a solution architect for ETL and real-time data streaming and processing use cases. Designed end-to-end solutions and application designs on Hortonworks Data Platform 2.2 hadoop platform.

 

Responsibilities:

·              Hadoop Cluster design: Architected and designed the HDP 2.0 cluster; initiated and executed installation and configuration in AWS and an on-premise setup.

·              ETL: Developed ETL solution using Talend big data integration.

·              Real-time event processing, data analytics and monitoring for high rate of penetration.

a.                      Designed and implemented a data streaming solution for drilling data using Kafka, Storm and HBase. This involved directory monitoring using the Java WatchService API (see the WatchService sketch after this list).

b.                     Designed and developed a Storm monitoring bolt for validating pump tag values against high/low and high-high/low-low thresholds from preloaded metadata. This metadata is auto-trained based on the alert event values. The analyzed data is then fed back to the system and dynamically distributed across all monitoring bolts.

·              HBase data modelling using Phoenix SQL: Designed HBase tables with Phoenix for time-series and depth sensor data. Used salted buckets to evenly distribute data across region servers. Used immutable and local secondary indexes for data access patterns on column qualifiers that are not part of the primary key (a Phoenix DDL sketch also follows this list).

·              Data visualization using Tibco Spotfire: Integrated Spotfire with Hive over ODBC and with Phoenix using an ODBC-JDBC bridge solution.

·              Jaspersoft reports and charts: Integrated Jaspersoft using Phoenix JDBC driver and generated charts and reports.

·              Security: Installed and configured Knox and Ranger.
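
The directory-monitoring piece mentioned above used the Java WatchService API. A minimal self-contained sketch follows; the landing path and the handling step are hypothetical, standing in for the Kafka/Storm hand-off used in the real pipeline.

```java
import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardWatchEventKinds;
import java.nio.file.WatchEvent;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;

public class LandingDirWatcher {
    public static void main(String[] args) throws Exception {
        Path landingDir = Paths.get("/data/landing/drilling");   // hypothetical path
        WatchService watcher = FileSystems.getDefault().newWatchService();
        landingDir.register(watcher, StandardWatchEventKinds.ENTRY_CREATE);

        while (true) {
            WatchKey key = watcher.take();                        // blocks until an event arrives
            for (WatchEvent<?> event : key.pollEvents()) {
                if (event.kind() == StandardWatchEventKinds.ENTRY_CREATE) {
                    Path newFile = landingDir.resolve((Path) event.context());
                    // In the real pipeline this would publish the file's records to Kafka
                    // for the Storm topology to validate and write to HBase.
                    System.out.println("new drilling file: " + newFile);
                }
            }
            if (!key.reset()) {
                break;                                            // directory no longer accessible
            }
        }
    }
}
```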

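A sketch of the Phoenix-over-HBase modeling described in the HBase data modelling bullet above, with hypothetical table, column and host names: a salted time-series table plus a local index on a non-primary-key column, created through the Phoenix JDBC driver. Salting is what spreads the otherwise monotonically increasing (pump, time) keys across region servers.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class PhoenixSensorSchema {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
        // Phoenix connects through the cluster's ZooKeeper quorum (host name is an assumption).
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
             Statement stmt = conn.createStatement()) {

            // SALT_BUCKETS pre-splits the table and prefixes keys, avoiding region hotspotting.
            stmt.execute("CREATE TABLE IF NOT EXISTS SENSOR_TS ("
                    + " PUMP_ID VARCHAR NOT NULL,"
                    + " EVENT_TIME TIMESTAMP NOT NULL,"
                    + " TAG VARCHAR,"
                    + " VAL DOUBLE,"
                    + " CONSTRAINT PK PRIMARY KEY (PUMP_ID, EVENT_TIME))"
                    + " SALT_BUCKETS = 16");

            // Local index to serve access patterns on a column that is not part of the primary key.
            stmt.execute("CREATE LOCAL INDEX IF NOT EXISTS SENSOR_TS_TAG_IDX"
                    + " ON SENSOR_TS (TAG)");
            conn.commit();
        }
    }
}
```
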
 

 

 

Operating System: Red Hat Linux 6.5

Technology: Hortonworks HDP 2.2 - Hadoop, Kafka, Storm, HBase, Phoenix, HDFS, ZooKeeper, Knox, Ranger, Java, Spark, Shell Scripting, Tibco Spotfire, Jaspersoft, JSON, Protobuf/Protocol Buffers, JUnit
Database: Apache Derby

Tools: Ambari 1.7.0, IntelliJ IDEA, Git, MS Excel, MS Word, JIRA, Confluence, vi, vim, PuTTY

 

 

 

Hortonworks – Noble Energy Inc, Houston TX
Big Data/Hadoop Solution Architect/Sr. Developer Consultant

 

03/2015 – 03/2015

Worked as a solution architect for on-premise cluster design, installation and configuration.

 

Responsibilities:

·              Hadoop Cluster design: Architected and designed the HDP 2.0 cluster; initiated and executed installation and configuration in AWS and an on-premise setup.

 

Operating System: Red Hat Linux 6.5

Technology: Hortonworks HDP 2.2 - Hadoop, Kafka, Storm, HBase, Phoenix, HDFS, ZooKeeper, Knox, Ranger, Java, Spark, Shell Scripting, JSON, Protobuf/Protocol Buffers, JUnit

Tools: Ambari 1.7.0, MS Excel, MS Word, JIRA, Confluence, vi, vim, PuTTY

 

 

 

 

 

 

 

Hortonworks - Schlumberger Inc, Houston TX
Big Data/Hadoop Solution Architect/Sr. Developer Consultant

11/2014 – Present

Working as a solution architect consultant for end-to-end solutions and application designs on the Hortonworks Data Platform 2.1 & 2.2 Hadoop platform. This involves detailed real-time data streaming application design for handling real-time ESB pump data from oil wells; pumps have multiple sensors collecting metrics at 50 Hz that are routed in near real time. The work also covers HBase design, visualization, cluster design, installation and configuration, and security configuration using Knox and Ranger.

 

Functionalities:

·              HBase data modeling and rowkey design: Designed HBase tables for time-series data. Designed the rowkey to avoid region hotspotting and accommodate the desired read access/query patterns; used FuzzyRowFilter for fast key searches across HBase regions (a FuzzyRowFilter sketch follows this list).

·              HBase using Phoenix SQL: Designed HBase tables with Phoenix for time-series and depth sensor data. Used salted buckets to evenly distribute data across region servers. Used immutable and local secondary indexes for data access patterns on column qualifiers that are not part of the primary key.

·              Real-time data streaming: Designed and developed a solution for ESB pump real-time data ingestion using Kafka, Storm and HBase. This involved Kafka and Storm cluster design, installation and configuration.

·              Real-time event monitoring: Developed a Storm monitoring bolt for validating pump tag values against high/low and high-high/low-low thresholds from preloaded metadata.

·              Utilities: Developed utility for loading pump tag meta-data used for warning or error generation. 

·              Benchmarking a Kafka producer at 1 million messages/second: Designed and configured the Kafka cluster to accommodate a heavy throughput of 1 million messages per second. Used the Kafka producer 0.8.3 APIs to produce messages.

·              Installed and configured Phoenix on HDP 2.1. Created views over HBase tables and used SQL queries to retrieve alerts and metadata.

·              Data query and visualization: Used HBase APIs to get and scan event data stored in HBase.

o       Implemented the Douglas-Peucker decimation algorithm to reduce data size for a given epsilon (sketched after this list).

o       Implemented a 9-point smoothing algorithm to smooth metric values and generate a smoother pattern in visualizations.

·              Predictive Analytics: Designed an application to do predictive analytics for the maintenance and life cycle of high-tech, sophisticated equipment.

·              Hadoop Cluster design: Cluster design, installation and configuration using the HDP 2.1 stack on Azure, HDP 2.2 on AWS and HDP 2.2 in an on-premise setup.

·              Security: Installed and configured Knox and Ranger.
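
A sketch of the FuzzyRowFilter scan mentioned in the rowkey-design bullet above. It assumes a hypothetical fixed-width key of [2-byte bucket][8-byte pump id][8-byte timestamp]; the bucket and timestamp positions are marked as fuzzy so a single scan can find one pump's rows across all regions.

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.FuzzyRowFilter;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.Pair;

public class PumpEventFuzzyScan {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();

        // Hypothetical 18-byte key: [2-byte bucket][8-byte pump id][8-byte timestamp].
        byte[] keyTemplate = new byte[18];
        Bytes.putLong(keyTemplate, 2, 42L);           // pump id 42 in the fixed positions

        // Mask semantics: 0 = byte must match, 1 = fuzzy (any value).
        byte[] mask = new byte[18];
        for (int i = 0; i < 2; i++)  mask[i] = 1;     // bucket prefix is fuzzy
        for (int i = 10; i < 18; i++) mask[i] = 1;    // timestamp suffix is fuzzy

        List<Pair<byte[], byte[]>> fuzzyKeys = new ArrayList<Pair<byte[], byte[]>>();
        fuzzyKeys.add(Pair.newPair(keyTemplate, mask));

        Scan scan = new Scan();
        scan.setFilter(new FuzzyRowFilter(fuzzyKeys));

        // HTable is the 0.98-era client API that ships with HDP 2.2; table name is illustrative.
        HTable table = new HTable(conf, "pump_events");
        try {
            ResultScanner scanner = table.getScanner(scan);
            for (Result r : scanner) {
                System.out.println(Bytes.toStringBinary(r.getRow()));
            }
            scanner.close();
        } finally {
            table.close();
        }
    }
}
```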

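A compact version of the Douglas-Peucker decimation mentioned above: for a given epsilon it keeps only the points whose perpendicular distance from the current segment exceeds epsilon, which is how chart payloads were reduced before visualization. The point type and usage here are illustrative.

```java
import java.util.ArrayList;
import java.util.List;

public class DouglasPeucker {

    /** A (time, value) sample; in the real pipeline these came from HBase scans. */
    static class Point {
        final double x, y;
        Point(double x, double y) { this.x = x; this.y = y; }
    }

    /** Recursively decimate points, keeping the shape within the given epsilon. */
    static List<Point> simplify(List<Point> pts, double epsilon) {
        if (pts.size() < 3) {
            return new ArrayList<Point>(pts);
        }
        int splitIndex = -1;
        double maxDist = 0.0;
        Point first = pts.get(0), last = pts.get(pts.size() - 1);
        for (int i = 1; i < pts.size() - 1; i++) {
            double d = perpendicularDistance(pts.get(i), first, last);
            if (d > maxDist) {
                maxDist = d;
                splitIndex = i;
            }
        }
        if (maxDist <= epsilon) {
            List<Point> result = new ArrayList<Point>();
            result.add(first);
            result.add(last);
            return result;
        }
        // Keep the farthest point and recurse on both halves.
        List<Point> left = simplify(pts.subList(0, splitIndex + 1), epsilon);
        List<Point> right = simplify(pts.subList(splitIndex, pts.size()), epsilon);
        List<Point> merged = new ArrayList<Point>(left);
        merged.addAll(right.subList(1, right.size()));   // drop the duplicated split point
        return merged;
    }

    /** Perpendicular distance from p to the line through a and b. */
    static double perpendicularDistance(Point p, Point a, Point b) {
        double dx = b.x - a.x, dy = b.y - a.y;
        double norm = Math.hypot(dx, dy);
        if (norm == 0.0) {
            return Math.hypot(p.x - a.x, p.y - a.y);
        }
        return Math.abs(dy * p.x - dx * p.y + b.x * a.y - b.y * a.x) / norm;
    }
}
```
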
 

Responsibilities:

·              Architected and designed big data solutions using the HDP 2.2 Hadoop stack.

·              Developed a Kafka producer to produce 1 million messages per second (a producer sketch follows this list).

·              Developed a Kafka spout plus HBase and enrichment bolts for data ingestion into HBase using the HBase client APIs and the Phoenix SQL skin.

·              Designed, installed and configured the HDP 2.2 Hadoop cluster along with Knox and Ranger for security requirements; configured the Kafka and Storm clusters to handle the load and optimized them to get the desired throughput.

·              Configured security using Knox and Ranger.

·              Cluster design for HDP 2.2 on AWS, Azure and on-premise setup.
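
A minimal sketch of the high-throughput producer work listed above, using the new Java producer API introduced around the 0.8.2/0.8.3 timeframe. Broker addresses, topic name, key scheme and batching settings are illustrative, tuned in the same spirit as the 1-million-messages-per-second benchmark.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class PumpEventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092,broker2:9092,broker3:9092");
        props.put("acks", "1");                 // trade some durability for throughput
        props.put("batch.size", "65536");       // large batches amortize request overhead
        props.put("linger.ms", "5");            // give the batcher time to fill batches
        props.put("compression.type", "snappy");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props);
        try {
            for (long i = 0; i < 1_000_000L; i++) {
                String key = "pump-" + (i % 500);                  // spread load across partitions
                String value = System.currentTimeMillis() + ",pressure," + (i % 1000);
                producer.send(new ProducerRecord<String, String>("pump-events", key, value));
            }
        } finally {
            producer.close();                    // flushes any outstanding batches
        }
    }
}
```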

 

Operating System: Red Hat Linux 6.5

Technology: Hortonworks HDP 2.2 - Hadoop, Kafka, Storm, HBase, Phoenix, HDFS, ZooKeeper, Knox, Ranger, Java, Spark, Shell Scripting, JSON, Protobuf/Protocol Buffers, JUnit
Database: MySQL

Tools: Ambari 1.7.0, Hue, IntelliJ IDEA, Git, MS Excel, MS Word, JIRA, Confluence, vi, vim, PuTTY

 

 

Humedica Inc – Clinical Analytics, Boston MA
Big Data/Hadoop Consultant – Sr. Developer & Solution Architect

 

11/2013 – 11/2014

Humedica is a next-generation clinical informatics company that provides novel business intelligence solutions to the health care industry.

 

Functionalities:

·              Inbound/Outbound file tracking and ETL: Humedica provides data analytics tools and data management services for various hospital clients. This system performs ETL on different EMR and HL7 data files.

a.          Streaming data to Hadoop using Kafka

b.         Data ingestion to HBase and Hive using Storm bolts.

c.          Perform file and record level validation.

d.         Light data transformation to Avro to achieve a standard structure for fast processing and compact storage. These Avro events are posted to Kafka topics (an Avro sketch follows this list).

e.          Storing file metadata in HBase for tracking. E.g. file name, no. of records etc.

f.            Using the Camus framework, a map-only Kafka consumer, to read data from Kafka topics and generate Avro files on HDFS.

g.         Dynamic Avro schema generation for new EMR files.

h.         Create external HIVE tables on avro files.

i.             Create Sqoop jobs to export data from HIVE to Oracle stage environment.

 

·              Dedupe HL7: Humedica receives many duplicate segments as part of different HL7 messages, which leads to duplicate records.

a.          Parsing HL7 messages using IBM parsers into different segment files.

b.         HL7 data is produced to Kafka and then landed into HBase and Hive using Storm spouts and bolts.

c.          Spark jobs run on the HBase dataset. Different segment RDDs are joined to produce logical records, and dedupe logic is then applied.

d.         After dedupe, this dataset is stored to HDFS and exposed using Hive external table.

e.          A Sqoop job then exports the deduped data from Hive to the Oracle stage environment.
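
A sketch of the "light transformation to Avro" step from the ETL list above, assuming a hypothetical inbound EMR record schema: parse a runtime-built schema, wrap a record as a GenericRecord, and write an Avro container file of the kind later exposed through external Hive tables.

```java
import java.io.File;

import org.apache.avro.Schema;
import org.apache.avro.file.DataFileWriter;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;

public class EmrToAvro {
    public static void main(String[] args) throws Exception {
        // Schema built at runtime; in the real pipeline it was generated dynamically per EMR feed.
        String schemaJson = "{\"type\":\"record\",\"name\":\"EmrEvent\",\"fields\":["
                + "{\"name\":\"patientId\",\"type\":\"string\"},"
                + "{\"name\":\"eventType\",\"type\":\"string\"},"
                + "{\"name\":\"eventTime\",\"type\":\"long\"}]}";
        Schema schema = new Schema.Parser().parse(schemaJson);

        GenericRecord record = new GenericData.Record(schema);
        record.put("patientId", "P-0001");
        record.put("eventType", "ADT");
        record.put("eventTime", System.currentTimeMillis());

        // Write a compact Avro container file (the real job posted these events to Kafka
        // and let Camus land them on HDFS).
        DataFileWriter<GenericRecord> writer =
                new DataFileWriter<GenericRecord>(new GenericDatumWriter<GenericRecord>(schema));
        writer.create(schema, new File("emr-events.avro"));
        writer.append(record);
        writer.close();
    }
}
```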

 

Responsibilities:

·              Architected and designed big data solutions for the new system on the Cloudera 5 Hadoop ecosystem. Used Cloudera Manager and Hue for design and development.

·              Developed Kafka producer and consumers, HBase clients, Spark and Hadoop MapReduce jobs along with components on HDFS, Hive.

·              Developed Storm spout and bolts for data ingestion to HBase and Hive.

·              Integrated JUnit, MRUnit, Bamboo and Cobertura code coverage
 

Operating System: Linux, Mac OS X,

Technology: Cloudera 5 - Hadoop, MapReduce, HDFS, HBase, Hive, Kafka, Storm, ZooKeeper, Oozie, Java, Spark, Shark, SparkSQL, Shell Scripting, Avro, Parquet, JUnit, MRUnit.
Database: Oracle

Tools: Cloudera Manager, Hue, UML, IntelliJ IDEA, SVN, MS Excel, MS Word, Hudson, JIRA, Confluence, vi, vim, PuTTY

 

 

Amplify Learning, Alpharetta GA
Senior Big Data/Hadoop Consultant

Application: Curriculum Services

04/2013 – 10/2013

 

Amplify provides advisory services, systems integration, custom applications and education data systems to various school districts. It provides digital curricula for ELA, Math and Science courses to school districts through a tablet-based web application. This application captures and produces big data for web click analysis and transactional data analysis to help the curriculum services department create and enhance curricula.

 

Functionalities:

·              Big Data - Analytics: This team is focused on processing web application events and web logs. It also processes transactional data from the PostgreSQL RDBMS. Web application logs are used for crash reporting and flow tracking.

·              Configured a Flume streaming agent to load events (JSON format) in real time from log files into HBase.

·              Used Sqoop to import transactional data from PostgreSQL into Hive.

·              Worked on a MapReduce job for converting HBase JSON data into tuple format on HDFS (a MapReduce sketch follows this list).

·              A map-only job is scheduled to filter, sort and aggregate events from HBase to HDFS. This produces a flat file of more fine-grained data, which is then piped into another MapReduce job to generate data for Cognos analytical reports.

·              MapReduce jobs are scheduled using the Oozie workflow and coordinator engine to consume the data produced in the earlier steps and generate data for report generation.

·              Debugged MapReduce jobs using job history logs and task syslogs. Used Log4j logging APIs and Counters for debugging failed jobs.

·              Created Hive managed and external tables, UDFs and HiveQL queries.

·              Generated MRUnit tests for MapReduce jobs.

·              Supporting production jobs as and when needed.

·              Fine-tuned and enhanced the performance of MapReduce jobs.

 

·              Web Application: This application is developed using Groovy, Grails, GORM, RESTful web services and GSP. I worked on modeling using GORM; designing and developing RESTful web services, the admin console and security (authentication and authorization) using Spring Security.

Involved in PubNub notification services, DB migration Grails scripts, homework assignment and security features, and data utilities using shell scripts.

Used Jenkins for building the project for the dev environment.
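
A sketch of the HBase-to-HDFS conversion job referenced above ("converting HBase JSON data into tuple format"), using a TableMapper over an assumed "events" table and "d:json" column; real field extraction would go through a JSON parser rather than the placeholder below.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class EventsToTuples {

    /** Map-only job: one HBase row in, one tab-separated tuple line out. */
    public static class EventMapper extends TableMapper<Text, NullWritable> {
        @Override
        protected void map(ImmutableBytesWritable rowKey, Result value, Context context)
                throws IOException, InterruptedException {
            byte[] raw = value.getValue(Bytes.toBytes("d"), Bytes.toBytes("json"));
            String json = (raw == null) ? "" : Bytes.toString(raw);
            // Placeholder "parsing": emit rowkey + raw JSON; a real job would extract fields here.
            context.write(new Text(Bytes.toString(rowKey.get()) + "\t" + json), NullWritable.get());
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = Job.getInstance(conf, "hbase-events-to-tuples");
        job.setJarByClass(EventsToTuples.class);

        Scan scan = new Scan();
        scan.setCaching(500);          // larger scanner caching for full-table MapReduce scans
        scan.setCacheBlocks(false);    // don't pollute the region server block cache

        TableMapReduceUtil.initTableMapperJob(
                "events", scan, EventMapper.class, Text.class, NullWritable.class, job);
        job.setNumReduceTasks(0);      // map-only
        FileOutputFormat.setOutputPath(job, new Path(args[0]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```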

 

Responsibilities:

·              Reviewed and estimated Scrum user stories and created tasks in JIRA. Analysis and design using UML.

·              Involved in design of Big Data-Hadoop components like HBase, Hive and Oozie

·              Developed Big Data/Hadoop MapReduce jobs along with HDFS, HBase, Hive and Flume components.

·              Also developed the web application using Groovy, Grails, GORM, RESTful web services, GSP, AngularJS, Git, JIRA, Shell Scripts, JUnit, Mockito and Jenkins.
 

Operating System: Linux, Mac OS X

Technology: Cloudera Hadoop, MapReduce, HDFS, HBase, Hive, Flume, ZooKeeper, Oozie, MongoDB, Java, J2EE, Groovy, Grails, GORM, RESTful web services, GSP, AngularJS, Git, Shell Scripting, JSON
Database: PostgreSQL 9.2, MongoDB

Application Server: Apache Tomcat 7
Tools: Cloudera Manager, UML, IntelliJ IDEA, JUnit, Git, MS Excel, MS Word, Jenkins, Hudson, JIRA, Confluence, vi, vim, PuTTY, Tableau

 

 

 

E*Trade Financial, Alpharetta GA
Senior Java Hadoop Developer

Application: E*Trade CRM Application

10/2012 – 04/2013

 

Worked as a senior software engineer on the Customer Relationship Management (CRM) application. This application is used by E*Trade's customer service to manage brokerage and bank customer profiles.

 

Functionalities worked:

·              ID Verification: Managing and performing customer credit or background checks, interfacing with the Equifax, TransUnion and LexisNexis credit bureaus.

·              Secure Messaging: Sending and managing secure messages to customers.

·              Analytic Reports: Developed MapReduce jobs for analytic reports for a given profile.


Responsibilities:

·              Review functional requirements and create application design.

·              The application design includes business modeling using UML. Use Case, Sequence, Class and Deployment diagrams depict the flow of the application.

·              Develop application using Spring MVC, Java, J2EE, Web Services.

·              Developed MapReduce jobs for analytic reports for a given profile.

 

Operating System: Linux

Technology: Big Data-Hadoop MapReduce, HDFS, HBase, ZooKeeper, Java, J2EE, Spring 3.0, Spring MVC, Web Services, SOAP, WSDL, XML, JAXB, JSON, AJAX, JavaScript, jQuery, Eclipse-JUNO, Java 1.6, J2EE, UML, Maven.
Database: HBase, Oracle 10g
Application Server: Apache Tomcat 6
Tools: UML, Eclipse JUNO, JUnit, Subversion, MS Excel, MS Word, Tableau

 

 

AT&T, Atlanta GA
Senior Java Developer and Designer

 

Application: OPUS, OPUSLite and OPUSMobile

04/2010 – 09/2012

 

OPUSLite is a mobile device-based point-of-sale application deployed at various retail locations and accessed nationwide via mobile devices like iPhone, iPad and Android tablets over WiFi or a data plan. The core functionality of OPUSLite is Queue Management (rsIQ), Profile Information, and OE Offers and Eligibility Check (pCA) for customers whose information is maintained in the billing system. This also includes non-serialized accessory sales, which include credit card swipe and receipt printing functionality.

 

Responsibilities:

·              Create application design based on the high level design and system requirements supplied by requirements team.

·              The application design includes business modeling using UML. Use Case, Sequence, Class and Deployment diagrams depict the flow of the application.

·              Develop application using Spring MVC , Android, Java, J2EE, Web services, Struts and Hibernate framework.

·              Develop stored procedures using PL-SQL on Oracle, develop DAO components using Hibernate ORM framework for accessing configuration and transactional data from Oracle database.

·              Generate build scripts using Ant

·              Accomplished proof of concept for development of mobile application using Android 2.2 platform.


Operating System: Oracle Solaris 10, Ubuntu 9.10

Technology: Spring 3.0, Hibernate 4, STRUTS 1.2, Web Services, SOAP, WSDL, XML, JSON, AJAX, Android 2.2 with Eclipse, Java 1.6, J2EE,  EJB3.0,  UML, Ant and Maven, PL/SQL, jQuery.
Database: Oracle 10g.
Application Server: BEA Web Logic 10.3
Tools: MS Excel, MS Word, UML, Eclipse Helios, JUnit, Subversion

 

 

McAfee Inc., Atlanta, GA

Senior Java Developer

Product: McAfee Email Gateway 7.0 aka “IronMail” 6.7.2

01/2006 – 04/2010

 

Role: Design, Development and Testing

 

IronMail provides comprehensive inbound threat protection against spam, viruses, phishing, zombies, and intrusions. IronMail also allows organizations to centrally define and manage corporate messaging policies that are automatically applied to all inbound and outbound messages.
 

Responsibilities:

·              Accomplished GUI development using Java, J2EE, Ext JS, JavaScript and Spring MVC framework.

·              Awarded ‘Productivity System Engineer of the year’ for successfully designing and implementing IronMail dashboard using ChartDirector, and AM Charts with statistical data from RRD.

·              Generated build scripts using Ant

·              Used JUnit to generate unit tests.

·              Created detailed requirement specifications and designs based on inputs from the Product Management team. This included the functional specification, work breakdown and design for a feature.


Design Team Size: 30 (Alpharetta, GA. USA)

Technical Environment: FreeBSD 6.1, Ubuntu 9.10, Spring, Java 1.5, J2EE, MySQL, XML, Ext JS 3.1, JSON, AJAX, IntelliJ IDEA 9.0.1, RRD, ChartDirector, AM Charts, Ant, UML using Rational Rose, Shell Scripts

Database: MySQL, used for storing customization and policy configuration.
Application Server: Tomcat.

 

 

BellSouth, Atlanta GA

Sr. Java Engineer

IOM Test Data Management

11/2005 –  01/2006

Role: Java Developer

 

Wrote an approach for test data building for the IOM domain. IOM is an order management system for BellSouth based on YANTRA software. As per the approach and design, we coded simulators to create test data for IOM and to verify and validate data at different checkpoints. These checkpoints are responsible for interpreting the results at various points in the business process. This avoids manual intervention by testers and gives a better way to execute different test scenarios.

 

Responsibilities:

·              Attended JAD sessions with business team; transformed business requirements into detailed requirement and functional specifications.

·              Participated in team for generating UAT plans.

·              Generated unit test cases, and sample test data.

·              Assisted QA team in writing test scripts using Mercury’s Test Director.

 

Design Team Size: 15

Technical Environment: YANTRA, Test Director, Unix, UML, Java 1.4.2, JDBC, Oracle 9i, XML, XSL, Eclipse 3.0

 

McKesson Corp., Alpharetta, GA

Java Lead

03/2005 – 11/2005

Role: Design and Implementation

Installation software is a GUI-based tool used by McKesson Corp. to install products on customer sites.

This software is used for remote deployment and management of different McKesson products along with different versions, service packs and critical fixes for a product.

 

Responsibilities:

·              Generated requirement and functional specification documents.

·              Involved in designing strategies and approach for GUI based installation software using Install Anywhere 6.0 tool.

·              Developed proof of concepts for user interface and communication to update server, and screen mock-ups for design document.

·              Compiled kick-start scripts for staging Linux based appliance.

·              Developed Java swing custom screens to plug into Install Anywhere scripts.

·              Developed shell scripts to do remote deployment using Linux commands like rdist (remote distribution)

 

Design Team Size: 6

Technical Environment: Linux 3.0, Unix, UML, Install Anywhere 6.0 Enterprise Build, Java 1.4.2, JDBC, Swing, Oracle 9i, XML, XSL, Eclipse 3.0, Shell Script

 

 

McKesson Corporation, Roseville, MN

Java Developer

Application: Pathways Healthcare Scheduling (PHS)

05/2004 –  02/2005

 

PHS is a core product for the healthcare industry. This product is mainly used for creating patient appointments and for maintaining and scheduling appointments using the advanced appointment book and the grid. Users can drag and drop appointments to schedule and also reschedule them. It has a validation process, which checks the availability of resources like facilities, practitioners, equipment, rooms, staff, etc.

 

Responsibilities:

·              Generated HTML mock-ups for detailed design and functional specification document.

·              Created test scripts using Mercury’s Test Director and load testing using Load Runner

·              Communicated with the offshore team and assisted the development team in understanding requirements and in coding.

·              Developed integration test plans, integrated code from every iteration, and delivered code.

 

Team Size: 20

Technical Environment: UML, Netscape Collabra, Oracle 9i, XML, XSL , Java, J2EE, Java Servlets, JDBC, JSP, Eclipse 2.1, Mercury Load Runner and JProbe 6.0

 

 

Humana Inc, Louisville, KY

Java Programmer

Application:  Reporting Tool

07/1999–  05/2004

 

Role: Worked as team lead and was involved in designing, coding and testing.

The reporting tool is a web-based tool for generating reports in different formats like PDF, MS Excel, CSV and XML. Used Apache FOP APIs for PDF generation and Apache POI-HSSF APIs for MS Excel generation. Used advanced features of Oracle 9i like SQLJ, stored procedures and Java stored procedures.

 

Responsibilities:

·              Participated in designing reporting system

·              Involved in web application development using Java, J2EE, Struts and Apache Formatting Object api’s

·              Involved in fine-tuning the application, as the web application was used by companies of various sizes.

·              Participated in unit and integration testing using JUnit; and optimization using JProbe.

 

Team Size: 10

Database: Oracle 9i, develop stored procedures, java stored procedures and SQLJ

Technical Environment: BEA Web Logic 8.1, Oracle 9i, PL-SQL, XML, XSL with FO, Java Servlets, JDBC, JSP, EJB,  FOP, POI, Eclipse 2.1, MS Visual Source Safe, UML using Rational Rose, Ant

 



Experience


 

Job Title: Java / Big Data / Hadoop Architect

Company: Virtue Group

Experience: - Present

 

Additional Info


 

Current Career Level:

Experienced (Non-Manager)

Date of Availability:

Within 2 weeks

Work Status:

US - I require sponsorship to work in this country.

Active Security Clearance:

None

US Military Service:

Citizenship:

None

 

 

Target Job:

Target Job Title:

Java Big Data / Hadoop Architect

Desired Job Type:

Temporary/Contract/Project

 

Target Company:

Company Size:

 

Target Locations:

Selected Locations:

US-GA-Atlanta North

Relocate:

Yes

Willingness to travel:

Up to 100%