From: route@monster.com
Sent: Friday, May 06, 2016 1:34 PM
To: hg@apeironinc.com
Subject: Please review this candidate for: Cloud
This resume has been forwarded to you at the request of Monster User xapeix03.
Gregory L. Burnett

Mr. Burnett is a Senior Network Engineer possessing 34 total years of technical experience: 15 years in the field of Electrical Engineering and 19 years in the field of Information Technology. He is responsible for extensive LAN/WAN management and administration (Cisco routers, switches, and firewalls). Areas of expertise include designing, implementing, managing, and supporting large corporate networks, cloud services, VPN (IPSEC) services, VoIP, IPv4, IPv6, DWDM, router installation, and nationwide WAN/carrier router management. He possesses expert-level troubleshooting skills. He holds a Bachelor's degree from the University of Maryland University College (UMUC) and an M.S. in Information Technology Systems from Johns Hopkins University, School of Continuing Studies, and is currently working on his dissertation in fulfillment of a Ph.D. in Information Systems from Nova Southeastern University.

SPECIALTIES
- Network LAN/WAN Systems Management
- Network LAN/WAN Systems Design
- Network LAN/WAN Systems Troubleshooting

EDUCATION
- Ph.D. (ABD), Information Systems, Nova Southeastern University, present
- M.S. (ITS - Information Technology Systems), Johns Hopkins University, 2001
- B.S. (CMIS - Computer Management and Information Systems), University of Maryland, University College, 1996

EXPERIENCE

Leidos (formerly SAIC Inc.), Herndon, VA, Nov. 2005 – present
Contractor at the National Institutes of Health (NIH), Bethesda, MD
Senior Network Engineer, NIH Center for Information Technology (CIT), Division of Networking Services and Technology (DNST), Engineering Operations Section (EOS)

NIH/CIT is one of 27 NIH Institutes and Centers (ICs), with a mission of providing consistent, cost-effective, and reliable IT services to all of the other 26 NIH ICs.
- One of five design engineers tasked with LAN and WAN core infrastructure design services for the NIH/CIT/DNST/EOS section. Our NIH network (NIHnet) infrastructure devices primarily consist of Cisco Systems routers, switches, and firewalls.
- It is our job to analyze the feasibility of engineering design requests and/or technical requirements from existing and potential NIH/CIT customers, propose one or more engineering design solutions, and then (if approved) facilitate a complete design based on the chosen solution proposal. Customer requests typically deal with:
  1. Special routing or peering requests onto NIHnet or with external NIH resources or entities.
  2. Addition of special services (e.g., Cloud, VPN, IPv6, bandwidth upgrades).
  3. New connectivity onto NIHnet or with external NIH resources or entities.

Howrey Simon Arnold & White, LLP (Law Firm), Washington, DC, Jan. 2004 – Nov. 2005
Senior Systems Engineer
- One of six senior systems and network engineers supporting an entire enterprise data network, consisting of 11 offices worldwide and serving over 1,500 lawyers and support personnel.
- This team planned, re-designed, and built our entire network nearly from scratch, via a mandate from our CIO to provide him with an extremely robust, flexible, and reliable network for his user base.
- Each person in our group had several assigned areas of expertise. I was considered the group subject matter expert in:
  - Routing and routing protocols (BGP, OSPF, EIGRP)
  - WAN-based QoS (to support real-time traffic)
  - Firewalls, security, and IPSEC VPNs
  - Video and Voice over IP technologies
  - Network monitoring systems and tools (e.g., SolarWinds, CiscoWorks)
- In addition to everyday enterprise network support, management, and monitoring, other tasks were to:
  - Provide trial support by setting up remote corporate WAN network access for lawyers.
  - Recommend, plan, design, and implement new technologies as they may benefit the firm as a whole (e.g., Voice/Video over IP is being slowly implemented, starting this year with the completion of our large Houston office).
  - Support partner sharing of information via the creation of various types of IPSEC VPNs and other secure connections.
  - Support the acquisition of other law firms, assimilating and/or integrating the acquired firms' IT infrastructure into the Howrey IT infrastructure.

Qwest Communications International, Arlington, VA, Jun. 2000 – Jan. 2004
Senior Data Network Engineer (Assistant Manager)
- One of six senior network engineers working in the IP Operations Tier III TAC group. My group served as the escalation point for the Tier II Network Operations Center (NOC). Not only did my group serve as Tier III level support for many Qwest products and services, we also provided both Tier III and Tier IV (engineering level) support for Qwest VPN products and services. Additionally, my group assumed technical management, mentoring, and teaching responsibilities for the Tier II NOC. Issues my group dealt with on a daily basis included both customer and NOC troubleshooting support for customer-requested management escalations, intermittent circuit problems, complex BGP, MLPPP, MPLS, GRE tunnel, IPSEC, VPN, and VPRN issues, DSL issues, and other engineering matters (a BGP triage sketch follows this list). Unlike the NOC, whose responsibility ended at the Qwest edge (WAN) routers, my group's personnel possessed both the LAN and WAN skills required to troubleshoot customer issues from the Qwest network to well within the customer's internal network. Responsibilities included managing more than 1,000 Cisco, Juniper, Nortel (Shasta VPN router), Tasman/Tierra (MLPPP router), Redback (DSL router), Extreme (switch), Riverstone, and Ascend devices spanning our nationwide fiber optic backbone.
- The job required our group members to be jacks of all trades in both LAN and WAN technologies, protocols, and issues.
- We provided (essentially) LAN/WAN consulting services for current or potential Qwest customers who needed assistance in successfully implementing, migrating, or upgrading their current IT environments to integrate with the Qwest fiber optic backbone. We helped customers integrate new and old technologies with our backbone, including DNS, DHCP, BootP, GRE and IPSEC tunnels, VPN, Voice over IP (VoIP), PPTP, RAS, Citrix, 802.11 wireless, OSPF, RIP, EIGRP, IGRP, and BGP.
- Assisted the sales team with unusual or complex customer problems/issues.
- Assisted and/or accompanied sales teams on sales proposals for potential and current Qwest customers.
- Assisted the implementation group on difficult or complicated customer installs.
- When requested, assisted the Qwest IP Backbone Engineering group with new Qwest router and switch installations and upgrades.
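As an aside on the Tier III BGP troubleshooting mentioned above, the first step of triage is typically spotting neighbors stuck outside the Established state. Below is a minimal sketch of that check in Python, assuming the netmiko library; the router address and credentials are hypothetical placeholders, not anything from the Qwest network.

    # BGP triage sketch: flag neighbors that are not Established.
    # Host and credentials are hypothetical placeholders.
    import re
    from netmiko import ConnectHandler

    conn = ConnectHandler(
        device_type="cisco_ios",
        host="192.0.2.1",      # placeholder edge router
        username="noc",
        password="example",
    )
    output = conn.send_command("show ip bgp summary")
    conn.disconnect()

    # In 'show ip bgp summary', the last column of each neighbor row is a
    # prefix count when the session is Established, otherwise a state name
    # (Idle, Active, Connect, ...).
    for line in output.splitlines():
        fields = line.split()
        if fields and re.match(r"^\d+\.\d+\.\d+\.\d+$", fields[0]):
            state = fields[-1]
            if not state.isdigit():
                print(f"neighbor {fields[0]} is down: {state}")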
Electronic Data Systems (EDS), Sterling, VA, Jan. 1999 – May 2000
Contractor / Senior Network Engineer (Project Manager) at the Department of Energy (DOE), Germantown, MD

- As part of a team of three design engineers, designed and implemented the first-ever Department of Energy corporate (ATM) LAN/network for DOE employees. As a base network, we took previously existing (loosely scattered) DOE Frame Relay circuits and transformed/upgraded them collectively and cohesively into a single ATM-based corporate network. Configured and installed new 3810, 7204, and 7505 routers onto the new network.
- Tasks included taking 38 independent point-to-point Frame Relay connection sites and connecting them to form one DOE corporate network.
- Upgraded Frame Relay technology to ATM.
- Traveled to all DOE locations throughout the country to install routers and confirm connectivity to the new ATM corporate network.
- Set up a DOEnet help desk to handle the conversion-type problems associated with a new network implementation.
- Set up monitoring software (HP OpenView) to monitor and maintain the many routers on the new network (a minimal reachability-monitoring sketch follows this list).
- Spearheaded the Department of Energy VPN pilot using the Compatible Systems IntraPort 2 VPN Access Server. This pilot tested the feasibility of using Internet VPNs both as an alternative to dial-up access for remote sites and telecommuters, and as an alternative to the costly T-1 ATM access for DOE site connectivity onto the DOEnet ATM network.
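To illustrate the router monitoring in the list above: HP OpenView polled every device on the new network, and the same basic reachability check can be sketched in a few lines of Python using only the standard library. The addresses are hypothetical placeholders, and the ping flags assume a Linux host.

    # Minimal reachability monitor; router addresses are placeholders.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    ROUTERS = ["10.1.1.1", "10.1.2.1", "10.1.3.1"]   # hypothetical loopbacks

    def reachable(host: str) -> bool:
        """Send one ICMP echo with a 2-second timeout (Linux ping flags)."""
        result = subprocess.run(
            ["ping", "-c", "1", "-W", "2", host],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        return result.returncode == 0

    with ThreadPoolExecutor(max_workers=10) as pool:
        for host, up in zip(ROUTERS, pool.map(reachable, ROUTERS)):
            print(f"{host}: {'up' if up else 'DOWN'}")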
MRJ Technology Solutions, Fairfax, VA, Jun. 1997 – Jan. 1999
Network Support Engineer (w/ Top Secret clearance)

- One of three network support engineers in the MIS department at MRJ's corporate headquarters.
- Responsible for administering and maintaining the classified LAN/WAN and supporting the unclassified LAN/WAN. The classified LAN had approximately 150 users on a 10Base2 (coax) topology.
- Responsibilities included help desk support and wiring and cabling new offices within our building.

OAO Corp., Greenbelt, MD, May 1996 – May 1997
Systems Analyst (w/ Secret DoD clearance), on a one-year assignment with SAM/OPAM's Small Computer Support Center Help Desk at the Pentagon

Apcom Inc., Gaithersburg, MD, July 1992 – May 1996
Electronics Test Technician

Quantex Corp., Rockville, MD, July 1988 – July 1992
Electronics/Engineering Technician

Hunter Labs Inc., Reston, VA, Dec. 1987 – June 1988
Electronics/Engineering Technician (temporary position)

Telecommunications Techniques Corp. (TTC), Gaithersburg, MD, Sept. 1987 – Nov. 1987
Prototype Technician (temporary position)

Avelex Inc., Lanham, MD, June 1986 – June 1987
Senior Production Test Technician (temporary position)

Hekimian Labs Inc., Gaithersburg, MD, Oct. 1981 – June 1986
Electronics Technician / Assistant Supervisor / Engineering Technician

HARDWARE AND SOFTWARE SKILLS
· Extensive experience working with the ASR 9k and 1k and Nexus 9k, 7k, and 2k families of routers; Cisco 7600, 6500, 7200VXR, 4800, 4500, 3800, 3500, 2800, and 1800 routers/switches; as well as Cisco ASA 5580, 5550, and 5510s utilized for firewall and remote-access functions.
· Has experience and expertise with HP OpenView and Monolith network management and monitoring systems, VPNs, firewalls, LAN probes/taps, and network analysis equipment, as well as implementation of 10/100 Gigabit Ethernet equipment and systems.
· Has expertise with protocols and services such as 1, 10, and 100 Gigabit Ethernet, DWDM, TCP/IP versions 4 and 6, Voice over IP (VoIP), wireless technologies, MPLS, VPLS, BGP, OSPF, EIGRP, Multicast, NLSP, HSRP, IGMP, IPSec, and High Availability (HA) IPSec.
· Has expertise in creating network diagrams using software such as Visio, and has experience leading senior, mid-level, and junior staff.
· Very capable of performing complex technical analysis of network systems and their components.
· Works very effectively as a team player, as well as a team leader.
· Communicates very effectively with team members, managers, and Government personnel.

HONORS AND AWARDS
- ITIL Version 3
- CompTIA A+ (2010 – still current)
- CompTIA Network+ (2010 – still current)
- CompTIA Security+ (2010 – still current)

ADDENDUM: Enterprise Architect Profile
Summary: I am a Senior IT Networking Professional possessing 34 total years of technical experience: 15 years in the field of Electrical Engineering and 19 years in the field of Information Technology.
In the 10 years I have been working at the National Institutes of Health, I have been tasked with performing multiple architectural functions that go beyond standard or common engineering design. These architectural designs were for new, NIH enterprise-wide services that I not only had to design and implement; I also had to become a participating member of the upper-management process of defining the policies, processes, and procedures for how these systems and services were to be deployed, utilized, best secured, managed, monitored, and documented.
I successfully architected and implemented the NIH Enterprise Extranet site-to-site VPN DMZ service (2006) and the NIH Enterprise Cloud VPN connectivity service (2013), and co-architected the NIH enterprise-wide IPv6 readiness and deployment initiative (2014). These systems have been widely utilized throughout the NIH community and will continue to be important offerings within the NIH IT Services Catalog.
Architectural Solutions Profile 1

Customer: National Institutes of Health
Project Name: NIH Enterprise Extranet site-to-site VPN DMZ
Project Dates: 2005 to 2006
Applicant Role: Lead Network Engineer

Overview:
In 2005, the way NIH securely communicated with external medical research partners was via site-to-site/host-to-host VPN encryption terminating on the NIH Titan mainframe, located within the NIH Building 12 data center. These VPN connections were primarily dedicated and limited to special NIH cases where NIH data-center-hosted applications needed to exchange transactional database information with an external medical entity. The Titan mainframe was end of life and being retired, creating an urgent need for a new VPN termination point to host all of the mainframe's encrypted VPN tunnels. It should also be noted that the mainframe was, during that timeframe, using VPN parameters (encryption and hashing) that were considered somewhat outdated.
Architecture:
I was tasked with architecting a new service that would not only allow secure site-to-site medical research collaboration for NIH data center customers and their partner organizations, but would also be expanded to an enterprise level, supporting this kind of secure site-to-site medical records and data collaboration for ANY department or section within the NIH that needed it.
The requirements gathering phase took several months to complete. During this phase, I had to understand how the site-to-site encrypted VPN transmission of data was being done to date, then compare it to the latest VPN security best practices and guidelines at that time. I had to gather requirements from, and periodically meet with, the existing mainframe owners, the application owners who utilized the mainframe VPN service, and multiple security groups (application firewall teams, the corporate firewall team, and the corporate network security and policy teams) to come up with what we felt was the best model of what this service should look like to NIH customers, and to produce a set of processes and procedures for the various security and network teams to follow to make this new proposed service not only widely available but efficient in its process. Discussions pertaining to FIPS, HIPAA, FISMA, and PII (Personally Identifiable Information) were very important, as there would be many legal ramifications if such medical data were improperly handled in such an environment.
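To make the comparison against best practices concrete, the check amounts to auditing each tunnel's IKE/IPSEC parameters against an approved baseline. The Python sketch below uses hypothetical baseline values standing in for the guidance of that era, not the actual NIH policy.

    # Hypothetical IKE/IPSEC parameter audit; baseline values are illustrative.
    BASELINE = {
        "encryption": {"3des", "aes-128", "aes-256"},  # acceptable ciphers
        "hash": {"sha-1", "sha-256"},                  # acceptable hashes
        "dh_group": {2, 5, 14},                        # acceptable DH groups
    }

    def audit(proposal):
        """Return findings for any parameter that falls below the baseline."""
        findings = []
        for key, allowed in BASELINE.items():
            if proposal.get(key) not in allowed:
                findings.append(f"weak {key}: {proposal.get(key)}")
        return findings

    # Parameters illustrative of the 'somewhat outdated' mainframe tunnels.
    legacy = {"encryption": "des", "hash": "md5", "dh_group": 1}
    print(audit(legacy))   # flags all three parameters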
After several months of discussions with the various security and networking groups and teams, I then had to produce an architectural design that not only involved components such as routers, switches, and firewalls; I also had to architect and document the actual processes, procedures, and group workflows for the new NIH enterprise-wide VPN service.

The design effort spanned nearly a year from start to finish. As of today, the NIH Enterprise Extranet site-to-site VPN DMZ has, over the last 10 years, become the main portal for all 26 NIH Institutes and Centers to securely exchange important medical collaboration data, transcripts, and images with their medical research business partner locations throughout the world.
Architectural Solutions Profile 2

Customer: National Institutes of Health
Project Name: NIH Cloud VPN service
Project Dates: October 2013 – August 2014
Applicant Role: Lead Network Engineer

Overview:
In May 2013, my engineering group received word from our branch chief of multiple CIO queries as to what the plans were for my division, the NIH Center for Information Technology, to support NIH customers in leveraging cloud networks and environments from providers such as Amazon Web Services (AWS) and Microsoft Azure.
Architecture:
I was tasked with architecting the best design and creating a new NIH Cloud VPN service that would support efforts by any of NIH's 27 Institutes and Centers (ICs) to utilize cloud connectivity for remote backup/replication, disaster recovery (DR), continuity of operations (COOP), remote cold, warm, or hot standby site failover, and remote software development, production, and quality assurance purposes.
The National Heart, Lung and Blood Institute (NHLBI) was the front-runner and most aggressive proponent of NIH cloud connectivity during that timeframe. NHLBI made it very clear that they wanted this new cloud service in place and up and running as soon as possible. NHLBI was the customer use case I would use as the basis for my architectural design.

In response to the overwhelming inquiries pouring in from multiple Institutes and Centers (ICs), a cloud task force was formed. This task force, of which I was made a member, was made up of application, security, management, and networking personnel from various ICs. It was tasked with establishing the framework (processes, procedures, and workflow) under which this new proposed NIH service would operate. My major input to this cloud task force was that my network design and architecture should be such that any potential AWS or Microsoft Azure cloud connection would be secured or locked down in such a way as to be considered as close to a trusted connection as possible. I postulated that these proposed cloud connections should be seen as no different from any newly deployed NIH building or location being brought up in the Bethesda metro area. I felt the key to the success of this architecture was in the end-to-end security controls that would need to be in place for the path (NIH – Internet – AWS/Azure) between the NIH and its Amazon or Microsoft cloud vendor. The task force concluded that the key to achieving the end-to-end security controls we sought was ensuring that the Amazon cloud network the NIH was to inhabit went through the FedRAMP compliance process.
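For flavor, the connection model this architecture relies on can be sketched with today's AWS SDK for Python (boto3): a customer gateway representing the on-premises VPN endpoint, a virtual private gateway attached to the cloud VPC, and an IPSEC VPN connection between them. All IDs, addresses, and the ASN below are hypothetical placeholders; this illustrates the model, not the actual NIH deployment (which predates boto3).

    # Site-to-site cloud VPN provisioning sketch; all identifiers are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Customer gateway: the on-premises VPN endpoint (placeholder ASN/IP).
    cgw = ec2.create_customer_gateway(
        BgpAsn=65000,
        PublicIp="198.51.100.10",
        Type="ipsec.1",
    )["CustomerGateway"]

    # Virtual private gateway: the AWS-side endpoint, attached to the VPC.
    vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
    ec2.attach_vpn_gateway(
        VpnGatewayId=vgw["VpnGatewayId"],
        VpcId="vpc-0123456789abcdef0",   # placeholder VPC
    )

    # The VPN connection itself: a pair of IPSEC tunnels between the two.
    vpn = ec2.create_vpn_connection(
        CustomerGatewayId=cgw["CustomerGatewayId"],
        VpnGatewayId=vgw["VpnGatewayId"],
        Type="ipsec.1",
        Options={"StaticRoutesOnly": True},
    )["VpnConnection"]

    # Route the on-premises range (placeholder) over the tunnels.
    ec2.create_vpn_connection_route(
        VpnConnectionId=vpn["VpnConnectionId"],
        DestinationCidrBlock="10.0.0.0/8",
    )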
For several months, as my overall design was being created, I had multiple meetings with the NIH cloud task force, CIT management, NHLBI management, NHLBI application teams, NHLBI's cloud architecture integrator personnel and project manager, and Amazon Web Services (AWS) sales support engineers and project manager. These meetings were to ensure not only that all team members understood their own tasks and timelines, but also that they understood everyone else's.
In the virtualized architecture of the AWS cloud environment, all internal configurations and connectivity of the service typically run on the Linux platform. This added complexity to my overall architectural design, as the design could only be implemented by NIH network personnel familiar with hardware and virtual routers, firewalls, Linux operating systems, and GUI-based firewall configurations. It was fortunate that I have experience with all of these environments and was able to successfully implement the entire architecture with the assistance of one other engineer.