Sunday, December 8, 2013

ESXi Build Complete Part I: Server Framework

I finally completed my ESXi server build, and without further delay, here is the final build, complete with part numbers, for anyone who might want to build using my framework.
OEM | Item | Manufacturer # | Qty | Unit $ | Ext. $
ASUS | Barebones Rack Server | RS500-E6/EPS4 | 1 | 599.99 | 599.99
Intel | Xeon E5620 LGA1366 CPU 2.4GHz | BX80614E5620 | 2 | 410.99 | 821.98
Crucial | 16GB 1066 DDR3 SDRAM | CT16G3ERSLQ81067 | 2 | 196.99 | 393.98
Intel | S3500 120GB SSD | SSDSC2BB120G401 | 2 | 149.99 | 299.98
Western Digital | 1TB AV-GP HDD | WD10EURX | 2 | 79.99 | 159.98
Adaptec | 6405E RAID Controller/Card | 2270800-R | 1 | 194.99 | 194.99
SilverStone | SSD/HDD Adapter | SDP09 | 2 | 14.99 | 29.98
Rosewill | SAS-SATA 1-4 Breakout Cable | SFF-8087 | 1 | 21.99 | 21.99
Total = roughly $2,500 before shipping and handling or any applicable taxes.

Below are my server specifications with the hardware purchased. 
Category | Specification
Operating System | VMware ESXi 5.1 (yes, I know 5.5 is out)
CPU | Dual quad-core / 8 physical cores / 16 logical cores
RAM | 32GB DDR3 RDIMM
Storage (SSD) | 2x 120GB SSDs
Storage (HDD) | 2x 1TB HDDs
RAID (hardware) | Array 1: 2x 120GB SSD, RAID 1; Array 2: 2x 1TB HDD, RAID 1
Network | 2x Intel Gigabit LAN
Management | 1x iKVM MGMT LAN

I was able to purchase nearly all of the equipment from Newegg, except for the processors, which happened to be out of stock at the time.

ASUS
Thus far, everything has worked PERFECTLY. And let me take this time to say that, in my opinion, ASUS is incredible. If you want high power at low cost, ASUS is the perfect way to go. My high-powered PC that I built three years ago, my Vyatta router/firewall, and this server are all ASUS-based machines, and I haven't had any issues with any of them.

The Build
Assembly was relatively easy. I received the processors first, seated them, applied thermal paste, and installed the included heatsinks. The other parts arrived last Friday, and I put everything together with relative ease.

RAID
I built the RAID configuration next, setting up two arrays. The first array consists of two 120GB SSDs. I went with RAID 1 (mirroring) for simplicity's sake and because I was limited to four drives. The OS (ESXi) is mirrored on this array, which took about 20 minutes to build and verify. The second array is for the Datastore, where the virtual machines and ISO files are stored. This array consists of two 1TB HDDs, also in a RAID 1 configuration. Array II took 2 hours to build and verify.
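I did all of this from the Adaptec RAID BIOS at boot time, but for anyone who prefers a command line, Adaptec controllers can also be driven by the arcconf utility. The sketch below shows roughly what the same setup would look like; the controller number and channel/device IDs are placeholders for illustration, not my exact values:

```
# Sketch only: build a RAID 1 logical drive with Adaptec's arcconf CLI
# (controller number and channel/device IDs below are placeholders;
#  run the first command to find the real ones)
arcconf getconfig 1 pd                       # list physical drives on controller 1
arcconf create 1 logicaldrive max 1 0 0 0 1  # RAID 1, max size, channel 0 devices 0 and 1
arcconf getconfig 1 ld                       # check logical drive state / build progress
```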

OS Install
I went with VMware ESXi 5.1 as my chosen hypervisor. Yes, I know that 5.5 is out, but there are a lot of limitations with the free version. The biggest, from what I have heard, is the elimination of the vSphere Client, which is how the virtual machines are managed from a remote PC. From my understanding, ESXi is becoming more and more dependent on vCenter, which now comes in the form of an appliance that IS NOT free. Until they iron out the details of how they will support management for free users, I will stick with 5.1.

Caveats
The ONLY snag I hit setting up the server was the RAID controller. The first thing any potential builder should know is that ESXi DOES NOT currently support software (built-in/integrated) RAID controllers. You must use a hardware controller/card if you want to use RAID with ESXi. Also, because I used the Adaptec 6405E RAID controller, I had to use the ESXi ISO Customizer to create a custom ESXi installable ISO so that ESXi could detect the array during OS installation. That software can be found here.
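For reference, what the customizer does, slipstreaming a driver into the install image, can also be sketched with VMware's PowerCLI Image Builder cmdlets. This is only an outline of the idea; the depot/bundle file names and the driver package name below are placeholders, not the exact files I used:

```
# Sketch: add a storage driver to an ESXi 5.x image profile and export a custom ISO
# (file names and the driver package name are placeholders)
Add-EsxSoftwareDepot .\ESXi-5.1-offline-depot.zip
Add-EsxSoftwareDepot .\adaptec-driver-bundle.zip
New-EsxImageProfile -CloneProfile "ESXi-5.1.0-standard" -Name "ESXi-5.1-custom" -Vendor "lab"
Add-EsxSoftwarePackage -ImageProfile "ESXi-5.1-custom" -SoftwarePackage "scsi-aacraid"
Export-EsxImageProfile -ImageProfile "ESXi-5.1-custom" -ExportToIso -FilePath .\ESXi-5.1-custom.iso
```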

Remote Management
Along with standard remote management through the vSphere Client, I am also able to manage the server from the BIOS level up (including viewing the ESXi shell) using the server's ASMB4-iKVM module.

Now check out the pictures!
RAID Array I (OS) Building from Adaptec RAID BIOS.

RAID Array II (Datastore) Building
 
RAID Array health looks great.
  
After creating the custom ISO, ESXi detects the arrays.


 
ESXI is installed and server is racked and powered on.

Nice pic of the LEDs.
Another nice LED pic.
Rear pic of the LAN ports and RAID controller (red LED).
 
The two servers of my CCNP lab.
In the next post, I will detail some OS (ESXi) and vSphere Client photos!

Friday, November 29, 2013

Hypervisor Server Spec Change

So, plans have changed somewhat. I have decided to go from an AMD-based virtual machine host to an Intel-based one, and I have actually begun ordering parts. So far, the specifications are:

ASUS RS500-E6/EPS4 1U Rackmount Server
        +LGA1366 Socket
        +192GB RDIMM (Max, 12x 240 Pin Slots)
        +2x 10/100/1000 Intel 82574L LAN Connections
        +1x ASMB4-iKVM (Management) RJ45
        +4x Hot-swap bays
        +2x PCI Riser Cards (for 1U servers)
        +2x Heatsinks
        +CD/DVD Optical Drives
2x Intel Xeon E5620 Westmere LGA1366 (2.4GHz, Hyperthread, VT-x, Turbo Boost)
        + 8 Physical Cores / 16 Logical Cores with Hyper-Threading
2x Crucial 16GB 240-Pin DDR3 1066 SDRAM (Registered)
        + 32GB Total 

Above is what has been decided on. Due to my desired RAID configuration, I am still deciding on storage for both the OS and the Datastore. I initially considered SSDs in a mirrored configuration for the OS, but I have read conflicting reports about SSD lifetime under heavy read/write loads and about (affordable) SSDs suffering performance loss in RAID configurations.

The server was bought from Newegg (http://newegg.com) and the processors were purchased from Directron (http://directron.com); this is my first buy from Directron, and I'm still awaiting arrival of the CPUs. Newegg is simply the best place to buy PC/server parts for builders, but I am interested to try Directron.

Check out the pictures below. 

The first thing all of my lab equipment touches is my counter.

Nice quality 1U server. Hopefully it performs just as well.

Hot-swap bays give an enterprise feel to my lab server.

I like to build but there is nothing wrong with pre-built.

Everything is in order including unexpected riser cards.

CPU, Memory, Storage, and other parts are not included.

3 RJ45 connections. The isolated NIC is for management.

Open 3.5" Hot-swap bays.

My wife's cat, Ramsay, makes her debut on my blog.

I racked it to keep it out of the way until the build is complete.

I spaced it from the other servers to keep air-flow sufficient.

Monday, November 4, 2013

ESXi Server Realized

So, my tentative plans to build an ESXi server have become more concrete. My wife's Christmas present to me consists of letting me budget to build a (most likely ESXi) host. So far, I have the hardware lined up, and the total cost is looking to be around $2,500.

Below are the specifications I have lined up.
  •  ASUS RS500A-E6/PS4 1U Rackmount Barebone Server 
    • Dual G34 Socket (Opteron 6300/6200/6100)
    • 256GB RDIMM 1600/1333/1066/800 DDR3
    • Dual Intel 82574L 10/100/1000 NIC
    • 1x MGMT LAN
    • 4x 3.5 Hotswap Bays
  • 2x AMD Opteron 6320 Abu Dhabi 2.8GHz 8-Core
  • Wintec 32GB (4x8GB) 240-Pin DDR3 Registered
  • 2x Kingston 120GB SSDNow (OS)
  • 2x Western Digital 1TB WD AV-GP (VM/Datastore)
  • 1x Intel EXPI9301CT 10/100/1000 NIC
Those are just the essentials. I hope to have everything assembled before or during the week of Christmas. You might ask why an aspiring CCNP wants to experiment with virtualization. Besides virtualization being another of my interests, it has also become a fact of life in most datacenters as well as becoming increasingly prevalent in networking with the advent of virtualized network elements. If nothing else, it adds more variety to my growing lab.   

Sunday, October 20, 2013

IPsec VPN Tunnel

As you can tell by the title of this post, today was a milestone in my lab: I am starting to reap the fruits of my security studies. Most of the CCNA Security study I have been doing (CBT Nuggets) has been leading up to VPNs. Yesterday I completed, and today I fully tested, a site-to-site VPN in my lab.

I was able to establish a site-to-site VPN in my lab over my existing frame-relay topology. Usually, when people think of VPNs, they think of two VPN gateways establishing a secure private tunnel over the internet. In most practical cases, this is true. However, a VPN can be tunneled over almost any routed infrastructure, including a private frame-relay or MPLS network. As long as the two VPN servers/gateways (usually firewalls) can talk to each other and agree on hashing, authentication, group, lifetime, and encryption protocols, as well as the allowed traffic, you should be able to establish a site-to-site VPN. Below is a sample of my site-to-site VPN lab.
This device is my Zone-Based Firewall (Cisco 2821 router). The capture above is a simple monitoring tab for the site-to-site VPN. This router runs the Advanced Security IOS, which supports the Cisco Configuration Professional feature.

My ASA at the HQ site is the peer to the Zone-Based Firewall. Both devices negotiate the VPN parameters and establish the SA (Security Association) and IPsec tunnel. I tested the tunnel for over 14.5 hours without failure.
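I monitored the tunnel through the GUIs, but for anyone who would rather verify from the CLI, the usual IOS show commands tell the same story (a quick sketch; on an established IKEv1 tunnel, phase 1 should show a state of QM_IDLE):

```
show crypto isakmp sa     ! IKE phase 1 SAs (look for state QM_IDLE)
show crypto ipsec sa      ! IKE phase 2 SAs, plus packet encrypt/decrypt counters
show crypto session       ! summary of active crypto sessions
```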

Below is a simple diagram of how my topology was designed. Keep in mind that routes for the destination networks supported by the VPN were not directly advertised; that would defeat the purpose. Basically, traffic was generated from the local network supported by the VPN. Once that traffic reached the device's default gateway (the edge device/firewall), that device could find no route to the remote network in its routing table, but it found a usable path through the VPN. At that point, the VPN tunnel is negotiated and established, and from there, traffic is free to traverse it.

The goal of this site-to-site VPN was to permit traffic from the 10.11.1.0/24 and 10.11.2.0/24 networks behind the Zone-Based Firewall to the 10.0.2.16/24 network behind the HQ ASA and the 192.168.3.0/24 network behind my Internet Gateway/Core Router, WITHOUT advertising those networks via static routes or dynamic routing protocols.

VPN is very simple in nature yet very complex at the same time. The key to configuring a site-to-site VPN is making sure that both sides agree on the same protocols. As you can see, in my VPN I used:

Hashing - SHA1
Authentication - PSK
Group - DH2
Lifetime - 86400
Encryption - AES128

I used these parameters for both tunnels: IKE phase 1 and IKE phase 2. The first tunnel simply establishes whether the VPN gateways are willing to talk to each other. If those conditions are met, the second tunnel establishes the conditions for the traffic that will be allowed over the VPN from nodes on their respective networks. If you get ONE parameter off, everything fails. The only flexibility is in the lifetime; other than that, everything has to match.
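To make that concrete, here is a rough sketch of the IOS side of a site-to-site VPN using those same parameters. I configured mine through CCP, so this is the CLI equivalent in outline form; the peer address, pre-shared key, interface, and networks below are placeholders, not my actual lab values:

```
! IKE phase 1 (ISAKMP) policy: AES-128, SHA1, PSK, DH group 2, 86400s lifetime
crypto isakmp policy 10
 encryption aes
 hash sha
 authentication pre-share
 group 2
 lifetime 86400
crypto isakmp key MyPreSharedKey address 203.0.113.2
!
! IKE phase 2: transform set (AES-128 / SHA1-HMAC)
crypto ipsec transform-set TSET esp-aes esp-sha-hmac
!
! "Interesting traffic" that triggers and is permitted over the tunnel
access-list 110 permit ip 10.11.1.0 0.0.0.255 10.0.2.0 0.0.0.255
!
! Crypto map ties peer, transform set, and interesting traffic together
crypto map CMAP 10 ipsec-isakmp
 set peer 203.0.113.2
 set transform-set TSET
 match address 110
!
! Apply the crypto map to the outside (in my case, frame-relay) interface
interface Serial0/0
 crypto map CMAP
```

The access list is what makes the "no advertised routes" design work: the tunnel is only negotiated when traffic matching it arrives at the edge device.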

I highly recommend watching both Jeremy Cioara's (2008) and Keith Barker's (2012) Nuggets on site-to-site VPNs. They both make it a lot easier by explaining the basic fundamentals of establishing a site-to-site VPN, and Keith also introduces a very creative way to remember the required VPN tunnel parameters with the acronym HAGLE: Hashing, Authentication, Group, Lifetime, Encryption.

On another side note, Keith Barker also taught me an important lesson at the end of his CCNA Security series. When building a network, have security in mind before and during the implementation, not after. When establishing an enterprise-level (or larger) network, we as network administrators should always be thinking of ways to secure the network in the planning phases, because it is so much more difficult to implement security after the network has been established, or worse, after it has been compromised.

I REALLY enjoyed performing this lab and watching the VPN come to life. I will probably practice more with site-to-site VPNs and get more comfortable configuring them in different and more complex scenarios. Other than that, next stop, CCNP!