Wednesday, November 2, 2016

GNS3 Server

Around this time last year, I purchased a subscription to Cisco's VIRL, a decision I had spent the prior year debating. I was excited to get everything up and running but, to be completely honest, after about a month and a half of experimenting with VIRL, I was disappointed. VIRL's hardware requirements were too high for most self-funded labbers, setting up and deploying simulations often ate up a third of my lab time, and many key features were missing from the VMs despite the product coming directly from Cisco. This is not to bash VIRL; I just don't believe the product had matured enough before it was released. To be fair, I have not used VIRL since earlier this year, so the issues I'm mentioning are not only a matter of opinion but may also have been fixed since then. I applaud the effort and the idea behind VIRL, but in my opinion it still doesn't quite measure up to the venerable GNS3, which brings me to the purpose of this post. 

GNS3 has been an invaluable study tool. I am always a fan of buying the real hardware, but for larger topologies with medium to high levels of network redundancy, that can be cost-prohibitive. After I benched VIRL, I got back into GNS3, since I was moving into the ROUTE portion of the CCNP. GNS3 is great, but using it across multiple clients and maintaining several copies of topologies across those clients (a combination of Windows and Ubuntu) was becoming time-consuming. To resolve that, I decided to start saving all of my topologies to my ownCloud (FreeNAS) server. Deploying and saving them from there was easy, and I didn't have to maintain multiple copies of the same topology across several devices. I could build a topology on my Ubuntu desktop, make changes, close it, and see those changes available in one of my Windows 10 VMs or on my Ubuntu laptop. That solved one issue. The other issue was that my topologies were growing in size. GNS3 is much easier on system resources than VIRL, but I was pushing the limits. 

I decided to repurpose a Supermicro server that I bought several years ago and had since been using as an ESXi test server alongside my primary ESXi host. I loaded Ubuntu Server 16.04 on it and installed GNS3 server. Installing GNS3 server on Ubuntu Server takes less than 15 minutes and was easily done via these instructions that I found in a quick search. The instructions note that GNS3 runs on port 8000, but GNS3 changed the default port to TCP 3080 several months ago. 
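For reference, the whole install boils down to just a few commands. Here's a rough sketch of what it looked like on Ubuntu Server 16.04 (the PPA and package names are from memory, so verify them against the current GNS3 documentation):

```shell
# Add the GNS3 team's PPA and install the server package
sudo add-apt-repository ppa:gns3/ppa
sudo apt-get update
sudo apt-get install gns3-server

# Run the server; recent versions listen on TCP 3080 by default
gns3server --host 0.0.0.0 --port 3080
```

From there, each GNS3 client just needs the server's IP and port configured under its remote server settings.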

In any case, running GNS3 on a dedicated server has proven to have numerous advantages. First and foremost, it reduces the load on a standard or even higher-end desktop or laptop. I'm able to run much larger topologies on the server with its hyper-threaded quad-core Xeon and 24GB of DDR2. It's an older server, but it gets the job done. 


Below is a sample of one of my CCNP topologies. This one simulates an EIGRP network composed of an enterprise core, a WAN edge, several branch offices connected over Frame Relay and Ethernet links, an Internet edge running iBGP and eBGP, and simulated ISP routers. GNS3 has come a long way from supporting just the older 1700, 2600, 3600, 3700, and 7200 series routers. GNS3 now has a marketplace that lets you integrate several types of appliances as QEMU virtual machines, including Cumulus Linux VX switches, various load balancers, VyOS, several Linux virtual machines (which serve as excellent endpoints), ntopng, Open vSwitch, and much more. It's amazing that I don't hear more about these features. One of the best, to me, is that you can use the same images used in VIRL (if you have a valid subscription and access to the images) to run IOS XRv, ASAv, IOSv, IOSvL2, CSR1000v, and NX-OSv. You basically get VIRL with lower resource consumption.  


The edge and ISP routers in this topology are Cisco IOSv VMs, the core is IOSvL2 (basically an L3 switch), and the WAN aggregation routers are traditional 2600, 7200, and 3600 series IOS images. This is a medium-sized topology with several Linux virtual machines running as network hosts/clients to test communication outside of IOS. My physical server has two interfaces, so I use one for management and one that connects to the GNS3 cloud object for routing traffic from my virtual network to my physical network. My GNS3 network devices and clients receive NTP updates and DNS, and even have regular Internet access through this interface. I connect both edge routers to a dumb switch in GNS3 that allows both to uplink to my physical Catalyst 3750 core switch. They run eBGP to simulate a real Internet connection, and I redistribute those routes into my physical core's OSPF process to route traffic to my upstream Internet gateway. It really is amazing what you can do with GNS3 now. 
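To give a feel for the edge setup, here is a rough sketch of the relevant configuration. All addresses, AS numbers, and process IDs below are invented for illustration, not my actual lab values:

```
! Hypothetical GNS3 edge router: eBGP peering toward the upstream gateway
router bgp 65001
 neighbor 203.0.113.1 remote-as 65000
 network 10.10.0.0 mask 255.255.0.0
!
! Hypothetical router running both protocols: pull the eBGP-learned
! routes into the OSPF process so the physical network can reach them
router ospf 1
 redistribute bgp 65001 subnets
```

With the redistribution in place, the physical core learns the simulated "Internet" prefixes as OSPF external routes and forwards traffic toward the GNS3 edge just like any other next hop.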

Another point that I think deserves more attention: the VIRL image integration gets you a good number of switching features, but definitely not 100% of them. For instance, you can set up HSRP, but any connected hosts are unable to communicate with the virtual IP; they can communicate within the same VLAN, and traffic can be routed to other interfaces, but only via the real VLAN interface IP. When it comes to switching, you don't get anything more than you would get in VIRL. 
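For anyone who wants to reproduce that HSRP behavior, a minimal configuration on one of the IOSvL2 images looks something like this (the addresses and group number are invented for illustration):

```
! Hypothetical HSRP setup on an IOSvL2 SVI; a second switch would carry
! the same 'standby' lines with a lower priority and its own real IP.
! In my testing, hosts could reach the real 10.1.10.2 but not the
! virtual 10.1.10.1.
interface Vlan10
 ip address 10.1.10.2 255.255.255.0
 standby 10 ip 10.1.10.1
 standby 10 priority 110
 standby 10 preempt
```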

Below are more screenshots of the same EIGRP lab. It's a dual-stack lab (IPv4 and IPv6) but routing protocols are only being applied to IPv4 currently. I was testing IPv6 for more basic features like link-local communication, EUI-64, DHCPv6 (to hosts), NDP, and understanding router advertisements and solicitations. 

ICMP communication from a GNS3 VM to the actual Internet. 
Routing table of GNS3 WAN router showing EIGRP-advertised default route.


Telnet session from physical PC to GNS3 router and MTR session.

GNS3 ISP transit router showing BGP routing table.

For ROUTE, GNS3 is proving to be one of my most invaluable tools, especially with all of the new features added. I can't recommend it enough to anyone currently studying routing concepts. If GNS3 ever gets switching integrated 100%, it will truly be a network student's Swiss Army knife. For the time being, it is more than sufficient for routing labs. I also highly recommend setting up a GNS3 server where possible. I tried deploying the GNS3 VM in several forms on ESXi, and it was an insatiable, CPU-devouring vampire no matter how much I tweaked the Idle-PC settings or adjusted resources in ESXi. It simply required more CPU than I could give it as a VM. I'm sure a more knowledgeable ESXi expert could get it to work much better, but I shamelessly went bare metal. Overall, my GNS3 server is proving to be an excellent investment in my studies. 

Have a similar experience? Have recommendations? Feel free to comment.  

Tuesday, November 1, 2016

FreeNAS Build

It's been a while since my last post (over a year). Life has taken me in several different directions since then. In February of 2015, my wife and I found out we were expecting, and in October of 2015 we had a beautiful baby girl. On the career front, I passed my CCNP SWITCH in early March of 2016, and I have been actively working on the ROUTE test. I have been taking my time as I'm in the first year of parenthood and, as always, I want to make sure that I really learn the material, not just pursue a certification with nothing truly gained from the journey. Outside of the Cisco world, I did manage to build a much-needed network-attached storage box (FreeNAS-based). I evaluated building vs. buying, and there wasn't much price difference between what was available for me to buy pre-packaged and what I could build. So I figured, why not build and learn something? 

First things, first... The BoM...

Motherboard:
  • Micro-ATX 
  • LGA 1151
  • 64GB Max Unbuffered DDR4
  • 8x SATA 6.0Gbps Ports
  • Dual 1Gbps LAN
  • Dedicated IPMI 
  • 4x PCI Expansion Slots
CPU:
  • Skylake
  • 3.4 GHz
  • LGA 1151
  • Quad Core / HT 
Memory:
  • 16GB DDR4
  • Unbuffered
Boot Storage:
  • 80 GB 
  • Enterprise SSD
Datastore:
  • 4x WD Red NAS HDD
  • 3TB each
  • 5400 RPM 
Power:
  • 650W
  • Modular
  • 80 PLUS Bronze
Case:
  • Fractal Design Node 804 Black (Not racked  -_-' )
  • Micro/Mini-ATX Compatible
  • 10 potential 3.5" disk positions + 2 dedicated 2.5" disk positions 
  • Chambered Design 

So what I ended up building was a cube-style NAS powered by a Skylake Xeon with 32GB of DDR4 memory and dedicated IPMI. For the last three or four years, I had been toying with the notion of building or buying a NAS, and in the event of a 'build', FreeNAS was always my number one OS choice (though I reviewed others). FreeNAS seems to offer more robust and mature features, considering its large community. 

Of course, the most interesting detail about a NAS is the storage. A lot of people in the FreeNAS forums recommended using a USB flash drive for the OS (boot) drive, but I opted for two mirrored SSDs for more reliability. On the datastore front, the most popular choice for a NAS build seems to be the Western Digital Red series drives. Not looking to build a super-high-capacity storage node, I selected four 3TB WD Red NAS HDDs. Using ZFS, I configured the four disks in RAIDZ2, which requires a minimum of four disks and uses two disks' worth of space for parity. RAIDZ2 (comparable to RAID 6) configurations can withstand the loss of up to two disks in the vdev. 
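The arithmetic on usable space is simple: a RAIDZ2 vdev gives up two disks' worth of capacity to parity. A quick sketch of the math (the function name is mine, and real usable space will come in a bit lower once ZFS metadata and filesystem overhead are accounted for):

```python
def raidz2_usable_tb(disks: int, disk_tb: float) -> float:
    """Raw usable capacity of a single RAIDZ2 vdev in TB.

    Two disks' worth of space goes to parity, so usable space is
    (n - 2) * disk size, before ZFS metadata/overhead.
    """
    if disks < 4:
        raise ValueError("RAIDZ2 requires at least 4 disks")
    return (disks - 2) * disk_tb

# Four 3TB WD Reds -> 6.0 TB raw usable
print(raidz2_usable_tb(4, 3.0))
```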

Check out the final product. 

This NAS build very quickly became an integral part of my lab and even my family's personal use. On the lab side, I use the NAS as a backup for my network configurations and templates, and as a software repository. The biggest impact the NAS has had on my labs has been its integration with my ESXi server. I use Veeam to back up all of my VMs to the NAS over SMB. I also use NFS to host my virtual machines and ISO files, with little to no performance difference while greatly increasing my storage flexibility.
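For anyone curious, mounting an NFS export from the NAS as an ESXi datastore is a one-liner from the host shell. The hostname, export path, and datastore name below are placeholders, not my actual values:

```shell
# Hypothetical example: mount a FreeNAS NFS export as an ESXi datastore
esxcli storage nfs add --host=freenas.lab.local \
    --share=/mnt/tank/vmstore --volume-name=nas-nfs

# Verify the datastore is mounted
esxcli storage nfs list
```

Once mounted, the datastore shows up alongside local storage, and VMs or ISOs on it can be registered and used like any other.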

On the personal side, my wife and I use the NAS as a backup for our computers and phones, with all of our family pictures and files copied there regularly. I run both Plex and ownCloud jails on the NAS for home media server and Dropbox-like functionality, respectively. I also create images of all of our home computers using Clonezilla and store them there, and I've successfully tested restoring from them. 

So far I have been running my FreeNAS solution 24/7 for six months with zero issues. One of the best things I can say about FreeNAS is the all-around flexibility of the system. Currently I run several file sharing protocols from the same box, including CIFS/SMB, FTP, and NFS. This flexibility makes the NAS accessible from almost any device and any OS. 

I would highly recommend FreeNAS as the OS of choice for anyone looking to build a personal NAS. It's very feature-rich, and FreeBSD (on which it's based) is a solid, dependable operating system. The only drawback is that, depending on your build, FreeNAS has some recommended hardware minimums that may not be friendly to builders on a tight budget. My build is likely considered small but still totaled around $1700 to assemble. With any NAS build, the largest percentage of the cost will depend on your desired storage capacity and configuration. I built for scale (add more later) ;-) Another bit of advice for anyone looking to build a FreeNAS-based storage system: check out the FreeNAS forums. I never directly posted, but I found the advice from users there extremely helpful, especially from the moderator, cyberjock. FreeNAS can be one of those builds where you have to be very selective about the hardware, and it would be very wise to use a starting point like the forums. 

Have a similar build or are you looking to build something similar? Feel free to comment.