Saturday, December 31, 2016

Proxmox Server Part I: The Hardware

Before I begin, I would like to extend my warmest holiday wishes to all. This Christmas was very special for me and my family as this was my daughter's first real Christmas (she was two months old this time last year). :-)

Since 2013, I have primarily used ESXi as my hypervisor of choice for my lab and home network needs (Domain Controller, DNS, DHCP, RADIUS, etc.). Back in September, my ESXi server began randomly crashing, with no logs indicating what might be happening. While troubleshooting this issue, I recognized the need for some sort of backup. Rather than going the traditional route of adding a secondary ESXi server, I decided to purchase a refurbished server and give Proxmox a try.

The goal was a cost-effective backup that avoided the expense of building a brand new server. Build preferences included:
  1. 2x Xeon CPUs 
  2. 64GB RAM (Minimum)
  3. 4x Hot Swap Bays (Minimum)
  4. 2x Integrated Network Interfaces (Minimum)
  5. ESXi supported Hardware RAID controller (In case Proxmox is abandoned)
  6. 2 Units / Rack (Maximum)
  7. Out-of-Band Management

I was able to find an excellent deal on a Dell R710 server on eBay that met and exceeded all of my specifications (see below).

  1. Dual Xeon X5570 2.93GHz
  2. 64 GB ECC DDR3
  3. 8x 2.5'' SATA Hot swap Bays
  4. 2x 500GB 7.2K Dell Enterprise HDDs
  5. 4x Integrated NetXtreme II Gigabit Network Interfaces
  6. PERC H700 RAID Controller
  7. 2U 
  8. iDRAC6 Support (Out-of-Band)
  9. Dual PSUs 
  10. Rail Kit Included
  11. 90 Day Warranty

Server Price:  $285.00
Shipping:        $47.14

The only items that I needed to purchase separately were additional SSDs for boot (my preference), the actual iDRAC6 interface card (not included with the server), and additional drive caddies.
  1. 2x CISNO 2.5" SAS HDD Tray/Caddy for Dell R710  - $25.80
  2. 5pk Dell Expansion Slot Cover - $5.25
  3. Dell K869T iDRAC6 Interface -  $9.99
  4. Assurant Protection Plan - $11.99
  5. 2x Black Diamond BDSSDS128G-MU 128GB SSD - $62.40 

Subtotal: $115.43
Shipping: $34.03

GRAND TOTAL: $481.60

Though refurbished, that's a lot of horsepower for less than $500. This is especially pleasing considering my 2013 ESXi build, consisting of all-new hardware, cost more than $2500. See the hardware below.

Dell R710 - 2U/Dual Xeon X5570/64GB DDR3/8 Hot Swap Bays

SSDs installed in Bays 0 and 1, HDDs in 2 and 3

Dual Xeon X5570 - 16/18 DIMM slots populated
Quad-Port Integrated NIC / Dual-Port Add-In NIC (Spare) / OOB Interface

Tucked tightly between my ESXi server and lab-only Cisco Catalyst 3750

The only real issue I ran into was that the SSDs I purchased (mentioned above), though listed as enterprise grade on Newegg, were not welcomed by the PERC H700 RAID controller. The controller kept throwing drive-failure errors for slots 0 and 1 (the slots the SSDs occupied). The drives worked perfectly, passed all tests, and were even successfully configured for RAID-1 by the controller, but errors continued to fill the server's logs and the indicator LEDs stayed amber. After doing some research, I found that Dell RAID controllers (and likely those of other OEMs) do not play nice with non-Dell, non-enterprise storage devices. One Dell forum user mentioned that enterprise-grade Intel SSDs can be used without the controllers throwing errors.

Luckily, I had two enterprise 80GB Intel S3510 SSDs being used as mirrored boot drives in my FreeNAS build. As many FreeNAS users will agree, the Intels were probably overkill there, because FreeNAS is generally not picky about its boot drives; in fact, many FreeNAS builders recommend using USB thumb drives as the boot disk. So I removed the Black Diamond SSDs from the server, connected them to the FreeNAS box, mirrored the boot over from the Intels, and verified functionality. Afterwards, I installed the Intel S3510 SSDs in the server and, voila: all disk errors were cleared.

As far as storage is concerned, I ran my typical hypervisor setup of four drives: two SSDs housing the operating system in RAID-1, and two local-storage (data) HDDs in RAID-1.

iDRAC6 worked well with minimal configuration, as with most out-of-band management interfaces. Typical features such as remote power control, console access, virtual media mounting, and logging are all available from the iDRAC6 interface. Of course, I managed all RAID configuration and OS deployment from the iDRAC6 embedded web server.

Overall, the Dell R710 is an excellent piece of hardware to lab with for virtualization practice. In subsequent posts, I will detail my experiences with Proxmox and my overhaul of my virtual infrastructure. I hope everyone has a happy and safe New Year and I look forward to blogging in 2017. 

Wednesday, November 2, 2016

GNS3 Server

Around this time last year, after debating it for a year, I purchased a subscription to Cisco's VIRL. I was excited to get everything up and going, but to be completely honest, after about a month and a half of experimenting with VIRL, I was disappointed. VIRL's hardware requirements were too high for most self-funded labbers, setup and deployment of simulations often ate up a third of my lab time, and many key features were missing from the VMs despite the product coming directly from Cisco. This is not to bash VIRL; I just don't believe the product had matured enough before it was released. To be fair, I have not used VIRL since earlier this year, so these issues are partly a matter of opinion and may have been fixed since then. I applaud the effort and the idea behind VIRL, but in my opinion it still doesn't quite measure up to the venerable GNS3; which brings me to the purpose of this post.

GNS3 has been an invaluable study tool. I am always a fan of buying the real hardware, but for larger topologies with medium to high levels of network redundancy, that can be cost-prohibitive. After I benched VIRL, I got back into GNS3, since I was moving into the ROUTE portion of the CCNP. GNS3 is great, but using it across multiple clients and maintaining several copies of topologies across those clients (a combination of Windows and Ubuntu) was becoming time-consuming. To resolve that, I started saving all of my topologies to my ownCloud (FreeNAS) server. Deploying and saving them from there was easy, and I no longer had to maintain multiple copies of the same topology across several devices. I could build a topology on my Ubuntu desktop, make changes, close it, and see those changes available in one of my Windows 10 VMs or on my Ubuntu laptop. That solved one issue. The other issue was that my topologies were growing in size. GNS3 is much lighter on system resources than VIRL, but I was pushing the limits.

I decided to re-purpose a Supermicro server that I bought several years ago and had since been using as an ESXi test server alongside my primary ESXi host. I loaded Ubuntu Server 16.04 on it and installed GNS3 server. Installing GNS3 server on Ubuntu Server takes less than 15 minutes and was easily done via instructions I found in a quick search. Note that those instructions say GNS3 runs on port 8000, but GNS3 changed the default port to TCP 3080 several months ago.
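For reference, a minimal sketch of that kind of setup. The PPA commands and the config-file path are assumptions based on the GNS3 docs of the era, so verify them against the current instructions:

```shell
# Install GNS3 server from the project PPA (assumed package names; run as a sudoer):
#   sudo add-apt-repository -y ppa:gns3/ppa
#   sudo apt-get update && sudo apt-get install -y gns3-server
# Then bind the server to all interfaces on the newer default port, TCP 3080:
mkdir -p ~/.config/GNS3
cat > ~/.config/GNS3/gns3_server.conf <<'EOF'
[Server]
host = 0.0.0.0
port = 3080
EOF
grep '^port' ~/.config/GNS3/gns3_server.conf
```

After that, the GNS3 client only needs the server's IP and port 3080 in its remote server settings.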

In any case, running GNS3 on a dedicated server has proven to have numerous advantages. First and foremost, this reduces the load on a standard or even higher end desktop or laptop. I'm able to run much larger topologies on the server with a hyper-threaded quad core Xeon and 24GB of DDR2. It's an older server but it gets the job done. 


Below is a sample of one of my CCNP topologies. This one simulates an EIGRP network composed of an enterprise core, WAN edge, several branch offices connected over Frame Relay and Ethernet links, an Internet edge running iBGP and eBGP, and simulated ISP routers. GNS3 has come a long way from supporting just the older 1700, 2600, 3600, 3700, and 7200 series routers. Its marketplace lets you integrate several types of appliances, including Cumulus Linux VX switches, various load balancers, VyOS, several Linux virtual machines (which serve as excellent endpoints), ntopng, Open vSwitch, and much more via QEMU virtual machines. It's amazing that I don't hear more about these features. One of the best, to me, is that you can take the same images used in VIRL (if you have a valid subscription and access to them) and run IOS-XRv, ASAv, IOSv, IOSvL2, CSR1000v, and NX-OSv. You basically get VIRL with lower resource consumption.


The edge and ISP routers in this topology are Cisco IOSv VMs, the core is IOSvL2 (basically an L3 switch), and the WAN aggregation routers are traditional 2600, 3600, and 7200 series IOS images. This is a medium-sized topology with several Linux virtual machines running as network hosts/clients to test communication outside of IOS. My physical server has two interfaces, so I use one for management and one that connects to the GNS3 cloud object for routing traffic from my virtual network to my physical network. My GNS3 network devices and clients receive NTP updates and DNS, and even have regular Internet access, through this interface. I connect both edge routers to a dumb switch in GNS3 that lets both uplink to my physical Catalyst 3750 core switch. They run eBGP to simulate a real Internet connection, and I redistribute those routes into my physical core's OSPF process to route traffic to my upstream Internet gateway. It really is amazing what you can do with GNS3 now.
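To illustrate that last piece, the edge-router side looks roughly like the following sketch. The AS numbers, process ID, and neighbor address are placeholders for illustration, not my actual lab values:

```
! Hypothetical edge router -- all values are illustrative placeholders
router bgp 65001
 neighbor 203.0.113.1 remote-as 65000   ! eBGP to the simulated ISP
!
router ospf 1
 redistribute bgp 65001 subnets         ! hand the learned routes to OSPF
```

The `subnets` keyword matters here: without it, IOS redistributes only classful networks into OSPF.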

Another point that I think gets too little attention: the integration of VIRL images allows for a good number of switching features, but definitely not 100% of them. For instance, you can set up HSRP, but any connected hosts are unable to communicate with the virtual IP; they can communicate within the same VLAN, and traffic can be routed to other interfaces, but only via the real VLAN interface IP. When it comes to switching, you don't get anything more than you would get in VIRL.

Below are more screenshots of the same EIGRP lab. It's a dual-stack lab (IPv4 and IPv6) but routing protocols are only being applied to IPv4 currently. I was testing IPv6 for more basic features like link-local communication, EUI-64, DHCPv6 (to hosts), NDP, and understanding router advertisements and solicitations. 

ICMP communication from a GNS3 VM to the actual Internet. 
Routing table of GNS3 WAN router showing EIGRP-advertised default route.


Telnet session from physical PC to GNS3 router and MTR session.

GNS3 ISP transit-router showing BGP routing table.
For ROUTE, GNS3 is proving to be one of my most invaluable tools, especially with all of the new features added. I can't recommend it enough to anyone currently studying routing concepts. If GNS3 ever gets switching 100% integrated, it will truly be a network student's Swiss army knife. For the time being, it is more than sufficient for routing labs.

I also highly recommend setting up a GNS3 server where possible. I tried deploying the GNS3 VM in several forms on ESXi, and it was an insatiable, CPU-devouring vampire no matter how much I tweaked the Idle-PC settings or adjusted resources in ESXi; it simply required more than I could give it as a VM on the CPU front. I'm sure a more knowledgeable ESXi expert could get it to work much better, but I shamelessly went bare metal. Overall, my GNS3 server is proving to be an excellent investment in my studies.

Have a similar experience? Have recommendations? Feel free to comment.  

Tuesday, November 1, 2016

FreeNAS Build

It's been a while since my last post (over a year). Life has taken me in several different directions since then. In February of 2015, my wife and I found out we were expecting, and in October of 2015 we had a beautiful baby girl. On the career front, I passed CCNP SWITCH in early March of 2016 and have been actively working on the ROUTE test. I have been taking my time, as I'm in my first year of parenthood and, as always, I want to make sure that I really learn the material, not just pursue a certification with nothing truly gained from the journey. Outside of the Cisco world, I did manage to build a much-needed Network Attached Storage (FreeNAS-based) system. I evaluated building vs. buying, and there wasn't much price difference between what I could buy pre-packaged and what I could build. So I figured, why not build and learn something?

First things first... The BoM...

Motherboard:
  • Micro-ATX 
  • LGA 1151
  • 64GB Max Unbuffered DDR4
  • 8x SATA Ports / 6.0G
  • Dual 1Gbps LAN
  • Dedicated IPMI 
  • 4x PCI Expansion Slots
CPU
  • Skylake
  • 3.4 GHz
  • LGA 1151
  • Quad Core / HT 
Memory
  • 16GB DDR4
  • Un-Buffered
Storage (Boot)
  • 80GB Enterprise SSD
Storage (Datastore)
  • 4x WD Red 3TB NAS HDD
  • 5400 RPM
Power
  • 650W
  • Modular
  • 80 PLUS Bronze
Case
Fractal Design Node 804 Black  (Not racked  -_-' )
  • Micro-ATX / Mini-ITX Compatible
  • 10 potential 3.5" Disks + 2 Dedicated 2.5" Disk unit positions 
  • Chambered Design 

So what I ended up building was a cube-style NAS powered by a Skylake Xeon with 32GB DDR4 memory and dedicated IPMI. For the last three or four years I had been toying with the notion of building or buying a NAS, and in the event of a build, FreeNAS was always my number one OS choice (though I reviewed others). FreeNAS seems to offer more robust and mature features, owing in part to its large community.

Of course, the most interesting detail about a NAS is the storage. A lot of people in the FreeNAS forums recommended using a USB drive for the OS (boot) drive, but I opted for two mirrored SSDs for more reliability. On the datastore front, the most popular choice for a NAS build seems to be the Western Digital Red series. Not looking to build a super-high-capacity storage node, I selected four 3TB WD Red NAS HDDs. Using ZFS, I configured the four disks in RAIDZ2, which requires a minimum of four disks and uses two disks' worth of space for parity. A RAIDZ2 (roughly RAID 6) configuration can withstand the loss of up to two disks.
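The capacity math for that layout is simple. A quick sketch: the `zpool` command is roughly what FreeNAS runs under the hood, but the device names are assumptions and in practice the volume is created through the GUI:

```shell
# ZFS RAIDZ2 across the four WD Reds (device names are placeholders):
#   zpool create tank raidz2 ada0 ada1 ada2 ada3
# Two disks' worth of space goes to parity, so raw usable capacity is:
disks=4; parity=2; size_tb=3
usable=$(( (disks - parity) * size_tb ))
echo "usable: ${usable} TB raw (before ZFS overhead)"
```

So the four 3TB drives yield roughly 6TB of raw usable space, before ZFS metadata and padding overhead.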

Check out the final product. 


This NAS build very quickly became an integral part of my lab and even my family's personal use. On the lab side, I use the NAS as a backup for my network configurations and templates, and as a software repository. The biggest impact the NAS has had on my lab has been its integration with my ESXi server. I use Veeam to back up all of my VMs to the NAS over SMB. I also use NFS to host my virtual machines and ISO files, with little to no performance difference while greatly increasing my storage flexibility.
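For anyone curious, mounting an NFS export as an ESXi datastore is a one-liner from the host shell. The hostname and dataset path below are made-up placeholders, not my actual values:

```
# On the ESXi host (hostname and share path are hypothetical):
esxcli storage nfs add --host=freenas.lan --share=/mnt/tank/vmstore --volume-name=nas-vmstore
# Verify the mount:
esxcli storage nfs list
```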

On the personal side, my wife and I use the NAS as a backup for our computers and phones, with all of our family pictures and files copied there regularly. I run both Plex and ownCloud jails on the NAS for home media server and Dropbox-like functionality, respectively. I also create images of all of our home computers using Clonezilla and store them there, having successfully tested restoring from them.

So far, I have been running my FreeNAS solution 24/7 for six months with zero issues. One of the best things I can say about FreeNAS is the all-around flexibility of the system: I currently run several file-sharing protocols from the same box, including CIFS/SMB, FTP, and NFS. This flexibility makes the NAS accessible from almost any device and any OS.

I would highly recommend FreeNAS as the OS of choice for anyone looking to build a personal NAS. It's very feature-rich, and FreeBSD (on which it's based) is a solid, dependable operating system. The only drawback is that, depending on your build, FreeNAS has some recommended hardware minimums that may not be friendly to builders on a tight budget. My build is likely considered small but still totaled around $1700 to assemble. With any NAS build, the largest percentage of the cost will depend on your desired storage capacity and configuration. I built for scale (add more later) ;-) Another bit of advice I would give to anyone looking to build a FreeNAS-based storage system is to check out the FreeNAS forums. I never directly posted, but I found the advice from users there extremely helpful, especially from the moderator, cyberjock. FreeNAS can be one of those builds where you have to be very selective about some of the hardware being used, and it would be very wise to use a starting point like the forums.

Have a similar build or are you looking to build something similar? Feel free to comment. 

Sunday, January 25, 2015

Lab Update 01-25-15

It's been a while since I posted an overall update on my lab. I have moved since then, and I took the move as an opportunity to change some of the architecture.


My lab as of 01/25/15


I haven't added any significant equipment. I've mostly only moved things around for my CCNP SWITCH studies and other conveniences. From top to bottom:


Device/Model               Type                Role                        Note
TRENDnet TC-P24C6          Patch Panel         Cable Termination
Cisco ASA5505              Firewall            Firewall Testing
Custom Build Router        Router/Firewall     Internet Gateway
Cisco WS-C3750-48TS        Multilayer Switch   Core Switch 1               HSRP pair
Cisco WS-C3750-48TS        Multilayer Switch   Core Switch 2               HSRP pair
Supermicro Server          Server              DHCP, FTP, ESXi Management
ASUS Server                Server              ESXi Hypervisor             RADIUS, Domain Controllers, DNS (Load Balanced), vCenter, Test Servers
Cisco WS-C3750-48TS        Multilayer Switch   Distribution Switch 1       HSRP pair
Cisco WS-C3750-48TS        Multilayer Switch   Distribution Switch 2       HSRP pair
Cisco WS-C2960-24TT-L      Layer 2 Switch      Access Switch
Cyclades AlterPath ACS32   Access Server       Terminal Access Server
Cisco C2811                Router              N/A (Disconnected)
Cisco C2851                Router              N/A (Disconnected)
Cisco C2821                Router              N/A (Disconnected)
Cisco C2821                Router              N/A (Disconnected)
APC AP7900                 Switched PDU        Rack Power




My topology has changed somewhat as I implement DNS in many more of my lab functions as well as my home network. Both "core" switches use HSRP for high availability and are redundantly connected to my Internet gateway (the OSPF ASBR) over point-to-point (/31) links for link redundancy. This is also how I connect my distribution switches back to the core, summarizing routes of course. The core supports some of my home network devices (TVs, consoles, APs, etc.). This would normally NEVER be the case in production (connecting end devices to the core), but this is a lab, not a production network, so I use my cores for shared purposes: home network and lab. Everything from distribution down is used exclusively for CCNP lab purposes; hosts are connected only for testing. I hope to add another Layer 2 access switch behind the distribution switches for increased STP study. The Catalyst 2960-24TT-Ls are pretty cheap on eBay for Layer 2-only operation/study.
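For flavor, the core-switch side of that design looks roughly like this sketch. The VLAN number, HSRP group, interface names, and addresses are illustrative placeholders, not my actual config:

```
! Hypothetical config for Core Switch 1 (values are placeholders)
interface Vlan10
 ip address 10.10.10.2 255.255.255.0
 standby 1 ip 10.10.10.1        ! HSRP virtual gateway shared with Core Switch 2
 standby 1 priority 110         ! make this switch the preferred active router
 standby 1 preempt
!
interface GigabitEthernet1/0/1
 description /31 point-to-point link to Internet gateway
 no switchport
 ip address 192.168.255.0 255.255.255.254
 ip ospf network point-to-point
```

The /31 mask works here because the link is a true point-to-point with no need for network or broadcast addresses, which halves the address burn versus /30s.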

Most of my routing equipment is disconnected at the moment to help me focus on my CCNP SWITCH studies, for which, by the way, I had to go back and purchase the v2.0 study material! I'm hoping that by March or April I will be ready to take the CCNP SWITCH test.

Other changes have seen me retire PPTP as my primary Remote Access VPN connection method in favor of the more secure certificate based OpenVPN. 

Things I would love to see integrated in my lab in the future include:
  • Gigabit switching (Cisco GbE equipment is still pricey, even on ebay)
  • PoE (Power over Ethernet)
  • New APs (more so for home network flexibility)
  • IPS and/or Next Gen Firewall
  • NAS - Build or Buy? (Looks pretty expensive to build)
  • Another ASA for more VPN practice (you can never get enough of that)

My lab will continue to grow and be an integral part of my studies as well as my home network and I will continue to update this blog as changes occur. 

My Wired Home

Since my last post, I have left my 2-bedroom apartment and achieved part of the American dream: becoming a first-time home buyer. With 4 bedrooms and 3 baths, this offers my wife and me a lot more space, privacy, and sense of stability. Now what would be the first thing that a "packet junkie" such as myself would want to do in a new home? Wire it!

So I sat down and planned the install: how I wanted to go about it, and whether or not I wanted to pay someone else to do it, as this would be my first time performing this kind of work. Most jobs in the networking field will never require you to install cabling or ports, as that's usually left to electricians or other specialty contractors, but I figured this would be a fun DIY project and a good learning experience.

Safety first, so supplies included:
  • Flashlight and headband mounted light
  • Dust mask
  • Work Gloves
Some of the tools needed, included:
  • 12V Drill (an 18+V would have been much better and faster)
  • Paddle Bits (for drilling the holes in the 2x4s)
  • Steel Fishing Tape
  • Fiber Optic Cable Pulling Rod (Better for getting through insulation)
  • Stud Finder
  • Wire Cutters (Optional)
  • Punch-Down Tool
  • Ethernet Crimping Kit 
  • Electrical Tape
  • Strong Box Cutter or Drywall Saw
  • Leveler/Ruler
  • Assortment of screwdrivers, etc.
Supplies needed included:
  • Bulk Ethernet Cable (I went with 1000' CAT6)
  • Ethernet Keystone Jacks
  • Keystone Wall Plates
  • Low-Voltage Old Work Boxes
  • Patch Panel (Optional)
  • RJ45 Connectors (If not using Patch Panel)
  • Cable Pass Through Wall Plate
CAT5E or CAT6

It doesn't really matter which you choose, as CAT5E will offer more than enough performance for most home installs. I chose CAT6 because you never know... 10GbE could be a household standard any day now! The one thing to make sure of is that IF you go with CAT6 cable, you use CAT6 for all other physical cabling components as well. This includes CAT6-spec keystones, patch panels, couplers, RJ45 connectors, etc. Using CAT5E at any point on CAT6 cabling effectively turns your install into a CAT5E install, as the run can only perform at the spec of its weakest component.

The Process
The process itself can be a lot more complicated than it looks. The first thing to do is decide on reasonable locations for your jacks. I used a stud finder to make sure that I was locating each jack between the wooden studs inside the wall. Interior walls are best, as dropping cables down exterior walls is much more time-consuming and difficult due to the angle of the roof, though far from impossible.

Once you decide where you want your jacks, you have to figure out where and how to get your cabling through the wall. In my case, I have an attic, whereas others may have a crawlspace, etc. Traversing the attic required careful footing and good balance, as walking on the beams above the ceiling is mandatory unless you want to fall through it. It was sometimes tricky, as the insulation obscured a lot of visibility.

Snowy looking insulation.


One of the hardest things to do was to figure out where I was in the attic. That required a lot of tapping and a few "Marco Polo" games between me and my wife (with her outside of the attic). From there, it was a matter of finding where in the room I wanted to locate the port. The fact that most of the spots I chose were near existing electrical or cable outlets made deciding where to drill a lot easier: from the attic, you can see where the electrical or coax cables go down, in the same manner you intend to drop your Ethernet lines. I tragically decided to attempt running my first cable through the same hole as an existing coax line. The cable quickly got stuck and caused a lot more chaos than it was worth, so after that I stuck to drilling dedicated holes for Ethernet.

3/4in hole in 2x4 above wall.


Above each wall in most homes is a 2x4 top plate (often two, double-stacked). I drilled 3/4in holes using paddle bits in case I wanted to run multiple lines to the same jack. Next, I test-ran the fiber-optic rod or steel fishing tape down first before attempting to fish cable; this confirmed that there was no fire block or other obstruction preventing the cable from being dropped. Afterwards, I simply bound the Ethernet cable to the steel fishing tape or fiber-optic rod with electrical tape and dropped the cable into the wall. The steel fishing tape works well but tends to curve when it hits thick insulation, whereas the fiber-optic rod can just stab through.


Ethernet fished through.


Afterwards, it was time to hit floor level, cut a hole in the wall, and hopefully find the dropped Ethernet. I used a box cutter and drywall saw to cut the hole, and a level to make sure that everything was even and straight.


Ethernet successfully pulled.


Once the Ethernet was pulled through, I used a low-voltage old-work box (the orange frame) to mount the wall plate. Of course, the Ethernet needs to be punched down to the keystone jack first.


Lining pairs up to keystone terminals.
Ethernet terminated to keystone.


I then terminated the Ethernet to the keystone jacks and cut any excess. Don't worry: most jacks come with a color diagram to guide you. From there, it was a simple matter of snapping the keystone into the wall plate and mounting it to the wall.


Not too bad for my first Ethernet install.


I tried to keep Ethernet jacks next to power and coax ports simply because it's much more convenient and practical when setting up entertainment locations (TV, gaming consoles, cable box, etc.). Practicality was one of the central themes during my planning. I didn't want to install Ethernet ports everywhere simply because I could; in my opinion, that makes a home look tacky. So I stuck to installing ports in practical locations where they would actually be used.

In some locations, I wanted multiple ports. For example, in my living room I have my Smart TV, PS4, Xbox 360, and other network-capable devices. In those areas, I ran multiple cables terminating at two- or four-port plates, testing Ethernet connectivity and quality after each run.

4 cables in 3/4in hole.
4-Port Wall Plate


Running the cables out to the ports was only half the job; the other half was terminating the cables back at my network equipment. I decided to turn one of the rooms in my house into our office, which would of course house my rack, and to use a patch panel for more flexibility. The spot where all the network cables terminate required a wider hole in the 2x4 to support every cable in the home coming back to the rack. I also installed a cable pass-through wall plate to allow all the cables to come through.


Cable pass-through wall plate


After I got all cables fed through to the rack, it was time to terminate to the patch panel and cut any excess.


Me using a punch-down tool for the first time!


Terminating to the patch panel.


I terminated everything to the patch panel using a punch-down tool in low-impact mode. The panel I purchased has 24 ports (more than enough for a home); I only used 10, as that's how many cables I ran in total throughout the home. The panel also comes color-coded, as you can see. From there, I ran 5ft CAT6 patch cables to my lab switches (2x 48-port Catalyst 3750s).


Patch panel labels consistent with wall-port labels


Cables connect back to 3750 switches


After terminating everything to the patch panel and connecting back to my switches, I ran several tests, checking for physical issues (connectivity, input errors, speed/duplex problems, CRC errors, etc.). Everything checked out great. I completed the project having run a total of 10 cables to 6 wall ports: 4x single-port jacks (1 each), 1x dual-port jack (2), and 1x quad-port jack (4). So with that said:

Why do this?

For a number of reasons, but two more notable than others. The first is that I like the idea of an Ethernet connected home. The second is that I prefer a faster, more stable wired connection for certain applications (PSN, Netflix, etc.) over a more variable wireless connection. I definitely still use wireless in my home, but it's great having both. 


How much?

The price of a project like this can vary greatly depending on what you are trying to accomplish and what you already have. As a matter of fact, most of the unexpected costs I ran into were for tools that I did not previously own, so the project also had the advantage of building up my home tool set. I did not keep meticulous records of what I spent, but I would say it was well over $700.

Was this project useful?

Definitely. What I learned here will help with two projects I plan to take on later that build on this wiring: ceiling-mounted wireless access points (later this year or next) and an IP-based home security system. My wife also wants the latter; as this is our first home, she is interested in home security as well.

Let me know about your experience wiring your home with Ethernet or if you have any questions about my install, feel free to ask.