One Year Down the Line
Posted: November 2nd, 2012, 2:07 am
Hello all! It's been a long while since I've posted here, but I've been visiting the main site occasionally when I needed to check some server-related info. (Seriously, Ian, this site is amazing!)
Last year, I built a server, which was pretty fun and I learned a lot. (For those interested, see my previous topic.) I've been using it since then and it's been great. (I even moved house and just set it up in the new place - my new roommates quite enjoy it.)
Since then I've built a monster desktop PC for gaming, which was also good fun, and the hardware experience from building the server was invaluable. I also put a Blu-ray drive into the desktop, so I could finally add my Blu-ray movies to my server collection. Of course, I'm back here because over the last two days I've done somewhat of an overhaul of my server, and thought you guys might be interested to hear what I've done. (It might also help a few people who want to do something similar; I can go into more detail on specifics if people are interested.) So, here's a story:
My previous setup had three 2TB drives, one for parity (first with flexRAID, then snapRAID once the former became payment-only), and the other two for data. (OS on the first one.) I noticed that, over the course of digitizing all of my DVDs and Blu-rays, I was approaching the size limit of my first drive. I'd been looking into ways of avoiding arbitrary splits of data across the two drives for a while. Drive 1 already used LVM to manage most of its partitions, and I saw that LVM would let me extend the data partition on drive 1 to also include the space on drive 2. That's when I realized (or at least suspected) that snapRAID would take issue with this configuration. From the file system's point of view there would suddenly be two disks, one 4TB and one 2TB, which doesn't work out well for parity, since the parity disk has to be at least as large as the largest data disk. True, I could point snapRAID at the underlying devices, /dev/sda and /dev/sdb, but that felt like I was working around LVM rather than with it.
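For anyone who hasn't used it, snapRAID's config basically lists each data disk and the parity disk by path, something like this (the paths here are made up for illustration, not my actual setup):

    parity /mnt/parity/snapraid.parity
    content /mnt/parity/snapraid.content
    content /mnt/data1/snapraid.content
    disk d1 /mnt/data1/
    disk d2 /mnt/data2/

Pool the two data disks into one LVM volume and that list collapses into a single 4TB "disk" guarded by a 2TB parity drive, which is exactly the mismatch I was worried about.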
I've also been concerned that I was using Ubuntu Server 11.04, which isn't an LTS release. (Last-year me didn't think about that part all that much, and then refused to reinstall everything after realizing LTS would have been an advantage, even with a lower version number.) The current LTS, at least when I looked, is 12.04. I looked at that, and then I was browsing the Arch Linux wiki (there's another story to go with that, Arch Linux on yet another computer, but it never went anywhere) and found an explanation of software RAID on Linux. I have to say the Arch Linux wiki does a great job of explaining RAID levels. (Check out this and this.) That got me thinking: what if I went with software RAID 5 at the OS level, put LVM on top to make easily poolable/extendable partitions, and upgraded to a stable LTS install all at once? After reading up a lot, and finding a fair amount of outdated information online, I was fairly sure I could make one big RAID5 array spanning the whole 6TB of raw disk. My main worry was that such a software RAID configuration couldn't be bootable (Ubuntu needs to load the software RAID configuration for the array to be assembled, and the array needs to be assembled for Ubuntu to boot), but it seems that sometime in the last year or so GRUB 2 has added support for booting from software RAID5. (Previously it only supported RAID 0 and 1, which is what many people still seem to believe today.)
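For reference, the standard tool for Linux software RAID is mdadm, and the core of the plan boils down to something like this (the installer ended up handling it for me; the device names here are assumptions, and these commands wipe the disks):

    # one big partition on each 2TB drive (sda1, sdb1, sdc1), then a RAID5 array across them
    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
    # watch the initial sync
    cat /proc/mdstat

Three 2TB members in RAID5 gives roughly 4TB usable, with any single drive able to fail without losing data.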
Thankfully, I had just enough storage space spread across my other computers to temporarily copy all the data off my media server. I then destroyed the existing partition setup. (It was strangely cathartic running terminal commands and not caring whether they destroyed file systems or configurations, since the objective was to delete it all anyway.) Once I had used GParted to convince (read: coerce) LVM into giving up its volume group hold on drive 1 (which the Ubuntu installer itself seemed unable to do), I booted from a freshly burnt 12.04 install disc and, surprisingly easily, created a single partition on each drive and made a RAID5 array of the whole thing. Then, combining the RAID and LVM sections of the Ubuntu Advanced Installation instructions, I created a volume group that occupied the whole array. That was divided into three logical volumes: 20GB ext4 for /, 2GB for swap, and the rest as ext4 for /home (which is where the data lives).
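For the curious, the LVM side of what the installer did amounts to roughly this (the volume group and logical volume names are just ones I'd pick, not necessarily what the installer used):

    pvcreate /dev/md0                  # turn the whole array into an LVM physical volume
    vgcreate vg0 /dev/md0              # one volume group spanning the array
    lvcreate -L 20G -n root vg0        # 20GB for /
    lvcreate -L 2G -n swap vg0         # 2GB for swap
    lvcreate -l 100%FREE -n home vg0   # everything else for /home
    mkfs.ext4 /dev/vg0/root
    mkswap /dev/vg0/swap
    mkfs.ext4 /dev/vg0/home

The nice part is that if I ever add a fourth drive, growing the array and then extending the home volume uses these same tools, rather than juggling files between disks.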
That was about 4 hours ago. Since then I've run through a lot of the tutorials from here again, much faster than the first time because I've got a much clearer understanding of how it all works now. Currently, my desktop is copying a big chunk of my media back onto the server, and it's all swimming along merrily on top of a RAID5 array with one big 4TB shared drive. The previous Ubuntu Server release I was using had some annoying shutdown bugs, where / wouldn't be unmounted cleanly because it was 'busy' during shutdown. Then on boot it would complain that / hadn't been unmounted cleanly and run a disk check, which of course pushed boot times into the range of 10 minutes instead of 30 seconds. Now it shuts down and boots with no problems!
Over the last year, a part of me has wondered whether I use the server enough to justify having built the whole thing. I definitely learned a lot, which was great, but there was always this voice wondering if I got enough use out of it. Well, yesterday I pulled the server down for the first time since I set it up, and that voice got real quiet real fast. Suddenly I had no music, there were shows I wanted to watch that I couldn't, and my XBMC box did basically nothing. The server's value is harder to quantify than something like a new desktop PC because it isn't used in isolation for specific purposes; it provides functionality to other devices, which makes it much more useful than it immediately appears. So, yes, resoundingly, in every aspect, building the server was worth it.
Going forward, I'm looking at extending the functionality my server offers. I'm especially interested in being able to access it remotely. I'm starting with simple remote access via SSH and Webmin, which I'll test the next time I'm at college and can try to connect from outside my local network. Then, who knows? I might install Apache, get some PHP working, and add MySQL (gotta put those Databases courses to use!) and see what I can host. I'll also have to deal with the fact that this year I've got a dynamic external IP address, so I may need a dynamic DNS service to work around that. It's all exciting though, and I'm looking forward to working on it!
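If I do go the dynamic DNS route, it'll probably be something like ddclient, where the whole config is only a few lines (the provider, hostname, and credentials below are placeholders, not a real account):

    # /etc/ddclient.conf
    protocol=dyndns2
    use=web, web=checkip.dyndns.com
    server=members.dyndns.org
    login=my-username
    password=my-password
    myserver.example.com

It just checks the current external IP periodically and pushes it to the DNS provider, so the hostname always points home.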
Thanks for reading! And thanks to the site that got all this started!