So, I’ve been slowly picking up more hardware to play with. I got this 1RU system a few days back.
HP ProLiant DL120 G5. Yeah, another G5 box, but this one is quite different in that it has a low-wattage power supply; it doesn’t take anywhere near the power of the Gen 5 Xeons. Originally it came with an E2160, but I just happened to have a Q9950 spare from an HP desktop tower that had bad caps, so I bunged that in and it worked perfectly fine.
My Tomato router is getting a bit long in the tooth, so I’m considering replacing it with this box and running pfSense. I’ve already whacked in a spare 250GB SSD and replaced the CR2032 battery, so as soon as I can pick up an Intel NIC we should be good to go. I already stuck Proxmox on it briefly for testing, just to play a little with the Q9950, and it was quite fun to use.
I’ve seen other people mention they use or used OpenBSD. I ran OpenBSD as my firewall for many, many years, using both a Sun Ultra 5 (converted to Ultra SCSI rather than the crap IDE controller) and an AlphaStation 255 at different points in time. I started in the BSD world with NetBSD on the Digital DECstation family (MIPS hardware).
One of my favorite machines from that period was my SGI Octane. Sadly, it did not survive storage at a relative’s while I was in Oman. I wept when I came back and saw the condition it was in.
Yesterday I had another server arrive for the homelab.
This is my second HP ML150 G6 and my third ML150. The third machine is a G5 and runs the primary Storj node.
This hardware is a dual-socket LGA 1366 system. Unfortunately, even though the hardware supports X56xx Xeons, HP never patched the BIOS to support them, though they did on other ML3xx G6 systems. That means I am limited to dual X55xx CPUs. I think this particular box came out of someone else’s homelab, as it looks like they created a dummy plug to try and fool the motherboard into thinking a third fan was connected (a third fan is required for a dual-CPU configuration). My other two ML150s run Proxmox, so it is likely that will get installed here too.

Despite what the HP docs say, the real maximum RAM this system will support is 384GB. 32GB DIMMs are still expensive though, so it is more likely I’ll set this up with 192GB long term. I plan to stick in an HP P420 RAID card with 2GB cache, 10Gbit networking and probably a quad-gigabit card as well. It needs rebuilding and some additional parts before it will be in production use, but I hope it will allow me to turn off my last two DL360 G5s.
I did some upgrades to this machine on the weekend too.
This is an HP ML110 G6. It runs TrueNAS and is my main network storage, with 14TB installed and 11TB available for use (two 4TB drives are available as RAID 0 only, with the important stuff on a mirror). I have a 10Gbit card on the way from eBay for it. Finally I’ll get to use some of the multimode fibre patch leads I bought 10 years ago. lol
I have an Exchange VM, for one, to move here, and Exchange loves RAM. A few years ago I had a TechNet subscription and this Exchange license was part of it. It is still supported for a few more years too. I get all my internal alerts sent to it instead of O365.

Even on my Windows desktops I tend to allocate 16GB of RAM. It just lets them run better and not choke on Chrome, for one. I use the desktop VMs for testing client deployments, as we don’t actually have an official lab environment at work - this is my substitute.

I also have a large Nextcloud instance to move here, and that is going to take over as our chat/video call server for meetings so we save a bit more money for the business. My current Nextcloud has 24GB of RAM, so I’ll just move it over to the new box. The system will support over eight 3.5" drives in two bays of four, so it’s going to host a lot of VMs in the long run.

I’ve also just started Server 2022 and Win 11 testing. I also want a Solaris 11 VM, and to get back to testing Debian kernels again (though the Debian testing will mainly be on SPARC and Alpha, not AMD64). Many years ago I had a cross-compile environment set up for the Linux VAX project and I’d like to resurrect that too, although I ran that on an Alpha at the time. Though I am pretty sure Linux VAX has not been worked on for a lengthy period now. I really need my VAX out of storage again. lol.
Node 3 will probably come up on this system eventually as well, but I still have 2.5TB free on my other two nodes, so it is not going to be needed for a while yet since ingress is so slow at the moment.
I now have a 10-gig link installed between Proxmox1 and my TrueNAS box, plus another between my Dell Precision tower running Win 10 and TrueNAS. No switch as yet, but that is definitely on the want list.
Passthrough to a Server 2016 VM under Proxmox was really trivial and actually outperforms the native Windows 10 install. The only drawback is that my TrueNAS box only has a PCIe x4 slot, so performance isn’t as high as it could be.
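For anyone wanting to try the same thing, the passthrough setup on a Proxmox host boils down to enabling the IOMMU, loading the VFIO modules, and mapping the card to the VM. A rough sketch of the relevant config is below - the PCI address and VM ID are placeholders, not my actual values:

```
# /etc/default/grub on the Proxmox host (Intel CPU assumed; run update-grub afterwards)
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

# /etc/modules - load the VFIO modules at boot
vfio
vfio_iommu_type1
vfio_pci

# Hand PCI device 01:00.0 (placeholder address) to VM 101 (placeholder ID).
# pcie=1 needs the VM to use the q35 machine type.
qm set 101 -hostpci0 01:00.0,pcie=1
```

Reboot the host after the GRUB and module changes, then check `lspci` for your card’s real address before the `qm set` step.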
Yesterday I had a 4 port HWIC Switch module arrive for my Cisco 1841 router. It basically adds 4 10/100 ports to the router. I used the 1841 recently to simulate my isp ppp-oe server and was able to use it to build my pfsense replacement without taking down the live connection until it was ready. I’m going to use this to segment my network a bit more and move my lab machines into their own physical network as well as logical. VLAN’s have the disadvantage that they can still add to the workload on the network device in terms of CPU and memory. I plan to add a few more routers to the lab over the next few months.
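For anyone curious about the PPPoE trick, a minimal IOS PPPoE server config on an 1841 looks roughly like this. This is an illustrative sketch, not my actual config - the interface, pool name, addresses and credentials are all placeholders:

```
! Hypothetical PPPoE server sketch (IOS 12.4/15.x style)
username pfsensetest password 0 labpass
!
bba-group pppoe LABGROUP
 virtual-template 1
!
interface FastEthernet0/0
 no ip address
 pppoe enable group LABGROUP
!
interface Virtual-Template1
 ip address 192.0.2.1 255.255.255.0
 peer default ip address pool LABPOOL
 ppp authentication chap
!
ip local pool LABPOOL 192.0.2.10 192.0.2.20
```

Point the pfSense WAN at that interface with matching PPPoE credentials and you can test the whole firewall build without touching the live ISP session.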
Nah, this is part of my general lab. Whilst I won’t rule out running a node on this machine in the future, it won’t be until at least mid to late next year. I have enough free storage at the moment.
The only stuff planned for this machine for now is cPanel, Exchange and a Server 2022 VM. I can’t quite do all that yet as the CPU is only a dual core, but the board will take a 10-core/20-thread CPU, so that is what I’ll pick up next.
I do have to say a big +1 for Veeam again. I was able to do the restores from backup painlessly. The only gotcha is to make sure the network port groups match on both the backup and the destination.