Recently, a piece of hardware failed in my internal server (my in-house server).
Essentially, that was just my older two-year-old machine, which unfortunately had an accident. It wasn't the server's fault, however.
Unfortunately, a pop-top bottle containing a bit of Sprite was knocked onto the case, which was lying on its side, resting on the two servers that superseded it, and the drink came into contact with the Zalman fan.
Unfortunately for that motherboard, it didn't want to come back on after suffering numerous injuries from the soft drink being rapidly propelled around the case by the Zalman fan (which I had only recently fitted to that case, replacing my own fan due to bent copper fins).
Anyway, the time has come to solve two problems: one, my partner's thirst for more computational power to process tasks faster, and two, the newly invented problem of my server suffering from liquid near the brain (the CPU).
The good news for me is my habit of buying nearly identical parts for my machine and my partner's; with my older machine becoming the new server, I don't have to do any formatting or reinstalling.
It's simply a matter of taking her higher-spec P4 3.0GHz 531 processor, 4GB of RAM, and the same model of motherboard, placing them in the server case, plugging the HDDs back in in the same order, and firing it up, and we should be good to go again.
Unfortunately, however, for me and my server (and the projects I maintain on it), they have to sit back a bit whilst we hack the data off the Linux virtual hard drive (we use Linux virtual machines under Virtual Server, on a Windows Server 2003 base) on that machine's hard drive, so that I can continue finishing some updates to OzVoIPStatus. Otherwise, I'd have to wait until next weekend, when the hardware for my partner's machine is due to arrive (it was supposed to be here Friday, but the distributors can't tell the difference between Sydney and Melbourne – strange, I can: one has a highly problematic public transport network, and it's not Melbourne).
Anyway, due to the delay from Melbourne, I have to wait until at least Monday to get the gear, which means probably taking Monday (which I would usually put towards clients' websites) to build the server and get it back online, just so I can get somewhere; otherwise the whole week will be spent doing very little development at all (for lack of a test area).
The new gear will mean a performance improvement for my partner, who has long been stifled by Hyper-Threading's artificial dual-CPU technology, and will almost certainly love the responsiveness of a dual-core machine!
I was planning the upgrade, just not this soon; it was meant for after Christmas.
The next item on her upgrade list is two LCD monitors, to help trim the fat from the power bill her two CRTs generate.
Although trimming fat shouldn't be something we have to do; instead, power solutions should be developed that are friendly to the environment and reduce costs to consumers. Such examples exist in geothermal power – a truly amazing idea (figures quoted at 5c per kWh, compared to the ~12c per kWh we pay now, and the 15c per kWh after XX kWh).
My plan to release a key update to OzVoIPStatus is likely to go ahead, thanks to Virtual PC 2007, which works just like Virtual Server except that it's desktop-oriented; it will do for the time being.
My updates aren't much on the web side and are more in the back end, though I imagine some of the changes will be visible to viewers, as this will essentially fix a bug where outages were logged every 60 seconds, due to the rather passive nature of the tests currently being done.
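For illustration only, the general shape of that kind of fix can be sketched as tracking outage state transitions rather than logging on every failed poll. This is a minimal hypothetical sketch, not OzVoIPStatus's actual code – the class, method names, and record layout are all assumptions:

```python
class OutageTracker:
    """Log one outage per downtime period, not one per failed 60-second poll.

    Hypothetical sketch: only the up -> down transition opens an outage
    record; repeated failures while already down are ignored, and the
    down -> up transition closes the open record.
    """

    def __init__(self):
        self.open_outages = {}  # host -> outage start time
        self.log = []           # outage records: host, start, end

    def record_check(self, host, is_up, now):
        if not is_up:
            # Only open a new outage on the up -> down transition.
            if host not in self.open_outages:
                self.open_outages[host] = now
                self.log.append({"host": host, "start": now, "end": None})
        else:
            # A successful check closes any open outage for this host.
            if self.open_outages.pop(host, None) is not None:
                for entry in reversed(self.log):
                    if entry["host"] == host and entry["end"] is None:
                        entry["end"] = now
                        break

tracker = OutageTracker()
# Simulate five consecutive failed 60-second polls, then a recovery.
for t in range(0, 300, 60):
    tracker.record_check("sip.example.net", False, t)
tracker.record_check("sip.example.net", True, 300)
print(len(tracker.log))  # one outage record, not five
```

The key design point is simply debouncing: the poller keeps firing every 60 seconds, but the logger only reacts to state changes.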
It sort of sucks to lose a machine that hasn't even been worked that hard yet! Many of my machines last around five years or so before I retire them, simply due to aged hardware: leaky capacitors, warping from CPU heat, etc. But then, it was originally my workstation, and I do work them pretty hard.
So, tomorrow, hopefully, assuming I complete it, we'll have an OzVoIPStatus fix up that solves outages being logged incorrectly, and which can (but won't for a little while) report back what was found to be the cause of each outage.