Merry Christmas everybody! Sorry I don't have a better message right now, but our servers will be offline sometime after midnight (GMT -5) tonight for maintenance. Also, sorry for the funkiness with the site last night... some things have to be addressed.
Messmaster said: Merry Christmas everybody! Sorry I don't have a better message right now, but our servers will be offline sometime after midnight (GMT -5) tonight for maintenance. Also, sorry for the funkiness with the site last night... some things have to be addressed.
My apologies for the extended downtime. Once we got started with the repairs (a hard drive swap), other things started breaking. Luckily we're stable now, though there's still more to work through.
Our backup fileservers were unaffected, though they only serve our video files and are not suited to serve actual web pages for UMD. So I couldn't exactly switch all traffic to them. But I'm working on making them full UMD servers so we can hopefully avoid future downtimes like this.
I'll be making some adjustments, like resetting the dates on "what's new" entries and downloads that missed their debut. But please message me with anything you might need, and I'll get back to you soon... for now I just need to sleep for a couple hours. Thanks so much for your patience with me, everybody.
Thanks so much for being out there doing the lord's work. My day job this week has been filled with busted hard drives, failed system boards, and exploded RAID caches. 'Tis the season, I guess. Sounds to me like you need some RAID 5 or RAID 6 in UMD's life lol.
messyhot said: Thanks so much for being out there doing the lord's work. My day job this week has been filled with busted hard drives, failed system boards, and exploded RAID caches. 'Tis the season, I guess. Sounds to me like you need some RAID 5 or RAID 6 in UMD's life lol.
Man we got so many raids going on that the roaches don't even look this way. But that doesn't help when TWO drives decide to take a dump at the same time! [edit: and a RAID controller too!]
That's why one of my storage methods at home is a 6-drive RAID 6. I can lose 2 drives at once and not experience data loss or downtime. RAID 6 is probably overkill for OS/boot drives and does give a slight performance hit unless you have a premium controller and drives, but for me the peace of mind is worth it.
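Rough numbers on that trade-off, for anyone curious; the 6 x 4TB array here is purely illustrative, not anybody's actual hardware:

```python
# Usable space vs. fault tolerance for the two common parity layouts.
# Drive count and size are made-up examples.

def usable_tb(drives: int, drive_tb: float, parity_drives: int) -> float:
    """RAID 5 gives up one drive's worth of space to parity, RAID 6 gives up two."""
    return (drives - parity_drives) * drive_tb

for level, parity in (("RAID 5", 1), ("RAID 6", 2)):
    print(f"{level}: {usable_tb(6, 4.0, parity):.0f} TB usable from 6 x 4 TB, "
          f"survives {parity} simultaneous drive failure(s)")
```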
When it hadn't come back up by the evening (GMT), I wondered if shit had hit the fan.
Messmaster said: Man we got so many raids going on that the roaches don't even look this way. But that doesn't help when TWO drives decide to take a dump at the same time!
That's typically the way it goes, unfortunately. Particularly if the drives are roughly the same age, the extra load on the array while it rebuilds around one failed drive can push the remaining drives over the edge.
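A crude way to put a number on that risk; the failure rate, rebuild window, and drive count below are all assumptions for illustration, and since same-batch drives wear out in a correlated way, the real-world odds are worse than this independent-failure math suggests:

```python
# Chance that at least one more drive dies while the array is rebuilding,
# assuming independent failures (optimistic) and a made-up annualized
# failure rate (AFR).

def second_failure_prob(remaining_drives: int, afr: float, rebuild_hours: float) -> float:
    p_one = afr * rebuild_hours / (365 * 24)  # one drive failing during the rebuild window
    return 1 - (1 - p_one) ** remaining_drives

print(f"{second_failure_prob(remaining_drives=5, afr=0.05, rebuild_hours=24):.2%}")
# ~0.07% under these rosy assumptions; correlated wear pushes it well above that.
```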
It's a love/hate relationship, I guess. I've never been keen on the cloud, but we've just migrated a lot of stuff to it (talk about hopping on the bandwagon late), and I must say the feeling of no longer having to nurse a sickly server back to life is a big relief.
Having said that, I imagine the amount of content that gets hosted here would make the cost of it astronomical?
I can't believe you are still running your own hardware. Your operation is very suited to the cloud; all the hassle is greatly reduced, and you can move your capex to opex too.
Wouldn't mind love, but I'll take anything I can get.
Messy.Charlotte said: Thanks MM, you're doing an amazing job!
Seconded!
Messmaster said: Man we got so many raids going on that the roaches don't even look this way. But that doesn't help when TWO drives decide to take a dump at the same time!
Ouch! That's never fun.
We're starting to play with ZFS and various RaidZ configurations at a datacentre I use. Got a big beast hypervisor running RaidZ1 (RAID 5, basically) on 6TB HDDs with SSD caches, but for bigger disks I'm tending to think mirrors. We have a four-blade server; each blade has 3 disk slots in the front and a single PCIe slot on the board. For one deployment we put in an SSD as the boot device, a pair of 10TB disks in a RaidZ mirror as the storage array, and an M.2 SSD in the PCIe slot, split into 2 partitions as read cache and write cache respectively. Performance is very good, and as the 2 HDDs are from different manufacturers we're hoping for different lifetimes, so when the ZFS scrub spots a problem we can swap one without losing the other.
I totally love ZFS scrub; it usually gives plenty of warning of things going sour, and you can build arrays with spare disks that will be auto-swapped in if one of the main array disks starts to have issues.
What I now want to do is build a big ZFS SAN, with lots of mirrored pairs of 10TB disks, and NFS-mount them as storage to our VMware hypervisors over a 10gig network, so we don't use on-board disks at all other than to boot off of. That way we can have the VM storage on ZFS disks even though the HVs themselves don't speak ZFS.
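For anyone wanting to picture it, this is roughly what that layout looks like when wired together. It's only a sketch, not our actual build script; the pool name, dataset name, and device paths are all made up, and the NFS share options would need tuning for a real VMware setup:

```python
# Sketch of the pool described above: a mirrored pair of big HDDs, an M.2 SSD
# split into read cache and write log, a spare, and an NFS-exported dataset
# for the hypervisors. Device paths are placeholders.
import subprocess

def run(cmd: str) -> None:
    print(f"+ {cmd}")
    subprocess.run(cmd.split(), check=True)

POOL = "tank"  # hypothetical pool name

# Two 10TB disks (different manufacturers) as a mirrored vdev.
run(f"zpool create {POOL} mirror /dev/sda /dev/sdb")

# The M.2 SSD split into two partitions: read cache and write log.
run(f"zpool add {POOL} cache /dev/nvme0n1p1")
run(f"zpool add {POOL} log /dev/nvme0n1p2")

# A spare that can be pulled in when the scrub flags a failing mirror member.
run(f"zpool add {POOL} spare /dev/sdc")

# Dataset exported over NFS so the VMware hypervisors can mount it as VM storage.
run(f"zfs create {POOL}/vmstore")
run(f"zfs set sharenfs=on {POOL}/vmstore")

# The early-warning check mentioned above.
run(f"zpool scrub {POOL}")
```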
yamtree said: I can't believe you are still running your own hardware. Your operation is very suited to the cloud; all the hassle is greatly reduced, and you can move your capex to opex too.
Y'all seen the cost of data transfer in the cloud!?
yamtree said: I can't believe you are still running your own hardware. Your operation is very suited to the cloud; all the hassle is greatly reduced, and you can move your capex to opex too.
Y'all seen the cost of data transfer in the cloud!?
By the time you charge for the labor to maintain the hardware and the lost sleep, it's not far off at all. Much more scalable and reliable for a global marketplace.
yamtree said: I can't believe you are still running your own hardware. Your operation is very suited to the cloud; all the hassle is greatly reduced, and you can move your capex to opex too.
Y'all seen the cost of data transfer in the cloud!?
By the time you charge for the labor to maintain the hardware and the lost sleep, it's not far off at all. Much more scalable and reliable for a global marketplace.
While that may be true for a corporation with a ton of staff, UMD is MM working for himself. Generally cloud hosting is great for processing power, but as soon as you start to ramp up either storage requirements or bandwidth, it gets stupidly expensive, stupidly fast. UMD uses an imperial fucktonne of both.
DungeonMasterOne said: [...] UMD uses an imperial fucktonne of both.
Which converted to metric is a metric fucktonne of both.
- UselessConverterBot
UMD is also a very outbound-heavy site, which goes against the norm for a lot of corporations; you'd be looking at just shy of a hundred dollars per TB of transfer out from something like AWS.
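Quick back-of-envelope on that; the per-GB rate is the commonly quoted big-cloud egress price and the traffic figure is a made-up example, not UMD's real numbers:

```python
# Outbound data transfer (egress) cost at an illustrative cloud rate.
egress_per_gb = 0.09      # USD per GB out -- assumption, check current pricing
monthly_tb_out = 50       # hypothetical monthly outbound traffic

cost = monthly_tb_out * 1000 * egress_per_gb
print(f"~${egress_per_gb * 1000:.0f} per TB, so ${cost:,.0f}/month at {monthly_tb_out} TB out")
# ~$90 per TB, so $4,500/month at 50 TB out
```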
I'm just looking out for MM's sanity here. I know what it's like pulling all-nighters trying to get something fixed so production can get back online. It sucks being in those positions and having something else to do or somewhere else to be. The sanity alone (for me) is reason enough to at least explore the cloud options.
Messy.Charlotte said: I also imagine that, whilst it may be true the two end up being pretty similar in a standard corporate environment, UMD is very much an outbound-heavy site, and it's that which costs the money in the cloud.
From when I looked into it, storage space also costs a bomb. I have a physical server with 15TB of disk on board, and I really don't want to think what that amount of capacity would cost on AWS or Digital Ocean.
Edit: I looked it up. $752.74 a month. Which, to be fair, is less than I was expecting. But I pay less than $100 a month for my current hosting. And with all the stores, UMD uses hundreds of terabytes.
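Rough math behind those figures; the per-GB-month rate is an assumption picked to land near that ~$750 quote, and real pricing varies a lot by provider and storage class:

```python
# Storage-at-rest cost at an illustrative block-storage rate.
price_per_gb_month = 0.05   # USD -- assumption, not any provider's actual price

for tb in (15, 300):        # the 15TB box above, and a "hundreds of TB" scale
    print(f"{tb} TB -> ${tb * 1000 * price_per_gb_month:,.0f}/month")
# 15 TB -> $750/month
# 300 TB -> $15,000/month
```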
WildThang said: I'm just looking out for MM's sanity here. I know what it's like pulling all-nighters trying to get something fixed so production can get back online. It sucks being in those positions and having something else to do or somewhere else to be. The sanity alone (for me) is reason enough to at least explore the cloud options.
Oh for sure, haha.
Ideally a site like UMD would have a whole team of people, including web developers, security, networking and, nowadays, people like data scientists.
But hey, always open if you need a hand with anything.
I don't know how MM does it, tbh... he's part of a rare breed of people, for sure.
osbaldeston said: Don't know if it's connected to the maintenance but on my Messy Mayhem sub, all the videos currently say
Please check back soon. We're preparing this video for display.
We're aware of the cause of that issue (it's the same reason the bin uploads are down), and we're working to resolve it. I'll update this thread once more progress has been made.
Alright, we're back. Had to replace even more hardware, which was actually causing some files to go missing from new scenes as they were being created! I'm working on restoring all that now, and I'll get back to everybody's messages as soon as I can.
For any download customers who were affected by the downtimes, let me know if you'd like a time extension on your purchase.
Thank you for all your hard work keeping this place running. I can't say I really know what you're doing, but I have always thought this site worked very well.
Most things break now and then and need updating or maintenance... that much is never in doubt. You're appreciated!