Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.
- Don't duplicate the full text of your blog or github here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues on the community? Report it using the report flag.
Questions? DM the mods!
When my QNAP finally died on me, I decided to build a DIY NAS and did consider some of the NAS OSes, but I ultimately decided that I really just wanted a regular Linux server. I always find the built-in app stores limiting and end up manually running Docker commands anyway, so I don't feel like I ever take advantage of the OS features.
I just have an Arch box and several docker-compose files for my various self-hosting needs, and it's all stored on top of a ZFS RAIDZ1. The ZFS array does monthly scrubs and sends me an email with the results. Sometimes keeping it simple is the best option, but YMMV.
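The scrub-and-report part is just a couple of cron entries, roughly like the sketch below (the pool name and mail address are placeholders, and it assumes a local MTA plus mailx are set up):

```sh
# /etc/cron.d/zfs-scrub -- start a scrub on the 1st of every month at 03:00
0 3 1 * * root /usr/bin/zpool scrub tank

# /etc/cron.d/zfs-scrub-report -- mail the pool status a day later, after the scrub has finished
0 3 2 * * root /usr/bin/zpool status tank | mail -s "ZFS scrub report: tank" admin@example.com
```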
I like Unraid because it's essentially "just Linux" but with a nice web UI. It's got a great UI for Docker, VMs (KVM) and Linux containers (LXC).
Just got Unraid up and running for the first time today. There's a bit of a learning curve coming from TrueNAS Scale, but it supports my use case: throwing whatever spinning rust I have into one big array. Seems to work alright; the hardware could use additional cooling, so I've shut it off until a new heatsink arrives.
What made you switch from TrueNAS Scale to Unraid, if I may ask? Is it just the ability to mix different drive sizes? I'm currently using TrueNAS Core and thinking about migrating to TrueNAS Scale.
Yes, that’s the only reason. You can mix drive sizes and still have a dedicated parity drive to rebuild from in case things go poorly. I am aware that it’s basically LVM with extra steps, but for a NAS I just want it to be as appliance-like as possible.
Still using Scale at work, though - that use case is different.
Thanks for your response!
My NASes are purely NAS; I prefer a Debian server for... pretty much everything. But my storage only does storage, and I keep those separate (even for an old PC acting as a NAS).
No matter what goes down, I can bring it back up, even with a hardware failure.
I used to do that. I had a QNAP NAS and a small Intel NUC running Arch that would host all my services. I would just mount the NAS folders via Samba into the NUC. Problem is that services can't watch the filesystem for changes. If I add a video to my Jellyfin directory, Jellyfin won't automatically initiate a scan.
Nowadays, I just combine them into one. Just seems simpler that way.
I just have my downloader trigger a scan at completion.
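Roughly like this, for anyone wondering: most download clients have a run-on-completion hook, and Jellyfin exposes a library-refresh endpoint (the URL and API key below are placeholders):

```sh
#!/bin/sh
# on-complete.sh -- hooked into the downloader's "run on completion" option.
# Asks Jellyfin to start a full library scan via its HTTP API.
JELLYFIN_URL="http://192.168.1.10:8096"   # placeholder host/port
API_KEY="changeme"                        # Jellyfin API key (Dashboard -> API Keys)

curl -fsS -X POST -H "X-Emby-Token: ${API_KEY}" "${JELLYFIN_URL}/Library/Refresh"
```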
I have a few Proxmox clusters going; combining it all wouldn't be practical. This way my servers (tiny/mini/micros I've repurposed) stay small with decent-sized SSDs, the big storage lives in two NASes, and a third NAS handles backups.
That sounds like a config issue. I use NFS shares in a similar way, and Plex/*arr/etc have zero issues watching for changes.
I think it's a Samba limitation. Maybe NFS works well for that case.
I went with OMV on older but plenty capable hardware (Intel 4th-7th gen) because 1. I'm cheap, and 2. I could configure it how I wanted.
Glad I went that way, because I was considering "upgrading" to a Synology for a while.
I now have my OMV NAS (currently running on a very-unstressed 2014 Mac mini and a 4-bay drive enclosure), and a separate Proxmox cluster with multiple VMs that use the NAS through NFS shares. Docker-focused VMs are managed by local Dockge instances, which is incredibly handy for visualizing the stacks. Dockge instances can also link to each other, so I can log into any Dockge instance and have everything available.
I can do command line stuff just fine, but I am a visual person, so having all that info right in front of me on one page is very, very helpful.
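If anyone wants to try Dockge, it runs as a container itself; here's a minimal sketch based on the project's documented defaults (port 5001, stacks under /opt/stacks; adjust paths to taste):

```sh
# One Dockge instance per Docker host; it manages compose stacks kept under /opt/stacks
mkdir -p /opt/stacks /opt/dockge
cd /opt/dockge

cat > compose.yaml <<'EOF'
services:
  dockge:
    image: louislam/dockge:1
    restart: unless-stopped
    ports:
      - 5001:5001
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./data:/app/data
      - /opt/stacks:/opt/stacks
    environment:
      - DOCKGE_STACKS_DIR=/opt/stacks
EOF

docker compose up -d   # web UI on port 5001
```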
Oh yeah. I bet you're feeling lucky you didn't switch to Synology given the recent drama where they're locking features down to their branded hard drives, which we all know are just up-charged drives from regular vendors.
What drive bay enclosure are you using btw and how does it connect to your Mac mini?
Never heard of Dockge. I'll have to check it out! I've just been using podman and docker-compose scripts.
The drive bay I'm using is a Sabrent DS-SC4B, connected via USB 3. I'm currently collecting parts for an actual tower build based on a G4560T.
Interesting! I am assuming each drive shows up as an independent drive that you can raid up however you want in software? Man I was looking for something like this, but at the time I was building my NAS, I couldn't find something similar so I just decided to build a whole new machine with enough space to contain the drives themselves. Had I known, I might have gone with this and a NUC or something. How's the performance?
Yeah, each drive shows up as if it were individually attached to the machine. RAID how you want (or don't). I've got three 4TB drives in an 8TB RAID5, one 4TB that contains data from my gaming PC that I'm working on moving to the RAID, and then a separate 8TB external drive that everything on the RAID array is rsynced to for backup (not ideal, but it's something).
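The backup side is just a nightly rsync cron job, something like the sketch below (mount points are placeholders; --delete mirrors removals too, so use it with care):

```sh
# /etc/cron.d/raid-backup -- mirror the RAID array to the external 8TB drive every night
30 2 * * * root /usr/bin/rsync -a --delete /mnt/raid/ /mnt/backup8tb/ >> /var/log/raid-backup.log 2>&1
```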
I'm actually going the other way and building a proper server out of an ancient HP ProLiant ML110 G2 that my dad gave me. Shockingly, it's fully ATX compatible and has 8+ drive bays. I'm just reusing the case, though, and stuffing it with more modern components; it was originally equipped with a Pentium 4 😂 I'm not a fan of the single USB connection for all that data.
Sufficient, I suppose. Limited by the single USB 3 connection. The Mac mini isn't stressed at all, but the RJ45 connector has some fucky Apple weirdness about it that causes it to go to sleep periodically. There's a workaround for it that I applied a while ago, but it still drops out occasionally. That's an Apple-specific problem, though, not the enclosure; the enclosure works fine.
Haha, one of my top concerns at the beginning was form factor. I really could not find a decent 4-bay case at the time that wasn't super hard to build in or a full-blown ATX. I think the closest I found was a Jonsbo N2, but it doesn't give enough space for a decent cooler. What I ended up going with was total overkill: an NZXT H1 with a PCIe NVMe expansion card that gave me 3 extra NVMe slots. So now I have a RAIDZ1 array made up of 4x 4TB SSDs. The overall form factor is nice, but the performance is ridiculously overspecced. My rationale, though, is that the SSDs were cheap enough and I think they'll outlast a regular HDD. I was annoyed at how my WD Reds died within 3-4 years back when I was still using my QNAP.
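Creating a pool like that is basically a one-liner; a rough sketch with placeholder device names (by-id paths survive drive reordering):

```sh
# 4-drive RAIDZ1 pool named "tank"; ashift=12 assumes 4K sectors
zpool create -o ashift=12 tank raidz1 \
  /dev/disk/by-id/nvme-SSD_A /dev/disk/by-id/nvme-SSD_B \
  /dev/disk/by-id/nvme-SSD_C /dev/disk/by-id/nvme-SSD_D
```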
Now that locally hosting AI models is becoming a thing, I am kinda regretting going small form factor because I can't cram GPUs in there. So now I am thinking about maybe getting one of those 4-foot-high small server cabinets, getting a few Sliger CX4170a's, and just building full PCs. I would probably move my main PC into that rack as well. But this is all just thoughts. Budget-wise it's a bit ridiculous, but one can dream!
Dang, if they made an updated one with USB 4, that'd be sick. Heck, I wouldn't even mind if they had multiple USB connections coming out of the thing, I just like the form factor.
Out of curiosity, as an owner of a QNAP NAS: how did it go out? Any signs it was on its last legs? Now that I've used one, the form factor is the only thing it has over most of the options that were out there when I got it.
Nowadays QNAP, Synology, and all the other NAS vendors supposedly offer a lot of extra value with their cloud options, but I find them a sure way to get hacked, based on the average company's investment in security (I work in IT; it is a sad affair sometimes) combined with all the ransomware specifically targeting them due to the old packages they rely on. So I'll build my next system from the ground up, even if the initial cost is higher and the result is uglier.
It was this nasty Intel clock drift bug: https://forum.qnap.com/viewtopic.php?t=157459
Support was completely unresponsive and refused to do anything. They didn't even acknowledge the issue, AFAIK. I tried to add the resistor, but my unit didn't expose the right pins, so I couldn't have soldered them on even if I wanted to. Then I tried mounting my drives in another Linux machine, at which point I realized they were using some custom version of LVM that didn't work with standard Linux. I ended up having to buy a new QNAP NAS just to retrieve my data, and then I returned it.
After that, I swore off proprietary NASes. If I can't easily retrieve data from perfectly good drives, it is an absolute no go.
I've run the same md-raid array in three different machines (ok, I've added and swapped a couple drives, but still). I love that about md-raid. Pull the drives out of one system, stick them into another system with `mdadm` installed, and it recognizes the array immediately.

I have a feeling I may find myself here in time, as I develop this setup more.
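For anyone wanting to try that md-raid portability, re-adopting an array on a new machine is roughly two commands (the config path varies by distro, and /mnt/storage is a placeholder):

```sh
# Scan attached drives for md superblocks and assemble any arrays found
mdadm --assemble --scan

# Persist the discovered array in the new machine's config, then mount it
mdadm --detail --scan >> /etc/mdadm.conf   # /etc/mdadm/mdadm.conf on Debian-based systems
mount /dev/md0 /mnt/storage
```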
If you're familiar with Linux, I highly recommend it. The flexibility is just great, and you can set up whatever dashboards / management tools you need. No need to tie yourself to a specific solution IMHO.
If you're going with Docker containers, a lot of the NAS OSes just hold you back because they don't support all the options that Docker offers. You'll be fighting the system if you need to do any advanced Docker configuration.
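These are the kinds of options plain Docker handles but many NAS GUIs never surface (the image name below is just a placeholder):

```sh
# Host networking, device passthrough, extra capabilities, and tmpfs mounts
# are all one flag each with plain Docker:
docker run -d --name example \
  --network host \
  --device /dev/dri:/dev/dri \
  --cap-add NET_ADMIN \
  --tmpfs /tmp:size=256m \
  some/image:latest
```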
Thank you!
I'm not familiar, yet. My background is MS OSes, but it goes back as far as the CLI days, so I'm confident I'll learn fast.
If you want reliability, keep your NAS as a NAS; don't run applications on the same system. If you screw something up, you'll have to rebuild the whole thing. Run your applications in a VM at the minimum, that way you can just blow it away and start over if it gets fucked, without touching the NAS.
I feel like containers work just as well for the "blow it away" use case, though, and they don't have the VM overhead.
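With compose files and state kept in named volumes or on NAS-backed bind mounts, the "blow it away" flow is basically:

```sh
# Throw the containers away and rebuild them from the compose file.
# Add -v to `down` only if you also want to drop named volumes.
docker compose down
docker compose pull
docker compose up -d
```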