this post was submitted on 05 Jun 2023
86 points (98.9% liked)

Lemmy

Everything about Lemmy: bugs, gripes, praises, and advocacy.

For discussion about the lemmy.ml instance, go to !meta@lemmy.ml.

With forewarning about a huge influx of users, you know Lemmy.ml will go down. Even if people go to https://join-lemmy.org/instances and disperse among the great instances there, the servers will go down.

Ruqqus had this issue too. Every time there was a mass exodus from Reddit, Ruqqus would go down, and hardly reap the rewards.

Even if it's not sustainable, just for one month, I'd like to see Lemmy.ml drastically boost their server power. If we can raise money as a community, what kind of server could we get for $100? $500? $1,000?

[–] nutomic@lemmy.ml 40 points 2 years ago (9 children)

The site currently runs on the biggest VPS available on OVH. Upgrading further would probably require migrating to a dedicated server, which would mean some downtime. I'm not sure if it's worth the trouble; the site will go down sooner or later anyway if millions of Reddit users try to join.

[–] Pisck@lemmy.ml 27 points 2 years ago

There will either be an hour of downtime to migrate and grow or days of downtime to fizzle.

I love that there's an influx of volunteers, including SQL experts, to mitigate scaling issues for the entire fediverse, but those improvements won't be ready in time. Things are overloading already, and there's less than a week before things increase 1,000-fold, maybe more.

[–] OsrsNeedsF2P@lemmy.ml 19 points 2 years ago (3 children)
8 vCore
32 GB RAM

😬

2 follow-ups:

  • Can we replace Lemmy.ml with Join-lemmy.org when Lemmy.ml is overloaded/down?
  • Does LemmyNet have any plans to become compatible with Kubernetes (or similar horizontal scaling techniques)?
[–] makingStuffForFun@lemmy.ml 11 points 2 years ago

We need team Selfhosted and team Networking to represent. It would be amazing to see some community support in scaling Lemmy up.

[–] poVoq@slrpnk.net 6 points 2 years ago (1 children)

Some DNS failover for lemmy.ml to point to join-lemmy.org might indeed be cool 🤔

[–] tmpod@lemmy.pt 2 points 2 years ago

Yeah, I was thinking of a DNS-based solution as well. Probably the easiest and most effective way to do it?

[–] nutomic@lemmy.ml 5 points 2 years ago (1 children)

Can we replace Lemmy.ml with Join-lemmy.org when Lemmy.ml is overloaded/down?

I don't think so; when the site is overloaded, clients can't reach it at all.

Does LemmyNet have any plans to become compatible with Kubernetes (or similar horizontal scaling techniques)?

It should be compatible if someone sets it up.

[–] SemioticStandard@lemmy.ml 8 points 2 years ago (1 children)

You could configure something like a Cloudflare Worker to throw up a page directing users elsewhere whenever health checks fail.
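For illustration, a minimal sketch of what such a Worker could look like: pass each request through to the origin and fall back to a temporary redirect to join-lemmy.org when the origin times out or returns a server error. The 5-second timeout and the redirect target are arbitrary choices for this sketch, not anything lemmy.ml actually runs.

```typescript
// Hypothetical Cloudflare Worker: proxy to the origin, redirect elsewhere when it looks down.
export default {
  async fetch(request: Request): Promise<Response> {
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), 5000); // 5 s budget for the origin
    try {
      const origin = await fetch(request, { signal: controller.signal });
      if (origin.status >= 500) {
        // Origin is up but erroring; send people to the instance list instead.
        return Response.redirect("https://join-lemmy.org", 302);
      }
      return origin;
    } catch {
      // Timed out or unreachable.
      return Response.redirect("https://join-lemmy.org", 302);
    } finally {
      clearTimeout(timer);
    }
  },
};
```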

[–] nutomic@lemmy.ml 18 points 2 years ago (4 children)

Then Cloudflare would be able to spy on all the traffic, so that's not an option.

[–] SemioticStandard@lemmy.ml 6 points 2 years ago (2 children)

spy on all the traffic

That's... not how things work. Everyone has their philosophical opinions, so I won't attempt to argue the point, but if you want to handle scale and distribution, you're going to have to start thinking differently; otherwise you're going to fail when load really starts to increase.

[–] Cadende@lemmygrad.ml 10 points 2 years ago

Cloudflare does have the ability to spy on traffic, though; they hold the SSL keys.

[–] wagesof@links.wageoffsite.com 3 points 2 years ago

You could run an interstitial proxy yourself with a little health checking. The server itself doesn't die, just the webapp/DB. nginx could be stuck in front (if it's not already there) with a temporary redirect if the site is timing out.
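As a rough sketch of that idea (written as a small Node script here purely for illustration, rather than nginx config): a background probe marks the site unhealthy when it stops answering, and while it is unhealthy every request gets a temporary redirect to join-lemmy.org. The upstream address, probe path, port, and timings are all placeholder assumptions, and the healthy-path proxying is left to nginx or whatever real proxy sits in front.

```typescript
// Sketch of the health-check/fallback half of an interstitial proxy.
import http from "node:http";

const UPSTREAM = "http://127.0.0.1:8536";   // placeholder: local Lemmy webapp/API
const FALLBACK = "https://join-lemmy.org";
let healthy = true;

// Probe the webapp every 10 s with a short timeout.
async function probe(): Promise<void> {
  try {
    const res = await fetch(`${UPSTREAM}/api/v3/site`, { signal: AbortSignal.timeout(3000) });
    healthy = res.ok;
  } catch {
    healthy = false; // timeout or connection error
  }
}
setInterval(probe, 10_000);

http.createServer((req, res) => {
  if (!healthy) {
    // Temporary redirect so browsers don't cache it once the site recovers.
    res.writeHead(302, { Location: FALLBACK }).end();
    return;
  }
  // When healthy, the real proxy (e.g. nginx) would forward to the webapp;
  // that part is intentionally omitted from this sketch.
  res.writeHead(502).end("healthy path not implemented in this sketch");
}).listen(8080);
```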

[–] Cadende@lemmygrad.ml 3 points 2 years ago* (last edited 2 years ago)

A better option for a simple use case like that is using something from your DNS provider. Depending on who you use, they may have a health-check service, with no access to user data, that can simply ping a URL and, if it fails hard enough, start redirecting traffic to join-lemmy.org.

I think Constellix has it, though I'm not necessarily recommending them specifically.

[–] sam_uk@slrpnk.net 2 points 2 years ago

How about https://deflect.ca/? They could still spy, but probably less badly?

[–] Lobstronomosity@lemmy.ml 14 points 2 years ago* (last edited 2 years ago) (1 children)

I'm sure you know this, but getting progressively larger servers is not the only way; why not scale horizontally?

I say this as someone with next to no idea how Lemmy works.

[–] nutomic@lemmy.ml 33 points 2 years ago (1 children)

It's better to optimize the code so that all instances benefit.

[–] Lobstronomosity@lemmy.ml 13 points 2 years ago* (last edited 2 years ago) (1 children)

Is it possible to make Lemmy (the system as a whole) compatible with horizontally scaled instances? I don't see why an instance has to be confined to one server, and this would allow for very large instances that can scale to meet demand.

Edit: just saw your other comment https://lemmy.ml/comment/453391

[–] nutomic@lemmy.ml 20 points 2 years ago (3 children)

It should be easy once WebSocket is removed: sharded Postgres and multiple instances of the frontend/backend. Though I don't have any experience with this myself.
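Not Lemmy's actual design, just a generic sketch of what "sharded Postgres plus multiple stateless frontend/backend replicas" could look like from the application side: every replica routes a query to a shard chosen deterministically from a stable key (here, the community id). The hosts, shard count, and table/column names are placeholders.

```typescript
// Hypothetical sharding sketch: route queries to one of several Postgres shards
// based on a stable key such as the community id.
import { Pool } from "pg";

// One pool per shard; hosts are placeholders.
const shards = [
  new Pool({ host: "pg-shard-0.internal.example", database: "lemmy" }),
  new Pool({ host: "pg-shard-1.internal.example", database: "lemmy" }),
  new Pool({ host: "pg-shard-2.internal.example", database: "lemmy" }),
];

// Deterministically map a community id to a shard.
function shardFor(communityId: number): Pool {
  return shards[communityId % shards.length];
}

// Any stateless backend replica can run this; where the data lives depends only on the key.
async function postsForCommunity(communityId: number) {
  const { rows } = await shardFor(communityId).query(
    "SELECT id, name FROM post WHERE community_id = $1 ORDER BY published DESC LIMIT 20",
    [communityId],
  );
  return rows;
}
```

A real setup would likely use consistent hashing or a lookup table so shards can be rebalanced without moving everything, but the routing idea is the same.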

[–] wiki_me@lemmy.ml 13 points 2 years ago

I think that is unavoidable. Look at the most popular subreddits: they can get something like 80 million upvotes and 66k comments per day. Do you think a single server can handle that?

Splitting communities just to make things technically easier is not good UX.

[–] bobpaul@fosstodon.org 12 points 2 years ago

@nutomic @Lobstronomosity In one of the comments I thought I saw that the biggest CPU load was due to image resizing.

I think it might be easier to split the image resizer off into its own worker that can run independently on one (or more) external instances. Example: the client uses the API to get a temporary access token for upload, the client uploads to one of many image resizers instead of the main API, and the image resizer sends the output back to the main API.

Then your main instance never sees the original image.
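The resizer side of that flow might look roughly like the sketch below: a small standalone service accepts the raw upload plus the temporary token, resizes and re-encodes locally, and forwards only the result to the main API. The endpoint path, token header, target size, and the choice of the sharp library are all assumptions made up for the example.

```typescript
// Hypothetical standalone image-resizer worker for the flow described above.
import http from "node:http";
import sharp from "sharp";

const MAIN_API = "https://lemmy.example/api/v3/image"; // placeholder endpoint on the main instance

http.createServer(async (req, res) => {
  if (req.method !== "POST") {
    res.writeHead(405).end();
    return;
  }
  const token = req.headers["x-upload-token"]; // hypothetical token issued by the main instance
  const chunks: Buffer[] = [];
  for await (const chunk of req) chunks.push(chunk as Buffer);

  try {
    // Resize and re-encode locally, so the main instance never sees the original.
    const resized = await sharp(Buffer.concat(chunks))
      .resize({ width: 1024, withoutEnlargement: true })
      .jpeg({ quality: 85 })
      .toBuffer();

    // Forward only the small result (plus the token for validation) to the main API.
    const upstream = await fetch(MAIN_API, {
      method: "POST",
      headers: { "content-type": "image/jpeg", "x-upload-token": String(token) },
      body: resized,
    });
    res.writeHead(upstream.status).end(await upstream.text());
  } catch {
    res.writeHead(400).end("could not process image");
  }
}).listen(8686);
```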

[–] ccunix@lemmy.ml 8 points 2 years ago (1 children)

There is already a docker image so that should not be too hard. I'd be happy to set something up, but (as others have said) the DB will hit a bottleneck relatively quickly.

I like the idea of splitting off the image processing.

[–] nutomic@lemmy.ml 2 points 2 years ago (1 children)

Image processing isn't causing any noticeable CPU load.

[–] ccunix@lemmy.ml 2 points 2 years ago (1 children)

I saw someone say it was; obviously I have no access to the data.

[–] nutomic@lemmy.ml 1 points 2 years ago

Maybe on another instance, but not on lemmy.ml.

[–] pe1uca@lemmy.one 7 points 2 years ago (3 children)

What's the current bottleneck?

[–] dessalines@lemmy.ml 38 points 2 years ago (1 children)

SQL. We desperately need SQL experts. It's been just me for years, and my SQL skills are pretty terrible.

[–] Valmond@lemmy.ml 2 points 2 years ago

Put the whole DB in RAM :-)

Reminds me of optimization work, lots of EXPLAIN and JOIN pain, on my old MySQL multiplayer game server lol. A shame I'm not an expert...

[–] poVoq@slrpnk.net 10 points 2 years ago (4 children)

There are some SQL database optimisations being discussed right now, and apparently the picture resizing on upload can be quite CPU-heavy.

[–] itsmikeyd@lemmy.ml 21 points 2 years ago (2 children)

SQL dev here. Happy to help if you can point me in the direction of said conversation. My expertise is more in ETL processes for building data warehouses and migrating systems, but maybe I can help?

[–] veroxii@lemmy.world 19 points 2 years ago (1 children)

I've been helping on the SQL GitHub issue, and I think the biggest performance boost would be to separate the application and PostgreSQL onto different servers. Maybe even use a hosted PostgreSQL temporarily, so you can scale the DB at the press of a button. The app itself appears to be negligible in terms of requirements (except the picture resizing - which can also be offloaded).

But running a dedicated DB on a dedicated server - as close to the bare metal as possible - gives by far the best performance, and you can scale it up for more connections. Our production database at my data analytics startup runs a PostgreSQL instance on an i9 server with 16 cores, 128 GB RAM, and a fast SSD. We have 50 connections set up, and we run pgbouncer to allow up to 500 client connections to share those 50. It seamlessly runs heavy reporting and dashboards for more than 500 business customers with billions of rows of data, and costs us less than US$200 per month at https://www.tailormadeservers.com/.
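For context, pgbouncer sits between the application and Postgres and multiplexes many client connections onto a small pool of real ones. Lemmy's backend is Rust (Diesel), so the snippet below is only a generic illustration of pointing an application's pool at pgbouncer instead of Postgres directly; the host, credentials, pool size, and queried table are placeholders.

```typescript
// Generic illustration: connect through pgbouncer (default port 6432) rather than Postgres (5432).
import { Pool } from "pg";

const pool = new Pool({
  host: "db.internal.example", // the dedicated database server
  port: 6432,                  // pgbouncer, which shares a small number of real connections
  database: "lemmy",
  user: "lemmy",
  password: process.env.PGPASSWORD,
  max: 20,                     // client connections from this app instance
});

// Ordinary parameterized queries work unchanged through pgbouncer.
const { rows } = await pool.query(
  "SELECT id, name FROM community ORDER BY id LIMIT $1",
  [10],
);
console.log(rows);
```

One caveat with pgbouncer's transaction-pooling mode: session-level state (SET, LISTEN/NOTIFY, prepared statements held across transactions) should be avoided, because consecutive queries may land on different server connections.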

[–] Cadende@lemmygrad.ml 7 points 2 years ago

And I think the biggest performance boost would be to separate the application and postgresql onto different servers.

I think hexbear.net (a site running an older Lemmy fork) is working on this in conjunction with moving back to a modern Lemmy version.

[–] MDKAOD@lemmy.ml 5 points 2 years ago* (last edited 2 years ago) (1 children)

apparently the picture resizing on upload can be quite CPU heavy

This suggestion probably won't help with a hosted VPS, but the nvJPEG library pushes crazy theoretical numbers for image resizing.

Maybe this could be worth investigating?

[–] poVoq@slrpnk.net 8 points 2 years ago* (last edited 2 years ago)

Probably not, but it does mention a more general CUDA-based solution that might be interesting to add to Pictrs. I could, for example, move my Pictrs instance onto a server that does have an older Nvidia GPU to accelerate things (which I use for Libretranslate and some other less demanding ML stuff).

Edit: OK, it looks like the resizing is only supported in Pictrs 0.4.x anyway, which most Lemmy instances are not using yet. However, this seems to use regular ImageMagick in the background, so chances are quite high that it can be made to work with OpenCL: https://imagemagick.org/script/opencl.php

[–] esturniolo@lemmy.ml 5 points 2 years ago (1 children)

And maybe the bandwidth. Serving thousands and thousands needs at minimum 1 Gbps.

[–] nutomic@lemmy.ml 6 points 2 years ago

It's mostly text, so bandwidth shouldn't be a problem.

[–] Ashwag@lemmy.ca 3 points 2 years ago (1 children)

So, reading this correctly, it's currently a hosting bill of 30 euros a month?

[–] milan@discuss.tchncs.de 3 points 2 years ago* (last edited 2 years ago) (1 children)

No, that's the 8 GB memory option... if it's the biggest, it should be around €112. Meanwhile I keep wondering if I should let Lemmy stay on the current KVM (which is similarly specced, but with dedicated cores and such) or if it's better to move it to one of my dedis, just in case... well... we'll see xD

[–] nutomic@lemmy.ml 3 points 2 years ago (16 children)

It's the one for 30 euros; I'm not seeing any VPS for 112. Maybe that's a different type of VPS?

[–] elouboub@kbin.social 3 points 2 years ago

Is it running in a single Docker container, or is it spread out across multiple containers? Maybe with docker-machine or Kubernetes with horizontal scaling, it could absorb users without issue - well, except maybe the cost. OVH has managed Kubernetes.
