dragonfly4933

joined 2 years ago

Comparing python to rust, rust has far fewer breaking updates than python, and that's a fact. Feature updates can and do break older code in python, whereas in rust this is simply not allowed, with few exceptions.

The language is only allowed to change in compatible ways, except through editions. Every few years a new edition is released, which allows otherwise breaking changes to be implemented, but old and new code can still work together. Developers can rev the edition version whenever they want, and I think cargo can even help migrate code to a new edition.
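As a sketch of how that looks in practice (the crate name is made up; the mechanics are standard Cargo):

```toml
# Cargo.toml -- editions are opted into per crate
[package]
name = "example-crate"   # hypothetical crate
version = "0.1.0"
edition = "2021"         # bump to e.g. "2024" when you're ready
```

Running `cargo fix --edition` will then mechanically rewrite most code that the new edition would otherwise break, and crates on different editions still link together fine.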

Rust isn't perfect, but python has failed to learn lessons that even perl put into practice decades ago.

[–] dragonfly4933@lemmy.dbzer0.com 22 points 3 weeks ago (2 children)

To be honest, I had never heard of it, and it is interesting, but the language isn't the only factor; the ecosystem matters as well. It says it's an alternative to C, so I will assume it can consume C libraries. But that still leaves you using C libraries, which is not a great position to be in if you are trying to avoid C.

If you are looking for something that is actually in use, but not rust, look into Zig. You would still need to use a lot of C libraries, but it at least looks like it has momentum. Not to mention they aim to completely replace libc, which would be genuinely useful and an achievement, since that is the biggest problem C actually has.

I am a rust fan myself, but if you are new to programming it's not a great place to start due to its learning cliff.

[–] dragonfly4933@lemmy.dbzer0.com 2 points 1 month ago (1 children)

Maybe, but I never mentioned years into the future. Of course technology will improve. The hardware will get better and more efficient, and the algorithms and techniques will improve.

But as it stands now, I still think what I said is true. We obviously don't have exact numbers, so I can only speculate.

Having lots of memory is a big part of inference, so I was going to reply that memory prices have stopped falling at their historical rate, but then I found this, which is interesting:

https://ourworldindata.org/grapher/historical-cost-of-computer-memory-and-storage?time=2020..latest

The cost fell to about 0.1x from 2000 to 2010; from 2010 to 2020 it only fell to about 0.23x. 2020 to 2023 shows roughly another halving of the price, which is still a pretty good rate.
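Back-of-envelope, treating those multipliers as exact, the equivalent per-year price factors are:

$$0.10^{1/10} \approx 0.79 \qquad 0.23^{1/10} \approx 0.86 \qquad 0.50^{1/3} \approx 0.79$$

So roughly 21% cheaper per year through the 2000s, about 14% per year through the 2010s, and back to roughly 21% per year over 2020 to 2023.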

The available memory is still only one part. The speed of the memory and the compute connected to it also plays a big part in how these current systems work.

Of the things people complain about that systemd brings in, this is among the least offensive. It makes sense for an init system to provide this functionality, since spawning system processes is exactly its job.
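Concretely (as I understand the feature being discussed), this is inetd-style per-connection activation; a minimal sketch of what it looks like in systemd, with made-up unit names and a hypothetical handler binary:

```ini
# echo.socket -- with Accept=yes, systemd forks one service
# instance per incoming connection, inetd-style
[Socket]
ListenStream=7777
Accept=yes

[Install]
WantedBy=sockets.target

# echo@.service -- the matching template, instantiated per connection
# (echo-handler is a hypothetical binary that talks on stdin/stdout)
[Service]
ExecStart=/usr/local/bin/echo-handler
StandardInput=socket
StandardOutput=socket
```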

That said, on modern systems it doesn't make much sense to use such features. Spawning a new process per request, or on demand, doesn't gain you much and does reduce performance.

Spawning a new process is pretty slow on most OSes compared to other operations. There is also added latency while the new process loads, whereas most software these days can handle a new request in more efficient ways.

I think you can also try to reuse the same process for multiple requests, stopping it only once it has been quiet for a while. But this still doesn’t really help much.

Historically, I think it was used to try to save memory, but today it's a bigger nuisance than it is worth. I just checked how much memory sshd is using, and it is less than 10 MB:

    total kB    8508    6432    1160

And to be clear, you theoretically can't save much memory, if any, doing this, because you must have enough memory available to run the process anyway; otherwise bad things happen, or some other process gets OOM-killed.

Additionally, spawning a new process per request can be an availability risk: an attacker could open a series of very slow connections to a server that spawns a process per request, depleting its resources.

With all that said, I wouldn't say there are no uses at all for this; it can be useful for very minimal network-connected software that does some basic work on a secure network.

[–] dragonfly4933@lemmy.dbzer0.com 6 points 1 month ago (5 children)

If the product costs that much to run, and most users aren’t abusing their access, it’s possible the product isn’t profitable at any price that enough users are willing to pay.

[–] dragonfly4933@lemmy.dbzer0.com 2 points 2 months ago (1 children)

You would still need to pass the GPU through to the VM, but this can eliminate the need to plug the GPU output into another device or use a dedicated monitor.

I have never used it, but I know it is pretty common.

[–] dragonfly4933@lemmy.dbzer0.com 3 points 2 months ago (3 children)

This might be of use to you:

https://looking-glass.io/

You might still need a dummy HDMI/DP plug/adapter.

[–] dragonfly4933@lemmy.dbzer0.com 3 points 2 months ago

Boot issues on Linux are like most of the other problems Linux has: there is no standard way to do things, so people invent their own ways, and that results in the problems we see today. This doesn't just apply to booting; it notably includes DNS and network management too. Combined with the fact that it's a low-level thing people don't want to deal with, it gets left to rot. Few understand it, which leads to frustration.

Grub isn't a simple tool because it isn't solving only simple problems. A simple situation would be booting a VM, where something like systemd-boot is probably preferable to grub, since the heavy lifting should already have been done by the host OS at that point.

Also, it's usually not grub itself that is broken (grub did load, after all); it's something else, like a bad or botched update that breaks support for some hardware, or a messed-up initramfs. I frequently encounter servers that suddenly stop booting and get stuck either in the initramfs or at grub, and selecting an older boot entry usually gets me back into the OS proper. I have also noticed it's most often Ubuntu that gets messed up, while RHEL and friends are much less likely to break. Breakage on Arch is usually the result of a specific user error or some incompatibility that was introduced.

In your case, the issue could have been (just guessing) that a new kernel was installed but the config tool was never run to create the new references. It's not exactly grub's fault if the thing it was supposed to point to no longer exists. Simpler setups like Arch's don't have this problem at all, since the kernel is always overwritten in place, so the references are unlikely to ever break, but that comes with its own pretty annoying tradeoffs.
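If that was it, the fix is usually just rerunning the config tool by hand, something like:

```sh
# re-scan installed kernels and rewrite the boot entries
grub-mkconfig -o /boot/grub/grub.cfg
# (Debian/Ubuntu wrap this as update-grub)
```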

I didn't think to check the number of patches, but as you can see, a lot of those patches have nothing to do with x86 specifically, and some exist just to implement or change behavior for their distro's scripts. If you check Arch, it carries nowhere near as many patches and still works fine. https://gitlab.archlinux.org/archlinux/packaging/packages/grub

You are correct that grub is probably the better part of an OS, but so are most other bootloaders that actually implement useful features (a UKI literally contains Linux, for example). systemd-boot implements few extra features: no additional filesystems, no LVM, for example. On Linux that's not the end of the world, since you can pack much more into the initramfs to support more filesystems and other interesting behaviors.

I can definitely agree that grub is not very actively maintained, and there are even some outstanding bugs, and fairly important and reasonable feature requests, sitting with ready-to-apply patches. But grub is also a mostly complete project. Most things boot fine with it as is, and it's not as if the EFI spec is constantly changing and requiring regular updates. It's probably also fair to say that working on grub isn't a walk in the park, given how low-level it is.

To be clearer about how I implemented my little scheme: neither grub nor a script actually syncs anything. I have two completely independent ESPs that are not synchronized automatically in any way. But because the grub EFI binary supports btrfs, it can just point at /boot inside the btrfs filesystem, which is where most of the configuration actually lives. This way, the dual ESPs are generated once and occasionally updated whenever I feel like it, and /boot can continue to be managed by the mainline scripts without any customization, such as mkconfig and whatever initramfs build tool, since the mirroring is completely transparent.

It simply is not possible to replicate this without grub, since no other bootloader (to my knowledge) supports btrfs or any other raid-capable abstraction. You could get close by adding extra scripts to keep the configs and images synced, but that is another point of failure.
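Roughly, the layout looks like this (device names and details are made up):

```
/dev/sda1      ESP #1: btrfs-capable grub EFI binary, generated once
/dev/sdb1      ESP #2: independent copy of the same, never synced
/dev/sd[a-d]2  btrfs raid1 pool; /boot lives here, so grub.cfg,
               kernels, and initramfs images get mirrored for free
```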

[–] dragonfly4933@lemmy.dbzer0.com 1 points 2 months ago

The primary issue I found with that software is that there is no way to bypass certificate and similar issues.

[–] dragonfly4933@lemmy.dbzer0.com 5 points 2 months ago (2 children)

GRUB is still the standard bootloader in physical deployments because it is the most likely to work and supports most of the features you might want in a bootloader.

UKI based booting is interesting since it seems like it might support even more features. But the last time I tried to test it, there wasn’t a ton of documentation on it and the software still seemed a bit green and inflexible.

For example, my main computer right now has a completely redundant boot process. I have two disks that each have an EFI system partition, and the root filesystem is btrfs raid1 across four disks. This was very easy to set up and completely supported by grub with no custom configuration needed. The only slightly tricky part was that installing the second EFI copy required an extra flag.
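From memory it was something along these lines (mount points are made up, and the exact flag is my best recollection, not gospel):

```sh
# first ESP, registered with the firmware as usual
grub-install --target=x86_64-efi --efi-directory=/boot/efi \
    --bootloader-id=GRUB

# second ESP on the other disk; --no-nvram leaves the firmware
# boot entries alone so this copy stays an independent fallback
grub-install --target=x86_64-efi --efi-directory=/boot/efi2 \
    --bootloader-id=GRUB-fallback --no-nvram
```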

[–] dragonfly4933@lemmy.dbzer0.com 3 points 3 months ago (6 children)

That's fine and easy on the desktop or in a web browser, but on mobile devices it is not quite as easy. You would need to use either a hacked version of the app or a third-party app.

[–] dragonfly4933@lemmy.dbzer0.com 5 points 3 months ago

Missouri is already ignoring certain federal law, so it might not matter.

 

I am currently looking for a way to easily store and run commands, usually for syncing files between two deeply nested directories whenever I want.

So far I found these projects:

Other solutions:

- Bash history (Ctrl+R)
- Bash aliases
- Bash functions (see the sketch below)
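For the functions approach, what I have in mind is something like this in ~/.bashrc (the paths are placeholders):

```sh
# wrap the long rsync invocation in a short, memorable name
sync-notes() {
    rsync -av --delete \
        "$HOME/projects/deeply/nested/src/notes/" \
        "$HOME/archive/other/deeply/nested/dst/notes/"
}
```

Then it's just `sync-notes` from anywhere, but it doesn't scale great once you have a pile of these.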

What do you guys use?
