this post was submitted on 26 Jul 2025
243 points (98.4% liked)

Linux

This is very exciting. Here is the APK I downloaded. And the associated discussion.

It even seems to support stylus input already, which is especially exciting given that there has been talk of porting Rnote to Android.

top 50 comments
[–] dil@lemmy.zip 5 points 1 day ago

So GIMP on Android?

[–] Mwa@thelemmy.club 3 points 1 day ago

less gooo!!

[–] A_norny_mousse@feddit.org 22 points 3 days ago (3 children)

Hasn't KDE/Plasma (or Qt) had this for years?

[–] klangcola@reddthat.com 49 points 3 days ago

Yes, and a few KDE apps work great on Android.

But more FOSS is more better, so GTK on Android is great news for both Android users and GTK developers

[–] schnurrito@discuss.tchncs.de 11 points 3 days ago (4 children)

Yes, e.g. Krita has long been available for Android tablets.

darktable on Android would be awesome; I don't think there's currently any good FOSS RAW development software for Android.

[–] k0e3@lemmy.ca 30 points 3 days ago* (last edited 3 days ago) (1 children)

I can't read the discussion because some damn Canadian neko waifu thinks I'm a bot.

[–] possiblylinux127@lemmy.zip 22 points 3 days ago (3 children)

It does that for all clients

You just need to wait for the proof of work to complete

[–] PumaStoleMyBluff@lemmy.world 7 points 2 days ago (1 children)

It actually doesn't do that for all clients, according to the docs

It'll let you straight through if your user agent doesn't contain "Mozilla"
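For the curious, the check described in the docs amounts to something like this (a rough sketch in Go, not Anubis's actual code):

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

// Only clients whose User-Agent contains "Mozilla" get sent to the
// proof-of-work challenge; everything else (curl, feed readers, most
// plain scripts) passes straight through.
func challengeMiddleware(next, challenge http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if strings.Contains(r.UserAgent(), "Mozilla") {
			challenge.ServeHTTP(w, r) // browser-like client: interpose the challenge
			return
		}
		next.ServeHTTP(w, r) // non-browser UA: no challenge
	})
}

func main() {
	site := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "the real site")
	})
	challenge := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "solve this proof-of-work first")
	})
	http.ListenAndServe(":8080", challengeMiddleware(site, challenge))
}
```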

[–] Linearity@infosec.pub 2 points 2 days ago (2 children)

Whaaaat? Why only look for Mozilla?

All normal web browsers have Mozilla in the user agent, so it's kinda weird to only check for that. Chrome, Safari, and Firefox all start with Mozilla/5.0.

[–] LunaChocken@programming.dev 2 points 1 day ago

Because it's super common in web scrapers

[–] sxan@midwest.social 10 points 2 days ago (7 children)

You just need to wait for the proof of work to complete

I will never find the irony in this to be anything other than pathetic.

The one legitimate grievance against Bitcoin and other PoW cryptocurrencies - the wasteful burning of energy on throw-away calculations simply to prove the work has been done, the environmental cost of meaningless CPU-cycle waste at distributed scale, purely for the sake of wasting CPU cycles - has been so eagerly embraced by people who are largely doing it to foil another energy-wasteful infotech invention.

It really is astonishing.

[–] possiblylinux127@lemmy.zip 6 points 2 days ago (2 children)

Do you have a better way? It is way more private than anything else I've seen.

From an energy usage perspective it also isn't bad. Spiking the CPU for a few seconds is minor, especially compared to other tasks.

[–] jasory@programming.dev 2 points 18 hours ago

The Mersenne forums have users solve an obscure (to a non-mathematician) but relatively simple number theory problem.

[–] sxan@midwest.social 1 points 1 day ago (4 children)

Yeah, tarpits. Or even just intentionally, fractionally lagging the connection, or putting a delay on the response for some MIME types. Delays don't consume nearly as much processing as PoW. Personally, I like tarpits that trickle out content like a really slow server, behind hidden URLs that users are not likely to click on. These are about the least energy-demanding solutions that have a chance of fooling bots; a true, no-response tarpit would use less energy, but is easily detected by bots and terminated.
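A minimal sketch of that kind of trickle tarpit (illustrative Go, with a made-up path and timings):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// A hidden endpoint, linked only where humans won't click, that dribbles
// out a response a few bytes per second so a crawler sits on the
// connection for minutes while costing the server almost nothing.
func tarpit(w http.ResponseWriter, r *http.Request) {
	flusher, ok := w.(http.Flusher)
	if !ok {
		return
	}
	w.Header().Set("Content-Type", "text/html")
	for i := 0; i < 600; i++ { // roughly ten minutes at one word per second
		if _, err := fmt.Fprint(w, "lorem "); err != nil {
			return // crawler gave up and disconnected
		}
		flusher.Flush()
		time.Sleep(time.Second)
	}
}

func main() {
	// a path real users never see, reachable only via a hidden link
	http.HandleFunc("/archive-full/", tarpit)
	http.ListenAndServe(":8080", nil)
}
```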

Proof of work is just a terrible idea, once you've accepted that PoW is bad for the environment, which it demonstrably is.

[–] possiblylinux127@lemmy.zip 2 points 17 hours ago

Tarpits rely on crawlers being dumb, and that isn't necessarily the case with a lot of stuff on the internet. It isn't uncommon for a bot to render a page and then only process the visible stuff.

Also, I've yet to see any evidence that Anubis is any worse for the environment than any basic computer function.

[–] solardirus@slrpnk.net 4 points 1 day ago (2 children)

Tarpits suck. Not worth the implementation or overhead. Instead, the better strat is to pretend the server is down with a 503 code, or that the URL is invalid with a 404, so the bots stop clinging to your content.

Also, we already have non-PoW captchas that don't require JavaScript. See go-away for these implementations.
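A sketch of that "pretend we're down" idea (illustrative Go; isFlagged is a placeholder, not any project's real API):

```go
package main

import (
	"fmt"
	"net/http"
)

// isFlagged is a placeholder: plug in whatever blocklist, IP-range list,
// or failed-challenge record you already keep.
func isFlagged(r *http.Request) bool {
	return false
}

// Flagged clients get a bare 503 (a 404 works the same way), so
// well-behaved crawlers back off and scrapers conclude there's nothing here.
func fakeOutage(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if isFlagged(r) {
			http.Error(w, "service unavailable", http.StatusServiceUnavailable)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	site := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "the real site")
	})
	http.ListenAndServe(":8080", fakeOutage(site))
}
```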

[–] possiblylinux127@lemmy.zip 1 points 17 hours ago (1 children)

Good luck detecting bots...

[–] solardirus@slrpnk.net 1 points 7 hours ago* (last edited 7 hours ago)

It's actually not that hard. Most of these bots use a predictable setup of headless browsers with no JS or only minimal JS rendering to scrape the page. Fully deployed browser instances are demonstrably harder to scale, and basically impossible to detect without behavioral pattern detection or sophisticated captchas that also cause friction for users.

The problem with bots has never rested solely on detectability. It's about:

A. How much you inconvenience the user to detect them

B. Impacting good or acceptable bots like archival, curl, custom search tools, and loads of other totally benign use cases.
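As a concrete picture of the "no JS or minimal JS rendering" point above, a common shape of that kind of check looks roughly like this (an illustrative Go sketch, not go-away's actual mechanism):

```go
package main

import (
	"fmt"
	"net/http"
)

// First visit: serve a tiny page whose inline script sets a cookie through a
// normal browser API and reloads. Clients that never execute JavaScript never
// present the cookie, so they never reach the real site.
func browserCheck(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if _, err := r.Cookie("saw_js"); err == nil {
			next.ServeHTTP(w, r) // cookie present: the client ran the script
			return
		}
		w.Header().Set("Content-Type", "text/html")
		fmt.Fprint(w, `<html><body><script>
document.cookie = "saw_js=1; path=/";
location.reload();
</script>Checking your browser...</body></html>`)
	})
}

func main() {
	site := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "the real site")
	})
	http.ListenAndServe(":8080", browserCheck(site))
}
```

This is trivially bypassed by a bot that just sets the cookie, which is why real implementations layer more signals on top; but it filters the dumb bulk scrapers without any proof-of-work.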

[–] sxan@midwest.social 1 points 22 hours ago (1 children)

There is negligible server overhead for a tarpit. It can be merely a script that listens on a socket and never replies, or it can reply with Markov-generated HTML at a few characters a second, taking minutes to load a full page. This has almost no overhead. Implementation is adding a link to your page headers and running the script. It's not exactly rocket science.

Which part of that is overhead, or difficult?

[–] solardirus@slrpnk.net 1 points 7 hours ago* (last edited 7 hours ago)

It certainly is not negligible compared to static site delivery, which can be breezily cached, unlike on-the-fly tarpits. Even traditional static sites are sometimes getting their asses kicked by these bots. And you want to make that worse by having the server generate text with Markov chains for each request? The point for most operators is reducing the sheer bandwidth and CPU cycles being eaten up by these bots hitting every endpoint.

Many of these bots are designed to stop hitting endpoints when they return codes that signal they've flattened it.

Tarpits only make sense from the perspective of someone trying to cause monetary harm to an otherwise uncaring, VC-funded mob with nigh-endless amounts of cash to burn. Chances are your middling attempt at causing them friction isn't, on its own, actually going to get them to leave you alone.

Meanwhile you burn significant amounts of resources, and traffic is still stalled for normal users. This is not the kind of method a server operator who actually wants a dependable service deploys to try to get up and running again. You want the bots to hit nothing even slightly expensive (read: preferably something minimal you can cache, or mostly cache) and to never come back.

A compromise between these two things is what Anubis is doing. It inflicts maximum pain (on those attempting to bypass it - otherwise it just fails) for minimal cost, by creating a small seed (more trivial than even a Markov chain -- it's literally just a SHA-256 hash) that the client then has to solve a challenge based on. It's nice, but certainly not my preference: I like go-away because it leverages browser APIs these headless agents don't use (and consequently lets JS-less browsers work) for this class of problem. Then, if you have a record of known misbehavers (their IP ranges, etc.), or some other scheme to keep track of failed challenges, you hit them with fake server-down errors.

Markov chains and slow loading sites are costing you material just to cost them more material.
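To make the "it's literally just a SHA-256" part concrete, the flavour of challenge is roughly this (an illustrative Go sketch, not Anubis's exact scheme or parameters):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strconv"
	"strings"
)

// The server hands out a random seed and a difficulty; the client grinds
// nonces until sha256(seed + nonce) starts with `difficulty` hex zeros;
// the server re-hashes once to verify.
func solve(seed string, difficulty int) (nonce int) {
	prefix := strings.Repeat("0", difficulty)
	for ; ; nonce++ {
		sum := sha256.Sum256([]byte(seed + strconv.Itoa(nonce)))
		if strings.HasPrefix(hex.EncodeToString(sum[:]), prefix) {
			return nonce
		}
	}
}

func verify(seed string, difficulty, nonce int) bool {
	sum := sha256.Sum256([]byte(seed + strconv.Itoa(nonce)))
	return strings.HasPrefix(hex.EncodeToString(sum[:]), strings.Repeat("0", difficulty))
}

func main() {
	seed, difficulty := "random-per-client-seed", 4 // ~65k hashes on average to solve
	n := solve(seed, difficulty)                    // the client-side work (in the browser, for Anubis)
	fmt.Println("nonce:", n, "valid:", verify(seed, difficulty, n))
}
```

The asymmetry is the whole point: the client grinds through tens of thousands of hashes, the server verifies with a single one.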

[–] possiblylinux127@lemmy.zip 3 points 1 day ago* (last edited 1 day ago)

The problem is that none of those things work well. They don't stop the bots from hammering your site; crawlers will just time out and move on.

[–] JadedBlueEyes@programming.dev 2 points 1 day ago* (last edited 1 day ago) (1 children)

I run a service that gets attacked by AI bots, and while PoW isn't the only way to do things, none of your suggestions work at all.

[–] possiblylinux127@lemmy.zip 1 points 17 hours ago* (last edited 17 hours ago)

I think Anubis is born out of desperation.

[–] Cethin@lemmy.zip 6 points 2 days ago (2 children)

The point is to make it too expensive for them, so they leave you alone (or, ideally, totally die but that's a long way off). They're making a choice to harvest data on your site. Make them choose not to. It saves energy in the long run.

[–] Jumuta@sh.itjust.works 12 points 2 days ago* (last edited 2 days ago) (1 children)

either you have the service with anubis or you have no service at all

unlike pyramid coins, anubis serves a purpose

[–] sxan@midwest.social 7 points 2 days ago (3 children)

It still uses Proof-of-Work, in which the coal being burned is only to prove that you burned the coal.

[–] possiblylinux127@lemmy.zip 6 points 2 days ago (1 children)

Everything uses energy

Do you have any measurements on power usage? It seems very minor.

[–] sxan@midwest.social 2 points 1 day ago (4 children)

Everything a computer does uses power. The issue is the same very valid criticism of (most) cryptocurrencies: the design objective is only to use power. That's the very definition of "proof of work." You usually don't care what the work is, only that it was done. An appropriate metaphor: for "reasons", I want to know that you moved a pile of rocks from one place to another, and back again. I have some way of proving this - a video camera watching you, a proof of a factorization that I can easily verify, something - and in return, I give you something: Monopoly money, or access to a website. But moving the rocks is literally just a way I can be certain that you've burned a number of calories.

I don't even care if you go get a ~~GPU~~ tractor and move the rocks with that. You've still burned the calories, by burning oil. The rocks being moved has no value, except that I've rewarded you for burning the calories.

That's proof of work. Whether the reward is fake internet points, some invented digital currency, or access to web content, you're still being rewarded for making your CPU burn calories to calculate a result that has no intrinsic informational value in itself.

The cost is at scale. For a single person, say it's a fraction of a watt-hour. Negligible. But for scrapers, all of those fractions add up to real electricity-bill impacts. However - and this is the crux - it's always at scale, even without scrapers, because every visitor contributes to the total, global PoW cost of that one website's use of this software. The cost isn't noticeable to individuals, but it is being incurred; it's unavoidable, by design.

If there's no cost in the aggregate of 10,000 individual browsers performing this PoW, then it's not going to cost scrapers, either. The cost has to be significant enough to deter bots; and if it's enough to be too expensive for bots, it's equally significant for the global aggregate; it's just spread out across a lot of people.
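To put rough, purely illustrative numbers on that (every figure here is an assumption, not a measurement):

5 W of extra CPU draw × 2 s per challenge ≈ 10 J per visitor

10 J × 10,000 visitors per day ≈ 100 kJ ≈ 0.03 kWh per day for one site

Tiny for any single site, which is exactly the point being argued: the cost only shows up in the aggregate, across every visitor and every deployment.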

But the electricity is still being used, and heat is still being generated, and it's yet another straw on the environmental camel's back.

It's intentionally wasteful, and as such, it's a terrible design.

[–] possiblylinux127@lemmy.zip 2 points 1 day ago

It doesn't need to be anywhere near as resource-intensive as a cryptocurrency, since it isn't being used for security. The goal is not to stop bots altogether; the goal is to slow down the crawlers enough that the server hosting the service doesn't get pegged. The bots went from being respectful of server operators to hitting pages millions of times a second. This is made much worse by the fact that git hosting services like Forgejo expose many links, many of which trigger the server to do computations.

The idea behind Anubis is that a regular user really only has to do the PoW once, since they aren't browsing to millions of pages, while a crawler ends up doing tons of proofs of work, which bogs down its crawling rate. PoW also has the advantage of requiring the server to hold minimal state; if you try to enforce a time delay instead, the server has to track all of that.
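One way to get that statelessness, as a sketch (an assumed approach for illustration, not necessarily how Anubis derives its challenges):

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"time"
)

// Derive each client's challenge seed from a server secret, the client's
// address, and a coarse time bucket. The server stores nothing per client:
// it can always re-derive the seed to verify, and the seed rotates on its
// own when the time bucket rolls over.
func challengeSeed(secret []byte, clientIP string) string {
	bucket := time.Now().UTC().Format("2006-01-02") // rotates daily
	mac := hmac.New(sha256.New, secret)
	mac.Write([]byte(clientIP + "|" + bucket))
	return hex.EncodeToString(mac.Sum(nil))
}

func main() {
	secret := []byte("server-side secret, never sent to clients")
	fmt.Println(challengeSeed(secret, "203.0.113.7"))
}
```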

It is also important to realize that Anubis is an act of desperation. Many projects did not want to implement it, but they had no choice since their servers were getting wrecked by bots. The only other option would be Cloudflare, which is much worse.

[–] k0e3@lemmy.ca 8 points 3 days ago* (last edited 2 days ago)

I did but it told me I'm a bot :(

Edit: Yay, it worked.

[–] artyom@piefed.social 18 points 3 days ago (1 children)

What is GTK? Grand Theft Kraftdinner?

[–] subarctictundra@lemmy.world 48 points 3 days ago (2 children)

Quite a substantial step towards being able to use Linux apps on Android phones.

[–] someacnt@sh.itjust.works 3 points 2 days ago

Oh, are we getting Year of android desktop?!

[–] artyom@piefed.social 10 points 3 days ago

Ohhhh I see

