Exactly. The assumption (known as the inductive hypothesis) is completely fine by itself and doesn't represent circular reasoning. The issue in the "proof" arises in the logic that comes after it: the step assumes you can form two different overlapping sets by removing a different horse from the full set of horses, which fails when n=1, since the two resulting sets each contain a single, distinct horse and don't overlap at all.
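To make the failure concrete, here's a small sketch (the horse names are made up, and `overlap` is just an illustrative helper, not part of any real proof framework):

```python
# The flawed step: given n+1 horses, form two size-n sets by dropping the
# last horse and the first horse, then apply the inductive hypothesis to each.
# The argument needs the two sets to share a horse to chain the colors together.
def overlap(horses):
    set_a = horses[:-1]  # drop the last horse
    set_b = horses[1:]   # drop the first horse
    return [h for h in set_a if h in set_b]

# For 3 horses the sets share a horse, so "same color" chains across both sets:
print(overlap(["h1", "h2", "h3"]))  # ['h2']

# But in the n=1 -> n=2 step there are only 2 horses, the overlap is empty,
# and nothing links the colors of the two singleton sets:
print(overlap(["h1", "h2"]))  # []
```

So every step from n=2 upward is actually valid; the whole chain collapses only because that single base step is broken.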
I haven't used the XPS 13 personally, but my experience, and that of my friends, with the XPS lineup is that despite their build quality, they're quite prone to failure. On my 15, the keyboard failed multiple times, as did one of the fans and eventually one of the Thunderbolt ports, all within a span of four years.
They're beautiful machines that really should be high quality, but in practice, for whatever reason, they haven't lasted for me. On the plus side, Dell does at least publish service manuals, and many parts are user-replaceable (on the 15 you can easily swap the fans, RAM, and SSDs, and with some work you can replace the top deck, display, and SD reader).
I'm fairly certain blockchain GPUs have very different requirements than those used for ML, let alone LLMs. In particular, they don't need anywhere near as much VRAM, generally don't require floating-point math, and don't need features like tensor cores. Those "blockchain GPUs" likely didn't turn into ML GPUs.
ML has been around for a long time. People have been using GPUs in ML since AlexNet in 2012, not just after blockchain hype started to die down.
I think what they meant by that is "is this different wrt antitrust compared to Intel and x86?"
Intel both owns the x86 ISA and designs processors for it, though the situation is more favorable in that AMD owns x86-64 and obviously also designs their own processors.
I would say that in comparison to the standards used for top ML conferences, the paper is relatively light on the details. But nonetheless some folks have been able to reimplement portions of their techniques.
ML in general has a reproducibility crisis. Lots of papers are extremely hard to reproduce, even when they're open source, since the optimization process is partly random (batch ordering, augmentations, nondeterminism in GPUs, etc.), and unfortunately, even with seeding, the randomness is not guaranteed to be consistent across platforms.
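As a toy illustration of one source of that nondeterminism: floating-point addition isn't associative, so a parallel reduction that happens to combine partial sums in a different order (as GPUs routinely do) can produce a different result even with identical seeds and identical data. A stdlib-only sketch:

```python
# Floating-point addition is not associative, so the order in which partial
# sums are combined (which varies across GPU reductions) changes the result.
vals = [1e16, 1.0, -1e16]

a = (vals[0] + vals[1]) + vals[2]  # the 1.0 is absorbed into 1e16, then cancelled -> 0.0
b = (vals[0] + vals[2]) + vals[1]  # the big terms cancel first, the 1.0 survives -> 1.0

print(a, b)  # 0.0 1.0
```

Scale that up to millions of gradient accumulations per step and tiny differences like this compound over an entire training run.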
No, the server is on the github account linked above as well. The repo is here.
Signal, however, doesn't federate and doesn't generally support third-party clients.
I used to do something like this before Signal became a thing. We used to use OTR via the Pidgin OTR plugin to send encrypted messages over Google Hangouts. Funnily enough, I'm pretty sure Pidgin supports Discord, so you could use the exact same setup to achieve what you described.
It was pretty funny to check the official Hangouts web client and see nonsensical text being sent.
The sidereal telescope mount one seems to be right (approx 1 rotation per day).
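For reference, the rate works out like this (a quick stdlib-only calculation; the sidereal day length is the standard ~23h 56m 4s figure):

```python
# A sidereal day (one rotation relative to the stars) is about 23h 56m 4s,
# so a tracking mount turns slightly faster than once per 24-hour solar day.
sidereal_day_s = 23 * 3600 + 56 * 60 + 4.0905    # ~86164.09 s
rate_arcsec_per_s = 360 * 3600 / sidereal_day_s  # degrees -> arcseconds
print(round(rate_arcsec_per_s, 2))  # 15.04
```

That ~15.04"/s is the familiar "sidereal rate" that tracking mounts advertise.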
This was also one of my concerns with the hype surrounding low-cost SLS printers like Micronics, especially if they weren't super well designed. The powder is incredibly dangerous to inhale, so I wouldn't want a home hobbyist buying that type of machine without realizing how harmful it could be. My understanding is that even commercial powder-bed machines like HP's MJF and Formlabs' Fuse need substantial ventilation (HEPA filters, full-room ventilation, etc.) to be operated safely.
Metal is of course even worse. It has all the same respiratory hazards (the fine particles will likely cause all sorts of long-term lung damage) but it also presents a massive fire and explosion risk.
I can't see these technologies making it into the home hobbyist sphere anytime soon as a result, unfortunately.
One trick for removing walking noise in post is to use a tool like StarNet++ to separate the image into a stars-only layer and a starless (nebula) layer, then build a mask from the nebula, invert it so only the background is selected, and desaturate that region.
Like others have said, if possible, try dithering in the future. It'll minimize walking noise in the first place. It's pretty easy to configure in most imaging software, though it typically requires autoguiding.
TBH the paper is a bit light on the details, at least compared to the standards of top ML conferences. A lot of DeepSeek's innovations on the engineering front aren't super well documented (at least well enough that I could confidently reproduce them) in their papers.
Yep, this is the exact issue. This problem comes up frequently in a first discrete math or introduction-to-proofs course at university, as an example of how subtle mistakes can arise in induction.