I’m lucky enough to be able to budget for things I want. If it’s in the budget, no justification is required. If it’s in the budget but expensive, then I just have to figure out if I want it more than the other things I want (or will want) that I won’t be able to afford as a result.
hedgehog
I hate how much I agree with you in principle and how ugly it looks in practice. With doubled periods, at least - different marks don’t trigger that same reaction. For example, a question mark inside, followed by a period or comma outside feels right.
It’s not grammatically incorrect to end a sentence with a preposition. It’s a common misconception that it’s a rule, basically because one guy argued in favor of it back in the 1600s and it picked up some support in formal writing in the 1700s. But it’s never been a broad rule, and even in formal contexts it’s not a rule in any current, reputable style or usage guide (so far as I know, at least).
Some more info on the topic: https://www.merriam-webster.com/grammar/prepositions-ending-a-sentence-with
Glaring doesn't imply a negative meaning. In this case it's used to mean "obvious".
Unless you’re suggesting that “glaring” means “obviously staring” (it doesn’t - that would be “glaringly staring”) this doesn’t make any sense.
“[He’s] glaring at [direct object]” is an example of a sentence that uses the present participle form of the verb “glare,” which explicitly communicates anger or fierceness.
If you’re not convinced, read on.
—————
The verb form that takes an object is:
Glare (verb with object): to express with a glare. “They glared their anger at each other.”
The noun form the above definition references is:
Glare (noun): a fiercely or angrily piercing stare.
“Glaring” can be an adjective and one of those definitions does mean “obvious” or “conspicuous,” but the use of that form of the word doesn’t make sense in her sentence. Think about a comparable sentence like “The undercover operative is conspicuous at the bar,” where the bar is the location. (Even then, most people wouldn’t use “glaring” in that sentence, as “conspicuous” or “obvious” are much less ambiguous; the operative could be staring piercingly or angrily at the bar rather than being glaring while being at the bar.) Another example that makes a bit more sense is “The effect of the invasive plants is glaring at the park.”
But for that interpretation to be valid here, you’d have to:
- believe that the dude is trying to hide/blend in, or otherwise explain how he - not what he’s doing, but the dude himself - is conspicuous
- believe that the woman’s referring to her own ass as a location
- assume that she isn’t commenting on how the guy is looking at her ass, even though the joke depends on giving him something different to look at
That’s a bit of a stretch.
This is what I would try first. It looks like 1337 is the exposed port, per https://github.com/nightscout/cgm-remote-monitor/blob/master/Dockerfile
x-logging:
  &default-logging
  options:
    max-size: '10m'
    max-file: '5'
  driver: json-file

services:
  mongo:
    image: mongo:4.4
    volumes:
      - ${NS_MONGO_DATA_DIR:-./mongo-data}:/data/db:cached
    logging: *default-logging

  nightscout:
    image: nightscout/cgm-remote-monitor:latest
    container_name: nightscout
    restart: always
    depends_on:
      - mongo
    logging: *default-logging
    ports:
      - 1337:1337
    environment:
      ### Variables for the container
      NODE_ENV: production
      TZ: [removed]

      ### Overridden variables for Docker Compose setup
      # The `nightscout` service can use HTTP, because we use `nginx` to serve the HTTPS
      # and manage TLS certificates
      INSECURE_USE_HTTP: 'true'

      # For all other settings, please refer to the Environment section of the README

      ### Required variables
      # MONGO_CONNECTION - The connection string for your Mongo database.
      # Something like mongodb://sally:sallypass@ds099999.mongolab.com:99999/nightscout
      # The default connects to the `mongo` included in this docker-compose file.
      # If you change it, you probably also want to comment out the entire `mongo` service block
      # and `depends_on` block above.
      MONGO_CONNECTION: mongodb://mongo:27017/nightscout

      # API_SECRET - A secret passphrase that must be at least 12 characters long.
      API_SECRET: [removed]

      ### Features
      # ENABLE - Used to enable optional features, expects a space delimited list, such as: careportal rawbg iob
      # See https://github.com/nightscout/cgm-remote-monitor#plugins for details
      ENABLE: careportal rawbg iob

      # AUTH_DEFAULT_ROLES (readable) - possible values readable, denied, or any valid role name.
      # When readable, anyone can view Nightscout without a token. Setting it to denied will require
      # a token from every visit, using status-only will enable api-secret based login.
      AUTH_DEFAULT_ROLES: denied

      # For all other settings, please refer to the Environment section of the README
      # https://github.com/nightscout/cgm-remote-monitor#environment
To run it with Nginx instead of Traefik, you need to figure out what port Nightscout’s web server runs on, then expose that port, e.g.,
services:
  nightscout:
    ports:
      - 3000:3000
You can remove the labels, since those are only used by Traefik, as well as the Traefik service itself.
Then just point Nginx to that port (e.g., 3000) on your local machine.
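For reference, here’s a minimal sketch of what the Nginx side might look like. Assumptions on my part: Nightscout is published on port 3000 as in the example above, Nginx runs on the same machine, and nightscout.example.com plus the cert paths are placeholders you’d swap for your own domain and certificates.

server {
    listen 443 ssl;
    server_name nightscout.example.com;  # placeholder - use your own domain

    # placeholder cert paths - swap in your own (e.g., from certbot)
    ssl_certificate     /etc/ssl/certs/nightscout.crt;
    ssl_certificate_key /etc/ssl/private/nightscout.key;

    location / {
        # forward everything to the port the Nightscout container publishes
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Nightscout pushes live updates over websockets (socket.io, if I remember
        # right), so allow connection upgrades
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}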
—-
Traefik has to know the port, too, but it will auto-detect the port that a local Docker service is running on. It looks like your config is relying on that feature, as I don’t see the label that explicitly specifies the port.
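(If you ever need to override the auto-detection - say the container exposes more than one port - the label below is the one I mean. Just a sketch, assuming Traefik v2 and a Traefik service name of nightscout; adjust the name to match your setup.)

  nightscout:
    labels:
      - traefik.enable=true
      # Tell Traefik explicitly which container port to route to
      # (1337 is what Nightscout listens on, per the Dockerfile linked above)
      - traefik.http.services.nightscout.loadbalancer.server.port=1337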
JustWatch is still useful if you want to act like you watched it legitimately, e.g., if a coworker asks where they can watch it. Even if your coworker also pirates, they might not have an account on your private tracker, Usenet, etc.
I may be wrong, as I haven’t actually torrented anything substantial since Demonoid was still a thing, but it all feels less accessible than it used to be.
It’s not “dark green,” that’s for sure.
There’s a difference between an answer from ChatGPT and an answer from ChatGPT that’s been reviewed by a person, particularly if that person is knowledgeable about the topic. ChatGPT isn’t deterministic, so if I go and ask ChatGPT the same thing, there’s no guarantee I’ll get a similar answer at all.
The problem for me is that I have no way of knowing whether the person posting the ChatGPT response is or isn’t an expert, or whether they actually reviewed the output. However, that’s true of people in general - just replace “reviewing the output” with “not trolling” - so the effort to assess the utility of a comment is pretty similar.
Ars points out that these findings contradict those of other experiments and then goes on to postulate as to why. I clicked on the link to the other experiment:
when data is combined across three experiments and 4,867 developers, our analysis reveals a 26.08% increase (SE: 10.3%) in completed tasks among developers using the AI tool
By comparison, this experiment considered 16 developers - about 0.3% as many developers as the experiments its findings contradict. Fortunately, the authors don’t claim their findings are broadly applicable. They even have a table that reads:
We do not provide evidence that | Clarification
--- | ---
AI systems do not currently speed up many or most software developers | We do not claim that our developers or repositories represent a majority or plurality of software development work
AI systems do not speed up individuals or groups in domains other than software development | We only study software development
AI systems in the near future will not speed up developers in our exact setting | Progress is difficult to predict, and there has been substantial AI progress over the past five years [2]
There are not ways of using existing AI systems more effectively to achieve positive speedup in our exact setting | Cursor does not sample many tokens from LLMs, it may not use optimal prompting/scaffolding, and domain/repository-specific training/finetuning/few-shot learning could yield positive speedup
That said, the study has been an interesting read so far. I highly recommend reading it directly rather than just the news posts about it. Check out their own blog post: https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/
I personally find the psychological effect - the devs thought they were 20% faster even afterward - to be pretty interesting, as it suggests that even if more time overall is spent, use of AI could reduce cognitive load and potentially side effects like burnout.
I’d like to see much larger-scale studies set up like this, as well as studies of other real-world situations. For example, how does this affect the amount of time it takes 10,000 different developers to onboard onto an unfamiliar repository?
You can’t just consider the cheese! You gotta look up all the ingredients!
Consensus: hold the tomato! Otherwise, if there’s no seasoning, everything else is acceptable in small amounts.