suicidaleggroll

joined 2 days ago
[–] suicidaleggroll@lemm.ee 2 points 7 hours ago* (last edited 7 hours ago) (1 children)

But in a grammatical sense it's the opposite. In a sentence, a comma is a short pause, while a period is a hard stop. That means it makes far more sense for the comma to be the thousands separator and the period to be the stop between integer and fraction.

[–] suicidaleggroll@lemm.ee 3 points 18 hours ago (1 children)

I hoped it would be better, but all in all I thought it was enjoyable

[–] suicidaleggroll@lemm.ee 11 points 1 day ago

Standard street performance is around 1-2 deg of negative camber; an experienced eye can tell when looking at the car from the outside, but it's not super obvious. Aggressive track camber is around 3-4 deg; that's a bit more obvious to the naked eye, but still looks fairly normal. The cars you're talking about, with 10+ deg of camber where the outside of the tire isn't even touching the pavement, are just the owners making their car handle like shit and burn through tires every 1000 miles because they think it looks cool.

[–] suicidaleggroll@lemm.ee 3 points 1 day ago* (last edited 1 day ago) (2 children)

Sure, it's a bit hack-and-slash, but not too bad. Honestly the dockcheck portion is already pretty complete, I'm not sure what all you could add to improve it. The custom plugin I'm using does nothing more than dump the array of container names with available updates to a comma-separated list in a file. In addition to that I also have a wrapper for dockcheck which does two things:

  1. dockcheck plugins only run when there's at least one container with an available update, so the wrapper is used to handle the case where there are none.
  2. Some containers aren't handled by dockcheck because they use their own management system; two examples are bitwarden and mailcow. The wrapper script can be modified as needed to handle those as well, but that has to be done one-off, since there's no general-purpose way to check for updates on containers that insist on doing things their own custom way.
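As a sketch of what such a one-off addition to the wrapper could look like — the check function here is a pure placeholder, not a real bitwarden or mailcow command:

```shell
#!/bin/bash
# Hypothetical example: appending a self-managed container to the
# update list built by the dockcheck plugin. check_mailcow_update is
# a stand-in; substitute whatever real check fits your setup.

check_mailcow_update() {
  # placeholder: pretend an update is available
  return 0
}

list="postgres, nginx"   # what the dockcheck plugin wrote
if check_mailcow_update; then
  list="${list}, mailcow"
fi
echo "$list"
```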

Basically there are 5 steps to the setup:

  1. Enable Prometheus metrics from Docker (this is only needed to get the running/stopped counts; if those aren't needed, it can be skipped). To do that, add the following to /etc/docker/daemon.json (create it if necessary) and restart Docker:

```json
{
  "metrics-addr": "127.0.0.1:9323"
}
```

Once running, you should be able to run `curl http://localhost:9323/metrics` and see a dump of Prometheus metrics.
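The lines the Python script below keys on look roughly like this (the counts will of course differ per host):

```
engine_daemon_container_states_containers{state="paused"} 0
engine_daemon_container_states_containers{state="running"} 12
engine_daemon_container_states_containers{state="stopped"} 3
```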

  2. Clone dockcheck and create a custom plugin for it at dockcheck/notify.sh:

```shell
send_notification() {
  Updates=("$@")
  UpdToString=$(printf ", %s" "${Updates[@]}")
  UpdToString=${UpdToString:2}

  File=updatelist_local.txt

  # quote to preserve the list exactly as built
  echo -n "$UpdToString" > "$File"
}
```
  3. Create a wrapper for dockcheck:

```shell
#!/bin/bash

# run from the script's own directory
cd "$(dirname "$0")"

./dockcheck/dockcheck.sh -mni

if [[ -f updatelist_local.txt ]]; then
  mv updatelist_local.txt updatelist.txt
else
  echo -n "None" > updatelist.txt
fi
```

At this point you should be able to run your script, and at the end you'll have the file "updatelist.txt", which will contain either a comma-separated list of all containers with available updates, or "None" if there are none. Add this script to cron to run on whatever cadence you want; I use 4 hours.
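For example, a crontab entry for a 4-hour cadence might look like this (the path is hypothetical; point it at wherever you put the wrapper):

```
# run the dockcheck wrapper every 4 hours
0 */4 * * * /path/to/dockcheck-wrapper.sh
```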

  4. The main Python script:

```python
#!/usr/bin/python3

from flask import Flask, jsonify

import os
import time
import requests

app = Flask(__name__)

# Listen addresses for docker metrics
dockerurls = ['http://127.0.0.1:9323/metrics']

# Other dockerstats servers
staturls = []

# File containing list of pending updates
updatefile = '/path/to/updatelist.txt'

@app.route('/metrics', methods=['GET'])
def get_tasks():
  running = 0
  stopped = 0
  updates = ""

  for url in dockerurls:
      response = requests.get(url, timeout=5)

      if (response.status_code == 200):
        for line in response.text.split("\n"):
          if 'engine_daemon_container_states_containers{state="running"}' in line:
            running += int(line.split()[1])
          if 'engine_daemon_container_states_containers{state="paused"}' in line:
            stopped += int(line.split()[1])
          if 'engine_daemon_container_states_containers{state="stopped"}' in line:
            stopped += int(line.split()[1])

  for url in staturls:
      response = requests.get(url, timeout=5)

      if (response.status_code == 200):
        apidata = response.json()
        running += int(apidata['results']['running'])
        stopped += int(apidata['results']['stopped'])
        if (apidata['results']['updates'] != "None"):
          updates += ", " + apidata['results']['updates']

  if (os.path.isfile(updatefile)):
    st = os.stat(updatefile)
    age = (time.time() - st.st_mtime)
    if (age < 86400):
      with open(updatefile, "r") as f:
        temp = f.readline().strip()
      if (temp != "None"):
        updates += ", " + temp
    else:
      updates += ", Error"
  else:
    updates += ", Error"

  if not updates:
    updates = "None"
  else:
    updates = updates[2:]

  status = {
    'running': running,
    'stopped': stopped,
    'updates': updates
  }
  return jsonify({'results': status})

if __name__ == '__main__':
  app.run(host='0.0.0.0')
```

The neat thing about this program is that it's nestable: if you run steps 1-4 independently on all of your Docker servers (assuming you have more than one), you can pick one of the machines to be the "master" and update the "staturls" variable to point to the others, allowing it to collect the data from other copies of itself into its own output. If the output of this program only needs to be accessed from localhost, you can change the host variable in app.run to 127.0.0.1 to lock it down.

Once this is running, you should be able to run `curl http://localhost:5000/metrics` and see the running and stopped container counts and available updates for the current machine and any other machines you've added to "staturls". You can then turn this program into a service, or launch it @reboot in cron or in /etc/rc.local, whatever fits your management style for starting it on boot.

Note that it verifies the age of the updatelist.txt file before using it; if the file is more than a day old, something is likely wrong with the dockcheck wrapper script or similar, and rather than using stale output the REST API will report "Error" to let you know.
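Based on the script above, the response has this shape (the values here are made up for illustration):

```json
{
  "results": {
    "running": 24,
    "stopped": 1,
    "updates": "nextcloud, postgres"
  }
}
```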

  5. Finally, the Homepage custom API widget to pull the data into the dashboard:

```yaml
        widget:
          type: customapi
          url: http://localhost:5000/metrics
          refreshInterval: 2000
          display: list
          mappings:
            - field:
                results: running
              label: Running
              format: number
            - field:
                results: stopped
              label: Stopped
              format: number
            - field:
                results: updates
              label: Updates
```
[–] suicidaleggroll@lemm.ee 5 points 1 day ago

Personally, I just have a couple of cheap CyberPower UPSs for my servers. I know I know, but I'm waiting for them to get old and die before I replace them with something better. My modem, router, and primary WiFi AP are on a custom LiFePO4-based UPS that I designed and built, because I felt like it. It'll keep them running for around 10 hours, long past everything else in the house has shut down.

[–] suicidaleggroll@lemm.ee 3 points 1 day ago

> Dual booting is not a great long-term plan because it’s updates are known to delete grub

That problem is overblown. I've been dual-booting Windows and Linux for around 20 years now, I think I've had that happen...once? Over a decade ago? And to fix it you just use a Linux live USB to boot back in and repair grub. People bring it up every time dual-booting is mentioned as if it's the end of the world, but in reality it's a very rare problem and is easy to fix if it happens.
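For reference, the repair from a live USB is typically along these lines on a BIOS/MBR system (device names and paths are examples only; adjust for your layout, and EFI systems differ slightly):

```
# from the live environment: mount the installed root and chroot in
sudo mount /dev/sda2 /mnt
sudo mount --bind /dev /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys /mnt/sys
sudo chroot /mnt

# inside the chroot: reinstall grub and regenerate its config
grub-install /dev/sda
update-grub
```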

[–] suicidaleggroll@lemm.ee 2 points 1 day ago

Anything on a separate disk can be simply remounted after reinstalling the OS. It doesn't have to be a NAS, DAS, RAID enclosure, or anything else that's external to the machine unless you want it to be. Actually it looks like that Beelink only supports a single NVMe disk and doesn't have SATA, so I guess it does have to be external to the machine, but for different reasons than you're alluding to.

[–] suicidaleggroll@lemm.ee 3 points 1 day ago

I'd like to know the same. I really like the RP2040 and use it often, looking to move to the RP2350 but the GPIO issue is holding me back.

[–] suicidaleggroll@lemm.ee 15 points 1 day ago

This is their attempt to get around that pesky 1st amendment. Make criticism of the king a "mental disorder", and then you can lock them up involuntarily "for their own protection".

[–] suicidaleggroll@lemm.ee 4 points 1 day ago* (last edited 1 day ago) (5 children)

This is a great tool, thanks for the continued support.

Personally, I don't actually use dockcheck to perform updates, I only use it for its update check functionality, along with a custom plugin which, in cooperation with a Python script of mine, serves a REST API that lists all containers on all of my systems with available updates. That then gets pulled into Homepage using their custom API function to make something like this: https://imgur.com/a/tAaJ6xf

So at a glance I can see any containers that have updates available, then I can hop into Dockge to actually apply them on my own schedule.