Air Date:
Latest update:
Date: Thu, 12 Sep 2024 09:46:34 -0400
From: Douglas McIlroy <douglas.mcilroy@dartmouth.edu>
Newsgroups: gmane.comp.printing.groff.general
Subject: On computerese
Message-ID: <CAKH6PiWeb62+PwDt_S7GgDjCBL_TS0mzB-qQd9H3_V_=1qn2cQ@mail.gmail.com>
There it festered, right in the middle of Branden's otherwise high literary
style: "use cases". I've despaired over the term ever since it wormed its
way into computer folks' vocabulary. How does a "use case" differ from a
"use"? Or, what's the use of "use case"?
And while I'm despairing, "concatenate" rolls on, undeterred by Research's
campaign for concision. We determinedly excised the word from the seventh
edition. The man page header for cat(1) read "catenate and print". Posix
added content on both ends, making "concatenate and print files". Gnu
puffed it up further to "concatenate files and print on the standard
output".
It's not as if the seventh edition was storming the gates of English.
According to the OED, "catenate" and "concatenate" are synonyms of long
standing that entered the language almost simultaneously. Why pick the
flabby one over its brisk--and more mnemonic--rival?
Tags: quote
Authors: ag
Air Date:
Latest update:
The idea of fetching some Reddit post titles to print them randomly
in the terminal dates back to at least 2016. Taking the feed
from the /r/showerthoughts subreddit, for example, one could place the
following script into one's .bash_login or elsewhere:
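(The original snippet isn't reproduced here; below is a sketch of such a
3-liner, assuming the top.json endpoint and the query parameters -- treat
both as illustrative, not as the exact original.)

```shell
# -f makes curl fail silently on HTTP 429, leaving the pipeline with no
# input, so nothing gets printed; w3m decodes HTML entities in titles
curl -sf 'https://www.reddit.com/r/showerthoughts/top.json?t=week&limit=100' |
  jq -r '.data.children[]?.data.title' 2>/dev/null |
  sort -R | head -1 | w3m -dump -T text/html
```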
It's a slightly naïve 3-liner that tries to protect itself from the 429
responses that Reddit frequently returns, in which case the script
gracefully outputs nothing. w3m is required to convert &amp; & friends
into their corresponding symbols.
The 1st enhancement here would be to cache the JSON. The enterprise
approach is to set up a cron job that saves the data into a temporary
file &, if that operation succeeds, renames it into a well known (by
the script that parses the JSON) path.
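In shell, that write-then-rename step might look like this (the cache
directory and subreddit are illustrative):

```shell
#!/bin/sh
# write into a temp file first and rename only on success, so the parsing
# script never observes a partially written JSON (rename is atomic
# within a single filesystem)
dir=/tmp/reddit-wisdom
mkdir -p "$dir"
tmp=$(mktemp "$dir"/json.XXXXXX)
if curl -sf 'https://www.reddit.com/r/showerthoughts/top.json' > "$tmp"; then
    mv "$tmp" "$dir"/showerthoughts.json
else
    rm -f "$tmp"
fi
```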
The mickey mouse way is to check the mtime of the previously saved
JSON & delete the file once it becomes too old.
The 2nd enhancement would be to fetch multiple subreddits & merge them
together.
A relatively small makefile can handle all this:
$ cat reddit-wisdom
#!/usr/bin/make -sf
r := showerthoughts CrazyIdeas oneliners
update := 600
wits := /tmp/reddit-wisdom/all.txt
age := $(shell expr `date +%s` - `date -r $(wits) +%s 2>/dev/null || echo 0`)
$(shell [ $(age) -gt $(update) ] && rm -f /tmp/reddit-wisdom/*)
all: $(wits)
sort -R < $< | head -1 | w3m -dump -T text/html
$(wits): $(patsubst %, /tmp/reddit-wisdom/%.json, $(r))
jq -r '.data?.children[]?.data | "\"\(.title)\" -- \(.author)" | gsub("\\n"; " ")' $^ > $@
/tmp/reddit-wisdom/%.json:
mkdir -p $(dir $@)
echo Fetching r/$*
curl -sf 'https://www.reddit.com/r/$*/top.json?sort=top&t=week&limit=100' > $@
.DELETE_ON_ERROR:
It considers remote data stale if it's older than 600 seconds. I think
it's safe to increase the value to at least 48 hours.
Note that the default subreddits might include NSFW content.
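The jq filter from the makefile can be sanity-checked on a stub JSON, no
network required (the one-post payload below is made up):

```shell
# a fake single-post payload; the filter quotes the title, appends the
# author & flattens embedded newlines
printf '{"data":{"children":[{"data":{"title":"A\\ntitle","author":"nobody"}}]}}' |
  jq -r '.data?.children[]?.data | "\"\(.title)\" -- \(.author)" | gsub("\\n"; " ")'
# "A title" -- nobody
```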
$ ./reddit-wisdom
Fetching r/showerthoughts
Fetching r/CrazyIdeas
Fetching r/oneliners
"The shovel was a ground breaking invention." -- EndersGame_Reviewer
$ ./reddit-wisdom
"Name your daughters Elle and Noelle" -- EmpireStrikes1st
Tags: ойті
Authors: ag
Air Date:
Latest update:
I expected to use this "new" eslint as follows:
- have a user-wide eslint installation with a personal default
configuration;
- given #1, to be able to run eslint in any project (mine or not)
to check it against my personal rules;
- if a project has a node_modules/.bin/eslint script (implying there
is a config file in the project's root directory), running
node_modules/.bin/eslint should ignore the global config and use
the local one;
- Flycheck in Emacs should automatically execute either
node_modules/.bin/eslint (if it exists) or the global version
otherwise.
Naïvely installing eslint via 'npm i -g' won't do much good since any
plugins must still be installed locally.
Hence, I created a directory ~/lib/dotfiles/eslint solely for a
"global", user-wide installation & configuration:
|-- eslint*
|-- eslint.config.mjs
|-- node_modules/
| |-- .bin/
| | `-- eslint -> ../eslint/bin/eslint.js*
| `-- eslint/
`-- package.json
where eslint is a shell script, symlinked from a directory in PATH:
$ stat -c%N `which eslint`
'/home/alex/bin/eslint' -> '/home/alex/lib/dotfiles/eslint/eslint'
$ cat eslint
#!/bin/sh
__dir__=$(dirname "$(readlink -f "$0")")
"$__dir__"/node_modules/.bin/eslint -c "$__dir__"/eslint.config.mjs "$@"
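The dirname/readlink pair resolves the symlink chain back to the real
script location; a throwaway link demonstrates the effect (all paths here
are illustrative):

```shell
# build: $dir/real/tool plus a symlink $dir/link pointing at it
dir=$(mktemp -d)
mkdir -p "$dir"/real
printf '#!/bin/sh\n' > "$dir"/real/tool
ln -s "$dir"/real/tool "$dir"/link
# readlink -f follows the link, dirname keeps the containing directory
dirname "$(readlink -f "$dir"/link)"   # prints .../real
```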
and eslint.config.mjs has common rules for .js files and files
without extensions:
import globals from 'globals'
import js from '@eslint/js'
import react from 'eslint-plugin-react'
export default [
js.configs.recommended,
{
rules: {
"no-unused-vars": [ "warn", { "argsIgnorePattern": "^_" } ],
…
},
languageOptions: {
globals: {
...globals.browser,
...globals.node,
…
}
}
},
{
files: ["**/!(*.*)"],
ignores: ["**/{Makefile,Rakefile,Gemfile,LICENSE,README}"],
},
…
]
If you don't care about any custom configurations that arrive with
almost any JS project, you may stop here. Otherwise, we need to
instruct Emacs where to search for the eslint executable.
(defun my--npm-exec-path ()
  (let ((npm-root (string-trim
                   (shell-command-to-string "npm root 2>/dev/null"))))
    (when (not (string= "" npm-root))
      (make-local-variable 'exec-path)
      (add-to-list 'exec-path (file-name-concat npm-root ".bin")))))
(add-hook 'js-mode-hook 'my--npm-exec-path)
(add-hook 'js-mode-hook 'flycheck-mode)
This tells the editor to invoke the my--npm-exec-path function every
time Emacs opens a .js file. The function runs npm root to guess the
proper path of the node_modules directory and prepends Emacs' internal
exec-path list with the path where the eslint script might be
installed. Iff such a script is found by Flycheck, our global
~/lib/dotfiles/eslint/eslint.config.mjs is ignored.
Tags: ойті
Authors: ag
Air Date:
Latest update:
If downloading from a server is slow, how would you prove to a devops
guy that you're experiencing a slowdown? There are a couple of
possibilities, the worst of which would be sending a video. What if
you send a speed graph using data from curl?
Every second, curl prints the following to stderr:
fprintf(tool_stderr,
"\r"
"%-3s " /* percent downloaded */
"%-3s " /* percent uploaded */
"%s " /* Dled */
"%s " /* Uled */
"%5" CURL_FORMAT_CURL_OFF_T " " /* Xfers */
"%5" CURL_FORMAT_CURL_OFF_T " " /* Live */
" %s " /* Total time */
"%s " /* Current time */
"%s " /* Time left */
"%s " /* Speed */
"%5s" /* final newline */,
…
Therefore, by replacing \r with \n we can send a download log:
$ curl http://example.com/1.zip -o 1.zip 2>&1 | tr \\r \\n
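The substitution is easy to verify offline: curl separates progress
updates with carriage returns, and tr turns them into separate lines:

```shell
# simulate curl's \r-separated progress updates
printf 'update1\rupdate2\rupdate3\n' | tr \\r \\n
# update1
# update2
# update3
```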
To draw a graph with gnuplot, we can use Current time and Speed
columns. Gnuplot understands time as input data, but I don't know how
to persuade it to interpret values like 100k or 200M, thus we need to
convert them into 'bytes'. This is a cute little problem for code
golf, but amusingly, it was already solved in coreutils more than 12
years ago via numfmt(1).
$ echo 1M and 10M | numfmt --from iec --field 3
1M and 10485760
(macOS & FreeBSD both have a coreutils package, where the utility
executable is prefixed with 'g'.)
#!/usr/bin/env -S stdbuf -o0 bash
set -e -o pipefail
numfmt=`type -p gnumfmt numfmt;:`; test "${numfmt:?}"
cat <<E
set xdata time
set timefmt "%H:%M:%S"
set xlabel "Time, MM:SS or HH:MM:SS"
set format y "%.0s%cB"
set ylabel "Speed, Unit/second" offset -1,0
set grid
plot "-" using 1:2 with lines title ""
E
curl "$@" -fL -o /dev/null 2>&1 | tr \\r \\n | awk '
/[0-9.][kMGTP]?$/ {
time = index($10, ":") == 0 ? $11 : $10
if (time != "--:--:--") print time, $NF
}' | tr k K | $numfmt --from iec --field 2
Usage:
$ ./curlbench http://example.com/1.zip | gnuplot -p
Tags: ойті
Authors: ag