Air Date:
Latest update:
When the authors of a new terminal emulator (TE) decide which
capabilities to support, they inevitably end up with a subset of
XTerm's features. Then, much like a non-mainstream web browser hiding
its true identity behind a fake User-Agent request header, the new
TE often claims to be a variant of xterm. This frustrates the
current XTerm maintainer & the maintainers of the terminfo database,
who insist that every TE should have its own entry in terminfo, and
that it's always better to esse quam videri. So far, TE authors have
largely ignored this prudent advice.
Sometimes, the ability to deceive programs that rely on the TERM
environment variable can be useful. Suppose you want to display an
infobox to a user via dialog(1), but you want it to use a monochrome
palette. The dialog program doesn't have a flag for this, but we can
trick it into thinking it's running in a terminal that doesn't support
colours:
$ TERM=vt100 dialog --infobox 'ну шо ти малá' 0 0
This works because the dialog(1) program uses the ncurses library,
which in turn relies on the terminfo database. The latter includes an
entry for the vt100 terminal, which, according to terminfo, is fairly
unsophisticated:
$ TERM=vt100 tput colors
-1
To get a full list of such simpletons, run:
$ find /usr/share/terminfo -type f |
xargs -n1 basename |
while read line; do [ "$(tput -T "$line" colors)" = -1 ] && echo "$line"; done
In general, CLI programs that use terminal features beyond simply
deleting the previous character fall into 2 categories--those that:
- consult terminfo for supported capabilities;
- include their own little database of compatible TEs.
Both rely on $TERM, hence, both can be easily led astray.
There is a 3rd way: ask the TE itself to report its
capabilities using escape sequences. This mechanism, called
XTGETTCAP, is slowly gaining popularity among TEs. Obviously, it
"works over ssh" & doesn't require terminfo to be installed on the
remote machine. XTGETTCAP was first introduced by XTerm and is now
supported by Kitty and iTerm2. With this mechanism, the value of
$TERM becomes less relevant:
$ echo $TERM
xterm-256color
$ TERM=vt100 ./XTGETTCAP TN
xterm-256color
$ ./XTGETTCAP colors
256
Alas, there is no widely known CLI utility named XTGETTCAP--in the
example above, it's just a ~small shell script. Even more unfortunate
is that doing a proper XTGETTCAP query & reading the response is a
tricky business. You need to:
1. Construct a query using an escape sequence & hexify the capability
   name, e.g., colors becomes 636f6c6f7273.
2. Switch the TE from canonical to raw mode.
3. Handle raw mode correctly. In this mode, the TE doesn't assemble
   characters into lines, so we can't rely on it to return everything
   up to one line at a time. Instead, we must either request a
   response of a known size (we don't know the size!) or read an
   unknown-length response with a timeout. If you royally screw this
   up, your read operation may block ∞.
4. Restore canonical mode.
5. Unpack the reply.
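Step 1 in isolation can be checked with nothing but POSIX od & tr
(the same trick the script below uses):

```shell
# hexify a capability name the way an XTGETTCAP query expects:
# each byte of "colors" becomes 2 hex digits
printf '%s' colors | od -A n -t x1 | tr -d ' \n'; echo
# prints 636f6c6f7273
```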
#!/bin/sh
set -e
eh() { echo "Error: $*" 1>&2; trap - 0; exit 1; }
text2hex() { od -A n -t x1 | tr -d ' \n'; }
hex2text() { sed 's/../0x& /g' | xargs printf '\\\\%03o\n' | xargs printf %b; }
stty_orig=`stty -g`
stty_restore() { stty "$stty_orig"; }
[ -n "$1" ] || eh Usage: XTGETTCAP capability
capability=`printf '%s' "$1" | text2hex`
trap stty_restore 0
stty -echo raw time 1 min 0
printf '\033P+q%s\033'\\ "$capability" > /dev/tty
buf=`dd status=none count=1`
stty_restore
[ -n "$buf" ] || eh unsupported terminal
[ "$VERBOSE" ] && printf %s "$buf" | xxd
buf=`printf %s "$buf" | sed 's/[^a-zA-Z0-9+=]//g'`
[ P1 = "${buf%+*}" ] || eh unknown capabilitah
printf %s "${buf#*=}" | hex2text | xargs
A couple of notes:
- the script doesn't work under screen/tmux, even if the underlying TE
  supports the XTGETTCAP mechanism;
- if you decide to fiddle with the time and min settings, think twice;
  consult the matrix in APUE, ch. 18;
- hex2text() may look silly, but it works even under FreeBSD (unless
  the source contains encoded null bytes) & doesn't rely on
  non-standard utilities like xxd;
- in principle, the dd call can be replaced with a simple cat.
$ ./XTGETTCAP TN
xterm-kitty
$ VERBOSE=1 ./XTGETTCAP colors
00000000: 1b50 312b 7236 3336 6636 6336 6637 3237 .P1+r636f6c6f727
00000010: 333d 3332 3335 3336 1b5c 3=323536.\
256
$ ./XTGETTCAP шо?
Error: unknown capabilitah
- Requires the allowTcapOps resource to be true. By default, it's off
  on all IBM distros due to security concerns (e.g., see
  CVE-2022-45063).
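For XTerm, the resource can be flipped in ~/.Xresources (a minimal
sketch; the file location & the xrdb reload step are the usual
defaults, adjust to taste):

```shell
# turn on XTGETTCAP support in XTerm via the allowTcapOps resource
printf 'XTerm*allowTcapOps: true\n' >> "$HOME/.Xresources"
# reload the X resource database & open a new xterm afterwards:
#   xrdb -merge ~/.Xresources
```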
Tags: ойті
Authors: ag
Air Date:
Latest update:
If Chrome befriends GPU drivers from the underlying OS, it displays
the status of video encoding/decoding on its internal chrome://gpu/
page. If the "Video Decode" row says "Hardware accelerated," somewhere at
the end of the page, there may be a list of specific codecs. Why
"may"? Because, at the time of writing, Chrome implements 5 video
codec types: AV1, H.264, H.265, VP8 & VP9.
In edge cases, when an old GPU proudly reports something like MPEG2
(but not a single codec from the list above), the "Video Decode" row
deceptively includes the word "Hardware," but the actual "Video
Acceleration Information" table contains no data.
On Linux, Chrome uses VAAPI for hardware video acceleration, meaning
it requires the "correct" Mesa DRI drivers to be installed. If Chrome
doesn't like the installed Mesa version, video encoding/decoding gets
demoted to "software only," even if the GPU supports every codec known
to mankind.
Chrome supports the WebCodecs API, which allows checking for hardware
acceleration of a particular codec via static methods like
VideoEncoder.isConfigSupported().
For example, if you paste the following snippet into a devtools
console, you get a pretty table of "Video Decode" status:
Promise.all([
['AV1', 'av01.0.08M.10'],
['H.264', 'avc1.640033'],
['H.265', 'hev1.1.6.L120.90'],
['VP8', 'vp8'],
['VP9', 'vp09.00.40.08']
].map( async v => ({
name: v[0],
codec: v[1],
support: (await VideoDecoder.isConfigSupported({
codec: v[1], hardwareAcceleration: "prefer-hardware",
})).supported
}))).then(console.table)
For a simple web page that draws a similar table, console.table()
certainly won't do, for it returns undefined, but writing a table rows
generator fits in
$ wc -l *html
47 index.html
lines.
The full example is here (works only in Chrome; Firefox returns
bogus data). I use it to check GPUs on el cheapo Android TV-boxes.
Tags: ойті
Authors: ag
Air Date:
Latest update:
Date: Thu, 12 Sep 2024 09:46:34 -0400
From: Douglas McIlroy <douglas.mcilroy@dartmouth.edu>
Newsgroups: gmane.comp.printing.groff.general
Subject: On computerese
Message-ID: <CAKH6PiWeb62+PwDt_S7GgDjCBL_TS0mzB-qQd9H3_V_=1qn2cQ@mail.gmail.com>
There it festered, right in the middle of Branden's otherwise high literary
style: "use cases". I've despaired over the term ever since it wormed its
way into computer folks' vocabulary. How does a "use case" differ from a
"use"? Or, what's the use of "use case"?
And while I'm despairing, "concatenate" rolls on, undeterred by Research's
campaign for concision. We determinedly excised the word from the seventh
edition. The man page header for cat(1) read "catenate and print". Posix
added content on both ends, making "concatenate and print files". Gnu
puffed it up further to "concatenate files and print on the standard
output".
It's not as if the seventh edition was storming the gates of English.
According to the OED, "catenate" and "concatenate" are synonyms of long
standing that entered the language almost simultaneously. Why pick the
flabby one over its brisk--and more mnemonic--rival?
Tags: quote
Authors: ag
Air Date:
Latest update:
The idea of fetching some Reddit post titles to print them randomly
in the terminal dates back to at least 2016. Taking the feed
from /r/showerthoughts subreddit, for example, one could place the
following script into his .bash_login or elsewhere:
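Something along these lines would do (a reconstruction, not the
original snippet; the exact URL parameters & the jq filter are
assumptions borrowed from the makefile version further below):

```shell
# print one random top title from r/showerthoughts; with curl -f the
# pipeline outputs nothing if Reddit replies 429 (or any other error)
curl -sf 'https://www.reddit.com/r/showerthoughts/top.json?t=week&limit=100' |
  jq -r '.data.children[].data | "\"\(.title)\" -- \(.author)"' |
  sort -R | head -1 | w3m -dump -T text/html
```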
It's a slightly naïve 3-liner that tries to protect itself from the
429 that Reddit frequently returns, in which case the script gracefully
outputs nothing. w3m is required to convert &amp; & friends into
their corresponding symbols.
The 1st enhancement here would be to cache the JSON. The enterprise
approach is to set up a cron job that saves the data into a temporary
file &, if that operation succeeds, renames it to a path well known to
the script that parses the JSON.
The mickey mouse way is to check the mtime of the previously saved
JSON & delete the file once it becomes too old.
The 2nd enhancement would be to fetch multiple subreddits & merge them
together.
A relatively small makefile can handle all this:
$ cat reddit-wisdom
#!/usr/bin/make -sf
r := showerthoughts CrazyIdeas oneliners
update := 600
wits := /tmp/reddit-wisdom/all.txt
age := $(shell expr `date +%s` - `date -r $(wits) +%s 2>/dev/null || echo 0`)
$(shell [ $(age) -gt $(update) ] && rm -f /tmp/reddit-wisdom/*)
all: $(wits)
sort -R < $< | head -1 | w3m -dump -T text/html
$(wits): $(patsubst %, /tmp/reddit-wisdom/%.json, $(r))
jq -r '.data?.children[]?.data | "\"\(.title)\" -- \(.author)" | gsub("\\n"; " ")' $^ > $@
/tmp/reddit-wisdom/%.json:
mkdir -p $(dir $@)
echo Fetching r/$*
curl -sf 'https://www.reddit.com/r/$*/top.json?sort=top&t=week&limit=100' > $@
.DELETE_ON_ERROR:
It considers remote data stale if it's older than 600 seconds. I think
it's safe to increase the value to at least 48 hours.
Note that the default subreddits might include NSFW content.
$ ./reddit-wisdom
Fetching r/showerthoughts
Fetching r/CrazyIdeas
Fetching r/oneliners
"The shovel was a ground breaking invention." -- EndersGame_Reviewer
$ ./reddit-wisdom
"Name your daughters Elle and Noelle" -- EmpireStrikes1st
Tags: ойті
Authors: ag