Jehan Alvani
  • Joining metrics by labels in Prometheus

    I’m using node_exporter to generate host metrics for several of the nodes in my lab. I was reworking one of my thermal graphs today, with the goal of getting good historical temps of my Pis and my Ubuntu-based homebuilt NAS into a single readable graph. node_exporter has two relevant time series:

    1. node_thermal_zone_temp, which was exported by all of the Raspberries Pi
    2. node_hwmon_temp_celsius, which was exported by the NAS and the Raspberries Pi 4. The rPi3 did not export this metric.

    I liked node_hwmon_temp_celsius a lot, and opted to spend some time focusing on getting that to fit as well as I could. It’s an [instant vector][instant_vector], and it returned the following with my config:

    node_hwmon_temp_celsius{chip="0000:00:01_1_0000:01:00_0", class="nas server", environment="storage", hostname="20-size", instance="10.0.1.217:9100", job="node-exporter", sensor="temp1"}    29.85
    node_hwmon_temp_celsius{chip="0000:00:01_1_0000:01:00_0", class="nas server", environment="storage", hostname="20-size", instance="10.0.1.217:9100", job="node-exporter", sensor="temp2"}    29.85
    node_hwmon_temp_celsius{chip="0000:00:01_1_0000:01:00_0", class="nas server", environment="storage", hostname="20-size", instance="10.0.1.217:9100", job="node-exporter", sensor="temp3"}    32.85
    node_hwmon_temp_celsius{chip="0000:20:00_0_0000:21:00_0", class="nas server", environment="storage", hostname="20-size", instance="10.0.1.217:9100", job="node-exporter", sensor="temp1"}    52.85
    node_hwmon_temp_celsius{chip="0000:20:00_0_0000:21:00_0", class="nas server", environment="storage", hostname="20-size", instance="10.0.1.217:9100", job="node-exporter", sensor="temp2"}    52.85
    node_hwmon_temp_celsius{chip="0000:20:00_0_0000:21:00_0", class="nas server", environment="storage", hostname="20-size", instance="10.0.1.217:9100", job="node-exporter", sensor="temp3"}    58.85
    node_hwmon_temp_celsius{chip="pci0000:00_0000:00:18_3", class="nas server", environment="storage", hostname="20-size", instance="10.0.1.217:9100", job="node-exporter", sensor="temp1"}      37.75
    node_hwmon_temp_celsius{chip="pci0000:00_0000:00:18_3", class="nas server", environment="storage", hostname="20-size", instance="10.0.1.217:9100", job="node-exporter", sensor="temp2"}      37.75
    node_hwmon_temp_celsius{chip="pci0000:00_0000:00:18_3", class="nas server", environment="storage", hostname="20-size", instance="10.0.1.217:9100", job="node-exporter", sensor="temp3"}      27
    node_hwmon_temp_celsius{chip="thermal_thermal_zone0", class="raspberry pi", environment="cluster", hostname="cluster1", instance="10.0.1.201:9100", job="node-exporter", sensor="temp0"}     37.485
    node_hwmon_temp_celsius{chip="thermal_thermal_zone0", class="raspberry pi", environment="cluster", hostname="cluster1", instance="10.0.1.201:9100", job="node-exporter", sensor="temp1"}     37.972
    node_hwmon_temp_celsius{chip="thermal_thermal_zone0", class="raspberry pi", environment="cluster", hostname="cluster2", instance="10.0.1.252:9100", job="node-exporter", sensor="temp0"}     32.128
    node_hwmon_temp_celsius{chip="thermal_thermal_zone0", class="raspberry pi", environment="cluster", hostname="cluster2", instance="10.0.1.252:9100", job="node-exporter", sensor="temp1"}     32.128
    

    The class, environment, and hostname labels are added when scraped.

    The chip label looked interesting, but it appears to be an identifier rather than a name, and I’m terrible at mentally mapping hard-to-read identifiers to something meaningful. Digging around a little more, I found node_hwmon_chip_names, which, when queried, returned

    node_hwmon_chip_names{chip="0000:00:01_1_0000:01:00_0", chip_name="nvme", class="nas server", environment="storage", hostname="20-size", instance="10.0.1.217:9100", job="node-exporter"}                    1
    node_hwmon_chip_names{chip="0000:20:00_0_0000:21:00_0", chip_name="nvme", class="nas server", environment="storage", hostname="20-size", instance="10.0.1.217:9100", job="node-exporter"}                   1
    node_hwmon_chip_names{chip="pci0000:00_0000:00:18_3", chip_name="k10temp", class="nas server", environment="storage", hostname="20-size", instance="10.0.1.217:9100", job="node-exporter"}                  1
    node_hwmon_chip_names{chip="platform_rpi_poe_fan_0", chip_name="rpipoefan", class="raspberry pi", environment="cluster", hostname="cluster0", instance="10.0.1.42:9100", job="node-exporter"}               1
    node_hwmon_chip_names{chip="platform_rpi_poe_fan_0", chip_name="rpipoefan", class="raspberry pi", environment="cluster", hostname="cluster1", instance="10.0.1.201:9100", job="node-exporter"}              1
    node_hwmon_chip_names{chip="platform_rpi_poe_fan_0", chip_name="rpipoefan", class="raspberry pi", environment="cluster", hostname="cluster2", instance="10.0.1.252:9100", job="node-exporter"}              1
    node_hwmon_chip_names{chip="power_supply_hidpp_battery_0", chip_name="hidpp_battery_0", class="nas server", environment="storage", hostname="20-size", instance="10.0.1.217:9100", job="node-exporter"}     1
    node_hwmon_chip_names{chip="soc:firmware_raspberrypi_hwmon", chip_name="rpi_volt", class="raspberry pi", environment="cluster", hostname="cluster0", instance="10.0.1.42:9100", job="node-exporter"}        1
    node_hwmon_chip_names{chip="soc:firmware_raspberrypi_hwmon", chip_name="rpi_volt", class="raspberry pi", environment="cluster", hostname="cluster1", instance="10.0.1.201:9100", job="node-exporter"}       1
    node_hwmon_chip_names{chip="soc:firmware_raspberrypi_hwmon", chip_name="rpi_volt", class="raspberry pi", environment="cluster", hostname="cluster2", instance="10.0.1.252:9100", job="node-exporter"}       1
    node_hwmon_chip_names{chip="thermal_thermal_zone0", chip_name="cpu_thermal", class="raspberry pi", environment="cluster", hostname="cluster1", instance="10.0.1.201:9100", job="node-exporter"}             1
    node_hwmon_chip_names{chip="thermal_thermal_zone0", chip_name="cpu_thermal", class="raspberry pi", environment="cluster", hostname="cluster2", instance="10.0.1.252:9100", job="node-exporter"}             1
    

    You might notice that the chip label matches in both vectors, which made me think I could cross-reference one against the other. This was way more hacky than I expected.

    Prometheus only allows label joining via the group_left and group_right modifiers, which are sparsely documented. Fortunately, I came across these two posts by Brian Brazil, which got me started. This answer on Stack Overflow helped me get the rest of the way there.


    I’ll start with my working query and work backwards.

    avg (node_hwmon_temp_celsius) by (chip,hostname,instance,class,environment,job) * ignoring(chip_name) group_left(chip_name) avg (node_hwmon_chip_names) by (chip,chip_name,hostname,instance,class,environment,job)
    

    We’ll break the query above into two parts separated by the operator:

    • the Left side: avg (node_hwmon_temp_celsius) by (chip,hostname,instance,class,environment,job)
    • the Right side: avg (node_hwmon_chip_names) by (chip,chip_name,hostname,instance,class,environment,job)
    • the Operator: * ignoring(chip_name) group_left(chip_name)

    Let’s go through each.

    The left side averages together all series that share the same chip label (and the other by() labels). In this case, the output above showed that some chips had multiple series, distinguished by temp1…tempN sensor labels. I don’t really care about those individually, so I averaged them. Averaging a single series just returns that series’ value, so that’s a good solution.

    The right side returns several series whose labels match chips to chip_names, plus the other requisite labels. The value of each of these series is 1, effectively saying “this chip exists.”

    The operator is where it gets both interesting and hacky.

    1. Arithmetic operations are a type of vector match, which takes series with identical label sets and performs the operation on their values. I used * (multiplication) because the right-side value is always 1, so multiplying leaves my left-side values unchanged.
    2. The ignoring() keyword lets us list labels to be ignored when looking for identical label sets. In this case I used ignoring(chip_name) because that label only exists on the right side.
    3. The grouping modifiers (group_left() and group_right()) allow many-to-one or one-to-many matching. The group_left() modifier will also take any labels specified and pass them along with the result of the operation. Since I used group_left(chip_name), chip_name is returned in the labels of the output.

    Here’s what makes this hacky: as far as I can tell, this is the only way to take matching labels and use them in reference to one another.
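    The matching mechanics described above can be sketched in plain Python. This is illustrative only — it’s the many-to-one logic the modifiers express, not how Prometheus is implemented — and it’s trimmed to a single series pair from the output above:

```python
# Rough sketch of "* ignoring(chip_name) group_left(chip_name)" semantics.
# Series are modeled as frozensets of (label, value) pairs mapped to samples.

def join_group_left(left, right, ignoring, extra_labels):
    """Match series whose labels agree after dropping `ignoring` labels,
    multiply their values, and copy `extra_labels` from the right-hand
    series into the result (what group_left() does)."""
    # Index the right side by its label set minus the ignored labels.
    index = {}
    for labels, value in right.items():
        key = frozenset(p for p in labels if p[0] not in ignoring)
        index[key] = (dict(labels), value)
    out = {}
    for labels, value in left.items():
        key = frozenset(p for p in labels if p[0] not in ignoring)
        if key not in index:
            continue  # unmatched left-side series are dropped
        right_labels, right_value = index[key]
        merged = dict(labels)
        merged.update({k: right_labels[k] for k in extra_labels})
        out[frozenset(merged.items())] = value * right_value
    return out

temps = {  # stand-in for the averaged node_hwmon_temp_celsius series
    frozenset({("chip", "thermal_thermal_zone0"), ("hostname", "cluster2")}): 32.128,
}
names = {  # stand-in for node_hwmon_chip_names; the value is always 1
    frozenset({("chip", "thermal_thermal_zone0"), ("hostname", "cluster2"),
               ("chip_name", "cpu_thermal")}): 1,
}
joined = join_group_left(temps, names, ignoring={"chip_name"},
                         extra_labels=["chip_name"])
# joined carries chip_name alongside the unchanged temperature value
```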

    The query returns1

    {chip="0000:00:01_1_0000:01:00_0",chip_name="nvme",class="nas server",hostname="20-size",instance="10.0.1.217:9100",job="node-exporter"}         28.85
    {chip="0000:20:00_0_0000:21:00_0",chip_name="nvme",class="nas server",hostname="20-size",instance="10.0.1.217:9100",job="node-exporter"}            54.85
    {chip="pci0000:00_0000:00:18_3",chip_name="k10temp",class="nas server",hostname="20-size",instance="10.0.1.217:9100",job="node-exporter"}           30.166666666666668
    {chip="thermal_thermal_zone0",chip_name="cpu_thermal",class="raspberry pi",hostname="cluster1",instance="10.0.1.201:9100",job="node-exporter"}      36.998000000000005
    {chip="thermal_thermal_zone0",chip_name="cpu_thermal",class="raspberry pi",hostname="cluster2",instance="10.0.1.252:9100",job="node-exporter"}      32.128
    

    Pretty sweet.
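    As an aside, the same join can be written by listing the shared labels with on() rather than discarding the extra one with ignoring(). This is an equivalent sketch of the query (assuming the shared labels are exactly the ones listed), not a different technique:

```promql
avg by (chip, hostname, instance, class, environment, job) (node_hwmon_temp_celsius)
  * on (chip, hostname, instance, class, environment, job) group_left(chip_name)
avg by (chip, chip_name, hostname, instance, class, environment, job) (node_hwmon_chip_names)
```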


    1. You’ll notice the series for chip="platform_rpi_poe_fan_0" and for hostname="cluster0" were dropped because there are no series with matching labels in the left-side results. [return]
    3 February 2021
  • Had an ongoing issue with Mac clients constantly popping “Server Disconnected” errors when they’ve mounted an NFS volume. This may be specific to NFSv4; I couldn’t find a way to test.

    Regardless, mounting the volume with mount -o nolocks,nosuid [server]:/path/to/export /local/mountpoint worked. I added this to /etc/nfs.conf as a workaround. I wonder if I could configure this for a specific NFS server, rather than globally.

    #
    # nfs.conf: the NFS configuration file
    #
    nfs.client.mount.options=nolocks,nosuid
    
    29 January 2021
  • Just under 18 days until pitchers and catchers report.

    28 January 2021
  • Passing an nvidia GPU to a container launched via Ansible

    I recently built an addition to my lab that is intended to mostly replace my Synology NAS1, and give a better home to my Plex container than my 2018 Mac mini. The computer is running Ubuntu 20.04 and has an nvidia GeForce GTX 1060. I chose the 1060 after referring to this tool, which gives nice estimates of the Plex-specific capabilities enabled by the card. I wanted something that was available secondhand, had hardware h.265 support, and could handle a fair number of streams. The 1060 ticked the right boxes.

    After rsyncing my media and volumes, I spent some time last night working on the Ansible role for launching the plex container while passing the GPU to the container. I spent a bunch of time in Ansible’s documentation and with this guide by Samuel Kadolph.

    
    - name: "Deploy Plex container"
      docker_container:
          name: plex
          hostname: plex
          image: plexinc/pms-docker:plexpass
          restart_policy: unless-stopped
          state: started
          ports:
            - 32400:32400
            - 32400:32400/udp
            - 3005:3005
            - 8324:8324
            - 32469:32469
            - 32469:32469/udp
            - 1900:1900
            - 1900:1900/udp
            - 32410:32410
            - 32410:32410/udp
            - 32412:32412
            - 32412:32412/udp
            - 32413:32413
            - 32413:32413/udp
            - 32414:32414
            - 32414:32414/udp
          mounts:
            - source: /snoqualmie/media
              target: /media
              read_only: no
              type: bind
            - source: /seatac/plex/config
              target: /config
              read_only: no
              type: bind
            - source: /seatac/plex/transcode
              target: /transcode
              read_only: no
              type: bind
            - source: /seatac/plex/backups
              target: /data/backups
              read_only: no
              type: bind
            - source: /seatac/plex/certs
              target: /data/certs
              read_only: no
              type: bind
          env:
              TZ: "America/Los_Angeles"
              PUID: "1001"
              PGID: "997"
              PLEX_CLAIM: "[claim key]"
              ADVERTISE_IP: "[public URL]"
          device_requests:
            - device_ids: 0
              driver: nvidia
              capabilities:
                - gpu
                - compute
                - utility
          comparisons:
              env: strict

    This is the part relevant to passing the GPU to the container, and the (lacking) documentation can be found [in the device_requests section, here](https://docs.ansible.com/ansible/latest/collections/community/general/docker_container_module.html#parameter-device_requests).

            device_requests: 
              - device_ids: 0
                driver: nvidia
                capabilities: 
                  - gpu
                  - compute
                  - utility
    

    device_ids is the ID of the GPU, obtained from nvidia-smi -L. The capabilities are spelled out in nvidia’s repo, but all doesn’t seem to work.
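    To sanity-check the passthrough after the container is up, a task along these lines should work. This is a sketch: it assumes the community.docker collection is installed and the container name plex from the task above.

```yaml
    - name: "Check that the GPU is visible inside the Plex container"
      community.docker.docker_container_exec:
          container: plex
          command: nvidia-smi -L
      register: gpu_check

    - name: "Show detected GPUs"
      debug:
          var: gpu_check.stdout_lines
```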

    Hope this helps the next poor soul who decides this is a rabbit worth chasing.


    1. I’ll keep Surveillance Station on my Syno for the time being. [return]
    27 January 2021
  • I’m at that point in my beard growth where my face looks poorly-rendered and slightly out of focus.

    19 January 2021
  • As far as gorgeous mise en place go, it’s pretty hard to beat yakisoba.

    9 January 2021
  • Lab Cluster Hardware

    In my last post about my home lab, I mentioned I’d post again about the hardware. The majority of my lab is composed of 3 Raspberries Pi with PoE Hats and a TP-Link 5-port Gigabit PoE switch, all in a GeekPi Cluster Case. Thanks to the PoE hats, I only need to power the switch, and the switch powers the three nodes. I have an extended pass-through 40-pin header on the topmost Pi (the 3B+, currently), which allows the goofy “RGB” fan to be powered; that fan actually made the temps on the cluster much more consistent.

    Cluster v2

    Cluster In the Cabinet

    The topmost Pi is a 3B+, and the bottom two nodes are Raspberry Pi 4s (4 GB models). They’re super competent little nodes, and I’m really pleased with the performance I get from them.

    Here’s a graph of 24 hours of the containers’ CPU utilization across all nodes. You can see the only thing that’s making any of the Pis sweat is NZBGet, as I imagine the process of unpacking files is a bit CPU-intensive.

    Cluster 24 hour CPU

    Here’s my “instant” dashboard, which shows point-in-time health of the cluster. I’ll dig into this more at some point in the future.

    Cluster instant DB

    The Plex container is running on my 2018 Mac mini, which I’m not currently monitoring in Grafana. That’s a to-do.

    1 December 2020
  • The Death/Rebirth scene in Princess Mononoke is almost too beautiful for words. I’ve seen it so many times and it never loses its impact.

    30 November 2020
  • Fustercluck - Reworked my Raspberry Pi Cluster

    I’ve spent the past couple months’ forced down-time1 reworking my Raspberry Pi cluster that forms a big portion of my home lab. I set out with the goal of better understanding Prometheus, Grafana, and node-exporter to monitor the hardware. I also needed the Grafana and Prometheus data to be persistent if I moved the container among the nodes. And I needed to deploy and make adjustments via Ansible for consistency and versioning. I’ve put the roles and playbooks on GitHub.

    This wasn’t too hard to achieve; I did the same thing that I’d done with my Plex libraries: created appropriate volumes and exposed them via NFS from my Synology. Synology generally makes this pretty easy, although the lack of detailed controls did occasionally give me a headache that was a challenge to resolve.

    Here’s a diagram of the NFS Mounts per-container.

    NFS Mount Diagram

    The biggest change from my previous configuration: previously, I had separate NFS exports for Downloads/Movies/Series. Sonarr helpfully provided the following explainer in their Docker section.

    Volumes and Paths

    There are two common problems with Docker volumes: Paths that differ between the Sonarr and download client container and paths that prevent fast moves and hard links.

    The first is a problem because the download client will report a download’s path as /torrents/My.Series.S01E01/, but in the Sonarr container that might be at /downloads/My.Series.S01E01/. The second is a performance issue and causes problems for seeding torrents. Both problems can be solved with well planned, consistent paths.

    Most Docker images suggest paths like /tv and /downloads. This causes slow moves and doesn’t allow hard links because they are considered two different file systems inside the container. Some also recommend paths for the download client container that are different from the Sonarr container, like /torrents.

    The best solution is to use a single, common volume inside the containers, such as /data. Your TV shows would be in /data/TV, torrents in /data/downloads/torrents and/or usenet downloads in /data/downloads/usenet.

    As a result, I created /media, which is defined as a named Docker volume, and mounted by the Plex container (on the MacMini), Sonarr, Radarr, and NZBGet2.

    I’ll post again soon with a couple of cool dashboards I’ve built, and the actual hardware I’m using for the cluster.


    1. Forced because of COVID-19, and also because I had some foot surgery in early September, and I’ve been much less mobile since then. Fortunately, I’m healing up well, and I’ll be back to “normal” after a few more months of Physical Therapy. [return]
    2. NZBGet’s files are actually in /media/nzb_downloads, but I left it as /media/downloads for the sake of clarity in the post. [return]
    27 November 2020
  • Getting Apple Emoji on the Raspberry Pi

    I talked about getting the HyperPixel 4.0 to work in my last post, but I also wanted an excuse to show it off. I’m building a Grafana-based statusboard for the services I run on my lab, and I wanted some character. AFAIK, Raspbian doesn’t include Emoji fonts, but you can add some with Google’s noto-emoji.

    I wanted Apple emoji, so I zipped the .ttc, copied it to my Pi, and extracted it into /usr/share/fonts/. This would be easy to automate, since Apple regularly adds characters with updates.

    8 November 2020
  • Pimoroni HyperPixel 4.0 Touch Workaround

    I’m using the gorgeous Pimoroni Hyperpixel 4.0 on a Raspberry Pi 4 for a small project. The display is crazy beautiful, and comes in a touch and non-touch version.

    I ran into issues getting touch working, and opened an issue. After a little poking and a helpful commenter pointing me to related issues, I found a workaround.

    For posterity, I followed the directions above (running Pimoroni’s install script, choosing option 2 for Rectangular with Experimental Pi 4 Touch Fix), then edited /boot/config.txt as follows. Modified lines are commented as such.

    # Enable DRM VC4 V3D driver on top of the dispmanx display stack
    #dtoverlay=vc4-fkms-v3d # Modified
    max_framebuffers=2
    
    [all]
    #dtoverlay=vc4-fkms-v3d
    
    dtoverlay=hyperpixel4
    gpio=0-25=a2
    enable_dpi_lcd=1
    dpi_group=2
    dpi_mode=87
    dpi_output_format=0x7f216
    dpi_timings=480 0 10 16 59 800 0 15 113 15 0 0 0 60 0 32000000 6
    display_rotate=1 # Modified
    

    Very pleased with this, though I’ve heard it won’t work with low-level access (i.e. RetroPie setups). YMMV.

    8 November 2020
  • Providing Data Persistence on Prometheus Containers with NFS on Synology

    Made some progress on one of my distraction projects over the past couple days. I’d been working on creating an Ansible role to deploy a Prometheus container with persistent data backed by NFS on my rPi cluster. Getting the NFS mount to work with Prometheus was a challenge. Relevant stanzas from the role’s main.yml:

    
          - name: "Creates named docker volume for prometheus persistent data" 
            docker_volume:
                volume_name: prometheus_persist
                state: present
                driver_options: 
                    type: nfs
                    o: "addr={{ nfs_server }},rw"
                    device: ":{{ prometheus_nfs_path }}"
                
                
          - name: "Deploy prometheus container"
            docker_container:
                name: prometheus
                hostname: prometheus
                image: prom/prometheus
                restart_policy: always
                state: started
                ports: 9090:9090
                volumes:
                  - "{{ prometheus_config_path }}:/etc/prometheus"
                mounts:
                  - source: prometheus_persist
                    target: /prometheus
                    read_only: no
                    type: volume        
                comparisons:        
                    env: strict
    
    

    However, I was getting permission denied on /prometheus when deploying the container. A redditor pointed me in the direction of the solution. Since NFS is provided by my Synology, I can’t set no_root_squash, but by mapping all users to admin in the share’s squashing settings, I could allow the container to set permissions appropriately. Progress!

    3 November 2020
  • I wish that iOS let me define a Home Screen other than my first as a “default” (the one that Springboard will switch to when I swipe up twice), allowing me another swipe-left Home Screen.

    23 October 2020
  • Bellinger expressed deep admiration for that home run.

    18 October 2020
  • Dawned on me today that I’ve been carrying Field Notes almost every day since October of 2011. In that time, I’ve only lost one and it came back to me. Here they are.

    7 October 2020
  • any millennial born after 1981 can’t cook… all they know is avocado toast, climate change, shrinking middle class, unaffordable healthcare, be in debt, eat hot chip & die

    4 October 2020
  • Oh hell guess I’ll have to sell the house

    4 October 2020
  • Still pouring one out for Transmit on iOS.

    1 October 2020
  • Amazon's Ring drone is creepy as hell

    Amazon announced the Ring drone, and it’s creepy as hell. Not just because the concept of an always-on video feed of the inside of your house is creepy (it is). Not just because the video created by the always-on camera is sent to remote servers and who-knows-what is done with it (also creepy). But also because of the super creepy history of the company who built the camera and is marketing it to you.

    Ring has privacy issues. This has been well-documented. In a 2019 letter to lawmakers, Ring copped to firing four employees for accessing customer video data outside of their normal responsibilities. The fact that it’s possible for this to happen shows a lack of concern for customer privacy.

    Also in 2019, the Intercept reported that Ring gave their Ukraine-based R&D team an S3 bucket with recordings of every Ring video ever. The report claims that there was no policy or system to restrict how the R&D team used the video.

    How is it a good idea to give this company access to a remotely-controllable remotely-accessible video device inside my home?

    All of this from the same company that will helpfully deliver packages inside your locked home.

    And from the same company that has “partnered” with Police departments all over the country to provide access to the privately-owned cameras customers put on or in their houses, as well as a convenient map of all the cameras installed.

    Some folks have the poor stance “Oh if you’re not doing anything wrong, what do you have to hide?” Which is irrelevant; it’s not Amazon’s business what goes on inside my home despite their repeated attempts to make it their business.

    In addition to the lax security and customer privacy, this is literally inviting the police state into your house. It’s dystopian.

    It’s awfully disappointing that so little of the coverage of the announcement of this device touched on Amazon and Ring’s anti-customer and anti-privacy history. There’s no reason people should trust them with this level of access.

    Update (October 28th): Removed a whole bunch of erroneous references to Nest, a Google company. Got my wires crossed.

    25 September 2020
  • Aubrey is walking around reciting mantras:

    “No one can change my name”
    “No one can do what I can do”
    “Everyone isn’t the same as me”
    “No one can change my name”

    24 September 2020
  • An attractive and multi-functional workspace

    I’ve been fortunate enough to have a dedicated workspace in each of the houses and apartments I’ve lived in since 2006, when I bought my first house. Some have been better than others, and the current setup isn’t done1, but it’s pretty good and I’m happy with where it is, now. Since my last role was about 80% remote, I knew how important a good home setup was prior to the pandemic forcing many of us to think more about how to be productive from home.

    Each of the things was chosen because it helps me do more, or is pleasing while I’m at my desk. I won’t say that the desk is always this clean, but it’s never too far from it. It’s important that I want to be at my desk, and to achieve that, I need stuff that is both pleasing and functional.

    The space is where I do almost all of my nerding both for work and fun, and when the weather is too poor to get outside it needs to convert into a cycling training space.

    I have my 2019 Trek Domane SL5 Disc on a Wahoo Kickr from 2017. It connects to my PC via a USB attached ANT+ dongle, and the PC runs Zwift. I’ve done exactly 2 rides with this setup, but I wanted to get it going before I went down for a foot surgery, so I’d have it ready to go as soon as I’m clear to train again - about a month from now.

    I keep three computers at my desk permanently.

    • My work computer - currently a 2017 13” MacBook Pro
    • My personal 2018 MacMini, which hosts a few Docker containers and where I do a lot of personal nerdery.
    • My PC, seen on the right, which is mostly for gaming and Zwift.

    The speakers are Joey Roth, and the desktop organization is Ugmonk’s Gather System.

    I use two Tripp-Lite B004-DPIUA4-K KVMs to allow me flexibility for moving my displays between the computers. I covered the far-too-bright indicators with black Fastcap stickers, so they don’t distract me while I work.

    I’m currently using a WASD Code 87-key with Cherry MX Browns and O-rings. The WASD keys are bespoke Mechanicallee walnut keycaps. I parted with my WhiteFox True Fox some time ago, and I had a Kira with Hako True switches that I loved typing on, but I was never comfortable with the layout, so I sold it on as well. Nice thing about mechanical keyboards is that they retain their value.

    Finally, in my Ikea drawer/computer stand and storage drawers, I put in a couple drawers of pick-and-pluck foam to better organize my gadgets. This one is for photos and video, and another is full of Raspberry Pis and components, with anti-static foam. It’s the old Alton Brown-ism “Organization will set you free.”


    1. I have some plans for improved storage and more bookshelves, both of which are desperately needed. [return]
    23 September 2020
  • BBC Video: Jogging While Black - the calculations you have to make

    Privilege is being able to exercise without being afraid of getting killed by the police.

    21 September 2020
  • An NYT Photo Essay on Montana-based Tom Morgan Rodsmiths, makers of custom fly fishing rods. Gorgeous.

    21 September 2020
  • If you’re wondering why your Safari toolbar icons are suddenly tinted with a color, it’s apparently an undocumented change that indicates the extension has access to the current tab. Hope a UI for this is coming.

    20 September 2020
  • David Schnurr’s US COVID-19 dashboard is by far the best I’ve seen yet in terms of easily-grokable relevant data at a glance.

    19 September 2020

Follow @jalvani on Micro.blog.