Hey, Siri, let’s start a fire.

Almost immediately after moving into our house, I wanted to get our gas fireplace in HomeKit. We’re heavily invested in HomeKit, and I really wanted the fireplace to (a) turn on and off via HomeKit and (b) have a switch that matched the other HomeKit switches in the house. However, the specifics of doing so were a bit challenging, and it took me an embarrassingly long time to figure out a pretty simple solution.

The Challenge

The fireplace is a typical gas fireplace with a switch that connects via low-voltage wiring to an AC-powered control unit that lives in an open space under the fireplace. When the switch is in the “on” position, the circuit is closed, and the control unit opens the gas valve and repeatedly fires the starter until the gas ignites. Super simple.

However, HomeKit and other “smart” switches require high-voltage AC to operate, and there’s no AC in the control switch’s gang box.

The Solution

I spent a year thinking about how to get AC to the gang box without cutting a bunch of drywall before I realized it wasn’t worth the effort. The obvious thing to do is to use a Lutron Caseta on/off switch, a relay, and a Lutron Pico remote. This meets my needs, since I have Caseta switches throughout the first floor.

Here’s my hardware list (affiliate links):

  1. 1 Lutron PD-6ANS Switch
  2. 1 Lutron Claro Single-Gang Wallplate
  3. 1 Lutron Non-Dimmer Pico Remote
  4. 1 Pico Remote Wall-Box Adapter
  5. 110v Coil, 10A relay
  6. Thermostat Wire
  7. 2-gang box
  8. AC Appliance Cord
  9. 18/24-gauge Male and Female Disconnects (these aren't on Amazon in quantities that aren't nuts; you can find them at your hardware store)

Here’s a diagram of how I wired the switch to the relay. I did all of this on my workbench so I could easily test the relay with my multimeter and tuck the wire nuts away as best I could. Most relays have a diagram printed on their housing showing which pins energize the coil and which contacts are switched. In my case, pins 2 and 7 energize the coil, and pins 1 and 3 or 8 and 6 are the switched pairs.


It took some testing to figure out which wire was switched.

The fireplace control connected to the low-voltage switch with 18-gauge thermostat wire using quick disconnects, so I put quick disconnects on the thermostat wires connected to the relay to make installation easier. The switch's black (hot) wire is connected to the black (hot) wire of the AC cord, and the switch's switched (red) wire is connected to the relay's coil, so when the switch is turned on, the coil is energized. The neutral wires of the relay, the switch, and the AC cord are all connected to complete the circuit. The low-voltage wires to the fireplace control are connected to two switched contacts of the relay. When the coil is energized, the contacts close and the loop is completed, just like flipping the old switch to the "on" position. The switch's blue wire isn't used in this case, and is just capped with a wire nut in the two-gang box.

I assembled all of this in a two-gang box, tested it on my workbench, then moved it under the fireplace. There's an unused AC outlet under the fireplace, which was a convenient place to plug in the power cord. I disconnected the low-voltage switch at the disconnects under the fireplace, and reconnected the control wires to the thermostat wire coming from the relay using the disconnects I added.

At this point I could toggle the switch in the gang box and, lo and behold, fire! I replaced the low-voltage switch with the Pico remote, added the switch and Pico remote to the Lutron app (which in turn adds them to HomeKit), and made some minor tweaks in HomeKit.

WhiteFox True Fox Keyboard

Finished Project

Nearly four years ago, I picked up a Nixie Clock kit with the plan to build it with my son. He wouldn't be born for another two months. To complete the Nixie Clock, I needed a soldering iron, and I wanted one that would be approachable and usable for a variety of projects, so I also picked up a Hakko FX-888D.

Shortly thereafter, my son was born, my time to work on projects evaporated, I took a new job, moved the family to Boston for a year, we found out we'd have another kiddo, moved to Seattle, my daughter was born, my time further evaporated, so on and so forth.

About 18 months ago, I jumped in on a drop on Massdrop for a WhiteFox keyboard. It delivered just about a year ago, and it has been waiting patiently since then.

I finally got a chance to break out the soldering station for the first time, and assemble the keyboard over the Thanksgiving break from work. The project was really fun, and I'm itching to do another one. My soldering skills improved dramatically simply through repetition, and it was a really fun learning experience[1].

Mid-assembly (on chamois!?)

The keyboard itself is much better than I expected. The gap between the right modifiers and the arrow keys is exactly where my fingers expect, and it makes things like ⌘+← just as easy as on my WASD Code 87-key, or on my MBP's keyboard for that matter.

Assembled WhiteFox

Add to that, Input Club has a really nice web-based configuration tool that makes key programming super easy. It means I can futz with the layout until I get it exactly the way I like it. For example, I just realized I'd likely prefer the ~ key to be just to the left of Esc, with all the other number keys shifted one to the right. That will be no problem at all.

Layout

Of course, this isn't without its caveats. The current firmware (both Latest and LTS) has bugs in the LED functions. Not a dealbreaker, but frustrating.

Oh, that Nixie clock is still waiting. It's almost time.


  1. As an aside, I took the assembly picture while I was working on a chamois for some reason. It didn’t dawn on me until I was reviewing the pictures that a chamois isn’t ESD-safe. Everything turned out just fine, but talk about a noob mistake.  ↩

eero and Split-Horizon DNS

Update: As of September 14, 2017, eero has updated all eero networks to enable hairpinning. That means the config I laid out isn't necessary anymore, but maybe this is still useful for some folks.

At this point, I'd guess most folks reading this are familiar with eero and similar distributed mesh home wifi products. For those that aren't, the tl;dr is that in many environments, a single router with a set of radios won't cover a whole home very well, and a distributed system with multiple radios working together is a better solution (read the linked Sweethome overview - it covers this better than I will). I'm a happy eero customer, and I'm not super interested in getting into why eero was a better fit for me than its competitors at the time I made the decision. As I said, overall I've been very happy.

However, I did have to get a little creative to solve a specific problem:

NAT/Egress Hairpinning

I have long run a few internal services on various web-servers running inside and outside my network.

Unfortunately, eero doesn't currently support a feature called hairpinning, which allows clients inside the network to egress the network and get a response from the public IP of the same network. As a result, I had to access these services by one name externally (e.g. [server].alvani.me) and another name internally ([server-hostname].local). This is frustrating since it breaks scripts and alerts that rely on consistent DNS names for services, and it means I need a second set of bookmarks to access services inside versus outside my network, plus the cognitive load of remembering which name to use for a given service. The whole point of DNS is to make this easy on us simple humans.

Split Horizon DNS to the Rescue

Split-horizon DNS is a pretty common implementation in the enterprise world. In general, it's overkill for home networks, since most home networks don't have many DNS needs. I really didn't want to, but eventually it became clear that, for the short term, it'd be a good idea to run my own internal DNS server that handled requests for the *.jehanalvani.com and *.alvani.me domains while forwarding all other requests to upstream DNS providers. I could run in this configuration until eero eventually added support for hairpinning.

I'm writing this to document how I configured split-horizon on a Synology with Synology's DNS Server software. There's no reason this couldn't be adapted fairly easily to BIND or whatever other DNS Server you might want to run.

I started by installing DNS Server on my DS415play.

Then, in DNS Server, I configured my forwarders.

Forwarders

Oh, if you leave the "Limit Source IP service" box checked, make sure you add your internal network CIDR to the list of permitted IPs. Otherwise you'll get "Forbidden" on your DNS requests.
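If you're adapting this to BIND rather than Synology's DNS Server package, the forwarders plus that source-IP limit translate to something roughly like this (a sketch only; the CIDR and upstream resolvers here are example values, not my actual config):

// /etc/bind/named.conf.options
acl "lan" { 10.0.1.0/24; 127.0.0.1; };    // your internal network; same idea as the permitted IP list

options {
    recursion yes;
    allow-query { "lan"; };
    allow-recursion { "lan"; };
    forwarders {
        4.2.2.2;    // example upstream resolvers
        8.8.8.8;
    };
};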

I added the zones I care about:

Zones

A Quick Aside About DNS Zones

If you're familiar with DNS, this is not a section you'll need to read, but I got some feedback that the concept of DNS Zones is confusing. For our purposes, DNS Zones are any domain(s) or subdomain(s) you want your nameserver to respond for. Of course, zones are much more powerful and can be used in a lot of ways. The bottom line is that Zones are a method of drawing a demarcation between the responsibilities of different nameservers.

Moving on:

Finally, inside the zones, I configured the subdomains and services I care about:

Zone config

This matches the public DNS configuration of the same services, just replacing the IP address with the internal IP address of the server running the service. In my case, they're all the same, because I'm using a reverse proxy to interpret the requests and send them to the appropriate servers.
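In zone-file terms, the records behind that screenshot boil down to something like this (a sketch using the same names that show up in the lookups below; your hostnames and IPs will differ):

; zone: jehanalvani.com (internal)
dynamic    IN    A        10.0.1.20    ; the Synology running the reverse proxy
syno       IN    CNAME    dynamic      ; mirrors the public record, but resolves internally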

Now, any client that sends requests to this nameserver should receive the internal record as an answer. I can test with a couple of simple nslookups from inside the network.

First, query a common public nameserver:

nslookup syno.jehanalvani.com 4.2.2.2

Which returns

jehan.alvani@MBP $ nslookup syno.jehanalvani.com 4.2.2.2
Server:        4.2.2.2
Address:    4.2.2.2#53

Non-authoritative answer:
syno.jehanalvani.com    canonical name = dynamic.jehanalvani.com.
dynamic.jehanalvani.com    canonical name = starman.synology.me.
Name:    starman.synology.me
Address: 50.35.122.71

Where 4.2.2.2 is the nameserver that was queried, and the results of the non-authoritative answer are exactly what I've configured for my public subdomain. (syno.jehanalvani.com is a CNAME to dynamic.jehanalvani.com, which is itself a CNAME to Synology's dynamic DNS service.) Inefficient? Maybe a little. But I don't have to remember anything other than "dynamic.jehanalvani.com" after initial setup.

And a query against my Synology's private IP returns the following:

`nslookup syno.jehanalvani.com 10.0.1.20`

jehan.alvani@MBP $ nslookup syno.jehanalvani.com 10.0.1.20
Server:        10.0.1.20
Address:    10.0.1.20#53

syno.jehanalvani.com    canonical name = dynamic.jehanalvani.com.
Name:    dynamic.jehanalvani.com
Address: 10.0.1.20

Boom. That's what I want to see.

DHCP configuration

The next step is to tell the clients inside the network to use this nameserver. If you're configuring this for your network, you might prefer not to set the DHCP-provided name server to your internal server, instead electing to give manual DNS settings to specific clients. I wanted to be able to use any device on the network to reach these URIs, so I opted to configure eero's DHCP server to direct all clients to my internal nameserver.

DHCP Config

Caveat

In a Reddit thread, someone asked if this configuration works with eero Plus. The short answer is: yes! The long answer is: yes, but with some qualifications.

eero hasn't published much about how eero Plus works. However, it seems to hijack outbound DNS requests, sending them to a nameserver eero controls. It's important to note that this only affects outbound DNS requests, so requests that stay inside your network still work, as long as there is a responder to accept them. In the configuration above, that means lookups for any records in the defined zones will succeed, but anything that needs to be forwarded to an upstream server will be hijacked by eero Plus.

This breaks services like Unlocator and Unblock-Us which rely on their own DNS servers receiving all requests, and interpreting specific requests. I had to disable eero Plus for the time being, since I make use of these services (as you can tell if you're familiar with the nameservers I configured as forwarders).

Is it worth it?

Obviously, that depends on your needs and your level of comfort. I have been happy with this setup for the year+ that I've been using it. However, as soon as eero adds support for NAT hairpinning, I'll happily pull it out. This does introduce a bit of fragility I don't care for, mostly in that it forces my Synology to be powered on for anyone on my network to be able to reach any internet resources.

Fallout 4 Comparison: Bunker Hill In Real Life v. In-Game

A few months ago, I took a new job that relocated my family and me from Baltimore to Boston. Getting settled into the new gig and getting settled into the new home routine have both contributed to my lack of time to write here.

We arrived just in time for the notorious Boston winter. We also arrived just in time for the release of Fallout 4, which takes place in a fictionalized version of Boston some 250 years in the future. The apartment we're renting is only a few hundred feet from the historic Bunker Hill Monument, which, in-game, has become a refuge for traders retired from traveling the wasteland of the post-apocalyptic Commonwealth.

This morning, the sun was out and I had a little time to take some pictures of Bunker Hill for a comparison with its in-game representation.

Micro 4/3 and the Olympus OM-D EM-1

I’ve been a photographer since high school. I’ve usually been a hobbyist, occasionally making some money, rarely making the bulk of my income from photography. I started with a Minolta STSi, eventually working up to a Nikon F5 before I made my move to digital with a D200. When that started to gather dust, I moved on to a few point-and-shoots, experimenting with not having an SLR: leaning on my phone for day-to-day shots, and a small camera when I thought I’d use it. In practice, that marked the period of my life when I took the absolute fewest pictures. That was a bummer.

I’ve been curious about the new (relatively speaking) compact mirrorless camera systems. I researched the two main systems, APS-C versus Micro Four Thirds (MFT). The more I read, the clearer it became that MFT is winning the battle despite the documented advantages of APS-C’s larger sensor size[1]. In my research, there were more lens options available for MFT than APS-C, which I took as an indicator of the platform’s momentum. Hell, the Wirecutter has a writeup of the MFT lenses you should investigate for your new camera, but no such article exists for APS-C. I ordered an Olympus OM-D EM-1. Plus, the camera is gorgeous in silver, which certainly didn’t hurt.

I ordered two lenses with the camera: an Olympus M.Zuiko Digital 17mm f/2.8, and an Olympus M. Zuiko Digital 45mm f/1.8[2] - 34mm and 90mm DSLR/35mm equivalent focal lengths, respectively.


On an overcast day, the 45mm didn't have any trouble snapping to focus, and freezing Tex and his favorite ball. 

Olympus’ autofocus is fast and accurate. Its face detection is helpful, in that it tries to intelligently focus on faces in the frame as best as it can, and it usually does a good job choosing the faces that are important based on the focal plane. I’ve been really impressed with the speed of the system, but I wonder how much of that is just the progression of technology overall since I bought my D200 in 2003.

Despite the 45mm’s lower f-stop, I’ve found that the 17mm is almost always the one I leave on the body. Maybe once the boy starts running more than fifteen feet away, I’ll use the 45mm more frequently. The 17mm is a good all-around lens. It’s not super fast or super sharp, but it also sits at a price point where I don’t worry about it always being on my camera. On the other hand, there have been a couple of times I wished I had sprung for the $150-more-expensive f/1.8 version of the lens. I might, yet.

Either way, once again having a camera I’m confident in and comfortable with means I’ve taken a ton of pictures. Mostly of my son, but that gives me something else to work on: expanding the subjects and events I’m comfortable shooting.


  1. See also. See also.  ↩

  2. I’ve linked to the international SKU, which is $150 less expensive, but carries no warranty. The Olympus-warrantied SKU is here, if you prefer. I couldn’t find an international SKU for the 17mm. I should note that I actually got a pretty good deal on the warrantied SKU, so I bought that one.  ↩

Bash on Synology DiskStation

I picked up a Synology DS415play a couple weeks ago. I’ve been looking for a good way to store family memories, as well as have device-independent, on-site storage and redundancy.

The Synology DiskStation is a Linux-based NAS, and I couldn’t stand that the default shell was ash. How to fix?

Process

I’m taking a lot from this post on the Synology forums, and trying to explain it in a little more detail[1].

Finding the bootstrap file for the Intel Atom CE5335 was a bit of a challenge, since Synology doesn’t use it as widely as some of the other CPUs in their lineup. Fortunately, the thread I linked above has a relatively recent (Nov 2014) bootstrap for a DS214play, which uses the same CPU. I guessed it would be the same, and it was.

I’m going to assume that if you’re reading this, you are thinking of doing the same on your DiskStation, and that you have an ill-defined-but-higher-than-zero knowledge of both *nix systems and how to Google.

First, you’ll need to ssh into the NAS as root (root’s password is the same as the admin user password).

You’ll need to execute the following commands, command by command:

$ cd /volume1/@tmp  
$ wget http://ipkg.nslu2-linux.org/feeds/optware/syno-i686/cross/unstable/syno-i686-bootstrap_1.2-7_i686.xsh  
$ chmod +x syno-i686-bootstrap_1.2-7_i686.xsh  
$ sh syno-i686-bootstrap_1.2-7_i686.xsh  
$ rm syno-i686-bootstrap_1.2-7_i686.xsh

Line by line, the above does the following:

  1. Change into the Synology’s temp directory
  2. Download the bootstrap script
  3. Make the script executable
  4. Run the script
  5. Remove the script

At this point you have ipkg, the package manager, installed, but your shell doesn’t know about the folder it’s installed in. You’ll need to add /opt/bin/ and /opt/sbin/ to the PATH[2] in your .profile.

While you’re in your .profile, you might see a line that says HOME=/root/. I changed mine to HOME=~/, since I want this .profile to be portable between users. I copied it to the admin user when I finished, so I have the same experience whether I’ve connected as admin or as root.
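Mine ended up with something like this near the top of .profile (a sketch; adjust the paths if your bootstrap installed somewhere other than /opt):

HOME=~/                                 # portable between users, per the note above
export PATH=/opt/bin:/opt/sbin:$PATH    # so the shell can find ipkg and anything it installs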

Now, if you type ipk and hit [tab] it should autocomplete to ipkg.

So, let’s install bash:

$ ipkg install bash

Bam. You’ll get some output, then you should have bash installed.

Now, I needed a way to get into bash when I connect to the device. Again, the Synology forums came to the rescue.

Add the following to the end of your .profile:

if [[ -x /opt/bin/bash ]]; then   
    exec /opt/bin/bash  
fi

The above checks whether /opt/bin/bash is executable. If it is, the shell execs /opt/bin/bash, dropping you into bash; if it is not, nothing happens, and you’re left in ash.

Add the following to .bashrc:

PS1='\u@\h:\W \$ '  
export SHELL=/opt/bin/bash

You may want to adjust the top line to your taste; that’s how I like my prompt to look. Here’s a good tool to help you build a prompt that suits you.

The second line sets the SHELL variable to /opt/bin/bash. Remember that .bashrc is only read by bash when bash is started, so the SHELL only gets set if bash is called.

Now, before you close your current SSH session, start a second one. You should get your new, fancy bash prompt. Success!

Once you have that good feeling, copy .profile and .bashrc to /volume1/homes/admin/, and start another ssh session, this time connecting as admin. If that works, you’re set.
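The copy itself is just one command (a sketch, run as root, assuming the stock home locations):

$ cp ~/.profile ~/.bashrc /volume1/homes/admin/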


  1. I find that if I start a project like this by thinking about (and sometimes outlining) a post like this as I go, I have a better understanding of what I’m doing. Often, if I can’t follow the thread from beginning to end, I don’t actually begin the project, because I feel like I don’t understand the process well enough.  ↩

  2. Here’s an explanation of the PATH variable in UNIX, if you’re not familiar.  ↩

Modern Data Storage

Since earlier this year, I’ve been making a focused effort to ensure that my and my family’s important data is safe. It’s the closest I ever get to making a New Year’s resolution[1]. When I picked up my 2008 MacPro a couple years ago, I built my own Fusion Drive, but also threw in a second, larger spinning-disk drive for internal Time Machine backups.

Obviously, a single copy is not a backup, and for years I have kept all of my documents in Dropbox. That’s a second copy of most stuff. I signed up for Backblaze earlier this year as well. That captures everything Dropbox does, plus the few things on my MacPro’s hard drive[2] that aren’t in Dropbox.

On-Site Improvements

I recently added a Synology NAS to the mix. On-site, large storage (with multi-drive redundancy), multiple user accounts, various web services, and a slew of other features made it very appealing. I picked the DS415play because of the hardware video transcoder and hot-swappable drives[3].

The Synology also allows each user account to sync their Dropbox (among other cloud storage providers) to a folder in their user home directory, which seemed like a nice way to have a second on-site copy of all of my and Lindsay’s docs.

In addition, the PhotoStation feature will help me solve the issue I’ve been struggling with: how do I make sure that Lindsay and I both have access to our family photos, consistently, effortlessly, and without relying on an intermediary cloud service? Neither of us is interested in uploading all of our photos to Facebook or Flickr just to share them. It also takes thought, effort, and coordination on our parts to get photos out of our Photo Streams and give them to one another. I want to minimize that, while ensuring that these photos are well backed up.

Unfortunately, there’s no package to back up my Synology to Backblaze. Marco wrote an article highlighting his issues trying to make that work, and he wound up settling on CrashPlan. I’ll likely do the same.

Expect posts in the coming weeks about how I’m messing with this stuff. I’ve found it to be a lot of fun, already, and I’m pretty impressed with the Synology. It’s a little fiddly for most people, but if you’re inclined to be a nerd - especially a Unix-y nerd - it’ll be right up your alley.


  1. I’m a big fan of the idea that if something is important to you, you should be doing it already.  ↩

  2. You might have noticed that I’m pretty focused on backing up my MacPro, and I’m much less worried about my rMBP. There are two reasons for this: first, my MacPro has all of my family photos in Aperture libraries that are too big to go into Dropbox; second, everything on the rMBP is in Dropbox, thanks to Dropbox for Business’ ability to sign the app into a work account and a personal account. I symlinked ~/Documents/ to ~/Dropbox (Under Armour)/Documents/, but I still have access to my personal Dropbox at ~/Dropbox (Personal)/. The only things that aren’t in there are my Downloads folder (which could be easily, and arguably should be), and my ~/Sites/ folder, which I really only use for Cheaters and as a repository for various software and configs for routers, switches, and WAN optimization devices, and which can be discarded at will.  ↩

  3. Synology’s feature matrix is a bit of a mess, but eventually I decided that hot-swappable drives were a must, which took me up to the DS415, and adding the transcoder was an additional $60, so that made the cut. Each person’s needs are going to be a little different.  ↩

Unblock-Us and Netflix Update

Nick wrote in with a good note regarding my Unblock-Us + BIND setup:

I noticed after setting up the netflix.com zone that unblock.us resolved most of the Netflix addresses to a CNAME, e.g.:

 secure.netflix.com.   86400   IN  CNAME   secure-1848156627.us-west-9.elb.amazonaws.com.

My ISP’s DNS server did not know about the address secure-1848156627.us-west-9.elb.amazonaws.com, but the unblock.us DNS server resolved it successfully. So I just added another zone for amazonaws.com, and forwarded those requests to unblock.us. That seems to have resolved it - Netflix now works. Not ideal, since the rule is a bit general, but I’m happy to have it working.

Good investigation, and little things like this may resolve some of the issues I was seeing with this setup, last year. I don't have the patience to keep up with it, but I'm certain some of you are more patient people than I am.
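For anyone following along with my config, Nick's extra zone is just one more line in named.conf.local, pointed at the same Unblock-Us forwarders (a sketch; use whichever forwarder IPs you configured):

zone "amazonaws.com" { type forward; forward only; forwarders { 208.122.23.22; 208.122.23.23; 184.106.242.193; }; };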

Cheaters to launch an SSH Session

I'm a stalwart Terminal fan for my engineering tasks. I don't understand why so many colleagues prefer a dedicated SSH client like SecureCRT when we have native SSH built right into the OS. One thing a lot of SecureCRT guys hold over my head is its nested folders of saved SSH sessions.

It dawned on me this morning that I could duplicate that functionality in something I'm already using: Brett Terpstra's Cheaters.

I won't get into an in-depth review of Cheaters here. Simply put, it's a small app that launches a web view of a locally-hosted set of pages. Brett's suggestion is to use it as a place to keep cheatsheets (hence the name), like a virtual cubicle wall.

I used a little grep and sed on our existing hosts file, and came up with a Markdown list of links to the hostnames of our devices, using the following syntax:

[hostname.domain.com](ssh://hostname.domain.com)
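My grep-and-sed pass was specific to our hosts file, but an awk one-liner along these lines gets you most of the way there (a sketch; it assumes the hostname is the second column, and uses /etc/hosts as a stand-in for whatever hosts file you're working from):

$ awk '!/^#/ && NF >= 2 { printf "- [%s](ssh://%s)\n", $2, $2 }' /etc/hosts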

I spent a couple minutes sorting the list into a reasonable hierarchy, then I used this nice little tutorial to create expanding lists using CSS and jQuery. I ran my Markdown list through Brett's own Marked 2, and copied the HTML to a new cheatsheet.

I took the CSS and JavaScript from the tutorial linked above, and dropped them into the appropriate folders inside my Cheaters folder. In my new cheatsheet, I linked the specific CSS and JavaScript:

<head>
    <link rel="stylesheet" href="css/expandolist.css">
    <script type="text/javascript" src="js/expandolist.js"></script>
</head>

I didn't need to worry about jQuery, since Cheaters already uses it. I added the appropriate IDs to the div that holds the list and to the first ul element. That's really all there was to it. Now I have a nice, organized, expandable list that lives in my menu bar, which I can use to launch SSH sessions right in Terminal without having to remember specific hostnames. Not bad for 45 minutes of effort.


My Cheaters SSH list

Considering iPhones

Two great reviews of the new iPhones dropped in the past couple days: John Gruber's and Matthew Panzarino's. Both are thoughtful and fairly deep. And while they both touch on the software, they focus almost entirely on the hardware. Interestingly, their conclusions about the biggest (no pun intended) question about the hardware were very similar.

Regarding the size question, here's Gruber:

  • If you simply want a bigger iPhone, get the 4.7-inch iPhone 6. That’s what it feels like: a bigger iPhone.

  • If you want something bigger than an iPhone, get the 5.5-inch iPhone 6 Plus. It feels more like a new device — a hybrid device class that is bigger than an iPhone but smaller than an iPad Mini — than it feels like a bigger iPhone.

And here's Panzarino:

The iPhone 6 Plus is a great option for people who don’t have or want an iPad — or simply don’t want to carry it. Where the iPhone 6 is a great upgrade to the iPhone line, the iPhone 6 Plus is a fantastic ‘computer’.

I fall decidedly in the camp that doesn't want to carry an iPad. I've had iPads since the first one. I loved it. I've carried an iPad Mini for the past two years, and I love it as a device. But I live on my phone. If I can get more room on my phone, I think I can give up my iPad. The iPhone 6 Plus is closer in size to a paperback than the iPad Mini is, and I'm quite comfortable reading a paperback at length. I read and send significantly more email from my iPhone than from my iPad. I more frequently use my iPhone to SSH to routers, switches, and my computers to help in troubleshooting. It seems very compelling to get more screen size, higher resolution, better battery life, and a marginally better camera, all on the device I most often use, while trimming the number of devices I carry by one. When I need more oomph, I'll get my laptop out.

On the other hand, my wife doesn't have a computer, and she uses her iPad nonstop. She saw the cutouts I printed at work and decided she wanted my 5s, and willingly offered up her upgrade. I think that's largely because she's very happy to use her iPad at home and her iPhone while she's out of the house. Where I "step up" to a MacBook Pro, she steps up to an iPad Mini.

I'm kind of rolling the dice. I haven't held the phone in my hand (or to my head). The closest I've gotten is holding a paper cutout that approximates the phone to my head. But, hey. Worst case scenario, I decide it was a mistake, and replace it in two years.

Thoughts from the Baltimore Marathon

I got to run the marathon in the Baltimore Running Festival yesterday. I say "got to run," but getting in really isn't too tough; the 4000 available slots aren't in such high demand that it's terribly difficult to sign up. The tough part is the months of training leading up to the race, the weekly runs of increasing distance, and the difficulty getting up and down stairs after your first 20-mile training run (not to mention after the actual 26.2-mile race).

I saw a few things that I thought were really interesting this year as compared to years past (I ran the half in 2008 and 2009, and the relay in 2011). I thought I'd take a couple minutes to document them.

  • The organizers have strong opinions that people shouldn't use headphones during the race, but far more people were wearing headphones than weren't, and significantly more people were using them than I'd seen in years past - especially full marathoners, myself included. It was the first time I've worn them in a race, and I frequently took them off to talk to the runner next to me. I saw many people doing the same. I think there's been a shift in the idea that when you have headphones on, you're being antisocial.
  • I saw people with all types of phones on the course, taking photos and video while spectating. iPhones had the majority by a long shot, but I saw dozens of Samsungs, a couple of HTCs, and a couple of LGs.
  • I saw half a dozen runners using their iPhones to FaceTime with friends and family while running. A couple relay members coordinated with their teammates running the next leg by FaceTime, but most often it was a runner talking to a friend or family member who was sitting comfortably at home. It struck me as an amazing way to share the experience with a loved one.
  • I never saw anyone with anything other than an iPhone making a video call. I'm not saying it wasn't happening, but I didn't see it.
  • Cell reception in the runners' area and celebration village was spotty at best, and significantly less reliable than in years past. Not sure if this is due to the proliferation of smartphones, or if more people were simply using the available bandwidth at a given time. Talking to people, the experience was the same on all carriers.
  • To go along with that, we've run our pop-up retail shop on a 3G card in a router for the past three years at the BRF, to much success. This year it was a painful experience.
Source: http://brettterpstra.com/2013/10/12/run-wh...

The Bitch is Back: Fever Bitch

I’m back on Fever. Years ago, I installed Shaun Inman’s RSS aggregator and reader because I was looking for a way to deal with my sprawling RSS list. Fever’s novel approach promised to show me the most interesting stuff by letting me see what was being talked about most by the people whose opinions mattered to me. I liked the interface, but when the iPad came out, I wanted something that felt more natural on it. As soon as Reeder came out for iOS, I jumped onto the Google Reader + Reeder bandwagon.

Fever has gotten so much better than when I last looked at it. Its iPad view is vastly more usable than I remember, and the ability to do feed refreshes through a crontab entry on your server is a pretty solid solution. Of course you still have the option to also refresh your feeds when you load the Fever page, so it feels like your feeds are always up-to-date.
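The cron side is as simple as it sounds: a crontab entry on the server that hits your Fever install’s refresh URL every so often (a sketch; the URL here is a placeholder, and the exact refresh endpoint is documented by Fever itself):

# refresh feeds every 15 minutes; replace the URL with your own Fever install's refresh endpoint
*/15 * * * * curl -L -s "http://example.com/fever/?refresh" > /dev/null 2>&1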

It’s also a lot clearer now how to use Sparks. I spent the last few years culling my RSS subscriptions. I eliminated nearly every high-volume site, and kept only a handful of feeds from people whose every word I wanted to read. I’m now going back and re-subscribing to a handful of high-volume sites, specifically to use them as Sparks[1].

I’ll probably have more to say about it after I’ve gotten a few days to use it again. But, I’m really excited about it at the moment.


  1. This is definitely going to be a balance that will take a while to get right. I’ve noticed that adding the RSS feed from Pinboard’s Popular Page can create a lot of duplicate entries in my “Hot” view. I’ve done zero research on this, though.  ↩

Selective use of Unblock-Us.com with my Very Own DNS Server

I’m using a service called Unblock-Us to unblock specific domains for me. The service is really excellent: a DNS-based service that (I assume) works by accepting DNS requests on their servers, then proxying the requests and all responses through their network. I say “I assume” because when I emailed Unblock-Us for confirmation, they wouldn’t confirm or deny my assumptions. I guess they didn’t want to give up the recipe to their secret sauce. Can’t blame them.

Now, while Unblock-Us is DNS-based, I’m not too cool with the idea of sending all of my DNS requests across the internet. So I cooked up a little modification to my caching DNS server that sends the domains I specify to Unblock-Us, forwards other requests to public DNS servers the first time they’re seen, and then serves them up locally from cache. Here’s how I did that.

  1. First things first, I signed up for Unblock-Us[1], and I activated it.

  2. I created a fresh SD card for my Raspberry Pi. You could run this on any Mac or pretty much any Linux distro. I’m sure you could make it work on Windows, though I have no idea how. There are plenty of reasons to use something more powerful than a Raspberry Pi, but I don’t care about them for the time being. The Pi is fine for me.

  3. I got the Raspberry Pi online and gave it a static IP on my network.

  4. Installed BIND 9, a great and really widely-used DNS server.
    sudo apt-get install bind9 on Debian (or Raspbian or Ubuntu) systems.

  5. Modified my configuration files by adding the following lines to the listed files (after editing, BIND needs a reload to pick up the changes; there’s a sketch of that after this list):

    1. /etc/bind/named.conf.options
      This specifies the DNS servers that my BIND server will forward requests to when it doesn’t already know how to handle them. It’ll take all answers from these guys and cache them until the TTL expires, so it can handle future requests without going out to the internet.

       forwarders {
              4.2.2.2;
              8.8.8.8;
        };
      
    2. /etc/bind/named.conf.local
      This defines the zones for specific domains that will just be forwarded to Unblock-Us’s DNS servers.

      ######
       # Conditional Forwarding Zones: These zones forward their DNS requests as specified
       ######
      
       Zone "unblock-us.com" { type forward; forward only; forwarders { 208.122.23.22; 208.122.23.23; 184.106.242.193; };};
       Zone "domain1.com" { type forward; forward only; forwarders { 208.122.23.22; 208.122.23.23; 184.106.242.193; };}; 
       Zone "domain2.com" { type forward; forward only; forwarders { 208.122.23.22; 208.122.23.23; 184.106.242.193; };};
       ⋯
       Zone "domainN.com" { type forward; forward only; forwarders { 208.122.23.22; 208.122.23.23; 184.106.242.193; };};
      

      The first line, above, sends all requests for unblock-us.com to the Unblock-Us DNS servers (primary, secondary, and tertiary in order). The other lines, I populate with any other domains I’d like to send to Unblock-Us, just by replacing “domain1.com”, “domain2.com” … “domainN.com”. For example, if I wanted to send DNS requests for Google, Netflix, and Apple to Unblock-Us, my file would contain the following lines:

      ######
       # Conditional Forwarding Zones: These zones forward their DNS requests as specified
       ######
      
       Zone "unblock-us.com" { type forward; forward only; forwarders { 208.122.23.22; 208.122.23.23; 184.106.242.193; };};
       Zone "google.com" { type forward; forward only; forwarders { 208.122.23.22; 208.122.23.23; 184.106.242.193; };}; 
       Zone "netflix.com" { type forward; forward only; forwarders { 208.122.23.22; 208.122.23.23; 184.106.242.193; };};
       Zone "apple.com" { type forward; forward only; forwarders { 208.122.23.22; 208.122.23.23; 184.106.242.193; };};
      

      It’s worth noting that Unblock-Us doesn’t support Google or Apple, so while they will properly handle the DNS request, they will not provide any additional benefit. I was just providing them as a configuration example. Netflix is a supported site, and a full list of supported sites can be found here.

  6. Finally, I updated the DHCP settings on my router[2] to point to my BIND server as the primary DNS server, and public DNS[3] as the secondary DNS server. As my devices’ DHCP leases expired, they’d check in with the router, and the router would hand them a new lease with the updated DNS settings.
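The reload mentioned in step 5 is quick (a sketch for Debian-flavored systems like Raspbian):

$ sudo named-checkconf /etc/bind/named.conf    # catch syntax errors first
$ sudo service bind9 restart                   # or: sudo rndc reload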

I’m sticking this here because I thought some of you might find it helpful. This isn’t a solution for those who are less than technically inclined. To be honest, I don’t know enough about BIND to really troubleshoot it, but there’s tons of helpful documentation online. If I learn anything significant, though, I’ll post more about it.

Update August 13, 2014: I've been meaning to update this post for a while. Toward the end of last baseball season, this configuration stopped working. I reached out to Unblock-Us about it, and they weren't able to give me much direction, except that the domains that MLB.tv and other services need redirected to Unblock-Us change frequently. They'd only support a configuration where all DNS traffic hits their DNS servers. So I reverted to using Unblock-Us DNS on my AirPort Extreme and being done with it.

I suppose it would probably be possible to sniff the outbound DNS requests your computer makes when accessing the services, and redirect those domains, but my fear is that it would be tedious to maintain as content providers switch CDNs, etc.
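If you want to try that, watching port 53 while you load the service is enough to see which domains are being looked up (a sketch; the interface name is an example):

$ sudo tcpdump -i eth0 -n port 53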

Update October 24, 2014: An interesting comment from Nick shed a little more light on capturing Netflix traffic. Worth reading if this setup is still something you'd like to do. Please do read.


  1. An affiliate link. If you sign up with this, I’ll get a little kickback. If you don’t want to use my affiliate link, here’s a non-affiliate link.  ↩

  2. Or DHCP server, if you run a separate DHCP server. I’m running it all from my AirPort Extreme, though.  ↩

  3. I’m using 4.2.2.2, just like the forwarder on my BIND server.  ↩

Re-Enabling Folder Actions in Mountain Lion

I'm not a big Folder Actions user; a lot of stuff that Folder Actions is designed for, I accomplish with some shell scripting & cron jobs, or Hazel.

I was surprised to find that when I tried to set up a new Folder Action in 10.8.2, it wasn't available; "Folder Actions Setup…" wasn't an option in the "Services" menu for any folder. Apparently, it's a pretty common problem for Mountain Lion users. Here's how I fixed it.

  1. Create ~/Library/Scripts/Folder Action Scripts/ & drop the folder action script you want to add in there.
  2. Strangely, open System Preferences → Keyboard → Keyboard Shortcuts, and click the "Reset to Defaults" button.
  3. Scroll up to the "Files & Folders" section, and make sure that the "Folder Actions Setup…" listing is checked.

My guess is that if you were like me and had some custom keyboard shortcuts when you upgraded from 10.7.x to 10.8, it borked up the Keyboard Shortcuts upgrade, which in turn borked up the Services menu. Dumb.


What I want Apple to do with Podcasts

On this week's The Talk Show, John Gruber, Adam Lisagor, and John August spend some time talking about Apple's new Podcasts app. At one point (46m 34s, if you're playing at home), Gruber outlines what he wants Apple to do with Podcasts.

"The thing I want from a podcasts app - the thing I want from Apple - is I want them to put my podcast subscriptions in the cloud. And […] when I say 'I've subscribed to this podcast' it's all stored in the cloud, and then whichever device I look at, they all just know which ones I subscribe to. 'Cause the way this works is if I subscribe to a new show on my Mac in iTunes, today, then I go to my iPhone and open it up, that podcast is not in the Podcasts app yet. I have to sync it. That's crazy right? Isn't that what Steve Jobs told us a year ago that we wouldn't have to do anymore?"

What I want might be a little greedier, and probably a lot more work for Apple to do, but it seems technically feasible. I'll lay it out:

  • I want what Gruber wants, sure. I want a list that's stored in iCloud with all my podcast subscriptions.
  • I want it to be updated with which episodes I've listened to and which I haven't.
  • Additionally, I want it to store playback time location.
  • I want it to be a system-wide store, like Calendars, that can be read and written by third party applications.

The upshot would be huge. Say I'm talking with a coworker, and he says, "Oh, hey, do you listen to The Moth? This week's episode is really good." I could open iTunes, click the subscribe button on my Mac, and listen to the first 20 minutes before I have to head off to a meeting. Fast forward an hour, and I'm about to head home. I open Instacast, and it sees the updated iCloud Podcasts info, and after downloading the episode of The Moth, I can pick up right where I left off before my meeting. When I get home, I turn on my Apple TV and listen to the last part of the episode right there, through the Apple TV's native podcast support.

Instacast, on its own, handles most of this. It syncs subscribed podcasts, episode status, and playback location across the iPhone and iPad apps pretty well. This seems like a great spot for Apple to come in with a good solution that takes work off its developers and gives its customers a better experience.

MLB Standings Geeklet (2012)

I hoped that I could get the MLB Standings Geeklet updated before the 2012 season started. Unfortunately, I haven't been able to. A lack of available time, plus MLB's changes to how the stats are served, means that I lack the temporal funds to spend on this project.

If I see a good alternative, I'll pass it along.

Update (June 1, 2012): I enabled the Geeklet on a whim a couple weeks back, and it was working fine! I guess the MLB team websites hid the standings until the season started this year.