Light Bulbs & Lasers

May 9th, 2015 • permalink

“I used to be a light bulb, but now I’m a laser.”

I love this quote (spoken by Nicholas Negroponte, regarding his intense focus on the $100 laptop project). It helped inspire the development of Clara, a brain-sensing, environment-augmenting, focus-enhancing smart lamp I built with Marcelo Mejía Cobo and Belen Tenorio for our combined MFA Interaction Design / MFA Products of Design Smart Objects course at SVA this spring.


Clara is designed to help creative, easily distractible folks find their way down the “ideation funnel” (jeez, that’s a phrase only a product manager could love). It’s a lamp with an embedded speaker, and it responds to your brain waves to subtly adjust your environment. At first, the lamp emits a warm, comforting glow, conducive to idea generation and creativity. But as you start homing in on a specific idea, the light becomes crisper and cooler, and the volume of the ambient noise flowing from the embedded speaker slowly increases, enhancing your ability to concentrate and block out external distractions.

The best part is, it actually works!

We used the Neurosky MindWave Mobile, a Bluetooth EEG-reading headset, to wirelessly detect “attention,” then mapped the lamp’s color temperature and speaker volume to that reading.


Clara uses the following components:

…and relies on the following Arduino libraries:

The Arduino code is available on Github.

Working with the EEG headset was tons of fun. I started out using a Mattel MindFlex, a 5- or 6-year-old toy which enabled the user to move a ball through an obstacle course using nothing but their power of concentration. Some ITP grads figured out how to do this a few years ago, and their blog post on the topic was invaluable. They even put together a Processing sketch that allows you to visualize your brain waves in 1-second intervals as they’re coming in.

(Brief sidebar: brain waves? Yeah, yeah. This is some woo-woo stuff, for sure. I spent a lot of time early on trying to find reliable research about what kind of behavior can be predicted by an EEG. Some researchers figured out, apparently, that the brain basically shuts down in the half-second before a “eureka moment.” We were hoping to use this to predict a eureka moment and block out distractions, but that seemed impractical. After all, what could we do? Sound a blaring klaxon at your moment of genius, warning you not to lose your brilliant idea? So we decided, instead, to focus on setting up an environment conducive to having a eureka moment.)

After the MindFlex, we landed on the Neurosky MindWave, which uses the same chipset but is Bluetooth-enabled and geared for more professional uses. Sparkfun and Neurosky both have instructions (1, 2) for connecting the MindWave to an Arduino; they overlap quite a bit, so I’d recommend reading them side-by-side. After some technical difficulties, the big breakthrough was to disconnect the BlueSMiRF’s RX pin. After all, we never need to send data to the headset – we’re just receiving. Also, disconnect the BlueSMiRF’s TX pin during sketch upload, or it’ll fail. After that, it was smooth sailing using the kitschpatrol library. I just needed to change the serial port from 9600 baud to 57600 baud to accommodate the MindWave.


The other big challenge was “fading” between colors on the Neopixel array. The ultimate solution is to use the map function to get the desired RGB color between the start and end points (warm light, cool light) and then use a bunch of for loops to adjust the color in many steps. There are plenty of forum posts about this, but I found this example code by Stack Overflow user ladislas to be invaluable.
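As a sketch of that map()-plus-for-loop approach (this is plain C++ for illustration, not the actual Clara sketch), stepping between two RGB endpoints looks like this:

```cpp
// Linear interpolation between two RGB colors, one step at a time.
// Calling this for step = 0..steps inside a loop produces the gradual fade;
// on the hardware, each result would be pushed to the Neopixels with a short delay.
struct RGB { int r, g, b; };

RGB lerpColor(RGB from, RGB to, int step, int steps) {
    // same arithmetic as Arduino's map(step, 0, steps, from, to), per channel
    auto chan = [&](int a, int b) { return a + (b - a) * step / steps; };
    return { chan(from.r, to.r), chan(from.g, to.g), chan(from.b, to.b) };
}
```

Integer math keeps it Arduino-friendly; with enough steps (and a few milliseconds between them) the fade reads as continuous.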

The basic structure of the Arduino code is straightforward. The Neopixel strip is instantiated, then the Music Maker shield is instantiated, then we take advantage of interrupts to listen for, receive and act on Bluetooth serial data while the music is playing. When the MindWave detects “activity” (a number from 0-100 generated via some proprietary algorithm on the Neurosky chip), we initiate the “fade” of the music and the light.
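For illustration, the attention-to-environment mapping boils down to two map() calls. The ranges below are invented placeholders, not the values from our actual sketch:

```cpp
// Map the MindWave's 0-100 "attention" value to a target color temperature
// and ambient-noise volume. All ranges here are illustrative only.
struct Targets { long colorTempK; long volume; };

long mapRange(long x, long inLo, long inHi, long outLo, long outHi) {
    // same formula as Arduino's map()
    return outLo + (x - inLo) * (outHi - outLo) / (inHi - inLo);
}

Targets attentionTargets(long attention) {
    return {
        mapRange(attention, 0, 100, 2700, 6500), // warm glow -> crisp cool white
        mapRange(attention, 0, 100, 20, 90)      // quiet -> louder ambient noise
    };
}
```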

And that’s Clara! Belen and Marcelo were responsible for the incredible industrial design, and they’re putting together a video of it in action. Since nothing will show off the effect better than that, I’ll update this post when it’s ready!

Thither & Yonder

May 9th, 2015 • permalink

I’ve been working on a bunch of mapping projects lately, mostly designed for pedestrians.

Thither is a navigation concept that reimagines efficiency. You give it a destination and an arrival time within the next hour, and it will route you to optimize for that precise arrival time. If you’re in a rush, it will try to send you through shortcuts like parks, plazas, atriums, subway passageways and other public-private spaces. If you’ve got time to kill, however, it’ll send you on a journey past interesting places, using the Google Maps API to estimate walking time.



Thither consists of a front-end mapping service using Google Maps, the Google Places API and some pretty Stamen tiles, coupled with a back-end PHP script that takes in your origin, destination and desired time of arrival and finds the perfect route with the maximum possible waypoints, which it then returns as a JSON array.

It builds an interesting route by first generating your direct walking route, then finding three sets of coordinates along it, at the 25%, 50% and 75% marks. It uses those to make three calls for nearby waypoints, one at each of those coordinate sets. (This was a workaround to Foursquare limitations, when I was experimenting with that as a data source. It clustered waypoints too tightly around a single set of coordinates.) It then filters out duplicates, orders the list by proximity to your direct walking route using the Haversine formula and the Google Directions API to calculate distance, and loops through until your modified walking route hits the ceiling set by your allotted time.
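The haversine step is the only real math involved. My version lived in the PHP backend; here's the same formula as a self-contained C++ sketch:

```cpp
#include <cmath>

// Great-circle distance between two lat/lng points in meters (haversine formula).
double haversineMeters(double lat1, double lon1, double lat2, double lon2) {
    const double R = 6371000.0;                      // mean Earth radius, meters
    const double rad = 3.141592653589793 / 180.0;    // degrees -> radians
    double dLat = (lat2 - lat1) * rad;
    double dLon = (lon2 - lon1) * rad;
    double a = std::sin(dLat / 2) * std::sin(dLat / 2)
             + std::cos(lat1 * rad) * std::cos(lat2 * rad)
             * std::sin(dLon / 2) * std::sin(dLon / 2);
    return 2.0 * R * std::asin(std::sqrt(a));
}
```

It's cheap enough to run against every candidate waypoint, which is exactly how the route-ordering loop uses it.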

The data is provided on the fly thanks to Atlas Obscura. They don’t have an API, so I’m scraping JSON from their site. Sorry, guys! You should probably explore their site, give them money, and sign up for one of their Obscura Day trips at the end of the month. They do fine work.

Thither is a live, fully-functional prototype, which you can access at

Next up: Yonder.


Yonder is another pedestrian navigation concept, designed for the Apple Watch.

Unlike Thither, which helps you explore your city, Yonder relies on your local knowledge to help you get to your destination quickly.

Yonder is essentially a smart compass, overlaid on a street map and paired with local waypoints. It seeds the map with popular attractions, so you can quickly identify a point near your destination and tap it to reorient the compass needle. This way, you can lock in your destination without dictating its precise address, and then orient yourself with only a quick glance at your wrist. It’s especially helpful when you’re underground and trying to exit the subway system without getting disoriented. (I know I’m not the only one on the planet who was shocked to learn that compasses work underground. So obvious, yet so mind-blowing!) And if you’re the kind of person who works better with cardinal directions than waypoints, never fear – the compass needle defaults to “Manhattan north” (28.911° offset from true north).
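Normalizing a true-north heading into "Manhattan north" is nearly a one-liner. A sketch (the 28.911° figure is from the post; the sign convention here is my assumption and depends on your compass API):

```cpp
// Rotate a true-north compass heading (degrees) into "Manhattan north".
// Manhattan's street grid is rotated 28.911 degrees from true north;
// whether to add or subtract the offset depends on the heading convention.
double toManhattanHeading(double trueHeading) {
    double h = trueHeading - 28.911;
    while (h < 0.0)    h += 360.0;  // wrap back into [0, 360)
    while (h >= 360.0) h -= 360.0;
    return h;
}
```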

While it’s not available on your wrist just yet, a working prototype has been developed and can be visited at (on an iPhone or iPad, and while you’re somewhere near Chelsea, since these waypoints are temporarily hardcoded).

The prototype uses OpenLayers 3, rather than Google Maps, as its base, due to its strong vector support. This allows the map base to rotate smoothly while the waypoint icons remain stationary. It also uses Compass.js, which simplifies interaction with the iPhone or iPad’s built-in compass.



Thither was conceived and designed in collaboration with my SVA IxD colleague Marcelo Mejía Cobo. Yonder was conceived and designed in collaboration with Marcelo and our SVA IxD colleague Nic Barajas.


December 20th, 2014 • permalink

I love my Yamaha A-760 Natural Sound Stereo Amplifier (circa 1981). I came across it on craigslist six years ago, and I knew that it was the one when I discovered this restoration project online. One doesn’t typically put that much effort into something without a good reason. It only confirmed my excitement when I learned that I was buying it from its original owner, who still had the owner’s manual.


But, as is always the case, my true love has been tested over time. The biggest challenge? This amp hails from an age before most A/V hardware was equipped with remote controls, and I am a lazy, modern human. To make matters worse, TV seems to be mixed so erratically these days — I feel like I’m constantly fine-tuning the volume!

I spent years despairing over this. (Time that could’ve been better spent purchasing a new receiver? No, thank you.) Fortunately, I also spent this time leveling-up my physical computing skills, and eventually found a video tutorial online by Ian Johnston, who had tackled a similar project using an Arduino and a motorized potentiometer in place of the existing volume knob. Perfect! I called upon my friend, storied recording engineer Jon Altschuler, to help me make sense of Ian’s schematics and the byzantine Yamaha service manual (PDF), and together we devised a shopping list and plan of action.

…and then, this project sat on the shelf for almost a year, during which my girlfriend and I moved into our new ginkgo-adjacent apartment and the precious A-760 sat in a moving box.

But! It’s the holidays! And my girlfriend’s parents are coming to visit. Can’t have them see our apartment without a fully-functional A/V system, right? So instead of cleaning the house, I spent this week tricking out my amp. Here’s what the final project looks like:

This project utilizes the following components:

…and Arduino code that can be downloaded via github and leverages the following sources:

The actual steps of this project will vary depending on the internal configuration of your amplifier. In my case, I had to remove a daughterboard containing the volume-control potentiometer and a couple of resistors, so I could make space for my motorized pot and my Arduino hardware.


The motorized pot works like any other DC motor. In order to make the motor go forwards and backwards, you create a circuit called an H bridge (or purchase a chip that does the work for you; NYU ITP has a great tutorial on this). To save time, I used the Seeed Motor Shield (not pictured), which worked with very little setup, thanks to their handy-dandy code library.
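The H-bridge logic reduces to a small truth table: two direction inputs plus a PWM speed. Here's a hypothetical pure-logic sketch (the struct and names are mine, not the Seeed library's API):

```cpp
// Direction/speed command for an H-bridge-driven DC motor (the motorized pot).
// in1/in2 are the two bridge inputs; energizing one or the other reverses polarity.
struct MotorCmd { bool in1; bool in2; int pwm; };

MotorCmd volumeMotor(int direction /* +1 = up, -1 = down, 0 = stop */,
                     int speed     /* 0-255 PWM duty */) {
    if (direction > 0) return { true, false, speed };  // forward: volume up
    if (direction < 0) return { false, true, speed };  // reverse: volume down
    return { false, false, 0 };                        // both low: motor coasts
}
```

The shield library wraps exactly this kind of logic behind friendlier function calls.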

I also wanted to control the power switch, which required use of a servo. After much deliberating (and ruing of my non-STEM liberal arts education), I decided that the best approach would be to ziptie the servo next to the internal power switch, and use a thin, strong filament to pull the button. This enables remote control of the switch without interfering with manual control. I used some plastic picture hanging wire and fused it to the plastic button using my soldering iron (with a tip I didn’t mind destroying). The result is kind of hypnotizing:

I decided to graft my project onto the existing DC power circuitry inside the amp. Turns out it runs at 47v. When powering the Arduino via its DC jack or VIN pin, it’s best to keep the power supply between 7 and 12v so its internal regulator can work properly. To that end, I used a small DC-DC buck converter to step down the voltage and feed my project. I was able to stick it on top of a heatsink adjacent to the gigantic DC transformer inside the amp.


This worked great, with one exception. Because the onboard DC power is controlled by the amp’s power switch, once the Arduino shuts off the power it can’t turn itself back on again! Kind of like The Most Useless Machine Ever.

To resolve this, I removed the DC-DC buck converter and replaced it with a small 9v AC-DC adapter (you can use any adapter that outputs 7v-12v DC and is small enough to fit inside your amplifier), wired directly to the 110v AC current inside the unit. This way the Arduino receives continuous power, independent of the amp’s DC transformer.


For the infrared receiver, I used a standard IR sensor, which I stuck in the space hollowed out for the Yamaha’s Listening Level Monitor, a slider I never use. The IR library has an example that allows you to use the Arduino’s serial monitor to listen for incoming signals, so I jotted down the commands for vol up, vol down, mute and power from my universal remote. Thus, my setup essentially mimics a Sony amplifier.

The biggest hurdle was that both the motor shield and the servo library require use of the same hardware PWM timer on the Arduino, so the volume control would stop working once the servo was attached, even though there wasn’t a direct pin conflict. I was unable to figure out a way around this problem, and it looks like I’m not the only one. For now, my ugly hack is to reset the Arduino board every time the servo is triggered. This is fairly instantaneous and it allows the volume control to keep working properly.

So, what does it look like put together? It’s a little messy, but it works:


My next revision will definitely be smaller, likely using an Arduino Nano with a single hand-made circuit instead of an Uno with a big shield, and may add controls for other amp features, such as input source.

Oh, and I guess I should note that other amps go to 11… but this one STARTS AT INFINITY.


Char Talk

December 17th, 2014 • permalink

If you are playing around with Yosemite’s SMS relay feature and pissing off all your Android-using buddies with your cropped text messages, boy have I got the fix for you.


I experienced this problem firsthand as soon as I upgraded to Yosemite and eagerly started sending SMS messages via the Messages app. My friend was not nearly as enthusiastic about my new toy, as you can see. So why were my texts cropped? Well, after counting the number of characters the texts were getting split into and googling around, I discovered that it’s a character encoding issue!

It seems that SMS messages are split into 67-character chunks (rather than the typical 160-character limit) when they’ve got funky Unicode symbols in them. This can be really rough when you’re texting in a non-Latin language — texts aren’t cheap! But it was also a big problem for us. So, what Unicode symbols was I using?
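The arithmetic behind those numbers: GSM-7-safe texts fit 160 characters in a single message (153 per part once concatenation headers are added), while anything requiring UCS-2 drops to 70 (67 per part). A sketch of the segment math:

```cpp
// Number of SMS segments a message of `chars` characters will occupy.
// gsm7Safe: true if every character fits the GSM 03.38 7-bit alphabet
// (roughly, plain ASCII with straight quotes); false forces UCS-2 encoding.
int smsSegments(int chars, bool gsm7Safe) {
    int single = gsm7Safe ? 160 : 70; // limit for a standalone message
    int multi  = gsm7Safe ? 153 : 67; // per-part limit for concatenated messages
    if (chars <= single) return 1;
    return (chars + multi - 1) / multi; // ceiling division
}
```

One curly apostrophe is enough to flip a whole message into the UCS-2 column.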

I noticed that the split texts all had apostrophes in them. And, well, my apostrophes were looking super classy, thanks to the new text replacement features introduced in Mavericks. Unicode! All these attractive quotes and dashes were coming at a pretty steep price, so it was time to make a change.

You can remove smart quotes and dashes system-wide, but for our purposes, we’re only going to remove them in Messages.

To do this, navigate to Edit > Substitutions > Show Substitutions in the Messages app, and deselect everything.


And that’s all there is to it! Now your texts will retain their generous 160 character length, and your friends will be slightly less annoyed with you. Glory be.

(PS: check out my post on using AppleScript as an SMS/iMessage relay if you’d like to flip the script and use your Stone Age device to communicate via iMessage.)


November 19th, 2014 • permalink

So, in late August, my girlfriend and I moved into a lovely new apartment in the Slope, and everything was right with the world.

…until October. That’s when we were faced with our first true cohabitation challenge — the ginkgo tree outside our front door.


Let me get this out of the way first: the ginkgo is a wondrous species, so hardy that it’s considered to be a living fossil. Several ginkgos even survived Hiroshima!

But, well, there’s no classy way to put this. The fruit of the poor ginkgo smells like death. In the words of the Washington Post, “the bouquet of a ginkgo tree’s fruit has strong notes of unwashed feet and Diaper Genie, with noticeable hints of spoiled butter.”

It’s truly untenable. So I built a wearable ginkgo detector that vibrates when the wearer walks within 100ft of one of these wretched, incredible beauties. I call it Stinkgo. (I also built a complementary web simulator at, if you’d like to try it out without getting your hands dirty.)


Stinkgo consists of a Raspberry Pi (actually a low-power Pi clone designed for wearables called the ODROID W), a 750mAh LiPoly battery, an Adafruit Flora GPS unit, a vibration motor and a database of ginkgo trees.

The order of operations for this project was roughly as follows: 1) set up a local Postgres installation; 2) isolate the ginkgo trees from the city’s tree census data; 3) geocode lat/lng coordinates for the city’s ginkgos; 4) generate a geohash for each ginkgo; 5) set up the Raspberry Pi; 6) install WiringPi in order to access the GPIO outputs; 7) set up the Flora GPS receiver; 8) set up the PHP scripts; 9) solder; 10) detect nearby ginkgos!


Though I’ve chosen to use Sqlite for the final product, I used Postgres to prepare the data, mostly because it offers comprehensive geolocation features via PostGIS. By far the easiest way to install Postgres on the Mac is to use, which offers a dead-simple one-step installation. Once you’ve got that going, you’ll need to connect to the database. I use an ancient free version of Navicat Lite, which is no longer being offered, but which you can download here. (Or you can use any PostgreSQL GUI you’d like.) You’ll just need to create a table for your ginkgos, with three columns: lat (float8), lng (float8), and geohash (varchar). (We’ll get to that last one in a moment.)


The city offers its tree census data in borough-by-borough files, downloadable in many formats. The data includes a column for species, and by cross-referencing with this species list (PDF), we can tell that we’re looking for GIBI. There are many ways to do this directly in Postgres, but I just downloaded the CSV, opened it in Excel, deleted every column except BUILDINGNU, BUILDINGST, BOROUGH, ZIPCODE and SPECIES, and sorted by SPECIES. Then I deleted everything that wasn’t a GIBI. I repeated the process with the other boroughs, and created a master CSV with ~11,000 rows of coordinates.


The shapefiles provided by the city actually include geolocation data, but it sucks. It’s a rough estimate of street position based on the tree’s street address, but that estimate does not offer us the precision we need. (For the record, I did figure out how to isolate the coords from the shapefile: you’d need a command line tool called ogr2ogr, which you can install using these instructions. The shapefile uses EPSG 2263 projection, so you’d use a command kind of like this to create a CSV of coordinates, noting that you’ll actually get the longitude in the x column: ogr2ogr -f csv -s_srs EPSG:2263 -t_srs crs:84 -lco GEOMETRY=AS_XYZ bklyntrees.csv BrooklynTree.shp) So, instead, we turn to a rooftop geocoder! I used the Bing Maps API, piped through this absolutely wonderful geocoding tool. I actually had no trouble running several thousand requests through the tool at one time, but YMMV. It’s important to remember that geocoding is a murky business. Most of the results you’ll get from Google or Bing will geocode the street address to the center of the building. There’s no way to get accurate results geocoded to the sidewalk. It sucks, but it’ll suffice.


So, 11,000 trees is a lot of data to sift through. The Raspberry Pi actually needs to determine distance from the nearest tree every 3 seconds (using the haversine formula), and it’s simply not powerful enough to do that with so much data in such a short window. A friend pointed me to geohashing as a means of dividing the data into smaller, more workable chunks, and it worked beautifully. I wrote about geohashing on the SVA IxD blog, but the executive summary is that it’s an open-source algorithm for dividing the world into recursive grids. It adds a character for each level of precision, and 6 characters’ worth of precision gives us a grid that covers what would amount to about a 4-avenue x 5-block square of Park Slope (if the grid was aligned to the north). Fortunately, it’s really easy for us to calculate a 6-character geohash for each of our trees using PostGIS. The command would look something like this: UPDATE stinkgo SET geohash = (SELECT ST_GeoHash(ST_SetSRID(ST_MakePoint(lng,lat),4326),6));. Later, we’ll use a PHP class to geohash your realtime lat/lng with the same level of granularity. Then the script will pull only those records from the database that share the same geohash, so that the Pi only needs to calculate distance from, say, 70 trees, instead of all 11,000.
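For the curious, the whole geohash algorithm fits in a few lines. Here's a minimal encoder in C++ (the Pi actually used a PHP class, and PostGIS's ST_GeoHash did the batch work; this is just to show the mechanics):

```cpp
#include <string>

// Minimal geohash encoder: alternately bisect longitude and latitude,
// emitting one base32 character per 5 bits of refinement.
std::string geohashEncode(double lat, double lon, int precision) {
    const char* base32 = "0123456789bcdefghjkmnpqrstuvwxyz";
    double latLo = -90.0, latHi = 90.0, lonLo = -180.0, lonHi = 180.0;
    std::string hash;
    int bits = 0, ch = 0;
    bool useLon = true; // even bits refine longitude, odd bits latitude
    while ((int)hash.size() < precision) {
        if (useLon) {
            double mid = (lonLo + lonHi) / 2.0;
            ch = (ch << 1) | (lon >= mid);          // 1 = upper half
            (lon >= mid ? lonLo : lonHi) = mid;     // shrink the interval
        } else {
            double mid = (latLo + latHi) / 2.0;
            ch = (ch << 1) | (lat >= mid);
            (lat >= mid ? latLo : latHi) = mid;
        }
        useLon = !useLon;
        if (++bits == 5) { hash += base32[ch]; bits = 0; ch = 0; }
    }
    return hash;
}
```

Trees whose coordinates fall in the same grid cell share a hash prefix, which is why a simple string-equality lookup is all the Pi needs.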


When you’re done, you can use Navicat to export your Postgres database into a CSV, and then reimport into a Sqlite file which you can upload to the Pi. This isn’t strictly necessary, of course, but it’s a little easier than setting up Postgres on the Pi and in my unscientific tests it’s also slightly quicker. Importantly, the PHP script I’ve written assumes that the database file is Sqlite.


There are many tutorials out there for this, but here’s a brief summary. I set up the Pi headlessly, and I’d recommend doing the same. (I used a stock B model for testing, then moved the SD card over to the ODROID W once everything was finished, as the ODROID W doesn’t have built-in networking and I felt no need to solder anything extra.) To do this, you’ll need to use Raspbian, not NOOBS. (NOOBS doesn’t support SSH out-of-the-box.) I did a lot of testing on the Pi so I installed a full Apache environment, but you really don’t have to. You just need to install PHP, the PHP development tools, git and Sqlite to get started (sudo apt-get install php5 php5-dev sqlite3 git-core).


In order to use the Raspberry Pi’s GPIO pins to control the vibration motor, we’ll need to use the WiringPi library. Specifically, since we’re using a PHP script, we’ll need the PHP wrapper. Basically, you’ll start by cloning the repository recursively (so as to get the main WiringPi library too):

git clone git:// --recursive

…and then find that folder, and create your build folder:

cd WiringPi-PHP

You’ll need to find the file in the resultant build folder, and move it to /usr/lib/php5. Then you’ll need to set the pin type. Create a file at /etc/php5/conf.d/wiringpi.ini with the following info:

Later, in the PHP script, we’ll be controlling pin 1 (or GPIO 18), as that’s the only GPIO pin on the Pi with hardware PWM (pulse-width modulation) support, and we’ll need this to vary the intensity of the vibration motor depending on your proximity.

(Incidentally, I found pin mapping fairly mystifying, so I have to thank x1nick, whose experience with WiringPi-PHP made this a lot easier for me. You can find that thread here. The wiringpi reference pages on pin definitions and special pin functions were also quite helpful.)

You’ll also need to find the wiringpi.php file and save it for later. We’ll need it to be in the same directory as our main PHP script in order to control the vibration motor.


The primary resource for this is the Adafruit guide. It references their standard breakout board, but the Flora GPS is basically the same thing in a slightly altered form factor (and it takes 3.3v instead of 5v). Note that we’ll be using UART to connect to the Pi’s GPIO pins, rather than connecting via USB, so you’ll need to follow the instructions at the end of the document. You’ll also need to configure gpsd to work on boot. You can do this by running sudo dpkg-reconfigure -plow gpsd. As you’re answering the questions, use the same path from the Adafruit guide (/dev/ttyAMA0), and use the -n option to automatically poll the GPS board rather than wait for a client to connect.



For the vibration circuit, you’ll need a vibration motor, a diode, a transistor and a chunk of stripboard. The transistor allows you to use the full 5v output of the Pi while controlling the power of the motor using PWM, and the diode prevents power from flowing back into the Pi when the motor shuts off. This is the basic circuit — just replace the Arduino with a Pi, and pin 9 with WiringPi pin 1 / GPIO pin 18. (And, uh, the breadboard with stripboard.)
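The duty-cycle math is just a linear map from distance to intensity. A sketch (the 100 ft threshold is from the project; the 0-1023 range matches WiringPi's default hardware PWM range, and the linear falloff is my assumption):

```cpp
// Map distance to the nearest ginkgo (feet) onto a 0-1023 PWM duty cycle:
// full buzz at the tree, silence at 100 ft and beyond.
int vibrationDuty(double distanceFt) {
    const double range = 100.0;                 // detection radius, feet
    if (distanceFt >= range) return 0;          // out of range: motor off
    if (distanceFt < 0.0) distanceFt = 0.0;     // clamp bad GPS readings
    return (int)((1.0 - distanceFt / range) * 1023.0);
}
```

The PHP script would feed the result to WiringPi's PWM write on pin 1 every polling cycle.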



Now you’ll upload the PHP script, which you can download via github. Note that the script outputs your coordinates to a file called log.csv. This is largely for troubleshooting. If you don’t want the log, just comment out the relevant sections of the script.

To execute the script every 3 seconds, I use a series of cron jobs. (Perhaps there’s an easier way to do this, but I don’t know what it is. Other than looping within the script, I guess.) Cron doesn’t get more granular than 1 minute intervals, so you’ll need to use the sleep function to trigger them at 3-second intervals. Make sure you’re editing the crontab as sudo, or else wiringpi won’t have access to the pins. Once you’re in the editor, you can format your triggers as follows:

* * * * * sleep 3; /your/path/to/stinkgo_wrist.php
* * * * * sleep 6; /your/path/to/stinkgo_wrist.php

and so on.


Here’s the pinout for the Pi’s GPIO ports. It’s confusing, because there are several different ways to identify each pin. The PINS mapping we used during WiringPi-PHP setup above equates to WiringPi in the diagram below.



So, to hook up the GPS and the vibration motor, we’ll need the following (color coded according to my photo below):

GPS: 3.3v power, ground, RX (WiringPi pin #15), TX (pin #16)

(*NOTE: RX and TX must be switched when connecting to the GPS unit!)

Vibration motor: 5v power, ground, PWM pin (pin #1).

In the end, it should look like this (you can use a breadboard and alligator clamps to get this all temporarily set up before you solder):



At this point, you’re basically done. Your software is running, your GPS is functional, your script is uploaded, and your wiring is complete. You should be able to attach a USB battery pack and go into the world to test. (The GPS requires line-of-sight. A window might do the trick, or it might not. Be patient. The red LED on the GPS will stop blinking when it has acquired a fix.)


Now, you get to solder everything. For the GPS unit, solder the 3.3v power, ground, TX and RX cables, switching RX and TX as mentioned above. For the vibration circuit, solder the 5v power, ground, and PWM pins as mentioned above. Assuming you’re also switching from the stock Pi to the ODROID W, you’ll need to solder all the pins into the header. Here’s the pinout, color-coded as above — double-check that the board is oriented appropriately!



In my experience, once I switched the SD card over, the ODROID W worked perfectly, with no additional configuration needed. Just solder with care!


Once you’re all set up, sew your components into a piece of fabric (I used this ginkgo pattern) to create a wristband. You can use cotton batting (or, if you’re me, an old sock) to pad everything.



Congratulations! You have a ginkgo detector! Sadly, the trees are pretty bare and innocuous these days, but you can lie in wait until the next grotesque ginkgo berry season comes around, and then dazzle your buddies. Happy hunting!



Ungluing “Contracts of Adhesion”

October 30th, 2014 • permalink


I gave a presentation on contracts of adhesion last night, as the final project for my Introduction to Cybernetics and the Foundations of Systems Design course at SVA IxD. Here it is!


So… what’s a contract of adhesion?


In brief, it’s a contract in which one party dictates all the terms, and the other party has absolutely no power to negotiate. It might be a foreign term, but we come across the concept all the time.


In leases, for instance. Most of us have signed exactly this lease, and had zero ability to edit it. (This over-xeroxed copy was provided by my broker when we rented our apartment a couple months ago. He was surprised that I even wanted to read it.)


Or, on the back of event or transportation tickets. We’re bound by this agreement, yet we have no idea it exists.


Or, of course, in the infamous EULA. This is what I’m going to focus on. We’re all familiar with this screenshot — a long, read-only document filled with legalese in small capital letters, and there on the bottom? A juicy little checkbox. All we have to do is click it, and we’re on our way. This is the problem!


Here’s a cybernetic representation of the problem. Essentially, the company and the customer are having two completely distinct dialogs, running in parallel. The company wants to limit its liability, and it uses a contract to do so. The customer just wants to access the app, and it uses the checkbox to do so. The contract seldom enters the customer’s equation.


Another way to look at this is as a first-order feedback loop. A traditional example of this model is a person using a thermostat. The person, represented as the outside loop, sets the temperature, and the thermostat, represented as the inner loop, turns the heating/cooling element on/off in an effort to reach that temperature. But the thermostat — unless it’s a Nest — has no agency over the outer goal. Similarly, here, the customer has no agency over the end user license agreement. The company sets the terms.

And this sucks. We all know why it sucks for customers — you can’t negotiate the agreement, probably won’t read it, and are likely bound by it. I would argue it sucks for companies as well. It can be a toxic PR nightmare when one of your customers finally reads the contract and realizes its terms. And if a court gets involved, it could decide that there’s an unconscionable term, and it could void the whole contract as a result.


The problem comes down to consent. The company wants it, but I’m not convinced the customer can ever give it in a meaningful way, since the customer is just trying to access the software, rather than trying to commit to sign a contract. So how do we solve this?


Well, if we have consent, what are we missing? Information. I propose that we completely change the way we think about contracts. Rather than imagine them as big, monolithic documents, we could break them up into a rolling series of smaller agreements, each following three simple principles:

1) Use plain language. 2) Ask for what you need, and nothing more. 3) Confirm that the other party actually understands what they’re agreeing to!

But we still have a problem: this model is no longer sufficient for our needs. The customer still has no agency. So what do we do now?


We shift to a conversation model! Now, we’ve raised the customer to the level of the company, and we’ve got informed consent on both ends. The company and the customer share the goal of transacting at the upper level, and share the goal of protecting themselves on the lower level.


Here’s what it looks like in detail. The primary lever in use by the company is language, now. And the primary lever used by the customer is knowledge. On the upper level, information. On the lower level, we’re still making that contractual commitment. And the environment has shifted, too — we’re now trying to achieve a meeting of the minds, rather than focusing on the legal obligation.

After all, that’s what a contract is all about! It’s not supposed to be adversarial!


Here’s what it looks like in our other model. Now, the company and the customer are collaborating on the goals level, above, and the methods level, below. On the goals level, the company asks for something, the customer says no, the company explains the consequences, and the customer expresses understanding. Information! So we shift to the document. On the lower level, the company asks the customer to click the checkbox, the customer expresses his or her confusion, the company explains the term, and the customer clicks to agree. Consent!

So, what would this look like in the real world? Well, it turns out we see it all the time.


It would look a whole lot like this Photos app request for permission to use your geolocation data. It sure looks like a contract! It has offer, acceptance, and consideration! It doesn’t have to be more complicated than this.

So, what company in their right mind would do something like this? Well, maybe it’s a matter of public policy. Maybe it would call for legislative intervention. But…


…maybe, just maybe, we’ll find a company that subscribes to the Sy Syms school of thought, and values transparency in its dealings. We’ll just have to wait and see.

Thank you! (PS: I booked this Breather space on Broadway to rehearse for my presentation, and it was awesome. You and I can both earn a free hour if you sign up using this link or my code: LE3R5A.)

PSL modem

October 19th, 2014 • permalink

Fall’s vogue flavor seems to be the pumpkin spice latte. This limited-edition Starbucks concoction comes around every year, but things have finally gone viral. As a result, about a third of all pumpkin spice latte tweets are nice and earnest, and the rest are mostly trolling misogynistic comments about white women in yoga pants.


So I decided to make a pumpkin that listens for new PSL tweets and randomly boos or applauds them. I call it the PSL modem. Spicy!

(In case it’s not clear from the video, the applause is coming from the pumpkin, not the laptop.)

The PSL modem is an Arduino Yún with an MP3 breakout board, operating wirelessly, working in conjunction with a helper script running on a computer. The helper script uses Ajax to check for tweets every 5 seconds (Twitter’s 1.1 API rate limit). It’s not strictly necessary, but it moves the Twitter API work off the Arduino, which saves on overhead. (If you’d like to make the thing completely headless, check out Temboo’s Arduino library. But I like seeing the live tweets, god help me.) The PHP script passes the unique tweet ID to the Yún via the browser, using the Yún’s Mailbox library; the request takes the form psl.local/mailbox/tweetID. Each time the ID changes, the Yún plays a random track from a collection of MP3s.
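The heart of the sketch is simple change detection: play something only when the mailbox delivers a tweet ID we haven’t seen before. Stripped of the Arduino-specific Mailbox and MP3 calls, that logic looks something like this (a host-side sketch, not the actual Yún code; the function name is mine):

```cpp
#include <string>

// Play a track only when the tweet ID differs from the last one seen.
// On the Yún, incomingId arrives via the Mailbox library (the browser
// hits psl.local/mailbox/<tweetID>); here it's just a parameter.
bool isNewTweet(std::string &lastId, const std::string &incomingId) {
    if (incomingId.empty() || incomingId == lastId)
        return false;      // same tweet (or empty mailbox): stay quiet
    lastId = incomingId;   // remember it so we don't replay this tweet
    return true;           // new tweet: boo or applaud!
}
```

Everything else in the loop is plumbing around that one decision.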

In case you’d like to build your own smartgourd, the PSL modem uses this hardware:

– Arduino Yún (for its fancy wifi and bridge library)
– Adafruit VS1053 MP3 breakout board
– Adafruit 3.7W Stereo Class D Amplifier
– Breadboard (or, if you’re smarter than me, perfboard)
– Male-to-female jumper wires
– Micro-SD card
– ≥3Ω speaker (I used this 8Ω, 1W model)
– Small USB battery pack (I use the Instapark MP1800U2 ‘cos it’s compact)
– Sugar pumpkin, or commensurate squash

…and this software:

– Arduino IDE 1.5.8 (beta) or greater
– Arduino script and helper PHP scripts
– Adafruit VS1053 library
– tmhOAuth library & Twitter API
– Laugh track MP3s (I used Amazing Sound Effects of Crowds)
– BlueHarvest for cleaning invisible temp files from your SD card (optional)

OK! First up: connect the Yún to your wifi network!

(You can follow along here.) When you first power up the Yún, it’ll create a wifi network called ArduinoYun-XXXXXXXXXXXX. Join this network with your computer, wait to obtain an IP address, and then navigate to arduino.local. When prompted, enter “arduino” as your password and click “Log In.” On the next page, click “Configure” to set up wifi. You can assign a name to the Yún (if you use anything other than “psl,” you’ll need to change the PHP example pages accordingly). Choose your wifi network from the drop-down, enter the password, and click “Configure & Restart.”

Now you can rejoin your regular wifi network. After a couple of minutes, you should see a white LED on the Yún illuminate, meaning the Yún is online. (If you don’t, reset the wifi and try again. The relevant reset button is next to the USB port and is oriented sideways; if you hold it in for more than 5 but less than 30 seconds, the Yún will recreate its original wifi network so you can try your configuration again.) Hey, look at that, you’re up and running!

Now’s a good time to head back to psl.local and identify your Yún’s IP address and MAC address for port forwarding, in case your server isn’t on your LAN (though this tutorial assumes that it is). In order for the Yún to receive commands from the browser located outside your LAN, you’ll need to forward a port to the Yún’s TCP port 80.

Now, unplug the USB connection, solder the VS1053 and amp to their header pins, and wire everything up.


The VS1053 breakout and the stereo amp both need to be soldered to their header pins. (Or, I suppose, you could try using these solderless header pins from Sparkfun, but they get mixed reviews.) There are a lot of small connections, so work slowly and be careful. Adafruit has instructions here (for the VS1053) and here (for the amp). The amp is a little trickier, since it only has header pins on one side. As per the diagram, stick the leftover header pins between the amp and the breadboard to keep everything steady while you’re soldering.

Once you’re done soldering, hook everything up. The VS1053 gets connected to the Yún as per these instructions (don’t wire in the headphone jack), with one important distinction. Because the Yún’s SPI interface doesn’t run through its digital pins (as it does on the Uno), you need to route the SPI pins on the VS1053 (CLK, MISO and MOSI) directly to the ICSP header on the Yún. This is where those male-to-female jumpers come in. Instead of using Arduino pins 11, 12, and 13, use the following schematic:


Once the VS1053 is wired, you can hook up the amp and the speaker, using these instructions. LOUT and ROUT on the VS1053 go to the L+ and R+ pins on the amp. L- and R- go to the AGND pins. Then use the speaker terminal you soldered earlier to connect the speaker. (If you have a red wire, that’s positive.) I only use the right channel. Who needs stereo inside a pumpkin? Consider using the jumpers to adjust the gain on the amp. It’s well-insulated in that gourd! But start low, so you don’t blow anything out while you’re testing.

Congratulations, you now have a completely functional physical setup. Let’s take a break and set up the Twitter API.


Head over to Twitter’s application management site and log in with your Twitter account info. Then click the button to create a new app. Provide all the relevant details, accept the terms, and create the app. Then head over to “Keys and Access Tokens” and take note of the Consumer Key (API Key) and Consumer Secret (API Secret). At the bottom of that page, generate your access token, then take note of your Access Token and Access Token Secret. You’ll need these to scrape the Twitter feed. (For each key, make sure you’ve copied the entire thing. They often have dashes in the middle, and double-clicking on the text may only pick up everything on one side of the dash.)

Next, upload the PHP scripts (and CSS file) to your server of choice and test them out. They can be downloaded on GitHub, and they consist of index.php, psl_json.php, and psl.css. Also download tmhOAuth and place it in the same folder. Remember those secret keys you copied? Paste them into the appropriate variables at the top of psl_json.php. (Note: it’s important to safeguard these keys, so you should look into the best way of doing that. Here’s a Stack Overflow thread with advice on storing API keys in php.ini, which is outside the webserver’s root.) Now, open your browser and load index.php. You should see live tweets.

Now, format your micro-SD card as FAT32 and upload your songs to it (making sure each MP3’s filename is ≤8 characters long). OS X creates many invisible temporary files and folders, which can get in the way of your script’s efforts to pick and play a random track. An easy way to resolve this is with BlueHarvest, which offers a 14-day free trial, but it’s not necessary. You should be able to get the same results with any tool / shell script / app that lets you identify and delete invisible files.

Once the SD card is loaded and inserted into the VS1053, we need to do one more thing before we can upload the code: remove conflicting SD card libraries from the Arduino app. The VS1053 has an onboard SD card slot, and so does the Yún; each comes with its own competing library, so we’re gonna remove the Arduino IDE’s copy and stash it elsewhere for later. To do this, find the Arduino app in your Applications folder. Right-click it and choose “Show Package Contents.” Then navigate to Contents > Java > libraries > Bridge > src and remove FileIO.cpp and FileIO.h. Stash these somewhere so you can restore them later if you need to. (Or, you can always download a fresh copy of the Arduino app later.)

Now it’s time to plug everything back in, upload the code to your Yún, and test everything out. I’d like to take a moment to credit Aero98 on the Arduino and Adafruit forums, whose code for identifying and playing a random track I am adapting here. (I have made two changes: I added randomSeed, which uses the output of an unused analog pin to seed the random number generator; without it, your random sequence would always start at the same place. I also corrected a memory leak by making “path” a global variable.) The rest of my code is adapted from two sources: the “MailboxReadMessage” example in the Yún’s bridge library, and the “player_simple” example in the Adafruit VS1053 library.
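The seeding trick deserves a sketch of its own. Pseudo-random generators always produce the same sequence for the same seed, so without noise from a floating analog pin, every power-up would open with the same track. Here’s a host-side illustration of the idea (the function name is mine; on the Yún, the seed comes from analogRead on an unused pin and the pick from the Arduino random() function):

```cpp
#include <cstdlib>
#include <string>
#include <vector>

// Pick a random track index. Seeding matters: srand(n) always yields
// the same sequence for a given n, so the Arduino version seeds from
// electrical noise on an unconnected analog pin to vary the opening
// track across power-ups. Here the caller supplies the seed instead.
int pickTrack(const std::vector<std::string> &tracks, unsigned seed) {
    std::srand(seed);                       // Arduino: randomSeed(analogRead(...))
    return std::rand() % static_cast<int>(tracks.size());  // Arduino: random(count)
}
```

Same seed, same pick, which is exactly why the noise source matters.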

Assuming everything’s working as it should be, and your browser (or perhaps someone else’s, if your PHP files are hosted outside your LAN) is currently accessing index.php, you should start hearing your songs every time there’s a new tweet! You can unplug the Arduino from your PC and plug it into the USB battery pack to go completely wireless.


On to the pumpkin. Select one that’s a little larger than you think you’ll need — the walls are thick, and the chamber somewhat small. Cut a hole in the bottom, and scoop the guts out (don’t forget to save those seeds for roasting!). The widest component I used is a 3″ x 3″ speaker, so that’s about the size of the hole I made in the bottom. Stuff your components haphazardly into a gallon-sized Ziploc bag (I invite you to find a more elegant and static-resistant solution), and shove them into your pumpkin like it’s a Thanksgiving turkey. (With an appropriately-sized gourd and gourd aperture, I find that everything stays in place pretty well without needing to be secured in any meaningful way.)

Flip the pumpkin over, and you’ve got yourself a PSL modem. Happy tweeting!


Consider The Cowpath

September 4th, 2014 • permalink

I recently gave a mini PechaKucha-style talk at SVA IxD about habits and cowpaths and such, which I’ve reproduced here:


One day, when I was about thirteen years old, I decided to bike from my house to Brooklyn Heights for the first time. All was going fine… until I nearly merged onto the highway. (Don’t worry. I veered off, found a payphone, and called my mom for directions. She almost had a heart attack.) Why did I almost make such a boneheaded decision? Well, I was following the route my mom used when she drove us to school every day. She took the Prospect Expressway, so I was going to do the same. It was the path I knew the best, and habit is an extremely powerful force — one that problem-solvers ignore at their own peril.


In the IT world, we like to blow up bad habits. We have a phrase for this: “don’t pave the cowpath.” In other words, wouldn’t it be nice to come up with an objectively superior solution for a problem, rather than cementing the jury-rigged method that some former employee implemented some Friday afternoon ten years ago? This sounds really good, but it’s shortsighted. If we try to torpedo an existing system, human nature — those bad habits — will still find a way.


Take, for example, the mysterious case of the client who likes to keep their password on a Post-It stuck to their computer. As an IT consultant, I come in and my eyes bug out. I say, here, let me set you up with a fancy password manager. It’ll solve all your problems, it’s really pretty, and all we have to do is set up a master password so you can use it. So what happens next?


Of course. My client has simply scrawled their master password onto the Post-It note. Why? Well, zooming out, maybe we weren’t solving the right problem. Their goal was to remember their password and easily access their files. Perhaps instead of undoing the Post-It cowpath and introducing a much larger security breach, we can dig deeper and solve the unspoken problem. So, what is the problem?


As usual, XKCD has the answer. It turns out, apparently, that we’ve all been crafting terrible passwords for years. Larding a password with numbers and symbols makes it (relatively) easy for a computer to brute-force, but nearly impossible for a human to remember. Choosing a random string of words, like “correct horse battery staple,” is far easier for a human to commit to memory, but far harder to crack. By addressing the root of this issue — convenience and ease of memorization — we’re far more likely to keep the password off the Post-It.
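The math behind the strip is just log arithmetic. Four words drawn uniformly from a 2048-word list carry 4 × log2(2048) = 44 bits of entropy, while the comic pegs the familiar “Tr0ub4dor&3” pattern (common word, predictable substitutions, tacked-on digit and symbol) at roughly 28 bits. A back-of-envelope check (the helper name is mine):

```cpp
#include <cmath>

// Entropy of a passphrase built from `words` items drawn uniformly
// at random from a wordlist of `listSize` entries. For XKCD 936's
// "correct horse battery staple": 4 words from a 2048-word list
// gives 4 * log2(2048) = 44 bits, versus the comic's ~28-bit
// estimate for the usual substitution-heavy single-word password.
double passphraseBits(int words, int listSize) {
    return words * std::log2(static_cast<double>(listSize));
}
```

The key caveat: the words have to be chosen at random. A phrase you pick yourself inherits all the predictability the strip is warning about.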


Getting to the root of the problem involves noticing that some seemingly-senseless paths are intentional — these are known as desire paths. Returning to the parable of my fateful bike ride, it turns out that the best way to get to Brooklyn Heights from my old house is to cut across the Parade Grounds, dart across a long street with heavy, near-continuous vehicular traffic, and walk your bike through a hole in the fence around Prospect Park. No matter how many times the Parks Department puts up a new fence, folks keep tearing it down, because the alternative (biking around half the park to get to the proper entrance) is so onerous. Rather than try to rebuild the fence and correct the behavior, perhaps we can embrace it — by putting up a traffic light, for instance.


Similarly, by paying attention to what’s going on and trying to withhold judgment, we might (might!) learn that a cyclist biking down a one-way street against traffic is actually not an asshole. Perhaps he’s not even going the wrong way. Maybe he’s made a prudent decision that this route is the safest and most expedient, and maybe instead of giving him a ticket we can design a solution, like a protected two-way bike lane, that would be more effective.


Sometimes the behavior we must design for is largely invisible, so we have to get creative. Here, looking at the tracks that cars leave after a snowstorm, we can see the desire path developing in negative space. The cars have done the research for us, and now we can identify an obvious spot for a pedestrian refuge. This behavior isn’t accidental, but it does require a well-trained eye to observe it.


And for another example of nearly-invisible behavior, here’s a closeup of that desire path into Prospect Park. Take note of that string bridging the gap in the fence. That’s part of an eruv, which is a consecrated, unbroken filament strung around entire neighborhoods to virtually extend the borders of Orthodox Jews’ households on the sabbath. Designing for this space requires awareness of many diverse, hidden needs — none of which we can ignore, and all of which will route around any obstructions.


The moral of the story is that life finds a way. Habits are powerful, and behavior is stubborn. It also usually has an internal logic, even if that logic is sometimes hidden beneath the surface. In order to design mindfully, we have to embrace these “bad habits” and figure out their root cause, rather than torpedoing an existing system in favor of an impossible “objectively superior” solution.

Pixel Perfect

July 9th, 2014 • permalink

Today, I helped bury my dear friend Chloe Weil.


I first met Chloe thirteen years ago, in an elective high school health ed course called “Death & Dying.” Remembering that makes me laugh now, darkly. There was something dreadfully apropos about it then, and it’s even more devastatingly perfect now.

We rolled our own social media outlets back in the day. She maintained a proto-Twitter feed in which each entry was precisely 101 words long. She documented her teenage life in stark detail online, with the same clarity, maturity, observational prowess and humor that attracted so many to her in recent years. We bonded over our personal websites when we were sixteen. Her site was always better than mine. On April Fools Day, we switched our index.html pages and befuddled our friends.

Chloe was the least sentimental person I’ve ever met — she routinely shredded her ephemera and jettisoned old projects with ease — so I feel vaguely guilty telling you too much about her. But goddamn it, I want to make sure you understand.


I want you to know that she vibrated at a different frequency. I don’t really know how to put it any other way. Her raw talent just seemed so effortless. Her dark discomfort wouldn’t allow her to see how loved she was, or how incredible, or how talented. But you can see it, and you don’t need my help. Read for yourself. She will stop you in your tracks.

Chloe and I didn’t always sync up, but when we did it always involved a fantastic, hilarious, unreal voyage. She visited me in college and we spent the entire weekend doing nothing but silently typing to each other on our laptops — it was one of the best weekends I’ve ever had. A few years later, we met up in the City and circumnavigated Central Park one evening, staying up sitting on a bench just chatting until 7am. It was one of the best nights I’ve ever had. And a couple of summers ago, we met up with our friend Jon and walked through the wilds of Red Hook, marveling at every dark corner and fortress-like tower and pier to nowhere. It was one of the best twilights I’ve ever had. We’re all lifelong Brooklynites, so none of this should have impressed us, but when you’re walking with Chloe, you’re on the adventure of a lifetime. Every single time.


I hadn’t seen Chloe in a few months, but I did get to have one more adventure with her recently. I had a dream about her a couple of weeks back. I scrawled it down in the middle of the night and emailed it to her in the morning, and here it is.

Dream Chloe met up after a long time. Walking around. Lots of abandoned carnicerias. There was a time lord. Some guy got sucked into an engine block. Beautiful ethereal bats shadows. Someone waiting for a flight had hacked in and pretended to be you. We talked about our relationships and making time for our friends. You said you only liked my sister. Some people were taking wedding photos in a car, lit from outside. You weren’t sure how to get home from there.



February 21st, 2014 • permalink

I turned 29 last week. I entered my thirtieth year.

It wasn’t the easiest of birthdays, but the invisible countdown has actually been extremely helpful for me. I’ve been more productive since January 1st than I had been in perhaps all of 2013. The nagging call of my thirties haunts my every lazy impulse.

I predict that it’s going to get a lot easier after my next birthday. I’ll be able to substitute the guilt of wasting my twenties with the freedom and power of an entire new decade to squander!

To that end, I had these little cards printed. Like me, many of my friends will be turning 30 during the next twelve months — and many family members will be turning 20, 60 and even 90. They could all use a little reminder that once they turn that corner, the pressure’s off — for a while, at least.


