Dark Mode for Coda

At some point I'm going to post my thoughts on Coda, but let's just say that I quite enjoy it as a tool for making projects with a lot of interconnections and sections manageable in a way that Google Docs, markdown editors, and other tools just cannot.

Continuing my interest in all things dark mode, one annoyance right now is that Coda doesn't seem to support themes and presents only the standard white-background layout. This is pretty jarring when authoring content at night, but because Coda is web-based, a fix is just a style edit away. Using the open source extension Stylebot, I've made a set of changes to Coda that darkens most of the basic UI.

Check out the GitHub Gist here.

Before & after

Lightswitch - adding dark mode compatibility to a website

One of my goals in developing this blog was to design it to work with both light and dark color schemes. Having different background colors helps with readability and eyestrain both during the day and at night, and follows the ongoing trend of computers adapting to blend into their surroundings.

Dark mode was the major user-facing feature in September's release of macOS Mojave and has been well received, which means other platforms (including iOS) will surely follow. It was only a matter of time before this was brought to the web, and indeed that happened with last week's release of Safari Technology Preview 68, which lets websites specify different styles based on whether the OS is in light mode or dark mode (technically, via the prefers-color-scheme media query).
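
For reference, here is roughly what the pure-CSS version of this looks like: a minimal sketch with placeholder colors and no user override yet.

/* Default (light) palette */
body {
  background: #ffffff;
  color: #111111;
}

/* Applied automatically when the OS is set to dark mode */
@media (prefers-color-scheme: dark) {
  body {
    background: #1b1b1b;
    color: #e6e6e6;
  }
}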

However, there is one issue with just letting the OS mode determine a website's color scheme - user preference. Because the OS mode doesn't change based on day/night, users are going to set it once and probably just leave it that way, regardless of time of day. Those users may not always want a website to be light or dark based on their OS, and may wish to override the default.

Lightswitch.js active on this website

My solution is a bit of JavaScript I call Lightswitch that automatically detects the user's OS mode, and also allows the user to override that mode with a button you put on your site. On this blog, the button is the half-circle icon in the top right corner, but you may attach the override functionality anywhere on your site -- such as this link.

Here's the code:

/*******************************************************************************
LIGHTSWITCH: A DARK MODE SWITCHER WITH USER OVERRIDE
By Nick Punt 10/26/2018
How to use:
  *  Create two color schemes in CSS under the classes 'light' and 'dark'
  *  Add the class 'light' or 'dark' to your body as your default color scheme
  *  Add button to page with id 'lightswitch', which lets users change/override
  *  Use the class 'flipswitch' for any style changes you want on lightswitch
Logic:
  1. When user hits page for first time, color scheme is based on OS/browser
     (if supported), otherwise it defaults to the body class you added
  2. When user clicks lightswitch to override colors, their preference is stored
  3. When user alters their OS light/dark mode, switch to dark if dark mode,
     and light if light mode
     
Note:
The 'prefers-color-scheme' css support is currently only available in Safari 
Technology Preview 68+. 
*******************************************************************************/

// New prefers-color-scheme media query to detect OS light/dark mode setting
var prefers_light = window.matchMedia('(prefers-color-scheme: light)')
var prefers_dark = window.matchMedia('(prefers-color-scheme: dark)')

// Change to dark and rotate the switch icon
function darkmode() {
  document.body.classList.replace('light', 'dark');
  document.getElementById("nav-light").classList.add("flipswitch");
}

// Change to light and rotate the switch icon
function lightmode() {
  document.body.classList.replace('dark', 'light');
  document.getElementById("nav-light").classList.remove("flipswitch");
}

// Initialization triggers light/dark mode based on prior preference, then OS setting
if(localStorage.getItem("mode")=="dark") {
  darkmode();
} else if(localStorage.getItem("mode")=="light") {
  lightmode();
} else if(prefers_light.matches) {
  lightmode();
} else if(prefers_dark.matches) {
  darkmode();
}

// Fires when user clicks light/dark mode switch in top right
function handleThemeUpdate() {
  if (document.body.classList.contains('light')) {
    darkmode();
    localStorage.setItem("mode", "dark");
  } else {
    lightmode();
    localStorage.setItem("mode", "light");
  }
}

// Runs when the OS changes light/dark mode. Follows the OS setting and
// updates the stored preference to match.
function OSColorChange() {
  if (prefers_light.matches) {
    lightmode();
    localStorage.setItem("mode", "light");
  } else if (prefers_dark.matches) {
    darkmode();
    localStorage.setItem("mode", "dark");
  }
}

// Listeners for when you change OS setting for light/dark mode
prefers_light.addListener(OSColorChange)
prefers_dark.addListener(OSColorChange)
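
One thing the script leaves to your markup is actually calling handleThemeUpdate when the switch is clicked. A minimal way to wire that up, assuming the 'lightswitch' button id from the usage notes above:

// Hypothetical wiring - assumes a clickable element with id 'lightswitch'
// exists on the page, per the usage notes in the header comment.
document.getElementById('lightswitch').addEventListener('click', handleThemeUpdate);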

Download on GitHub

The only known issue with Lightswitch currently is that since the site defaults to light, when a user loads a page with a lot of content (e.g. the blog index), the screen may briefly flash the light background before the JavaScript runs.
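
One way to mitigate this (a sketch, not part of Lightswitch itself) is to inline a tiny script in the document head that applies the stored preference before the first paint. It assumes your 'dark' styles also apply when the class is set on the html element rather than on the body:

// Hypothetical inline <head> snippet - runs before the body renders.
// Assumes the dark color scheme also applies when the 'dark' class is
// on <html> instead of (or in addition to) <body>.
(function () {
  var stored = localStorage.getItem('mode');
  var osDark = window.matchMedia('(prefers-color-scheme: dark)').matches;
  if (stored === 'dark' || (!stored && osDark)) {
    document.documentElement.classList.add('dark');
  }
})();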

Setting content to light or dark based on the OS mode is a first step toward having our computers be truly responsive to our surroundings, but it's not the final word. Ideally, websites and documents would set a color scheme based on environmental factors like time of day and the lighting in the room. JavaScript isn't the right way to solve these problems, though - it's the OS's job to factor these in and provide a consistent experience from UI to apps to content. The best we can do in the meantime is to set up our content with light and dark variants and allow users to set override preferences on that content.

What Apple Won’t be Announcing at WWDC 17

WWDC is rumored to be pretty exciting this year. As with every year, the week before has been filled with lots of wish lists and guesses from enthusiasts about what could be in the latest iOS release. I’ve compiled my own, but I realized that the thing I find most interesting is unlikely to be announced at WWDC, because it’s tied to new hardware.

The hardware in question is the dual front- and rear-facing cameras rumored to be on the next iPhone (that’s f-o-u-r cameras, folks). The existence of hardware isn’t enough to sell phones though — it’s the new use cases enabled by the hardware that get people lining up on day one.

In case it wasn’t already apparent, the last couple of years have ushered in a golden age of cameras as a platform. Far beyond mere photos and video, cameras are enabling computers to understand the world around us, and helping us capture, reconstruct, and remix the world. Through the iPhone, Apple has been one of the enablers of this golden age, and if their acquisitions and product roadmap hints over the last few years are any indication, they will continue to be the driving force for years to come.

Below are a few thoughts on camera-related features that won’t be announced at WWDC, but that I hope we see soon — maybe even in Fall.

3D Photos

Imagine if the photos you looked at on your device were to suddenly gain depth — that when you rotated the device ever so slightly, the face you were just looking at head-on also rotated. 3D Photos is basically that.

Imagine you could take photos like these

3D images and stereoscopic cameras are not a new thing. However, 3D content is typically consumed using 3D glasses or VR headsets, which present each eye with a different image that your brain interprets as having depth. 3D Photos is about taking a different approach to consumption, making flat photos come to life by having the device, not your brain, reconstruct the depth of the object.

Viewing 3D Photos within Photos would be easy — likely just by rotating the device around, like you can with panoramas and 360 photos/videos on Facebook. As a secondary way to trigger it, the user could 3D Touch on the photo then slowly swipe left or right — an expansion on how Live Photos work now.

Done right, I think 3D Photos would be a flagship feature. They would have a limited range of 3D effect, but wouldn’t need much to create the sense of depth. Even a few degrees of difference would be enough for our brains to make much more meaning from the photos, for humans to emerge from flat pieces of glass.

Of course, having a very small interaxial separation between the two cameras means the 3D effect would be limited, and there would be limitations on the optimal distance subjects need to be from the cameras. I’m not sure if the interaxial separation is enough on the upcoming iPhone, but let’s assume it is.

Rear Camera for Landscape, Front Camera for Portrait

Currently, the iPhone 7 Plus has dual rear-facing cameras oriented horizontally, meaning that getting a 3D effect by rotating left and right would only be possible for portrait shots. The new iPhone is rumored to move the cameras to a vertical arrangement, meaning that 3D Photos could be enabled for landscape shots, which makes much more sense for the rear camera. The different focal lengths of the rear cameras are very likely to remain, as they enable zoom. However, it may still be possible to get a 3D Photo effect through interpolation (similar to how interpolation is used in zoom), combined with a minimum distance for the effect to work — say maybe 5 feet; any closer and the telephoto camera is too zoomed into the subject, so 3D Photos are auto-disabled.

Meanwhile, the new iPhone is rumored to have dual front-facing cameras oriented horizontally, which fits the use case of portrait 3D Photo selfies. Unlike the rear cameras, having different focal lengths doesn’t make much sense on front-facing cameras — it makes much more sense to have those cameras at the same focal length, and use the binocular separation to create 3D Photo selfies at a closer distance.

Just look at how Snap and Instagram have made looping video a thing — 3D Photos are the next logical step. I think this feature will produce a really amazing effect and will have to be seen to be believed. It will give iDevices a whole new depth, first hinted at in the parallax effect introduced in iOS 7.

Gaze Scrolling

Gaze tracking is the logical next input method to add after keyboard, mouse, touch, and voice. There are many benefits to using the eyes for certain input tasks, not least of which is that it doesn’t involve moving or saying anything. Apple is likely already working to perfect gaze tracking, as it is pretty much necessary for AR to work seamlessly. If the upcoming iPhone includes a front-facing infrared light, it’s likely a limited version of gaze tracking could be launched even before Apple’s AR product arrives.

Scrolling is the most obvious use case that I think a first version of gaze tracking would work well for. So much of iOS is driven by vertical scrolling, and prolonged use can contribute to repetitive stress issues in your hands. I’d imagine scrolling is the single most used touch action on iPhones, ahead of even button presses. That seems like a good place to optimize.

The best way I can think of to implement gaze scrolling is to trigger it by looking at the very top or bottom of the screen for a fixed time of about a second or so, at which point you would get a haptic vibration and the screen would scroll. Gazing in the scroll area for an extended period would reduce the time before scrolling triggers again, providing a functional analog to the touch-based scroll acceleration effect.
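
To make the timing concrete, here is a rough sketch of that dwell-and-accelerate logic in plain JavaScript; gazeInScrollZone and doScroll are hypothetical hooks, and no real gaze-tracking API is assumed.

// Rough sketch of the dwell-timer idea described above.
// gazeInScrollZone() and doScroll() are hypothetical callbacks supplied by
// whatever gaze tracker and scroll view you have; nothing here is a real
// iOS or gaze-tracking API.
var INITIAL_DWELL_MS = 1000;  // ~1 second of gazing before the first scroll
var MIN_DWELL_MS = 250;       // floor for the accelerated repeat rate
var POLL_MS = 50;             // how often the gaze position is sampled

function startGazeScroll(gazeInScrollZone, doScroll) {
  var dwell = INITIAL_DWELL_MS;  // time required before the next trigger
  var heldFor = 0;               // how long the gaze has stayed in the zone

  setInterval(function () {
    if (!gazeInScrollZone()) {   // gaze left the zone: reset everything
      heldFor = 0;
      dwell = INITIAL_DWELL_MS;
      return;
    }
    heldFor += POLL_MS;
    if (heldFor >= dwell) {      // dwell satisfied: fire the scroll
      doScroll();                // haptic feedback + animated scroll go here
      heldFor = 0;
      dwell = Math.max(MIN_DWELL_MS, dwell * 0.7);  // accelerate on sustained gaze
    }
  }, POLL_MS);
}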

Gaze timing diagram

Limited gaze tracking also fits in well with the upcoming nearly edge-to-edge iPhone design. This new iPhone has an even more vertical aspect ratio, which should offer some extra room for visual feedback when the scroll action timer gets triggered by your gaze — perhaps a slow fade at the bottom or top of the screen that grows brighter until scrolling is triggered.

Rumored iPhone 8 layout, with overlays where gaze scrolling could trigger

As far as implementation goes, using the gaze tracker only at the screen edges means it would need less accuracy, and calibration could be confined to one dimension. The feature could be disabled when the user is outside of range, which wouldn’t be too big a problem considering you can just hold the iPhone a bit closer to your face — much easier than the MacBook/iPad use case, where you’d need to move your whole body to find the sweet spot. I currently use Tobii’s Eye Tracker 4C, which has exactly this annoying property.

Unfortunately, gaze tracking is rather difficult to implement accurately, and will not work for everyone — obviously the blind, but also users with astigmatism, amblyopia, strabismus, and probably thick or dirty glasses. Further, the whole UI paradigm of iOS is built around touch as the primary input method, and it doesn’t make sense to change that — using gaze tracking as a primary input method is best left for AR. Therefore, it’s best to start by implementing gaze tracking as a supplementary input method, like 3D Touch was, strategically incorporating it into certain optional areas of iOS. These areas will also need to be ones where any accuracy limitations are not too pronounced.

I’m personally excited for it because I’ve spent the last couple years overcoming an injury that made prolonged use of my hands a bit difficult. Gaze tracking is on the cusp of being useful to everyone (and life changing to some) and as much as I’d like it on Mac or iPad, if I were an Apple PM I’d make the call to put it on the iPhone first — it just makes so much sense in how we use the device, even in a limited form.

If accuracy is good, of course, I’d hope to see broader gaze tracking built into other parts of iOS, which is rumored to be going through a big redesign this year.

CameraKit and Raise-To-Shoot

Welcome to the Future — Word Lens

The camera is a platform, not just an app. There are all sorts of use cases that come from object detection and information overlay on cameras — from scanning pieces of paper like receipts (ScanBot), to looking up information on where you are (Yelp), to translating text (Word Lens), to simply taking pictures of someone who’s smiling back at you. Right now each of these is gated behind a different app, and their data isn’t always appropriate as part of the Photos stream.

CameraKit would be something like SiriKit, where developers can register certain use cases (identifying points of interest through visuals and GPS, detecting and scanning paper, detecting words and overlaying content), and those detectors are run in the background when the camera is running. The user would be shown relevant overlays and actions they could take based on what they were pointing their device at. Looks like a receipt? The scan button appears, and when pressed the scan overlay comes up and auto-shoots when you’ve got it positioned right. Looks like some text in Thai that you can’t read? The translate button appears, and pressing it drops the translation over. Looks like a person smiling back at you? Well, just have it take the picture automatically, or wink at the front-facing gaze tracker.

ScanBot (via TheSweetSetup)

This becomes really powerful when coupled with a lower-friction way to access the camera. In iOS 10, Apple added Raise to Wake, which automatically wakes the screen when it detects you picking the phone up. This immediately opened up the lock screen as a whole other interface in iOS for certain tasks, and was a brilliant addition. It’s probably time to take the next step and simply have the camera enabled when this happens, looking for relevant objects and presenting relevant actions upon detection — similar to how iBeacons show you relevant apps, like the Apple Store app when you walk into an Apple Store.

To go one step further, the lock screen could simply have the camera stream be its background, making the device almost transparent. To make this transparency effect work, the rear camera would need to be set at the right zoom level and presenting the right part of the frame based on where the user was looking from — which could be calibrated by detecting the user’s face and its distance away from the device via the front-facing cameras.[1]

Conclusion

Even if Apple is working on 3D Photos and Gaze Scrolling, I highly doubt they’ll drop any hints about them at tomorrow’s WWDC. There’s a decent chance we’ll see something like a CameraKit emerge, coupled with some advances to Photos’ ability to classify things.

Overall, I’m really excited about the opportunities that come from stereoscopic cameras and object detection. As I said at the beginning, I think we’ve entered a golden age of cameras. To really appreciate what that means we have to let go of our notions of how we’ve always used cameras and imagine them as being integral to the device experience, always-on, understanding what we’re looking at and what we’re doing on the device. Their output will occasionally be photos and videos, but mostly they’ll be another context sensor inferring what we want and helping us navigate and remix the world.


  1. Yes, this would be hard, or maybe not possible at all, but the effect would be nothing less than magical. ↩︎