Dark Mode for Coda

At some point I'm going to post my thoughts on Coda, but let's just say that I quite enjoy it as a tool for making projects with a lot of interconnections and sections manageable in a way that Google Docs, markdown editors, and other tools just cannot.

Continuing my interest in all things dark mode: one annoyance right now is that Coda doesn't seem to support themes, and presents only the standard white background layout. This is pretty jarring when authoring content at night, but because Coda is web-based, a fix is just a style edit away. Using the open source extension Stylebot, I've made a set of changes to Coda that darkens most of the basic UI.

Check out the GitHub Gist here.

Before & after

Lightswitch - adding dark mode compatibility to a website

One of my goals in developing this blog was to design it to work with both light and dark color schemes. Having different background colors helps with readability and eyestrain during both day and night, and follows the ongoing trend of computers adapting to blend into their surroundings.

Dark mode was the major user-facing feature in September's release of macOS Mojave and has been well received, which means other platforms (including iOS) will surely follow. It was only a matter of time before this was brought to the web, and indeed that happened with last week's release of Safari Technology Preview 68, which lets websites specify different styles based on whether the OS is in light mode or dark mode (technically, the prefers-color-scheme media query).

However, there is one issue with letting OS mode determine a website's color scheme: user preference. Because OS mode doesn't change based on day/night, users are going to set their OS mode once and probably just leave it that way, regardless of time of day. Those users may not always want a website to be light or dark based on their OS, and may wish to override the default.

Lightswitch.js active on this website

My solution is a bit of JavaScript I call Lightswitch that automatically detects the user's OS mode and also lets the user override that mode with a button you put on your site. On this blog, the button is the half circle icon in the top right corner, but you may attach the override functionality anywhere on your site -- such as this link.

Here's the code:

/*******************************************************************************
LIGHTSWITCH: A DARK MODE SWITCHER WITH USER OVERRIDE
By Nick Punt 10/26/2018
How to use:
  *  Create two color schemes in CSS under the classes 'light' and 'dark'
  *  Add the class 'light' or 'dark' to your body as your default color scheme
  *  Add button to page with id 'lightswitch', which lets users change/override
  *  Use the class 'flipswitch' for any style changes you want on lightswitch
Logic:
  1. When user hits page for first time, color scheme is based on OS/browser
     (if supported), otherwise it defaults to the body class you added
  2. When user clicks lightswitch to override colors, their preference is stored
  3. When user alters their OS light/dark mode, switch to dark if dark mode,
     and light if light mode
     
Note:
The 'prefers-color-scheme' css support is currently only available in Safari 
Technology Preview 68+. 
*******************************************************************************/

// New prefers-color-scheme media query to detect OS light/dark mode setting
var prefers_light = window.matchMedia('(prefers-color-scheme: light)')
var prefers_dark = window.matchMedia('(prefers-color-scheme: dark)')

// Change to dark and rotate the switch icon
function darkmode() {
  document.body.classList.replace('light', 'dark');
  document.getElementById("nav-light").classList.add("flipswitch");
}

// Change to light and rotate the switch icon
function lightmode() {
  document.body.classList.replace('dark', 'light');
  document.getElementById("nav-light").classList.remove("flipswitch");
}

// Initialization triggers light/dark mode based on prior preference, then OS setting
if(localStorage.getItem("mode")=="dark") {
  darkmode();
} else if(localStorage.getItem("mode")=="light") {
  lightmode();
} else if(prefers_light.matches) {
  lightmode();
} else if(prefers_dark.matches) {
  darkmode();
}

// Fires when user clicks light/dark mode switch in top right
function handleThemeUpdate() {
  if (document.body.classList.contains('light')) {
    darkmode();
    localStorage.setItem("mode", "dark");
  } else {
    lightmode();
    localStorage.setItem("mode", "light");
  }
}

// Runs when the OS light/dark mode setting changes. Follows the new OS setting
// and stores it as the current preference.
function OSColorChange() {
  if (prefers_light.matches) {
    lightmode();
    localStorage.setItem("mode", "light");
  } else if (prefers_dark.matches) {
    darkmode();
    localStorage.setItem("mode", "dark");
  }
}

// Listeners for when you change OS setting for light/dark mode
prefers_light.addListener(OSColorChange)
prefers_dark.addListener(OSColorChange)
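
One piece not shown above is hooking the switch up to handleThemeUpdate. A minimal way to wire it, assuming your switch element uses the 'lightswitch' id from the usage notes:

// Attach the override handler to the switch (assumes an element with
// id 'lightswitch' exists in the page, per the usage notes above)
document.getElementById('lightswitch').addEventListener('click', handleThemeUpdate);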

Download on GitHub

The only known issue with Lightswitch right now is that since the site defaults to light, when a user loads a page with a lot of content (e.g. the blog index), the screen may briefly flash the light background before the JavaScript runs.
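
A common mitigation for that flash (not part of Lightswitch as published, so treat this as a sketch) is to check the stored preference in a tiny inline script in the page's head, before the body renders, and apply a class to the root element that your CSS can key off:

// Assumed placement: inline in <head>, before stylesheets and body content render.
// Applies the preference to <html> (document.body doesn't exist yet), so your CSS
// would need 'html.dark' rules in addition to the body classes used by Lightswitch.
(function () {
  var mode = localStorage.getItem('mode');
  var prefersDark = window.matchMedia('(prefers-color-scheme: dark)').matches;
  if (mode === 'dark' || (!mode && prefersDark)) {
    document.documentElement.classList.add('dark');
  }
})();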

Setting content to light or dark based on the OS mode is a first step to having our computers be truly responsive to our surroundings, but it's not the final word. Ideally, websites and documents would set a color scheme based on environmental factors like time of day and lighting in the room. JavaScript isn't the right way to solve these problems, though - it's the OS's job to factor these in to provide a consistent experience from UI to apps to content. The best we can do in the meantime is to set up our content with light and dark variants and allow users to set override preferences on this content.

What Apple Won’t be Announcing at WWDC 17

WWDC is rumored to be pretty exciting this year. As with every year, the week before has been filled with lots of wish lists and guesses from enthusiasts around what could be in the latest iOS release. I’ve compiled my own, but I realized one thing I think is most interesting is unlikely to be announced at WWDC, because it’s tied to new hardware.

The hardware in question is the dual front- and rear-facing cameras rumored to be on the next iPhone (that's f-o-u-r cameras, folks). The existence of hardware isn't enough to sell phones though — it's the new use cases enabled by the hardware that get people lining up on day one.

In case it wasn’t already apparent, the last couple of years have ushered in a golden age of cameras as a platform. Far beyond mere photos and video, cameras are enabling computers to understand the world around us, and helping us capture, reconstruct, and remix the world. Through the iPhone, Apple has been one of the enablers of this golden age, and if their acquisitions and product roadmap hints from the last few years are any indication, they will continue to be the driving force for years to come.

Below are a few thoughts on camera-related features that won’t be announced at WWDC, but that I hope we see soon — maybe even this fall.

3D Photos

Imagine if the photos you looked at on your device were to suddenly gain depth — that when you rotated the device ever so slightly, the face you were just looking at head-on also rotated. 3D Photos is basically that.

Imagine you could take photos like these

3D images and stereoscopic cameras are not a new thing. However, 3D content is typically consumed using 3D glasses or VR headsets, which present each eye with a different image that your brain interprets as having depth. 3D Photos is about taking a different approach to consumption, making flat photos come to life by having the device, not your brain, reconstruct the depth of the object.

Viewing 3D Photos within Photos would be easy — likely just by rotating the device around, like you can with panoramas and 360 photos/videos on Facebook. As a secondary way to trigger it, the user could 3D Touch on the photo then slowly swipe left or right — an expansion on how Live Photos work now.

Done right, I think 3D Photos are a flagship feature. 3D Photos would have a limited range of 3D effect, but wouldn’t need much to create the sense of depth. Even a few degrees of difference would be enough for our brains to make much more meaning from the photos, for humans to emerge from flat pieces of glass.

Of course, having very small interaxial separation between the two cameras means the 3D effect would be limited, and there would be limitations on the optimal distance subjects need to be from the cameras. I’m not sure if the interaxial separation on the upcoming iPhone is enough, but let’s assume it is.

Rear Camera for Landscape, Front Camera for Portrait

Currently, the iPhone 7 Plus has dual rear-facing cameras oriented horizontally, meaning that getting a 3D effect by rotating left and right would only be possible for portrait shots. The new iPhone is rumored to move the cameras to a vertical orientation, meaning that 3D Photos could be enabled for landscape shots, which makes much more sense for the rear camera. The different focal lengths of the rear cameras are very likely to remain, since they enable zoom; however, it may be possible to get a 3D Photo effect through interpolation (similar to how interpolation is used in zoom), with a minimum distance for the effect to work — say maybe 5 feet; any closer and the telephoto camera is too zoomed into the subject, so 3D Photos are auto-disabled.

Meanwhile, the new iPhone is rumored to have dual front-facing cameras oriented horizontally, which fits the use case of portrait 3D Photo selfies. Unlike on the rear, different focal lengths don’t make much sense for the front-facing cameras — it makes much more sense to have those cameras at the same focal length and use the binocular separation to create 3D Photo selfies at a closer distance.

Just look at how Snap and Instagram have made looping video a thing — 3D Photos are the next logical step. I think this feature will produce a really amazing effect and will have to be seen to be believed. It will give iDevices a whole new depth, first hinted at in the parallax effect introduced in iOS 7.

Gaze Scrolling

Gaze tracking is the logical next input method to add after keyboard, mouse, touch, and voice. There are many benefits to using the eyes for certain input tasks, not least of which is that it doesn’t involve moving or saying anything. Apple is likely already working to perfect gaze tracking, as it is pretty much necessary for AR to work seamlessly. If the upcoming iPhone includes a front-facing infrared light, it’s likely a limited version of gaze tracking could launch even before Apple’s AR product arrives.

Scrolling is the most obvious use case I think a first version of gaze tracking would work well for. So much of iOS is driven by vertical scrolling, and prolonged use can contribute to repetitive stress issues in your hands. I’d imagine scrolling is the single most used touch action on iPhones, before even button presses. Seems a good place to optimize.

The best way I can think of to implement gaze scrolling is to trigger it by looking at the very top or bottom of the screen for a fixed time of about a second or so, at which point you would get a haptic vibration and the screen would scroll. Gazing in the scroll area for an extended period would reduce the time before scrolling triggers again, providing a functional analog to the touch-based scroll acceleration effect.
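
As a rough illustration of that timing logic, here's a sketch in JavaScript; the gaze coordinates and the scroll step are hypothetical stand-ins for whatever a real gaze tracker and haptics API would provide:

// Illustrative dwell-to-scroll logic; gazeY, screenHeight, and now are assumed
// inputs that a hypothetical gaze tracker would feed in on every sample
var BASE_DWELL_MS = 1000;  // roughly a second before the first scroll fires
var MIN_DWELL_MS = 250;    // floor for the accelerated repeat rate

var dwellStart = null;
var dwellMs = BASE_DWELL_MS;

function triggerScroll() {
  // Hypothetical stand-in for the haptic tap plus one step of scrolling
  window.scrollBy(0, window.innerHeight * 0.8);
}

function onGazeSample(gazeY, screenHeight, now) {
  var inScrollZone = gazeY > screenHeight * 0.9;  // bottom edge of the screen
  if (!inScrollZone) {
    dwellStart = null;
    dwellMs = BASE_DWELL_MS;  // reset acceleration once the gaze leaves the zone
    return;
  }
  if (dwellStart === null) dwellStart = now;
  if (now - dwellStart >= dwellMs) {
    triggerScroll();
    dwellStart = now;                                 // restart the timer for the next step
    dwellMs = Math.max(MIN_DWELL_MS, dwellMs * 0.7);  // dwell longer, scroll faster
  }
}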


Limited gaze tracking also fits in well with the upcoming nearly edge-to-edge iPhone design. This new iPhone has an even more vertical aspect ratio, which should offer some extra room for visual feedback when the scroll action timer gets triggered by your gaze — perhaps a slow fade at the bottom or top of the screen that grows brighter until scrolling is triggered.

Rumored iPhone 8 layout, with overlays where gaze scrolling could trigger

As far as implementing it goes, using the gaze tracker only at the edges means it would need less accuracy and calibration could be confined to one dimension. The feature could be disabled when the user is outside of range, which wouldn’t be too big a problem considering you can just hold the iPhone a bit closer to your face — much easier than the MacBook/iPad use case where you’d need to move your whole body to find the sweet spot. I currently have Tobii’s Eye Tracker 4C, which exhibits this annoying property.

Unfortunately, gaze tracking is rather difficult to implement accurately, and will not work for everyone — obviously the blind, but also users with astigmatism, amblyopia, strabismus, and probably thick or dirty glasses. Further, the whole UI paradigm of iOS is built around touch as the primary input method, and it doesn’t make sense to change that — using gaze tracking as a primary input method is best left for AR. Therefore, it’s best to start by implementing gaze tracking as a supplementary input method like 3D Touch was, strategically incorporating it into certain optional areas in iOS. These areas will also need to be ones where any accuracy limitations will not be too pronounced.

I’m personally excited for it because I’ve spent the last couple years overcoming an injury that made prolonged use of my hands a bit difficult. Gaze tracking is on the cusp of being useful to everyone (and life changing to some) and as much as I’d like it on Mac or iPad, if I were an Apple PM I’d make the call to put it on the iPhone first — it just makes so much sense in how we use the device, even in a limited form.

If accuracy is good, of course, I’d hope there would be more broad gaze tracking built into other parts of iOS, which is rumored to be going through a big redesign this year.

CameraKit and Raise-To-Shoot

Welcome to the Future — Word Lens

The camera is a platform, not just an app. There are all sorts of use cases that come from object detection and information overlay on cameras — from scanning pieces of paper like receipts (ScanBot), to looking up information on where you are (Yelp), to translating text (Word Lens), to simply taking pictures of someone who’s smiling back at you. Right now each of these is gated behind a different app, and their data isn’t always appropriate as part of the Photos stream.

CameraKit would be something like SiriKit, where developers can register certain use cases (identifying points of interest through visuals and GPS, detecting and scanning paper, detecting words and overlaying content), and those detectors are run in the background when the camera is running. The user would be shown relevant overlays and actions they could take based on what they were pointing their device at. Looks like a receipt? The scan button appears, and when pressed the scan overlay comes up and auto-shoots when you’ve got it positioned right. Looks like some text in Thai that you can’t read? The translate button appears, and pressing it drops the translation over. Looks like a person smiling back at you? Well, just have it take the picture automatically, or wink at the front-facing gaze tracker.
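
To make the idea concrete, here is a purely hypothetical sketch of what that registration model might look like; none of these names exist in any real Apple framework, and the matcher and UI functions are placeholder stand-ins:

// Hypothetical placeholders standing in for real vision and UI plumbing
function looksLikeDocument(frame) { return frame && frame.kind === 'document'; }
function showOverlay(label, onTap) { console.log('overlay shown: ' + label); /* onTap runs when the user taps the action */ }
function scanDocument(frame) { console.log('scanning', frame); }

// A developer registers a use case: a matcher that runs against camera frames,
// and an action surfaced to the user when the matcher fires
var detectors = [];
function registerDetector(name, matches, action) {
  detectors.push({ name: name, matches: matches, action: action });
}

registerDetector('receipt-scanner',
  function (frame) { return looksLikeDocument(frame); },
  function (frame) { showOverlay('Scan', function () { scanDocument(frame); }); });

// The system, not the app, pumps every live camera frame through the registered
// detectors and shows actions for whichever ones match
function onCameraFrame(frame) {
  detectors.forEach(function (d) {
    if (d.matches(frame)) { d.action(frame); }
  });
}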

ScanBot (via TheSweetSetup)

This is really powerful when coupled with a lower-friction way to access the camera. In iOS 10, Apple added Raise-to-Wake, which automatically wakes the screen when it detects you’ve picked the phone up. This immediately opened up the lock screen as a whole other interface in iOS for certain tasks, and was a brilliant addition. It’s probably time to take the next step and simply have the camera enabled when this happens, looking for relevant objects and presenting relevant actions upon detection — similar to how iBeacons show you relevant apps like the Apple Store app when you walk into an Apple Store.

To go one step further, the lock screen could simply have the camera stream be its background, making the device almost transparent. To make this transparency effect work, the rear camera would need to be set at the right zoom level and presenting the right part of the frame based on where the user was looking from — which could be calibrated by detecting the user’s face and its distance away from the device via the front-facing cameras.[1]

Conclusion

Even if Apple is working on 3D Photos and Gaze Scrolling, I highly doubt they’ll drop any hints about them at tomorrow’s WWDC. There’s a decent chance we’ll see something like a CameraKit emerge, coupled with some advances in Photos’ ability to classify things.

Overall, I’m really excited about the opportunities that come from stereoscopic cameras and object detection. As I said at the beginning, I think we’ve entered a golden age of cameras. To really appreciate what that means we have to let go of our notions of how we’ve always used cameras and imagine them as being integral to the device experience, always-on, understanding what we’re looking at and what we’re doing on the device. Their output will occasionally be photos and videos, but mostly they’ll be another context sensor inferring what we want and helping us navigate and remix the world.


  1. Yes this would be hard or even just not possible, but the effect would be nothing less than magical. ↩︎

Zuck's Building Global Community Note

I highly recommend everyone read Mark Zuckerberg's Building Global Community note. These ideas raise challenging / fascinating / relevant questions for the rules of our connected world.

It's also interesting to see from a strategic perspective that Facebook sees basically all aspects of community as under their purview. On the one hand this is ambitious in a positive way and begins to take responsibility for their influence; on the other, I struggle to see whether it leaves room for non-FB-mediated community.[1]

As someone who spent a lot of time building community products for a specific audience with specific needs (students and educators), I know what it feels like to see Facebook undermine efforts by launching different tools and features which only loosely meet those needs, but win because of scale. I worry that the subtext of this note (or at least its realized eventuality) is basically that if it's not on Facebook, it's not a community[2], and that we'll all be limited in what tools our communities have based on the lowest common denominator of what Facebook offers.

The Growing Importance of Sophisticated Social Tools

The many tools and capabilities described in Zuck's piece (much of it AI-based content analysis) are incredibly important and sometimes downright necessary as the information age advances and more human activity is mediated digitally. This new digital medium we've arrived at brings with it many flaws - a lack of empathetic cues in text that makes conversation more divisive, a penchant for reactivity due to the abundance of always-on content, an addiction cycle brought on by the reward structure of content, and even the ability to be manipulated en masse by bots. The more our activity moves to this digital medium, the more we need sophisticated tools to elevate discourse and keep communities cohesive, rather than subject ourselves to living with the flaws of the medium and being overwhelmed with the toxic raw data of the internet.

Now compare this to Facebook's strategy, which is to own the entire consumer social experience, because ad models require eyeballs. As Ben Thompson of Stratechery puts it, Facebook is a walled garden and not a platform because they capture the vast majority of the value themselves, leaving little room for other companies and organizations to capture value. Meanwhile, by Ben's (and Bill Gates') definition, you only have a platform when you capture less value than you create for others. If Facebook succeeds in owning the majority share of the many facets of human community, a walled-garden strategy is extremely problematic because Facebook may not leave enough room for others to shape communities in ways that Facebook does not support[3] -- that is, unless you exit the walled garden.

Diverse, Healthy Community Cannot Co-exist with a Monopoly Social Network

Exiting the walled garden, though, means being subjected to that toxic raw data of the internet. The only way to avoid that is to use the same types of sophisticated tools that Facebook is describing here. However, if Facebook successfully leverages their scale to become a near monopoly across all possible community activity (which I think they will), they may not leave room for other communities to build these tools themselves. There may not be the market, or the available talent, to reach the sophistication required. And so if you exit the walled garden, you enter the ghetto of poorly filtered internet communities struggling to keep at bay the toxic bots and the less-than-stellar human behaviors that emerge from the medium. Many an online community has struggled with this, going all the way back to Usenet and BBSs.

I know this is a bit of a dramatic line of thinking. It supposes little technology and knowledge transfer between Facebook and others, it supposes there is no market outside Facebook for community products that can achieve these sophisticated means, it ignores the possibility of open source, etc.

The reason I bring it up is that I read Mark's post as Community = Facebook. Again, both positive and negative things will arise if that comes to pass, but it is a huge thing that we should not underestimate. In that reality, I am concerned about what this means for the diversity of human community structure and for those who wish to operate differently than Facebook - either in a highly divergent way, or more likely, because they have small nuances[4] that are not well captured by Facebook's present definition of and toolset for community.

The idea that as AI gets better we will have tools that elevate communities and discourse is very exciting. I hope Facebook doesn't keep these tools to themselves, and that communities outside Facebook will have the power to overcome the weight of the toxic raw data of the internet and act as counterbalances to any potential community monoculture that may arise.

Nonetheless, I find Zuck's note to be largely positive and quite inspiring.[5]


  1. Zuck mentions 'physical community', but let's face it, given our Augmented Reality future, that will converge with Facebook-mediated community. ↩︎

  2. Aligning Facebook with community is similar to how Google has aligned itself with the web. Companies aligning themselves to larger themes is a great tool to instill corporate values and keep incentives pure, if the directionality is preserved.

    Good for Internet --> Good for Google

    Good for Community --> Good for Facebook

    However, if the directionality changes, you have a recipe for all sorts of misalignment of incentives and employee brainwashing. Imagine the horrible decisions and implications that would follow if companies believed:

    Good for Facebook --> Good for Community

    Good for Google --> Good for Internet ↩︎

  3. Early on, Facebook did make an effort at becoming a platform and had a great deal more social data interoperability happening. In fact, I personally built two startups around this ecosystem. Unfortunately, because they did not adequately police the use of this data, Facebook Apps became a privacy nightmare and synonymous with scams and users (rightly) feeling exploited, not to mention the nefarious things certain companies did with this data. ↩︎

  4. Nuances like how new community members are onboarded with increasing freedoms over time, defining allowable content and configuring the methods of enforcing it, scaffolding community governance practices like voting, housekeeping of past content, etc. ↩︎

  5. As I import this note to my blog in late 2018, my feelings of inspiration have (unsurprisingly) waned. ↩︎

The Feed Bites Back, Followup

This is a follow-up to yesterday's post on the Facebook feed.

A few things:

  1. Filter bubbles matter
  2. Cybersecurity matters
  3. Wealth inequality matters

The first just had its first real moment. We're severely underestimating the effects of filter bubbles in social media, because we have an outdated notion of how people form opinions and how manipulable we really are, and because we don't understand the true effects of the information systems we use. Recently our ability to distort information seems to have grown faster than our ability to correct distortion. The entropy is increasing. Our expectation is that we're getting the truth from what we read. The reality is that we often don't.

The second has just moved from 'annoying that I have to remember these darn passwords' to 'they can read this, can't they?'. Legitimate cybersecurity chops are now a must-have for future candidates. Code and exploits - not guns or men or physical things - are the battlefield of the 21st century, and we struggle to reason about it. As computing spreads to more and more things, the attack surface increases, and the ways these attacks can affect and disrupt us multiply. Our expectation was that things were private and safe; the reality is that they're not, and anything can happen at any moment.

The third has been brushed aside for too long. In a way the 2008 recession is like the Treaty of Versailles post-WW1: the aftereffects of the recession created a massive wealth transfer to the top (through r>g, regulatory capture, campaign finance laws, etc.), and that helped create the conditions for this election that the above two then ran with. We talk about wealth inequality as an abstract concept but do almost nothing structurally to resolve it, and now it bites. Our expectation was that we'd continue the growth path and opportunities of post-war America; the reality is that we didn't, and we have no new broadly-accessible paradigm to pivot to.

There are many more takeaways from this election, but I think these are the most novel ones that come to mind. All three of them are about the mismatch of reality and expectation. We've clung to our 20th century safety blanket for too long into this century, and now it's been taken away.

The Feed Bites Back

Facebook needs to do some serious soul-searching after this election and improve the feed algorithm to reduce the propagation of destructive falsehoods. A post's ability to get a rise out of people is an insufficient test for greater feed visibility.

I understand the reluctance to play arbiter of truth, but as it stands the algorithm is already doing that by implying everything is true, and even amplifying it with things like post-click related content. It was cute when it was just your parents posting something innocuous that you could reply to with a Snopes article, but now we see its eventuality.

Facebook has succeeded in making a super sticky product with a huge audience that feeds people highly relevant content. Kudos! Now it's time to enter the next phase of social software, and acknowledge that relevance is not enough. That with power comes responsibility. Connecting the world is an insufficient mission if what people connect over are falsehoods.

We're well past the point where one could argue the social network is merely an arbiter of human conversations, rather than an influencer who's played a part in creating the divisive & ignorance-fueled conditions we find ourselves in at this moment. We need truth if we are to advance as a species, and we need to hold our institutions accountable to ensure that it rises to the top. Facebook is the leading media institution, and has thus far simply looked the other way.

I hope that in a few years we'll chuckle when we look back at this time, because of course this insanity is what you get when you give every adult in the country an hour-a-day dose of a relevancy-driven filter bubble that tells them whatever lies they want to believe, backed by a business model based solely on eyeballs. Tech optimizes around goals, and the goals are too simple and self-serving.

(Note: many other legit problems have been brought to the surface with this election, which I do not mean to discount. Economics, flawed power structures, automation, race, gender, immigration, social issues, expectations of boomers about the past, a thousand things. We have serious problems and we absolutely need to come together to fix them. But this post isn't about issues, this is about the ability for perceptions to be distorted and truths entirely dismissed. I think this is really the first election in which 'internet power' has been truly flexed - that bottom-up, chaotic, fickle force that can create and destroy in equal measure. Right now it's the Wild West of truth on the internet and Facebook is the leading mechanism by which perceptions have been altered. And it's been gamed, big time.)


Originally posted on Facebook 11/08/16. I had written this a few days before the election but held off publishing it until that evening.