The New York Times recently claimed that brain-computer interfaces (BCIs) are headed for the mainstream market sooner rather than later. Whether these kinds of “think it and the computer does it” UIs will be practical and useful enough to achieve adoption outside the “glasshole” set is up for debate. But one thing’s for sure: consumer products mean legalese–a lot of it. Few of us read the Terms of Service (TOS) agreements associated with the bevy of networked technology we blindly rely on. We only tend to notice or care about a TOS when something breaks or freaks us out after the fact (as Instagram found out last year). But when a consumer product claims to jack itself right into your mind? That might just make people want to actually read these contracts up front.
The Times article notes that Muse, a manufacturer of one of these early-adopter BCI products, maintains an FAQ on its website “devoted to convincing customers that the device cannot siphon thoughts from people’s minds.” It’s a dryly amusing nod to the fact that, while these sci-fi-in-reality tech products seem neat and cool, there’s some kind of comfort-zone Rubicon that gets crossed when they start interfacing with our thoughts. Of course no product that uses a crude EEG headband is going to be able to literally read your mind the way Gmail scans your inbox. But that’s the kind of thing that’s nice to be spelled out in writing anyway, isn’t it?
An FAQ and a TOS serve very different purposes. The former is meant to be reassuringly human-readable, while the latter is often written to actively thwart comprehension. What will the TOSs for BCIs look like? Exactly the same as those for Instagram and iTunes, no doubt–that is, utterly opaque, designed to protect the manufacturer’s present and future interests by remaining as inscrutable as possible. But what if the creators of these products took this as an opportunity to do something different: to turn TOSs into part of the user experience?
The UI-design refrain I’ve been chanting lately–“familiar, legible, and evident”–could meaningfully include these formerly forbidding documents. The experience of using a BCI is unfamiliar enough that Muse feels the need to proactively head off fears of dystopian mind-reading, but the real-world legal scenarios explicated in a TOS are much more likely to result in unintended outcomes that can materially impact someone’s life. When those outcomes–especially ones related to privacy or copyright–spring from a device that “connects to your brain” (figuratively or literally), the blowback is going to make the Instagram TOS flap look like peanuts: it’s going to feel like a whole new dimension of violation. And the 6-foot-long TOS that you “agreed” to without reading, because it was designed to discourage you from doing so, is going to feel like it was purposefully weaponized against you in order to facilitate that violation.
No consumer tech company needs that kind of noise. With BCIs, it just might pay to make TOSs as inviting, understandable, and usable as possible–in other words, to treat them like another part of the product packaging, which they in fact already are. Accomplishing this without neutering TOSs as legally enforceable documents will take some kind of UX-design/lawyer genius. But if these companies want us to trust them enough to literally open our minds to them, shouldn’t they put in that kind of extra effort?