Re: Voice Interfaces

Dustin Curtis wrote a fantastic article discussing the shortcomings of voice interfaces. I’d like to add a few points of my own in response.

Go read his thoughts first, then come back. I’ll wait.

Contextual Awareness

Dustin points out that a lack of contextual awareness makes voice interfaces less than simple to use. I’d argue that contextual awareness goes beyond voice interfaces and software; it affects the full product and experience design. At the end of his article, Dustin tells people to imagine what could be done; here are a few ideas I have about contextual awareness as it relates to our devices today.

Why do I have to tell my phone when I’m holding it with one hand?

The Freaking Huge iPhone 6 Plus and the Slightly Less Huge iPhone 6 bring to light some important contextual issues related to human factors. For people with smaller hands (most often women), using the phone with one hand becomes difficult. In fact, Apple has introduced a strange accessibility mode, Reachability, that slides the top of the screen down when you double-tap the home button.

That’s nice of them, but why do I have to tell my phone how to be accessible every time I use it? Personalized accessibility should come standard. It shouldn’t be that difficult a task. In fact, how cool would it be if I could tell my phone how far I can reach and have it remember that? Perhaps it could adapt my interface to work better with my hand, or track where my thumb is and move the icons nearer to it, magnet-style. That seems relatively doable if Amazon’s phone can track my eyes.

This kind of context is the low-hanging fruit; the things that continuously provide a positive return in user experience. This is the evolution of “user account preferences”.

My life has modes; why doesn’t my computer?

If you are like me, you have different modes of being. While I’m at work, I’m in work mode. Not everyone treats this issue in the same way, but perhaps you’ve experienced this before. This is especially important for people who use their computers in multiple contexts. I’d like to have a “mode” switcher on my computer. I should be able to flip a switch and open an environment on my computer that helps me focus, relax, or accomplish a specific set of tasks. For example, to record a screencast, I need my desktop cleared, a specific wallpaper set, a particular resolution on my screen, Do Not Disturb turned on, and an array of programs opened.

This should be a contextual setup on my computer, but it’s not.

Maybe you have kids that you want to allow to use your computer, but you don’t want to set up full user accounts for them. Why can’t you easily set access control and flip a switch to change contexts?

It’s very simple to make this happen – in fact, on a few occasions, I have set up scripts to make these kinds of contexts happen with a simple command. But unfortunately, my operating system doesn’t do this on its own, and my contexts shift over time. Thus, maintaining scripts to handle this for me is unrealistic.
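A minimal sketch of what such a personal mode-switching script could look like, assuming macOS and its `open -a` command; the mode names and apps here are placeholders I invented, not a real OS feature:

```python
import subprocess

# Hypothetical mode definitions: each mode maps to the setup
# commands that establish that context. App names are placeholders.
MODES = {
    "screencast": [
        ["open", "-a", "QuickTime Player"],   # recording app
        ["open", "-a", "Terminal"],           # demo environment
    ],
    "focus": [
        ["open", "-a", "TextEdit"],           # writing app
    ],
}

def commands_for(mode):
    """Return the setup commands for a mode (empty list if unknown)."""
    return MODES.get(mode, [])

def switch_to(mode, dry_run=False):
    """Run every setup command for the given mode; return what was run."""
    cmds = commands_for(mode)
    for cmd in cmds:
        if not dry_run:
            subprocess.run(cmd, check=True)
    return cmds
```

The painful part is exactly what I describe above: keeping the `MODES` table in sync with a shifting life, which is why this belongs in the operating system rather than in a personal script.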

Mobile isn’t replacing my laptop any time soon – it’s just expanding it at a different resolution.

It seems that our desktop/laptop computers are staying relatively stagnant. The atmosphere of operating systems has changed in our lives because of mobile, but the desktop isn’t keeping up. My computer should have sense-abilities that are currently reserved for my phone, or at the very least it should be tightly connected with my mobile devices and use the sensors in my phone to inform it of context. My health tracking should be most accessible on my laptop, and should change in resolution as I move from the larger and more flexible laptop to more limited devices.

My laptop should be as smart as or smarter than my mobile device. Until the resolution of interaction on a phone matches or surpasses that of a desktop computer, desktop OS innovation must keep up.

Steal these iWatch App Ideas

I want you to steal my ideas.

I’ve said it before, and nothing has changed. Ideas are important, but they aren’t proprietary. I want these things to exist, so hopefully with this post I can inspire someone to make them, even if that person is me.

The iWatch (or whatever it ends up being called) will be announced this week, and that’s exciting for entrepreneurs and developers for a lot of reasons. The ideas I present below are something like predictions; because I don’t know the features of the watch, I’m making a lot of assumptions.

As always, the ideas presented below are in no way my property. Think of them as money left on the sidewalk, in an envelope labeled “take me!”. The only thing I ask is that you email me or reach out on Twitter (@JCutrell).

1. Viral Crowdsourced Fashion

Very simple: let people make watch faces and share them as easily as a YouTube video. The idea is as old as desktop wallpapers, but it has a brand-new twist: fashion.

Why I think it will work: fashion is obviously a part of our lives, but technology and fashion have not fully fused yet. The only company truly bridging the gap between personal technology and fashion is – you guessed it – Apple. Apple’s products provide social status and an expression of personality. The iWatch will own a large part of this market, and will mark the true fusion of technology and fashion.

Bonus points: Make the faces sellable and take a small commission. (Not sure how this would work with in-app purchases, but that’s where you can get creative, right?)

2. Access-oriented Applications

Everyone is talking about the mobile payment industry as a game changer for the iWatch, but let’s back up for a second and talk about the fundamental difference in payment from your watch versus from your wallet.

Wallets are no more “mobile” than watches. The cognitive change the iWatch introduces is a new sense of access. And along with that comes a new category in application development that hasn’t really taken off with mobile phones.

This is my personal theory, but I think the failure of access-oriented mobile applications (case in point: Passbook) is at least partially due to the phone not being directly connected to you at all times. The wallet works partly because we were born into it (our parents carried one), and partly because we already understood carrying cash.

With a watch, we have a newfound freedom to directly identify the device with the wearer. I wouldn’t be surprised if payment apps eventually take biometric signals into account for fraud protection.

For these and many other reasons, it’s time to take these apps a step further: to use the bio-connectedness these devices provide to grant access to restricted places that require identity. This ranges from more secure login services and more accurate TSA screenings to check-ins and social physical-presence applications.

3. Quantification, Meet The Informed Self

The iWatch will most likely mainstream the quantified-self movement. We’ve already seen this getting a large share of the media attention.

But for those getting into the game now, it’s time to start thinking past our current place in the quantified movement, and towards the next step. I’m making a prediction: that next step will be to make sense of the data.

Anyone who has worked with infographics for long will tell you that the single most important part of their job is finding and choosing the best metrics and clearest visualizations of those metrics. There is some science to this, but there’s also some gut.

What does this mean? It means our quantifications give us nothing but raw data, no matter how pretty it is. We need to turn that raw data into something meaningful: surface strong correlations and suggest potential causes; compare seemingly unrelated things, like lines of code written versus steps walked per day; give users a framework for making decisions. Then comes the fun part: using big data to draw conclusions about trending correlations in the day-to-day behaviors of the population.

But the first step is to take the quantifiable numbers and show some kind of derived qualitative information.
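As a toy sketch of that first step, here is how raw numbers from two trackers could be turned into a qualitative statement. The sample data and the plain-language thresholds are invented for illustration:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def describe(r):
    """Turn a correlation coefficient into a plain-language hint."""
    if abs(r) < 0.3:                      # arbitrary cutoff for "weak"
        return "no clear relationship"
    direction = "more" if r > 0 else "less"
    return f"on days you do one, you tend to do {direction} of the other"

# Invented sample: lines of code written vs. steps walked, per day.
loc   = [120, 340, 90, 410, 260]
steps = [9000, 4000, 11000, 3000, 6000]
r = pearson(loc, steps)
```

Correlation is not causation, of course; the point is only that a sentence like `describe(r)` produces is already more useful to a user than the two raw columns of numbers.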

4. The ultimate timer

Seriously, we’re still making time-tracking applications? Yes, because all of them suck. Well, maybe they don’t suck, but I haven’t found one that feels natural. I end up watching a clock. Or my watch.

Aha!

A unique opportunity for productivity apps related to time: use the watch paradigm. This one seems obvious, but whoever wins this one will win big.

I’m thinking of something like this: set up a few behaviors the watch can sense and infer from, then make it very simple – tap, swipe to a client, tap again to start the timer, tap again to stop it. On my watch, not my phone. Certainly not on my computer. I use my computer to create invoices. I use my timekeeper to time things.
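That tap/swipe flow is really a tiny state machine. A sketch, with hypothetical client names and no real watch APIs involved:

```python
import time

class TapTimer:
    """Sketch of the tap-driven flow: swipe cycles through clients,
    tap starts the timer, the next tap stops it and logs the entry."""

    def __init__(self, clients):
        self.clients = clients
        self.index = 0            # currently selected client
        self.running = False
        self.started_at = None
        self.log = []             # (client, seconds) entries

    def swipe(self):
        """Cycle to the next client."""
        self.index = (self.index + 1) % len(self.clients)

    def tap(self, now=None):
        """Start the timer if stopped; stop it and log the entry if running."""
        now = time.monotonic() if now is None else now
        if not self.running:
            self.running = True
            self.started_at = now
        else:
            self.running = False
            self.log.append((self.clients[self.index], now - self.started_at))
```

The `log` is exactly what the computer-side invoicing tool would consume, keeping the watch’s job down to the two gestures.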

Conclusion

The iWatch presents a lot more opportunities than I’ve even discussed here. I’d love to hear what you think about these ideas, and I’d love for you to share yours with me. Or, by all means, feel free to take mine and build them.

Hit me up on Twitter (@jcutrell) if you want to continue this conversation.