When people talk about their origin stories, there’s one thread that seems to be relatively common, give or take a few creators. At some point, the creator “just finally did it.”
Last month, I had the opportunity to speak at the Chattanooga Developer’s lunch. I talked about how to be a great beginner. This talk isn’t just for beginners, though – it’s for anyone who is working in technology, or any industry that requires constant skill acquisition.
I’ve been using the metaphor of blacksmith and cobbler recently in discussions about craftsmanship. Why?
Blacksmiths and cobblers are renowned for their dedication to their craft, repeating the same motions every day. Both take raw materials, perform a process of actions using a set of tools to refine those materials, and consistently produce something of practical value that is ready to use. A blacksmith’s tools and materials are quite different from a cobbler’s, but the two share many of the same fundamental disciplines in practice.
With a little Google research, you can find that a blacksmith today uses essentially the same tools as blacksmiths before the Industrial Revolution, and that those tools went largely unchanged for centuries. The Industrial Revolution changed our reliance on the local blacksmith, of course, but the toolset of a blacksmith from any period would look familiar to a blacksmith from any other.
Similarly, the cobbler’s tools have remained relatively unchanged. When new tools do come onto the scene, they come by absolute necessity. The blacksmith and the cobbler wouldn’t spend all of their time seeking out new tools, constantly on the lookout for a new way to bend iron. More likely, the blacksmith would be on the lookout for good scrap iron, and the cobbler for good deals on leather, or building relationships with the people who fashion the cloth and buy the boots he makes.
We have a tooling problem.
Notice that neither the blacksmith nor the cobbler are careless about their tools. Both of them are highly focused, and highly skilled with their tools, and actually rely quite heavily on them. Their jobs depend on having a reliable toolset that they are experts at using.
I think developers today have a fundamental problem with tooling, and it’s simple: we are constantly shopping for new tools.
You might be thinking, “I’m not shopping – these tools are free!”
But that’s not entirely true. Certainly, free and open-source projects don’t cost you the currency in your bank account. But they cost you another, perhaps more valuable, currency: time. You spend time finding, adopting, and deprecating your tools. You spend time reading blogs and books that compare tools, just for the sake of picking one. You spend time switching your code from one set of tools to another. You spend time retraining muscle memory, sometimes just to accomplish the same goals.
Admission: this analogy isn’t perfect.
Of course, we have a different trade from blacksmiths and cobblers. Our trade requires that we do, at a much faster pace, adapt our toolset to new demands in our industry. And perhaps it’s because the product we make is virtual, not physical – the materials we use are thoughts, not iron and leather. And, it’s also important to know that picking the right tools absolutely has an undeniable value proposition for your work. But the underlying truth is this: our constant shopping for new tools costs us that precious currency of time. The time you spend browsing the endless aisles of new programming libraries is time that could be spent becoming an expert craftsman, learning your existing toolset inside out.
I know, it’s a hard problem to solve. I know, it seems like not knowing about a particular tool or another might put you behind the ball. So I’m going to make a very clear distinction here:
I’m not saying to bury your head in the sand, and never investigate new tools. Some very smart, influential programmers recommend learning a new language every year. I don’t disagree; this keeps your brain sharp, and helps you stay aware in a constantly changing landscape. What I am saying is this: if you are constantly trying to master something new, you’ll never master anything. Instead, you should be consistently aware of new things, while focusing on mastering few things.
When to pick tools: 7 guidelines
Note: this is not a comprehensive list of the only times you should pick tools, but it should give you some idea of how to think.
- You know Assembler? Learn C.
- You know PHP? Learn Laravel.
- You know LISP? Try out emacs.
- You know Ruby? Learn about Rack or mruby.
- You know Objective-C? Look into Swift.
- You know Java? Learn about virtualization, or learn Scala.
- You know Apache? Learn some Linux sysadmin basics.
Pro tip: Be certain that you differentiate your learning efforts from your tool-picking efforts. You might choose to learn a new language that doesn’t stack well with your current toolset, just to expand your brain’s ability to adapt and think in new patterns. This has value, even if you never use that language as an actual tool. This is the whole idea of code kata.
5 step guide to picking tools
- Look at your materials (read: consumer devices, browsers, iOS APIs, your unique service offering, etc)
- Look at the available toolset (languages, frameworks, hardware, outsourced efforts)
- Imagine your product (you fill in the parentheses on this one)
- Pick the tools that can turn your materials into your product (make your best educated guess)
- Repeat every 2-3 years, keeping a majority of the tools you picked previously, and dropping only the ones that didn’t work so well.
You use your toolset every day; it’s natural to want to refine that toolset. But changing your toolset constantly might mean that instead of spending time becoming a craftsman, you are just wasting your time constantly tool shopping. Learn to say no when your tools are perfectly capable of doing the job you need for them to do, and learn how to properly adopt new tools when the time and situation are right.
I recently did a talk on some of the concepts in the book I’m working on. The talk really helped clarify some of the ideas I want to share with the world, and I’m sharing that talk with you today. Specifically, I discuss why our perception of creativity as it relates to logic is wrongly constructed, and a better way to understand our differences.
Hope you enjoy!
Dustin Curtis wrote a fantastic article discussing the shortcomings of voice interfaces. I’d like to discuss a few further points in relation to this problem.
Go read his thoughts first, then come back. I’ll wait.
Dustin outlines that a lack of contextual awareness makes voice interfaces less than simple to use. I’d argue that this contextual awareness goes beyond voice interfaces and software, and affects the full product and experience design. At the end of Dustin’s article, he tells people to imagine what could be done; here are a few ideas I have about contextual awareness as it relates to our devices today.
Why do I have to tell my phone when I’m holding it with one hand?
The Freaking Huge iPhone 6 Plus and the Slightly Less Huge iPhone 6 bring to light some important contextual issues related to human factors. For people with smaller hands (most applicably, women), using the phone with one hand becomes difficult. In fact, Apple has introduced a strange accessibility mode that brings the top of the screen down when you double-tap the home button.
That’s nice of them to help out, but why do I have to tell my phone how to be accessible every time I use it? Personalized accessibility should come as a standard, and it shouldn’t be that difficult a task. How cool would it be if I could tell my phone how far I can reach, and have it remember that? It could adapt my interface to work better with my hand, or maybe track where my thumb is and move the icons nearer to it, magnet-style. That seems relatively doable if Amazon’s phone can track my eyes.
This kind of context is the low-hanging fruit: the things that continuously provide a positive return in user experience. This is the evolution of “user account preferences”.
My life has modes; why doesn’t my computer?
If you are like me, you have different modes of being. While I’m at work, I’m in work mode. Not everyone treats this issue in the same way, but perhaps you’ve experienced this before. This is especially important for people who use their computers in multiple contexts. I’d like to have a “mode” switcher on my computer. I should be able to flip a switch and open an environment on my computer that helps me focus, relax, or accomplish a specific set of tasks. For example, to record a screencast, I need my desktop cleared, a specific wallpaper set, a particular resolution on my screen, Do Not Disturb turned on, and an array of programs opened.
This should be a contextual setup on my computer, but it’s not.
Maybe you have kids that you want to allow to use your computer, but you don’t want to set up full user accounts for them. Why can’t you easily set access control and flip a switch to change contexts?
It’s very simple to make this happen – in fact, on a few occasions, I have set up scripts to make these kinds of contexts happen with a simple command. But unfortunately, my operating system doesn’t do this on its own, and my contexts shift over time. Thus, maintaining scripts to handle this for me is unrealistic.
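To make the idea concrete, here’s a rough Ruby sketch of such a mode-switching script. Every mode name and shell command below is my own invention (and macOS-flavored), so treat it as a template rather than a working tool:

```ruby
# Hypothetical modes mapping a name to the shell commands that set up
# that context. These example commands are macOS-specific assumptions;
# substitute whatever your own contexts need.
MODES = {
  "screencast" => [
    "defaults write com.apple.finder CreateDesktop false && killall Finder", # hide desktop icons
    "open -a 'QuickTime Player'",
  ],
  "focus" => [
    "killall Slack Mail 2>/dev/null", # close distracting apps
  ],
}

# Returns the commands for a mode; only executes them when dry_run is false,
# so you can preview what a mode switch would do.
def switch_mode(name, dry_run: true)
  commands = MODES.fetch(name) { raise ArgumentError, "unknown mode: #{name}" }
  commands.each { |cmd| dry_run ? puts("would run: #{cmd}") : system(cmd) }
  commands
end

switch_mode("screencast") # prints the commands it would run
```

The dry-run default is the point: as your contexts shift over time, you can audit what each mode does before trusting it again.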
Mobile isn’t replacing my laptop any time soon – it’s just expanding it at a different resolution.
It seems that our desktop/laptop computers are staying relatively stagnant. The atmosphere of operating systems in our lives has changed because of mobile, but the desktop isn’t keeping up. My computer should have sense-abilities that are currently reserved for my phone, or at the very least it should be tightly connected with my mobile devices and use the sensors in my phone to inform it of context. My health tracking should be most accessible on my laptop, and should change in resolution as I move from the larger and more flexible laptop to more limited devices.
My laptop should be as smart or smarter than my mobile device. Until the resolution of interaction on a phone matches or surpasses that of the interaction on a desktop computer, desktop OS innovation must keep up.
I want you to steal my ideas.
I’ve said it before, and nothing has changed. Ideas are important, but they aren’t proprietary. I want these things to exist, so hopefully with this post I can inspire someone to make them, even if that person is me.
The iWatch (or whatever it ends up being called) will be announced this week, and that’s exciting for entrepreneurs and developers for a lot of reasons. The ideas I present below are something like predictions; because I don’t know the features of the watch, I’m making a lot of assumptions.
As always, the ideas presented below are in no way my property. Think of them as money left on the sidewalk, in an envelope labeled “take me!”. The only thing I ask is that you email me or reach out on Twitter (@JCutrell).
1. Viral Crowdsourced Fashion
Very simple: allow people to make watch faces and share them. Make them available like a YouTube video. This idea is as old as desktop backgrounds, but it has a brand new twist: fashion.
Why I think it will work: Fashion is obviously a part of our lives. However, technology and fashion have not fused completely yet. The only company truly bridging the gap between personal technology and fashion is – you guessed it – Apple. Apple’s products provide social status and expression of “personality.” The iWatch will own a large part of this market, and will mark the true fusion of technology and fashion.
Bonus points: Make the faces sellable and take a small commission. (Not sure how this would work with in-app purchases, but that’s where you can get creative, right?)
2. Access-oriented Applications
Everyone is talking about the mobile payment industry as a game changer for the iWatch, but let’s back up for a second and talk about the fundamental difference in payment from your watch versus from your wallet.
Wallets are no more “mobile” than watches. The cognitive change the iWatch introduces is a new sense of access. And along with that comes a new category in application development that hasn’t really taken off with mobile phones.
This is my personal theory, but I think the failure of mobile access applications (case in point: Passbook) is at least partially due to the phone not being perceived as directly connected to you at all times. The wallet works partially because we were born into it (our parents carried one), and also simply because we already understood carrying cash.
With a watch, we have a newfound freedom to directly identify the watch with the wearer. I wouldn’t be surprised if payment apps eventually took bio signs into account for fraud protection.
For these and many other reasons, it’s time to take these apps a step further: start using the bio-connectedness these devices will provide us to grant access to restricted places that require identity. This ranges from more secure login services and better, more accurate TSA screenings to check-ins and social physical-presence applications.
3. Quantification, Meet The Informed Self
The iWatch will most likely mainstream the quantified self movement. We’ve already seen that movement capture a large share of the media attention.
But for those getting into the game now, it’s time to start thinking past our current place in the quantified movement, and towards the next step. I’m making a prediction: that next step will be to make sense of the data.
Anyone who has worked with infographics for long will tell you that the single most important part of their job is finding and choosing the best metrics and clearest visualizations of those metrics. There is some science to this, but there’s also some gut.
What does this mean? It means that our quantifications don’t really provide us with anything other than raw data, regardless of how pretty it is. We need to take this raw data and turn it into something meaningful. Look at strong correlations, and suggest potential causation. Compare seemingly unrelated things, like lines of code versus number of steps walked per day. Give users a framework for making decisions, and then comes the fun part: using big data to come to conclusions about trending correlations in day-to-day behaviors of the population.
But the first step is to take the quantifiable numbers and show some kind of derived qualitative information.
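As a sketch of what that derived, qualitative layer could look like, here’s a Pearson correlation between two invented daily series – lines of code written versus steps walked. All the numbers are made up for illustration:

```ruby
# Invented sample data: one entry per day.
lines_of_code = [120, 80, 200, 150, 90]
steps_walked  = [4_000, 9_000, 2_500, 3_000, 8_000]

# Pearson correlation coefficient: +1 is a perfect positive linear
# relationship, -1 a perfect negative one, 0 no linear relationship.
def pearson(xs, ys)
  n = xs.length.to_f
  mean_x = xs.sum / n
  mean_y = ys.sum / n
  cov = xs.zip(ys).sum { |x, y| (x - mean_x) * (y - mean_y) }
  sd_x = Math.sqrt(xs.sum { |x| (x - mean_x)**2 })
  sd_y = Math.sqrt(ys.sum { |y| (y - mean_y)**2 })
  cov / (sd_x * sd_y)
end

r = pearson(lines_of_code, steps_walked)
# A strongly negative r here would read as: "on days you code more, you walk less."
```

The correlation number itself is still raw; the product opportunity is translating it into a plain-language observation the user can act on.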
4. The ultimate timer
Seriously, we’re still making time tracking applications? Yes. Because all of them suck. Well, maybe they don’t suck, but I haven’t found one that is natural. I end up watching a clock. Or my watch.
A unique opportunity for productivity apps related to time: use the watch paradigm. This one seems obvious, but whoever wins this one will win big.
I’m thinking something like setting up a few behaviors the watch can sense and infer from, but keeping the interaction very simple – tap, swipe to a client, tap again to start the timer, tap again to stop it. On my watch, not my phone. Certainly not on my computer. I use my computer to create invoices. I use my timekeeper to time things.
The iWatch presents a lot more opportunities than I’ve even discussed here. I’d love to hear what you think about these ideas, and I’d love for you to share yours with me. Or, by all means, feel free to take mine and build it.
Hit me on twitter (@jcutrell) if you want to continue this conversation.
I, like many developers and tech consultants, am a chronic underestimator. When I make an estimate, I do so believing that the estimate encompasses the effort necessary for me to accomplish each and every goal for that project.
And I’m wrong, nearly every time.
People have a completely skewed perception of time. Check out this excerpt from a Huffington Post article from last year.
This vs That’s initial research is in line with previous research into time estimation, which has revealed that our ability to accurately estimate time is influenced by our emotional state, how hungry we are, how tired we are, whether our eyes are open or closed, what we are doing, among many other factors.
Aside from the fact that people in general are terrible time estimators, it’s also my opinion that estimating a multi-stage project all at once is about as useful as guessing who will win March Madness at the beginning of the bracket. It’s not a good idea to put your money on that bet.
Here’s one of the biggest reasons why we estimate improperly.
Our perception of effort and knowledge is different from our perception of implementation.
How long would it take you to make 100 sandwiches?
How easy is it to make a sandwich? Not all that hard – you’ve done it a million times. Five minutes on a good day, 10 minutes tops.
So, how long does it take to make 100 sandwiches?
I asked my wife this question, and she estimated an hour and a half. Seems fair to me – probably about what I would have guessed as well.
Would you immediately think to guess that it would take 500 minutes (8.3 hours)? You probably think that you’d have a system – a way of solving common problems over and over by that point. 100 sandwiches shouldn’t take nearly 8 hours, considering how easy sandwich-making is. You’d have a killer sandwich assembly line.
But even if your amazing sandwich assembly line was world class and doubled your efficiency from 5 minutes to 2.5 minutes, you’re still going to finish sandwich 100 at the 250-minute mark.
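The arithmetic is worth writing down, because the linear multiplication is exactly what our intuition glosses over:

```ruby
minutes_per_sandwich = 5.0
sandwich_count = 100

# The naive total is just linear multiplication -- no shortcut exists.
naive_total = minutes_per_sandwich * sandwich_count   # 500 minutes
naive_hours = naive_total / 60.0                      # ~8.3 hours

# Even a world-class assembly line that doubles your efficiency
# only halves the total; it never makes 100 sandwiches feel "quick".
assembly_line_total = (minutes_per_sandwich / 2) * sandwich_count  # 250 minutes
```

The same shape applies to a project estimate: multiply the honest per-unit time by the real unit count before you let “this is easy” set the number.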
This is the cognitive problem we face in estimating time for development. We see projects that we have the technical ability to solve without having to acquire any new knowledge, and therefore we have a tendency to underestimate. Things we already know how to do and systems we fully understand seem like they should take much less time to implement than they actually take.
Stop thinking about how easy a project is, and start thinking about how long it takes you to make one sandwich.
Trying to serve your Parse files via SSL/HTTPS? You’ll notice that you can’t force it, and Parse doesn’t support this via their file URL scheme. But you can use the same trick Parse uses on Anypic.
Parse serves its files over plain http://. The trick is to rewrite the URL so the same file is served directly from S3 over HTTPS: swap the protocol and prefix the S3 host, leaving the rest of the URL intact.
In Ruby, that’s:

url.gsub "http://", "https://s3.amazonaws.com/"

And in JavaScript:

// url is your Parse file URL
var subbedUrl = url.replace("http://", "https://s3.amazonaws.com/");
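As a small sketch, you might wrap that substitution in a helper that leaves already-secure (or non-http) URLs alone. The method name here is mine, not Parse’s:

```ruby
# Hypothetical helper -- rewrites a plain-http Parse file URL so the file
# is served over HTTPS directly from S3. Leaves other URLs untouched.
def secure_file_url(url)
  return url unless url.start_with?("http://")
  url.sub("http://", "https://s3.amazonaws.com/")
end
```

Using `sub` rather than `gsub` replaces only the leading protocol, which is the only occurrence you want to touch.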
Boom – fully secure Parse files.
The first preview of the book can be found here:
I hope you enjoy it, and I can’t wait to share more of this book with you.