Oldie but goodie from Dave McClure: Acquisition; Activation; Retention; Referral; Revenue. These are the things that matter for product directors. This is also nice: Q: How do you choose what to build? A: Choose features for conversion improvement. 80% will be optimization of existing features. 20% will be wholly new features. Just guess, then A/B test for conversion improvement. Another nice nugget: each metric should be owned by one person. See what I’m talking about.
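The "just guess, then A/B test" loop can be sketched as a two-proportion z-test; this is my own minimal illustration (the conversion counts are made up, and the post doesn't prescribe any particular test):

```python
import math

def ab_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: did variant B beat control A on conversion?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return p_a, p_b, z

# Hypothetical numbers: control converts 200/10,000, variant 260/10,000.
p_a, p_b, z = ab_test(200, 10_000, 260, 10_000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}")  # z > 1.96 → significant at 95%
```

If z clears ~1.96, ship the variant; if not, that's your signal the guess didn't move conversion.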
There is so much good in this rant. "Pictures under glass" is nice. So is the bit about the four fundamental grips (learning!). That the iPad was imagined in 1968. The importance of research & vision. But I think this is important: Needs vs. Technology vs. Capability.
In this rant, I’m not going to talk about human needs. Everyone talks about that; it’s the single most popular conversation topic in history.
And I’m not going to talk about technology. That’s the easy part, in a sense, because we control it. Technology can be invented; human nature is something we’re stuck with.
I’m going to talk about that neglected third factor, human capabilities. What people can do. Because if a tool isn’t designed to be used by a person, it can’t be a very good tool, right?
I think at UC we’ve focused mostly on the last of those three. Accidentally, probably, but that’s a happy accident.
See what I’m talking about.
The idea: Have 1-on-1 meetings with your team; Don’t delegate, but help set priorities; Manage the key activity, not the person in charge of it.
In that meeting, discuss:
What are the top five things you’ve been working on the last two weeks?
Do those match to the items you’re accountable for in the 90 Day Plan?
What are you doing to advance the careers of the people you lead?
The 1-on-1 meeting structure above really focuses your Team on what actions they’re taking to advance the company’s goals. I’m not delegating things to my Leaders. I’m asking them what they’re doing to advance the goals of the company. They have freedom to attack our top priorities however they see fit, and then I hold them accountable to that.
See what I’m talking about.
This is an excerpt from a lovely Medium piece about the trouble with distributed teams. The problem identified here is “Expensive Communications Loops.”
Technology for sharing problem spaces and collaborating online remains error-prone, buggy and unwieldy. Workers are typically in different time zones, with different working hours. Soft interrupts — leaning over to the person across from you, quick whiteboarding sessions, questions lobbed across a room — become hard interrupts. Chat messages, Skype calls, scheduled meetings.

Multiply these factors in situations where multiple people, multiple sources of feedback, and multiple functional teams are required to complete projects. Distributed teams can carry a much larger coordination cost than centralized teams. Planning meetings, holding meetings, struggling with shitty collaboration and conferencing technologies, creating and distributing status updates, cross-company communication, and the cost of ambient online chatter adds up fast.

Suddenly tasks like getting approvals, doing design and content reviews, gut-checking an idea, introducing a new project, brainstorming and whiteboarding and other work that benefits from a tight communications loop become time-consuming and frustrating.
This is a core reason why big companies struggle at The Internet, especially when they rely on outside partners to do a bulk of their digital work for them. In those cases, it’s not just that they have a distributed team, it’s that the distributed team has parts that carry their own overhead, their own business goals, and their own motivations. And when the contract runs out, or the relationship sours, all the institutional learning that the partners have gained goes away.
Not to mention the fact that you lose proximity and speed.
See what I’m talking about.
The following is from a great longish read on the design/development of the Rocketdyne F-1 engine (of Saturn V fame), and a modern rebuild/redesign using current technology.
Why was NASA working with ancient engines instead of building a new F-1 or a full Saturn V? One urban legend holds that key “plans” or “blueprints” were disposed of long ago through carelessness or bureaucratic oversight. Nothing could be further from the truth; every scrap of documentation produced during Project Apollo, including the design documents for the Saturn V and the F-1 engines, remains on file. If re-creating the F-1 engine were simply a matter of cribbing from some 1960s blueprints, NASA would have already done so.
A typical design document for something like the F-1, though, was produced under intense deadline pressure and lacked even the barest forms of computerized design aids. Such a document simply cannot tell the entire story of the hardware. Each F-1 engine was uniquely built by hand, and each has its own undocumented quirks. In addition, the design process used in the 1960s was necessarily iterative: engineers would design a component, fabricate it, test it, and see how it performed. Then they would modify the design, build the new version, and test it again. This would continue until the design was “good enough.”
Further, although the principles behind the F-1 are well known, some aspects of its operation simply weren’t fully understood at the time. The thrust instability problem is a perfect example. As the F-1 was being built, early examples tended to explode on the test stand. Repeated testing revealed that the problem was caused by the burning plume of propellant rotating as it combusted in the nozzle. These rotations would increase in speed until they were happening thousands of times per second, causing violent oscillations in the thrust that eventually blew the engine apart. The problem could have derailed the Saturn program and jeopardized President Kennedy’s Moon landing deadline, but engineers eventually solved it with a set of stubby barriers (baffles) sticking up from the big hole-riddled plate that sprayed fuel and liquid oxygen into the combustion chamber (the “injector plate”). These baffles damped the oscillation to acceptable levels, but no one knew if the exact layout was optimal.
I love that in the face of massive, barely understood complexity, the design process evolved to become more physical and ad-hoc.
See what I’m talking about.
Numerical weather modeling splits up the globe into a series of three-dimensional pixels. It applies a ton of math to the data representing each of those pixels to make predictions about the movement, intensity and impact of weather systems. These predictions are generated by specific models developed by groups of people in various weather agencies/administrations around the world, and some are better than others.
The systems that process these tasks are some of the most complex and powerful on the planet, and with good reason. Accurate weather predictions – especially some days out – are extraordinarily difficult to create, as “weather” is one of the most complex things we interact with on a daily basis. Further, billions of dollars and thousands of lives hang in the balance.
Right. So two interesting things worth looking at here: resolution and parameterization.
With complex systems, any accurate prediction of future behavior is directly connected to the resolution of the data provided to the predictor(s). Consider Europe’s forecasting service (the European Centre for Medium-Range Weather Forecasts, or ECMWF) vs. the one used in the United States (the Global Forecast System, or GFS): the ECMWF operates with 16km-wide “pixels” of information, and predicted Hurricane Sandy’s western turn, while the GFS’s 28km-wide pixels did not.
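That resolution gap is expensive. Here's my own back-of-envelope sketch (the surface-area figure is approximate, and this ignores vertical levels and the smaller timestep a finer grid requires):

```python
# Rough cost of horizontal resolution: shrinking the grid spacing
# increases the number of grid columns by the square of the ratio.
EARTH_SURFACE_KM2 = 510e6  # approximate surface area of Earth

def columns(grid_km):
    """Approximate number of horizontal grid cells at a given spacing."""
    return EARTH_SURFACE_KM2 / grid_km**2

print(f"16 km grid: ~{columns(16):,.0f} columns")
print(f"28 km grid: ~{columns(28):,.0f} columns")
print(f"ratio: {columns(16) / columns(28):.1f}x")  # (28/16)^2 ≈ 3.1x more cells
```

So the finer European grid is doing roughly three times the horizontal work per forecast, before you even count the shorter timesteps.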
But in all complex systems, modelers must accept that there is a level of detail that is unresolvable. Sure, in principle everything is measurable, and someday it may be computationally possible to use raindrop-level data in an analysis of a weather system, but until then weather modelers use parameters as stand-ins for operations that are too small or too numerous to include. This is called “parameterization” and I think it’s awesome. (For what it’s worth, we all do this when we make predictions, but in weather modeling there’s a process for how it works.)
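Here's a toy illustration of the parameterization idea, entirely my own sketch (real schemes are far more sophisticated; the threshold and rate here are invented tuning constants): processes smaller than a grid cell get replaced by a simple tuned function of what the cell can see.

```python
# Toy parameterization: a coarse grid can't resolve individual rain showers,
# so sub-grid rain is replaced by a tuned function of cell-average humidity.

def subgrid_rain(humidity, threshold=0.8, rate=5.0):
    """Stand-in for unresolvable shower-scale physics: assume rainfall
    scales linearly with how far humidity exceeds a tuned threshold."""
    return max(0.0, humidity - threshold) * rate  # mm/hr, invented constants

# One forecast step over a strip of grid cells (cell-average humidity, 0-1).
cells = [0.55, 0.78, 0.86, 0.93]
rain = [subgrid_rain(h) for h in cells]
print(rain)  # only cells above the threshold produce rain
```

The model never simulates a single raindrop; it just trusts that the tuned stand-in behaves, on average, like the physics it replaced.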
Here’s an amazing, quick-ish read on Craig Ehlo’s experience guarding MJ.
Don’t remember Ehlo? He was the dude on the other side of The Shot and a pretty respectable player himself. Aside from the piece about MJ telling him what he was going to do, and then doing it, there’s this great quote:
I wouldn’t say I was the unfortunate one, because still, like my dad always said, you’ll be the best when you play the best. I was always thrilled to be in that position.
Teams don’t have flow in isolation; their opponents are part of it too. You play to the level of your competition as a system. That blows my mind.
See what I’m talking about.
Creating great content – for any platform – is really, really hard. If you want to do it right, you’ve got to invest a ton of money and time into it, and not in ways that companies are used to investing their money. Because in almost every category you’re going up against thousands and thousands of people that have a big head start on you, and no amount of advertising can make bad content work on the internet.
I’d argue that it’s a better idea to build true digital utility than to chase success in the content space. It’s not just better because you can capture essentially free and unlimited rights to whatever content and interactions are fostered by your utility, but also because it’s just plain easier. And by easier I mean cheaper and more likely to succeed.
What would you rather buy?
- The iPhone ($500MM R&D budget in 2006)
- House of Cards ($100MM production cost)
- John Carter ($250MM budget)
And don’t get me wrong. Content is great. It’s worth doing well. But I’d rather own the utility that the content lives on than just own the capability to produce it effectively.
Obsession is the fundamental element of the digital world.
All else being equal – including problem set – you will lose to the more dedicated, more fixated group. This wasn’t always the case, but the tables are turning in favor of those who care most about a given topic.
So this lens is probably the simplest of them all: care an unreasonable amount about what you do. Hire and compensate on how much of a shit someone gives about your goal. Build something that you care about, not that you think “a market” will care about. Be specific about your obsession, and keep chasing it. Everything possible depends on it.
I read this fantastically long SlideShare by John Willshire that had this phrase that I’m pretty sure he coined. I’ll end this lens with it because it sums up what I’m talking about so perfectly.
“People will only want the things you make as much as you want to make them.”