Good process, bad process

Process is one of the most contentious words at a startup. With the increased focus on culture, many consider process to be a messy corporate virus that refuses to die. While culture is absolutely key to a startup’s success, all sales managers know that without a sales process with clear stages and qualifiers, forecasting would be out of control. The same holds true for software development. So what does good process look like, and how do you know you are using it right?

Good process enables communication. When coupled with clarity on roles, and business systems to instrument communication, good process enables information to move quickly to the relevant constituents of a business. All startups are in a race against time, and losing time because information cannot move rapidly is certain to send a startup into chaos. If you are bigger than 10 people, you will need some sort of process to avoid that chaos.

Good process enables information to move quickly from one part of the business to another, or from the edges to the core. For instance, when sales reps learn that a company is losing business to competitors, good process allows this information to reach product management, which can quickly move to attack the new threat. When a development team learns of a potential schedule slip, good process allows this information to be communicated to customers seamlessly. When a developer learns that a certain story is blowing past its estimate, good process enables other members of the team to learn about it early enough that they can help. Absent good culture, this would be worthless, but even with good culture, some process is required to allow developers to communicate. Bad process, on the other hand, creates silos & bottlenecks. Bad process leaves a sales rep unable to communicate directly with a product manager when a prospect has questions about an upcoming feature. Bad process ensures that the customer success team is unable to give product management any visibility into the common concerns of users.

Good process enables stakeholders to get new information that could influence their course of action as soon as possible. This applies as much to a CEO learning of unhappy customers as to developers working on a new feature learning of changing requirements. On the other hand, bad process results when process itself becomes the goal. When an organization sets up rigid rules around how information MUST move, & managers attempt to enforce the process as if it had some inherent value, you know you have a bad process on your hands. When you find that members are consistently breaking from the process to move information, you have a bad process, not a bad employee.

Good process enables teams within the business, as well as the business as a whole, to get on the same page as efficiently as possible. This is exactly what a sales process, a development process, or a customer success process is intended to achieve. Further, good process also provides guard rails when facing the highs and lows of a startup. Good process allows significant room for judgment. Bad process becomes a substitute for judgment. When faced with uncertainty, if you find yourself leaning on process to decide what to do, instead of thinking about the desired outcome to guide your actions, you are likely falling victim to process.

To summarize, good process simply enables teams with great culture to reap its benefits. Bad process, on the other hand, tries to stymie culture and results in the disempowerment of employees.

Value of Software Estimates

As a product manager, one of my favorite questions to ask a developer is “what is the value of an estimate?”. Few developers are comfortable answering this question, and in fact, if one answers it well, you’ve found a gem.

One of the most controversial & often misunderstood aspects of software development is estimating (possibly second only to deadlines, which are related). Ask a developer for their estimate for a certain feature, and you will inevitably make them uncomfortable. Many managers misunderstand this discomfort. They see it as identical to a sales manager holding a sales rep accountable, or a fitness instructor holding a January joiner at the gym accountable. Bad managers see it as vindication of their ability to manage. Many don’t understand that asking a developer or a development team for an estimate is like asking someone to predict the future. It is absolutely ridiculous, and you will only know the truth after the fact.

Yet, all businesses inherently make predictions about the future. In the same vein, estimates are a critical part of building software, even though they are always wrong (when they are right, it is purely accidental, so stop patting yourself on the back for hitting the estimate). There are many techniques for estimating, ranging from looking at comparables to doing a bottom-up estimate. This article is not about techniques for estimation. It identifies the ways in which estimates are useful, and the ways in which they are abused, specifically for product companies (as opposed to custom software providers). Let’s start with the misuse of estimates. Before I do, I want to state that everything I have learned has been through rich conversations with multiple members of our development team, current and past alike.

Abusing Estimates

If you do any of the following, think hard, because you are abusing estimates.

Predicting delivery dates

There is a strong temptation among managers to predict delivery dates. Many see their ability to make such predictions as a reflection of their competence. Of course, at the bottom of their heart, they know they are full of shit. To compensate for this, they rely on scientific-looking graphs and empirical constants (man days / ideal days) to provide a range for the delivery date.

Why do they do this? Because some stakeholder in the business (often a sales rep or marketer) wants to know what the delivery date is. Instead of making an effort to understand why the stakeholder is interested in a delivery date, development managers indulge them by using all kinds of sorcery to come up with delivery dates. If you find yourself doing this, STOP please! You are doing your business & colleagues a disservice. Instead, have a conversation with the stakeholder and try to find creative ways of de-risking their initiatives so that they are as independent of delivery dates as possible.

This act becomes totally degenerate when a manager starts to think that a lower estimate implies greater competence. If you see this happening, SHUT IT DOWN by championing the manager who gives the longest estimate. This might seem radical, but it disrupts degenerate behavior rapidly.

Evaluating quality of developers

Many of us have heard executives and managers state that X group of developers or Y development team is great because they always meet their estimates. In fact, I have fallen into this trap many times. But I know better now, through experience.

There was a time when our development cycles were long and painfully unpredictable. One of the statements heard often was that we need to give the product team time to catch up. This was a well-meaning statement, but it pained me a lot. I can only imagine how our developers felt. In fact, it was painful enough that I told our team we would know we had made progress when we stopped hearing that phrase. With nearly the same team that was shipping one feature in 100 days in March 2013 & always blowing estimates, we shipped new business value (ranging from new features to faster analysis speed) 65+ times in Q1-2014, which is more than one value-adding release per workday (we delivered well under estimates in some places, whereas we slipped in others). In the process, we are helping fuel our double-digit month-over-month revenue growth.

The takeaway is that making or missing estimates doesn’t automatically reflect the quality of developers.

Making external commitments

Sales reps often feel pressure to make delivery commitments to prospects. A typical conversation goes like this: sales reps, via their managers, go to the CEO and tell her that X deal (or deals) will only close if we can commit to certain features. The well-meaning CEO starts a discussion with the development manager to come up with a delivery date to commit to the client, or to set expectations. The development manager starts the super-scientific process of predicting the future using estimates. If you see this happening, STOP the madness! Do the following instead.

Spend some time talking with the sales rep to learn what the prospects want and what their concerns are. Once you understand the prospect’s concerns, arrange a meeting with the prospect and have a rich & open discussion about the feature set. Be honest with them, because they have likely seen all too many promises broken. Finally, and most importantly, help your sales team win business. That’s what the sales team wants, and that’s what you want. Sales is the hardest job at a business, and no one else is measured by a number. So help your sales team as much as you can, but don’t make promises that are certain to be broken.

When are estimates useful?

As I stated, predicting the future is important for planning. Therefore, estimating is vital to the success of a business. Let me tell you how.

Prioritizing features

Every good product manager has significant intuition about the value of a certain feature, no matter how small that feature is. She represents the customer, after all. If all features required the same effort to build, you would simply create an ordinal list (assuming sequencing doesn’t matter, which is a laughable assumption) and start development. The reality is otherwise. Different features require varying amounts of work. Without understanding the relative cost of features, you simply can’t prioritize. This is where estimates play a very meaningful role. They help you do cost-benefit analysis and prioritize features.
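
To make the cost-benefit idea concrete, here is a minimal sketch, assuming each feature gets a rough value score from the product manager and an effort estimate from the team; the feature names and numbers below are made up for illustration, not a prescribed scoring method.

```python
# Hypothetical backlog: each feature has a value score from the product manager
# and an effort estimate from the team (story points, days, any consistent unit).
features = [
    {"name": "CSV export",      "value": 8, "effort": 3},
    {"name": "SSO integration", "value": 9, "effort": 13},
    {"name": "Faster analysis", "value": 7, "effort": 5},
    {"name": "Dark mode",       "value": 3, "effort": 2},
]

# The simplest cost-benefit ordering: value per unit of estimated effort.
# Sequencing constraints and dependencies are deliberately ignored here.
ranked = sorted(features, key=lambda f: f["value"] / f["effort"], reverse=True)

for f in ranked:
    print(f"{f['name']:<16} value/effort = {f['value'] / f['effort']:.2f}")
```

Rough estimates are enough for this purpose; what matters is the relative cost of features, not the absolute numbers.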

Structuring work & milestones

Like any prediction of the future, an estimate is more apt to be correct if the future is regular and predictable. This automatically implies that you are more likely to be correct when predicting the immediate future than something further out. For instance, I will be flying for the next hour, but I really have no clue what I will be doing tomorrow at this time (hiking, I hope!). Therefore, a small estimate is more likely to be true, and a large estimate more apt to be wrong. So when faced with a large estimate, give the development team time to break it into smaller estimates such that the milestones ride a delivery curve (the first milestone delivers some value, the next delivers more, and so on). This ensures that you learn of slips early, which helps the business respond to changes, and that you deliver value even if you miss certain dates. By extension, failures on small estimates are small, and failures on large estimates are expensive.
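
As a rough illustration, here is a sketch of the kind of sanity check this implies; the milestone names, estimates, and threshold are hypothetical, not a method taken from our team.

```python
# Hypothetical breakdown of a large feature into milestones that each
# deliver value on their own and carry a small, checkable estimate.
milestones = [
    {"name": "Read-only report",  "estimate_days": 4, "delivers_value": True},
    {"name": "Filter by project", "estimate_days": 3, "delivers_value": True},
    {"name": "Export & sharing",  "estimate_days": 5, "delivers_value": True},
]

MAX_MILESTONE_DAYS = 5  # assumed threshold: beyond this, break the work down further

for m in milestones:
    assert m["delivers_value"], f"{m['name']} delivers no value on its own"
    assert m["estimate_days"] <= MAX_MILESTONE_DAYS, f"{m['name']} is too big to estimate reliably"

total = sum(m["estimate_days"] for m in milestones)
print(f"{len(milestones)} milestones, {total} estimated days, value delivered at every step")
```

The benefit is that a slip shows up after a few days, at the end of one small milestone, rather than at the end of the whole project.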

Evaluating code quality & awareness

For a multi-product company like ours, we often deliver similar features in multiple products. When the same value can be delivered in product X for significantly less than in product Y, you know there are issues with code quality or with awareness of the code in product Y. While this is an extreme example, good developers and managers keep an eye on this at all times, for all features, big or small. The best mechanism for surfacing such issues is estimates, which help validate a code smell. When this happens, it’s time to allow the development team to bring up the quality of the code base in product Y as deliberately as possible.
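
One simple way to keep an eye on this is sketched below; the feature names, estimates, and threshold are hypothetical. The idea is just to compare estimates for comparable features across products and flag the outliers for a closer look.

```python
# Hypothetical estimates (in days) for delivering comparable features in two
# products; a large ratio hints at code quality or awareness issues in the
# more expensive code base.
comparable_estimates = {
    "usage dashboard":   {"product_x": 3, "product_y": 9},
    "role-based access": {"product_x": 5, "product_y": 6},
}

SMELL_THRESHOLD = 2.0  # assumed: flag anything at least 2x more expensive in one product

for feature, est in comparable_estimates.items():
    ratio = est["product_y"] / est["product_x"]
    if ratio >= SMELL_THRESHOLD:
        print(f"{feature}: {ratio:.1f}x more expensive in product Y, worth a closer look")
```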

 

To summarize, please continue to use estimates, and use them wisely. But please stop using them as a means of predicting the future or measuring competence. Instead, use them for planning and for de-risking software projects.

Team Size vs. Team Culture.

I read this article on team size when it came out several weeks ago. If you haven’t read it, I encourage you to. The author argues that empirical data across various fields, from medicine to the military to software development, suggests that 7 is the magic number for team size, with a variability of +/- 2. This is a very satisfying conclusion for me, of course, since our average team size across projects is 7.5 with a standard deviation of 1.75. However, what’s more important in the article are the underlying reasons why big teams suck. Independent of the magic number, let’s take a closer look at why big teams suck:

  • Meetings become a waste of time
  • Interpersonal friction increases exponentially as team size increases
  • Individuals spend more time coordinating work than doing work as more handoffs are required to accomplish work
  • Large teams are immensely difficult to manage
  • Small teams do work faster and argue less
  • Small teams reduce confusion & discomfort about who to ask for help

If one inspects these reasons closely, two underlying factors emerge: the nature of the work or task at hand, and culture. Let’s discuss them in the context of software development.

Nature of work

Teams are organized to achieve a certain goal. In fact, what dictates the success of a project is how work is structured within it. Team size (between certain minimum and maximum limits) provides a trade space between speed and efficiency.

At Sefaira, our team size has varied from 3 people to 10 people based on the project at hand. While we are one team of 10 developers, we organically organize around projects based on how tickets are structured. Each ticket is intended to be minimally viable and to result in customer-facing value. Based on how cross-functional this work is, team sizes vary. The focus on work ensures that meetings are structured around work in flight & rarely wasteful, as team members who aren’t involved excuse themselves. Work within tickets is structured through high-quality face-to-face conversation between developers. Minimizing work in process allows us to manage work in flight, which is distinct from managing teams. Admittedly, we struggle with managing work when we have 8-10 developers working on a ticket. At its core, however, that is a failure to structure the work well enough.

Culture

Argument is often presented as a bad outcome of large teams. However, one team’s argument is another team’s debate. Based on what we are discussing, we easily achieve team-wide consensus on some items (e.g. the need for continuous scoping, the need to split work into the smallest chunks that deliver business value) & on others we have spirited debates (e.g. the technical path for a certain ticket, what counts as good quality, the size of pull requests, etc.).

So in summary, while team size constraints are good guard rails to bear in mind, there are other bigger determinants of success.

Why do small teams suck?

It is also worth looking at what is lost when team sizes are fixed artificially, or when small teams persist for extended periods of time.

Code Coverage

Small teams of fixed composition, especially when they persist for extended periods of time, confine knowledge of pieces of code to certain individuals. This makes it difficult to respond effectively to changing business needs (which is true for all startups), as well as to churn.

Degenerate ownership model

While constancy of purpose is an important motivator for individuals, individuals in small teams are at risk of developing a sense of ownership over the code, which makes it difficult to have objective debates about scope, technical path, or design choices.

Reduced diversity

Software developers vary between two extremes: those who think there is infinite time & pursue great quality, and those who want to move extremely fast at the risk of medium- to long-term quality. All developers are always right when they frame the argument in their own terms. The risk with small teams is that you naturally tend to have limited diversity, and this can adversely impact the long-term ability of product teams.

In conclusion, I would encourage managers to focus on creating a culture of empowerment and debate, while helping break large goals into tangible pieces of work that are structured to deliver value or test hypotheses incrementally. Then let this drive team size and composition. To me, this seems like a reasonable approach.
