AI Team Playbook


Deploying machine learning or AI systems is no easy task. The path to production is riddled with potential pitfalls. Even before you begin making strategic decisions within the context of a specific project, there are a few higher-level principles that can help improve your team's chances of success.

Here are my top 3 principles for production-oriented (rather than research-oriented) AI teams:

#1: Our probability of success is primarily a function of the health of our feedback mechanisms.

One of our biggest risks is that feedback will be too late, too infrequent, too narrow, or unbalanced. Assume that this failure mode is the most likely one to emerge for a team like ours, and embrace a mild paranoia about it.

Feedback pertains not just to the training and validation of a model, and it goes beyond asking stakeholders how they feel about the glimpses they get of our work in progress.

This goes beyond “improved communication.” I like the term “feedback” because it more strongly suggests a virtuous flow of information that guides our efforts. Course corrections are expected.

#2: Project/task selection is one of the biggest force multipliers under our control.

In other words, don't think that picking the right tools, the right data, or the right algorithm is the main goal.

Choosing an AI problem that has a strong path to user acceptance, even if our work is average, will generally yield a better outcome than choosing a problem with a difficult path to user acceptance, even if our work is excellent.

We can go deep on any number of things; the real art is choosing what to go deep on. We have a limited amount of time, and we have to be thoughtful about where we spend it.

Don’t underestimate the value of spending (sometimes a lot of) time thinking things through and framing the problem well. The time to “try things” is when you can pencil in a viable path to value. Anything else should be considered exploratory data analysis (EDA), and the primary aim of EDA is to de-risk project/task selection.

#3: Risk management is an essential part of every conversation about our work.

Some problems, if left unsolved, will be lethal to our goals and initiatives. We want to identify the threats most likely to be lethal and disarm them.

The specific risks to our current projects should be written down. As we learn more, we make note of any changes to the perceived level of risk.

“Safe assumptions” are a major category of risk. Therefore, we should write down our critical assumptions and be paranoid about any of them falling apart.
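If it helps to make this concrete, here is a minimal sketch (in Python, purely illustrative) of the kind of written, regularly updated risk log described above. Everything in it is my own assumption rather than a prescribed format: the Risk structure, the field names, and the example entry are hypothetical, and a shared doc or spreadsheet serves the same purpose.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class Risk:
    """One written-down risk: the threat, the assumption it rests on,
    and our current perception of how serious it is."""
    description: str           # the threat, stated plainly
    critical_assumption: str   # the "safe assumption" it depends on
    perceived_severity: str    # e.g. "lethal", "serious", "minor"
    history: List[str] = field(default_factory=list)

    def update(self, new_severity: str, why: str) -> None:
        # Record a change in perceived risk as we learn more.
        self.history.append(
            f"{date.today()}: {self.perceived_severity} -> {new_severity} ({why})"
        )
        self.perceived_severity = new_severity

# Example entry: an assumption we are paranoid about falling apart.
labeling_risk = Risk(
    description="Labels arrive too slowly to close the feedback loop",
    critical_assumption="The ops team can label enough examples each week",
    perceived_severity="lethal",
)
labeling_risk.update("serious", "ops committed a dedicated labeler")
```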

What do you think? Email me your ideas!