Below is an edited transcript.

Jessica Ching, CMO at Product Realization Group:

What can go wrong? In other words, what did go wrong, and what can we learn from that? You’ve told us what we should do to start putting the NPI process in place. This is an industry with little precedent, and FreeWire had a very complex product with high technical requirements. So tell us: what can go wrong, and what can we learn?

Jay Feldis, Sr. TPM at Product Realization Group:

Of course, anything that can go wrong probably will. But in this case, I’d say [what went wrong were] very often assumptions about reliability. People will assume that because they bought a specified part, it’s going to work as specified, or they assume something from the specs. [They assume] that the design works great.

The first one they built works great. They get it out in the field, they start building more than one, and things start failing. It takes a lot of time to figure out why. And it could be anything from “well, we’re not using it exactly the way they spec’d it” to “their specs weren’t really true.”

Other things can go wrong, too. Generally, it’s assumptions or beliefs that you did not validate, or hadn’t gotten around to validating. For example, you specify a part, then find the lead time on it is 18 weeks, and that blows your schedule. It can go on and on: whether the specs can be met, whether the thing is going to fit in the chassis it’s intended to fit in.

So that’s why I go back to having requirements, and then building a plan around them. It’s a process of stating what you want, and then validating it. You have to validate each of these things as you go, and until you’ve validated something, you can’t assume it’s true. That is the really important part of the NPI process: what tests are you doing in engineering, and with your early prototypes, to validate that what you think you’ve designed is really going to happen?

Jeff Rosen, Principal at JSROSEN Consulting:

I want to build on that and hit on what is always a very sensitive topic. What goes wrong? Shipping before you’re ready, and the implications of shipping before you’re ready. There’s always a difficult trade-off between wanting to get products into customers’ hands earlier rather than later. The challenge is that once you deploy a product, if it’s got problems on the reliability side or the functionality side, those problems don’t go away.

What we really struggled with in the middle of last year was a significant drain on resources to support the products deployed in the field. At the same time, Martin was trying to ramp up and productize that existing product, but resources were limited.

When you are making decisions about product readiness for shipping, you have to be aware of the implications that go with them: it’s not going to be a low-touch situation, and you are going to consume a lot of resources. When you’re in the middle of it, there’s always the urgency of shipping to a customer, and that’s what seems to matter. But if you know ahead of time what’s going to happen [with the product], you know you are going to have to put a lot of resources into that support role.

The second thing is not knowing what is going to go wrong: assuming that you know how a product is going to function without really putting it through its paces. Not testing at all, and just assuming you know how it’s going to behave or how the customer is going to deploy it. I think there were actually a lot of assumptions like this, about what the product could withstand and how customers were going to use it. That information was assumed, and it diverged very significantly from how the customer was really using the product, and from how the product behaved in different environments.

Audience Question:

Can you give some specific examples? I’m curious what you thought [customers would do] and what they were actually doing with it.

Jeff:

Martin [saw] this, because he had to go out and visit many of the customers. First of all, it’s the environment. There’s how you visualize a person interacting with the product versus how they actually do. Very different. Then there’s the fact that this is a product that’s out in the environment, and has to hold up in that environment, whether that’s dust or moisture, and there wasn’t a lot of work put in on a lot of the major controls. Just blocking-and-tackling stuff.

It starts with the 101-level stuff: how is the customer going to interact with this? Can they see the screen? How easy is it to manage and program this thing for the environment it’s in?