PART 2

A guest post by FPrin, First Principles Based Engineering & Design

In the first part of this series, we explained the benefits of Knowledge-Driven Product Development (KDPD), a way to blend analytical and empirical data to arrive at a more complete, objective understanding of the design.

In this part, we break down the individual steps as a team might experience them in the real world.

The steps we employ to accomplish this are:

  1.  Build simple “capability” scoping models
    1. Catalog high-level requirements and define them in terms of very basic physics. For example:
      • A requirement is to deliver a certain amount of fluid from a pump with a certain overall pressure head, thereby defining fluid work – the “useful work” of the system
      • Another requirement is to achieve a certain package size or weight, which constrains energy storage
      • If the proposed energy storage capacity does not exceed the proposed work ambition by a large margin – to account for inevitable losses – then the team is not heading in a good direction.
    2. By doing this, the team has completed a basic assessment of the ambition and the technology – they have reached the figurative “high ground” and can expect to achieve the design goals without violating the laws of physics. (A minimal scoping check of this kind is sketched below.)
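
To make this capability check concrete, here is a minimal scoping sketch in Python. All of the numbers (delivered volume, pressure head, battery capacity) are hypothetical placeholders, not values from a real program; the point is simply that useful fluid work should be compared against stored energy and found to have a generous margin for losses.

```python
# Minimal capability scoping check (illustrative, assumed numbers only):
# does stored energy comfortably exceed the useful fluid work of the device?

delivered_volume_m3 = 3.0e-6      # 3 mL total delivered volume (assumed)
pressure_head_pa = 200e3          # 200 kPa overall pressure head (assumed)
fluid_work_j = delivered_volume_m3 * pressure_head_pa   # W = P * V, the "useful work"

battery_capacity_mah = 100        # assumed coin-cell-class battery
battery_voltage_v = 3.0
stored_energy_j = battery_capacity_mah / 1000 * 3600 * battery_voltage_v  # mAh -> J

margin = stored_energy_j / fluid_work_j
print(f"Useful fluid work: {fluid_work_j:.2f} J")
print(f"Stored energy:     {stored_energy_j:.0f} J")
print(f"Margin:            {margin:.0f}x")

# If this margin is not large enough to absorb the inevitable chain of mechanical
# and electrical losses, the concept is not heading in a good direction.
```
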
  2. Reduce Complex Designs to Manageable Subsystems
    1. Break down the system into sensible subsystems or even sometimes into the actual components – depending on complexity
    2. Sub-systems usually break down according to function. For example, for a wearable pump:
      • Cannula/Needle Insertion System
      • Fluid Delivery System
      • Primary Container/Mechanical Displacement System
      • Energy Storage System
      • Electronics System
      • User Interface
    3. Create an interface matrix – catalog and classify “touch-points” between sub-systems. An interface matrix – which may include aspects such as physical contact or energy transfer – surfaces interactions that are potentially relevant and might otherwise be overlooked. For example, if component A touches component B, there is a potential wear interface, or a force that both components need to accommodate. Or perhaps there is an interface between the fluid and a component, which influences material choices for that wetted component.
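
As a simple illustration, an interface matrix can start life as nothing more than a table of sub-system pairs and interaction classes. The entries below are hypothetical, loosely following the wearable-pump breakdown above.

```python
# A toy interface matrix: each entry records a "touch-point" between two
# sub-systems and the classes of interaction to consider (hypothetical entries).

interface_matrix = {
    ("Fluid Delivery", "Primary Container"):   ["physical contact", "fluid", "force"],
    ("Primary Container", "Mechanical Drive"): ["physical contact", "force", "wear"],
    ("Energy Storage", "Electronics"):         ["electrical energy", "thermal"],
    ("Electronics", "Mechanical Drive"):       ["electrical energy", "control signal"],
    ("Cannula Insertion", "User Interface"):   ["user action", "force"],
}

# Scan for interaction classes that tend to be overlooked, e.g. wetted materials.
for (a, b), interactions in interface_matrix.items():
    if "fluid" in interactions:
        print(f"Wetted interface: {a} <-> {b} -> check material compatibility")
    if "wear" in interactions:
        print(f"Wear interface:   {a} <-> {b} -> check friction and wear life")
```
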
  3. Build First-Principles based, sub-system, scoping models
    1. Begin with relatively simple scoping models (non-FEA – e.g., Excel/Mathcad, or even hand calculations) on the least well-understood sub-systems/components (a toy example is sketched after this list)
      • These are typically simple 1-D models of mechanical, fluid, thermal, or energy sub-systems where the team applies laws of fluid flow, statics and dynamics, heat transfer, conservation of energy, power, etc.
      • These models can typically be arranged in a series structure – output of one sub-system is the input to another
      • These models inform the product team of basic dependencies and sensitivities
      • They also inform the team of key unknowns – things that cannot be accessed from a catalog or website, things that require simple experiments to fill knowledge gaps
        1. Example: Coefficient of friction is nearly always something that needs to be determined, or refined, empirically
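
The sketch below shows what such a 1-D, series-structured scoping model might look like in code: a hypothetical battery → motor → plunger → fluid chain, where the output of each stage feeds the next and the seal friction coefficient stands out as exactly the kind of parameter that usually has to be pinned down empirically. All function names and numbers are illustrative assumptions.

```python
# A hypothetical 1-D "series" scoping model: energy flows battery -> motor ->
# plunger -> fluid, with each stage's output feeding the next stage's input.

def motor_output_j(electrical_energy_j, motor_efficiency=0.6):
    """Mechanical energy available at the gearbox output (assumed efficiency)."""
    return electrical_energy_j * motor_efficiency

def plunger_output_j(mechanical_energy_j, normal_force_n, travel_m, mu=0.3):
    """Energy left after sliding friction at the plunger seal (mu is a key unknown)."""
    friction_loss_j = mu * normal_force_n * travel_m
    return mechanical_energy_j - friction_loss_j

def fluid_work_j(delivered_volume_m3, pressure_head_pa):
    """Useful hydraulic work required: W = P * V."""
    return delivered_volume_m3 * pressure_head_pa

available = plunger_output_j(motor_output_j(electrical_energy_j=5.0),
                             normal_force_n=10.0, travel_m=0.02)
required = fluid_work_j(delivered_volume_m3=3.0e-6, pressure_head_pa=200e3)
print(f"Available at plunger: {available:.2f} J; required fluid work: {required:.2f} J")
```
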
  4. Simple, Empirical Tests to inform and/or confirm the models
    1. Exercise the simple models to evaluate the response. Map the response surface to assess robustness. Look for parameters that have undue influence on the response – things that, if varied by a small amount (as can be expected with manufactured components or changes in operating conditions), result in a large change in output. (A simple sweep of this kind is sketched after this list.)
      • Preferably, the team has several design options to choose from, and they can be evaluated comparatively in a model-based environment.
      • Preferably, the team evaluates them objectively and against criteria that are aligned with the system-level needs and derived requirements.
    2. When appropriate build more integrated “breadboards” representing functional subsystems. These can be useful to verify the analytical model.
    3. Iterate the model – virtually and informed by the structure of the model. This is cheap, simple, and fast and gives fantastic insight into viable operating “zones” – the places that the design parameters should be “centered” and where the team can expect robust performance.
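
One way to exercise such a model is a simple parameter sweep: vary one input over its plausible range and watch how strongly the output responds. The example below sweeps a hypothetical plunger-seal friction coefficient through an assumed range; parameters that swing the output disproportionately are the ones that deserve early empirical attention.

```python
import numpy as np

# Simple sensitivity sweep over an assumed plunger-seal friction coefficient.
# All numbers are illustrative placeholders, not values from a real program.

def available_energy_j(mu, mechanical_budget_j=1.0, normal_force_n=30.0, travel_m=0.05):
    """Energy left for fluid work after plunger-seal friction (toy model)."""
    return mechanical_budget_j - mu * normal_force_n * travel_m

for mu in np.linspace(0.1, 0.6, 6):
    print(f"mu = {mu:.2f} -> available energy = {available_energy_j(mu):.2f} J")

# A steep slope flags mu as a high-sensitivity parameter: small manufacturing or
# operating-condition shifts would produce large output changes, so it is worth
# a simple bench friction test before committing to a design direction.
```
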
  5. Integrate subsystem models into a system model
    1. Begin to stitch the sub-system models together to provide a larger, system-level model
    2. The objective is to develop an accurate digital twin – a virtual “ghost” – of the physical design
      • This is a tool that helps guide the design process and assess system level robustness
      • It allows the team to examine “what-if” scenarios quickly and cheaply
      • The team can easily and quickly look at extremes of variation/corner cases.
      • Ultimately, the team can perform Monte Carlo simulations on the entire system – building and testing large numbers of virtual devices that sample the full range of input variation from virtual populations, combined randomly – without scaling up tooling or building expensive test environments.
    3. At an appropriate time in this process, migrate the model into a more capable modeling and simulation environment, such as MATLAB or Simulink (a toy version of this stitching is sketched below)
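
A first cut at this stitching can be as simple as composing the sub-system functions so that each output feeds the next stage, yielding one end-to-end function from design parameters to a system-level output. The sketch below is a deliberately tiny, hypothetical example; none of the functions or numbers come from a real program.

```python
# Hypothetical system model: compose sub-system models end-to-end so that a
# single function maps design parameters to a system-level output.

def battery_energy_j(capacity_mah, voltage_v):
    return capacity_mah / 1000 * 3600 * voltage_v           # mAh -> joules

def drivetrain_energy_j(electrical_j, efficiency):
    return electrical_j * efficiency                         # motor/gearbox losses

def deliverable_volume_m3(mechanical_j, pressure_head_pa, seal_loss_fraction):
    usable_j = mechanical_j * (1.0 - seal_loss_fraction)     # plunger-seal losses
    return usable_j / pressure_head_pa                       # V = W / P

def system_model(params):
    """End-to-end model: design parameters in, deliverable fluid volume out."""
    electrical = battery_energy_j(params["capacity_mah"], params["voltage_v"])
    mechanical = drivetrain_energy_j(electrical, params["efficiency"])
    return deliverable_volume_m3(mechanical, params["pressure_head_pa"],
                                 params["seal_loss_fraction"])

nominal = {"capacity_mah": 100, "voltage_v": 3.0, "efficiency": 0.5,
           "pressure_head_pa": 200e3, "seal_loss_fraction": 0.2}
print(f"Nominal deliverable volume: {system_model(nominal) * 1e6:.0f} mL")
```
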
  6. Use the model to Inform the design
    1. The model is a means to an end.
    2. It is a design tool that guides the team on where to take the design
    3. It allows the team to explore “what-if” scenarios
    4. It gives the team:
      • An understanding of where the design sits on the response surface
      • Insight into the overall robustness of the design
  7. Stress and Test the System Virtually
    1. Now the team can impose virtual “over-pressure” conditions to understand design robustness – virtual HALT/HASS testing
    2. This gives insight into stress state and design margin
    3. Explore tolerance and use-case extremes and assess system response
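
Building on the toy system_model sketched in the previous step, virtual stress testing can start with something as blunt as evaluating the model at every corner of an assumed tolerance and use-case space and reporting the worst-case margin against the requirement. The corner values below are hypothetical.

```python
from itertools import product

# Hypothetical virtual "over-stress" check: evaluate system_model (from the
# sketch above) at every corner of the tolerance / use-case space.

corners = {
    "capacity_mah":       (80, 100),        # end-of-life vs fresh battery (assumed)
    "efficiency":         (0.35, 0.55),     # cold vs nominal drivetrain (assumed)
    "pressure_head_pa":   (150e3, 300e3),   # includes a high-backpressure case
    "seal_loss_fraction": (0.10, 0.40),
}

required_volume_m3 = 3.0e-6                 # assumed delivery requirement

worst_margin = float("inf")
for combo in product(*corners.values()):
    params = dict(zip(corners.keys(), combo), voltage_v=3.0)
    margin = system_model(params) / required_volume_m3
    worst_margin = min(worst_margin, margin)

print(f"Worst-case corner margin: {worst_margin:.0f}x the required delivery volume")
```
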
  8. Prototype when confidence is High
    1. System-level physical prototyping occurs when you are satisfied that the design is well understood and well characterized analytically. You want to build and test to confirm the model output, rather than to explore the response space
      • This approach avoids wasting time “throwing” physical prototypes against the wall and hoping that they stick
    2. At this stage, physical prototypes allow us to verify the model, which then becomes a valuable tool to support anticipated DVT, NPI, scale-up, and manufacturing
  9. BONUS: Preparing for Scale-up
    1. Once the model is in place and confirmed, Monte Carlo simulations allow you to build virtual devices and gain insight into (a minimal example follows this list):
      • Design Robustness
      • Yield
      • Vulnerability to recall
      • Etc.
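
As a hypothetical illustration, the toy system_model from step 5 can be driven by a Monte Carlo sample of “virtual devices” drawn from assumed manufacturing and use distributions, with the fraction meeting the delivery requirement read off as a first-cut yield estimate. The distributions below are illustrative assumptions.

```python
import numpy as np

# Hypothetical Monte Carlo "virtual build": sample many devices from assumed
# component distributions, run each through system_model, and estimate yield.

rng = np.random.default_rng(0)
n_devices = 10_000
required_volume_m3 = 3.0e-6        # assumed delivery requirement

samples = {
    "capacity_mah":       rng.normal(100,   5,    n_devices),
    "voltage_v":          rng.normal(3.0,   0.05, n_devices),
    "efficiency":         rng.normal(0.50,  0.04, n_devices),
    "pressure_head_pa":   rng.normal(200e3, 25e3, n_devices),
    "seal_loss_fraction": rng.normal(0.20,  0.05, n_devices),
}

volumes = np.array([
    system_model({key: samples[key][i] for key in samples})
    for i in range(n_devices)
])

yield_fraction = np.mean(volumes >= required_volume_m3)
print(f"Estimated virtual yield: {yield_fraction:.2%} of {n_devices} virtual devices")
```
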

 

Conclusion:

This “go slow now to go fast later,” First Principles-based alternative to a DBA enables objective design decisions. Models, simple tests, careful consideration of what the models tell us, and application of the models as design tools all support a more reliable, predictable, and efficient product development process.

This is the approach that we employ at FPrin. We perform analysis, modeling, and test to build a knowledge base for our design. We use this to determine strengths and weaknesses of the design and to map an informed approach to design development. This knowledge tells us where we need to focus our efforts – specifically on “make or break” design elements – and where we can afford to defer closer scrutiny. It allows us to put the right resources on the right problem at the right time.

With KDPD, the engineering team can gain a basic understanding of the underlying physics of the current design at an early stage in the design process. This is accomplished by doing basic scoping, or bounding-case, analyses first, then breaking the design down into components or sub-systems and creating simple, model-based analyses of these subsystems to gain a deeper understanding of their workings and their interactions with each other. This method not only informs the team of the basic mathematical relationships that govern the device functionality, but also gives the team insight into the terms that are likely the most significant (e.g., the terms that are raised to the fourth power). It also informs the team about the unknown terms, which may require simple, low-level testing to define.

At FPrin, we strongly believe that the up-front investment in time that a KDPD method requires is well worth it for the back-end benefits that are realized. Shorter design cycles, predictable development timelines and costs, a well-understood response of the design to variation, and design assurance all combine to make the difference between gambling on a design and an effective, predictable, risk-managed development effort.

About the authors

FPrin is a group of inquisitive engineers, designers, and innovators who take a First Principles approach to product development. We dive deeply into the “why” to solve hard problems, help develop breakthrough products, and bring those products to market. Blending analytical and empirical data provides assurance to our clients. Learn more at fprin.com.