How to Set Up a Design of Experiments

By Perry Parendo, Perry’s Solutions, LLC

We have all been there. We get a call from production. Our new product is not working as it did during early development. How could that be? Plus, our time is already consumed. We have started working on our next project. That prototype needs to get built and tested soon. Our long-awaited vacation is scheduled for next week. We had to reschedule our trip last time because the previous project was running behind. We were finally starting to relax from the pressure of everything. And now this?

Ok. Time to toss together a quick test plan and show that the product works. It certainly is fine. We are confident about that. Did they misunderstand something? We are not sure what they did wrong, but it sometimes happens. We can't afford a delay of our next project, and we certainly don't need to be distracted during vacation! So, let's just get this off our desk and out of our mind.

DOE Adds Structure
Is this a reasonable strategy? For moderate-risk items, the quick tests do not create enough learning. We may get lucky in the short term, but eventually the problem will return – often at the worst possible time, such as during full-scale production or maybe with a critical customer! But everyone seems to do these quick tests and it gets them out of hot water in the moment. Why shouldn’t we do the same?

Instead, we should add just a little bit of structure to the testing process. When the process is understood, it actually is a very logical approach. The methodology is Design of Experiments (DOE), and it has been around for about 90 years. To create a useful DOE matrix, we need some simple homework completed first. While we can do these tasks when an issue occurs, it is far easier to do them early in development when people are fresh and open-minded. It is smart project management to establish this brainstorming baseline for higher-risk areas early. What are the steps? There are three areas to address.

1. What is the goal of the test?
Describe, in words, why we are testing. What do we need to accomplish? An explanation of the situation allows the team to ensure the scope is appropriate. It should not be too narrow, which often happens with a quick test. It also should not be too broad. When solving complex problems, it can be beneficial to break them down into multiple tests, instead of trying to address unrelated problems in a single test. Avoid technical measures in this description. Focus on finding the right words to explain what is going on.

Practically speaking, if we realize the test plan has two aspects, can we test them at the same time? Or will they need to be run in series? This will impact the expected completion date for confidently resolving the problem. These are very important aspects of setting expectations for the test process.

2. Define how to measure progress toward the goal
This is where we translate the test goal into technical measurements. Avoid simply repeating the standard production specifications and measurements. Some creativity can accelerate problem identification and resolution. It also is important to consider the tradeoff measures. At a minimum, cost could be included. We often call it “competing requirements.” Failing to collect the needed data can restrict the team’s ability to solve the problem in a timely manner. It also may create a surprise impact later, which could have been avoided.

For example, if we are trying to increase the durability of a product, we may want to consider the weight of the product as a tradeoff. Or if we want to increase the speed of the product, it may require more power and thus a larger battery. Every situation has some counter measure which often is negatively impacted by the changes being evaluated.
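As a rough sketch of this measurement step, a simple check can ensure every run reports the primary response and the tradeoff measures together, so the competing-requirements data is never silently skipped. The measurement names below (drop cycles, weight, cost) are invented for illustration, not taken from the article:

```python
# Hypothetical measurement plan: the primary response plus tradeoff
# ("competing requirements") measures, captured for every test run.
measurement_plan = {
    "drop_cycles_to_failure": "primary: durability goal",
    "unit_weight_g": "tradeoff: added material may improve durability",
    "unit_cost_usd": "tradeoff: cost should be included at a minimum",
}

def record_run(run_id, **measurements):
    """Return a run record, refusing to accept incomplete data.

    Failing to collect a planned measurement restricts the team's
    ability to solve the problem later, so catch it at entry time.
    """
    missing = set(measurement_plan) - set(measurements)
    if missing:
        raise ValueError(
            f"run {run_id} is missing measurements: {sorted(missing)}"
        )
    return {"run": run_id, **measurements}

# A complete run is accepted; an incomplete one is rejected.
row = record_run(1, drop_cycles_to_failure=120,
                 unit_weight_g=85.0, unit_cost_usd=4.20)
```

The point of the guard is the article's warning: discovering after the test that a tradeoff measure was never collected is a surprise impact that could have been avoided.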

3. Define all potential input variables
The quick tests tend to evaluate only one input variable to find the smoking gun. At most, there will be two variables considered for testing (temperature and pressure, for example). It is not required to test every single variable listed, but the variables identified need to be thought about in some fashion during the test. Further, we encourage listing all potential variables. These lists can easily grow beyond 20 variables. The ones that are top of mind probably have already been tested. They still may be important, but one of the variables listed later could create the subtle performance shift needed to correct the problem.

Our favorite variable example is humidity. Why do we list it if we cannot control it? While we may not be able to include it during the test, we will at least know the test condition. Why is this important? Let’s say we are having new failures in high-humidity environments. If we realize we tested during low-humidity conditions, this would be useful knowledge and will impact the solution.
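The three steps above feed directly into a DOE matrix. As a minimal sketch (the factor names and settings here are hypothetical, chosen only to illustrate the structure), a two-level full-factorial matrix can be generated from the variable list, with uncontrolled conditions such as humidity recorded rather than varied:

```python
from itertools import product

def full_factorial(factors):
    """Build a two-level full-factorial DOE matrix.

    factors: dict mapping each input variable name to its
    (low, high) settings. Returns one run per combination,
    i.e. 2**k runs for k factors.
    """
    names = list(factors)
    return [dict(zip(names, levels))
            for levels in product(*(factors[n] for n in names))]

# Hypothetical factors for illustration only.
factors = {
    "temperature_C": (20, 60),
    "pressure_kPa": (100, 300),
    "cure_time_min": (10, 30),
}
matrix = full_factorial(factors)  # 2**3 = 8 runs

# Uncontrolled variables are not varied, but the condition is
# recorded per run so results can be interpreted later (the
# humidity example above).
for run in matrix:
    run["ambient_humidity_pct"] = 45  # measured, not controlled
```

In practice, with 20-plus candidate variables, a full factorial quickly becomes too large, which is why DOE practice screens the list down or uses fractional designs; the sketch only shows how the homework (a complete variable list, with known but uncontrolled conditions noted) becomes a test matrix.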

Plan for Success
When the technical situation requires a Design of Experiments, this baseline information allows a quick conversation to generate the test plan options. What are the downsides of not doing these steps early?

If performance testing is the only consideration, but we ignore cost and durability, we are going to have other problems down the road. A balance of business and technical needs provides a practical solution.

If we do not have a measurement system identified, we will not gather the knowledge needed to know when we have the solution. At times, we have needed to run an experiment just to develop new test methods. In those cases, much testing had already been completed, but without progress, because of this oversight.

If we fail to identify important variables, we can be limited in our ability to make decisions. For example, we were once leading a comparative product test at two different locations. The test environment was visibly different. However, discussions with the technical experts indicated the differences should not impact the test results (because of the other controls already in place). When the test ended, some people challenged the recommendation. We overcame the resistance because we had clearly documented the expert position that the test location did not have a critical impact on performance.

The effort to collect this information is minimal, often taking just one meeting. It will not delay the start of the test. It will not negatively impact the test duration. And it will make a significant impact on the ability to make smart decisions at the end of the test effort.

In typical problem-solving tests, does the team race to get a test started, just to feel progress is being made? Or does the team follow steps similar to those described above to ensure a confident test plan is developed? Further, what were the short- and long-term results of previous test efforts? Do the problems go away, or do they reappear down the road? Would it be nice to avoid similar project impacts moving forward and to increase the company's problem-solving batting average?

Someone asked us once, “What is your batting average with these DOE test methods?” The answer? It is 100%. We always learn and always move forward. We never end a test thinking, “I have no idea what happened there. What should we do now?” It is nice to consistently have conclusive evidence and support for the recommendation we communicate at the end of a test experience. It is available to everyone. A little discipline and structure are powerful when it comes to product and process testing!

Perry Parendo is the founder and president of Perry’s Solutions, LLC, a consultancy focused on speed and predictability in new product development, specializing in Design of Experiments methods. Parendo can be reached by calling 651.230.3861 or via www.perryssolutions.com.