Designing the Problem


In engineering, we often solve problems by breaking them down. Decomposition gives us clarity, focus, and manageable subproblems we can assign to teams, tools, or even entire disciplines. But decomposition comes with a cost: when functional requirements are tightly coupled, splitting a problem into independent parts can make some solutions (often the best ones) unreachable. The structure of the problem becomes the structure of the solution space. And that structure, more often than not, reflects historical accident more than deliberate design.

This post explores how decomposition shapes design, using principles from axiomatic design and systems engineering. We’ll see how the act of breaking a problem into parts can inadvertently encode coupling that no amount of downstream optimization can fix; and how reframing the problem, and embedding expertise more interactively, can unlock better solutions.

Axioms

Picture using a water faucet. There are two things you want to control: the temperature of the water, and the rate at which it flows. With a two-knob faucet, one knob controls the flow rate of the hot water, and the other the flow rate of the cold water. In order to get the overall temperature and flow rate you want, you typically need to fiddle with the knobs back and forth for a while, because adjusting one output affects the other. With a single-handle faucet, turning the lever left to right controls the temperature, and pulling it outward increases the flow rate. You can now directly and independently control the outputs you care about; all of the “fiddling” has already been engineered into the design of the faucet.

In the parlance of axiomatic design, the “engineering problem” of faucet use has two functional requirements, or FRs (things the system must do): temperature and flow rate. Each style of faucet has two design parameters, or DPs (things that can be adjusted): hot flow rate and cold flow rate in the case of the two-knob faucet, and overall temperature and flow rate in the case of the single-handle faucet. With the two-knob faucet, the FRs are coupled, because adjusting one DP (knob) affects both FRs. The single-handle faucet is considered a superior formulation because, in contrast, the FRs are uncoupled: each DP independently controls exactly one FR. As you have no doubt experienced, this makes the “problem” of achieving the desired overall temperature and flow rate much simpler to solve for the “engineer” using the faucet.
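
To make the coupling concrete, here is a minimal sketch in code; the ideal-mixing model and the supply temperatures are illustrative assumptions, not part of the faucet example itself:

```python
# Minimal sketch of the two faucet formulations. The supply temperatures and
# the ideal-mixing model are assumptions chosen for illustration.

T_HOT, T_COLD = 60.0, 10.0  # assumed supply temperatures, deg C

def two_knob(hot_flow, cold_flow):
    """DPs: hot and cold flow rates. Returns the FRs (temperature, total flow)."""
    flow = hot_flow + cold_flow
    temp = (hot_flow * T_HOT + cold_flow * T_COLD) / flow
    return temp, flow

def single_handle(temp, flow):
    """DPs: temperature and flow rate. Each DP maps to exactly one FR."""
    return temp, flow

# Adjusting only the hot knob changes *both* FRs -- the design is coupled:
print(two_knob(0.2, 0.3))  # (30.0, 0.5)
print(two_knob(0.4, 0.3))  # (~38.6, 0.7)

# Adjusting only one axis of the single-handle faucet changes exactly one FR:
print(single_handle(30.0, 0.5))
print(single_handle(38.6, 0.5))
```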

Modules

The foundational strategy of engineering problem-solving is decomposition: breaking the design of a complex system into smaller, more manageable parts. When we say manageable, what we are really talking about is decomposing the DPs: dealing with many interacting DPs at once is difficult (the two-knob faucet), while dealing with fewer at a time is easier (the single-handle faucet). In an ideal axiomatic design process, this follows from decomposing the FRs and identifying DPs iteratively until coupling is minimized; whatever coupled “blocks” of FRs and DPs remain in the design matrix become the subsystems to be designed relatively independently.
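
The design matrix makes this concrete. Here is a minimal sketch, with a toy matrix whose FRs, DPs, and coupling pattern are invented purely to show the block structure:

```python
import numpy as np

# Toy design matrix (the FRs, DPs, and coupling pattern are assumed): rows are
# FRs, columns are DPs, and a 1 means "this DP affects this FR".
design_matrix = np.array([
    # DP0  DP1  DP2  DP3
    [1,    1,   0,   0],  # FR0  } FR0/FR1 and DP0/DP1 form one coupled block
    [1,    1,   0,   0],  # FR1  } and must be designed together as a subsystem
    [0,    0,   1,   0],  # FR2  -- independent
    [0,    0,   0,   1],  # FR3  -- independent
])

# Because the off-diagonal blocks are zero, the coupled {FR0, FR1} x {DP0, DP1}
# block can go to one team while FR2/DP2 and FR3/DP3 are handled elsewhere.
```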

In practice, cognitive and logistical tractability tend to constrain the decomposition of DPs: for example, all of the DPs for a given subsystem need to fall within a single team’s domain of expertise, or within a single piece of design software’s capabilities. To compensate, we must redefine the FRs so that each such set of DPs can be treated independently, usually by inserting interface constraints or assumptions between subsystems. In effect, this means defining new pseudo-independent FRs whose only purpose is to create shared constraints that, for example, one team will design for and another team can assume are met.
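
As a minimal sketch of what such an interface constraint looks like in practice (the subsystems, budgets, and numbers below are made up; any shared spec value plays the same role):

```python
# Hypothetical example: an original FR ("total mass under 10 kg") spans two
# subsystems, so it is replaced by two pseudo-independent FRs -- one budget
# per team. All numbers here are illustrative assumptions.

TOTAL_MASS_BUDGET_KG = 10.0       # the original, coupled FR
STRUCTURE_MASS_BUDGET_KG = 6.0    # interface constraint team A designs for
PAYLOAD_MASS_BUDGET_KG = 4.0      # interface constraint team B assumes is met

def structure_ok(structure_mass_kg: float) -> bool:
    # Team A checks only its own budget, never seeing team B's DPs.
    return structure_mass_kg <= STRUCTURE_MASS_BUDGET_KG

def payload_ok(payload_mass_kg: float) -> bool:
    # Team B checks only its own budget, assuming team A's is met.
    return payload_mass_kg <= PAYLOAD_MASS_BUDGET_KG

# The split makes the work parallel, but a 7 kg structure with a 2.5 kg
# payload (9.5 kg total, feasible for the original FR) is now unreachable:
# the boundary has quietly excluded part of the solution space.
```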

Systems

This will, of course, be familiar to every engineer. It’s how we approach every non-trivial design. Usually, the constraints on the decomposition of DPs exist for very good reasons, and the interface constraints don’t have a huge impact on the overall optimality of the resulting design, with respect to the original FRs. But this isn’t always true — and even if it is for this design, it may not be for a future design in which we would like to reuse some of the same components.

John Carmack gives a great example from his experience with Oculus. When setting out to develop the control loop for head-mounted VR displays, the natural approach, as with most software systems, was to build on a variety of existing components. After all, there is nothing especially new about the idea of taking some hardware input (in this case, an IMU) and having it update graphics output (in this case, the camera point of view): we have existing sensor firmware, input device drivers, operating systems, game engines, graphics libraries, and GPUs from which to build the whole stack. Unfortunately, they were all developed for other contexts, like video games, in which the particular FR of low latency isn’t as strict: while it’s typical for video games to exhibit latencies of over 100 milliseconds between a control input and a graphics output, a VR headset starts to become unusable once that latency exceeds roughly 20 milliseconds.

Carmack’s solution is to take a systems engineering approach: to recognize the limitations imposed by the decomposition boundaries, dissolve them, and look at the problem holistically, end to end, with the integrity of the FRs that matter prioritized. The key insight is that the decomposition boundaries, for various reasons of convenience, cost, and inertia, tend to ossify and persist well outside their original scope, limiting optimization opportunities in designs to which they are ill fit. Carmack’s remedy of systems engineering is a recognition that the real FRs of interest (e.g. low latency) are too tightly coupled across decomposed DPs (e.g. sensor firmware, OS, game logic, and graphics concerns), and a comprehensive reformulation of the FRs and DPs that doesn’t compromise those FRs. That could mean a gnarly design matrix that essentially forces us into designing one big subsystem that comprises the whole system, hence systems engineering. This may seem like it’s violating the axioms, but the key insight just mentioned, put another way, is that they were already violated by the previous formulation.

Interactivity

So, how do we approach the actual design of such a big, highly coupled system? Again, we like decomposing problems for a reason: teams and tools that can address the full scope of a big, cross-functional optimization problem with many DPs, and thus a high-dimensional, highly nonlinear solution space, are a tall order. Of course, it does make sense to begin directly building these teams and tools once a specific opportunity is identified; we see this manifest in the cross-functional mindset of agile and DevOps. It isn’t necessarily an all-or-nothing proposition, though.

All we really need to avoid is committing to a rigid decomposition that might exclude the global optimum. That doesn’t automatically mean a total monolithic co-design of everything. We can keep the full system in view while still allowing various specialist sub-teams and tools to operate semi-independently. The key to this “soft” decomposition is interactive optimization: resolving hard coupling and nonlinearities encountered by one such sub-team or tool, perhaps around a particular DP, by having another sub-team or tool with domain expertise in that DP intervene with some heuristic steering. An effective engineering leader is able to identify where this makes sense and how frequently to iterate. Collaborative iteration and nested feedback loops will be familiar concepts to any practitioner of agile software development.
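
As a rough sketch of the shape such a loop might take (the function names and the stall-then-steer policy are hypothetical, just one way to arrange the collaboration):

```python
from typing import Callable, Dict

Design = Dict[str, float]  # named DPs and their current values

def interactive_optimize(
    evaluate: Callable[[Design], float],       # full-system objective, lower is better
    local_step: Callable[[Design], Design],    # one sub-team's or tool's own optimizer
    expert_steer: Callable[[Design], Design],  # heuristic nudge from the other domain
    design: Design,
    iterations: int = 20,
    stall_tolerance: float = 1e-3,
) -> Design:
    """Keep the whole system in view, but let specialists act semi-independently."""
    best = evaluate(design)
    for _ in range(iterations):
        candidate = local_step(design)
        score = evaluate(candidate)
        if best - score < stall_tolerance:
            # Progress has stalled on coupling the local tool can't see through:
            # ask the other domain's expertise for a heuristic steer instead of
            # hardening a new interface constraint.
            candidate = expert_steer(candidate)
            score = evaluate(candidate)
        if score < best:
            design, best = candidate, score
    return design
```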

Perception

My own example is robot-mounted vision systems, such as those used for inspection or scanning of large parts. In such a system, the camera and the robot are means to an end: the top-level FRs will be things like visual coverage and resolution, cycle time, and cost. For practical reasons, the DPs are typically divided into two domains: the camera team uses optical modeling tools to address DPs like field of view, resolution, lens parameters, exposure time, and mounting pose, and the robot team uses robot simulation tools to address DPs like robot path, kinematics, and dwell times. This decomposition is driven by tooling and expertise; perhaps, historically, we’ve built a lot of vision systems and a lot of robots, but not so many robot-mounted vision systems. Clearly, the aforementioned FRs are highly coupled across both teams’ DPs, so now we need to introduce interface constraints; for example, we could create a new FR that insists on a particular geometric zone of coverage for the camera, which the camera team will design for and the robot team will assume is satisfied. This works, but it constrains us to a subset of the solution space that may exclude more optimal designs. What if a wider-angle camera allows for a faster robot path? What if there is a path that would permit a cheaper, lower-resolution camera to be used? These options aren’t available, because we’ve already hardened the decomposition boundary.
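
To make the contrast concrete, here is a toy sketch of the joint formulation; every model and coefficient below is an invented placeholder rather than a real camera or robot model:

```python
def coverage(fov_deg: float, path_length_m: float) -> float:
    # Toy model: a wider lens or a longer path scans more of the part.
    return fov_deg * path_length_m / 100.0

def cycle_time_s(path_length_m: float, exposure_ms: float, n_shots: int = 50) -> float:
    # Toy model: travel time at 0.5 m/s plus total exposure time.
    return path_length_m / 0.5 + n_shots * exposure_ms / 1000.0

def camera_cost(fov_deg: float, resolution_mpx: float) -> float:
    # Toy model: you pay for resolution and for wide, well-corrected optics.
    return 50.0 * resolution_mpx + 5.0 * fov_deg

def joint_objective(fov_deg, resolution_mpx, exposure_ms, path_length_m) -> float:
    # The top-level FRs scored together: penalize missed coverage, then trade
    # cycle time against camera cost. A wider lens that buys a shorter path,
    # or a path that permits a cheaper camera, stays inside the search space.
    coverage_penalty = max(0.0, 30.0 - coverage(fov_deg, path_length_m)) * 100.0
    return coverage_penalty + 2.0 * cycle_time_s(path_length_m, exposure_ms) \
        + 0.1 * camera_cost(fov_deg, resolution_mpx)

# The decomposed version would instead freeze an interface constraint (a fixed
# coverage zone), optimize the camera against it, and hand the robot team a
# path problem that can no longer trade against lens choice.
```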

While optimizing a robot-mounted vision system as a monolith might be impractical with today’s teams and tools, where the ROI for better designs justifies it, we can start building tomorrow’s teams and tools. Human expertise in vision and robotics already overlaps significantly in other application areas, so we might get most of the way there with relative ease. As for tools, state-of-the-art camera and robot modeling and optimization algorithms can likely be combined to a large degree without sacrificing much capability. This larger problem space will undoubtedly create thorny spots for someone coming from one discipline or the other, and nonlinearities the algorithms might struggle with, but the expert-guided heuristics of interactive optimization, effectively a “soft,” iterative kind of interface constraint, bridge the gap.

Optimum

The real work of engineering isn’t just solving the problem: it’s structuring the problem in a way that makes good solutions possible. When we split systems into subsystems, or teams into specialties, or workflows into tools, we’re making design decisions at the highest level. If those boundaries reflect the underlying structure of the problem, they accelerate progress. But if they obscure important couplings, they can block us from seeing, or even imagining, the best designs.
