## The power of adjoints

Computational cost and numerical efficiency must always be considered when performing numerical optimization. On the one hand, the software must be efficient enough that realistic problems can be solved in a reasonable time. On the other, users want assurance that their specific problem can be solved quickly, that is, that the software employed is fit for purpose.

At Wolf Dynamics, we have accumulated decades of experience in adjoint optimization. If we consider an input-output formulation, the sensitivity can be written as the gradient of the output with respect to the input. For instance, in gradient-based shape optimization, where the sensitivity is used iteratively to update the shape, a common output is the drag force, while the input is a representation of the airfoil shape.
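In symbols, a common sketch of this idea (our notation here is illustrative: $J$ the output, $x$ the design variables, $u$ the state, and $R(u, x) = 0$ the governing equations) reads:

```latex
\frac{\mathrm{d}J}{\mathrm{d}x}
  = \frac{\partial J}{\partial x}
  + \lambda^{\mathsf T}\,\frac{\partial R}{\partial x},
\qquad\text{where the adjoint variable } \lambda \text{ satisfies}\qquad
\left(\frac{\partial R}{\partial u}\right)^{\!\mathsf T}\!\lambda
  = -\left(\frac{\partial J}{\partial u}\right)^{\!\mathsf T}.
```

The key point is that the adjoint equation on the right does not depend on $x$: solving it once gives the full gradient, regardless of how many design variables there are.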

The efficiency comes from the fact that the gradient can be evaluated by a **single** solution of the adjoint equation, whose computational cost is comparable to one solution of the direct problem. Compare this with the cost of computing the same gradient using, for instance, finite differences: the governing equations must then be solved once per input dimension, which can mean thousands of additional solves (or more!).
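This cost argument can be made concrete with a toy linear example (a minimal sketch with hypothetical dimensions, not our production solver): the "governing equations" are a linear system $A u = B x$, the output is $J = c^{\mathsf T} u$, and the adjoint gradient requires one extra transposed solve, while finite differences require one extra direct solve per design parameter.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50   # state dimension (illustrative)
m = 20   # number of design parameters (illustrative)

A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned "governing" operator
B = rng.standard_normal((n, m))                  # forcing depends linearly on the design x
c = rng.standard_normal(n)                       # output functional J(x) = c^T u(x)

x = rng.standard_normal(m)

def J(x):
    """One solution of the direct problem A u = B x, then the scalar output."""
    u = np.linalg.solve(A, B @ x)
    return c @ u

# Adjoint gradient: ONE extra solve of A^T lam = c, then dJ/dx = B^T lam.
lam = np.linalg.solve(A.T, c)
grad_adjoint = B.T @ lam

# Finite differences: one extra direct solve PER design parameter (m solves).
h = 1e-6
grad_fd = np.array([(J(x + h * e) - J(x)) / h for e in np.eye(m)])

print(np.max(np.abs(grad_adjoint - grad_fd)))  # small: the two gradients agree
```

For `m` design parameters, finite differences perform `m` additional direct solves, while the adjoint approach performs exactly one transposed solve, which is the whole point when `m` runs into the thousands.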

Fields which take advantage of the power of adjoint equations range from optimal control to structural sensitivity analysis, from data assimilation to receptivity analysis. A recent review paper on adjoints is available here.

*Base design (left) vs. optimized design (right).*