Like it or not, our modern computerized world runs on algorithms. Everything from navigation to your recommendations on Netflix is determined using algorithms that have some defined speed of convergence. When you’re working with a system that uses these algorithms in real time, they need to have an extremely fast speed of convergence while still providing the most accurate results.
In electronics design, you’ll often need to perform simulations to verify the functionality of your current design or proposed redesigns. Although there are many simulators out there, you’ll need to weigh their capabilities carefully. This involves more than looking at exactly what the simulation calculates: two different simulations may produce consistent results while considering different ranges of information from your design. The level of detail included in a given simulation affects its speed of convergence, and designers need to balance a fast convergence rate against accuracy in their simulations.
Accuracy vs. Speed of Convergence in Numerical Simulations
Although no single statement can be made regarding the accuracy of the results produced by any algorithm and its speed of convergence, there is a general tradeoff between the number of required calculations and accuracy for a given algorithm. A different algorithm may reach the required accuracy for a specific problem with fewer calculations, but for a fixed processor speed, there is still a tradeoff between accuracy and convergence rate.
In simulations for electronics design and circuit simulations, numerical algorithms are used to take a complicated mathematics problem (usually a differential equation or set of differential equations in space and time) and convert it into a set of simple arithmetic problems. Complex systems can be broken down into millions of data points, requiring many millions of arithmetic calculations to solve the problem. In principle, you could work out these calculations by hand, but their sheer number quickly makes the problem intractable.
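As a hypothetical sketch of this idea, consider discharging an RC circuit. The governing differential equation dV/dt = -V/(RC) can be converted, via the forward Euler method, into a long sequence of simple multiply-and-subtract steps. The component values and step counts below are illustrative assumptions, not from any particular simulator:

```python
# Illustrative sketch: converting a circuit ODE into repeated arithmetic.
# An RC discharge obeys dV/dt = -V/(R*C); forward Euler replaces the
# derivative with a difference quotient, so each time step becomes one
# simple arithmetic update.
import math

R, C = 1e3, 1e-6          # assumed values: 1 kOhm, 1 uF -> tau = 1 ms
tau = R * C
dt = tau / 1000           # step size: 1000 arithmetic updates per tau
v = 5.0                   # assumed initial capacitor voltage (volts)

steps = 3000              # simulate three time constants
for _ in range(steps):
    v = v - dt * v / tau  # one simple arithmetic calculation per step

exact = 5.0 * math.exp(-steps * dt / tau)  # analytical solution for comparison
print(v, exact)
```

Even this toy problem takes 3,000 arithmetic steps to cover three time constants; a realistic circuit with many nodes multiplies that count enormously, which is why the work is left to a computer.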
For very complicated simulations, it may take a human a significant portion of their lifetime just to complete the entire calculation, and this assumes that you do not make any errors along the way. As these calculations in electromagnetic and circuit simulations tend to be iterative (meaning each calculation depends in some way on all previous calculations), an error in one calculation makes all future calculations incorrect. One can quickly see how this is a major challenge for even the brightest human mathematicians.
The complexity increases further when a more refined model is used in a simulation. In numerical electromagnetic and multiphysics simulations, the number of partitions you use to calculate the electric and magnetic fields in space and time will determine the accuracy of the final solution. As you discretize the simulation space into smaller sections, you will get more accurate results, but this requires more calculations as the electromagnetic field must be calculated at each point in space and time. In order to speed up the rate of convergence, you need to use some technique to reduce the number of calculations, or you need to use a faster computer.
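The accuracy/cost effect of finer discretization can be sketched with a simple stand-in problem. Here the field E = -dV/dx of an assumed potential V(x) = sin(pi x) is approximated by central differences on a coarse and a fine grid; halving the spacing roughly quarters the error but doubles the number of calculations. The potential and grid sizes are illustrative choices, not from any particular field solver:

```python
# Illustrative sketch: finer discretization gives higher accuracy at the
# cost of more calculations. We approximate E = -dV/dx for the assumed
# potential V(x) = sin(pi*x) using central differences.
import math

def max_error(n):
    """Worst-case field error on a grid of n interior points in (0, 1)."""
    h = 1.0 / (n + 1)
    err = 0.0
    for i in range(1, n + 1):
        x = i * h
        # central-difference estimate of E = -dV/dx (one calculation per point)
        e_approx = -(math.sin(math.pi * (x + h)) - math.sin(math.pi * (x - h))) / (2 * h)
        e_exact = -math.pi * math.cos(math.pi * x)
        err = max(err, abs(e_approx - e_exact))
    return err

coarse, fine = max_error(50), max_error(100)
print(coarse, fine)  # the finer grid is more accurate but does twice the work
```

The second-order accuracy of central differences means each halving of the grid spacing buys roughly a 4x error reduction, which is why refinement is so effective — and so expensive in higher dimensions, where the point count grows much faster than the spacing shrinks.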
The speed of convergence in aerodynamics simulations is also affected by discretization
Quantifying Speed of Convergence
Obviously, a computer can be programmed to run a loop that executes a set of iterative calculations extremely quickly and without errors. The processing speed and available memory of your computer will also affect the speed of convergence. With the large datasets used in some specialty machine learning and statistical analysis applications, parallel computing is used to spread the computational burden across multiple computers. This increases the number of calculations that can be performed per unit time, thus increasing the speed of convergence without sacrificing accuracy.
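The divide-and-combine idea behind parallel computing can be sketched in a few lines. The workload below (a sum over a million points, with `partial_sum` standing in for an expensive per-point calculation) is split into independent chunks, computed by a pool of workers, and recombined; across multiple machines the same decomposition raises the number of calculations per unit time. This is a minimal sketch using Python's standard thread pool, not a real distributed framework:

```python
# Illustrative sketch of parallel decomposition: split the dataset into
# chunks, compute partial results independently, then combine them.
# The result is identical to the serial calculation -- only the
# distribution of work changes.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    """Stand-in for an expensive per-point calculation over one chunk."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

n, workers = 1_000_000, 4          # assumed problem size and worker count
chunk = n // workers
bounds = [(w * chunk, n if w == workers - 1 else (w + 1) * chunk)
          for w in range(workers)]

with ThreadPoolExecutor(max_workers=workers) as pool:
    total = sum(pool.map(partial_sum, bounds))  # combine the partial results

print(total == sum(i * i for i in range(n)))   # same answer, split workload
```

Because each chunk is independent, accuracy is untouched; only wall-clock time improves when the chunks genuinely run on separate processors or machines.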
When building a simulation, the underlying algorithm requires a defined number of calculations to produce results with a given level of accuracy. The speed of convergence will be some function of that required number of calculations, which in turn depends on the model for the system and the physical quantities being simulated. The exact functional relationship depends on the exact algorithm used for calculation in your numerical simulation. If you like, you can define the speed of convergence (in units of inverse time) as the processing speed of your computer, v (in calculations per unit time), divided by some function f(N) of the required number of calculations N:

Speed of convergence = v / f(N)
Note that f(N) can be any function. For a desirable solution algorithm, f(N) will increase sublinearly, although it is normally a nonlinear function for higher-dimensional simulations or for algorithms that require multiple processing steps in each iteration. Note that f(N) can also change as the simulation progresses, meaning the solution can start to converge faster or slower over the course of the run.
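A hypothetical numerical comparison makes the role of f(N) concrete. For a fixed processor speed v, an algorithm whose calculation count grows slowly (near-linearly, like an FFT) keeps its convergence speed high as the problem grows, while a cubic f(N) does not. The processor speed and problem sizes here are assumed values for illustration only:

```python
# Illustrative sketch: convergence speed for a fixed processor speed v,
# comparing a slowly growing f(N) against a cubic one.
import math

v = 1e9  # assumed processor speed: 1e9 calculations per second

def convergence_speed(f, n):
    """Convergence speed (inverse time) for an algorithm needing f(n) calculations."""
    return v / f(n)

def near_linear(n):
    return n * math.log2(n)  # e.g., an FFT-like O(n log n) calculation count

def cubic(n):
    return n ** 3            # e.g., dense matrix factorization, O(n^3)

for n in (1_000, 10_000):
    print(n, convergence_speed(near_linear, n), convergence_speed(cubic, n))
```

For both algorithms the convergence speed drops as N grows, but the cubic algorithm's speed collapses far faster, which is exactly the tradeoff the definition above is meant to expose.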
In 3D simulations, the typical output will look like the example shown below. This mechanical simulation result shows deformation in a mechanical part due to some applied stress. You can get higher accuracy by using finer discretization. This will make the different colored regions appear much smaller with a much smoother transition between different regions, but this will decrease the speed of convergence.
Example results from a mechanical simulation
Speed of convergence for discretization methods is often quantified using big O notation. This notation nicely summarizes one aspect of the behavior of f(N). For example, in the Gauss-Jordan elimination used in SPICE-based simulations, the required number of calculations scales as O(n³), where n is the number of circuit elements in the system. In other words, f(N) is proportional to a third-degree polynomial, so the speed of convergence decreases rapidly as the number of circuit elements increases.
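The O(n³) scaling can be checked by counting operations in a textbook Gauss-Jordan sweep. The counter below is a simplified model of the elimination loop structure (it tallies scale-and-subtract operations rather than performing them), and shows that doubling the system size multiplies the work by roughly eight:

```python
# Illustrative sketch: operation count of textbook Gauss-Jordan elimination
# on an n x n system, demonstrating the O(n^3) growth. Doubling n
# multiplies the required calculations by about 2**3 = 8.
def gauss_jordan_ops(n):
    """Count multiply/subtract operations in a naive Gauss-Jordan sweep."""
    ops = 0
    for pivot in range(n):          # one elimination pass per pivot row
        for row in range(n):        # every other row is reduced against it
            if row != pivot:
                ops += n + 1        # scale-and-subtract across the augmented row
    return ops

small, large = gauss_jordan_ops(100), gauss_jordan_ops(200)
print(large / small)  # close to 8, confirming cubic scaling
```

This is why production SPICE engines lean on sparse-matrix techniques: most circuit matrices are mostly zeros, and exploiting that sparsity avoids paying the full dense O(n³) cost.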
There are other factors that affect the speed of convergence in an electromagnetic or multiphysics simulation. Note that these problems are not unique to circuit simulations or electromagnetic field simulations; they can arise in any type of simulation. This is a mathematically complex topic, but having some knowledge of issues that can generate errors in your results can help you design your simulations to have higher accuracy without significantly increasing computation time.
Working with the tools in the right PCB design and simulation software can help you design simulations that are tailored to your layout and schematics. Allegro PCB Designer and Cadence’s full suite of analysis tools include the simulation features you need to build simulation models directly from your design data and quickly analyze the results.
If you’re looking to learn more about how Cadence has the solution for you, talk to us and our team of experts.