Using the Expectation Maximization Algorithm in Circuit Analysis

Give your circuit some brains with expectation maximization


When people think of clustering algorithms, they don’t normally think of circuit design and analysis. Techniques like evolutionary computing and Monte Carlo simulations are certainly used (sometimes together) to aid circuit optimization, but these are not necessarily learning algorithms in the classic sense. Expectation maximization, a well-known clustering technique in machine learning, can be applied in numerous areas, ranging from finance to topics in structural and electronics engineering.

As part of circuit analysis, you can use expectation maximization to learn some important points about how your circuit operates in the face of variations in component values, as well as hidden, unobservable effects like noise or EMI from external sources. By accounting for the possible influence of hidden variables governing your data, you can reliably quantify how noise influences the operation of your circuit.

Expectation Maximization vs. Reliability-Based Optimization

If you’re familiar with the latter term, you know that reliability-based optimization is a circuit design technique that considers variances in component values during circuit analysis. Essentially, you are given (or allowed to specify) the mean and variance on component values in your circuit. Your job is then to determine how the output from the circuit (either voltage, current, or both) will be affected by these variations in component values. Alternatively, you might calculate the current and/or voltage in a specific portion of the circuit given these variations. By looking at the variations in the calculated current and/or voltage, you can then determine the likelihood of failure of a given component.
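The reliability-based workflow described above can be sketched with a simple Monte Carlo simulation. The snippet below uses a hypothetical voltage divider with assumed 1% (1-sigma) resistor tolerances; the circuit, nominal values, and tolerances are all illustrative assumptions, not taken from a real design.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical voltage divider: Vout = Vin * R2 / (R1 + R2).
# Assume 1% (1-sigma) normal tolerances on nominal 10k / 4.7k resistors.
VIN = 5.0
N = 100_000
R1 = rng.normal(10e3, 0.01 * 10e3, N)
R2 = rng.normal(4.7e3, 0.01 * 4.7e3, N)

# Simulate the output voltage for each sampled pair of component values.
vout = VIN * R2 / (R1 + R2)

# The spread of the simulated output tells you how component variation
# alone propagates to the quantity you would measure.
print(f"mean Vout = {vout.mean():.4f} V")
print(f"std  Vout = {vout.std():.4f} V")
```

Comparing the simulated output spread against a component's rated limits is what lets you estimate the likelihood of failure mentioned above.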

Just like probability and statistics approach the same problem from different directions, so do expectation maximization and reliability-based optimization. In reliability-based optimization, you define some probability distribution that determines your component values, and you simulate the electrical data you would expect to observe in an experiment. In contrast, expectation maximization involves determining the parameters of the probability distribution that defines your observations, given some set of random inputs (in this case, the variance in your component values).

If this sounds like log-likelihood maximization, then you are correct; expectation maximization is a method for maximizing the log-likelihood function in the presence of some other latent variables. Here, latent really means hidden, or unobserved; these are normally labelled Z in expectation maximization. In the context of circuit analysis, a hidden variable could be an unaccounted-for noise source, external EMI, mechanical vibration, manufacturing imperfections, or any other perturbation in the circuit that has little or nothing to do with natural variations in your component values. 

Although expectation maximization and reliability-based optimization can both be used to account for variations in component values, the end goals of the two techniques are quite different. Reliability-based optimization is all about designing the circuit, while expectation maximization is used to interpret observations. Think of this as comparing a prediction with an experiment; by comparing expectation maximization results with an interpolation from reliability-based optimization, you can directly determine how unobserved perturbations in your circuit affect its operation.


Example results you might see when comparing reliability-based optimization with expectation maximization. Here, Z is the set of latent variables.


Applying Expectation Maximization to Circuit Analysis

Note that the probability distribution you want to obtain in expectation maximization is the conditional probability distribution for your circuit in the presence of the aforementioned latent variables. You can easily consider variations in your component values as being independent identically distributed (i.i.d.) random variables. Depending on the relationship between your latent variables and component values, your latent variables may not be independent of your component variations. For example, externally radiated EMI would be independent of your component values, while the strength of a crosstalk signal between two portions of a PCB would not be independent of your component values.

This possibility of dependence between the latent variables and your random component values is accounted for in expectation maximization. The simplest way to do this is to define a binary conditional distribution, or to define a linear relationship between the two sets of variables (this is a natural choice for linear time-invariant circuits). Both choices are used in many introductory treatments on expectation maximization.
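As a minimal sketch of the linear-coupling choice, suppose a hypothetical latent crosstalk amplitude Z adds to the component-driven output through an assumed coupling coefficient a. For simplicity this sketch takes Z independent of the component values; all numbers are illustrative, not from a real design.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical linear coupling: observed voltage = component-driven output
# + a * Z, where Z is a latent crosstalk amplitude (unobserved).
a = 0.05                                   # assumed coupling coefficient
v_component = rng.normal(3.3, 0.02, 1000)  # variation from component values
z = rng.normal(0.0, 1.0, 1000)             # latent variable, unobserved
v_observed = v_component + a * z

# With Z independent of the components, the latent term inflates the
# observed variance: Var(V_obs) = Var(V_comp) + a**2 * Var(Z).
print(f"component-only std: {v_component.std():.4f}")
print(f"observed std:       {v_observed.std():.4f}")
```

The gap between the component-only spread and the observed spread is exactly the kind of latent-variable contribution expectation maximization is meant to recover.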

The natural choice for the conditional distribution of your measured voltage and/or current is a multivariate normal distribution (thanks to the central limit theorem). Note that this may not hold in nonlinear circuits, or in linear circuits with feedback. The goal is to determine the mean and variance of this conditional distribution directly from your data.
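When there is no latent structure to untangle, the multivariate normal parameters are just the sample mean and covariance of your measurements. The sketch below uses synthetic paired (voltage, current) samples in place of lab data; the numbers are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired (voltage, current) measurements at one test point;
# synthesized correlated samples stand in for real lab data.
true_mean = np.array([3.3, 0.010])          # 3.3 V, 10 mA
true_cov = np.array([[1e-4, 4e-7],
                     [4e-7, 4e-9]])
samples = rng.multivariate_normal(true_mean, true_cov, size=5000)

# Maximum-likelihood estimates of the multivariate normal parameters.
mu_hat = samples.mean(axis=0)
cov_hat = np.cov(samples, rowvar=False)
print("estimated mean:", mu_hat)
print("estimated covariance:\n", cov_hat)
```

With latent variables in play, these closed-form estimates are no longer available directly, which is where the iterative procedure below comes in.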

Performing Expectation Maximization

To start, you need to define the following function for the expected value of the log-likelihood of your conditional probability density function P:

$$Q(\theta \mid \theta^{(t)}) = \mathbb{E}_{Z \mid X,\, \theta^{(t)}}\left[\log P(X, Z \mid \theta)\right]$$

Expected log-likelihood Q of the conditional probability density function P, where X is the observed data, Z the latent variables, and θ the distribution parameters.


In the first step (called the Expectation step), you define the above expectation function based on some initial estimate of the parameters that govern your distribution (normally the mean and variance) and your definition of Z. In the second step (called the Maximization step), you maximize Q with respect to the distribution parameters by setting its derivative equal to zero. This Maximization step is guaranteed never to decrease the likelihood, so your distribution steadily converges towards your data. The new parameters are then used to create a new Q function, and the process is repeated until the parameters converge.
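The two steps above can be sketched for the binary-latent-variable case mentioned earlier: a two-component Gaussian mixture, where Z = 1 marks samples taken during a hypothetical EMI perturbation. The data, initial guesses, and convergence criterion (a fixed iteration count) are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic node-voltage measurements: most samples come from normal
# operation, a minority from a hypothetical EMI-perturbed state (Z = 1).
v = np.concatenate([rng.normal(3.30, 0.02, 800),
                    rng.normal(3.45, 0.05, 200)])

# Initial guesses for the mixture weight, means, and variances.
pi, mu, var = 0.5, np.array([3.2, 3.5]), np.array([0.01, 0.01])

def gauss(x, m, s2):
    return np.exp(-(x - m) ** 2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)

for _ in range(100):
    # E-step: responsibility of the latent "perturbed" state per sample.
    p0 = (1 - pi) * gauss(v, mu[0], var[0])
    p1 = pi * gauss(v, mu[1], var[1])
    r = p1 / (p0 + p1)

    # M-step: re-estimate parameters by maximizing the expected
    # log-likelihood Q; for a Gaussian mixture these are the
    # responsibility-weighted means and variances.
    pi = r.mean()
    mu = np.array([np.sum((1 - r) * v) / np.sum(1 - r),
                   np.sum(r * v) / np.sum(r)])
    var = np.array([np.sum((1 - r) * (v - mu[0]) ** 2) / np.sum(1 - r),
                    np.sum(r * (v - mu[1]) ** 2) / np.sum(r)])

print(f"P(Z=1) ≈ {pi:.2f}, means ≈ {mu.round(3)}")
```

The recovered mixture weight estimates how often the hidden perturbation is active, and the two means separate nominal operation from the perturbed state.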

Admittedly, going through the entire algorithm is beyond the scope of this post, but hopefully you’ll see how this powerful technique can be used to examine the effects of noise in your circuits. There is a great tutorial on expectation maximization in a 1996 article from IEEE Signal Processing Magazine. There is another great tutorial for more general problems written by Sean Borman at the University of Notre Dame. Once you determine an appropriate distribution, you can evaluate the goodness of fit using standard statistical tests.
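One such standard test is the Kolmogorov-Smirnov test. The sketch below checks synthetic residuals (standing in for measurements minus the fitted mean) against an assumed fitted normal model; the numbers are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical residuals after subtracting the fitted mean from
# voltage measurements; test them against the fitted normal model.
residuals = rng.normal(0.0, 0.02, 500)

# Kolmogorov-Smirnov test against N(0, 0.02): a small statistic and a
# large p-value mean the data are consistent with the fitted distribution.
stat, p_value = stats.kstest(residuals, 'norm', args=(0.0, 0.02))
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3f}")
```

A small p-value here would suggest the fitted distribution (or its assumed normal form) does not actually describe your measurements.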

If you are working with paired voltage and current measurements, then Rao-Blackwellization can be used within expectation maximization to analyze the distributions of each quantity separately. One paper that uses Rao-Blackwellization and expectation maximization together to fit linear jump Markov models can be found on arXiv.


Rao-Blackwellization of paired voltage and current can be used as part of expectation maximization


The right PCB design and analysis software can help you extract the netlists you need to use in expectation maximization or in a whole host of statistical analyses for your circuits. Allegro PCB Designer and Cadence’s full suite of analysis tools allow you to easily extract this information from your designs.

If you’re looking to learn more about how Cadence has the solution for you, talk to us and our team of experts.