Optimized EVM Testing for IEEE 802.11a/n RF ICs is a paper from the 2008 International Test Conference (ITC) that I read.
The authors are E. Acar, S. Ozev, G. Srinivasan, and F. Taenzler.
In this paper the authors propose a method of performing Error Vector Magnitude (EVM) testing on an RF IC that is optimized for high-speed production testing.
The authors focus on testing circuits that implement the IEEE 802.11a standard, which recommends an input test signal of 320 symbols over 16 frames. This input signal poses several problems for an ATE system: it is a large amount of data to handle, especially in a multi-site test system, and simply using a shorter input signal can result in many false failures and false passes.
The authors propose to test a population of circuits with the full-length (320 symbol) input signal and look for the individual symbols that produce the most significant results, i.e., the symbols that show a higher-than-typical EVM. The EVM measurement from the 320 symbol input signal is a summation over the EVM of the individual symbols, so the symbols that show the highest EVM are the most significant and represent a corner case or some other critical condition. The authors then extract these symbols to create a shorter test vector that still detects the failing circuits.
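The selection step described above can be sketched roughly as follows. This is my own illustration, not the paper's implementation; the array shapes, the choice of averaging over the population, and the reduced-vector length of 32 symbols are all assumptions I made for the example.

```python
import numpy as np

# Hypothetical sketch: per_symbol_evm[d, s] is the EVM contribution of
# symbol s measured on device d of the characterization population.
# The Rayleigh draw below is synthetic stand-in data, not real measurements.
rng = np.random.default_rng(0)
n_devices, n_symbols = 100, 320
per_symbol_evm = rng.rayleigh(scale=0.02, size=(n_devices, n_symbols))

# Average each symbol's EVM over the population and keep the symbols
# that stress the devices the most (highest mean EVM).
mean_evm = per_symbol_evm.mean(axis=0)
n_keep = 32                                # assumed reduced-vector length
selected = np.argsort(mean_evm)[-n_keep:]  # indices of the n_keep worst symbols

# The shorter test vector reuses those symbols, kept in transmission order.
reduced_vector = np.sort(selected)
print(len(reduced_vector))  # 32
```

The key design point is that the ranking is done once, offline, on a characterization population; production test then only plays the short vector.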
The authors evaluated this method in simulation with MATLAB and Agilent ADS and performed an actual experiment with a population of commercial devices.
The authors evaluate their results against the full 320 symbol test: if a circuit (or simulation) passes or fails the full test, it should also pass or fail the reduced-vector test. They also show that simply reducing the number of symbols in the test vector at random does not yield a test that correlates well with the full 320 symbol vector.
The simulation is a Monte Carlo simulation in which low-level process parameters are varied, producing 2048 different circuits whose EVM can be evaluated as either good or bad. The IEEE 802.11a standard allows multiple modulation schemes; the authors chose 64-QAM for the input signal. They use several random full-length input signals to determine the baseline misclassification rate (misclassification being the event of passing a bad circuit or failing a good one). They have an equation that determines what constitutes good or bad based on the whole population. (From what I understand, they use the whole population of results to find a kind of average; if the EVM value for an individual symbol result is above the average it's considered bad, below it good.)
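My understanding of that pass/fail criterion could be sketched like this. The paper gives an exact equation; the plain population mean used as a threshold here, and the synthetic EVM values, are my own assumptions for illustration.

```python
import numpy as np

# Assumed mean-based threshold, not the paper's exact equation.
# Synthetic EVM results (in dB) for a 2048-circuit Monte Carlo population.
rng = np.random.default_rng(1)
population_evm = rng.normal(loc=-30.0, scale=2.0, size=2048)

# Circuits with above-average EVM (worse error) are labeled bad,
# those at or below the average are labeled good.
threshold = population_evm.mean()
labels = np.where(population_evm > threshold, "bad", "good")

print(len(labels))  # 2048
```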
For the physical experiment the authors use a commercially available power amplifier with an ATE system to generate the input vectors. The DUT population includes 30 acceptable and 30 unacceptable devices. They use a randomly generated, full-length input vector and extract the corner-case symbols from the result to create the optimized input vector. This optimized input vector correctly classified all 30 good and all 30 bad devices. The authors acknowledge that this might seem statistically insignificant, but show that five additional randomly generated short input vectors each produce many misclassifications.
I wanted to write a short section on EVM and my understanding of it. Every sinusoidal signal can be represented as a point in the complex (I/Q) coordinate plane. Each symbol should ideally occupy a specific location, but when that symbol is actually received it may land at a different location. The magnitude of the error vector between the ideal and received points is the EVM. See Figure 1.
Figure 1. Graphical definition of EVM
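The definition above can be turned into a small numeric example. This is a generic RMS-EVM calculation normalized by the ideal signal power, which is a common convention; the QPSK-like constellation and noise level are assumptions of mine, not values from the paper.

```python
import numpy as np

# Ideal constellation points in the complex (I/Q) plane (QPSK-like example).
ideal = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])

# Received points: the ideal points perturbed by small I/Q noise,
# standing in for what a real receiver would measure.
rng = np.random.default_rng(2)
received = ideal + (rng.normal(0, 0.05, 4) + 1j * rng.normal(0, 0.05, 4))

# Error vector per symbol, then RMS EVM normalized by ideal signal power.
error = received - ideal
evm_rms = np.sqrt(np.mean(np.abs(error) ** 2) / np.mean(np.abs(ideal) ** 2))
print(float(evm_rms))
```

With noise this small the resulting EVM is a few percent; a degraded circuit would push the received points further from their ideal locations and raise this number.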
This was an interesting idea: take a large test result and pick out the individual symbols that reveal the failures. The paper did a good job of clearly explaining how EVM works. I'm slowly learning more and more about RF chip testing.