DanaFosmer.com

Journal Summary - Optimized EVM Testing for IEEE 802.11a/n RF ICs

2/1/2014

Optimized EVM Testing for IEEE 802.11a/n RF ICs is a paper from the 2008 ITC and I read it.

The authors are: E. Acar, S. Ozev, G. Srinivasan, F. Taenzler

Summary

In this paper the authors propose a method of performing Error Vector Magnitude (EVM) testing on an RF IC that is optimized for high-speed production testing.

The authors focus on testing circuits that implement the 802.11a standard, which recommends an input test signal of 320 symbols over 16 frames. This input signal poses several problems for an ATE system. Mainly, it is a large amount of data to deal with, especially in a multi-site test system. Furthermore, simply using a shorter input signal can result in a lot of false passes and failures.

What the authors propose is to test a population of the circuits with the full-length (320 symbol) input signal and look for the individual symbols that have the most significant results, meaning the symbols that show a higher-than-typical EVM measurement. The EVM measurement from the 320-symbol input signal is a summation of the EVM of each individual symbol. So the idea is that the individual symbols showing the highest EVM are the most significant and represent a corner case or something else critical. The authors then extract these symbols to create a shorter test vector that will still detect the failing circuits.
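
To make the idea concrete, here is a minimal sketch of how I picture the symbol-selection step, assuming the per-symbol EVM results from the training population are already available as an array; the function name, the selection rule, and the number of symbols to keep are my own illustration, not the paper's:

    import numpy as np

    def select_critical_symbols(per_symbol_evm, n_keep):
        """per_symbol_evm: (n_circuits, 320) array of per-symbol EVM results
        from the full-length test applied to a training population.
        Returns the indices of the n_keep symbols with the highest average
        EVM across the population -- the "most significant" symbols."""
        mean_evm = per_symbol_evm.mean(axis=0)       # average over circuits
        worst = np.argsort(mean_evm)[::-1][:n_keep]  # highest-EVM symbols first
        return np.sort(worst)                        # restore original symbol order

    # Example: keep the 40 most revealing of the 320 symbols (count is arbitrary)
    # indices = select_critical_symbols(evm_results, n_keep=40)
    # short_vector = full_vector[indices]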

The authors simulated this method in MATLAB and Agilent ADS and performed a physical experiment with a population of commercial devices.

The authors evaluate their results against the full 320-symbol test: a circuit (or simulation) that passes or fails the full test should also pass or fail the reduced-vector test. The authors also show that simply reducing the number of symbols in the test vector at random does not yield a test highly correlated with the full 320-symbol vector.

Simulation Results

The simulation is a Monte Carlo type simulation in which low-level process parameters are varied, producing 2048 different circuits to evaluate by EVM as either good or bad. The IEEE 802.11a standard allows for multiple modulation schemes; the authors chose 64-QAM for the input signal. They use several random full-length input signals to determine the baseline misclassification rate (a misclassification is passing a bad circuit or failing a good one). They have an equation to determine what constitutes good or bad based on the whole population. (From what I understand, they use the whole population of results to find a kind of average, and if the EVM value for an individual symbol result is above the average it's considered bad, below it good.)
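
A toy version of the pass/fail classification as I understand it; the mean-plus-three-sigma threshold here is my guess at their "kind of average," not the paper's actual equation:

    import numpy as np

    def classify(total_evm):
        """total_evm: one EVM result per circuit across the whole population.
        NOTE: the mean + 3*sigma threshold is my assumption for illustration,
        not the equation from the paper."""
        threshold = total_evm.mean() + 3 * total_evm.std()
        return total_evm <= threshold   # True = good, False = bad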

Experiment Results

For the physical experiment the authors use a commercially available power amplifier with an ATE system to generate the input vectors. The DUT population includes 30 acceptable and 30 unacceptable devices. They use a randomly generated, full-length input vector and extract the corner-case symbols from the result to create the optimized input vector. This optimized input vector correctly classified all 30 good and all 30 bad devices. The authors acknowledge that this might seem statistically insignificant, but show that five additional randomly generated short input vectors each produce many misclassifications.

EVM

I wanted to write a small section on EVM and my understanding of what it is. Every sinusoidal signal can be represented on the complex coordinate plane. Each symbol should ideally occupy a specific location, but when that symbol is actually received it may land at a different location. The vector between the ideal and actual points is the error vector, and its magnitude is the EVM. See Figure 1.
Figure 1. Graphical definition of EVM
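
Roughly, the calculation behind Figure 1 looks like this (RMS EVM normalized to the average ideal symbol power; the normalization convention varies between standards, so take this as one common choice rather than the paper's exact definition):

    import numpy as np

    def evm_rms(ideal, measured):
        """ideal, measured: complex arrays of constellation points.
        The error vector is (measured - ideal); EVM is its RMS magnitude,
        normalized here by the RMS magnitude of the ideal points."""
        error_power = np.mean(np.abs(measured - ideal) ** 2)
        ref_power = np.mean(np.abs(ideal) ** 2)
        return np.sqrt(error_power / ref_power)   # often reported as a percent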

My Takeaway

This was an interesting idea: look at a large test result and pick out the individual signals that reveal the failures. This paper did a good job of clearly explaining how EVM works. I'm slowly learning more and more about RF chip testing.

Summary of Low Cost Characterization of RF Transceivers through IQ Data Analysis

1/24/2014

Low Cost Characterization of RF Transceivers through IQ Data Analysis is a paper from the 2007 ITC and I read it.

The authors are: E. Acar and S. Ozev

Summary

The authors discuss how previous work suggests measuring either high-level or low-level parameters in order to test an RF system. However, both methods have drawbacks. High-level measurements such as Error Vector Magnitude (EVM) or Bit Error Rate (BER) typically require very long test times, while measuring low-level parameters can require a different specific test setup for each measurement, which can be costly and slow. Other efforts to reduce the need for expensive RF instrumentation have been suggested, including moving RF test circuits to test boards and utilizing DFT features.

The authors' work improves on past work by taking the simple test setup typically used for EVM or BER measurements and developing analytical methods to determine lower-level performance parameters, without having to use the multiple expensive test setups typically required. This has the benefits of faster test times on less expensive equipment.

In order to calculate the lower-level parameters the authors construct a mathematical model of a quadrature transmitter. The quadrature transmitter has a digital input signal, an IQ modulator, and a power amplifier. This type of system is susceptible to three basic potential problems:

1. Noise
2. Gain and phase mismatches between the I and Q signals
3. Non-linearity problems

I wrote about IQ signals in another blog post.

The authors then go on to explain how they derive the mathematical model for the output stage of a quadrature transmitter, similar to Figure 1 (there should be amplifiers on the two inputs and the output, not shown). The models account for the three potential problems listed above in the form of expressions for noise, phase and gain imbalance, and non-linear compression.
Figure 1. Quadrature Transmitter

The authors go on to discuss noise models based on Gaussian distributions. They then talk about how to obtain the constellation point of a symbol under the effect of this type of noise.

I was not familiar with the idea of a "symbol" or what a constellation diagram is. I gather that a symbol is, roughly, a piece of digital information. In digital modulation/demodulation there are discrete digital values encoded on the carrier; these are called symbols.

A constellation diagram is a plot on the complex plane of the I and Q signals where each symbol occupies a spot on the diagram. Well, it's not that each symbol is arbitrarily assigned a location; it's where the symbol (that piece of encoded data) will fall on the complex (IQ) plane. The constellation diagram is then useful for visualizing the impairments on the system (impairments like noise, non-linearity, and mismatch). Figure 2 shows a constellation diagram with four ideal symbol points and the points skewed as the result of impairments, where the ideal points form the square and the impaired points form the parallelogram.
Figure 2. Example constellation diagram with ideal and impaired symbol points
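
To make the symbol idea concrete, here is a minimal QPSK example (2 bits per symbol, four constellation points); the Gray-coded mapping is the textbook one, not anything specific to this paper:

    import numpy as np

    # Gray-coded QPSK: each pair of bits selects one of four constellation points
    QPSK = {
        (0, 0): 1 + 1j,
        (0, 1): -1 + 1j,
        (1, 1): -1 - 1j,
        (1, 0): 1 - 1j,
    }

    bits = np.random.randint(0, 2, size=20)        # 20 bits -> 10 symbols
    pairs = zip(bits[0::2], bits[1::2])
    symbols = np.array([QPSK[(int(a), int(b))] for a, b in pairs])  # IQ-plane points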

Gain and Phase Imbalance

Continuing with the constellation diagram, a gain imbalance (a higher gain on either the I or Q signal relative to the other) would cause the constellation square to become a rectangle, because the larger I or Q amplitude pushes the points out along the I or Q axis. When a phase imbalance is added, we get the parallelogram shape.
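
Here is one common transmitter imbalance model that produces exactly this square-to-rectangle-to-parallelogram distortion; the model form is a textbook one and may differ from the paper's exact expressions:

    import numpy as np

    def apply_iq_imbalance(symbols, gain, phase_rad):
        """Apply gain/phase imbalance to ideal constellation points.
        Model: s(t) = I*cos(wt) - gain*Q*sin(wt + phase), which in baseband
        maps (I, Q) -> (I - gain*Q*sin(phase), gain*Q*cos(phase)).
        gain != 1 alone turns the square into a rectangle; a nonzero
        phase shears it into a parallelogram."""
        I, Q = symbols.real, symbols.imag
        i_out = I - gain * Q * np.sin(phase_rad)
        q_out = gain * Q * np.cos(phase_rad)
        return i_out + 1j * q_out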

The authors go into how to analyze the shape of the constellation diagram to determine the gain and phase imbalance (based on the magnitudes and angles).

Third Order Input Intercept Point (IIP3)

The authors state the definition of the Third Order Input Intercept Point: "the input power at which the power of the third order term is equal to the power of the fundamental term." IIP3 is a way to measure the non-linearity of the system. If I understand correctly, this works as a measure of non-linearity because a perfectly linear system would have no power at the harmonic frequencies; when there is non-linearity, power starts shifting to the harmonic frequencies and the power at the fundamental frequency goes down. So, from the definition, there is some input power level where the fundamental and third-order terms have equal power. I'm guessing here, but I think this means you could take two systems and compare their IIP3 points to see which will work at higher power before reaching this established point of non-linearity.
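
For what it's worth, the standard back-of-the-envelope way to get IIP3 from a two-tone measurement (this extrapolation is textbook material, not taken from this paper): the third-order products rise 3 dB for every 1 dB of input while the fundamental rises 1 dB, so the two lines meet at IIP3 = Pin + delta/2, where delta is the gap between the fundamental and third-order output powers.

    def iip3_dbm(p_in_dbm, p_fund_dbm, p_im3_dbm):
        """Two-tone IIP3 extrapolation. The fundamental output rises
        1 dB per dB of input and the third-order product rises 3 dB per dB,
        so they intersect (delta / 2) dB of input above the test point."""
        delta = p_fund_dbm - p_im3_dbm
        return p_in_dbm + delta / 2.0

    # Example: -20 dBm per tone in, fundamentals at -5 dBm, IM3 at -55 dBm
    # iip3_dbm(-20, -5, -55) -> -20 + 50/2 = +5 dBm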

Bit Error Rate

Directly testing the BER can be slow because systems typically have very few errors, so it takes a long input vector to get an accurate measure of the real BER. The authors propose a statistical method to calculate BER from the constellation diagram: basically, they run a smaller vector and look at the variation in the constellation points to estimate the number of bit errors that would occur.
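
A sketch of the statistical idea as I read it: for Gray-coded QPSK with Gaussian noise, the per-bit error probability is the classic Q-function expression, and the noise spread can be estimated from a short constellation capture. The estimator below is my own illustration, not the authors' method:

    import numpy as np
    from scipy.stats import norm

    def estimate_ber_qpsk(measured, ideal):
        """Estimate BER from constellation scatter instead of counting errors.
        sigma is the noise std per I/Q axis, estimated from the error vectors;
        d is the distance from an ideal point to the decision boundary.
        Per-bit error probability for Gray-coded QPSK is then Q(d / sigma)."""
        err = measured - ideal
        sigma = np.sqrt(0.5 * np.mean(np.abs(err) ** 2))  # per-axis noise std
        d = np.mean(np.abs(ideal.real))                   # distance to boundary
        return norm.sf(d / sigma)                         # Q(x) = norm.sf(x)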

Multi-Carrier Systems

The authors continue on to discuss using constellation analysis on a multi-carrier system and derive the equations for gain/phase imbalance and IIP3.

They use OFDM modulation for this work.

Results

The authors evaluate their methods by simulation and build a demonstration system from discrete components.

For the simulation, the simple test setup is modeled in MATLAB and driven with a 2000-bit input vector. The impairments tested were phase and gain imbalance. The authors show that the error in calculating the imbalances was less than two percent for all test cases.

The details of the multi-carrier system analysis are discussed.

My Takeaway

I learned a lot by reading this paper, but it was almost too much to take on. I don't have much background in RF and modulation; I'm trying to learn that. I didn't really summarize all of the results, as it was too much. I'll try again and keep building up what I know.

Summary of Integrated RF-CMOS Transceivers challenge RF Test Validation

1/17/2014

Integrated RF-CMOS Transceivers challenge RF Test Validation is a paper from the 2006 IEEE International Test Conference and I read it.

The author is Frank Demmerle

Before I begin I would like to extend my apologies to the author of this paper if I completely mangle his work in my summary of it.

Summary

The first part of this paper discusses how RF systems are being consolidated onto a single chip with the advent of RF functions implemented in CMOS technology. Base-band analog functions have been in CMOS for a long time, but the RF portion required a separate technology, so systems were a mixture of RF and CMOS chips. As the RF circuits migrate entirely to CMOS, these functions can be combined on a single chip.

This poses a problem for test, as the interface between the RF and base-band analog chips provided an access point for RF testing. While the trend may be towards a single chip, there are still many systems that have the digital portion (DSP) separate from the analog/RF, connected by a digital-only interface.

This again is a problem for test, because now the only access point for RF ATE is a high-speed digital interface, and RF ATE typically cannot handle such high-speed communication without very expensive upgrades.

Modulation Testing

These RF chips transmit data by modulating the data onto a carrier signal. The modulation signal is generated by a voltage-controlled oscillator (VCO), whose frequency is controlled by a phase-locked loop (PLL). The quality of the transmitted output signal is highly dependent on this VCO circuit. Traditionally, the way to test the modulation circuit was to source I and Q signals from the ATE to the DUT and look at the frequency content of the transmitted output signal. I and Q are (I think I got this right) two signals that are out of phase by 90 degrees, so you can just use sine and cosine signals.
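
A quick numeric illustration of that (just the textbook quadrature modulator, nothing specific to this paper): the transmitted signal is I(t)*cos(wt) - Q(t)*sin(wt), so driving I with a cosine and Q with a sine exercises the two paths 90 degrees apart.

    import numpy as np

    fc = 2.4e9                   # carrier frequency (arbitrary example value)
    fs = 10e9                    # sample rate for this illustration
    t = np.arange(0, 1e-7, 1 / fs)

    I = np.cos(2 * np.pi * 1e6 * t)   # baseband I: 1 MHz cosine
    Q = np.sin(2 * np.pi * 1e6 * t)   # baseband Q: same tone, 90 degrees behind

    rf = I * np.cos(2 * np.pi * fc * t) - Q * np.sin(2 * np.pi * fc * t)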

The trouble comes in when the RF circuit is implemented entirely in CMOS: the architecture changes to a polar modulator and the sine/cosine stimulus no longer works.

Since the VCO is so important to the modulation (specifically its gain and linearity), there is often built-in circuitry to tune the VCO itself; this is called self-alignment of the PLL. Because the gain and linearity of the VCO are so important to the operation of the system, these parameters need to be tested. The way the VCO would typically be tested is to apply a number of input voltages and measure the output frequency. The self-alignment circuit allows a DFT-type capability, since self-alignment has to measure frequency as part of its operation and that frequency measurement can be utilized for test purposes. The author goes on to explain how this test method is much faster and more accurate than an external frequency measurement would be.
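
Here is how I would reduce that voltage-in/frequency-out sweep to the two parameters of interest; the straight-line fit is my own framing of "gain and linearity," not necessarily the author's:

    import numpy as np

    def vco_gain_and_linearity(v_tune, f_meas):
        """v_tune: applied tuning voltages; f_meas: measured output frequencies
        (e.g. read back from the self-alignment circuit's frequency counter).
        Fit f = Kvco * v + f0: Kvco is the VCO gain, and the worst-case
        deviation from the fit line gives one simple linearity figure."""
        kvco, f0 = np.polyfit(v_tune, f_meas, 1)
        residual = f_meas - (kvco * v_tune + f0)
        linearity_err = np.max(np.abs(residual)) / (f_meas.max() - f_meas.min())
        return kvco, linearity_err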

System Tests

The author then explains how additional system-level testing is necessary beyond the self-alignment DFT test described above. Tests like phase noise/phase error testing, the modulation mask test/adjacent channel power rejection (ACPR) test, and error vector magnitude (EVM) are typical system-level RF tests. However, these tests can be complex and slow and require expensive equipment. They may be necessary if there is a lack of test access to the smaller, lower-level circuit blocks.

Digital Interface

As I mentioned before, some RF chips may have only a high-speed digital interface that requires very expensive RF ATE hardware to access. One possible solution the author discusses is putting an FPGA between the RF chip and the ATE to buffer those high-speed digital signals: the FPGA runs at high speed to communicate with the RF chip and parallelizes the data down to a lower rate the ATE can deal with. The FPGA could also be utilized for specialized processing best handled in hardware, like an FFT or other signal-processing tasks.

My Takeaway

I want to learn more about RF testing in general, and this paper gave me some idea of what seem to be common RF chip testing tasks. This paper is a few years old, and I wonder if some ATE abilities, such as working with high-speed digital interfaces, have improved and gotten cheaper. I also wonder to what extent the RF and digital chips have since been combined into one chip, which would make these methods less applicable. This paper was well written and I learned a lot.

Summary of Alternate Test of RF Front Ends with IP Constraints: Frequency Domain Test Generation and Validation 

1/10/2014

Alternate Test of RF Front Ends with IP Constraints: Frequency Domain Test Generation and Validation is a paper from the 2006 IEEE International Test Conference and I read it.

The authors are: Akbay, Torres, Rumer, Chatterjee and Amtsfield

Before I begin I would like to extend my apologies to the authors of this paper if I completely mangle their work in my summary of it.

Summary

In this paper the authors explain their work on "alternate test." The idea behind alternate test is to take a number of individual tests and combine them into one test. Specifically, multiple separate input signals are combined into a single input signal to, essentially, take multiple specified measurements at once.

This reminds me of a test I worked on once where I wanted to apply a sine wave to a circuit at three different frequencies and look at the amplitude of the output signal. Instead of applying three separate sine-wave inputs sequentially, I created a single input signal that was the sum of the three sine waves. The output amplitude at each frequency could then be measured by performing an FFT and looking at the three frequencies of interest, as in the sketch below.
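
In sketch form, that test looked something like this (frequencies chosen to land exactly on FFT bins so no windowing is needed; all values are made up for illustration):

    import numpy as np

    fs = 100_000                      # sample rate, Hz
    n = 10_000                        # record length -> 10 Hz bin spacing
    t = np.arange(n) / fs
    freqs = [1_000, 5_000, 20_000]    # the three test frequencies (on-bin)

    # Single stimulus = sum of the three sine waves
    stimulus = sum(np.sin(2 * np.pi * f * t) for f in freqs)

    # response = device_under_test(stimulus)   # apply stimulus to the circuit
    response = stimulus                        # placeholder for illustration

    spectrum = np.abs(np.fft.rfft(response)) * 2 / n
    for f in freqs:
        k = int(f * n / fs)           # exact bin because f is a multiple of fs/n
        print(f"{f} Hz amplitude: {spectrum[k]:.3f}")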

The main advantages of alternate test are shortened test times and less expensive test equipment.

The authors say the primary difference between this paper and previous work is that their alternate-test method does not require input into the chip design by the test engineer, and works when details of the chip design are not available because IP blocks are used. That is, the test engineers do not have the semiconductor device models, process-variation data, or low-level netlist information available, and these things are not required.

If I understand correctly, they build a high-level MATLAB simulation model of a chip using a combination of information from the datasheet and physical measurement results. The input is then simulated and the results are mapped back to the specifications they correspond to with a statistical process. This result is then compared to the actual set of measurements and specifications, checking for accuracy. The whole process is run repeatedly through an optimization algorithm. Apparently this is typically done with a greedy algorithm, but they have developed a genetic algorithm, which is more capable of finding an optimal solution and less dependent on the initial inputs.
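
I found the genetic-algorithm part easiest to picture as the standard select/crossover/mutate loop, so here is a bare-bones skeleton; the bit-string encoding, the fitness function, and every parameter are stand-ins, and the authors' actual formulation is surely more involved:

    import random

    def genetic_search(fitness, n_genes, pop_size=50, generations=100, p_mut=0.02):
        """Minimal GA sketch: a candidate is a bit-string (e.g. which stimulus
        features to include); fitness scores how well the simulated measurements
        predict the specifications. All choices here are illustrative."""
        pop = [[random.randint(0, 1) for _ in range(n_genes)]
               for _ in range(pop_size)]
        for _ in range(generations):
            ranked = sorted(pop, key=fitness, reverse=True)
            parents = ranked[: pop_size // 2]             # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, n_genes)        # one-point crossover
                child = a[:cut] + b[cut:]
                child = [g ^ 1 if random.random() < p_mut else g
                         for g in child]                  # bit-flip mutation
                children.append(child)
            pop = parents + children
        return max(pop, key=fitness)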

Results

These alternate-test methods were tested on a population of a purchased RF amplifier IC. The test was first implemented on a bench-top system and then on an ATE system. The authors say the alternate-test method was an order of magnitude faster than testing each of the specifications individually, mostly due to the reduction in slow GPIB commands to instrumentation. The cost of the equipment to execute the alternate test was 60% less than the standard test. These comparisons were for the bench-top versions; on an ATE system the alternate test cost 48% less and was 36% faster. The authors also show that the results obtained with the alternate test are as accurate as the original specification tests.

My Takeaway

I was not familiar with the concept of alternate test, although it makes sense. It seems like a fairly complex process that would be difficult to implement. This was interesting.
