An 8-bit, 3.2 GS/s, time-interleaved (TI) successive approximation register (SAR) analog-to-digital converter (ADC) with a non-buffered hierarchical demultiplexing architecture is proposed and fabricated. Compared to a typical hierarchical TI-ADC, (i) all track-and-hold (T&H) related noise sources and (ii) wide-band amplifiers for buffering of the input signal are avoided. In this way, the proposed solution can improve the signal-to-noise ratio and reduce power consumption. The concept is demonstrated in an 8-bit 3.2 GS/s TI-ADC design based on 32 asynchronous SAR ADCs and fabricated in a 0.13 µm CMOS process. The prototype includes (i) a programmable delay cell array to adjust the four front sampling phases, and (ii) a 25.6 Gb/s low voltage differential signaling (LVDS) interface. Measurements of the fabricated TI-ADC show 44.6 dB of peak signal-to-noise-and-distortion ratio and 105 mW of power consumption at 1.2 V.
This work describes a 1 Gb/s digital communication system implemented on an FPGA-based platform to investigate mixed-signal calibration techniques for time-interleaved analog-to-digital converters (TI-ADCs). Design of multi-gigabit TI-ADCs is of great interest for next-generation digital communication systems such as optical coherent networks. In these applications, mismatches of the sampling time, gain, offset, and frequency response among the interleaves of a TI-ADC limit the performance of the converter unless they are compensated. Typically, long computer simulation run time is required to evaluate the performance of mixed-signal calibration algorithms. We show that the FPGA-based system described in this paper drastically reduces this evaluation time, by more than two orders of magnitude. The proposed FPGA framework includes: (i) a diagnostic and control unit built upon a NIOS II embedded processor, (ii) DSP blocks to implement the transmitter and the receiver, and (iii) a Gaussian number generator to emulate the noise component of the channel. Experimental results with a 2 GS/s 6-bit CMOS TI-ADC demonstrate the excellent capability of the implemented FPGA-based emulator to evaluate the performance of a mixed-signal calibration algorithm.
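As a floating-point reference for the channel-noise emulation performed in hardware by the Gaussian number generator mentioned above, a minimal sketch might look like the following. It assumes unit-energy BPSK symbols and an SNR defined as symbol energy over noise variance in dB; the function and variable names are illustrative, not taken from the FPGA design.

```python
import numpy as np

def awgn_channel(symbols, snr_db, rng=None):
    """Add white Gaussian noise to unit-energy symbols at a given SNR (dB)."""
    rng = np.random.default_rng() if rng is None else rng
    noise_var = 10.0 ** (-snr_db / 10.0)      # assumes symbol energy Es = 1
    noise = np.sqrt(noise_var) * rng.standard_normal(symbols.shape)
    return symbols + noise

# Example: BPSK symbols through the emulated noisy channel
bits = np.random.default_rng(0).integers(0, 2, 10000)
tx = 2.0 * bits - 1.0                         # map {0, 1} -> {-1, +1}
rx = awgn_channel(tx, snr_db=10.0)
ber = np.mean((rx > 0).astype(int) != bits)
print(f"Emulated BER at 10 dB SNR: {ber:.2e}")
```

In the FPGA framework this role is played by a hardware Gaussian number generator; the sketch only shows the statistical behavior being emulated.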
In this work we investigate a new background calibration technique to compensate sampling phase errors in time-interleaved analog-to-digital converters (TI-ADCs). Timing mismatches in TI-ADCs significantly degrade the performance of ultra-high-speed digital transceivers. Unlike previous proposals, the calibration technique used here optimizes a metric directly related to the performance of the communication system. Estimates of the gradient of the mean squared error (MSE) at the slicer with respect to the sampling phase of each interleave are computed and used to minimize the timing errors of the TI-ADC by controlling programmable analog delay cells. Since (i) dedicated digital signal processing (DSP) such as cross-correlations or digital filtering of the received samples is not required, and (ii) metrics such as the MSE are available in most commercial transceivers, the implementation reduces to a low-speed state machine. The technique is verified experimentally by using a programmable-logic-based platform with a 2 GS/s 6-bit TI-ADC. The latter has been fabricated in a 0.13 µm CMOS process, and it provides flexible sampling phase control capabilities. Experimental results show that the signal-to-noise ratio penalty of a digital BPSK receiver caused by sampling time errors in the TI-ADC can be reduced from 1 dB to less than 0.1 dB at a bit-error rate of 10⁻⁶.
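A minimal sketch of the kind of low-speed state machine described above is shown below. It estimates the gradient of the slicer MSE with respect to each interleave's delay-cell code by finite differences and applies a sign-sign update; `read_slicer_mse()` and `set_delay_code()` are hypothetical interfaces standing in for the receiver's MSE monitor and the TI-ADC's programmable delay cells, and the loop parameters are placeholders, not values from the paper.

```python
# Sketch of an MSE-driven sampling-phase calibration loop (assumptions noted above).

def calibrate_phases(n_interleaves, read_slicer_mse, set_delay_code,
                     iterations=200, dither=1, mu=1):
    codes = [0] * n_interleaves                    # delay-cell control codes
    for _ in range(iterations):
        for k in range(n_interleaves):
            # Finite-difference estimate of dMSE/dphase for interleave k
            set_delay_code(k, codes[k] + dither)
            mse_plus = read_slicer_mse()
            set_delay_code(k, codes[k] - dither)
            mse_minus = read_slicer_mse()
            grad_sign = 1 if (mse_plus - mse_minus) > 0 else -1
            # Sign-sign update: move the phase against the gradient
            codes[k] -= mu * grad_sign
            set_delay_code(k, codes[k])
    return codes
```

Because only an MSE readout and a slow control loop are needed, such a routine can run far below the converter sample rate, which is the property the abstract highlights.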
A 6-bit 2-GS/s time-interleaved (TI) successive approximation register (SAR) analog-to-digital converter (ADC) is designed and fabricated in a 0.13 µm CMOS process. The architecture uses 8 time-interleaved track-and-hold amplifiers (THA) and 16 SAR ADCs. The chip includes (i) a programmable delay cell array to adjust the interleaved sampling phases, and (ii) a 12 Gb/s low voltage differential signaling (LVDS) interface. These blocks make the fabricated ADC an excellent platform to evaluate mixed-signal calibration techniques, which are of great interest for application in high-speed optical systems. Measurements of the fabricated ADC show 33.9 dB of peak signal-to-noise-and-distortion ratio (SNDR) and 192 mW of power consumption at 1.2 V.
A joint sampling-time error and channel skew background calibration technique for time-interleaved analog-to-digital converters (TI-ADCs) is presented. The technique is aimed at applications in dual-polarization QPSK/QAM receivers for coherent optical communications at high data rates (e.g., 40 Gb/s and beyond). Unlike previous proposals, the calibration algorithm introduced here is used to jointly compensate for sampling-time and channel skew errors. Estimates of the gradient of the mean squared error (MSE) or the bit error rate (BER) with respect to the sampling phases of the different signal lanes and interleaves are computed and used to iteratively minimize a cost function (i.e., MSE or BER). Computer simulations demonstrate the excellent behavior of the proposed compensation technique. The calibration algorithm can be implemented with minimal hardware requirements and with a slow clock. This allows power dissipation in a CMOS VLSI implementation to be minimized.
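One simple way to realize the joint minimization described above is to treat the per-lane skew codes and the per-interleave phase codes as a single vector of controls and apply the same finite-difference, sign-sign update to each of them in turn. The sketch below illustrates that idea only; `read_cost()` (MSE or BER monitor) and `apply()` (writing the codes to the delay cells) are hypothetical interfaces, and the published algorithm may differ in its exact update rule.

```python
# Sketch of a joint skew/phase calibration loop over a flat list of controls
# (lane skew codes followed by the sampling-phase codes of every interleave).

def joint_calibration(controls, apply, read_cost,
                      iterations=500, dither=1, mu=1):
    for _ in range(iterations):
        for i in range(len(controls)):
            # Finite-difference estimate of the cost gradient for control i
            controls[i] += dither
            apply(controls)
            cost_plus = read_cost()
            controls[i] -= 2 * dither
            apply(controls)
            cost_minus = read_cost()
            controls[i] += dither                  # restore the nominal code
            grad_sign = 1 if (cost_plus - cost_minus) > 0 else -1
            controls[i] -= mu * grad_sign          # move against the gradient
            apply(controls)
    return controls
```

Since each step needs only a cost readout and a register write, the loop can be clocked slowly, consistent with the low-power, slow-clock implementation claimed in the abstract.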