Most newer ADCs and DACs tend to use the JESD204B/C standards simply due to higher performance requirements – the ever-growing need for more usable bandwidth, faster ADCs and DACs with increased resolution per sample, and multi-channel implementations. JESD internally uses gigabit transceivers (or MGTs – Multi-Gigabit Transceivers) with speeds of up to 32 Gbit/s per single lane (Xilinx UltraScale+ FPGA). This is much higher than the older LVDS standard can handle. LVDS is in most cases limited to 1.6 Gbit/s per single pair (Xilinx UltraScale+ FPGA), but reaching this speed is not easy and requires extra care when designing such an interface. Because 1.6 Gbit/s per pair (or, more realistically, approx. 800 Mbit/s per pair) is usually not sufficient nowadays, more LVDS pairs tend to be used for a single interface. This differentiates the ADCs/DACs into serial and parallel variants. In a serial LVDS interface, the data word is transmitted over multiple bit-clock periods based on the serialization factor, while in a parallel implementation, the data word is transmitted via multiple lines at approximately the same clock speed as the sampling rate of the device (may differ though).

What is also quite common is the usage of DDR LVDS (Double Data Rate LVDS), which reduces the clock speed by a factor of 2. Therefore 800 Mbit/s requires a bitclock of 400 MHz. In this mode, data are sampled on both the rising and falling edges of the bitclock. The 400 MHz frequency is also quite reasonable for the FPGA: with newer technology, FPGAs can reach speeds of 500 MHz+, but this may complicate the internal design and require some optimizations to meet timing inside the chip. Therefore 400 MHz is a good compromise between speed and design complexity (at least for UltraScale+). The other commonly used standard is SDR LVDS (Single Data Rate LVDS), where the data are sampled on each rising (or falling) edge of the bitclock. In this mode the transmission rate and the clock rate are the same.

What is important for reconstruction of the data word at the receive side is the data format, or container (usually denoted CDF: Clk + Data + Frame). Here, Clk is the bitclock, Data is the data sent, and Frame denotes the boundaries of words for successful deserialization. ADCs and DACs have different CDF specifications for their modes, so the designer is always responsible for properly aligning the data words to the frame boundaries. Below is an example of the CDF for DDR [Frame = 1,1,1,1,0,0,0,0], [Data = 1,1,1,0,1,1,0,0].
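As a behavioral illustration (plain Python, not RTL), the frame signal can be used to slice the serial data stream back into words; the bit patterns below are the DDR example above:

```python
# Behavioral sketch of CDF framing: the frame signal marks word
# boundaries, letting the receiver slice the serial data stream back
# into words. Patterns are the DDR example from the text, repeated.
frame = [1, 1, 1, 1, 0, 0, 0, 0] * 4
data  = [1, 1, 1, 0, 1, 1, 0, 0] * 4

def find_boundary(frame_bits):
    """Return the index of the first 0 -> 1 transition of the frame."""
    for i in range(1, len(frame_bits)):
        if frame_bits[i - 1] == 0 and frame_bits[i] == 1:
            return i
    raise ValueError("no frame transition found")

start = find_boundary(frame)                      # word boundary in the stream
words = [data[i:i + 8] for i in range(start, len(data) - 7, 8)]
print(start, words[0])
```

This is of course only the concept; in hardware the frame is deserialized alongside the data and the boundary search is done on the deserialized frame bits.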

The capture interface on the FPGA side can be implemented via a so-called “Component Mode” or “Native Mode”. The component mode is approximately the same as in the 7 series devices (with limitations, but the topology is identical). The native mode, on the other hand, supports higher data rates (1600 Mbit/s in native vs. 1250 Mbit/s in component) and supports features such as auto-calibration of the interface. The native mode also has a dedicated clocking mechanism inside the IOB, which cannot be used in component mode. Though Xilinx recommends using Vivado's HSSIO IP (High Speed SelectIO Wizard), it is not required, and the native mode interface may be instantiated inside RTL in a similar way as the component mode.

LVDS Component Mode Clocking

First of all, a clock-capable pin (usually denoted GC) needs to be used for proper FPGA clocking. These pins have dedicated routes to the global clocking network inside the FPGA. What's also worth mentioning is that not all FPGA banks support the LVDS standard; for example, HD (High Density) banks don't support LVDS, while HP (High Performance) banks do. An additional recommendation is to locate all the data, frame and clock signals within a single bank and not distribute them across multiple FPGA banks. Though this is not required, it is highly recommended as it avoids some design problems.

The easiest method of clocking the IDDR or SERDES is via a global clock buffer. In fact, all of the clocking options in component mode require a global clock buffer, but this specific mechanism is better called direct clocking: the clock from the GC pin is routed to a global clock buffer and distributed to all IDDRs/SERDESes. In case of SERDES capture, a clock buffer with an internal divider (e.g. BUFGCE_DIV in UltraScale+) needs to be used, based on the required deserialization factor.

Another option is to use a PLL or MMCM. This has some advantages over direct clocking. First, the internal VCO inside the PLL/MMCM is compensated for PVT (Process, Voltage, Temperature) variations, which makes sure that proper clocking is applied across different operating ranges of the device. Both the PLL and MMCM can also serve as jitter cleaners, which may further stabilize clocking, not to mention that the MMCM can also deskew the clock. And finally, the internal fine phase shift allows for compensation of different route lengths on the PCB. This can be adjusted as well by the IDELAY components, but when IDELAY is used for the clock, the IDELAY output cannot be routed to the global clock network. Basically, I prefer using the IDELAY since it adjusts the data delay on a per-pair basis.

The designer may use Xilinx's Clocking Wizard, but I always prefer to figure out the correct configuration myself and set up the internal dividers as needed. Information about PLL (Phase-Locked Loop) / MMCM (Mixed-Mode Clock Manager) configuration may be found in Xilinx's UG572. One has to take care of the VCO's operational range, which is specified in DS925 for UltraScale+ and is between 800 and 1600 MHz. Note that when using the ISERDES, the bitclock needs to be divided by 2 (DDR deserialization by 4 or SDR deserialization by 2) or by 4 (DDR deserialization by 8 or SDR deserialization by 4). When using the ISERDES, the built-in FIFO inside the IOB can be used for synchronization between the 2 clock domains. The SERDES is likely required for data rates above approx. 1 Gbit/s.
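As a quick sanity-check sketch (my own illustration, not a Vivado tool – it only encodes the 800–1600 MHz VCO range and the divide-by rules quoted above):

```python
# Rough planning helper (a sketch, not a real tool): checks whether a
# chosen VCO multiplier keeps the VCO inside the 800-1600 MHz range and
# derives the divided clock (CLKDIV) for the ISERDES.
VCO_MIN_MHZ, VCO_MAX_MHZ = 800.0, 1600.0

def plan_clocks(bitclock_mhz, vco_mult, ddr=True, deser=4):
    vco = bitclock_mhz * vco_mult
    if not (VCO_MIN_MHZ <= vco <= VCO_MAX_MHZ):
        raise ValueError(f"VCO {vco} MHz out of range")
    # DDR captures 2 bits per bitclock cycle, so CLKDIV = bitclock / (deser/2);
    # SDR captures 1 bit per cycle, so CLKDIV = bitclock / deser.
    clkdiv = bitclock_mhz / (deser / 2 if ddr else deser)
    return vco, clkdiv

print(plan_clocks(400.0, 3))   # 400 MHz bitclock, DDR deserialization by 4
```

For the 400 MHz DDR-by-4 example, this gives a 1200 MHz VCO and a 200 MHz divided clock, matching the divide-by-2 rule above.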

Deserialization and Frame Alignment

After successful capture of the data bits via IDDR or SERDES, the data word needs to be reconstructed. This is usually simpler in case of parallel LVDS, since the differential pairs directly represent the bits of the data words. In case of serial LVDS, one has to take care whether the LSB or MSB of the data word is sent first. If the LSB is sent first, the word needs to be reconstructed from LSBs; the same applies for MSB-first mode. What is even more important, however, is that the data word needs to be aligned to the frame boundaries. Below is an example of how the data word is sent from the ADC in LSB-first mode.
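As a minimal illustration of the bit ordering (a behavioral sketch, not tied to any specific device):

```python
# Serial word reconstruction sketch. In LSB-first mode each newly received
# bit lands at the next-higher bit position; in MSB-first mode the word is
# shifted left and the new bit appended at the bottom.
def assemble(bits, lsb_first=True):
    word = 0
    for i, b in enumerate(bits):
        if lsb_first:
            word |= b << i          # bit i of the stream is bit i of the word
        else:
            word = (word << 1) | b  # earlier bits end up in higher positions
    return word

bits = [1, 0, 1, 1]                 # received serially, first bit first
print(assemble(bits, lsb_first=True), assemble(bits, lsb_first=False))
```

The same four received bits give two different words (13 vs. 11 here), which is why mixing up the modes silently corrupts every sample.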

If the ADC clock started synchronously with the deserialization inside the FPGA, there would be no need to align the data to frame boundaries. But because either the FPGA or the ADC clock starts asynchronously at a random time, the data are likely deserialized such that the deserialized word usually contains parts of 2 ADC words instead of one. The ideal capture of the data word is depicted below. Take an extra look at the time when the data bits of the words are sampled (LSB-first mode).

As was just mentioned, the deserialized word needs to be aligned to frame boundaries. This can be implemented in several ways. The previous 7 series devices had a feature called bitslip – in case the data coming from the SERDES were not aligned, some logic FSM would trigger a bitslip operation to delay the data inside the SERDES by 1 bit (see UG471). This would loop until the deserialized data word was properly aligned to the frame boundaries. In Zynq UltraScale+, there is no bitslip, so the data need to be aligned manually. In case of the IDDR (Input Double Data Rate register), this can be done in such a way that the deserialization FSM waits for a frame boundary (usually the transition from 000… to 111…). What I believe is a more robust solution, however, is to wait for 2 consecutive deserialized words and then do the alignment. This is depicted below.

The upper-left part (12b) is the deserialized word received at time T – 1 and the upper-right part (12b) is the deserialized word received at time T – 0. It can be seen that these 2 12-bit words in fact contain 3 partial pieces of the ADC/DAC words. The W2 word is the word just being transmitted from the ADC, but it is not yet fully transmitted at T – 0 and its bits are useless for alignment. Similarly, the W0 word is not complete and its previous bits have already been discarded. What can be done, however, is the reconstruction of the W1 word. In this situation, the 5 LSBs were already received at time T – 1 and the rest has been received at T – 0. Therefore, the word is assembled at the bottom as a concatenation of the 2 parts of the deserialized 12-bit words. Once aligned, the shift can be considered static until either the deserializer or the ADC clock is reset. To be more precise, this is in fact a barrel-shift register implementation (the basic concept is shown below):
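A behavioral Python sketch of this barrel-shift alignment (an illustration, not the RTL; the example values are made up to match the 5-bit/7-bit split described above):

```python
# Barrel-shift alignment sketch: two consecutive 12-bit deserialized
# chunks are concatenated into a 24-bit window and the aligned word is
# cut out at a fixed offset found once from the frame signal.
WIDTH = 12

def align(prev_chunk, curr_chunk, shift):
    """Extract an aligned WIDTH-bit word; 'shift' is how many bits of the
    word were already present in the previous chunk (LSB-first order:
    earlier bits sit at lower positions of each chunk)."""
    window = (curr_chunk << WIDTH) | prev_chunk   # prev chunk in the LSBs
    return (window >> (WIDTH - shift)) & ((1 << WIDTH) - 1)

# Made-up demo: 5 LSBs of the word arrived in the previous chunk.
word = 0b101101110001                   # the ADC word we expect to recover
prev = (word & 0x1F) << 7 | 0b0101010   # top 5 bits of prev = word's 5 LSBs
curr = 0b10101 << 7 | (word >> 5)       # low 7 bits of curr = word's 7 MSBs
print(align(prev, curr, 5) == word)     # True
```

The RTL equivalent is just a register holding the previous chunk plus a multiplexer selecting one of the 12 possible shifts.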

Finally, I would like to add that depending on the required deserialization factor, not all capture techniques may be optimal. For example, if the factor is 12, it doesn't make much sense to use 8-bit deserialization, since additional logic will be required to properly deserialize the words. Furthermore, what is likely more important is that in some cases (for example, deserialization by a factor of 14), the deserialized data words will not be available at a constant period, which may cause problems for DSP pipelines, which usually expect a valid sample every x periods:

In the image above, data are coming from the SERDES in 8-bit chunks. We need to deserialize them into 14-bit words. Therefore W0 needs to wait for 2 8-bit samples. After W0 is released, we still have 2 bits available for the next word W1, but we have to wait for another 2 8-bit samples until we are able to assemble W1. Similarly for W2 – we also have to wait for 2 8-bit samples from the SERDES. But after W2, we have 6 bits available, so in order to release W3, we only need to wait for the next 8-bit SERDES sample. Therefore the period is not constant and varies between 1 and 2.

An example of 14b deserialization FSM for both LSB & MSB-First modes (IDDR) is given below:
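The behavior of such a gearbox can also be modeled in plain Python (a behavioral sketch of the 8-to-14 conversion, not the RTL FSM itself):

```python
# Behavioral model of the 8 -> 14 gearbox: 8-bit SERDES chunks are
# accumulated and a 14-bit word is released as soon as enough bits are
# buffered, so the output period varies between 1 and 2 input cycles.
def gearbox_14(chunks, lsb_first=True):
    acc, nbits, words = 0, 0, []
    for c in chunks:
        if lsb_first:
            acc |= c << nbits          # append new chunk above buffered bits
        else:
            acc = (acc << 8) | c       # MSB-first: shift buffer up
        nbits += 8
        if nbits >= 14:                # enough bits buffered -> release a word
            if lsb_first:
                words.append(acc & 0x3FFF)
                acc >>= 14
            else:
                words.append((acc >> (nbits - 14)) & 0x3FFF)
                acc &= (1 << (nbits - 14)) - 1
            nbits -= 14
    return words

# Made-up demo: pack three 14-bit words LSB-first and chunk into bytes.
w = [5, 1000, 12345]
stream = w[0] | (w[1] << 14) | (w[2] << 28)
chunks = [(stream >> (8 * i)) & 0xFF for i in range(6)]
print(gearbox_14(chunks))   # -> [5, 1000, 12345]
```

Note that in the demo the three words come out after chunks 2, 4 and 6 – exactly the varying 1-to-2-cycle period discussed above.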

 

Bit alignment

Proper bit alignment means that the data bits are captured when they are stable – i.e. in the middle of the valid period. There are two commonly used types of timing diagrams with respect to the bit clock: either the data are edge-aligned or center-aligned. Edge-aligned means that the data bits change on the rising and falling edges (DDR), while center-aligned means that the data change their values with a 90° shift with respect to the clock. Usually, in case of edge-aligned interfaces, the receive clock needs to be shifted to the middle of the eye with a 90° shift on the clock itself. On the other hand, center-aligned interfaces do not require an additional phase shift on the clock signal.

The receive and transmit sides of the FPGA may delay the data bits by a small amount of time (usually 0 – 1.1 ns) to compensate for length variations on the PCB among the LVDS P/N pairs. It is of course still recommended to use matching lengths for the interfaces where possible. In case of UltraScale+ devices, there is also a possibility to cascade these IO blocks in order to delay the data by an even larger amount of time (again, see UG571, IDELAYE3/ODELAYE3, for more information). In the simplest form (the so-called “COUNT mode”, along with VAR_LOAD mode), the delay inside the FPGA may be modified on the fly. In combination with an ADC that has a built-in test-pattern generator, one can estimate the middle of the valid data bit with respect to the configured delay value by looping across all the configurable delays (taps) and measuring whether the received data changes over time. This requires the test pattern to be constant (any constant pattern except for the all-zero pattern will do just fine). An example of such a sweep across all 512 taps is given below.

In this example, we see that based on the tap delay we are able to properly align to the middle of bit 0 (tap approx. 150) and bit 1 (approx. 380). The choice is arbitrary if the frame alignment is implemented properly (via barrel shift, for example). This specific result is, however, valid only for a single data pair. The same needs to be done for the other pairs of the entire interface, including the frame itself. Be aware that all of the signals need to be shifted to the middle of bit 0 or bit 1, but these should not be mixed! It is also better to treat the frame signal as a data signal and not as a clock signal – use the same capture technique for both data and frame.
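The sweep-and-pick logic can be sketched as follows (a behavioral sketch; the sweep data below is hypothetical and not taken from the plot above):

```python
# Tap-sweep sketch: for each IDELAY tap we record whether the captured
# test pattern was stable/correct, then pick the middle of the longest
# run of good taps as the sampling point.
def best_tap(results):
    """results: list of booleans, True = pattern captured correctly."""
    best_start, best_len = 0, 0
    start = None
    for i, ok in enumerate(results + [False]):   # sentinel closes the last run
        if ok and start is None:
            start = i                            # a good run begins
        elif not ok and start is not None:
            if i - start > best_len:             # a good run ends; keep longest
                best_start, best_len = start, i - start
            start = None
    return best_start + best_len // 2            # middle of the eye

# Hypothetical sweep over 512 taps with a good window between taps 100-199.
sweep = [100 <= t < 200 for t in range(512)]
print(best_tap(sweep))   # -> 150
```

In a real design the `results` list would be filled by reading back the captured test pattern after each tap increment; that readback mechanism is device-specific and not shown here.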

Native Mode

Native mode is a new feature in UltraScale+ devices which includes the entire physical layer of the interface with improved calibration and management, and a better clocking distribution inside the IOB. Because the clock is no longer routed through a global clock network, higher data rates are achievable in comparison with the older “component mode”. The built-in BISC (Built-In Self-Calibration) ensures that data are sampled at the right time without any user intervention. The key to understanding the native mode is the hierarchy inside each HP IO bank:

Each HP IO bank has 4 byte groups. Each byte group then consists of 2 nibbles – the lower and the upper nibble. Each of these nibbles contains 6 to 7 RX_TX_BITSLICEs (lower 7, upper 6). What is important is that if any bitslice is instantiated somewhere in RTL, it is automatically placed near the corresponding FPGA pad. Therefore each pad has its own BITSLICE. This is important as there are no other routes to or from other slices. Furthermore, each slice with index 0 can be used for clocking (e.g. the clock must always be routed to a slice-0 FPGA pad). There are basically only these types of clock-capable pins:

  • DBC (Dedicated Byte Clock)
  • QBC (QUAD Byte Clock)
  • GC (Global Clock)
  • GC_QBC (Combination – Routable to All Bytes and Global Clock network)

DBC pins may be used to clock the entire byte, but their routability is limited to the byte itself, so the clock cannot be distributed to other bytes of the same IO bank. On the other hand, QBC-capable pins may be used as clocking for all the bytes within the same IO bank. Furthermore, GC pins are routable to the global clock network. The most valuable, however, is the GC_QBC combination pin, of which there is only one per IO bank. The clock from this pin may be distributed not only to the IO, but also to the global clock network. Xilinx's website for the pinout specification of the Zynq UltraScale+ devices is very useful: LINK.

Based on the required clocking, there may be some other requirements for the design:

  • Inter-Byte Clocking (Clock from one Byte needs to be distributed to other Byte)
  • Inter-Nibble Clocking (Clock from one Nibble needs to be distributed to the other Nibble)

The inter-nibble and inter-byte clocking is managed via a dedicated routing mechanism available to the BITSLICE_CONTROL primitive. This primitive is associated with the corresponding slices. In case you find unroutable nets in the design, it is likely that you are trying to connect bitslices to the wrong BITSLICE_CONTROL ports (for example, bitslice 1 to port 0 of the bitslice control). Note that for differential interfaces such as LVDS, the P and N signals have 2 pads, so they also have 2 corresponding BITSLICEs in the IOB. The data may, however, be taken from the P part of the interface only. Based on the number of pads required for the interface, there might be a need to connect nibbles and/or bytes together in order to build the interface properly. One may use the HSSIO wizard instead, but note that this wizard, as of 02.2020, initializes the entire IO bank, and clocking across multiple IO banks is not supported.

Apart from all this, the recommended clocking tree is via a PLL, which in UltraScale+ has a dedicated clock route to all BITSLICE_CONTROLs (port: CLKOUTPHY). Additional options such as deserialization ratio, delay format, delay value and so on are accessible via RTL attributes. More on these may be found in UG571 under the native mode description, or in UG572 under PLLE4_ADV. The overall “simplified” data and clocking chart is shown below:

The red signal is the clock heading from the FPGA pin (GC/QBC) to the PLL and via the dedicated PHY clock to both instantiated BITSLICE_CONTROLs. The green clock in this example is the PLL output at 1/4 of the data clock (due to DDR deserialization by 4), used as the read clock for the built-in synchronization FIFOs inside the BITSLICEs. The example shown also uses BG2 (byte group 2) of HP bank 66 of the 7EV device on the ZCU104 (just for simulation purposes). The RIU interface is not depicted and is likely not needed for basic operation, though accessing the RIU through APB will likely not hurt anybody as well :)

Additional info about constraining the interface (specifying input delay) may be found inside the Vivado templates under: Tools -> Language Templates -> XDC -> Timing Constraints -> Input Delay -> Source Synchronous -> Center-Aligned -> DDR/SDR. Below is the copy from Vivado 2019.1, but always look up the template in your own environment and Vivado version. Also remember that even though the design may fail timing on the input interface (setup and hold on the ISERDES/IDDR), you may still be able to manually align the window properly, for example via the technique discussed earlier. Setting false paths on the input is, however, not recommended. An additional tip for manual dynamic calibration is to set all delay taps to zero (the initial value), so that they are not included in the timing summary. Also make sure that the clock into the IDELAY matches the clock to the ISERDES (CLK_DIV)/IDDR. If these clocks are different, the IDELAY for the FPGA pad is automatically placed at a different pad location, which worsens timing.

An additional tip is to enable the on-die termination for the LVDS interface in case proper termination is not done by the PCB designer or if a custom SOM module is used for the design. This can be done via the GUI (IO Planning -> DIFF_TERM_ADV -> TERM_100) or manually via XDC, such as: set_property DIFF_TERM_ADV TERM_100 [get_ports DATA_P[0]].

Overall, the native interface is very similar to the old component mode in that it includes all the same components, but it improves the clocking tree, performance and features such as auto-calibration, and unifies the design topology. Though new in UltraScale+, native mode is “quite easy” to understand – well, it needs some time, but here comes the great plus of all the Xilinx datasheets: they are quite readable and you will likely not encounter many issues or misunderstandings in them. The designer therefore no longer needs to fully understand the interface itself and may concentrate on more important things, such as going out for a walk with the dog or whatever :)